According to this article, we can now use Gemini with the OpenAI library, so I decided to try it out in this post.
For now, only the Chat Completions API and the Embeddings API are available.
In this post, I try it with both Python and JavaScript.
First, let's set up the environment.
pip install openai python-dotenv
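The Python examples below read the API key from a .env file via python-dotenv. Assuming you store your Gemini API key under the name GOOGLE_API_KEY (the name the code expects), the .env file would look like this, with a placeholder value:

GOOGLE_API_KEY=your_gemini_api_key_here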
Next, let's run the following code.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain briefly(less than 30 words) to me how AI works."
        }
    ]
)

print(response.choices[0].message.content)
The following response was returned.
AI mimics human intelligence by learning patterns from data, using algorithms to solve problems and make decisions.
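The Embeddings API mentioned at the top goes through the same endpoint. I have not run this one here, so treat it as a minimal sketch; in particular, the model name text-embedding-004 is an assumption based on Google's compatibility documentation.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Untested sketch: embed a single string. "text-embedding-004" is an assumed model name.
response = client.embeddings.create(
    model="text-embedding-004",
    input="Explain briefly to me how AI works."
)

# Each item in response.data holds one embedding vector.
print(len(response.data[0].embedding))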
In the content field, you can pass either a plain string or a list of parts such as "type": "text".
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain briefly(less than 30 words) to me how AI works.",
                },
            ]
        }
    ]
)

print(response.choices[0].message.content)
However, image and audio inputs returned errors.
Sample code for image input
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# png to base64 text
import base64

with open("test.png", "rb") as image:
    b64str = base64.b64encode(image.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the image in the image below.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{b64str}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
Sample code for audio input
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# wav to base64 text
import base64

with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o-audio-preview",
    n=1,
    modalities=["text"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does he say?",
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": b64str,
                        "format": "wav",
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
The following error response was returned.
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]
For now, only text input is supported, but image and audio inputs appear to be coming later.
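If you want a script to degrade gracefully while these inputs are still rejected, you can catch the 400 error that the openai library raises. A small sketch, assuming client is the client created earlier and multimodal_messages is a hypothetical variable holding the image (or audio) message list from the examples above:

import openai

# Fall back to a text-only prompt when the endpoint rejects the multimodal request.
try:
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=multimodal_messages,
    )
except openai.BadRequestError as error:
    print(f"Multimodal input rejected, falling back to text only: {error}")
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Describe the image in words instead."}],
    )

print(response.choices[0].message.content)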
Let's look at the JavaScript sample code.
First, let's set up the environment.
npm init -y
npm install openai
npm pkg set type=module
Next, let's run the following code.
import OpenAI from "openai";

const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;

const openai = new OpenAI({
    apiKey: GOOGLE_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
    model: "gemini-1.5-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain briefly(less than 30 words) to me how AI works",
        },
    ],
});

console.log(response.choices[0].message.content);
When running the code, make sure your API key is in a .env file; the .env file is loaded at runtime.
node --env-file=.env run.js
The following response was returned.
AI systems learn from data, identify patterns, and make predictions or decisions based on those patterns.
It's great that we can use other models through the same library.
Personally, I'm happy about this because the OpenAI library makes it easier to edit the conversation history.
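For example, since the OpenAI-style API keeps the whole conversation as a plain list of messages, editing the history is just list manipulation. A rough sketch, reusing the same client setup as in the examples above:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# The whole conversation lives in one list that we can append to, trim, or rewrite.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain briefly to me how AI works."},
]

response = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
history.append({"role": "assistant", "content": response.choices[0].message.content})

# Editing the history is plain list surgery, e.g. keep the system prompt
# but drop the oldest user/assistant turn once the list grows:
if len(history) > 5:
    history = [history[0]] + history[3:]

history.append({"role": "user", "content": "Give one concrete example."})
response = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
print(response.choices[0].message.content)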