
Vercel AI SDK 3.3


Introducing tracing, multi-modal attachments, JSON streaming to the client, and more.

The Vercel AI SDK is a toolkit for building AI applications with JavaScript and TypeScript. Its unified API lets you use any language model and offers powerful UI integrations with leading web frameworks such as Next.js and Svelte.

Vercel AI SDK 3.3 introduces four major features:

  • Tracing (experimental): instrument AI SDK functions with OpenTelemetry
  • Multi-modal file attachments (experimental): send file attachments with useChat
  • useObject hook (experimental): stream structured object generation to the client
  • Additional LLM settings: raw JSON for tools and structured object generation, stop sequences, and custom headers

We have also added AWS Bedrock and Chrome AI (community) model providers, along with many smaller features and additions. You can find all changes, including minor features, in our changelog.

Experimental features let you use the newest AI SDK functionality as soon as possible. However, they can change in patch releases. If you decide to use experimental features, pin your installation to a specific patch version.
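For example, you can pin an exact version in package.json so a patch release cannot change experimental behavior underneath you (the version number below is illustrative):

{
  "dependencies": {
    "ai": "3.3.0"
  }
}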

Tracing

Given the non-deterministic nature of language models, observability is essential for understanding and developing AI applications. You need to be able to trace and understand the timing, token usage, prompts, and response content of individual model calls.

The Vercel AI SDK now supports tracing with OpenTelemetry, an open-source standard for recording telemetry information, as an experimental feature. Here is an example trace visualization from the Vercel Datadog integration:

Trace visualization with Datadog and the Vercel AI SDK

You can analyze AI SDK tracing data with Vercel observability integrations such as Datadog, Sentry, and Axiom. Alternatively, you can use LLM observability providers such as LangFuse, Braintrust, or LangSmith.

To use telemetry with the Vercel AI SDK, you need to configure it for your application. We recommend using @vercel/otel. If you are using Next.js and deploying on Vercel, you can add an instrumentation.ts file with the following code to your project:

import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'your-project-name' });
}
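If you are on Next.js 14, the instrumentation hook also needs to be enabled in next.config.js (later Next.js versions load instrumentation.ts by default):

/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // loads instrumentation.ts at server startup on Next.js 14
    instrumentationHook: true,
  },
};

module.exports = nextConfig;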

Since tracing is experimental, you need to opt in to recording information with the experimental_telemetry option. You can also provide a function ID to identify the call location, plus any additional metadata you want to record.

import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { 
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});

Enabling the option records tracing data for your function calls. You can find more details in the AI SDK telemetry documentation. If you want to get started, check out our deployable AI SDK Next.js tracing template.

Multi-modal file attachments

In many AI chat applications, users need to send attachments along with their messages, such as images, PDFs, and various media files. These attachments also need to be previewable alongside the messages that users see.

Therefore, we have added experimental_attachments to the handleSubmit() handler of the useChat() React hook.

Sending image and text attachments with useChat

Check out this example in action and deploy the template.

There are two ways to send attachments with a message: by providing the handleSubmit function with a FileList object or with a list of URLs:

FileList

With a FileList, you can send multiple files as attachments along with a message using a file input element. The useChat hook automatically converts them into data URLs and sends them to the AI provider.

'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';

export default function Page() {
  const { input, handleSubmit, handleInputChange } = useChat();
  const [files, setFiles] = useState<FileList | undefined>(undefined);
  return (
    <form
      onSubmit={(event) => {
        handleSubmit(event, {
          experimental_attachments: files,
        });
      }}
    >
      <input
        type="file"
        onChange={(event) => {
          if (event.target.files) {
            setFiles(event.target.files);
          }
        }}
        multiple
      />
      <input type="text" value={input} onChange={handleInputChange} />
    </form>
  );
}
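On the server, the attachments arrive as part of the chat messages. Here is a minimal sketch of a matching route handler, assuming an app/api/chat/route.ts endpoint and the default useChat configuration (on earlier 3.3.x patch releases the response helper may still be named toAIStreamResponse):

import { anthropic } from '@ai-sdk/anthropic';
import { convertToCoreMessages, streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: anthropic('claude-3-5-sonnet-20240620'),
    // convertToCoreMessages turns attachments into multi-modal content parts
    messages: convertToCoreMessages(messages),
  });
  return result.toDataStreamResponse();
}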

URLs

You can also send URLs as attachments along with a message. This is useful for sending links to external resources or media content.

'use client';

import type { Attachment } from 'ai';
import { useChat } from 'ai/react';
import { useState } from 'react';

export default function Page() {
  const { input, handleSubmit, handleInputChange } = useChat();
  const [attachments] = useState<Attachment[]>([
    {
      name: 'earth.png',
      contentType: 'image/png',
      url: 'https://example.com/earth.png',
    },
  ]);
  return (
    <form
      onSubmit={event => {
        handleSubmit(event, {
          experimental_attachments: attachments,
        });
      }}
    >
      <input type="text" value={input} onChange={handleInputChange} />
    </form>
  );
}
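Incoming messages expose their attachments as well, so you can preview them in the UI. A minimal rendering sketch, assuming messages comes from the same useChat() call and that only image attachments are displayed:

{messages.map(message => (
  <div key={message.id}>
    <div>{message.content}</div>
    {message.experimental_attachments
      ?.filter(attachment => attachment.contentType?.startsWith('image/'))
      .map((attachment, index) => (
        // both data URLs and remote URLs work as image sources
        <img key={index} src={attachment.url} alt={attachment.name} />
      ))}
  </div>
))}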

You can learn more in our multi-modal chatbot guide.

useObject hook

Structured data generation is a common requirement in AI applications, e.g., for extracting information from natural language inputs. With the new useObject hook, you can stream structured object generation directly to the client. This experimental feature, available in React today, lets you create dynamic interfaces that display JSON objects as they are streamed in.

For example, imagine an application where you can enter expenses as text for reimbursement. You can use AI to convert the text input into a structured object and stream the structured expense to the user while it is being processed:

Extracting and streaming an expense from plain text with useObject

Here's how you could implement this in a Next.js application. First, define a schema for the expenses. The schema is shared between client and server:

import { DeepPartial } from 'ai';
import { z } from 'zod';

export const expenseSchema = z.object({
  expense: z.object({
    category: z
      .string()
      .describe(
        'Category of the expense. Allowed categories: ' +
        'TRAVEL, MEALS, ENTERTAINMENT, OFFICE SUPPLIES, OTHER.',
      ),
    amount: z.number().describe('Amount of the expense in USD.'),
    date: z
      .string()
      .describe('Date of the expense. Format yyyy-mmm-dd, e.g. 1952-Feb-19.'),
    details: z.string().describe('Details of the expense.'),
  }),
});

export type PartialExpense = DeepPartial<z.infer<typeof expenseSchema>>['expense'];
export type Expense = z.infer<typeof expenseSchema>['expense'];

Then, you use streamObject on the server to call the language model and stream an object:

import { anthropic } from '@ai-sdk/anthropic';
import { streamObject } from 'ai';
import { expenseSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { expense }: { expense: string } = await req.json();
  const result = await streamObject({
    model: anthropic('claude-3-5-sonnet-20240620'),
    system:
      'You categorize expenses into one of the following categories: ' +
      'TRAVEL, MEALS, ENTERTAINMENT, OFFICE SUPPLIES, OTHER. ' +
      // provide date (including day of week) for reference:
      'The current date is: ' +
      new Date()
        .toLocaleDateString('en-US', {
          year: 'numeric',
          month: 'short',
          day: '2-digit',
          weekday: 'short',
        })
        .replace(/(\w+), (\w+) (\d+), (\d+)/, '$4-$2-$3 ($1)') +
      '. When no date is supplied, use the current date.',
    prompt: `Please categorize the following expense: "${expense}"`,
    schema: expenseSchema,
    onFinish({ object }) {
      // you could save the expense to a database here
    },
  });
  return result.toTextStreamResponse();
}

Finally, you consume the expense stream on a client page. While the expense is streaming, we preview the partial expense, and once the generation is finished, we append it to the list of expenses:

'use client';

import { experimental_useObject as useObject } from 'ai/react';
import {
  Expense,
  expenseSchema,
  PartialExpense,
} from '../api/expense/schema';
import { useState } from 'react';

export default function Page() {
  const [expenses, setExpenses] = useState<Expense[]>([]);
  const { submit, isLoading, object } = useObject({
    api: '/api/expense',
    schema: expenseSchema,
    onFinish({ object }) {
      if (object != null) {
        setExpenses(prev => [object.expense, ...prev]);
      }
    },
  });
  return (
    <div>
      <form onSubmit={e => {
        e.preventDefault();
        const input = e.currentTarget.expense as HTMLInputElement;
        if (input.value.trim()) {
          submit({ expense: input.value });
          e.currentTarget.reset();
        }
      }}
      >
        <input type="text" name="expense" placeholder="Enter expense details"/>
        <button type="submit" disabled={isLoading}>Log expense</button>
      </form>
      {isLoading && object?.expense && (
        <ExpenseView expense={object.expense} />
      )}
      {expenses.map((expense, index) => (
        <ExpenseView key={index} expense={expense} />
      ))}
    </div>
  );
}

The expenses are rendered using an ExpenseView component that handles partial objects with undefined properties via ?. and ?? (styling is omitted for illustration purposes):

const ExpenseView = ({ expense }: { expense: PartialExpense | Expense }) => (
  <div>
    <div>{expense?.date ?? ''}</div>
    <div>${expense?.amount?.toFixed(2) ?? ''}</div>
    <div>{expense?.category ?? ''}</div>
    <div>{expense?.details ?? ''}</div>
  </div>
);

Check out this example in action and deploy the template.

You can use this approach to create generative user interfaces client-side for many different use cases. You can find more details on how to use it in our object generation documentation.

Additional LLM Settings

Calling language models is at the heart of the Vercel AI SDK. We have listened to your feedback and extended our functions to support the following features:

  • JSON schema support for tools and structured object generation: As an alternative to Zod schemas, you can now use JSON schemas directly with the jsonSchema function. You can supply type annotations and an optional validation function, giving you more flexibility, especially when building applications with dynamic tools and structured output generation.
  • Stop sequences: Text sequences that stop generations have been an important feature when working with earlier language models that used raw text prompts. They are still relevant for many use cases, allowing you more control over the end of a text generation. You can now use the stopSequences option to define stop sequences in streamText and generateText.
  • Sending custom headers: Custom headers are important for many use cases, like sending tracing information, enabling beta provider features, and more. You can now send custom headers using the headers option in most AI SDK functions.

With these additional settings, you have more control and flexibility when working with language models in the Vercel AI SDK. The sketch below combines all three.
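Here is a minimal sketch that exercises all three settings in a single generateText call; the tool name, stop sequence, and beta header are illustrative placeholders rather than part of the release:

import { anthropic } from '@ai-sdk/anthropic';
import { generateText, jsonSchema } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  tools: {
    // hypothetical tool defined with a raw JSON schema instead of Zod
    getWeather: {
      description: 'Get the weather for a city.',
      parameters: jsonSchema<{ city: string }>({
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      }),
    },
  },
  // stop generating as soon as this sequence appears in the output
  stopSequences: ['END'],
  // custom headers are sent with the provider request,
  // e.g. to enable a provider beta feature
  headers: { 'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15' },
  prompt: 'What is the weather in Berlin? Answer, then write END.',
});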

Conclusion

With new features like OpenTelemetry support, useObject, and support for attachments with useChat, there has never been a better time to start building AI applications.

  • Start a new AI project: Ready to build something new? Check out our multi-modal chatbot guide.
  • Explore our templates: Visit our Template Gallery to see the AI SDK in action and get inspired for your next project.
  • Join the community: Let us know what you’re building with the AI SDK in our GitHub Discussions.

We can't wait to see what you'll build next with Vercel AI SDK 3.3!

Contributors

Vercel AI SDK 3.3 is the result of the combined work of our core team at Vercel and many community contributors.

Special thanks for contributing merged pull requests:

gclark-eightfold, dynamicwebpaige, Und3rf10w, elitan, jon-spaeth, jeasonstudio, InfiniteCodeMonkeys, ruflair, MrMaina100, AntzyMo, samuelint, ian-pascoe, PawelKonie99, BrianHung, Ouvill, gmickel, developaul, elguarir, Kunoacc, florianheysen, rajuAhmed1705, suemor233, eden-chan, DraganAleksic99, karl-richter, rishabhbizzle, vladkampov, AaronFriel, theitaliandev, miguelvictor, jferrettiboke, dhruvvbhavsar, lmcgartland, PikiLee

Your feedback and contributions are invaluable as we continue to evolve the SDK.

