Hey everyone!
You know what keeps me up at night? Thinking about how to make our AI systems smarter and more efficient. Today I want to talk about something that sounds basic but is absolutely critical when you're building robust AI applications: chunking ✨.
Think of chunking as your AI's way of breaking a large body of information into manageable pieces. Just like you wouldn't try to stuff a whole pizza into your mouth at once (or maybe you would, no judgment here!), your AI needs large texts broken into smaller segments to process them effectively.
This matters especially for what we call RAG (Retrieval-Augmented Generation) models. These bad boys don't just make things up; they actually pull real information from external sources. Pretty neat, right?
Look, if you're building anything that works with text, whether it's a customer support chatbot or a fancy knowledge base search, getting chunking right is the difference between an AI that gives accurate answers and one that just gives answers.
Chunks too big? Your model misses the point.
Chunks too small? It gets lost in the details.
First, let's look at a Python example of semantic chunking using LangChain:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import TextLoader

def semantic_chunk(file_path):
    # Load the document
    loader = TextLoader(file_path)
    documents = loader.load()

    # Create a text splitter
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
        length_function=len,
        separators=["\n\n", "\n", " ", ""]
    )

    # Split the document into chunks
    chunks = text_splitter.split_documents(documents)
    return chunks

# Example usage
chunks = semantic_chunk('knowledge_base.txt')
for i, chunk in enumerate(chunks):
    print(f"Chunk {i}: {chunk.page_content[:50]}...")
```
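A quick note on those settings: `chunk_overlap=200` makes consecutive chunks share 200 characters, so a sentence that straddles a boundary still shows up intact in at least one chunk, and the `separators` list tells the splitter to prefer paragraph breaks, then line breaks, then spaces, before falling back to splitting mid-word. Treat `chunk_size=1000` as a common starting point to tune against your embedding model and content, not a magic number.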
Now, let's build something real: a serverless knowledge base using AWS CDK and Node.js!
First, the CDK infrastructure (this is where the magic happens):
```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as opensearch from 'aws-cdk-lib/aws-opensearch';

export class KnowledgeBaseStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // S3 bucket to store our documents
    const documentBucket = new s3.Bucket(this, 'DocumentBucket', {
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    // OpenSearch domain for storing our chunks
    const openSearchDomain = new opensearch.Domain(this, 'DocumentSearch', {
      version: opensearch.EngineVersion.OPENSEARCH_2_5,
      capacity: {
        dataNodes: 1,
        dataNodeInstanceType: 't3.small.search',
      },
      ebs: {
        volumeSize: 10,
      },
    });

    // Lambda function for processing documents
    const processorFunction = new lambda.Function(this, 'ProcessorFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: {
        OPENSEARCH_DOMAIN: openSearchDomain.domainEndpoint,
      },
      timeout: cdk.Duration.minutes(5),
    });

    // Grant permissions
    documentBucket.grantRead(processorFunction);
    openSearchDomain.grantWrite(processorFunction);
  }
}
```
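One gap worth flagging: the Lambda below is written to handle S3 events, but nothing in the stack subscribes it to the bucket yet. A minimal sketch of that wiring, using CDK's standard `aws-s3-notifications` module (this addition is mine, not part of the original stack):

```typescript
import * as s3n from 'aws-cdk-lib/aws-s3-notifications';

// Inside the KnowledgeBaseStack constructor, after both resources exist:
// invoke the processor Lambda whenever a new document lands in the bucket.
documentBucket.addEventNotification(
  s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(processorFunction),
);
```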
Now, the Lambda function that does the chunking and indexing:
```typescript
import { S3Event } from 'aws-lambda';
import { S3 } from 'aws-sdk';
import { Client } from '@opensearch-project/opensearch';
import { defaultProvider } from '@aws-sdk/credential-provider-node';
import { AwsSigv4Signer } from '@opensearch-project/opensearch/aws';

const s3 = new S3();
const CHUNK_SIZE = 1000;
const CHUNK_OVERLAP = 200;

// Create OpenSearch client
const client = new Client({
  ...AwsSigv4Signer({
    region: process.env.AWS_REGION,
    service: 'es',
    getCredentials: () => {
      const credentialsProvider = defaultProvider();
      return credentialsProvider();
    },
  }),
  node: `https://${process.env.OPENSEARCH_DOMAIN}`,
});

export const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Get the document from S3
    const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    const text = Body.toString('utf-8');

    // Chunk the document
    const chunks = chunkText(text);

    // Index chunks in OpenSearch
    for (const [index, chunk] of chunks.entries()) {
      await client.index({
        index: 'knowledge-base',
        body: {
          content: chunk,
          documentKey: key,
          chunkIndex: index,
          timestamp: new Date().toISOString(),
        },
      });
    }
  }
};

function chunkText(text: string): string[] {
  const chunks: string[] = [];
  let start = 0;

  while (start < text.length) {
    const end = Math.min(start + CHUNK_SIZE, text.length);
    let chunk = text.slice(start, end);

    // Try to break at a sentence boundary
    const lastPeriod = chunk.lastIndexOf('.');
    if (lastPeriod !== -1 && lastPeriod !== chunk.length - 1) {
      chunk = chunk.slice(0, lastPeriod + 1);
    }

    chunks.push(chunk);
    start = Math.max(start + chunk.length - CHUNK_OVERLAP, start + 1);
  }

  return chunks;
}
```
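Note that the handler writes to a `knowledge-base` index it never creates. OpenSearch will auto-create the index with dynamic mappings on the first write, which works but leaves field types to chance. If you'd rather pin them down, here's a sketch of a one-time setup call; the mapping choices are my assumptions, matched to the fields the handler writes:

```typescript
// One-time setup: create the index with an explicit mapping so 'content'
// is full-text searchable and the metadata fields stay exact-match.
// (A sketch; OpenSearch would otherwise auto-create this index on first write.)
async function createIndex() {
  await client.indices.create({
    index: 'knowledge-base',
    body: {
      mappings: {
        properties: {
          content: { type: 'text' },
          documentKey: { type: 'keyword' },
          chunkIndex: { type: 'integer' },
          timestamp: { type: 'date' },
        },
      },
    },
  });
}
```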
Here's a quick example of how to query this knowledge base:
```typescript
async function queryKnowledgeBase(query: string) {
  const response = await client.search({
    index: 'knowledge-base',
    body: {
      query: {
        multi_match: {
          query: query,
          fields: ['content'],
        },
      },
    },
  });

  return response.body.hits.hits.map(hit => ({
    content: hit._source.content,
    documentKey: hit._source.documentKey,
    score: hit._score,
  }));
}
```
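And a quick usage sketch to tie it together (the sample query and the top-3 printout are mine, purely for illustration):

```typescript
// Example usage: search the indexed chunks and print the top matches.
async function main() {
  const results = await queryKnowledgeBase('How do I reset my password?');
  for (const result of results.slice(0, 3)) {
    console.log(`[${result.score.toFixed(2)}] ${result.documentKey}`);
    console.log(result.content.slice(0, 100) + '...');
  }
}

main().catch(console.error);
```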
And there you have it, folks! A real-world example of implementing chunking in a serverless knowledge base, built on AWS services like S3, Lambda, and OpenSearch. The best part? It scales automatically and can handle documents of any size.
Remember, the key to good chunking is finding the balance that fits your use case: chunks large enough to preserve context, but small enough to stay on point.
What's your experience building knowledge bases? Have you tried different chunking strategies? Let me know in the comments below!