IRIS-RAG-Gen: Personalizing a ChatGPT RAG Application Powered by IRIS Vector Search
Hi Community,
In this article, I will introduce my application iris-RAG-Gen.
iris-RAG-Gen is a Retrieval-Augmented Generation (RAG) AI application that leverages the IRIS Vector Search functionality to personalize ChatGPT with the help of the Streamlit web framework, LangChain, and OpenAI. The application uses IRIS as the vector store.
Follow the steps below to ingest a document:
The Ingest Document functionality inserts the document details into the rag_documents table and creates a 'rag_document<id>' table (where <id> is the document's row ID in rag_documents) to store the vector data.
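The naming scheme above can be sketched in isolation: each ingested document's vector table is simply "rag_document" plus its row ID. The helper name below is hypothetical, added only to illustrate the convention:

```python
def collection_name_for(doc_id: int) -> str:
    # Hypothetical helper: each ingested document gets its own vector
    # table named "rag_document" + <its row ID in rag_documents>.
    return "rag_document" + str(doc_id)

print(collection_name_for(2))  # rag_document2
```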
The Python code below saves the selected document into vectors:
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain_iris import IRISVector
from langchain_openai import OpenAIEmbeddings
from sqlalchemy import create_engine, text

class RagOpr:
    # Ingest document. Parameters are the file path, description and file type
    def ingestDoc(self, filePath, fileDesc, fileType):
        embeddings = OpenAIEmbeddings()
        # Load the document based on the file type
        if fileType == "text/plain":
            loader = TextLoader(filePath)
        elif fileType == "application/pdf":
            loader = PyPDFLoader(filePath)
        # Load data into documents
        documents = loader.load()
        # Split text into chunks
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
        texts = text_splitter.split_documents(documents)
        # Get the collection name from the rag_documents table
        COLLECTION_NAME = self.get_collection_name(fileDesc, fileType)
        # Create the collection_name table and store the vector data in it
        db = IRISVector.from_documents(
            embedding=embeddings,
            documents=texts,
            collection_name=COLLECTION_NAME,
            connection_string=self.CONNECTION_STRING,
        )

    # Get collection name
    def get_collection_name(self, fileDesc, fileType):
        # Check whether the rag_documents table exists; if not, create it
        with self.engine.connect() as conn:
            with conn.begin():
                sql = text("""
                    SELECT * FROM INFORMATION_SCHEMA.TABLES
                    WHERE TABLE_SCHEMA = 'SQLUser' AND TABLE_NAME = 'rag_documents'
                """)
                result = []
                try:
                    result = conn.execute(sql).fetchall()
                except Exception as err:
                    print("An exception occurred:", err)
                    return ''
                # If the table does not exist yet, create rag_documents first
                if len(result) == 0:
                    sql = text("""
                        CREATE TABLE rag_documents (
                            description VARCHAR(255),
                            docType VARCHAR(50)
                        )
                    """)
                    try:
                        result = conn.execute(sql)
                    except Exception as err:
                        print("An exception occurred:", err)
                        return ''
        # Insert the description and file type
        with self.engine.connect() as conn:
            with conn.begin():
                sql = text("""
                    INSERT INTO rag_documents (description, docType)
                    VALUES (:desc, :ftype)
                """)
                try:
                    result = conn.execute(sql, {'desc': fileDesc, 'ftype': fileType})
                except Exception as err:
                    print("An exception occurred:", err)
                    return ''
                # Select the ID of the last inserted record
                sql = text("SELECT LAST_IDENTITY()")
                try:
                    result = conn.execute(sql).fetchall()
                except Exception as err:
                    print("An exception occurred:", err)
                    return ''
        return "rag_document" + str(result[0][0])
Type the SQL command below in the Management Portal to retrieve the vector data:
SELECT TOP 5 id, embedding, document, metadata FROM SQLUser.rag_document2
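The same query can also be built programmatically for any collection table. This is a minimal sketch (the `build_vector_query` helper is an assumption, not part of the app); note that IRIS SQL uses TOP rather than LIMIT to restrict the number of returned rows:

```python
def build_vector_query(collection_table: str, limit: int = 5) -> str:
    # IRIS SQL uses TOP (not LIMIT) to cap the number of returned rows.
    return (f"SELECT TOP {int(limit)} id, embedding, document, metadata "
            f"FROM SQLUser.{collection_table}")

print(build_vector_query("rag_document2"))
# SELECT TOP 5 id, embedding, document, metadata FROM SQLUser.rag_document2
```

The resulting string can then be executed through the same SQLAlchemy engine used during ingestion.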
Select a document from the select-chat-option section and type your question. The application will read the vector data and return the relevant answer.
The Python code below reads the selected document's vector data and answers the question:
from langchain_iris import IRISVector
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

class RagOpr:
    def ragSearch(self, prompt, id):
        # Concatenate the document ID with rag_document to get the collection name
        COLLECTION_NAME = "rag_document" + str(id)
        embeddings = OpenAIEmbeddings()
        # Get a reference to the vector store
        db2 = IRISVector(
            embedding_function=embeddings,
            collection_name=COLLECTION_NAME,
            connection_string=self.CONNECTION_STRING,
        )
        # Similarity search
        docs_with_score = db2.similarity_search_with_score(prompt)
        # Prepare the retrieved documents to pass to the LLM
        relevant_docs = ["".join(str(doc.page_content)) + " " for doc, _ in docs_with_score]
        # Init the LLM
        llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
        # Manage and handle LangChain multi-turn conversations
        conversation_sum = ConversationChain(
            llm=llm,
            memory=ConversationSummaryMemory(llm=llm),
            verbose=False,
        )
        # Create the prompt
        template = f"""
Prompt: {prompt}
Relevant Documents: {relevant_docs}
"""
        # Return the answer
        resp = conversation_sum(template)
        return resp['response']
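The prompt handed to the conversation chain above is just the user's question concatenated with the retrieved chunks. A minimal sketch of that assembly step, with illustrative sample data in place of real similarity-search output (`build_prompt` is a hypothetical name, not from the app):

```python
def build_prompt(question: str, relevant_docs: list[str]) -> str:
    # Mirrors the template in ragSearch: the user question followed by
    # the chunks returned by the similarity search, as one string.
    return f"""
Prompt: {question}
Relevant Documents: {relevant_docs}
"""

# Sample data stands in for the output of similarity_search_with_score.
print(build_prompt("What is IRIS?", ["IRIS is a data platform. "]))
```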
For more details, please visit the iris-RAG-Gen application page on Open Exchange.
Thanks