Understanding GraphRAG (1): Challenges of RAG
RAG (Retrieval-Augmented Generation) is a method of enhancing large language models (LLMs) with external knowledge sources to provide answers that are more relevant to the context. In RAG, a retrieval component fetches additional information grounded in a specific source, and that information is then fed into the LLM prompt so that the model's response is based on it (the augmentation phase). RAG is more economical than alternatives such as fine-tuning, and it has the added advantage of reducing hallucinations by supplying grounding context. RAG has thus become a workhorse pattern for today's LLM tasks (such as recommendation, text extraction, and sentiment analysis).
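The retrieve-then-augment-then-generate flow described above can be sketched as a few lines of Python. This is a minimal illustration; the names `vector_db` and `call_llm` are hypothetical placeholders standing in for a real vector store and a real model API.

```python
# Sketch of the RAG flow: retrieve -> augment the prompt -> generate.
# vector_db and call_llm are hypothetical placeholders, not a real API.

def answer_with_rag(question, vector_db, call_llm):
    # Retrieval: fetch passages relevant to the user's question
    passages = vector_db.search(question, top_k=3)
    # Augmentation: place the retrieved text into the prompt
    context = "\n".join(passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # Generation: the LLM's answer is grounded in the supplied context
    return call_llm(prompt)
```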
Breaking this idea down further: based on the user's intent, we typically query a vector database. Vector databases capture the relationships between concepts in a continuous vector space and retrieve them using proximity-based search.
In a vector space, information of any type (text, images, audio, and so on) is transformed into a vector: a numerical representation of the data in a high-dimensional space. Each dimension corresponds to a feature of the data, and the value in each dimension reflects the strength or presence of that feature. Vector representations let us perform mathematical operations, distance calculations, and similarity comparisons on the data. Taking text as an example, each document can be represented as a vector in which each dimension holds the frequency of a word in the document. Two documents can then be compared through a proximity-based search: the database is queried with another vector, and it returns the stored vectors that are "close" to it in the vector space. Proximity between vectors is usually determined by distance measures such as Euclidean distance, cosine similarity, or Manhattan distance.
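The three distance measures just mentioned are each a few lines of arithmetic. A small sketch, using only the standard library:

```python
import math

def euclidean(a, b):
    # straight-line distance between two points
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # sum of absolute differences along each dimension
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    # angle-based similarity: 1.0 means the vectors point the same way
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v1, v2 = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(cosine_similarity(v1, v2))  # 1.0: v2 is a scaled copy of v1, same direction
```

Note that cosine similarity ignores magnitude: `v2` is twice `v1`, so Euclidean distance between them is nonzero while their cosine similarity is maximal.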
When you perform a search, you provide a query that the system converts into a vector. The database then computes the distance or similarity between this query vector and the vectors already stored in it; the vectors closest to the query vector (according to the chosen metric) are returned as the most relevant results.
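That ranking step can be sketched in a few lines. This is a toy in-memory index with made-up two-dimensional embeddings, not a real vector database:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, index, k=2):
    # rank every stored document by cosine similarity to the query vector
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:k]

# toy index: document id -> (made-up) embedding
index = {
    "doc_tree": [0.9, 0.1],
    "doc_dog":  [0.1, 0.9],
    "doc_park": [0.6, 0.4],
}
print(top_k([1.0, 0.0], index, k=2))  # ['doc_tree', 'doc_park']
```

Real vector databases avoid this brute-force scan by using approximate nearest-neighbor indexes, but the ranking idea is the same.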
Proximity-based search is particularly powerful in vector databases and is suitable for tasks such as recommendation systems, information retrieval, and anomaly detection.
This approach enables the system to operate more intuitively and respond to user queries more effectively by understanding the context and deeper meaning in the data, rather than relying solely on surface matches.
However, applications that rely on such database searches have some limitations, including data quality, the ability to handle dynamic knowledge, and transparency.
How RAG is applied depends roughly on document size: if the document is small, it can be placed in the context window directly; if the document is very large (or there are multiple documents), it is split into smaller chunks, which are indexed and retrieved at query time in response to the query.
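The chunking step for large documents can be sketched simply. The sizes and the overlap below are illustrative choices, not prescribed values; overlap keeps sentences that straddle a chunk boundary retrievable:

```python
def chunk_text(text, size=200, overlap=50):
    # Split a long document into overlapping fixed-size chunks for indexing.
    chunks, start = [], 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks

doc = "".join(str(i % 10) for i in range(500))  # stand-in for a long document
pieces = chunk_text(doc)
print(len(pieces))  # 4 chunks covering 500 characters with 50-char overlap
```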
Despite its success, RAG has some shortcomings.
The two main indicators used to measure RAG performance are perplexity and hallucination. Perplexity reflects the number of equally likely next-word choices during text generation, that is, how "confused" the language model is when making its selection. Hallucinations are untrue or invented statements produced by the AI.
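Concretely, perplexity is the exponential of the average negative log-probability the model assigned to each actual next token. A worked sketch, assuming we already have those per-token probabilities:

```python
import math

def perplexity(token_probs):
    # token_probs: probability the model assigned to each actual next token
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# A model that spreads its probability evenly over 4 choices at every step
# is "4-way confused": its perplexity is exactly 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0
```

This matches the intuition in the text: a perplexity of 4 means the model was, on average, choosing among four equally plausible next words.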
While RAG helps reduce hallucination, it does not eliminate it. If you have a small and concise document, you can reduce confusion (since LLM options are few) and reduce hallucinations (if you only ask what's in the document). Of course, the flip side is that a single small document results in a trivial application. For more complex applications, you need a way to provide more context.
For example, consider the word "bark", which has at least two different contexts:
Tree context: "The oak's rough bark protects it from the cold."
Dog context: "The neighbor's dog barks loudly every time someone passes their house."
One way to provide more context is to combine RAG with a knowledge graph (yielding GraphRAG).
In the knowledge graph, these words are connected with their associated context and meaning. For example, "bark" would be connected to nodes representing "tree" and "dog". Other connections can indicate common actions (e.g., the tree's "protection," the dog's "making noise") or properties (e.g., the tree's "roughness," the dog's "loudness"). This structured information allows the language model to choose the appropriate meaning based on other words in the sentence or the overall theme of the conversation.
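The disambiguation idea can be illustrated with a toy graph. The node names and the overlap-counting heuristic below are made up for illustration; real knowledge graphs are far richer:

```python
# Toy knowledge graph: each sense of "bark" is a node linked to related concepts.
GRAPH = {
    ("bark", "tree"): {"oak", "trunk", "roughness", "protection", "cold"},
    ("bark", "dog"):  {"dog", "loudness", "noise", "neighbor", "house"},
}

def disambiguate(word, sentence_words):
    # Pick the sense whose graph neighbours overlap most with the sentence.
    best_sense, best_score = None, -1
    for (w, sense), neighbours in GRAPH.items():
        if w != word:
            continue
        score = len(neighbours & set(sentence_words))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("bark", ["the", "neighbor", "dog", "is", "loud"]))  # dog
```

Even this crude overlap count resolves the two example sentences correctly, which is exactly the structural signal a GraphRAG system can hand to the language model.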
In the next sections, we will see the limitations of RAG and how GraphRAG addresses them.
Original title: Understanding GraphRAG – 1: The challenges of RAG
Original author: ajitjaokar
The above is the detailed content of Understanding GraphRAG (1): Challenges of RAG. For more information, please follow other related articles on the PHP Chinese website!