GPTCache Tutorial: Enhancing Efficiency in LLM Applications
GPTCache is an open-source caching framework for large language model (LLM) applications like ChatGPT. It stores previously generated LLM responses so that, instead of calling the LLM again, the application can check the cache for a relevant answer to a similar query and save you time.
This guide explores how GPTCache works and how you can use it effectively in your projects.
GPTCache is a caching system designed to improve the performance and efficiency of applications built on large language models (LLMs) like GPT-3. It stores queries together with the responses the LLM already generated so that repeated work can be avoided.
When a similar query comes up again, the application can pull up the cached response instead of generating a new one from scratch.
Unlike tools that rely on exact string matching, GPTCache uses semantic caching. A semantic cache captures the intent behind a query or request, so when a similar request arrives later, the stored result can be reused. This reduces the server's workload and improves cache hit rates.
The main idea behind GPTCache is to store and reuse the responses an LLM generates during inference instead of recomputing them. Doing so has several benefits:
Most LLM APIs charge a fee per request based on the number of tokens processed. GPTCache minimizes the number of LLM API calls by serving previously generated responses for similar queries, which directly reduces your usage costs.
Retrieving a response from the cache is substantially faster than generating one from scratch by querying the LLM. This improves response times and reduces the load on the LLM service, freeing up capacity for other requests.
Suppose you're using an LLM to research questions for your content, and every answer takes ages to arrive. One common reason is that most LLM services enforce request limits within set periods; exceeding these limits blocks further requests until the limit resets, which causes service interruptions.
ChatGPT can reach its response generating limit
To avoid these issues, GPTCache caches previous answers to similar questions. When you ask something it has seen before, it quickly checks its memory and serves the stored answer, so you get your response in far less time than a fresh LLM call would take.
Simply put, by leveraging cached responses, GPTCache ensures LLM-based applications become responsive and efficient—just like you'd expect from any modern tool.
Here’s how you can install GPTCache directly:
Install the GPTCache package using this code.
! pip install -q gptcache
Next, import and initialize the cache in your application.
from gptcache import cache

cache.init()  # keep the default configuration
That’s it, and you’re done!
You can integrate GPTCache with LLMs through its LLM adapters. As of now, it is compatible with two adapters: OpenAI and LangChain.
Here’s how you can integrate it with both adapters:
To integrate GPTCache with OpenAI, initialize the cache and import openai from gptcache.adapter.
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
Before you run the example code, check whether the OPENAI_API_KEY environment variable is set by executing echo $OPENAI_API_KEY.
If it is not already set, you can set it by using export OPENAI_API_KEY=YOUR_API_KEY on Unix/Linux/MacOS systems or set OPENAI_API_KEY=YOUR_API_KEY on Windows systems.
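If you prefer to stay inside Python (for example, in a notebook), you can also set the key programmatically before initializing the cache. This is a minimal sketch; the placeholder value below is not a real key:
import os

# Hypothetical placeholder; substitute your real OpenAI API key.
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"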
Then, if you ask ChatGPT the same question twice, the answer to the second question will be retrieved from the cache instead of asking ChatGPT again.
Here's example code for the exact-match cache:
import time


def response_text(openai_resp):
    return openai_resp['choices'][0]['message']['content']


print("Cache loading.....")

# To use GPTCache, that's all you need
# -------------------------------------------------
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
# -------------------------------------------------

question = "what's github"
for _ in range(2):
    start_time = time.time()
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            {
                'role': 'user',
                'content': question
            }
        ],
    )
    print(f'Question: {question}')
    print("Time consuming: {:.2f}s".format(time.time() - start_time))
    print(f'Answer: {response_text(response)}\n')
Here’s what you will see in the output:
The second time, GPT took nearly 0 seconds to answer the same question
If you want to use a different LLM, try the LangChain adapter. Here's how you can integrate GPTCache with LangChain:
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", n=2, best_of=2)
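The snippet above only creates the LLM; to actually route its calls through GPTCache you still need to register a GPTCache-backed cache with LangChain. The sketch below follows the pattern in LangChain's caching documentation; the exact module paths (such as langchain_community.cache and gptcache.manager.factory) may differ across versions, so treat this as an illustration rather than the one definitive setup:
import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache


def get_hashed_name(name: str) -> str:
    # Hash the model name so each LLM gets its own cache directory.
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str):
    # Initialize a per-model GPTCache instance backed by a simple map store.
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )


# Tell LangChain to use GPTCache for all LLM calls.
set_llm_cache(GPTCache(init_gptcache))

After this call, repeated prompts sent through llm should be answered from the GPTCache-backed cache rather than by a new API request.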
Let's look at how GPTCache can support your projects.
Traditional exact-match caching is often ineffective for LLM applications because of the inherent complexity and variability of LLM queries, which results in a low cache hit rate.
To overcome this limitation, GPTCache adopts semantic caching strategies. Semantic caching stores responses and matches them to similar or related queries, increasing the probability of cache hits and enhancing overall caching efficiency.
GPTCache leverages embedding algorithms to convert queries into numerical representations called embeddings. These embeddings are stored in a vector store, enabling efficient similarity searches. This process allows GPTCache to identify and retrieve similar or related queries from the cache storage.
With its modular design, you can customize semantic cache implementations according to your requirements.
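For illustration, here is a sketch of how a semantic cache can be wired together from these components, based on the example in GPTCache's documentation: an ONNX embedding model, an SQLite cache store, and a FAISS vector store. Module paths and component names may vary between GPTCache versions:
from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Embedding model used to turn queries into vectors.
onnx = Onnx()

# Scalar storage (SQLite) for responses plus a FAISS vector store for embeddings.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
)

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()

With this configuration, a question that is phrased differently but close in meaning to a cached one can still be served from the cache.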
Sometimes, however, false cache hits and cache misses can occur in a semantic cache. To monitor this performance, GPTCache provides three metrics: hit ratio, latency, and recall.
All basic data elements, such as the initial queries, prompts, responses, and access timestamps, are stored in a 'data manager.' GPTCache currently supports SQL-based cache storage options such as SQLite, PostgreSQL, MySQL, MariaDB, SQL Server, and Oracle.
It doesn't support NoSQL databases yet, but support is planned.
GPTCache can also remove data from the cache storage once a specified limit or count is reached. To manage the cache size, you can use either a Least Recently Used (LRU) eviction policy or a First In, First Out (FIFO) approach.
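As a rough sketch, eviction is configured on the data manager. The max_size and eviction parameter names below reflect my reading of GPTCache's get_data_manager documentation and should be checked against the version you install:
from gptcache import cache
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager

onnx = Onnx()

# Assumed parameters: cap the cache at 1,000 entries and evict the least recently used ones.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
    max_size=1000,
    eviction="LRU",
)

cache.init(embedding_func=onnx.to_embeddings, data_manager=data_manager)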
GPTCache uses an 'evaluation' function to assess whether a cached response addresses a user query. To do so, it takes three inputs: the user's request, the cached data being compared against it, and any user-defined parameters.
You can also use two other options: 'log_time_func', which records how long time-consuming steps such as embedding and search take, and 'similarity_threshold', which sets the cutoff for deciding whether a cached response is close enough to reuse.
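To make the evaluation step concrete, here is a toy sketch of a custom evaluator built on GPTCache's SimilarityEvaluation interface. The evaluation/range method names and the 'question' keys are assumptions based on the library's built-in evaluators, not something defined in this article:
from gptcache.similarity_evaluation import SimilarityEvaluation


class KeywordOverlapEvaluation(SimilarityEvaluation):
    """Toy evaluator: scores a cached answer by word overlap with the new question."""

    def evaluation(self, src_dict, cache_dict, **kwargs):
        src_words = set(str(src_dict.get("question", "")).lower().split())
        cache_words = set(str(cache_dict.get("question", "")).lower().split())
        if not src_words or not cache_words:
            return 0.0
        # Jaccard similarity between the two sets of words.
        return len(src_words & cache_words) / len(src_words | cache_words)

    def range(self):
        # Lowest and highest possible scores for this evaluator.
        return 0.0, 1.0


evaluator = KeywordOverlapEvaluation()
print(evaluator.evaluation({"question": "what's github"}, {"question": "can you explain what github is"}))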
Now that you know how GPTCache functions, here are some best practices and tips to ensure you reap its benefits.
There are several steps you can take to optimize the performance of GPTCache, as outlined below.
How you prompt your LLM affects how well GPTCache works, so keep your phrasing consistent to increase your chances of a cache hit.
For example, use consistent phrasing like "I can't log in to my account." This way, GPTCache recognizes similar issues, such as "Forgot my password" or "Account login problems," more efficiently.
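One lightweight way to keep phrasing consistent is to normalize queries before they reach the cache. The normalize_query helper below is a hypothetical illustration, not part of GPTCache:
import re


def normalize_query(query: str) -> str:
    """Hypothetical helper: lowercase, trim, and collapse whitespace so
    near-identical phrasings map to the same cached query."""
    query = query.lower().strip()
    query = re.sub(r"\s+", " ", query)
    return query


# Both variants normalize to the same string before the cache lookup.
print(normalize_query("I can't log in to my account"))
print(normalize_query("  I CAN'T  log in to my account "))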
Monitor built-in metrics like hit ratio, recall, and latency to analyze your cache's performance. A higher hit ratio indicates that the cache is serving more requests from stored data, which tells you how effective it is.
To scale GPTCache for larger LLM applications, implement a shared cache approach that utilizes the same cache for user groups with similar profiles. Create user profiles and classify them to identify similar user groups.
Leveraging a shared cache for users of the same profile group yields good returns regarding cache efficiency and scalability.
This is because users within the same profile group tend to have related queries that can benefit from cached responses. However, you must employ the right user profiling and classification techniques to group users accurately and maximize the benefits of shared caching.
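A simple way to sketch this is to keep one cache instance per profile group and pass it explicitly to the adapter. The grouping below is hypothetical, and it assumes the cache_obj parameter described in GPTCache's adapter documentation:
from gptcache import Cache
from gptcache.adapter import openai

# One Cache instance per user profile group (hypothetical grouping).
group_caches = {}
for group in ("developers", "support_agents"):
    group_cache = Cache()
    group_cache.init()  # default exact-match configuration, for brevity
    group_caches[group] = group_cache


def ask(question: str, user_group: str):
    # Route the request through the shared cache of the user's group.
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        cache_obj=group_caches[user_group],  # assumed adapter parameter
    )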
If you’re struggling with GPTCache, there are several steps you can take to troubleshoot the issues.
GPTCache relies on up-to-date cache responses. If the underlying LLM's responses or the user's intent changes over time, the cached responses may become inaccurate or irrelevant.
To avoid this, set expiration times for cached entries based on the expected update frequency of the LLM and regularly refresh the cache.
While GPTCache can improve efficiency, over-reliance on cached responses can lead to inaccurate information if the cache is not invalidated properly.
For this purpose, make sure your application occasionally retrieves fresh responses from the LLM, even for similar queries. This maintains the accuracy and quality of the responses when dealing with critical or time-sensitive information.
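One way to do this, sketched below: GPTCache's OpenAI adapter documents a cache_skip flag that forces a fresh LLM call; the sampling policy wrapped around it here is purely illustrative:
import random

from gptcache.adapter import openai

# Illustrative policy: roughly 1 in 10 requests bypasses the cache to refresh stale answers.
REFRESH_RATE = 0.1


def ask_with_refresh(question: str):
    skip_cache = random.random() < REFRESH_RATE
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        cache_skip=skip_cache,  # assumed adapter flag; check your GPTCache version
    )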
The quality and relevance of the cached response impact the user experience. So, you should use evaluation metrics to assess the quality of cached responses before serving them to users.
By understanding these potential pitfalls and their solutions, you can ensure that GPTCache effectively improves the performance and cost-efficiency of your LLM-powered applications—without compromising accuracy or user experience.
GPTCache is a powerful tool for optimizing the performance and cost-efficiency of LLM applications. Proper configuration, monitoring, and cache evaluation strategies are required to ensure you get accurate and relevant responses.
To initialize the cache and use the OpenAI API, import openai from gptcache.adapter. By default, this sets the data manager to an exact-match cache. Here's how you can do this:
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
GPTCache stores the previous responses in the cache and retrieves the answer from the cache instead of making a new request to the API. So, the answer to the second question will be obtained from the cache without requesting ChatGPT again.