
Summarizing AI reviews over ultra-long text: a multi-pass approach based on hierarchical clustering

Semantically Compress Text to Save On LLM Costs

Originally published on October 28, 2024, on the Bazaarvoice Developer Blog

Introduction

Large Language Models (LLMs) are powerful tools for handling unstructured text, but what if your text exceeds the limits of the context window? Bazaarvoice faced this challenge when building its AI review summaries feature: millions of user reviews simply cannot fit into the context window of even the newest LLMs, and even if they could, the cost would be prohibitive.

This article shares how Bazaarvoice solved this problem by compressing the input text without losing semantics. Specifically, we used a multi-pass hierarchical clustering approach that lets us explicitly tune how much detail we are willing to lose in exchange for compression, regardless of the embedding model chosen. The resulting technique made our review summaries feature economically viable and laid the foundation for future business expansion.

The Problem

Bazaarvoice has been collecting user-generated product reviews for nearly 20 years, so we have a lot of data. These product reviews are completely unstructured, varying in length and content. Large language models are ideal tools for processing unstructured text: they can handle unstructured data and identify relevant information among distractors. However, LLMs have their limitations, one of which is the context window: the number of tokens (roughly, words) that can be fed in at once. State-of-the-art large language models, such as Anthropic's Claude 3, have very large context windows of up to 200,000 tokens. This means you can fit a small novel into them, but the internet is still a vast and ever-growing collection of data, and our user-generated product reviews are no exception.

We ran into the context window limit while building our review summaries feature, which summarizes all of the reviews for a given product on a client's website. Over the past 20 years, however, many products have accumulated thousands of reviews, which quickly overflow the LLM context window. Some products even have millions of reviews, which would require significant re-engineering of LLMs to process in a single prompt.

Even if it were technically feasible, the cost would be prohibitive. All LLM providers charge based on the number of input and output tokens. As you approach the context window limit for each product (and we have millions of products), our cloud hosting bill quickly exceeds six figures.

Our Method

To overcome these technical and economic constraints and ship review summaries, we focused on a fairly simple insight into our data: many reviews express the same meaning. In fact, the whole concept of a summary relies on this: a review summary captures reviewers' recurring insights, themes, and sentiments. We realized we could exploit this duplication to reduce the amount of text that needs to be sent to the LLM, avoiding the context window limit and lowering the operating cost of the system. To do this, we need to identify fragments of text that express the same meaning. That is easier said than done: people often use different words or phrases to express the same thing.

Luckily, recognizing whether two pieces of text are semantically similar has long been an active research area in natural language processing. The 2013 work of Agirre et al. (*SEM 2013 Shared Task: Semantic Textual Similarity, presented at the Second Joint Conference on Lexical and Computational Semantics) even published a dataset of human-labeled semantically similar sentences, known as the STS Benchmark. In it, annotators rate pairs of sentences for semantic similarity on a scale of 1-5, as shown in the following table (from Cer et al., SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation):

[Table: STS similarity rating scale with example sentence pairs, from Cer et al. 2017]

The STS Benchmark dataset is commonly used to evaluate how well a text embedding model places semantically similar sentences close together in its high-dimensional space. Specifically, Pearson correlation is used to measure how well the embedding model agrees with human judgment.

Therefore, we can use such an embedding model to identify semantically similar phrases within a product's reviews and then remove the duplicates before sending them to the LLM.

Our method is as follows:

  • Divide each product's reviews into sentences.
  • Compute an embedding vector for each sentence using a model that performs well on the STS Benchmark.
  • Run agglomerative hierarchical clustering over all embedding vectors for each product.
  • In each cluster, keep the example sentence closest to the cluster centroid to send to the LLM, and drop the other sentences in the cluster.
  • Treat any small clusters as outliers, and randomly sample from these outliers to include in the LLM prompt.
  • Include the number of sentences each cluster representative stands for in the LLM prompt, so that each sentiment is weighted appropriately.

This seems simple when written as a bulleted list, but before we could trust this approach, we had to work out a few details.
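To make the pipeline concrete, here is a minimal sketch of these steps in Python. This is not Bazaarvoice's production code: the embed function is a hypothetical placeholder for whatever embedding model is used (e.g. Amazon Titan via its API), and the clustering parameters are illustrative.

```python
# Minimal sketch of the clustering-based compression step (assumptions noted above).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def compress_reviews(sentences, embed, distance_threshold):
    """Cluster semantically similar sentences and keep one representative per cluster."""
    X = np.asarray(embed(sentences))  # shape: (n_sentences, embedding_dim)
    clustering = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=distance_threshold,  # derived from the STS score mapping below
        metric="cosine",        # scikit-learn >= 1.2; older versions use `affinity`
        linkage="average",
    ).fit(X)

    representatives = []
    for label in np.unique(clustering.labels_):
        members = np.where(clustering.labels_ == label)[0]
        centroid = X[members].mean(axis=0)
        # Keep the sentence closest to the cluster centroid as the representative.
        closest = members[np.argmin(np.linalg.norm(X[members] - centroid, axis=1))]
        representatives.append((sentences[closest], len(members)))  # (text, weight)
    return representatives
```

Using a distance threshold rather than a fixed number of clusters matches the approach described here: clusters form wherever sentences fall within the chosen semantic distance of each other.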

Embedding Model Evaluation

First of all, we had to make sure the model we use embeds text into a space where semantically similar sentences end up close together and semantically dissimilar sentences end up far apart. To do this, we simply take the STS Benchmark dataset and calculate the Pearson correlation for each model we want to consider. We use AWS as our cloud provider, so naturally we wanted to evaluate its Titan Text Embedding models.

The following table shows the Pearson correlation of the different Titan embedding models on the STS Benchmark:

[Table: Pearson correlation of Amazon Titan embedding models on the STS Benchmark]

AWS's embedding models are therefore quite good at placing semantically similar sentences close together. This was good news for us: we could use these models off the shelf, and they are extremely cheap.
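As an illustration of how such an evaluation might be implemented, here is a minimal sketch. The embed helper is again a hypothetical placeholder for the embedding model under test, and the STS Benchmark pairs and human scores are assumed to be loaded separately.

```python
# Sketch: score an embedding model against STS Benchmark human judgments.
import numpy as np
from scipy.stats import pearsonr

def sts_pearson(pairs, human_scores, embed):
    """pairs: list of (sentence_a, sentence_b); human_scores: human similarity ratings."""
    a = np.asarray(embed([p[0] for p in pairs]))
    b = np.asarray(embed([p[1] for p in pairs]))
    # Cosine similarity between the two embeddings of each pair.
    cosine = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    correlation, _ = pearsonr(cosine, human_scores)
    return correlation
```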

Semantic Similarity Clustering

The next challenge we faced was: how do we enforce semantic similarity during clustering? Ideally, no cluster should contain two sentences whose semantic similarity is below what a human would accept — a score of 4 in the table above. However, those scores do not translate directly into embedding distances, which is what the agglomerative clustering threshold requires.

To solve this, we turned to the STS Benchmark dataset again. We computed the embedding distances for all pairs in the training set and fit a polynomial mapping similarity score to distance threshold.

[Figure: polynomial fit mapping STS similarity score to embedding distance threshold]

This polynomial lets us compute the distance threshold needed to meet any semantic similarity target. For review summaries, we chose a score of 3.5, so nearly every cluster contains sentences that are "roughly" to "mostly" equivalent or better.
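A minimal sketch of this mapping, under the assumption that scores and distances hold the human ratings and the corresponding embedding (e.g. cosine) distances for the STS training pairs:

```python
# Sketch: fit a polynomial mapping similarity score -> embedding distance,
# then evaluate it at the target score to obtain a clustering threshold.
import numpy as np

def fit_score_to_distance(scores, distances, degree=3):
    coefficients = np.polyfit(scores, distances, deg=degree)
    return np.poly1d(coefficients)

# score_to_distance = fit_score_to_distance(scores, distances)
# threshold = score_to_distance(3.5)  # distance threshold for a 3.5 similarity target
```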

It is worth noting that this can be done for any embedding model. This lets us experiment with new embedding models as they appear and swap them in quickly when needed, without worrying that clusters will contain semantically dissimilar sentences.

Multi-Pass Clustering

At this point we knew we could trust our semantic compression, but it was not clear how much compression we could actually get out of the data. As expected, the amount of compression varies across products, clients, and industries.

With no loss of semantic information, i.e. a hard threshold at a score of 4, we only achieved a compression ratio of 1.18 (i.e. a 15% space saving).

Clearly, lossless compression was not enough to make this feature economically viable.

However, the distance-selection approach discussed above opens up an interesting possibility: we can gradually increase the amount of information loss by repeatedly re-running the clustering on the remaining data at progressively lower thresholds.

The method is as follows:

  • Run the clustering with the threshold selected for score = 4. This pass is considered lossless.
  • Take any outlier clusters, i.e. those with only a few vectors. These are considered "uncompressed" and carried into the next pass. We chose to re-cluster any clusters with fewer than 10 vectors.
  • Run the clustering again with the threshold selected for score = 3. This is no longer lossless, but not too bad.
  • Again take any clusters with fewer than 10 vectors.
  • Repeat as needed, lowering the score threshold each time.

So with each pass of clustering we sacrifice more information, but gain more compression, without muddying the lossless representative phrases chosen in the first pass.
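Here is a hedged sketch of that multi-pass loop. It assumes a hypothetical cluster_once helper that clusters a list of sentences at a given distance threshold (for example, built from the agglomerative clustering sketch above) and returns the clusters as lists of sentences, plus the score_to_distance polynomial from the previous section.

```python
# Sketch: multi-pass clustering with progressively looser thresholds.
def multi_pass_compress(sentences, cluster_once, score_to_distance,
                        score_schedule=(4.0, 3.0, 2.0), min_cluster_size=10):
    kept = []                    # (representative sentence, cluster size) pairs
    remaining = list(sentences)  # sentences not yet compressed
    for score in score_schedule:
        threshold = score_to_distance(score)
        clusters = cluster_once(remaining, threshold)  # -> list of lists of sentences
        remaining = []
        for members in clusters:
            if len(members) >= min_cluster_size:
                # Large enough cluster: keep one representative (ideally the member
                # closest to the centroid) along with the cluster size as a weight.
                kept.append((members[0], len(members)))
            else:
                # Small cluster: treat as uncompressed and retry at a looser threshold.
                remaining.extend(members)
    return kept, remaining       # `remaining` are the final outliers
```

Whatever is still left over after the final pass becomes the pool of outliers that gets randomly sampled into the prompt, as described below.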

Additionally, this approach is not only useful for review summaries, where we want high semantic similarity at the cost of less compression, but also for other use cases where we care less about preserving semantic information and more about spending less on prompt input.

In practice, even after lowering the score threshold several times, there are still many clusters containing only a single vector. These are treated as outliers and randomly sampled for inclusion in the final prompt. We choose the sample size so that the final prompt reaches 25,000 tokens, but no more.
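A small sketch of that budgeted sampling, where count_tokens is a stand-in for whatever tokenizer matches the target LLM:

```python
# Sketch: randomly sample outlier sentences until the prompt token budget is reached.
import random

def sample_outliers(outliers, prompt_so_far, count_tokens, budget=25_000):
    shuffled = list(outliers)
    random.shuffle(shuffled)
    used = count_tokens(prompt_so_far)
    sampled = []
    for sentence in shuffled:
        cost = count_tokens(sentence)
        if used + cost > budget:
            break
        sampled.append(sentence)
        used += cost
    return sampled
```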

Ensuring Authenticity

Multi-pass clustering and random outlier sampling allow us to sacrifice some semantic information in exchange for a smaller context window to send to the LLM. That raises the question: how good are our summaries?

At Bazaarvoice, we know that authenticity is a prerequisite for consumer trust, and our review summaries must stay authentic to truly represent all the voices captured in the reviews. Any lossy compression method risks misrepresenting, or excluding altogether, the consumers who took the time to write a review.

To ensure our compression technique works, we measured this directly. Specifically, for each product we sampled a number of reviews and then used LLM evals to judge whether the summary was representative of and relevant to each review. This gives us a hard metric with which to evaluate and balance our compression.
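The article does not spell out the eval setup, but conceptually it is a judging loop along these lines; call_llm and the prompt wording are hypothetical placeholders:

```python
# Hedged sketch: ask an LLM judge how well the summary represents each sampled review.
EVAL_PROMPT = (
    "Summary:\n{summary}\n\nReview:\n{review}\n\n"
    "On a scale of 1-5, how well does the summary represent the themes and "
    "sentiment of this review? Answer with a single number."
)

def eval_summary(summary, sampled_reviews, call_llm):
    scores = []
    for review in sampled_reviews:
        answer = call_llm(EVAL_PROMPT.format(summary=summary, review=review))
        scores.append(float(answer.strip()))
    return sum(scores) / len(scores)  # average representativeness score
```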

Results

Over the past 20 years we have collected nearly a billion user-generated reviews, and we need to generate summaries for tens of millions of products. Many of these products have thousands of reviews, and some have millions, which would exhaust the LLM context window and drive the cost up dramatically.

Using the approach above, however, we reduced the input text size by 97.7% (a compression ratio of 42), allowing us to scale this solution to all products and to any number of reviews in the future. In addition, the cost of generating summaries for our billion-scale dataset was reduced by 82.4%. This includes the cost of embedding the sentence data and storing the embeddings in a database.
