R.E.D.: Scaling Text Classification with Expert Delegation

With the new age of problem-solving augmented by Large Language Models (LLMs), only a handful of problems remain that have subpar solutions. Most classification problems (at a PoC level) can be solved by leveraging LLMs at 70–90% Precision/F1 with just good prompt engineering techniques, as well as adaptive in-context-learning (ICL) examples.

What happens when you want to consistently achieve performance higher than that — when prompt engineering no longer suffices?

The classification conundrum

Text classification is one of the oldest and most well-understood examples of supervised learning. Given this premise, it should really not be hard to build robust, well-performing classifiers that handle a large number of input classes, right…?

Welp. It is.

It actually has a lot more to do with the ‘constraints’ that the algorithm is generally expected to work under:

  • low amount of training data per class
  • high classification accuracy (that plummets as you add more classes)
  • possible addition of new classes to an existing subset of classes
  • quick training/inference
  • cost-effectiveness
  • (potentially) really large number of training classes
  • (potentially) endless required retraining of some classes due to data drift, etc.

Ever tried building a classifier beyond a few dozen classes under these conditions? (I mean, even GPT could probably do a great job up to ~30 text classes with just a few samples…)

Say you take the GPT route: if you have more than a couple dozen classes or a sizeable amount of data to classify, you will have to reach deep into your pockets to cover the system-prompt, user-prompt, and few-shot example tokens needed to classify one sample. That is after making peace with the throughput of the API, even if you are running async queries.

In applied ML, problems like these are generally tricky to solve since they don’t fully satisfy the requirements of supervised learning or aren’t cheap/fast enough to be run via an LLM. This particular pain point is what the R.E.D. algorithm addresses: semi-supervised learning, when the training data per class is not enough to build (quasi-)traditional classifiers.

The R.E.D. algorithm

R.E.D.: Recursive Expert Delegation is a novel framework that changes how we approach text classification. This is an applied ML paradigm — i.e., there is no fundamentally different architecture to what exists, but it’s a highlight reel of ideas that work best to build something practical and scalable.

In this post, we will work through a specific example where we have a large number of text classes (100–1,000), each class has only a few samples (30–100), and there is a non-trivial number of samples to classify (10,000–100,000). We approach this as a semi-supervised learning problem via R.E.D.

Let’s dive in.

How it works

[Figure: R.E.D.: Scaling Text Classification with Expert Delegation]

Instead of having a single classifier classify between a large number of classes, R.E.D. intelligently:

  1. Divides and conquers— Breaks the label space (large number of input labels) into multiple subsets of labels. This is a greedy label subset formation approach.
  2. Learns efficiently— Trains specialized classifiers for each subset. This step focuses on building a classifier that oversamples on noise, where noise is intelligently modeled as data from other subsets.
  3. Delegates to an expert— Employs LLMs as expert oracles for specific label validation and correction only, similar to having a team of domain experts. Using an LLM as a proxy, it empirically ‘mimics’ how a human expert validates an output.
  4. Recursive retraining— Continuously retrains with fresh samples added back from the expert until there are no more samples to be added, or saturation of information gain is achieved.
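
Before we unpack each step, here is a rough, non-authoritative sketch of how these four pieces could fit together in one loop. select_subsets and proxy_label are defined later in this post; train_classifier, add_samples, and saturated are hypothetical placeholders standing in for details covered below:

# A hypothetical driver loop tying the four R.E.D. steps together.
# train_classifier, add_samples, and saturated are illustrative placeholders.
def red_loop(label_embeddings, labeled_data, unlabeled_data, n, k, llm_judge):
    all_voted = []
    # 1. Divide and conquer: split the label space into subsets of at most n labels
    for subset in select_subsets(label_embeddings, n):
        # 2. Learn efficiently: train an (n + 1)-class classifier with a noise class
        clf = train_classifier(subset, labeled_data)
        while True:
            # 3. Delegate to an expert: an LLM validates the k most uncertain predictions
            voted = proxy_label(clf, llm_judge, k, unlabeled_data)
            if not voted or saturated(clf, voted):
                break  # stop once information gain saturates
            # 4. Recursive retraining: fold verified samples back into the training data
            labeled_data = add_samples(labeled_data, voted)
            clf = train_classifier(subset, labeled_data)
        all_voted.extend(voted)
    return all_voted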

The intuition behind it is not very hard to grasp: Active Learning employs humans as domain experts to consistently ‘correct’ or ‘validate’ the outputs from an ML model, with continuous training. This stops when the model achieves acceptable performance. We take the same intuition and rebrand it, with a few clever innovations that will be detailed in a research pre-print later.

Let’s take a deeper look…

Greedy subset selection with least similar elements

When the number of input labels (classes) is high, the complexity of learning a linear decision boundary between classes increases. As such, the quality of the classifier deteriorates as the number of classes increases. This is especially true when the classifier does not have enough samples to learn from — i.e., each of the training classes has only a few samples.

This is very reflective of a real-world scenario, and the primary motivation behind the creation of R.E.D.

Some ways of improving a classifier’s performance under these constraints:

  • Restrict the number of classes a classifier needs to classify between
  • Make the decision boundary between classes clearer, i.e., train the classifier on highly dissimilar classes

Greedy Subset Selection does exactly this — since the scope of the problem is Text Classification, we form embeddings of the training labels, reduce their dimensionality via UMAP, then form S subsets from them. Each of the S subsets has n training labels as elements. We pick training labels greedily, ensuring that every label we pick for the subset is the most dissimilar label w.r.t. the other labels that exist in the subset:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


def avg_embedding(candidate_embeddings):
    # Centroid of the embeddings currently in the subset
    return np.mean(candidate_embeddings, axis=0)


def get_least_similar_embedding(target_embedding, candidate_embeddings):
    # cosine_similarity expects 2D inputs, so reshape the 1D centroid
    similarities = cosine_similarity(target_embedding.reshape(1, -1), candidate_embeddings)
    least_similar_index = np.argmin(similarities)  # index of the minimum similarity
    return candidate_embeddings[least_similar_index]


def get_embedding_class(embedding, embedding_map):
    # numpy arrays are unhashable, so look the class up by value comparison
    for cls, emb in embedding_map.items():
        if np.array_equal(emb, embedding):
            return cls
    return None


def select_subsets(embeddings, n):
    # embeddings: {class_label: average embedding for that class}
    visited = {cls: False for cls in embeddings}
    subsets = []
    current_subset = []

    while not all(visited.values()):
        unvisited = [cls for cls in embeddings if not visited[cls]]

        if not current_subset:
            # Seed a new subset with an arbitrary unvisited class
            seed = unvisited[0]
            current_subset.append(embeddings[seed])
            visited[seed] = True
        elif len(current_subset) >= n:
            # Subset is full; close it and start a new one
            subsets.append(current_subset.copy())
            current_subset = []
        else:
            # Greedily add the unvisited label least similar to the subset centroid
            subset_average = avg_embedding(current_subset)
            remaining_embeddings = [embeddings[cls] for cls in unvisited]
            least_similar = get_least_similar_embedding(
                target_embedding=subset_average,
                candidate_embeddings=remaining_embeddings,
            )
            # least_similar always comes from embeddings, so this lookup cannot fail
            visited[get_embedding_class(least_similar, embeddings)] = True
            current_subset.append(least_similar)

    if current_subset:  # Add any remaining elements in current_subset
        subsets.append(current_subset)

    return subsets

The result of this greedy subset sampling is all the training labels clearly boxed into subsets, where each subset has at most n classes. This inherently makes the job of a classifier easier, compared with the full set of classes it would otherwise have to classify between!
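
For completeness, here is a minimal sketch of how the embeddings dict fed to select_subsets could be prepared, assuming sentence-transformers and umap-learn are available; the model name and UMAP settings are illustrative, not from the original setup:

import numpy as np
import umap
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

def build_label_embeddings(training_data, n_components=16):
    # training_data: {label: [text samples for that label]}
    labels = list(training_data)
    # Average each label's sample embeddings into a single vector per label
    label_vectors = np.stack([model.encode(training_data[lbl]).mean(axis=0) for lbl in labels])
    # Reduce dimensionality via UMAP before greedy subset formation
    reduced = umap.UMAP(n_components=n_components).fit_transform(label_vectors)
    return dict(zip(labels, reduced))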

Semi-supervised classification with noise oversampling

Cascade this after the initial label subset formation — i.e., this classifier is only classifying between a given subset of classes.

Picture this: when you have low amounts of training data, you absolutely cannot create a hold-out set that is meaningful for evaluation. Should you do it at all? How do you know if your classifier is working well?

We approached this problem slightly differently — we defined the fundamental job of a semi-supervised classifier to be pre-emptive classification of a sample. This means that regardless of what a sample gets classified as, it will be ‘verified’ and ‘corrected’ at a later stage: this classifier only needs to identify what needs to be verified.

As such, we created a design for how it would treat its data:

  • n + 1 classes, where the last class is noise
  • noise: data from classes that are NOT in the current classifier’s purview. The noise class is oversampled to be 2x the average size of the data for the classifier’s labels

Oversampling on noise is a faux-safety measure, to ensure that adjacent data that belongs to another class is most likely predicted as noise instead of slipping through for verification.
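
As a sketch, assuming training data is kept as a {label: samples} mapping, the noise-oversampled training set described above could be assembled like this (build_training_set and its arguments are illustrative, not from the original code):

import random

def build_training_set(subset_data, other_subset_samples):
    # subset_data: {label: [samples]} for the n labels in this subset
    # other_subset_samples: pool of samples drawn from all *other* subsets
    avg_class_size = sum(len(s) for s in subset_data.values()) / len(subset_data)
    # Oversample noise to 2x the average class size of this subset's labels
    noise_size = min(int(2 * avg_class_size), len(other_subset_samples))
    noise_samples = random.sample(other_subset_samples, noise_size)

    X, y = [], []
    for label, samples in subset_data.items():
        X.extend(samples)
        y.extend([label] * len(samples))
    # The (n + 1)-th class: noise
    X.extend(noise_samples)
    y.extend(["noise"] * len(noise_samples))
    return X, y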

How do you check if this classifier is working well? In our experiments, we define this as the number of ‘uncertain’ samples in a classifier’s prediction. Using uncertainty sampling and information gain principles, we were effectively able to gauge if a classifier is ‘learning’ or not, which acts as a pointer towards classification performance. This classifier is consistently retrained until there is an inflection point in the number of uncertain samples predicted, or only a small delta of information is being added by each iteration of new samples.
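
One way to operationalize this stopping rule is sketched below; the relative-drop threshold is illustrative, not from the original experiments:

def has_saturated(uncertain_counts, min_relative_drop=0.05):
    # uncertain_counts: number of 'uncertain' predictions after each retraining round
    if len(uncertain_counts) < 2:
        return False
    prev, curr = uncertain_counts[-2], uncertain_counts[-1]
    # Stop when the count of uncertain samples is no longer falling meaningfully
    return prev == 0 or (prev - curr) / prev < min_relative_drop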

Proxy active learning via an LLM agent

This is the heart of the approach — using an LLM as a proxy for a human validator. The human-validator approach we are talking about is Active Labelling.

Let’s get an intuitive understanding of Active Labelling:

  • Use an ML model to learn from a sample input dataset, then predict on a large set of datapoints
  • For the predictions given on the datapoints, a subject-matter expert (SME) evaluates the ‘validity’ of the predictions
  • Recursively, new ‘corrected’ samples are added as training data to the ML model
  • The ML model consistently learns/retrains, and makes predictions until the SME is satisfied with the quality of predictions

For Active Labelling to work, there are expectations involved for an SME:

  • when we expect a human expert to ‘validate’ an output sample, the expert understands what the task is
  • a human expert will use judgement to evaluate ‘what else’ definitely belongs to a label L when deciding if a new sample should belong to L

Given these expectations and intuitions, we can ‘mimic’ these using an LLM:

  • give the LLM an ‘understanding’ of what each label means. This can be done by using a larger model to critically evaluate the relationship between {label: data mapped to label} for all labels. In our experiments, this was done using a 32B variant of DeepSeek that was self-hosted.
  • Instead of predicting what is the correct label, leverage the LLM to identify if a prediction is ‘valid’ or ‘invalid’ only (i.e., the LLM only has to answer a binary query).
  • Reinforce the idea of what other valid samples for the label look like, i.e., for every pre-emptively predicted label for a sample, dynamically source the c closest samples in its training (guaranteed valid) set when prompting for validation.

The result? A cost-effective framework that relies on a fast, cheap classifier to make pre-emptive classifications, and an LLM that verifies these using (meaning of the label + dynamically sourced training samples that are similar to the current classification):

import math

def calculate_uncertainty(clf, sample):
    # Shannon entropy of the predicted class distribution (higher = more uncertain)
    predicted_probabilities = clf.predict_proba(sample.reshape(1, -1))[0]  # reshape sample for predict_proba
    return -sum(p * math.log(p, 2) for p in predicted_probabilities if p > 0)  # skip zero probabilities


def select_informative_samples(clf, data, k):
    # Rank samples by prediction entropy and return the indices of the top k
    uncertainties = [calculate_uncertainty(clf, sample) for sample in data]
    sorted_indices = sorted(range(len(data)), key=lambda i: uncertainties[i], reverse=True)
    return sorted_indices[:k]


def proxy_label(clf, llm_judge, k, testing_data):
    # llm_judge - any LLM with a system prompt tuned for verifying if a sample belongs
    # to a class. Expected output is a bool: True verifies the original classification,
    # False refutes it.
    predicted_classes = clf.predict(testing_data)

    # Select the k most informative samples using uncertainty sampling
    informative_indices = select_informative_samples(clf, testing_data, k)

    # List to store verified samples along with their proxy labels
    voted_data = []

    # Evaluate informative samples with the LLM judge
    for sample_index in informative_indices:
        sample = testing_data[sample_index]
        predicted_class = predicted_classes[sample_index]

        # Check if the LLM judge agrees with the prediction
        if llm_judge(sample, predicted_class):
            # If correct, keep the sample together with its verified label
            voted_data.append((sample, predicted_class))

    # Return the list of verified samples with proxy labels
    return voted_data
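
For concreteness, here is a minimal sketch of what an llm_judge could look like, assuming an OpenAI-compatible client (the experiments above used a self-hosted 32B DeepSeek variant; the model name, prompt, and the closest_training_samples helper are all illustrative):

from openai import OpenAI

client = OpenAI()

def make_llm_judge(label_descriptions, closest_training_samples, c=3):
    # Returns a judge matching the llm_judge(sample, predicted_class) signature above
    def llm_judge(sample, predicted_class):
        # Reinforce the label's meaning with its c closest verified training samples
        neighbors = closest_training_samples(predicted_class, sample, c)
        prompt = (
            f"Label: {predicted_class}\n"
            f"Label meaning: {label_descriptions[predicted_class]}\n"
            "Known valid examples:\n- " + "\n- ".join(neighbors) + "\n\n"
            f"Sample: {sample}\n"
            "Does this sample belong to the label? Answer only 'True' or 'False'."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any capable model works
            messages=[{"role": "user", "content": prompt}],
        )
        # The binary query: True verifies the classification, False refutes it
        return response.choices[0].message.content.strip().lower().startswith("true")
    return llm_judge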

By feeding the valid samples (voted_data) to our classifier under controlled parameters, we achieve the ‘recursive’ part of our algorithm:

[Figure: the recursive retraining loop of R.E.D.]

By doing this, we were able to achieve close-to-human-expert validation numbers on controlled multi-class datasets. Experimentally, R.E.D. scales up to 1,000 classes while maintaining a competent degree of accuracy, almost on par with human experts (90%+ agreement).

I believe this is a significant achievement in applied ML, with real-world uses for production-grade expectations of cost, speed, scale, and adaptability. The technical report, to be published later this year, highlights relevant code samples as well as the experimental setups used to achieve these results.

All images, unless otherwise noted, are by the author

Interested in more details? Reach out to me over Medium or email for a chat!
