ByteDance and East China Normal University: Exploring the In-Context Learning Capabilities of Small Models

It is well known that large language models (LLMs) can learn from a handful of examples through in-context learning, without any fine-tuning. So far, this phenomenon has only been observed in large models: GPT-4, Llama, and others perform impressively across many domains, yet resource constraints or strict real-time requirements rule them out in many practical scenarios.

So, do regular-sized models have this capability too? To explore the in-context learning capabilities of small models, research teams from ByteDance and East China Normal University studied the scene text recognition task.

In practical applications, scene text recognition faces a variety of challenges: different scenes, text layouts, deformations, lighting changes, blurred writing, diverse fonts, and so on. It is therefore difficult to train a single unified text recognition model that handles every scenario.

A direct way to address this is to collect data for each specific scenario and fine-tune the model on it. But that means retraining the model, which is computationally expensive, and storing a separate set of model weights for every scenario. If the text recognition model instead had in-context learning capability, it would only need a small amount of annotated data as prompts to improve on a new scenario, solving both problems at once. Scene text recognition, however, is a resource-sensitive task, and using a large model as the text recognizer would consume substantial resources. Moreover, in preliminary experiments the researchers observed that conventional large-model training methods are not suitable for scene text recognition.

To solve this, the research team from ByteDance and East China Normal University proposed E2STR (Ego-Evolving Scene Text Recognizer), a self-evolving text recognizer: a regular-sized model equipped with in-context learning capabilities that can quickly adapt to different text recognition scenarios without fine-tuning.


Paper link: https://arxiv.org/pdf/2311.13120.pdf

E2STR is equipped with an in-context training and in-context inference mode. A single model not only reaches SOTA on conventional datasets, but also improves recognition performance across diverse scenarios and adapts rapidly to new ones, even surpassing dedicated models that have been fine-tuned for them. E2STR demonstrates that regular-sized models are sufficient to achieve effective in-context learning in text recognition tasks.

Method

Figure 1 shows the training and inference pipeline of E2STR.

[Figure 1: the E2STR training and inference pipeline]

1. Basic text recognition training

The basic text recognition training phase trains the visual encoder and language decoder in an autoregressive framework, with the goal of acquiring basic text recognition ability.

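As a minimal sketch (the notation below is ours, assuming the usual cross-entropy formulation of autoregressive recognition rather than the paper's exact formula), the objective over a ground-truth character sequence $y = (y_1, \dots, y_T)$ and encoded image features $E(x)$ would take the form:

$$\mathcal{L}_{\mathrm{rec}} = -\sum_{i=1}^{T} \log p\left(y_i \mid y_{<i},\, E(x)\right)$$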

2. In-context training

In the in-context training phase, E2STR is further trained according to the in-context training paradigm proposed in the paper. At this stage, E2STR learns to understand the connections between different samples, gaining the ability to reason from contextual prompts.

[Figure 2: in-context training with the ST strategy]

As shown in Figure 2, the paper proposes the ST strategy, which randomly splits and transforms scene text samples to generate sets of "subsamples". The subsamples are intrinsically linked both visually and linguistically. These related samples are then concatenated into a single sequence, and the model learns contextual knowledge from these semantically rich sequences, thereby acquiring in-context learning ability. This stage also uses the autoregressive framework for training; a sketch of the subsample generation is given below.

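The following is a minimal Python sketch of the idea behind such a split-and-transform step. The even-character-spacing assumption, the small random rotation, and the `encode_image`/`tokenize` helpers are hypothetical simplifications for illustration, not the authors' implementation.

```python
import random
from PIL import Image

def split_and_transform(image: Image.Image, label: str, num_subsamples: int = 3):
    """Generate subsamples that stay linked visually and linguistically:
    crop random horizontal spans of a scene-text image together with the
    matching label substring, then lightly transform each crop.
    Assumes characters are roughly evenly spaced across the image width;
    a real implementation would use character-level positions instead."""
    w, h = image.size
    n = len(label)
    subsamples = []
    for _ in range(num_subsamples):
        # Pick a random contiguous span of characters.
        i = random.randint(0, n - 1)
        j = random.randint(i + 1, n)
        # Map the character span to a pixel span (even-spacing assumption).
        left = int(w * i / n)
        right = max(left + 1, int(w * j / n))
        crop = image.crop((left, 0, right, h))
        # A minimal "transform": a small random rotation.
        crop = crop.rotate(random.uniform(-5, 5), expand=True)
        subsamples.append((crop, label[i:j]))
    return subsamples

def build_context_sequence(subsamples, encode_image, tokenize):
    """Concatenate (image, text) subsamples into one training sequence so the
    autoregressive decoder sees semantically related context."""
    sequence = []
    for img, text in subsamples:
        sequence.extend(encode_image(img))  # visual tokens
        sequence.extend(tokenize(text))     # text tokens
    return sequence
```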

3. In-context inference

For a test sample, the framework selects the N samples from the in-context prompt pool that have the highest similarity to it in the visual latent space. Specifically, an image embedding I is computed by average-pooling the visual token sequence. The top N samples whose image embeddings have the highest cosine similarity with I are then selected from the pool to form the in-context prompt (sketched below).

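A minimal NumPy sketch of this selection step, assuming the encoder returns a `(sequence_length, dim)` array of visual tokens per image; the function names and pool layout are illustrative, not the authors' API:

```python
import numpy as np

def build_prompt_pool(images, labels, encoder):
    """Cache only the visual-encoder tokens (and their average-pooled
    embedding) for each pool sample, so selection never re-runs the encoder."""
    pool = []
    for img, label in zip(images, labels):
        tokens = encoder(img)  # (seq_len, dim) visual tokens
        pool.append((tokens, label, tokens.mean(axis=0)))
    return pool

def select_prompts(test_tokens, pool, n=2):
    """Pick the top-n pool samples by cosine similarity between
    average-pooled image embeddings."""
    q = test_tokens.mean(axis=0)
    q = q / np.linalg.norm(q)
    sims = [float(q @ (emb / np.linalg.norm(emb))) for _, _, emb in pool]
    top = np.argsort(sims)[::-1][:n]
    return [pool[i] for i in top]
```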

The selected prompts and the test sample are then concatenated and fed into the model, and E2STR learns new knowledge from the prompts without any training, improving recognition accuracy on the test sample. Importantly, the prompt pool retains only the tokens output by the visual encoder, which makes prompt selection very efficient. Moreover, since the prompt pool is small and E2STR requires no training at inference time, the additional computational overhead is minimal.

Experiment

The experiments cover three aspects: traditional text recognition benchmarks, cross-domain scene recognition, and correction of difficult samples.

1. Traditional datasets

A small set of samples (1,000, or 0.025% of the training set) is randomly drawn from the training set to form the in-context prompt pool. Evaluation on 12 common scene text recognition benchmarks gives the following results:

[Table: results on 12 common scene text recognition benchmarks]

Even on traditional benchmarks where recognition performance is nearly saturated, E2STR still delivers gains, surpassing the SOTA model.

2. Cross-domain scenarios

In the cross-domain setting, each test set provides only 100 in-domain training samples. The comparison between training-free E2STR and fine-tuned baselines is shown below; E2STR even exceeds the fine-tuned results of the SOTA method.

[Table: cross-domain results with 100 in-domain samples per test set]

3. Correcting difficult samples

The researchers collected a batch of difficult samples and annotated 10% to 20% of them, then compared E2STR's training-free in-context learning approach against fine-tuning the SOTA method. The results are as follows:

[Table: error rates on difficult samples]

Compared with fine-tuning, E2STR-ICL dramatically reduces the error rate on difficult samples.

Future Outlook

E2STR demonstrates that, with appropriate training and inference strategies, small models can acquire in-context learning capabilities similar to those of LLMs. In tasks with strict real-time requirements, a small model can thus be used to adapt quickly to new scenarios. More importantly, this approach of using a single model to adapt rapidly to new scenarios is a step toward building unified and efficient small models.
