
The answer extraction accuracy rate reaches 96.88%: xFinder eliminates the 'cheating' mentality of large models
The AIxiv column is where this site publishes academic and technical content. In the past few years, the AIxiv column has received more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, please feel free to contribute or contact us. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The first author and corresponding author of this article are both from the Shanghai Algorithm Innovation Institute. The corresponding author, Dr. Li Zhiyu, graduated from the Department of Computer Science at Renmin University of China and has worked on algorithm research and deployment at Internet companies such as Alibaba and Xiaohongshu, participating in projects including a hundred-billion-scale product knowledge graph, a user graph, and a public-opinion graph, and publishing more than 40 papers in total. Li Zhiyu currently leads overall technology R&D in the large model department of the Shanghai Algorithm Innovation Institute (headed by Dr. Xiong Feiyu). Institute homepage: https://www.iaar.ac.cn/

The rapid development of large language models (LLMs) has raised questions about how to evaluate them; the fairness and reliability of such evaluation are hotly debated.

Although existing evaluation frameworks such as OpenCompass, LM Eval Harness, and UltraEval, along with various benchmarks, have driven industry progress, only a few teams have focused on measuring the reliability of these frameworks' core components.

Recently, a research team from the Shanghai Algorithm Innovation Institute and Renmin University of China released a paper titled "xFinder: Robust and Pinpoint Answer Extraction for Large Language Models". The paper provides an in-depth analysis of the overall LLM evaluation pipeline, focusing on the reliability and consistency of the answer extractor component in large model evaluation.
  • Paper address:
    https://arxiv.org/abs/2405.11874
  • Github link:
    https://github.com/IAAR-Shanghai/xFinder
  • Huggingface link:
    https://huggingface.co/collections/IAAR-Shanghai/xfinder-664b7b21e94e9a93f25a8412

Current evaluation frameworks mainly rely on regular expressions (RegEx) to extract answers, but this approach has obvious flaws: manual review shows that the best extraction accuracy is only 74.38%, making the evaluation results highly unreliable.

In addition, the RegEx method can easily be fitted, intentionally or unintentionally, increasing the possibility of "cheating" and thus undermining the reliability and consistency of evaluation results. The figure below shows a RegEx component extraction error in an LLM evaluation framework.
[Figure: example of a RegEx extraction error in an LLM evaluation framework]
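To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of RegEx extraction such frameworks rely on; the pattern and responses below are illustrative assumptions, not taken from any specific framework.

```python
import re

# A hypothetical extraction pattern of the style evaluation frameworks
# commonly hard-code; it only matches one canonical answer template.
ANSWER_PATTERN = re.compile(r"[Tt]he answer is\s*\(?([A-D])\)?")

def regex_extract(response):
    """Return the matched option letter, or None if the pattern misses."""
    m = ANSWER_PATTERN.search(response)
    return m.group(1) if m else None

responses = [
    "The answer is (B).",                      # matches the template
    "Based on the context, B is most likely.", # missed: no template phrase
    "Answer: B",                               # missed: different template
]
extracted = [regex_extract(r) for r in responses]
print(extracted)  # only the first response is captured: ['B', None, None]
```

All three responses clearly contain the answer B, yet only the one phrased exactly as the pattern expects is extracted, which is the brittleness the paper attributes to RegEx-based evaluation.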
To effectively solve this problem, the research team from the Shanghai Algorithm Innovation Institute and Renmin University of China developed a new model called xFinder for more accurate key answer extraction.

xFinder has the following advantages:

(1) It does not require answers to be output in a specific format, is highly robust in answer extraction, and achieves an extraction accuracy of up to 95.18%, significantly better than the RegEx method in the current best LLM evaluation framework.

(2) It supports diverse question types, can automatically convert letter multiple-choice questions into short-answer questions, and supports mixed assessment of different question types, thereby reducing the possibility that models under test overfit to a particular question format.
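As a rough illustration of the conversion just described, the sketch below rewrites a letter multiple-choice question into a short-text question; the function name, data layout, and prompt wording are all assumptions for illustration, not the paper's exact templates.

```python
# Hedged sketch: turn a letter-option question into one the model must
# answer with the option text itself, so a bare letter no longer suffices.
def to_short_text_question(stem, options):
    """stem: question text; options: dict mapping letters to option text."""
    choices = " / ".join(options.values())
    return f"{stem}\nAnswer with one of the following: {choices}."

q = to_short_text_question(
    "Which planet is closest to the Sun?",
    {"A": "Venus", "B": "Mercury", "C": "Mars"},
)
print(q)
```

Because the rewritten question never exposes the letters, a model cannot score by guessing "A"; it must produce the option text, which is also easier to match unambiguously during extraction.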

Method introduction
The implementation of xFinder involves three main steps: generating LLM response content, annotating the KAF dataset, and training xFinder. To train the xFinder model effectively, the team built a specialized dataset, the Key Answer Finder (KAF) dataset, which contains 26,900 training samples, 4,961 test samples, and 4,482 generalization samples, covering a variety of evaluation tasks.

Large language model response generation

First, the research team selected several typical assessment task datasets from existing major evaluation benchmarks and reports, and classified these tasks into four types: letter option tasks, short text option tasks, classification label tasks, and math tasks.

The team then used LLMs from different series (such as Qwen, InternLM, and ChatGLM) to generate data pairs for these tasks. These varied LLM outputs produced rich and diverse data pairs, providing sufficient data to support the training of the xFinder model.

Automatic annotation and manual review

The team's strategy was to extract the key answer from each LLM response and use it as the label, building a high-quality KAF dataset. To improve annotation efficiency on the training set, they adopted a semi-automatic process: GPT-4 generated two sets of annotations using different prompts, a self-consistency strategy filtered out items with inconsistent annotations along with all mathematical questions, and those items were submitted for manual review. To ensure the validity and reliability of the test set and generalization set, all of their labels underwent two rounds of manual annotation.
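The self-consistency filtering step can be sketched as follows; the data layout and field names are assumptions, with the two label fields standing in for the annotations produced by the two GPT-4 prompts.

```python
# Hypothetical sketch of the semi-automatic annotation filter: samples
# whose two annotation passes disagree, plus all math questions, are
# routed to manual review; the rest keep the agreed label automatically.
def split_for_review(samples):
    auto_labeled, needs_review = [], []
    for s in samples:
        if s["task"] == "math" or s["label_a"] != s["label_b"]:
            needs_review.append(s)
        else:
            auto_labeled.append({**s, "label": s["label_a"]})
    return auto_labeled, needs_review

samples = [
    {"id": 1, "task": "letter_option", "label_a": "B", "label_b": "B"},
    {"id": 2, "task": "letter_option", "label_a": "A", "label_b": "C"},
    {"id": 3, "task": "math", "label_a": "42", "label_b": "42"},
]
auto, review = split_for_review(samples)
print(len(auto), len(review))  # 1 2
```

Here sample 1 is auto-labeled because both passes agree, sample 2 is flagged for disagreement, and sample 3 is flagged simply for being a math question, mirroring the routing described above.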

Training xFinder

To enhance the diversity of the KAF dataset and the generalization ability of the model, the research team adopted two data augmentation strategies:

(1) Simulating LLM responses: modify 50% of the letter option questions in the KAF training set, adding or deleting one to two options, to simulate the diverse responses of LLMs.

(2) Enriching prompt forms: extract 10% of the LLM responses containing key answer sentences and replace the prompt part, for example replacing "The final answer is A" with "Based on the context of the question, A is the most likely answer".
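A minimal sketch of the two augmentation strategies, assuming simple list and string representations; the helper names are invented, and the exact sampling and templates in the paper's pipeline may differ.

```python
import random

# Strategy 1 (illustrative): randomly drop an option or append a
# distractor to a letter-option question's choices.
def perturb_options(options, rng):
    out = options.copy()
    if rng.random() < 0.5 and len(out) > 2:
        out.pop(rng.randrange(len(out)))   # delete one option
    else:
        out.append("None of the above")    # add one option
    return out

# Strategy 2 (illustrative): swap the canonical answer template for a
# looser phrasing, using the example wording quoted in the text.
def rephrase_answer_sentence(sentence, answer):
    return sentence.replace(
        f"The final answer is {answer}",
        f"Based on the context of the question, {answer} is the most likely answer",
    )

rng = random.Random(0)
print(perturb_options(["A", "B", "C", "D"], rng))
print(rephrase_answer_sentence("The final answer is A", "A"))
```

Both transformations leave the key answer unchanged while varying the surface form, which is exactly what an extractor must learn to be robust to.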

In addition, the team used the XTuner tool and the QLoRA method to fine-tune base models such as the Llama, Qwen, and Gemma series, ultimately obtaining xFinder.
Experimental results

The team conducted extensive experiments to evaluate xFinder's performance on different tasks and compared it with existing RegEx methods.

Results on the KAF test set

On the KAF test set, the average extraction accuracy of xFinder-qwen1505 reaches 96.88%, significantly higher than the 74.38% of the RegEx method in the best evaluation framework.

Specifically, the extraction accuracy of xFinder-qwen1505 is 97.35% on the letter option task, 96.83% on the short text option task, 98.05% on the classification label task, and 92.76% on the math task. These results show that xFinder performs well across a wide range of tasks, significantly improving the accuracy and reliability of assessment.
Results on the KAF generalization set

On the KAF generalization set, which was constructed from LLM-generated samples and evaluation tasks different from those in the KAF training and test sets, xFinder-qwen1505 again showed excellent performance, with an average extraction accuracy of 93.42%.

Experimental results show that xFinder's performance is not only better than other RegEx-based evaluation frameworks but even significantly better than GPT-4, fully demonstrating its high robustness and generalization ability.
Evaluation in real-world scenarios

The research team conducted a comprehensive evaluation of 10 LLMs using xFinder and traditional evaluation frameworks, with evaluation tasks covering CommonsenseQA, BoolQ, GSM8K, and others. A series of comparative experiments applied five answer extraction schemes to the 10 different LLMs.

To sum up, the experimental results mainly reveal three key findings:

(1) The same model often ranks very differently under different frameworks, which makes it difficult to accurately reflect the model's true capabilities and shows low consistency.

(2) The different xFinder variants showed a high degree of consistency in these experiments and also surpassed the other evaluation frameworks in answer extraction accuracy, indicating that xFinder is a more reliable evaluation method.

(3) Compared with traditional letter option settings, directly using option text significantly improves the consistency of rankings, reflecting the instability of letter option settings. More details and experimental results, which further confirm these findings, are presented in the paper's appendix.
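One common way to quantify the ranking consistency discussed here is Kendall's tau over model rankings produced by two evaluation setups. The sketch below uses invented model names and ranks purely for illustration; this is a generic rank-correlation metric, not necessarily the paper's exact consistency measure.

```python
from itertools import combinations

# Kendall's tau: the fraction of model pairs ranked in the same order
# by both setups, minus the fraction ranked in opposite orders.
def kendall_tau(rank_a, rank_b):
    models = list(rank_a)
    concordant = discordant = 0
    for x, y in combinations(models, 2):
        s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical rankings of four models under two evaluation setups;
# setup 2 swaps the middle two models.
framework_1 = {"m1": 1, "m2": 2, "m3": 3, "m4": 4}
framework_2 = {"m1": 1, "m2": 3, "m3": 2, "m4": 4}
print(round(kendall_tau(framework_1, framework_2), 3))  # 0.667
```

A tau of 1.0 means two setups rank the models identically; values well below 1.0, as often seen across RegEx-based frameworks, signal the low consistency described in finding (1).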
Conclusion

In general, xFinder optimizes the key answer extraction module and improves the accuracy and reliability of LLM evaluation. Experimental results show that xFinder performs well on a variety of tasks, with high robustness and generalization ability. In the future, the research team will continue to optimize xFinder and study other key evaluation issues to provide a solid foundation for the reliable evaluation of LLM performance.
