The first author and corresponding author of this article are both from the Shanghai Algorithm Innovation Institute. The corresponding author, Dr. Li Zhiyu, graduated from the Computer Science Department of Renmin University of China and has worked on algorithm research and deployment at Internet companies including Alibaba and Xiaohongshu, participating in projects such as a hundred-billion-scale product knowledge graph, a user graph, and a public-opinion graph, and publishing more than 40 papers. He currently leads overall technology R&D in the large model department of the Shanghai Algorithm Innovation Institute (headed by Dr. Xiong Feiyu). Institute homepage: https://www.iaar.ac.cn/

The rapid development of large language models (LLMs) has raised pressing questions about how to evaluate them fairly and reliably.
Although existing evaluation frameworks such as OpenCompass, LM Eval Harness, and UltraEval, together with various benchmarks, have advanced the field, few teams have examined the reliability of the core components of these frameworks.
Recently, a research team from the Shanghai Algorithm Innovation Institute and Renmin University of China released a paper titled "xFinder: Robust and Pinpoint Answer Extraction for Large Language Models". The paper analyzes the end-to-end LLM evaluation pipeline, focusing on the reliability and consistency of the answer extractor component in large model evaluation.
- Paper address: https://arxiv.org/abs/2405.11874
- Github link: https://github.com/IAAR-Shanghai/xFinder
- Huggingface link: https://huggingface.co/collections/IAAR-Shanghai/xfinder-664b7b21e94e9a93f25a8412
Current evaluation frameworks rely mainly on regular expressions (RegEx) to extract answers, but this approach has obvious flaws: manual review shows that the best extraction accuracy is only 74.38%, making the evaluation results highly unreliable.
In addition, the RegEx method can be gamed, intentionally or unintentionally, by tailoring outputs to match the patterns, increasing the possibility of "cheating" and undermining the reliability and consistency of evaluation results. The figure below shows a RegEx extraction error in an LLM evaluation framework.
To address this problem, the research team from the Shanghai Algorithm Innovation Institute and Renmin University of China developed a new model, xFinder, for more accurate key answer extraction.
xFinder has the following advantages:
(1) It does not require the answer to follow a specific output format, makes answer extraction robust, and achieves an extraction accuracy of up to 95.18%, significantly better than the RegEx method in the best current LLM evaluation framework.
(2) It supports diverse question types: it can automatically convert letter-option multiple-choice questions into short-answer questions and supports mixed evaluation across question types, reducing the possibility that tested models overfit to a particular question format.
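The letter-to-text conversion mentioned in point (2) can be sketched as follows; the function and its prompt wording are hypothetical illustrations, not the team's actual implementation:

```python
# Convert a letter-option question into a short-text-option form, so the model
# must answer with the option text rather than a letter label.
def to_text_options(question: str, options: dict[str, str]) -> str:
    # Drop the letter labels and ask for the option text itself.
    texts = "; ".join(options.values())
    return f"{question}\nAnswer with one of the following: {texts}"

question = "Which gas do plants absorb during photosynthesis?"
options = {"A": "Oxygen", "B": "Carbon dioxide", "C": "Nitrogen"}
print(to_text_options(question, options))
```

Because the model can no longer answer with a bare letter, a fixed letter-matching pattern cannot be overfit to, which is the point of the mixed-format evaluation.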
The implementation of xFinder involves three steps: generating LLM response content, annotating the KAF dataset, and training xFinder. To train the xFinder model effectively, the team built a dedicated dataset, the Key Answer Finder (KAF) dataset, containing 26,900 training samples, 4,961 test samples, and 4,482 generalization samples, covering a variety of evaluation tasks.
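Each KAF sample pairs an LLM response with the human-verified key answer that xFinder should extract. The following is a purely hypothetical illustration of what such a sample might contain; the actual field names in the released dataset may differ:

```python
# Hypothetical KAF-style sample: a question, a free-form model response, and
# the annotated key answer xFinder is trained to extract from that response.
sample = {
    "question": "Which planet is known as the Red Planet? (A) Venus (B) Mars (C) Jupiter",
    "llm_response": "Mars is often called the Red Planet, so the answer is (B).",
    "key_answer": "B",            # the extraction target (label)
    "task_type": "letter option", # one of the four task types described below
}
print(sample["key_answer"])
```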
## Large language model response generation

First, the research team selected several typical evaluation task datasets from existing major evaluation benchmarks and reports, and classified these tasks into four types: letter-option tasks, short-text-option tasks, classification-label tasks, and math tasks. The team then used different series of LLMs (such as Qwen, InternLM, and ChatGLM) to generate data pairs for these tasks. These diverse LLMs produced rich and varied data pairs, providing ample data support for training the xFinder model.

## Automatic annotation and manual review

The team extracted key answers from the LLM responses as labels to build a high-quality KAF dataset. To improve annotation efficiency on the training set, they adopted a semi-automatic process: GPT-4 generated two sets of annotations with different prompts, a self-consistency strategy filtered out items with inconsistent annotations as well as all math questions, and these were submitted for manual review. To ensure the validity and reliability of the test set and generalization set, all of their labels underwent two rounds of manual annotation.

To enhance the diversity of the KAF dataset and the generalization ability of the model, the research team adopted two data augmentation strategies: (1) Simulating LLM responses: modify 50% of the letter-option questions in the KAF training set by adding or deleting one or two options, simulating the diverse responses of LLMs. (2) Enriching prompt forms: for the 10% of LLM responses containing a key answer sentence, replace the prompt phrasing, for example changing "The final answer is A" to "Based on the context of the question, A is the most likely answer".

In addition, the team used the XTuner tool and the QLoRA method to fine-tune base models such as the Llama, Qwen, and Gemma series, finally obtaining xFinder.
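The two data augmentation strategies described above can be sketched roughly as follows; the implementation details (filler option, exact rephrasing) are assumptions for illustration, not the team's actual code:

```python
import random

def perturb_options(options: list[str], rng: random.Random) -> list[str]:
    """Strategy 1: simulate diverse LLM responses by adding or deleting an option."""
    options = options.copy()
    if rng.random() < 0.5 and len(options) > 2:
        options.pop(rng.randrange(len(options)))  # delete a random option
    else:
        options.append("None of the above")       # hypothetical extra option
    return options

def enrich_prompt(response: str) -> str:
    """Strategy 2: rephrase the key-answer sentence to vary the prompt form."""
    return response.replace(
        "The final answer is",
        "Based on the context of the question, the most likely answer is",
    )

print(enrich_prompt("The final answer is A"))
print(perturb_options(["Venus", "Mars", "Jupiter"], random.Random(0)))
```

Both perturbations keep the underlying key answer unchanged while varying the surface form the extractor has to handle.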
The team conducted extensive experiments to evaluate xFinder's performance on different tasks and compare it with existing RegEx methods.

## Results on the KAF test set

On the KAF test set, the average extraction accuracy of xFinder-qwen1505 reaches 96.88%, significantly higher than the 74.38% achieved by the RegEx method in the best evaluation framework. Specifically, xFinder-qwen1505 achieves an extraction accuracy of 97.35% on letter-option tasks, 96.83% on short-text-option tasks, 98.05% on classification-label tasks, and 92.76% on math tasks. These results show that xFinder performs well across a wide range of tasks, significantly improving the accuracy and reliability of evaluation.

## Results on the KAF generalization set

On the new KAF generalization set (constructed from LLM-generated samples and evaluation tasks different from those in the KAF training and test sets), xFinder-qwen1505 again performed excellently, with an average extraction accuracy of 93.42%. The experimental results show that xFinder not only outperforms other RegEx-based evaluation frameworks but is even significantly better than GPT-4, fully demonstrating its high robustness and generalization ability.

## Evaluation in real-world scenarios

The research team comprehensively evaluated 10 LLMs using xFinder and traditional evaluation frameworks, on tasks including CommonsenseQA, BoolQ, and GSM8K, applying five answer extraction schemes to the 10 LLMs in a series of comparative experiments. In summary, the experiments reveal three key findings: (1) The same model often ranks very differently under different frameworks, which makes it difficult to reflect the model's true capabilities and shows low consistency.
(2) Different xFinder variants showed a high degree of consistency in these experiments and also surpassed the other evaluation frameworks in answer extraction accuracy, indicating that xFinder is a more reliable evaluation method. (3) Compared with the traditional letter-option setting, directly using option text significantly improves the consistency of rankings, reflecting the instability of letter-option settings. More details and experimental results, which further confirm these findings, are presented in the paper's appendix.

In summary, xFinder optimizes the key answer extraction module, improving the accuracy and reliability of LLM evaluation. Experimental results show that xFinder performs well across a variety of tasks, with high robustness and generalization ability. In the future, the research team will continue to optimize xFinder and study other key evaluation issues, providing a solid foundation for reliable evaluation of LLM performance.