
Don't let large models be fooled by benchmark evaluations! Test sets accidentally slip into pre-training, scores become falsely high, and the model gets worse.

"Don't let large models get fooled by benchmark evaluations."

This is the title of a recent study from the School of Information and the Hillhouse School of Artificial Intelligence at Renmin University of China, together with the University of Illinois Urbana-Champaign.


The study finds that it is becoming more and more common for benchmark data to be accidentally included in model training.

Because pre-training corpora draw heavily on public text, and evaluation benchmarks are built from the same sources, some overlap is almost inevitable.

The problem is only getting worse as large models scoop up ever more public data.

The harm caused by this kind of data overlap is considerable.

Not only does it inflate a model's scores on the affected benchmarks, it also degrades the model's generalization ability and causes performance on unrelated tasks to plummet. It may even lead large models to cause real harm in practical applications.


The study therefore issues an explicit warning, and verifies the harms leakage can induce through a series of simulation experiments. The details are as follows.

It is very dangerous for large models to be fed "leaked questions"

The researchers study the impact on large models by simulating extreme data-leakage scenarios and observing the results.

There are four extreme leakage settings (a sketch of how such a contaminated training mixture might be assembled follows the list):

  • Training with the MMLU training set
  • Training with the training sets of all test benchmarks except MMLU
  • Training with all training sets plus the test prompts
  • Training with all training sets, test sets, and test prompts (the most extreme case; purely an experimental simulation that would not occur under normal circumstances)
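
To make the first setting concrete, below is a minimal sketch of how a contaminated pre-training mixture might be assembled. It is an illustration under assumptions, not the paper's actual recipe: the Hugging Face dataset IDs (`cais/mmlu`, `wikitext`), the `auxiliary_train` split, the field names, and the text-flattening template are all assumptions.

```python
# Minimal sketch: mix a benchmark's training split into a pre-training corpus
# to simulate the "MMLU training set leaked" setting. Dataset IDs, split
# names, and the template are assumptions, not the paper's recipe.
from datasets import load_dataset

def flatten_mmlu(example):
    # Render a multiple-choice item as plain text, roughly as it might
    # appear if the benchmark had leaked into a web-scraped corpus.
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(example["choices"]))
    return f"{example['question']}\n{options}\nAnswer: {letters[example['answer']]}"

base = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")  # stand-in corpus
leak = load_dataset("cais/mmlu", "all", split="auxiliary_train")       # MMLU train data

mixed_corpus = [t for t in base["text"] if t.strip()] + [flatten_mmlu(ex) for ex in leak]
# Continued training on `mixed_corpus` simulates the leakage scenario.
```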

The researchers then "poisoned" four large models in these ways and observed their performance across benchmarks, mainly evaluating question answering, reasoning, and reading comprehension tasks.

The models used are:

  • GPT-Neo (1.3B)
  • phi-1.5 (1.3B)
  • OpenLLaMA (3B)
  • LLaMA-2 (7B)

LLaMA (13B/30B/65B) was also used as a control group.
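
For context, multiple-choice benchmarks of this kind are typically scored by computing each candidate answer's log-likelihood under the model and picking the highest. The sketch below shows that idea with Hugging Face `transformers`; the model name is one of the paper's models, but the prompt template and scoring details are simplified assumptions, not the paper's exact protocol.

```python
# Minimal sketch of log-likelihood scoring for multiple-choice evaluation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-1.3B"  # one of the models studied in the paper
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def option_logprob(prompt: str, option: str) -> float:
    # Sum log-probabilities of the option's tokens, conditioned on the prompt.
    # (Tokenizing prompt and prompt+option separately can split tokens
    # differently at the boundary; acceptable for a sketch.)
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tok(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    return sum(logp[i, ids[0, i + 1]].item() for i in range(prompt_len - 1, ids.shape[1] - 1))

prompt = "Question: Water boils at 100 degrees Celsius at...\nAnswer:"
options = [" sea level", " the summit of Everest"]
prediction = max(options, key=lambda o: option_logprob(prompt, o))
```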

The results show that when a large model's pre-training data contains data from a given evaluation benchmark, the model performs better on that benchmark, but its performance on unrelated tasks declines.

For example, after training with the MMLU data set, several large models' scores improved on the MMLU test, but their scores on the commonsense benchmark HSwag and the math benchmark GSM8K dropped.

This indicates that the generalization ability of large models is affected.


On the other hand, leakage can also produce falsely high scores on unrelated tests.

The four training sets used to "poison" the models contain only a small amount of Chinese data, yet after the models were "poisoned", their scores on C3 (a Chinese benchmark) all rose.

There is no reasonable explanation for such an increase.


This kind of training-data leakage can even push a model's test scores abnormally past those of much larger models.

For example, phi-1.5 (1.3B) outperforms LLaMA (65B), a model 50 times its size, on RACE-M and RACE-H.

But such score gains are meaningless; they are simply cheating.


More seriously, even tasks whose data was never leaked are affected, with performance declining.

In the code task HEval (HumanEval), for instance, the scores of both large models dropped significantly.


Moreover, once leakage had occurred, the improvement the models gained from fine-tuning was far smaller than in the leak-free setting.


The study also analyzes how such data overlap and leakage come about.

For example, large-model pre-training corpora and benchmark test data both draw on public text (web pages, papers, and so on), so overlap is inevitable.

Moreover, large-model evaluations are currently run locally or through API calls, and neither route allows rigorous checks for abnormal score increases.

In addition, the pre-training corpora of current large models are guarded by their owners as core secrets, so the outside world cannot audit them.

The result is that large models end up accidentally "poisoned".

How can this problem be avoided? The research team offers some suggestions.

How to avoid it?

The research team gave three suggestions:

First, since data overlap is difficult to avoid entirely in practice, large models should be evaluated on multiple benchmarks for a more comprehensive picture.

Second, large-model developers should decontaminate (desensitize) their training data and disclose the detailed composition of the corpus.

Third, benchmark maintainers should disclose the sources of their benchmark data, analyze the risk of contamination, and run multiple evaluations using more diverse prompts.
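
As a concrete illustration of the third suggestion, below is a minimal sketch of an n-gram overlap check a benchmark maintainer could run against a (hypothetically accessible) training corpus. The 13-gram window follows the convention popularized by the GPT-3 report; it is not necessarily this paper's method, and the corpus and test items here are placeholders.

```python
# Minimal sketch of an n-gram contamination check. The 13-gram window follows
# the GPT-3 report's convention; corpus and test items are placeholders.
def ngrams(text: str, n: int = 13) -> set[str]:
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(test_item: str, corpus_grams: set[str], n: int = 13) -> bool:
    # Flag a test item if any of its n-grams appears verbatim in the corpus.
    return bool(ngrams(test_item, n) & corpus_grams)

corpus_docs = ["...pre-training documents (placeholder)..."]
corpus_grams = set().union(*(ngrams(d) for d in corpus_docs))

test_items = ["...benchmark test questions (placeholder)..."]
flagged = [q for q in test_items if is_contaminated(q, corpus_grams)]
```

At real scale a maintainer would hash the n-grams and stream the corpus rather than hold everything in memory, but the flag-and-report logic is the same.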

However, the team also notes limitations of the research: for example, it does not systematically test different degrees of data leakage, and it does not introduce the leakage directly during pre-training in its simulations.

The research was jointly conducted by scholars from the School of Information and the Hillhouse School of Artificial Intelligence at Renmin University of China, together with the University of Illinois Urbana-Champaign.

The research team includes two heavyweights in the field of data mining: Wen Jirong and Han Jiawei.

Professor Wen Jirong is the dean of the Hillhouse School of Artificial Intelligence and the dean of the School of Information at Renmin University of China. His main research areas are information retrieval, data mining, machine learning, and the training and application of large-scale neural network models.

Professor Han Jiawei is an expert in the field of data mining. He is a professor in the Department of Computer Science at the University of Illinois Urbana-Champaign, an ACM Fellow, and an IEEE Fellow.

Paper address: https://arxiv.org/abs/2311.01964.
