
Google releases ASPIRE, a model training framework that allows AI to independently judge output accuracy

王林
2024-01-23 17:36


Google recently issued a press release announcing ASPIRE, a training framework designed for large language models that aims to improve their selective prediction capabilities.


Google noted that large language models are advancing rapidly in natural language understanding and content generation and have been used to build a variety of innovative applications, but they remain ill-suited to high-stakes decision-making scenarios because of the uncertainty in model predictions and the possibility of "hallucination". Google therefore developed the ASPIRE training framework, which adds a "credibility" mechanism to a model: the model outputs a set of candidate answers, each accompanied by a score for its probability of being correct.


▲ Image source: Google press release (same below)

At the technical level, the training framework is divided into three stages: task-specific tuning, answer sampling, and self-evaluation learning.

In the "task-specific tuning" stage, a pretrained large language model receives further training focused on strengthening its predictive ability. The researchers introduce a small set of adjustable parameters and fine-tune the pretrained model on the training set of a specific task, improving prediction performance and allowing the model to better solve that task.
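The core idea of this stage, parameter-efficient tuning, can be sketched in miniature: the base model's weights stay frozen, and only a small set of added parameters is trained on the task data. The toy linear "model", the dataset, and all names below are illustrative assumptions, not Google's actual code.

```python
import random

random.seed(0)

FROZEN_W = [0.5, -0.3, 0.8]          # pretrained weights (never updated)

def base_model(x, soft_params):
    # Frozen linear scorer plus a tunable additive adjustment,
    # standing in for a soft-prompt-style set of extra parameters.
    return sum(w * xi for w, xi in zip(FROZEN_W, x)) + sum(soft_params)

def tune(dataset, steps=200, lr=0.05):
    soft = [0.0, 0.0]                # the only trainable parameters
    for _ in range(steps):
        x, y = random.choice(dataset)
        pred = base_model(x, soft)
        err = pred - y
        # Gradient of the squared error w.r.t. each soft parameter is 2*err.
        soft = [p - lr * 2 * err for p in soft]
    return soft

# Tiny "task dataset": each target sits a constant +1.0 above the frozen
# model's output, so the soft parameters should learn to sum to ~1.0.
data = [([1, 0, 0], 1.5), ([0, 1, 0], 0.7), ([0, 0, 1], 1.8)]
soft = tune(data)
print(round(sum(soft), 2))  # → 1.0
```

The point of the sketch is only the division of labor: the pretrained weights are treated as fixed, and the task-specific knowledge lands entirely in the small added parameter set.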


The second stage is "answer sampling". After task-specific tuning, the model uses the learned adjustable parameters to generate multiple candidate answers for each training question, producing a dataset of high-likelihood answers for self-evaluation learning. The researchers used beam search for sampling and the ROUGE-L metric to evaluate answer quality, then fed the generated answers and their scores back into the model for the third stage.
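The scoring half of this stage can be illustrated with a minimal stdlib implementation of ROUGE-L (a longest-common-subsequence F-measure) applied to a set of candidate answers. The candidate strings are made-up examples; a real pipeline would obtain them from beam-search sampling of the tuned model.

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ta == tb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    # Token-level ROUGE-L F1 between a candidate answer and the reference.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

reference = "paris is the capital of france"
candidates = [
    "paris is the capital of france",
    "the capital of france is paris",
    "france borders spain",
]
# Score every sampled answer against the reference, best first.
scored = sorted(((rouge_l(c, reference), c) for c in candidates), reverse=True)
for score, ans in scored:
    print(f"{score:.2f}  {ans}")
```

Each (answer, score) pair produced this way is exactly the kind of supervision the third stage consumes.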


In the third stage, "self-evaluation learning", the researchers add another set of adjustable parameters dedicated to improving the model's self-evaluation ability. The goal of this stage is for the model to learn to judge the accuracy of its own output, so that when the large language model generates an answer it also attaches a score for the probability that the answer is correct.
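The shape of this stage can be sketched as a tiny logistic scorer: a second, separate set of trainable parameters maps features of a generated answer to a probability of correctness, trained on the (answer quality, was-correct) pairs produced in stage 2. The single scalar feature and the logistic form are simplifying assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_self_eval(pairs, steps=500, lr=0.5):
    # The new adjustable parameters dedicated to self-evaluation.
    w, b = 0.0, 0.0
    for _ in range(steps):
        for feat, label in pairs:
            p = sigmoid(w * feat + b)
            # Gradient of the log-loss w.r.t. w and b.
            w -= lr * (p - label) * feat
            b -= lr * (p - label)
    return w, b

# (answer-quality feature from stage 2, whether the answer was correct).
history = [(0.95, 1), (0.80, 1), (0.40, 0), (0.10, 0)]
w, b = train_self_eval(history)

# Self-assessed P(correct) attached to a new high-quality answer.
confidence = sigmoid(w * 0.9 + b)
print(confidence > 0.5)  # → True
```

The base model's weights and the stage-1 parameters are untouched here; only the self-evaluation head learns, which is what lets the same model both answer and grade its own answers.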

Google researchers validated the ASPIRE training framework on three question-answering datasets: CoQA, TriviaQA, and SQuAD. They report that "the small OPT-2.7B model tuned with ASPIRE performs far better than the larger OPT-30B model." The experimental results also show that, with appropriate tuning, even a small language model can surpass a large one in some scenarios.


The researchers concluded that training with the ASPIRE framework can significantly improve the output accuracy of large language models, enabling even smaller models to make "accurate and confident" predictions after fine-tuning.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.