
To provide a new scientific and complex question answering benchmark and evaluation system for large models, UNSW, Argonne, University of Chicago and other institutions jointly launched the SciQAG framework



Editor | ScienceAI

Question-answering (QA) datasets play a vital role in advancing natural language processing (NLP) research. High-quality QA datasets can be used not only to fine-tune models but also to evaluate the capabilities of large language models (LLMs), especially their ability to understand and reason about scientific knowledge.

Although many scientific QA datasets already cover medicine, chemistry, biology and other fields, they still have several shortcomings.

First, the data format is relatively simple: most are multiple-choice questions. These are easy to score, but they restrict the model to a fixed set of answer options and cannot fully test its ability to answer scientific questions. In contrast, open-ended question answering (openQA) can evaluate a model's capabilities more comprehensively, but it lacks suitable evaluation metrics.

Second, much of the content in existing datasets comes from textbooks at or below the university level, making it difficult to evaluate how well LLMs retain the advanced knowledge needed in real academic research or production environments.

Third, the creation of these benchmark datasets relies on annotation by human experts, which is costly and difficult to scale.

Addressing these challenges is crucial for building more comprehensive QA datasets and enables more accurate assessment of scientific LLMs.


Illustration: SciQAG framework for generating high-quality scientific question and answer pairs from scientific literature.

To this end, Argonne National Laboratory in the United States and the team of Professor Ian Foster at the University of Chicago (2002 Gordon Bell Prize winner), the UNSW AI4Science team of Professor Bram Hoex at the University of New South Wales, Australia, the AI4Science company GreenDynamics, and the team of Professor Chunyu Kit at the City University of Hong Kong jointly proposed SciQAG, the first framework to automatically generate high-quality open-ended scientific question-answer pairs from large scientific literature corpora using large language models (LLMs).


Paper link: https://arxiv.org/abs/2405.09939

GitHub link: https://github.com/MasterAI-EAM/SciQAG

Based on SciQAG, the researchers built SciQAG-24D, a large-scale, high-quality, open scientific QA dataset that contains 188,042 QA pairs extracted from 22,743 scientific papers across 24 scientific fields. It is designed to support both LLM fine-tuning and the assessment of scientific problem-solving capabilities.

Experiments demonstrate that fine-tuning LLMs on the SciQAG-24D dataset can significantly improve their performance in open-ended question answering and scientific tasks.

The dataset, models, and evaluation code have been open-sourced (https://github.com/MasterAI-EAM/SciQAG) to promote the joint development of open scientific QA within the AI for Science community.

SciQAG framework with SciQAG-24D benchmark dataset

SciQAG consists of a QA generator and a QA evaluator, and aims to rapidly generate diverse open-ended question-answer pairs from scientific literature at scale. The generator first converts scientific papers into question-answer pairs; the evaluator then filters out pairs that do not meet quality standards, yielding a high-quality scientific QA dataset.
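To make the two-stage design concrete, the following is a minimal sketch of the pipeline, assuming hypothetical `generate_qa_pairs` and `score_qa_pair` helpers rather than the released implementation (the official code is in the GitHub repository linked above).

```python
# Minimal sketch of the SciQAG pipeline: generate QA pairs from each paper,
# score them, and keep only the pairs that pass a quality threshold.
# `generate_qa_pairs` and `score_qa_pair` are hypothetical stand-ins.

def build_dataset(papers, generate_qa_pairs, score_qa_pair, threshold=3):
    """papers: list of full-text strings; returns a list of accepted QA pairs."""
    accepted = []
    for paper in papers:
        for qa in generate_qa_pairs(paper):        # QA generator (fine-tuned LLM)
            scores = score_qa_pair(paper, qa)      # QA evaluator (e.g. GPT-4 with RACAR)
            if min(scores.values()) >= threshold:  # discard if any dimension is too low
                accepted.append(qa)
    return accepted
```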

QA Generator

Through comparative experiments, the researchers designed a two-step prompt that instructs the LLM to first extract keywords from a paper and then generate question-answer pairs based on those keywords.

Because the generated QA dataset adopts a "closed-book" mode, in which the original paper is not provided and only the extracted scientific knowledge matters, the prompt requires that the generated QA pairs neither rely on nor refer to information unique to the original paper: phrases such as "this paper" or "this study" are disallowed, as are questions about the tables or figures in the article.

To balance performance and cost, the researchers chose to fine-tune an open-source LLM as the generator. SciQAG users can choose any open-source or closed-source LLM as the generator according to their circumstances, using either fine-tuning or prompt engineering.
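As an illustration of the two-step prompting scheme, the sketch below first asks the model for keywords and then asks for closed-book QA pairs conditioned on those keywords. The `chat` callable and the prompt wording are assumptions for illustration, not the paper's exact prompts.

```python
# Sketch of the two-step prompting scheme (prompt wording is illustrative only).
# `chat(prompt)` is assumed to call whichever open- or closed-source LLM the
# user has chosen as the generator and to return its text response.

def two_step_generate(chat, paper_text, num_pairs=10):
    # Step 1: extract keywords that capture the paper's scientific content.
    keywords = chat(
        "Extract the key scientific terms and concepts from the following paper:\n"
        f"{paper_text}"
    )
    # Step 2: generate closed-book QA pairs grounded in those keywords.
    # The prompt forbids references to "this paper", "this study", tables or figures.
    qa_text = chat(
        f"Based on the keywords below and the paper content, write {num_pairs} "
        "self-contained question-answer pairs. Do not mention 'this paper' or "
        "'this study', and do not ask about tables or figures.\n"
        f"Keywords: {keywords}\nPaper: {paper_text}"
    )
    return qa_text
```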

QA Evaluator

The evaluator serves two purposes: (1) evaluating the quality of the generated QA pairs, and (2) discarding low-quality pairs according to preset criteria.

The researchers developed a comprehensive evaluation metric, RACAR, which consists of five dimensions: relevance, agnosticism, completeness, accuracy, and reasonableness.

In this study, the researchers used GPT-4 directly as the QA evaluator, scoring the generated QA pairs on each RACAR dimension on a scale of 1 to 5 (1 means unacceptable, 5 means fully acceptable).
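A minimal sketch of what such an evaluator call might look like is shown below, using the OpenAI chat API and asking the model to return the five RACAR scores as JSON. The prompt wording and the JSON-parsing step are assumptions, not the released evaluator code.

```python
# Sketch of a GPT-4-based RACAR evaluator (prompt and parsing are illustrative;
# json.loads assumes the model actually replies with a valid JSON object).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RACAR_PROMPT = (
    "Rate the following question-answer pair, generated from the given paper, "
    "on five dimensions: relevance, agnosticism, completeness, accuracy, "
    "reasonableness. Use integers from 1 (unacceptable) to 5 (fully acceptable) "
    "and reply with a JSON object keyed by dimension.\n\n"
    "Paper:\n{paper}\n\nQuestion: {question}\nAnswer: {answer}"
)

def racar_scores(paper, question, answer, model="gpt-4"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": RACAR_PROMPT.format(paper=paper,
                                                  question=question,
                                                  answer=answer)}],
    )
    return json.loads(response.choices[0].message.content)  # e.g. {"relevance": 5, ...}
```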

To measure the consistency between GPT-4 and human evaluation, two domain experts used the RACAR metric to manually evaluate 10 articles (100 QA pairs in total), as shown in the figure below. Users can choose any open-source or closed-source LLM as the evaluator according to their needs.


Illustration: Spearman and Pearson correlations between GPT-4 assigned scores and expert annotation scores.
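For reference, the agreement between GPT-4 scores and expert scores in such a setup can be computed with SciPy as sketched below; the score lists are placeholders, not the paper's data.

```python
# Sketch: measuring agreement between GPT-4 and expert RACAR scores
# (the score lists here are placeholders, not the paper's data).
from scipy.stats import pearsonr, spearmanr

gpt4_scores   = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]   # GPT-4 ratings for one dimension
expert_scores = [5, 4, 3, 5, 3, 4, 4, 2, 4, 5]   # expert ratings for the same QA pairs

rho, _ = spearmanr(gpt4_scores, expert_scores)   # rank correlation
r, _   = pearsonr(gpt4_scores, expert_scores)    # linear correlation
print(f"Spearman rho = {rho:.2f}, Pearson r = {r:.2f}")
```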

Application of SciQAG framework

This study collected 22,743 highly cited papers across 24 categories from the Web of Science (WoS) Core Collection database, spanning materials science, chemistry, physics, energy and related fields, with the aim of building a reliable, rich, balanced and representative source of scientific knowledge.

To fine-tune an open-source LLM into the QA generator, the researchers randomly selected 426 papers from the collection as input and generated 4,260 seed QA pairs by prompting GPT-4.

Then the researchers fine-tuned the Vicuna-7b model on these seed data in a standard supervised manner: the generation prompts were converted into instructions, the input field was populated with the paper content, and the generated QA pairs served as the example output.
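The supervised fine-tuning examples can be thought of as instruction-input-output triples, roughly as sketched below; the field names follow the common instruction-tuning convention and are an assumption, not necessarily the exact released format.

```python
# Sketch of one supervised fine-tuning example for the QA generator
# (field names follow the common instruction-tuning convention; the exact
# released format may differ).
seed_example = {
    "instruction": "Read the paper below, extract its key scientific concepts, "
                   "and generate 10 self-contained question-answer pairs.",
    "input": "<full text of one scientific paper>",
    "output": "Q1: ...\nA1: ...\nQ2: ...\nA2: ...",  # GPT-4-generated seed QA pairs
}
```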

Using the trained QA generator to run inference on the remaining papers, a total of 227,430 QA pairs (including the seed QA pairs) were generated. Fifty papers were drawn from each category (1,200 papers in total); GPT-4 was used to compute the RACAR score of each of their QA pairs, pairs scoring below 3 on any dimension were discarded, and the remainder formed the test set.

For the remaining papers, a rule-based method was used to filter out all QA pairs containing paper-specific information; the remaining pairs form the training set.
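A minimal sketch of the two filtering passes described above, with an illustrative phrase list and threshold, might look as follows.

```python
# Sketch of the two filtering passes (phrase list and threshold are illustrative).
PAPER_SPECIFIC_PHRASES = ("this paper", "this study", "this work",
                          "the authors", "figure", "table")

def passes_racar(scores, threshold=3):
    """Keep a QA pair only if every RACAR dimension scores at least `threshold`."""
    return min(scores.values()) >= threshold

def passes_rule_filter(question, answer):
    """Drop QA pairs that reference information unique to the source paper."""
    text = (question + " " + answer).lower()
    return not any(phrase in text for phrase in PAPER_SPECIFIC_PHRASES)
```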

SciQAG-24D benchmark data set

Building on the above, the researchers established the open scientific QA benchmark dataset SciQAG-24D. The filtered training set includes 21,529 papers and 179,511 QA pairs, and the filtered test set contains 1,199 papers and 8,531 QA pairs.

Statistics show that 99.15% of the answer content is drawn from the original papers, 87.29% of the questions have a similarity below 0.3, and the answers cover 78.26% of the original content.

The dataset serves multiple uses: the training set can be used to fine-tune LLMs and inject scientific knowledge into them, while the test set can be used to evaluate LLM performance on open QA tasks in a specific scientific field or across fields. Because the test set is sizable and has been quality-filtered, it can also serve as high-quality fine-tuning data.


Illustration: The proportion of articles from each category in the training and test sets of the SciQAG-24D dataset.

Experimental results

The researchers conducted comprehensive experiments to compare the performance of different language models on scientific question answering and to explore the impact of fine-tuning.

Zero-shot setting

The researchers used part of the SciQAG-24D test set to compare the zero-shot performance of five models. Two are open-source LLMs, LLaMA1 (7B) and LLaMA2-chat (7B); the rest are closed-source LLMs called via API: GPT-3.5 (gpt-3.5-turbo), GPT-4 (gpt-4-1106-preview), and Claude 3 (claude-3-opus-20240229). Each model was prompted with 1,000 questions from the test set, and its output was scored with the CAR metric (adapted from RACAR to focus only on answer evaluation) to measure its zero-shot ability to answer scientific research questions.
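Conceptually, the zero-shot comparison amounts to prompting each model with the test questions and grading every answer on the CAR dimensions, roughly as sketched below; `ask_model` and `car_score` are hypothetical stand-ins for the model APIs and the GPT-4 grader.

```python
# Sketch of the zero-shot evaluation loop (helper functions are hypothetical).
def zero_shot_eval(models, questions, ask_model, car_score):
    """models: dict name -> model handle; returns mean CAR scores per model."""
    results = {}
    for name, model in models.items():
        scores = [car_score(q, ask_model(model, q)) for q in questions]
        # Average each CAR dimension (completeness, accuracy, reasonableness).
        results[name] = {
            dim: sum(s[dim] for s in scores) / len(scores)
            for dim in ("completeness", "accuracy", "reasonableness")
        }
    return results
```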

As shown in the figure below, among all models GPT-4 has the highest scores for completeness (4.90) and reasonableness (4.99), while Claude 3 has the highest accuracy score (4.95). GPT-3.5 also performs very well, scoring just behind GPT-4 and Claude 3 on all metrics.

Notably, LLaMA1 has the lowest scores in all three dimensions. In contrast, although LLaMA2-chat does not score as high as the GPT models, it improves significantly over the original LLaMA1 on every metric. The results demonstrate the superior performance of commercial LLMs in answering scientific questions, while open-source models such as LLaMA2-chat have also made significant progress.


Illustration: Zero-shot and fine-tuned (LLaMA1-QA) results on the SciQAG-24D test set.

Fine-tuning setting

The researchers selected LLaMA1, the model with the worst zero-shot performance, and fine-tuned it on the SciQAG-24D training set to obtain LLaMA1-QA. Through three experiments, they demonstrated that SciQAG-24D serves as effective fine-tuning data for improving performance on downstream scientific tasks:

(a) Performance comparison of LLaMA1-QA versus the original LLaMA1 on the unseen SciQAG-24D test set.

As shown in the figure above, LLaMA1-QA improves significantly over the original LLaMA1 (completeness up by 13%, accuracy and reasonableness up by more than 30%). This shows that LLaMA1 has learned the logic of answering scientific questions from the SciQAG-24D training data and internalized some scientific knowledge.

(b) Comparison of fine-tuning performance on SciQ, a scientific multiple-choice QA benchmark.

The first row of the table below shows that LLaMA1-QA is slightly better than LLaMA1 (+1%). The researchers also observed that fine-tuning enhanced the model's instruction-following ability: the proportion of unparsable outputs dropped from 4.1% for LLaMA1 to 1.7% for LLaMA1-QA.

(c) Comparison of fine-tuning performance on various scientific tasks.

For evaluation metrics, F1-score is used for classification tasks, mean absolute error (MAE) for regression tasks, and KL divergence for transformation tasks. As shown in the table below, LLaMA1-QA improves significantly over LLaMA1 on these scientific tasks.
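For reference, these three metrics can be computed with standard libraries, as in the sketch below with placeholder predictions.

```python
# Sketch: the three task metrics, computed on placeholder data.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import f1_score, mean_absolute_error

# Classification: F1-score
y_true, y_pred = [0, 1, 1, 0], [0, 1, 0, 0]
f1 = f1_score(y_true, y_pred)

# Regression: mean absolute error
targets, preds = [1.2, 3.4, 2.2], [1.0, 3.9, 2.5]
mae = mean_absolute_error(targets, preds)

# Transformation: KL divergence between reference and predicted distributions
p = np.array([0.7, 0.2, 0.1])   # reference distribution
q = np.array([0.6, 0.3, 0.1])   # predicted distribution
kl = entropy(p, q)

print(f"F1 = {f1:.2f}, MAE = {mae:.2f}, KL = {kl:.4f}")
```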

The most obvious improvement is reflected in the regression task, where the MAE dropped from 463.96 to 185.32. These findings suggest that incorporating QA pairs during training can enhance the model's ability to learn and apply scientific knowledge, thereby improving its performance in downstream prediction tasks.

Surprisingly, on some tasks the LLM achieves results comparable to, or even better than, purpose-built machine learning models with engineered features. For example, on the band-gap task, although LLaMA1-QA does not perform as well as models such as MODNet (0.3327), it surpasses AMMExpress v2020 (0.4161).

On the diversity task, LLaMA1-QA outperforms the deep-learning baseline (0.3198). These findings indicate that LLMs have great potential for specific scientific tasks.


Illustration: Fine-tuning performance of LLaMA1 and LLaMA1-QA on SciQ and scientific tasks (M represents multiple choice, C represents classification, R represents regression, T represents transformation)

Summary and Outlook

(1) SciQAG is a framework for generating QA pairs from scientific literature. Combined with the RACAR metric for evaluating and screening QA pairs, it can efficiently generate large amounts of knowledge-based QA data for data-scarce scientific fields.

(2) The team generated SciQAG-24D, a comprehensive open-source scientific QA dataset containing 188,042 QA pairs. The training set is used to fine-tune LLMs, and the test set evaluates LLM performance on open-ended, closed-book scientific QA tasks.

The zero-shot performance of several LLMs was compared on the SciQAG-24D test set, and LLaMA1 was fine-tuned on the SciQAG-24D training set to obtain LLaMA1-QA. This fine-tuning significantly improves its performance on multiple scientific tasks.

(3) The research shows that LLMs have potential in scientific tasks, with LLaMA1-QA reaching, and in some cases exceeding, machine-learning baselines. This demonstrates the multifaceted utility of SciQAG-24D and shows that incorporating scientific QA data into training can enhance an LLM's ability to learn and apply scientific knowledge.

