The first multimodal scientific question answering dataset with detailed explanations: giving deep learning model reasoning a chain of thought

When answering complex questions, humans can understand information across different modalities and form a complete chain of thought (CoT). Can deep learning models open the "black box" and provide a chain of thought for their reasoning process? Recently, UCLA and the Allen Institute for Artificial Intelligence (AI2) proposed ScienceQA, the first multimodal scientific question answering dataset with detailed explanations, to test models' multimodal reasoning ability. On the ScienceQA task, the authors propose GPT-3 (CoT), which introduces chain-of-thought prompting into GPT-3 so that the model generates a corresponding reasoning explanation while generating the answer. GPT-3 (CoT) achieves 75.17% accuracy on ScienceQA, and human evaluation shows that it generates higher-quality explanations.

Learning and completing complex tasks as effectively as humans do is a long-term goal of artificial intelligence. When making decisions, humans can follow a complete chain-of-thought (CoT) reasoning process and give reasonable explanations for the answers they give.

However, most existing machine learning models are trained on large numbers of input-output samples to complete a specific task. These black-box models often directly produce the final answer without revealing the underlying reasoning process.

Science question answering is a good diagnostic for whether an artificial intelligence model has multi-step reasoning ability and interpretability. To answer scientific questions, a model must not only understand multimodal content but also draw on external knowledge to arrive at the correct answer. At the same time, a reliable model should provide explanations that reveal its reasoning process. However, most current science question answering datasets lack detailed explanations for the answers or are limited to the text modality.

Therefore, the authors collected a new science question answering dataset, ScienceQA, which contains 21,208 multiple-choice questions from primary and secondary school science curricula. A typical example contains a multimodal context, the correct option, general background knowledge (lecture), and a specific explanation.
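For concreteness, a single example can be pictured as a record like the one below. This is a minimal sketch: the field names are illustrative, based on the components just described, not necessarily the dataset's exact schema.

```python
# A hypothetical ScienceQA-style record; field names are illustrative
# (see the code repository linked below for the actual schema).
example = {
    "question": "Which type of force does the baby apply to the cabinet door?",
    "choices": ["push", "pull"],      # multiple-choice options
    "answer": 1,                      # index of the correct option ("pull")
    "hint": "",                       # textual context; may be empty
    "image": "image.png",             # visual context; may be absent
    "lecture": "A force is a push or a pull that acts on an object...",
    "solution": "The baby's hand applies a force to the cabinet door...",
}
```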


An example of the ScienceQA dataset.

To answer the example shown above, one must first recall the definition of a force: "A force is a push or a pull that ... The direction of a push is ... The direction of a pull is ...", then form a multi-step reasoning chain: "The baby's hand applies a force to the cabinet door. → This force causes the door to open. → The direction of this force is toward the baby's hand.", and finally arrive at the correct answer: "This force is a pull."

On the ScienceQA task, a model must predict the answer while outputting a detailed explanation. In this work, the authors use a large-scale language model to generate background knowledge and explanations as a chain of thought (CoT), imitating the multi-step reasoning ability of humans.

Experiments show that current multimodal question answering methods do not perform well on the ScienceQA task. In contrast, with chain-of-thought prompting, GPT-3 achieves 75.17% accuracy on ScienceQA and generates higher-quality explanations: in human evaluation, 65.2% of its explanations were relevant, correct, and complete. Chain of thought also helps the UnifiedQA model achieve a 3.99% improvement on the ScienceQA dataset.


  • Paper link: https://arxiv.org/abs/2209.09513
  • Code link: https://github.com/lupantech/ScienceQA
  • Project homepage: https://scienceqa.github.io/
  • Data visualization: https://scienceqa.github.io/explore.html
  • Leaderboard: https://scienceqa.github.io/leaderboard.html

1. The ScienceQA dataset

Dataset statistics

ScienceQA’s main statistics are shown below.


Main statistics of the ScienceQA dataset.

ScienceQA contains 21,208 examples covering 9,122 distinct questions. 10,332 questions (48.7%) have a visual context, 10,220 (48.2%) have a textual context, and 6,532 (30.8%) have both. The vast majority of questions are annotated with detailed explanations: 83.9% of questions have background knowledge annotations (lecture), and 90.5% have detailed explanations (explanation).
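As a minimal sketch (assuming the record layout above and a single JSON annotation file, whose name here is hypothetical), these splits can be recomputed as follows:

```python
import json

# Assumed: problems.json maps example IDs to records shaped like the sketch above.
with open("problems.json") as f:
    problems = json.load(f)

total = len(problems)
counts = {
    "visual context":  sum(1 for p in problems.values() if p.get("image")),
    "textual context": sum(1 for p in problems.values() if p.get("hint")),
    "both contexts":   sum(1 for p in problems.values()
                           if p.get("image") and p.get("hint")),
    "lecture":         sum(1 for p in problems.values() if p.get("lecture")),
    "explanation":     sum(1 for p in problems.values() if p.get("solution")),
}
for name, k in counts.items():
    print(f"{name}: {k} ({100 * k / total:.1f}%)")
```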


Question and context distribution in the ScienceQA dataset.

Dataset topic distribution

Unlike existing datasets, ScienceQA covers three major branches: natural science, social science, and language science, comprising 26 topics, 127 categories, and 379 skills.


Topic distribution of ScienceQA.

Dataset word cloud distribution

As the word cloud below shows, the questions in ScienceQA are semantically diverse. Models need to understand different question formulations, scenarios, and background knowledge.

Word cloud of ScienceQA questions.

Dataset comparison

ScienceQA is the first multimodal scientific question answering dataset with detailed explanations. Compared with existing datasets, ScienceQA shows advantages in data size, diversity of question types, diversity of topics, and other dimensions.


Comparison of the ScienceQA dataset with other scientific question answering datasets.

2. Models and methods

Baselines

The authors evaluate different baseline methods on the ScienceQA dataset, including VQA models such as Top-Down Attention, MCAN, BAN, DFAF, ViLT, Patch-TRM, and VisualBERT; large-scale language models such as UnifiedQA and GPT-3; as well as random chance and human performance. For the language models UnifiedQA and GPT-3, images in the context are converted into textual captions.
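A sketch of this caption-based conversion, using an off-the-shelf captioner from Hugging Face as a stand-in (the authors use their own captioning model, so the model name here is an assumption):

```python
from transformers import pipeline

# Stand-in captioner; the paper uses its own captioning model.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def textual_context(example):
    """Render the (possibly visual) context of an example as plain text."""
    parts = []
    if example.get("hint"):       # textual context, if any
        parts.append(example["hint"])
    if example.get("image"):      # replace the visual context with a caption
        parts.append(captioner(example["image"])[0]["generated_text"])
    return " ".join(parts)
```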

GPT-3 (CoT)

Recent work has shown that, given appropriate prompts, GPT-3 can perform very well on a range of downstream tasks. To this end, the authors propose GPT-3 (CoT), which adds a chain of thought (CoT) to the prompt so that the model generates the corresponding background knowledge and explanation while generating the answer.

The specific prompt template is shown in the figure below, where I_i denotes a training example and I_t denotes the test example. Each training example contains the question, options, context, and answer, where the answer consists of the correct answer, the background knowledge (lecture), and the explanation. Given this prompt, GPT-3 (CoT) completes the predicted answer, background knowledge, and explanation for the test example.
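A minimal sketch of assembling such a prompt, reusing the hypothetical record fields and textual_context helper from earlier; the answer wording ("The answer is ... BECAUSE: ...") mirrors the template described above but is an approximation, not the paper's verbatim strings:

```python
LETTERS = "ABCDE"

def format_example(ex, with_answer=True):
    """Render one example as Question/Options/Context, plus the ALE
    answer block for training examples."""
    options = " ".join(f"({LETTERS[i]}) {c}" for i, c in enumerate(ex["choices"]))
    block = (f"Question: {ex['question']}\n"
             f"Options: {options}\n"
             f"Context: {textual_context(ex)}\n")
    if with_answer:   # training example: answer + lecture + explanation
        block += (f"Answer: The answer is {LETTERS[ex['answer']]}. "
                  f"BECAUSE: {ex['lecture']} {ex['solution']}\n")
    else:             # test example: stop where GPT-3 should continue
        block += "Answer:"
    return block

def build_prompt(train_examples, test_example):
    shots = "\n".join(format_example(ex) for ex in train_examples)
    return shots + "\n" + format_example(test_example, with_answer=False)

# The completion would then come from GPT-3, e.g. via the legacy API:
# openai.Completion.create(engine="text-davinci-002",
#                          prompt=build_prompt(shots, test), max_tokens=512)
```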

Prompt template adopted by GPT-3 (CoT).

3. Experiments and analysis

Experimental results

The accuracy of the different baselines and methods on the ScienceQA test set is shown in the table below. VisualBERT, one of the current best VQA models, achieves only 61.87% accuracy.

When CoT data is introduced during training, the UnifiedQA_BASE model reaches 74.11% accuracy, and GPT-3 (CoT) prompted with 2 training examples reaches 75.17%, higher than the other baseline models. Humans perform well on the ScienceQA dataset, achieving an overall accuracy of 88.40% and performing stably across different categories of questions.


Results of different methods on the ScienceQA test set.

Evaluation of generated explanations

The authors use automatic metrics such as BLEU-1, BLEU-2, ROUGE-L, and sentence similarity to evaluate the explanations generated by different methods. Since automatic metrics can only measure the similarity between predictions and annotated content, the authors further conducted a human evaluation of the relevance, correctness, and completeness of the generated explanations. As can be seen, 65.2% of the explanations generated by GPT-3 (CoT) meet the gold standard.

Evaluation results for the explanations generated by different methods.
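As a rough sketch, the automatic metrics above can be computed with common open-source libraries; the library and embedding-model choices here are assumptions rather than the paper's exact setup:

```python
from nltk.translate.bleu_score import sentence_bleu
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

rouge = rouge_scorer.RougeScorer(["rougeL"])
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # embedding model is a stand-in

def evaluate_explanation(prediction, reference):
    ref, pred = reference.split(), prediction.split()
    return {
        "BLEU-1": sentence_bleu([ref], pred, weights=(1, 0, 0, 0)),
        "BLEU-2": sentence_bleu([ref], pred, weights=(0.5, 0.5, 0, 0)),
        "ROUGE-L": rouge.score(reference, prediction)["rougeL"].fmeasure,
        # Sentence similarity as cosine similarity of sentence embeddings.
        "similarity": util.cos_sim(
            encoder.encode(prediction, convert_to_tensor=True),
            encoder.encode(reference, convert_to_tensor=True)).item(),
    }
```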

Different prompt templates

The authors compared the impact of different prompt templates on GPT-3 (CoT) accuracy. Under the QAM-ALE template, GPT-3 (CoT) obtains the highest average accuracy and the smallest variance. In addition, GPT-3 (CoT) performs best when prompted with 2 training examples.

Comparison of results across different prompt templates.

Model upper bound

To explore the performance upper bound of GPT-3 (CoT), the authors added the annotated background knowledge and explanation to the model's input (QCMLE*-A). In this setting, GPT-3 (CoT) achieves up to 94.13% accuracy. This suggests a promising direction for improvement: a model could reason step by step, first retrieving accurate background knowledge and generating an accurate explanation, and then using these results as input to predict the answer. This process is very similar to how humans solve complex problems.


Performance upper bound of the GPT-3 (CoT) model.
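A sketch of the step-by-step direction suggested above, reusing the hypothetical format_example helper from earlier; complete stands for any function that sends a prompt to GPT-3 and returns its completion:

```python
def two_stage_answer(test_example, complete):
    """Hypothetical QCMLE*-A-style pipeline: first obtain background knowledge
    and an explanation, then condition the answer prediction on them."""
    qcm = format_example(test_example, with_answer=False)  # ends with "Answer:"

    # Stage 1: generate (or retrieve) the lecture and explanation.
    lecture_explanation = complete(
        qcm.replace("Answer:", "Lecture and explanation:"))

    # Stage 2: feed them back as additional input and predict only the answer.
    return complete(
        qcm.replace("Answer:", f"Explanation: {lecture_explanation}\nAnswer:"))
```

With the gold lecture and explanation substituted for stage 1's output, this setup corresponds to the 94.13% upper bound reported above.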

Different ALE positions

The authors further investigate how the positions of the answer (A), lecture (L), and explanation (E) in the generated output affect GPT-3 (CoT)'s results. Experiments on ScienceQA show that if GPT-3 (CoT) first generates the background knowledge L or the explanation E and then the answer A, its prediction accuracy drops significantly. The main reason is that L and E contain many tokens: if they are generated first, GPT-3 may exhaust its maximum token budget or stop generating early, so the final answer A is never produced.

Results for different ALE positions.
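This failure mode is visible at decoding time: in the ALE order the answer appears first, so hitting the token limit only truncates the explanation, whereas in the LEA order a truncated completion may contain no answer at all. A sketch of answer extraction (the pattern is an assumption about the output format):

```python
import re

def parse_answer(completion):
    """Extract the predicted option letter, e.g. from
    'The answer is (B). BECAUSE: ...'; None if generation was cut off first."""
    match = re.search(r"answer is \(?([A-E])\)?", completion)
    return match.group(1) if match else None

parse_answer("The answer is (B). BECAUSE: A force is ...")    # -> "B"
parse_answer("BECAUSE: A force is a push or a pull that ...")  # -> None (LEA, truncated)
```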

Successful Cases

In the following four examples, GPT-3 (CoT) not only generates the correct answer but also gives a relevant, correct, and complete explanation. This shows that GPT-3 (CoT) exhibits strong multi-step reasoning and explanation ability on the ScienceQA dataset.


Examples where GPT-3 (CoT) generates correct answers and explanations.

Failure Case I

In the three examples below, GPT-3 (CoT) generates the correct answer, but the generated explanation is irrelevant, incorrect, or incomplete. This shows that GPT-3 (CoT) still has considerable difficulty generating logically consistent long sequences.


Examples where GPT-3 (CoT) generates the correct answer but an incorrect explanation.

Failure Case II

In the following four examples, GPT-3 (CoT) generates neither the correct answer nor a correct explanation. The reasons are: (1) current image captioning models cannot accurately describe the semantics of schematic diagrams, tables, and similar images, so when an image is represented only by its caption text, GPT-3 (CoT) cannot yet answer questions whose context contains such charts; (2) when generating long sequences, GPT-3 (CoT) is prone to inconsistency and incoherence; (3) GPT-3 (CoT) is not yet able to answer questions that require specific domain knowledge.


Examples where GPT-3 (CoT) generates incorrect answers and explanations.

4. Conclusion and outlook

The authors proposed ScienceQA, the first multimodal scientific question answering dataset with detailed explanations. ScienceQA contains 21,208 multiple-choice questions from primary and secondary school science curricula, covering three major science branches and a wide variety of topics; most questions are annotated with detailed background knowledge and explanations. ScienceQA evaluates a model's ability in multimodal understanding, multi-step reasoning, and interpretability. The authors evaluate different baseline models on ScienceQA and propose GPT-3 (CoT), which generates the corresponding background knowledge and explanation while generating the answer. Extensive experimental analysis and case studies provide useful insights for improving models.
