


Failing a high school math test is a nightmare for many people.
And if you were told your math scores are worse than an AI's, would that be even harder to accept?
Yes, OpenAI's Codex has reached 81.1% accuracy on problems from seven advanced mathematics courses at MIT, a respectable level even for MIT undergraduates.
The courses range from elementary calculus to differential equations, probability theory, and linear algebra, and the questions include plotting as well as calculation.
The news recently trended on Weibo.
△ It "only" scored 81 points, and that is how high expectations for AI have become
Now the latest big news comes from Google:
Not only in mathematics, our AI has achieved the highest scores across science and engineering subjects!
It seems the tech giants have taken the craft of raising "AI problem solvers" to a new level.
Google's latest AI problem solver took four exams.
On the competition-level MATH benchmark, only a three-time IMO gold medalist has scored 90 points before, and a typical computer science PhD student only manages about 40.
As for earlier AI systems, the best score had been a mere 6.9 points...
This time, Google's new AI scored 50 points, higher than the computer science PhD level.
The comprehensive MMLU-STEM exam covers mathematics, physics, chemistry, biology, electrical engineering, and computer science, with questions at high school and even college difficulty.
Here the full-strength version of Google's AI also took the top score, lifting the previous best by about 20 points.
On the grade-school math benchmark GSM8k, it pushed the score up to 78 points; by comparison, GPT-3 does not even pass (only 55 points).
Even on MIT undergraduate and graduate courses such as solid-state chemistry, astronomy, differential equations, and special relativity, Google's new AI answered nearly a third of the more than 200 questions correctly.
Most importantly, unlike OpenAI's route of scoring well in math by leaning on "programming skill", Google's AI this time takes the path of "thinking like a human":
It is like a humanities student who only memorizes texts instead of drilling exercises, yet ends up with better problem-solving skills in science and engineering.
It is worth mentioning that Lewkowycz, the paper's first author, also shared a highlight that did not make it into the paper:
Our model took this year's Polish national mathematics matriculation exam and scored above the national average.
Seeing this, some parents could no longer sit still.
If I tell my daughter about this, I'm afraid she will use AI to do her homework. But if I don't tell her, I'm not preparing her for the future!
In the eyes of industry insiders, reaching this level with a language model alone, without hard-coding arithmetic, logic, or algebra, is the most impressive part of this research.
So, how is this done?
AI reads 2 million papers on arXiv
The new model, Minerva, is based on PaLM, Google's general language model built on the Pathways architecture.
Further training was performed on top of the 8-billion, 62-billion, and 540-billion-parameter PaLM models respectively.
Minerva’s approach to answering questions is completely different from Codex’s.
Codex's method is to rewrite each math problem as a programming problem and then solve it by writing code.
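As a rough illustration of that route (a hypothetical sketch, not Codex's actual output), a word problem is turned into a short program whose execution yields the answer:

```python
# Hypothetical example of the "rewrite the problem as a program" route; not real Codex output.
# Problem: "A rectangle is 12 cm long and 7 cm wide. What is its area?"
def solve() -> int:
    length_cm = 12
    width_cm = 7
    return length_cm * width_cm  # area of a rectangle = length * width

print(solve())  # 84
```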
Minerva, on the other hand, reads papers voraciously and learns to understand mathematical notation the same way it understands natural language.
Training continues on top of PaLM with a new dataset made up of three parts:
mainly 2 million academic papers collected from arXiv, 60 GB of web pages containing LaTeX formulas, and a small amount of the text used in the original PaLM training phase.
The usual NLP data-cleaning pipeline deletes all symbols and keeps only plain text, leaving formulas mutilated; Einstein's famous mass-energy equation, for example, would be reduced to "Emc2".
This time, however, Google kept all the formulas and ran them through the Transformer training pipeline just like plain text, letting the AI understand symbols the way it understands language.
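A minimal sketch of the idea (my own hypothetical illustration, not Google's actual pipeline): strip the page markup, but leave LaTeX math spans untouched instead of deleting the symbols.

```python
import re

def clean_keep_math(html: str) -> str:
    """Hypothetical cleaning step: drop HTML tags but keep LaTeX math intact.

    A conventional cleaner would also strip '=', '^', etc., turning
    'E = mc^2' into 'Emc2'; here math spans are preserved verbatim so the
    model sees the full formula during training.
    """
    math_spans = []

    def stash(m):
        # Replace each math span with a placeholder so tag removal cannot touch it.
        math_spans.append(m.group(0))
        return f"\x00{len(math_spans) - 1}\x00"

    # Protect $...$, \( ... \) and \[ ... \] spans.
    protected = re.sub(r"\$[^$]+\$|\\\([\s\S]+?\\\)|\\\[[\s\S]+?\\\]", stash, html)

    # Remove HTML tags from the remaining text only.
    text = re.sub(r"<[^>]+>", " ", protected)

    # Restore the untouched math spans.
    return re.sub(r"\x00(\d+)\x00", lambda m: math_spans[int(m.group(1))], text)

print(clean_keep_math("<p>Einstein's relation $E = mc^2$ still reads correctly.</p>"))
```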
Compared with previous language models, this is one of the reasons why Minerva performs better on mathematical problems.
But compared with AI systems that specialize in math, Minerva's training involves no explicit underlying mathematical structure, which brings one disadvantage and one advantage.
The disadvantage is that the AI may reach the correct answer through incorrect steps.
The advantage is that it adapts to different disciplines: even problems that cannot be expressed in formal mathematical language can be solved by drawing on its natural language understanding.
At inference time, Minerva also combines several techniques Google has developed recently.
The first is Chain of Thought prompting, proposed by the Google Brain team in January this year.
Specifically, the prompt includes worked examples that answer step by step; the AI then follows a similar reasoning process and can correctly answer questions it would otherwise get wrong.
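A minimal sketch of such a few-shot prompt (a hypothetical template, not Minerva's actual prompt): each exemplar spells out its reasoning before the final answer, so the model imitates that format for the new question.

```python
# Hypothetical few-shot Chain of Thought template; not Minerva's actual prompt.
COT_PROMPT = """\
Q: Natalia sold clips to 48 friends in April, then half as many in May. How many clips did she sell in total?
A: In April she sold 48 clips. In May she sold 48 / 2 = 24 clips. In total she sold 48 + 24 = 72 clips. The answer is 72.

Q: A train covers 60 km in the first hour and 80 km in the second hour. What is its average speed?
A: Total distance is 60 + 80 = 140 km over 2 hours. Average speed is 140 / 2 = 70 km/h. The answer is 70.

Q: {question}
A:"""

def build_cot_prompt(question: str) -> str:
    """Fill a new question into the few-shot template; the exemplars guide the model to show its steps."""
    return COT_PROMPT.format(question=question)
```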
Then there is the Scratchpad method, developed jointly by Google and MIT, which lets the AI write down the intermediate results of step-by-step calculations.
Finally, there is the Majority Voting method, which was only released in March this year.
The AI answers the same question many times, and the answer that appears most often is chosen.
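A minimal sketch of majority voting (here `sample_fn` is a placeholder for any stochastic model call, not a real API):

```python
from collections import Counter

def majority_vote(sample_fn, question: str, n_samples: int = 16) -> str:
    """Sample the model n_samples times and return the most frequent final answer.

    sample_fn is a stand-in for whatever call draws one answer from the model
    with non-zero temperature; it is a hypothetical hook, not a real API.
    """
    answers = [sample_fn(question) for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer
```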
With all these techniques combined, the 540-billion-parameter Minerva reaches SOTA on the various test sets.
Even the 8-billion-parameter version of Minerva matches the latest davinci-002 version of GPT-3 on competition-level math problems and MIT open-course problems.
Having said so much, what specific questions can Minerva solve?
Google has also opened up a sample set, let’s take a look.
Math, physics, chemistry, even machine learning: it handles them all
In mathematics, Minerva can work out values step by step like a human, rather than brute-forcing a direct solution.
For word problems, it can set up the equations itself and simplify them.
It can even write out derivations and proofs.
In physics, Minerva can solve university-level questions such as finding the total spin quantum number of electrons in the neutral nitrogen ground state (Z = 7).
In biology and chemistry, Minerva can also answer various multiple-choice questions with its language understanding ability.
Which of the following point mutation forms does not have a negative impact on the protein formed from the DNA sequence?
Which of the following is a radioactive element?
And astronomy: Why does the Earth have a strong magnetic field?
In machine learning, it correctly explains what "out-of-distribution detection" means and gives an equivalent name for the term.
......
However, Minerva sometimes makes silly mistakes, such as cancelling a √ from both sides of an equation.
In addition, about 8% of the time Minerva produces "false positives": the reasoning is wrong but the final answer happens to be correct.
Analyzing the failures, the team found that most errors are calculation errors and reasoning errors; only a small fraction come from misunderstanding the question, using incorrect facts in a step, or other causes.
The calculation errors could easily be fixed by calling out to an external calculator or a Python interpreter, but the other kinds of error are hard to correct because the neural network is so large.
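For instance, a post-processing pass along the following lines (a hypothetical sketch, not part of Minerva) could recompute the simple arithmetic claims in a generated solution:

```python
import re

def fix_arithmetic(solution_text: str) -> str:
    """Recompute simple 'a op b = c' claims in a model-generated solution.

    Hypothetical sketch of hooking a calculator onto the output: the left-hand
    side of each arithmetic equality is re-evaluated and the stated result is
    replaced, so pure calculation slips no longer corrupt the final answer.
    """
    pattern = re.compile(r"(\d+(?:\.\d+)?)\s*([+\-*/])\s*(\d+(?:\.\d+)?)\s*=\s*\d+(?:\.\d+)?")

    def recompute(match: re.Match) -> str:
        a, op, b = float(match.group(1)), match.group(2), float(match.group(3))
        value = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
        if value == int(value):
            value = int(value)
        return f"{match.group(1)} {op} {match.group(3)} = {value}"

    return pattern.sub(recompute, solution_text)

print(fix_arithmetic("In May she sold 48 / 2 = 25 clips."))
# -> In May she sold 48 / 2 = 24 clips.
```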
Overall, Minerva's performance has surprised many people, and commenters are already asking for an API (unfortunately, Google has no plans to release one yet).
Some netizens wondered whether accuracy could be pushed even higher by adding the "coaxing" prompt trick that boosted GPT-3's problem-solving accuracy by 61% just a few days ago.
The author's response, however, is that the coaxing trick is zero-shot prompting, and however strong it is, it may not beat few-shot prompting with 4 worked examples.
Some netizens also asked: since it can answer exam questions, can it be run in reverse to write them?
In fact, MIT has already teamed up with OpenAI to use AI to write questions for college students.
They mixed human-written and AI-written questions and surveyed students, who found it hard to tell which questions had been written by the AI.
In short, the current situation is this: AI researchers are busy reading the paper,
students are looking forward to the day they can use AI to do their homework,
and teachers are looking forward to the day they can use AI to write the exams.
Paper address: https://storage.googleapis.com/minerva-paper/minerva_paper.pdf
Demo address: https://minerva-demo.github.io/
Related papers:
Chain of Thought: https://arxiv.org/abs/2201.11903
Scratchpads: https://arxiv.org/abs/2112.00114
Majority Voting: https://arxiv.org/abs/2203.11171
Reference links:
https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html
https://twitter.com/bneyshabur/status/1542563148334596098
https://twitter.com/alewkowycz/status/1542559176483823622

