Which is bigger, 9.11 or 9.9? We tested 15 large models, and more than half of them failed.
Machine Power Report
Editor: Yang Wen
The big models still can’t solve simple math problems.
In the past few days, a prompt designed to test whether a large model's "brain" is working has gone viral:
Which one is bigger, 9.11 or 9.9?
This is a math question even a primary school student can answer correctly, yet it stumped a group of leading players in the large model industry.
Here’s the thing.
Riley Goodside, senior prompt engineer at Scale AI, asked GPT-4o "9.11 and 9.9 - which is bigger?" and got the answer that the former is bigger. Other large models failed as well.
On July 17, we ran a head-to-head evaluation of 12 domestic large models, plus GPT-4o, Claude 3.5 Sonnet, and Google's Gemini from abroad. The results are summarized below:
Next, let’s take a look at the detailed evaluation process.
-1-
GPT-4o
GPT-4o failed quite thoroughly.
We first asked GPT-4o with an English prompt, and it still insisted that 9.11 is greater than 9.9. We then asked, in both Chinese and English, what the difference between the two was; every answer was wrong.
-2-
Claude-3.5-Sonnet
We asked Claude-3.5-Sonnet the same questions, but no matter how we phrased them, it went down the wrong path. Notably, when comparing the decimal parts it clearly knew that 0.9 is larger than 0.11, yet it still reached the wrong conclusion in the end.
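The procedure Claude described but failed to finish can be written out directly. Below is a minimal sketch (our own illustration, not any model's internals) of the correct comparison: compare the integer parts first, then compare the fractional parts padded to equal length.

```python
def compare_decimals(a: str, b: str) -> str:
    """Return the larger of two decimal number strings, or 'equal'.
    Integer parts are compared first; fractional parts are right-padded
    with zeros to the same length before comparing."""
    def parts(s: str):
        whole, _, frac = s.partition(".")
        return int(whole), frac

    wa, fa = parts(a)
    wb, fb = parts(b)
    if wa != wb:
        return a if wa > wb else b
    # Pad fractional parts so "9" becomes "90" next to "11": 90 > 11.
    width = max(len(fa), len(fb))
    fa, fb = fa.ljust(width, "0"), fb.ljust(width, "0")
    if fa == fb:
        return "equal"
    return a if fa > fb else b

print(compare_decimals("9.11", "9.9"))  # -> 9.9
```

With equal-length digit strings, lexicographic comparison coincides with numeric comparison, which is why the padding step is the part the models keep skipping.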
-3-
Gemini
Google Gemini did not fare much better. We asked twice in English which of the two is bigger. The first time it gave the correct answer; the second time it claimed that when the integer parts are the same, the number with more decimal places is larger.
We asked again in Chinese. Gemini compared the two based on real-life scenarios: from a time perspective, it said, 9.11 usually refers to the September 11 attacks, while 9.9 usually refers to the time 9:09, so 9.11 carries more meaning than 9.9.
When asked about the difference between the two, Gemini came up with a negative number.
-4-
Baidu Wenxin Yiyan
Facing "which is bigger, 9.11 or 9.9", Wenxin 3.5 answered correctly. When we asked it what the difference between the two was, it went around in a big circle but finally reached the correct conclusion.
-5-
Alibaba Tongyi Qianwen
Tongyi Qianwen answered both questions correctly.
-6-
ByteDance Doubao
When we asked which is bigger, 9.11 or 9.9, Doubao's analysis was clear and logical, and it even grounded the comparison in daily-life scenarios: if a runner's times are 9.11 seconds and 9.9 seconds, then 9.11 seconds is faster; from a price point of view, the 9.9-yuan product is more expensive. However, when it came to the final conclusion, it answered wrongly.
As for the difference between the two, Doubao’s answer is correct.
-7-
Tencent Yuanbao
Tencent Yuanbao triggered its search function when faced with this question, cited 7 sources as references, and answered correctly.
However, when asked for the difference between 9.11 and 9.9, Yuanbao set up the equation correctly, but its arithmetic produced a result with 16 decimal places.
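Yuanbao's 16-decimal-place tail looks a lot like ordinary binary floating-point error; whether that is what actually happened inside the model is our assumption, but the effect is easy to reproduce. Neither 9.11 nor 9.9 is exactly representable as a binary float, so naive subtraction picks up a long tail that exact decimal arithmetic avoids:

```python
from decimal import Decimal

# Binary floating point cannot represent 9.11 or 9.9 exactly, so the
# rounded difference carries a long tail of digits.
print(9.11 - 9.9)  # -> -0.7900000000000009

# Exact decimal arithmetic gives the clean answer.
print(Decimal("9.11") - Decimal("9.9"))  # -> -0.79
```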
-8-
Zhipu Qingyan
Zhipu Qingyan mistakenly believed that a number with two decimal places must be greater than one with a single decimal place, and so answered wrongly. When asked for the difference between the two, it calculated a negative number.
It also did not forget to say "Many AI model errors may be due to algorithmic flaws in processing numbers and decimal points."
-9-
Moonshot AI (Dark Side of the Moon) - Kimi
Kimi was also at a loss this time: not only could it not tell which is bigger, it also calculated 9.11 - 9.9 as 0.21.
-10-
iFlytek Spark
iFlytek Spark answered correctly.
-11-
Baichuan Intelligence - Baixiaoying
Baixiaoying mistakenly believed that 9.11 was bigger, but it calculated the difference between the two correctly.
-12-
Step Stars - Yue Wen
Yue Wen's initial analysis was fine, but it then got confused and made a "reversed conclusion", leading to a wrong final answer.
When we asked it why, it suddenly understood, corrected its mistake, and correctly calculated the difference between the two.
-13-
SenseTime - SenseChat
It answered both questions incorrectly.
-14-
Kunlun Wanwei - Tiangong
The answer is correct.
-15-
Zero One Everything - Wanzhi
It answered both questions incorrectly.
Why can't large models solve even simple, common-sense math questions? We interviewed Wang Xiaoming, a product manager at Alibaba's Tongyi Lab.
According to Wang Xiaoming, large models are built on the Transformer architecture. Their essence is next-token prediction rather than direct arithmetic calculation, so when handling simple math problems such as comparing magnitudes, the result depends on how well the prediction happens to succeed.
In addition, when handling expressions like "9.11 vs 9.9", large models first pass the input through a tokenizer. When parsing such expressions, the tokenizer may treat the number as a date or a version number and compare it accordingly, ultimately producing a wrong answer. This behavior is determined by the tokenizer's specific algorithm and mechanics.
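The version-number hypothesis is easy to illustrate. In the toy sketch below (our own code, not any tokenizer's internals), the same pair of strings compares one way when read as decimal numbers and the opposite way when read as dotted version identifiers:

```python
def bigger_as_number(a: str, b: str) -> str:
    """Read the strings as decimal numbers: 9.11 < 9.9."""
    return a if float(a) > float(b) else b

def bigger_as_version(a: str, b: str) -> str:
    """Read the strings as dotted versions: component 11 > component 9,
    so version 9.11 comes after version 9.9."""
    ka = tuple(int(p) for p in a.split("."))
    kb = tuple(int(p) for p in b.split("."))
    return a if ka > kb else b

print(bigger_as_number("9.11", "9.9"))   # -> 9.9
print(bigger_as_version("9.11", "9.9"))  # -> 9.11
```

If a model's internal representation leans toward the second reading, answering "9.11 is bigger" is exactly the mistake this sketch predicts.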
During testing we also found that many large models gave a wrong answer on the first attempt, but when questioned a second time, they were often able to give the correct answer.
In response to this problem, Wang Xiaoming believes that it is mainly caused by three reasons.
First, there is a certain randomness in the prediction process, so a second attempt can happen to land on the correct answer where the first did not.
Second, large models have strong context understanding capabilities. They can regenerate more accurate answers based on previous answers and correction information.
Third, the questioner’s guidance method will also affect the answer results of the large model. For example, using qualifiers, providing clear context, and guiding the model to follow specific instructions can all help to increase the probability of getting the correct answer.
He also said that the key to improving large models' mathematical ability lies in high-quality data, especially for mathematical calculation and logical reasoning. For example, Tongyi Qianwen adds high-quality training data specifically for such scenarios, which lets it maintain a high accuracy rate on problems like this.
In the future, we will bring more first-hand reviews of large AI models and AI applications, and everyone is welcome to join the group for communication.