The models behind the winning AI Mathematical Olympiad (AIMO) entries are out!
A total of five teams won this competition: the Numina team took first place, CMU_MATH ranked second, "after exams" placed third, and codeinter and Conor #2 took fourth and fifth place respectively. Terence Tao expressed surprise at the results.
At the time, the organizers announced only the list of winners and did not reveal more about the models behind them. Everyone was curious: which models did the winning teams use? Now, the models behind the top four AIMO Progress Prize entries have been revealed.
The championship team's model is NuminaMath 7B TIR, a fine-tuned version of deepseek-math-7b-base.
The second-place team fine-tuned two DeepSeek-Math-7B-RL models: one as a policy model (for generating solutions) and one as a reward model (for scoring solutions in weighted majority voting).

The third-place team used the DeepSeek-Math-7B-RL model without any fine-tuning, applying a majority-voting strategy with fixed scoring rules to select the correct answer.

The fourth-place team also used deepseek-math-7b-rl, with a temperature of 0.9, top_p of 1.0, and a maximum of 2048 tokens. Paired with coding tools, this model achieves 58.8% on the MATH benchmark.

It is not hard to see that all four top teams chose DeepSeekMath-7B as their base model and achieved strong results. This model's mathematical reasoning ability is close to that of GPT-4, surpassing a number of 30B–70B open-source models on the MATH benchmark leaderboard.

Champion: the NuminaMath 7B TIR model
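The two voting strategies mentioned above can be sketched in a few lines of Python. This is an illustrative sketch, not code from any team's published solution: plain majority voting picks the most frequent final answer across sampled solutions, while weighted majority voting sums a reward-model score per candidate answer instead of a raw count. The function names and score format are assumptions for illustration.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer among sampled solutions."""
    return Counter(answers).most_common(1)[0][0]

def weighted_majority_vote(answers, scores):
    """Sum a reward-model score for each candidate answer, then pick
    the answer with the highest total (weighted majority voting)."""
    totals = {}
    for answer, score in zip(answers, scores):
        totals[answer] = totals.get(answer, 0.0) + score
    return max(totals, key=totals.get)
```

With a reward model, three mediocre copies of a wrong answer can be outvoted by one highly scored correct solution, which is the point of weighting.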
Next, let's take a detailed look at the championship team's approach.
NuminaMath is a family of language models trained to solve mathematical problems using Tool-Integrated Reasoning (TIR). NuminaMath 7B TIR is a fine-tuned version of deepseek-math-7b-base, trained with two stages of supervised fine-tuning:

Stage 1: The base model is fine-tuned on large, diverse datasets of natural-language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.

Stage 2: The model from Stage 1 is fine-tuned on a synthetic Tool-Integrated Reasoning dataset, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here GPT-4 was prompted to generate solutions in Microsoft's ToRA format with code-execution feedback. Fine-tuning on this data yields a reasoning agent that can solve math problems by combining natural-language reasoning with intermediate computation in a Python REPL.
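The TIR inference loop described above alternates model generation with code execution: whenever the model emits a Python block, the block is run and its output is appended to the context before generation continues. The following is a minimal sketch under assumptions, not the team's actual harness; `generate` stands in for a call to the language model, and real systems sandbox the execution rather than calling `exec` directly.

```python
import contextlib
import io

def run_python(code):
    """Execute a generated Python snippet and capture its stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # NOTE: a real harness would sandbox this
    return buf.getvalue().strip()

def tir_solve(problem, generate, max_rounds=4):
    """Alternate model generation with code execution, feeding each
    program's output back into the context (illustrative sketch)."""
    context = problem
    for _ in range(max_rounds):
        step = generate(context)
        if "```python" in step:
            code = step.split("```python")[1].split("```")[0]
            output = run_python(code)
            # Append the program and its output, ToRA-style, and continue.
            context += step + "\n```output\n" + output + "\n```\n"
        else:
            return step  # final natural-language answer
    return context
```

The key design point is that intermediate results are computed exactly by the interpreter instead of being hallucinated by the model, which is where TIR models gain accuracy over pure CoT on arithmetic-heavy problems.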
It is worth noting that NuminaMath 7B TIR was created specifically to solve competition-level math problems, so the model should not be used in general chat applications. Using greedy decoding, the winning team found that the model could solve AMC 12 level problems, but it generally struggled to generate valid solutions to harder AIME and Math Olympiad level problems. The model also has difficulty with geometry problems, likely due to its limited capacity and its lack of modalities such as vision.