
Models evolve after merging and directly achieve SOTA! Transformer author's new startup's work goes viral

王林 · 2024-03-26

Take the ready-made models on Hugging Face and "piece them together":

can you directly combine them into powerful new models?!

The Japanese large-model company Sakana.ai (founded by one of the "Transformer Eight") came up with exactly such a creative trick: evolving merged models.


This method can not only automatically generate new foundation models, but the resulting performance is anything but bad:

They used it to produce a Japanese math large model with 7 billion parameters, which achieved state-of-the-art results on the relevant benchmarks, surpassing previous models such as the 70-billion-parameter Llama-2.

Most importantly, arriving at such a model requires no gradient-based training, so the computing resources needed are greatly reduced.

NVIDIA scientist Jim Fan praised it after reading:

This is one of the most imaginative papers I have read recently.


Merge and evolve: automatically generating new foundation models

Most of the best-performing models on open-source LLM leaderboards are no longer "original" models like LLaMA or Mistral, but fine-tuned or merged models. From this, we can see:

A new trend has emerged.

Sakana.ai notes that open-source foundation models can easily be extended and fine-tuned in hundreds of different directions, producing new models that perform well in new domains.

Among these approaches, model merging shows great promise.


However, model merging can be a kind of "black magic" that relies heavily on intuition and domain expertise.

Therefore, a more systematic approach is needed.

Inspired by natural selection, Sakana.ai turned to evolutionary algorithms, introduced the concept of "Evolutionary Model Merge", and proposed a general method for discovering the best model combinations.

This method combines two different ideas:

(1) merging models in the data flow space (layers), and (2) merging models in the parameter space (weights).

Specifically, the first method, in the data flow space, uses evolution to discover the best combinations of layers from different models to form a new model.

In the past, the community relied on intuition to decide which layers of one model could be combined with which layers of another, and how.

In fact, Sakana.ai points out, this problem has a combinatorially huge search space, which makes it best suited to optimization algorithms such as evolutionary algorithms.

An example of the procedure is shown below:

[Figure: example of evolutionary layer merging in the data flow space]
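To make the idea concrete, here is a minimal toy sketch in Python (not Sakana.ai's actual implementation): a candidate merged model is encoded as a "recipe" of (donor model, layer index) picks, and a simple evolutionary loop searches over such recipes. The donor names, layer counts, and fitness function are all placeholder assumptions; in the real method, fitness would be the assembled model's benchmark score.

```python
import random

# Toy encoding: a "genome" says, for each position in the merged model,
# which donor model and which of its layers to place there.
DONORS = ["A", "B"]   # two hypothetical source models
NUM_LAYERS = 32       # layers per donor (assumption)
GENOME_LEN = 32       # depth of the merged model (assumption)

def random_genome():
    return [(random.choice(DONORS), random.randrange(NUM_LAYERS))
            for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    """Randomly reassign some positions to a new (donor, layer) pick."""
    return [(random.choice(DONORS), random.randrange(NUM_LAYERS))
            if random.random() < rate else gene
            for gene in genome]

def fitness(genome):
    """Stand-in objective: prefer recipes whose layer indices are roughly
    increasing (a plausible inductive bias). The real objective would be
    the benchmark score of the model assembled from this recipe."""
    return sum(a[1] <= b[1] for a, b in zip(genome, genome[1:]))

# Simple truncation-selection evolutionary loop
population = [random_genome() for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    children = [mutate(random.choice(parents)) for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print("best recipe (first 5 picks):", best[:5], "fitness:", fitness(best))
```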

The second method, in the parameter space, mixes the weights of multiple models to form a new model.

There are countless ways to implement such mixing; in principle, each layer can use a different mixing ratio, and the ratios can be even more fine-grained than that.

Here, evolutionary methods can efficiently discover novel mixing strategies.

The following is an example of mixing the weights of two different models to obtain a new model:

[Figure: example of mixing the weights of two models in the parameter space]
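As a minimal sketch of this idea (again a toy, not the paper's exact recipe), the weights of two models with the same architecture can be interpolated parameter by parameter, with per-layer mixing ratios that the evolutionary search would tune:

```python
import torch

def merge_state_dicts(sd_a, sd_b, ratios, default=0.5):
    """Per-parameter linear interpolation: ratios maps a parameter name
    to the weight given to model A (in [0, 1])."""
    merged = {}
    for name, w_a in sd_a.items():
        t = ratios.get(name, default)
        merged[name] = t * w_a + (1.0 - t) * sd_b[name]
    return merged

# Toy demonstration with two tiny "models" sharing one architecture
sd_a = {"layer0.weight": torch.ones(2, 2), "layer1.weight": torch.zeros(2, 2)}
sd_b = {"layer0.weight": torch.zeros(2, 2), "layer1.weight": torch.ones(2, 2)}

# A candidate "genome" of per-layer mixing ratios, as evolution might propose
ratios = {"layer0.weight": 0.8, "layer1.weight": 0.3}

merged = merge_state_dicts(sd_a, sd_b, ratios)
print(merged["layer0.weight"])  # tensor filled with 0.8
```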

Combining the above two methods gives the complete approach:

[Figure: combining data-flow-space and parameter-space merging]

The authors say they hope to combine distant domains that have not been explored together before, such as mathematics with non-English languages, or vision with non-English languages, to form new emergent capabilities.

The result is really a bit surprising.

New models easily achieve SOTA

Using the evolutionary merging method above, the team obtained three foundation models:

  • Large language model EvoLLM-JP

Formed by merging the Japanese LLM Shisa-Gamma with the math LLMs WizardMath/Abel, it is good at solving Japanese math problems and was evolved for 100-150 generations.

  • Visual language model EvoVLM-JP
Merged from the Japanese LLM Shisa Gamma 7B v1 and LLaVa-1.6-Mistral-7B, it is a VLM with Japanese language capability.

  • Image generation model EvoSDXL-JP
An SDXL diffusion model that supports Japanese.

The first two have been released on Hugging Face and GitHub; the third will be launched soon.
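Since the released checkpoints are ordinary Hugging Face models, they should load with the standard transformers API. Here is a hedged usage sketch; the repository ID and the prompt are assumptions, so check Sakana.ai's release pages for the exact names:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID; verify against Sakana.ai's actual release.
model_id = "SakanaAI/EvoLLM-JP-v1-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A toy Japanese math word problem ("3 apples and 5 oranges; how many fruits?")
prompt = "りんごが3個、みかんが5個あります。果物は全部でいくつですか?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```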

Let's look at each in detail.

1. EvoLLM-JP

It achieved the following results on the Japanese evaluation set of MGSM, a multilingual version of the GSM8K dataset:

[Table: results on the Japanese evaluation set of MGSM]

As can be seen, EvoLLM-JP's performance on solving math problems in Japanese exceeds that of its source models, and also exceeds high-performance models such as Llama-2 and GPT-3.5.

Among them, Model 4 was optimized only in the parameter space, and Model 6 is the result of further optimizing Model 4 in the data flow space.

On the Japanese lm-evaluation-harness benchmark, which evaluates general Japanese language capabilities, EvoLLM-JP achieved an average score as high as 70.5 across 9 tasks. With only 7 billion parameters, it beats models such as the 70-billion-parameter Llama-2.

[Table: Japanese lm-evaluation-harness scores]

The team stated that EvoLLM-JP is good enough to serve as a general-purpose Japanese model and can handle some interesting cases:

For example, math problems that require knowledge of Japanese culture, or telling Japanese jokes in Kansai dialect.

2. EvoVLM-JP

On the following two image question-answering benchmark datasets, a higher score means the model's descriptions in Japanese are more accurate.

As a result, it is not only better than LLaVa-1.6-Mistral-7B, the English VLM it is based on, but also better than existing Japanese VLMs.

[Table: Japanese VLM benchmark scores]

As shown in the image below, when asked what color the traffic light in the picture is, only EvoVLM-JP answered correctly: blue.

[Image: VLM answers to the traffic-light question]

3. EvoSDXL-JP

This Japanese-capable SDXL model needs only 4 diffusion inference steps, so generation is quite fast.

Specific benchmark scores have not yet been released, but the team says the model is "quite promising."

You can enjoy some examples:

The prompt included: miso ramen, highest-quality ukiyo-e, Katsushika Hokusai, Edo period.

[Image: EvoSDXL-JP sample generations]
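For illustration, here is a hedged sketch of few-step SDXL inference with the diffusers library. Since EvoSDXL-JP had not yet been released at the time of writing, the model ID below is purely a placeholder, and the guidance setting is an assumption typical of few-step models:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder model ID: EvoSDXL-JP was not yet released when this was written.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "SakanaAI/EvoSDXL-JP",
    torch_dtype=torch.float16,
).to("cuda")

# Few-step inference: the article says only 4 diffusion steps are needed.
image = pipe(
    "Miso ramen, highest-quality ukiyo-e, Katsushika Hokusai, Edo period",
    num_inference_steps=4,
    guidance_scale=0.0,  # few-step models often skip CFG (assumption)
).images[0]
image.save("evo_sdxl_sample.png")
```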

Regarding the three new models above, the team pointed out:

In principle, we could use gradient-based backpropagation to further improve the performance of these models.

But we did not, because our purpose here is to show that even without backpropagation, one can still obtain sufficiently advanced foundation models that challenge the current "expensive paradigm."

Netizens liked it one after another.

Jim Fan also added:

In the field of foundation models, the community is currently focused almost entirely on letting models learn, and pays little attention to search; yet the latter actually has huge potential at both training time (which is what this paper's evolutionary algorithm does) and inference time.

[Image: Jim Fan's post, liked by Musk]

So, as one netizen put it:

Are we now in the era of the Cambrian explosion of models?


Paper address: https://arxiv.org/abs/2403.13187


Source: 51cto.com