
With 3.6 trillion tokens and 340 billion parameters, details of Google's large model PaLM 2 have been exposed

WBOY · 2023-05-21

Last Thursday, at the 2023 Google I/O conference, Google CEO Sundar Pichai announced PaLM 2, a large model positioned against GPT-4, and officially released a preview version with improved capabilities in mathematics, code, reasoning, multilingual translation, and natural language generation.


PaLM 2 comes in four sizes, from smallest to largest: Gecko, Otter, Bison, and Unicorn, making it easier to deploy across a variety of use cases. The lightweight Gecko model can run quickly on mobile devices, enabling good interactive applications on-device even without an internet connection.

However, at the conference Google did not give specific technical details about PaLM 2, saying only that it is built on Google's latest JAX and TPU v4.


Yesterday, according to internal documents seen by CNBC, PaLM 2 was trained on 3.6 trillion tokens. For comparison, the previous-generation PaLM was trained on 780 billion tokens.

In addition, Google has previously stated that PaLM 2 is smaller than earlier LLMs, meaning it can become more efficient while handling more complex tasks. The internal documents bear this out: PaLM 2 has 340 billion parameters, far fewer than PaLM's 540 billion.


How do PaLM 2's training tokens and parameter count compare with other LLMs? For reference, LLaMA, released by Meta in February, was trained on 1.4 trillion tokens, and OpenAI's 175-billion-parameter GPT-3 was trained on 300 billion tokens.
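As a rough illustration of the "smaller model, more data" trend, these figures can be turned into a tokens-per-parameter ratio. The sketch below uses only the numbers quoted in this article, plus the commonly cited 65B-parameter size for the largest LLaMA variant, which is not stated here and is an assumption.

```python
# Back-of-envelope tokens-per-parameter ratios, using the figures quoted above.
# The 65B size for LLaMA's largest variant is an assumption, not from this article.
models = {
    "PaLM":   {"params": 540e9, "tokens": 780e9},
    "PaLM 2": {"params": 340e9, "tokens": 3.6e12},
    "GPT-3":  {"params": 175e9, "tokens": 300e9},
    "LLaMA":  {"params": 65e9,  "tokens": 1.4e12},  # assumed largest LLaMA variant
}

for name, m in models.items():
    ratio = m["tokens"] / m["params"]
    print(f"{name:7s} ~{ratio:4.1f} tokens per parameter")
```

By this crude measure, PaLM 2's ratio (roughly 10 tokens per parameter) is far higher than PaLM's or GPT-3's (under 2), which is consistent with the claim that it trades parameter count for more training data.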

While Google has been eager to demonstrate the power of its AI technology and how it can be embedded in search, email, document processing, and spreadsheets, it has been reluctant to disclose the size of its training data or other details. Google is not alone in this: OpenAI has also stayed silent on the details of its latest multimodal large model, GPT-4. Both companies say the non-disclosure stems from the competitive nature of the business.

However, as the AI arms race continues to heat up, the research community is increasingly demanding transparency. And in an internal Google document leaked some time ago, Google researchers argued that although OpenAI and Google appear to be chasing each other on large AI models, the real winner may not come from either of them, because a third force, open source, is quietly rising.

At present, the authenticity of this internal document has not been verified, and Google has not commented on the relevant content.

Netizen Comments

When PaLM 2 was first officially announced, some netizens predicted its parameter count based on the Chinchilla scaling law, estimating that the PaLM 2 model family would land at 80B / 90B / 100B parameters, quite different from the 340B reported this time.
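For context, the Chinchilla result is often summarized as roughly 20 training tokens per parameter for compute-optimal training, with training compute approximated as C ≈ 6·N·D. The sketch below applies that rule of thumb to the leaked figures; the 20:1 ratio is the usual approximation, not an exact number from the Chinchilla paper or from this article.

```python
# Rough Chinchilla-style rule of thumb: compute-optimal training uses
# about 20 tokens per parameter, with training compute C ~= 6 * N * D.
TOKENS_PER_PARAM = 20  # common approximation of the Chinchilla ratio

def chinchilla_optimal_params(tokens: float) -> float:
    """Parameter count the 20:1 heuristic would pair with a given token budget."""
    return tokens / TOKENS_PER_PARAM

leaked_tokens = 3.6e12   # 3.6 trillion tokens, per the leaked documents
leaked_params = 340e9    # 340 billion parameters, per the leaked documents

print(f"Heuristic-optimal params for 3.6T tokens: {chinchilla_optimal_params(leaked_tokens)/1e9:.0f}B")
print(f"Approx. training compute: {6 * leaked_params * leaked_tokens:.2e} FLOPs")
```

By this crude measure, 3.6 trillion tokens would pair with roughly 180B parameters, so neither the 80B-100B guesses nor the reported 340B matches the simple 20:1 heuristic; actual scaling-law fits depend heavily on the assumed compute budget.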


Others speculated about the training cost of PaLM 2. Based on how large models have scaled in the past, one netizen estimated it would cost US$100 million to train PaLM 2.
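One way to sanity-check a figure like that is a back-of-envelope compute estimate. The sketch below uses the leaked token and parameter counts with the standard C ≈ 6·N·D approximation; the accelerator throughput, utilization, and hourly price are illustrative assumptions, not figures from the article, and the result only indicates an order of magnitude.

```python
# Back-of-envelope training-cost estimate for PaLM 2 (order of magnitude only).
# Hardware throughput, utilization, and hourly price below are assumptions.
params = 340e9               # leaked parameter count
tokens = 3.6e12              # leaked training tokens
flops = 6 * params * tokens  # standard approximation of training compute

peak_flops_per_chip = 275e12   # assumed bf16 peak for a TPU v4-class chip
utilization = 0.4              # assumed model FLOPs utilization
price_per_chip_hour = 3.0      # assumed USD per chip-hour

chip_hours = flops / (peak_flops_per_chip * utilization) / 3600
cost = chip_hours * price_per_chip_hour

print(f"Total compute: {flops:.2e} FLOPs")
print(f"Chip-hours:    {chip_hours:,.0f}")
print(f"Rough cost:    ${cost / 1e6:.0f}M")
```

Under these assumptions the estimate lands in the tens of millions of dollars, the same order of magnitude as the US$100 million guess; different assumptions about hardware pricing and utilization easily shift it up or down severalfold.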


With PaLM 2's parameter count leaked, one netizen suggested trying to guess the size of Bard next.


With the PaLM 2 token count also leaked, netizens couldn't help wondering: how many training tokens will it take to reach the major turning point before AGI arrives?



Source: this article is reproduced from 51cto.com.