
Google Gemini 1.5 is launched quickly: MoE architecture, 1 million contexts

WBOY | 2024-02-16

Today, Google announced the launch of Gemini 1.5.

Gemini 1.5 is built on research and engineering innovations across Google's foundation models and infrastructure. This release introduces a new Mixture-of-Experts (MoE) architecture that makes Gemini 1.5 more efficient to train and serve.

What Google has launched is the first version of Gemini 1.5 for early testing, namely Gemini 1.5 Pro. It is a medium-sized multimodal model that is scaled and optimized for a variety of tasks. Compared to Google's largest model, 1.0 Ultra, Gemini 1.5 Pro delivers similar performance levels and introduces groundbreaking experimental features to better understand long context.

Gemini 1.5 Pro comes with a standard context window of 128,000 tokens. However, starting today, Google is offering a private preview through AI Studio and Vertex AI to a limited number of developers and enterprise customers, allowing them to try it with a context window of up to 1 million tokens. In addition, Google has made several optimizations aimed at reducing latency, lowering compute requirements, and improving the user experience.

Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis gave a special introduction to the new model.



Gemini 1.5 builds on Google’s leading research into Transformer and MoE architectures. The traditional Transformer acts as one large neural network, while the MoE model is divided into smaller "expert" neural networks.

Depending on the type of input given, the MoE model learns to selectively activate only the most relevant expert paths in its neural network. This specialization greatly increases the efficiency of the model. Google has been an early adopter and pioneer of deep learning MoE technology through research on sparse gated MoE, GShard-Transformer, Switch-Transformer, M4, and more.
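To make the routing idea concrete, here is a minimal sketch of sparse top-k expert routing in Python with NumPy. The expert count, dimensions, and top-k value are illustrative assumptions for a toy example, not details of Gemini's actual architecture.

```python
import numpy as np

def moe_layer(x, experts, gate_weights, top_k=2):
    """Minimal sketch of sparse Mixture-of-Experts routing for one token.

    x            : input vector, shape (d,)
    experts      : list of callables, each a small "expert" network
    gate_weights : router matrix, shape (num_experts, d)
    top_k        : how many experts to activate for this token
    """
    # The router scores one logit per expert for this input.
    logits = gate_weights @ x
    # Keep only the top-k experts; all others stay inactive (sparse activation).
    top = np.argsort(logits)[-top_k:]
    # Softmax over the selected experts' logits to get mixing weights.
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()
    # Combine the outputs of only the activated experts.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts, each a random linear map on 8-dimensional inputs.
rng = np.random.default_rng(0)
d, num_experts = 8, 4
experts = [lambda v, W=rng.normal(size=(d, d)): W @ v for _ in range(num_experts)]
gate = rng.normal(size=(num_experts, d))
out = moe_layer(rng.normal(size=d), experts, gate, top_k=2)
print(out.shape)  # (8,)
```

Because only the routed experts run for each token, total parameter count can grow without a proportional increase in per-token compute, which is the efficiency the article describes.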

Google’s latest innovations in model architecture enable Gemini 1.5 to learn complex tasks faster while maintaining quality, and to train and serve more efficiently. These efficiencies are helping Google's teams iterate on, train, and deliver more advanced versions of Gemini faster than ever before, and further optimizations are in the works.

Longer context, more useful features

" of artificial intelligence models" "Context windows" are composed of tokens, which are the building blocks for processing information. A token can be an entire part or subpart of text, image, video, audio, or code. The larger the model's context window, the more information it can receive and process in a given prompt, making its output more consistent, relevant, and useful.

Through a series of machine learning innovations, Google has increased the context window capacity of 1.5 Pro well beyond the original 32,000 tokens of Gemini 1.0. The large model can now run in production with up to 1 million tokens.

This means 1.5 Pro can process vast amounts of information in one go, including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code, or over 700,000 words. In its research, Google has also successfully tested up to 10 million tokens.
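As a rough illustration of how such content maps onto a token budget, the sketch below converts word and line counts into approximate token counts. The ratios used are guesses for illustration only and are not properties of Gemini's tokenizer.

```python
# Back-of-the-envelope check of what fits in a 1-million-token context window.
# The tokens-per-unit ratios are illustrative assumptions, not measurements of
# Gemini's tokenizer; real counts depend on the model and the content itself.
CONTEXT_WINDOW_TOKENS = 1_000_000

TOKENS_PER_UNIT = {
    "words of prose": 1.3,   # assumed ~1.3 tokens per English word
    "lines of code": 10.0,   # assumed ~10 tokens per line of code
}

def estimate(kind: str, amount: int) -> tuple[int, bool]:
    """Return an approximate token count and whether it fits in the window."""
    tokens = int(amount * TOKENS_PER_UNIT[kind])
    return tokens, tokens <= CONTEXT_WINDOW_TOKENS

for kind, amount in [("words of prose", 700_000), ("lines of code", 30_000)]:
    tokens, fits = estimate(kind, amount)
    print(f"{amount:,} {kind} -> ~{tokens:,} tokens (fits: {fits})")
```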

Complex reasoning about large amounts of information

Within a given prompt, 1.5 Pro can seamlessly analyze, classify, and summarize large amounts of content. For example, when given the 402-page transcripts from the Apollo 11 moon landing mission, it can reason about conversations, events, and details found across the document.
Gemini 1.5 Pro can understand, reason about, and identify curious details in the 402 pages of records from the Apollo 11 moon landing mission.

Better understanding and reasoning across modalities

1.5 Pro can perform highly complex understanding and reasoning tasks across different modalities, including video. For example, when given a 44-minute silent film by Buster Keaton, the model could accurately analyze various plot points and events, even reasoning about small details in the film that were easily overlooked.
Gemini 1.5 Pro can identify a scene from a 44-minute Buster Keaton silent film when given a simple line drawing of a real-life object as reference material.

Relevant problem-solving with longer blocks of code

1.5 Pro can perform more relevant problem-solving tasks across longer blocks of code. When given a prompt containing more than 100,000 lines of code, it can better reason across examples, suggest helpful modifications, and explain how different parts of the code work.
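One way to picture feeding an entire codebase to a long-context model is to concatenate its files into a single prompt, as in the sketch below. The file extensions, the character budget, and the roughly 3.5-characters-per-token conversion it implies are assumptions for illustration, not Gemini specifics.

```python
import os

def pack_codebase(root: str, exts=(".py", ".js", ".go"), max_chars=3_500_000) -> str:
    """Concatenate a codebase into a single long prompt string.

    max_chars is a crude stand-in for the token budget: the ~3.5 characters
    per token it implies is an assumption, not a measured property of Gemini.
    """
    parts, used = [], 0
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
            # Label each file so the model can refer back to specific paths.
            block = f"\n--- FILE: {os.path.relpath(path, root)} ---\n{text}"
            if used + len(block) > max_chars:
                return "".join(parts)  # stop before exceeding the budget
            parts.append(block)
            used += len(block)
    return "".join(parts)

# The packed string would then be followed by a question such as
# "Explain how the request router works and suggest improvements."
```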
Enhanced performance

When tested on a comprehensive panel of text, code, image, audio, and video evaluations, 1.5 Pro outperformed 1.0 Pro on 87% of the benchmarks used to develop large language models (LLMs). Compared to 1.0 Ultra on the same benchmarks, it performs at a broadly similar level.

Gemini 1.5 Pro maintains a high level of performance even as the context window increases.

In the Needle In A Haystack (NIAH) evaluation, where a small piece of text containing a particular fact or statement is deliberately placed within a very long block of text, 1.5 Pro found the embedded text 99% of the time, in blocks of data as long as 1 million tokens.
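For readers who want a feel for how such an evaluation is constructed, here is a small sketch of building one needle-in-a-haystack test case. The needle text, filler sentence, and the `call_model` helper are hypothetical and are not part of Google's actual NIAH setup.

```python
import random

def build_haystack(needle: str, filler: str, total_words: int, position: float) -> str:
    """Bury one factual sentence (the needle) inside a long block of filler text."""
    repeated = (filler + " ") * (total_words // max(len(filler.split()), 1))
    words = repeated.split()
    words.insert(int(len(words) * position), needle)
    return " ".join(words)

def found_needle(model_answer: str, expected: str) -> bool:
    """Count a run as a success if the model reproduces the hidden fact."""
    return expected.lower() in model_answer.lower()

haystack = build_haystack(
    needle="The secret launch code is AURORA-7.",
    filler="The quick brown fox jumps over the lazy dog.",
    total_words=500_000,        # scale this up toward ~1M tokens for a real test
    position=random.random(),   # vary where the needle is hidden
)
prompt = haystack + "\n\nWhat is the secret launch code?"
# success = found_needle(call_model(prompt), "AURORA-7")  # call_model is hypothetical
```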

Gemini 1.5 Pro also demonstrates impressive "in-context learning" skills, meaning it can learn a new skill from information given in a long prompt, without needing additional fine-tuning. Google tested this skill on the MTOB (Machine Translation from One Book) benchmark, which measures the model's ability to learn from information it has never seen before. When given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English into Kalamang at a level similar to a person learning from the same content.

Since 1.5 Pro’s long context window is a first for a large model, Google is constantly developing new evaluations and benchmarks to test its novel features.

For more details, see the Gemini 1.5 Pro Technical Report.

Technical report address: https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf

Build and experiment with Gemini models

Google is committed to responsibly bringing each new generation of Gemini models to the billions of people, developers, and enterprises around the world who use them.

Starting today, Google is making 1.5 Pro preview available to developers and enterprise customers through AI Studio and Vertex AI.

When the model is ready for a wider release, Google will launch 1.5 Pro with a standard 128,000-token context window. Soon, Google plans to introduce pricing tiers that start at the standard 128,000-token context window and scale up to 1 million tokens as it improves the model.

Early testers can try 1 million token context windows for free during testing, and significant speed improvements are coming.

Developers interested in testing 1.5 Pro can register now in AI Studio, while enterprise customers can contact their Vertex AI account team.
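As a minimal sketch of what a developer call might look like once preview access is granted, the example below uses the google-generativeai Python SDK with an AI Studio API key. The model identifier "gemini-1.5-pro-latest", the placeholder key, and the transcript file are assumptions for illustration, not confirmed details of the preview.

```python
# Minimal sketch, assuming access via an AI Studio API key and the
# google-generativeai Python SDK. Check AI Studio for the exact preview
# model name available to your account.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")        # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro-latest")   # assumed model name

with open("apollo11_transcript.txt", encoding="utf-8") as f:  # hypothetical file
    transcript = f.read()

# Long-context prompt: instruction followed by the full document text.
response = model.generate_content(
    ["Summarize the key events in this mission transcript:", transcript]
)
print(response.text)
```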

Reference link: https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#sundar-note


Statement: This article is reproduced from jiqizhixin.com.