
A full breakthrough, Google updated a large number of large model products last night

王林
2024-04-10

This Tuesday, at Google Cloud Next 2024, Google released a series of AI model updates and products, including Gemini 1.5 Pro, which for the first time can natively understand audio (speech); CodeGemma, a new code-generation model; Axion, Google's first self-developed Arm processor; and more.


Gemini 1.5 Pro

Gemini 1.5 Pro, Google's most powerful generative AI model, is now available in public preview on Vertex AI, Google's enterprise-focused AI development platform. The context it can handle grows from 128,000 tokens to 1 million tokens. One million tokens is equivalent to approximately 700,000 words, or approximately 30,000 lines of code. That is roughly four times the amount of input data Anthropic's flagship model Claude 3 can handle, and about eight times the maximum context of OpenAI's GPT-4 Turbo.


Official announcement: https://developers.googleblog.com/2024/04/gemini-15-pro-in-public-preview-with-new-features.html

This version offers native audio (speech) understanding and a new File API for the first time, making file handling easier. Gemini 1.5 Pro's input modalities are being expanded to include audio (speech) understanding in both the Gemini API and Google AI Studio. In addition, Gemini 1.5 Pro can now reason over both the images (frames) and the audio (speech) of videos uploaded in Google AI Studio.


For example, you can upload a recording of a lecture, such as this one by Jeff Dean at more than 117,000 tokens, and Gemini 1.5 Pro can turn it into a quiz with an answer key. (The demo has been sped up.)

Google has also made three main improvements to the Gemini API:

1. System instructions: now available in Google AI Studio and the Gemini API to guide the model's responses. Define roles, formats, goals, and rules to steer the model's behavior for your specific use case.


Easily set system instructions in Google AI Studio

2. JSON mode: instructs the model to output only JSON objects. This mode makes it possible to extract structured data from text or images. It is available via cURL now, with Python SDK support coming soon.

3. Improved function calling: you can now select modes to restrict the model's output and improve reliability, choosing from text, function calls, or just the function itself.
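The three API improvements above all surface as fields on a single request body. The sketch below builds such a body in Python; field names follow the public `generativelanguage.googleapis.com` v1beta REST schema, but treat the exact spellings (and the hypothetical `get_weather` function name) as illustrative rather than authoritative.

```python
import json

def build_request(user_text: str) -> dict:
    """Assemble a Gemini API generateContent request body that uses
    system instructions, JSON mode, and a function-calling mode."""
    return {
        # 1. System instructions: define role, format, goals, and rules.
        "system_instruction": {
            "parts": [{"text": "You are a terse assistant. Reply only in JSON."}]
        },
        "contents": [
            {"role": "user", "parts": [{"text": user_text}]}
        ],
        # 2. JSON mode: force the model to emit only a JSON object.
        "generationConfig": {"response_mime_type": "application/json"},
        # 3. Function-calling config: restrict output to function calls only.
        "tool_config": {
            "function_calling_config": {
                "mode": "ANY",  # AUTO / ANY / NONE
                "allowed_function_names": ["get_weather"],  # hypothetical tool
            }
        },
    }

body = build_request("What's the weather in Paris?")
print(json.dumps(body, indent=2))
```

POSTing this body (with an API key) to the model's `generateContent` endpoint would exercise all three features at once.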

In addition, Google is releasing a next-generation text embedding model that outperforms comparable models. Starting today, developers can access it through the Gemini API. The new model, text-embedding-004 (text-embedding-preview-0409 in Vertex AI), achieves stronger retrieval performance on the MTEB benchmark and outperforms existing models of comparable dimensionality.
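Retrieval performance here means ranking documents by how close their embedding vectors are to a query's embedding, typically via cosine similarity. The toy sketch below shows that ranking step in pure Python; the 4-dimensional vectors are made-up stand-ins for the 256- or 768-dimensional vectors text-embedding-004 would actually return.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: in practice these would come from the API.
query = [0.9, 0.1, 0.0, 0.1]
docs = {
    "doc_tpu_guide": [0.8, 0.2, 0.1, 0.0],   # semantically close to query
    "doc_cooking":   [0.0, 0.1, 0.9, 0.4],   # unrelated topic
}

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])
```

The same ranking logic applies unchanged whether the vectors have 4, 256, or 768 dimensions, which is why a smaller model that embeds meaning well (as the MTEB results suggest Gecko does at 256 dims) can beat larger-output models.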


On the MTEB benchmark, text-embedding-004 (aka Gecko) with 256-dimensional output outperforms all larger models with 768-dimensional output.

However, note that Gemini 1.5 Pro is not available to those without access to Vertex AI or AI Studio. Currently, most people interact with Gemini language models through the Gemini chatbot. Gemini Ultra powers the Gemini Advanced chatbot; while it is powerful and can understand long commands, it is not as fast as Gemini 1.5 Pro.


Three major open source tools

At Google Cloud Next 2024, the company also launched several open source tools, primarily aimed at supporting generative AI projects and infrastructure. The first is MaxDiffusion, a collection of reference implementations of various diffusion models that run on XLA (Accelerated Linear Algebra) devices.


GitHub address: https://github.com/google/maxdiffusion

The second is JetStream, a new engine for running generative AI models. Currently, JetStream supports only TPUs, though GPU compatibility may come later. Google claims JetStream can deliver up to 3x better price/performance for models like Google's own Gemma 7B and Meta's Llama 2.



GitHub address: https://github.com/google/JetStream

The third is MaxText, a collection of text-generation AI models targeting TPUs and Nvidia GPUs in the cloud. MaxText now includes Gemma 7B, OpenAI's GPT-3, Llama 2, and models from AI startup Mistral, all of which Google says can be customized and fine-tuned to developers' needs.


GitHub address: https://github.com/google/maxtext

First self-developed Arm processor - Axion


Google Cloud announced the launch of its first self-developed Arm processor, called Axion. It is based on Arm's Neoverse V2 and designed for data centers. Google says Axion instances perform 30% better than comparable Arm-based instances from competitors like AWS and Microsoft, and deliver up to 50% better performance and 60% better energy efficiency than corresponding x86-based instances.

Google emphasized during Tuesday's launch that because Axion is built on an open foundation, Google Cloud customers will be able to bring their existing Arm workloads to Google Cloud, without any modification.

However, Google has not yet released further details.

Code completion and generation tool——CodeGemma

CodeGemma is built on the Gemma models and brings powerful yet lightweight coding capabilities to the community. It comes in three variants: a 7B pretrained variant for code completion and code generation, a 7B instruction-tuned variant for code chat and instruction following, and a 2B pretrained variant for fast local code completion.
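The completion variants are trained with fill-in-the-middle (FIM) formatting: the model sees the code before and after the cursor and predicts the missing middle. A minimal sketch of building such a prompt is below; the sentinel token spellings follow the CodeGemma report, but verify them against the tokenizer you actually load before relying on them.

```python
# CodeGemma FIM sentinel tokens (spellings per the CodeGemma report;
# treat as assumptions until checked against the real tokenizer).
FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def fim_prompt(before_cursor: str, after_cursor: str) -> str:
    """Wrap the code surrounding the cursor in FIM sentinels.
    The model's continuation after FIM_MIDDLE is the completion."""
    return f"{FIM_PREFIX}{before_cursor}{FIM_SUFFIX}{after_cursor}{FIM_MIDDLE}"

prompt = fim_prompt(
    "def mean(xs):\n    return ",          # code before the cursor
    "\n\nprint(mean([1, 2, 3]))\n",        # code after the cursor
)
print(prompt)
```

Fed to a 2B or 7B completion variant, the expected continuation here would be something like `sum(xs) / len(xs)`, after which generation stops at the next sentinel.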


CodeGemma has the following major advantages:

  • Intelligent code completion and generation: complete lines and functions, and even generate entire blocks of code, whether you are working locally or in the cloud;
  • Trained on 500 billion tokens of primarily English data, so the generated code is not only more syntactically correct but also more semantically meaningful, helping to reduce errors and debugging time;
  • Multi-language capability: supports Python, JavaScript, Java, and other popular programming languages;
  • Simplified workflows: integrate CodeGemma into your development environment to write less boilerplate and get to important, interesting, differentiated code faster.

Some comparison results between CodeGemma and other mainstream large code models are shown below:

Comparison results of the CodeGemma 7B and Gemma 7B models on GSM8K, MATH, and other datasets.


For more technical details and experimental results, please refer to the paper released simultaneously by Google.


Paper address: https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf

Open language model - RecurrentGemma

Google DeepMind also released RecurrentGemma, a family of open-weight language models. RecurrentGemma is based on the Griffin architecture, which replaces global attention with a mixture of local attention and linear recurrences, enabling fast inference when generating long sequences.
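The reason a linear recurrence is cheap at inference time is that its state is a fixed-size vector updated once per token, whereas global attention keeps a key/value cache that grows with sequence length. A scalar toy version of the recurrence h_t = a * h_{t-1} + x_t is sketched below; Griffin's real gates are learned and per-channel, so the fixed decay `a` here is purely illustrative.

```python
def linear_recurrence(xs, a=0.5):
    """Run the toy recurrence h_t = a * h_{t-1} + x_t over a sequence.
    The state h is a single number: O(1) memory per generated token,
    regardless of how long the sequence grows."""
    h = 0.0
    out = []
    for x in xs:          # one constant-cost update per token
        h = a * h + x
        out.append(h)
    return out

states = linear_recurrence([1.0, 0.0, 0.0, 4.0])
print(states)  # [1.0, 0.5, 0.25, 4.125]
```

Note how the contribution of the first input decays geometrically (1.0, 0.5, 0.25, ...) instead of being stored and re-read as attention would do, which is exactly the trade that buys higher long-sequence throughput.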


Technical report: https://storage.googleapis.com/deepmind-media/gemma/recurrentgemma-report.pdf

RecurrentGemma-2B achieves excellent performance on downstream tasks, comparable to Gemma-2B (transformer architecture).


At the same time, RecurrentGemma-2B achieves higher throughput during inference, especially on long sequences.


Video editing tool - Google Vids

Google Vids is an AI video creation tool, newly added to Google Workspace.


With Google Vids, users can create videos alongside other Workspace tools like Docs and Sheets, and collaborate with colleagues in real time, Google said.


Enterprise-specific code assistant——Gemini Code Assist

Gemini Code Assist is an enterprise-focused AI code completion and assistance tool, benchmarked against GitHub Copilot Enterprise. Code Assist will be available as a plug-in for popular editors like VS Code and JetBrains IDEs.


Image source: https://techcrunch.com/2024/04/09/google-launches-code-assist-its-latest-challenger-to-githubs-copilot/

Code Assist is powered by Gemini 1.5 Pro. Gemini 1.5 Pro has a million-token context window, which allows Google's tools to introduce more context than competitors. Google says this means Code Assist can provide more accurate code suggestions and the ability to reason about and change large chunks of code.

Google said: "Code Assist enables customers to make large-scale changes to their entire code base, enabling AI-assisted code transformations that were previously impossible."

Agent builder——Vertex AI Agent Builder

AI agents are one of the industry's hottest directions this year. Google has now announced a new tool to help enterprises build them: Vertex AI Agent Builder.

Thomas Kurian, CEO of Google Cloud, said: "Vertex AI Agent Builder makes it extremely easy and fast to build and deploy production-ready generative AI conversational agents. Developers can guide agents the same way a human would, improving the quality and correctness of model-generated results."


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.