
After investing US$4 billion in Anthropic, Amazon Cloud Technology officially announced five generative AI innovations

王林 | 2023-10-11 20:05


With the growth of computing power and model scale, multi-modal large models have reached the point of "emergence". Building on this, generative AI has become the flagship example of large models put into practice, helping people create new content and ideas efficiently. Generative AI is powered by machine learning models, and enterprises and individuals can retrain open-source foundation models to build models and applications suited to their own needs. Doing so, however, requires substantial investment in the computing infrastructure needed for retraining and in experimenting with multiple open-source models. Can the low-threshold, easy-to-deploy qualities of cloud services be reproduced in an enterprise's AI journey?

Amazon Bedrock, Amazon Cloud Technology's fully managed generative AI service, is now officially available. Customers can use high-performance foundation models from multiple leading AI companies, together with a set of capabilities for building generative AI applications, simplifying development while ensuring privacy and security.

○ Amazon Bedrock has added Amazon Titan Embeddings and Meta Llama 2 models to provide customers with more flexible choices when looking for models suitable for their application scenarios;

○ The new Amazon CodeWhisperer feature will provide customized code suggestions based on generative AI, making full use of the enterprise's internal code base to improve developer productivity;

○ Amazon QuickSight provides generative BI dashboard creation capabilities, allowing business analysts to explore data more conveniently and quickly, and create visual reports using natural language descriptions;

○ Adidas, BMW Group, GoDaddy, Merck, NatWest Group, Persistent, PGA TOUR, Takenaka Corporation, and Traeger Grills are among the enterprises applying Amazon Cloud Technology's generative AI innovations to reshape their products and services.

Amazon Cloud Technology announced five generative AI innovations that let businesses of all sizes build new generative AI applications, increase employee productivity, and transform how they operate. Amazon Bedrock, Amazon Cloud Technology's fully managed service that provides foundation models (FMs) from leading AI companies through a unified application programming interface (API), is now generally available. The Amazon Titan Embeddings model is also generally available, giving customers more foundation models to choose from. Amazon Bedrock will soon add the Meta Llama 2 models, making it the first service to provide fully managed Llama 2 models through an API. A new capability of the AI programming assistant Amazon CodeWhisperer, coming soon in preview, can securely customize CodeWhisperer's code suggestions based on an enterprise's internal code base, helping developers get more value from generative AI. Finally, Amazon QuickSight's generative BI authoring capability, now available in preview, improves the productivity of business analysts; QuickSight is a unified, cloud-native BI service that lets customers create visualizations, format charts, perform calculations, and more, simply by describing what they want in natural language. From Amazon Bedrock and Amazon Titan Embeddings to Amazon CodeWhisperer and Amazon QuickSight, these innovations strengthen Amazon Cloud Technology's capabilities at every layer of the generative AI stack, giving businesses of any size enterprise-grade security and privacy, a choice of models, and the ability to customize them.

"Over the past year, the explosion of data, the availability of massively scalable computing power, and rapid advances in machine learning have ignited enthusiasm for generative AI, profoundly changing every industry and reshaping the way people work," said Swami Sivasubramanian, Vice President of Data and Machine Learning at Amazon Cloud Technology. "With enterprise-grade security and privacy, a choice of leading foundation models, a data-first approach, and high-performance, cost-effective infrastructure, Amazon Cloud Technology has earned the trust of enterprises and delivers generative AI solutions at every layer of the technology stack to help them keep innovating. This release is an important milestone: it puts generative AI within reach of every enterprise, from startups to large organizations, and every employee, from developers to data analysts. Through powerful innovation, Amazon Cloud Technology brings stronger security, more choices, and outstanding performance, while helping customers align closely with their data strategy to unlock the full potential of generative AI."

Enterprises across all industries want to use generative AI to change the way they operate, rethink how they solve complex problems, and create new user experiences. Although recent advances in generative AI have attracted widespread attention, many enterprises have not yet been able to take part in this transformation: they are eager to adopt generative AI, but worry about the security and privacy of these tools. These companies want to test multiple foundation models to find the one that best suits their application scenarios, and they want to make full use of the data they already have to deliver unique experiences to end users through customized models. Finally, enterprises need tools to get to market quickly and infrastructure that lets them deploy generative AI applications globally.

This is why many companies are turning to Amazon Cloud Technology's generative AI services, including Adidas, Alida, BMW Group, Genesys, Glide, GoDaddy, Intuit, LexisNexis Legal & Professional, Lonely Planet, Merck, NatWest, Perplexity AI, Persistent, Quext, RareJob Technologies, Rocket Mortgage, SnapLogic, Takenaka Corporation, Traeger Grills, PGA TOUR, Verint, Verisk, and WPS, among others.

Amazon Bedrock is generally available to help more customers build and scale generative AI applications

Amazon Bedrock is a fully managed service that provides high-performance foundation models for customers' overseas businesses from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, along with the set of capabilities enterprises need to build generative AI applications, simplifying development while ensuring privacy and security. Foundation models are broadly applicable and can support many fields, such as information search, content creation, and drug discovery. But for many businesses looking to take advantage of generative AI, several issues need to be addressed. First, they need a simple, intuitive way to find and access high-performance foundation models that fit their scenarios and perform well. Second, customers want applications to integrate seamlessly, without having to manage huge infrastructure clusters or incur high costs. Finally, customers want to easily build differentiated applications by combining foundation models with their own data. The data customers use for customization is extremely valuable intellectual property, so it must be fully protected during use, with security and privacy ensured and with customers in control of how their data is shared and used.

With Amazon Bedrock's comprehensive capabilities, enterprises can more easily experiment with a variety of leading foundation models and customize them with their own proprietary data. In addition, Amazon Bedrock offers differentiated capabilities such as managed agents (AI agents), which can be created without writing any code and can carry out complex tasks such as booking travel, processing insurance claims, planning advertising campaigns, and managing inventory. Because Amazon Bedrock is serverless, customers do not have to manage any infrastructure and can securely integrate and deploy generative AI capabilities into their applications using familiar Amazon Cloud Technology services.
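As a rough illustration of what this single-API experience looks like in practice (not taken from the article), the sketch below uses the AWS SDK for Python (boto3), assuming a version that includes Bedrock support, credentials with Bedrock access, and a Region where the service is available. It lists the foundation models exposed through the unified API and creates the runtime client used to invoke them:

```python
import boto3

# Control-plane client: used for model discovery and management.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models the account can access through the unified API.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model.get("providerName", "unknown provider"))

# Runtime client: the same API surface is then used to invoke any of the models.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
```

Because the service is managed, the only setup a developer performs is creating these clients; there are no clusters or endpoints to provision.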

Amazon Bedrock was built with security and privacy in mind to help customers protect sensitive data. Customers can use Amazon PrivateLink to establish a dedicated, secure connection between Amazon Bedrock and their virtual private cloud (VPC), ensuring that data in transit is never exposed to the public internet. For customers with strict regulatory requirements, Amazon Bedrock is HIPAA (Health Insurance Portability and Accountability Act) eligible and can be used in compliance with GDPR (the EU General Data Protection Regulation), allowing more customers to benefit from generative AI.
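A minimal sketch of the PrivateLink setup described above, again using boto3: it creates a VPC interface endpoint so that model invocations stay on the private network. The service name, VPC, subnet, and security-group IDs shown here are placeholders and assumptions; check the endpoint service names available in your Region before relying on them.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint (PrivateLink) for the Bedrock runtime service,
# so traffic between the VPC and Bedrock never traverses the public internet.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                          # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],                 # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],              # placeholder security group
    PrivateDnsEnabled=True,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```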

Amazon Bedrock expands its model selection with Amazon Titan Embeddings and Llama 2 to help every customer find the right model for their application scenario

In reality, no single model suits every application scenario, so to unlock the value of generative AI, companies often need access to multiple models and the ability to choose the one that best fits their requirements. To that end, Amazon Bedrock lets overseas customers find and test leading foundation models from AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API. Amazon Cloud Technology also recently announced that all of Anthropic's future foundation models will be available on Amazon Bedrock, with its overseas customers receiving early access to special features such as model customization and fine-tuning. Amazon Bedrock is now adding new foundation models to offer even more choice:

Amazon Titan Embeddings is now generally available: The Amazon Titan foundation models are a family of models created and pre-trained by Amazon Cloud Technology on large datasets to support a wide range of application scenarios. The first of these models to become generally available, Amazon Titan Embeddings is a large language model (LLM) that converts text into numerical representations called embeddings to support retrieval-augmented generation (RAG) scenarios. Foundation models are suited to a wide variety of tasks, but they can only answer questions based on what they learned from their training data and the context supplied in the prompt, which limits their usefulness when a response requires timely knowledge or proprietary data. To improve foundation-model responses with additional data, many companies turn to RAG, a popular customization technique that connects a foundation model to a referenceable knowledge base. To start using RAG, customers first need an embedding model that converts their data into embedding vectors, which make it easier for the foundation model to understand the semantics of the data and the relationships within it. However, building an embedding model requires large amounts of data and resources as well as deep machine learning expertise, so many customers cannot build one themselves and therefore cannot implement RAG. Amazon Titan Embeddings makes it easier for customers to adopt RAG and extend the capabilities of any foundation model with their proprietary data. Amazon Titan Embeddings supports more than 25 languages and context lengths of up to 8,192 tokens, making it well suited to enterprise scenarios involving single words, phrases, or entire documents. The model returns output vectors of 1,536 dimensions, ensuring high accuracy while being optimized for low latency and strong price/performance.
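To make the RAG flow concrete, here is a minimal sketch of calling Amazon Titan Embeddings through the Bedrock runtime API with boto3. The model ID and the request/response field names below are assumptions based on publicly documented conventions, not taken from the article, and the helper name `embed` is purely illustrative:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    """Convert a piece of text into a numerical embedding vector."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",     # assumed Titan Embeddings model ID
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),     # assumed request field name
    )
    payload = json.loads(response["body"].read())
    return payload["embedding"]                   # assumed response field

# In a RAG workflow, documents are embedded once and stored in a vector index;
# at query time the question is embedded, the nearest documents are retrieved,
# and they are passed to the foundation model as additional context.
vector = embed("Return policy for unopened items purchased online.")
print(len(vector))  # expected: 1,536 dimensions, per the article
```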

Llama 2 coming in the next few weeks: Amazon Bedrock is the industry's first fully managed generative AI service to offer Llama 2, Meta's next-generation large language model, through a managed API. The Llama 2 models offer significant improvements over the original Llama models, including training on 40% more data and a longer context length (4,000 tokens) for handling larger documents. The Llama 2 models provided through Amazon Bedrock are optimized to deliver fast responses on Amazon Cloud Technology infrastructure, making them well suited to conversational scenarios. Customers can build generative AI applications powered by the 13-billion- and 70-billion-parameter Llama 2 models without setting up or managing any infrastructure.
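Once the Llama 2 models are live, invoking them should follow the same single-API pattern as other Bedrock models. The sketch below assumes a 13B-parameter chat model ID and a prompt/response payload shape that may differ from the final Bedrock integration; treat it as a hedged illustration rather than the official usage:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID and payload format for Meta Llama 2 13B chat on Bedrock.
response = bedrock_runtime.invoke_model(
    modelId="meta.llama2-13b-chat-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "Suggest three follow-up questions for a customer asking about order tracking.",
        "max_gen_len": 256,   # assumed parameter names
        "temperature": 0.5,
    }),
)
result = json.loads(response["body"].read())
print(result.get("generation", result))  # assumed response field
```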

New capabilities in Amazon CodeWhisperer will further increase developer productivity by allowing customers to securely use private code repositories to customize CodeWhisperer’s code suggestions

Amazon CodeWhisperer is an AI-powered programming assistant trained on billions of lines of Amazon and publicly available code to improve developer productivity. Developers use CodeWhisperer in their daily work, but they often also need to incorporate an enterprise's private code (such as internal APIs, libraries, packages, and classes) into their applications, and that code is not part of CodeWhisperer's training data. Working with internal code is a challenge in itself, since documentation is limited and there are no public resources or forums developers can turn to.

For example, to write a function that removes an item from a shopping cart, a developer must first understand the APIs, collections, and other internal code used to interact with the application. Previously, a developer might spend hours combing through previously written internal code to find the information they needed and understand how it works. Even after finding the right resource, they still need to double-check that the code follows the company's coding best practices and does not replicate any defects or vulnerabilities from existing code.

Amazon CodeWhisperer's new customization capability will unlock the full potential of generative AI for programming by securely leveraging a customer's internal code base and resources to deliver tailored recommendations, giving developers more relevant code suggestions across a wide range of tasks and saving them time. To get started, an administrator connects a private code repository from a source such as GitLab or Amazon S3 and schedules a job to create the customization. When creating the customization, CodeWhisperer uses a variety of models and context-customization techniques to learn from the customer's code base and improve its real-time code suggestions, so developers spend less time hunting for answers to undifferentiated questions and more time creating new, differentiated experiences. Administrators can centrally manage all customizations from the Amazon Cloud Technology console, view evaluation metrics, estimate how each customization will perform, and selectively deploy them to specific developers in the company to restrict access to sensitive code.

By selecting high-quality repositories, administrators can ensure that CodeWhisperer's customized recommendations exclude deprecated code and meet the enterprise's quality and security standards. Built with enterprise-grade security and privacy in mind, the feature keeps customizations completely private, and the underlying foundation model powering CodeWhisperer does not use them in training, protecting customers' valuable intellectual property. The customization capability will soon be available to customers in preview as part of the CodeWhisperer Enterprise tier. In addition, CodeWhisperer customizations are secure by default, and whether customers use the CodeWhisperer Professional or Enterprise tier, Amazon Cloud Technology does not store or log any customer content when processing requests from the developer's IDE.

Amazon QuickSight’s new generative BI authoring capabilities help business analysts easily create and customize data visualizations using natural language commands

Amazon QuickSight is a unified BI service built for the cloud that offers interactive dashboards, paginated reports, embedded analytics, and natural language queries through QuickSight Q, so every user in the enterprise can get the insights they need in their preferred format.

Typically, business analysts spend hours working with business intelligence (BI) tools to explore a variety of data sources, add calculations, and create and refine visualizations before presenting them in dashboards for business stakeholders. To create even a simple chart, an analyst must first find the right data source, identify the data fields, set up filters, and make the adjustments needed for a good visualization.

If visualizing the data requires a new calculation (such as annual sales), the analyst must also determine the required reference data, then create and validate the calculation and add the visual to the report. Businesses benefit when the time analysts spend manually creating and tweaking charts and calculations is reduced, freeing them to devote more time to high-value tasks.

The new generative BI authoring capabilities extend QuickSight Q's natural language querying beyond answering clearly stated questions (e.g., "What are the top 10 best-selling products in California?") to helping analysts quickly create customizable visuals from question fragments (e.g., "top 10 selling products"), clarify the intent of a query through follow-up questions, refine visuals, and complete complex calculations. Business analysts simply describe the result they want, and QuickSight generates a polished visual that can be added to a dashboard or report with just a few clicks.

For example, an analyst can ask QuickSight Q to create a visualization for "Monthly Trends in Sneaker Sales in 2022 and 2023" and the service will automatically select the appropriate data and use the chart format that makes the most sense based on the request (e.g. line graph or bar graph) to plot the required information. QuickSight Q will also provide preset prompt questions to help analysts clarify ambiguities that may arise when multiple data fields match a query (such as whether a chart should include total dollar amounts of sneaker sales or number of units sold).

After analysts obtain the initial visualization content, they can also use natural language to add complex calculations, change the chart type, or optimize the visualization effect. New generative BI authoring capabilities in QuickSight Q enable business analysts to quickly and easily create great visuals that more quickly provide valuable information for large-scale data-driven decisions.

Customers across a wide range of industries are using Amazon Cloud Technology’s generative AI services to create new applications, improve developer efficiency, and help analysts gain insights faster

Adidas is one of the largest sports brands in the world. "We're excited to have participated in the Amazon Bedrock preview and experienced the service firsthand. Amazon Bedrock has been instrumental in building our generative AI tools: by taking on the heavy infrastructure management involved in building generative AI applications, it lets us focus on the core parts of our large language model project," said Daniel Eichten, Vice President of Enterprise Architecture at Adidas. "With Amazon Bedrock we have developed a generative AI solution that lets Adidas's many engineers find the information and answers they need from a knowledge base through a single conversational interface, covering technical questions from entry-level to highly complex."

Merck is an R&D-intensive biopharmaceutical company dedicated to discovering and developing innovative medicines and vaccines that save lives and improve health. "Many manual, time-consuming processes exist across the pharmaceutical value chain; they crowd out more valuable work and prevent data from being used effectively to improve the experience of employees, customers, and patients," said Suman Giri, Executive Director of Data Science at Merck. "With Amazon Bedrock, we quickly built generative AI capabilities that make work such as knowledge mining and market research more efficient. In our U.S. patient analytics workflow, we can use these capabilities to generate insights into patient care, improve quality of life, and expand commercial impact, while closing gaps in data sharing and building a data governance ecosystem for responsible generative AI."

The BMW Group is one of the world's leading manufacturers of cars and motorcycles. "BMW's regional experts work on optimizing inventory across the entire supply chain. They frequently receive requests from stakeholders such as board members and supply chain experts to create new dashboard views so the latest trends can be analyzed," said Christoph Albrecht, data engineering and analytics expert at the BMW Group. "The QuickSight Q authoring experience saves significant time: calculations can be created without looking up references, visuals can be built quickly, and the presentation can then be fine-tuned through natural language. Our business users are impressed by how quickly regional experts respond, which lets them make important decisions faster."

Summary

Through Amazon Cloud Technology's fully managed generative AI services, enterprises of any size can quickly set up infrastructure, flexibly choose models, and get started with customization, helping them fully tap the value of their data and drive business innovation.


