
Microsoft wins! Billions of text-image pair training, multi-modal Florence starts free trial, available on Azure

WBOY
2023-04-15 08:43:02

In November 2021, Microsoft released Florence, a multi-modal vision foundation model that swept more than 40 benchmark tasks and was readily applicable to classification, object detection, VQA, image captioning, video retrieval, action recognition, and other tasks.

After a year and a half, Florence has officially entered its commercial phase!

What can Florence do?

Recently, Xuedong Huang, Microsoft's Chief Technology Officer for AI, officially announced the public preview of Microsoft's Florence foundation model.

The Florence model has been trained on billions of text-image pairs and has been integrated into Azure Cognitive Services for Vision. It meets production requirements in both price and performance, and is currently in a free trial phase.


The improved vision services enable developers to create cutting-edge, market-ready, and responsible computer vision applications across diverse industries. Customers can seamlessly digitize, analyze, and connect their data through natural language interactions to extract more precise information from image and video content, protect users from harmful content, enhance security, and speed up incident response.

Florence’s actual capabilities are also very powerful, and users can experience it “out of the box” in Vision Studio.


Experience URL: https://portal.vision.cognitive.azure.com/gallery/featured

Specific capabilities include:

Dense Captions: Automatically provide rich descriptions to support digital content, including design suggestions, accessible alternative text, search engine optimization, smart photo management, and more.

Image retrieval: Use natural language queries to seamlessly measure similarities between images and text to improve search recommendations and ads.

Background Removal: People and objects can be easily separated from the original background and replaced with other background scenes, thereby changing the look and feel of the image.

Model customization: Reduce the cost and time of delivering custom models that match unique business needs with greater accuracy, even when only a small number of training images is available.

Video Summary: Search and interact with video content in the same intuitive way humans think and write. It helps find relevant content and requires no additional metadata.
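These features are exposed through the Azure Image Analysis REST API. As a hedged sketch only: the endpoint path, `api-version` string, and feature names below reflect the preview API as documented at the time of writing and should be verified against the current Azure documentation before use.

```python
import json
import urllib.request

def build_analyze_request(resource_endpoint, api_key, image_url,
                          features=("denseCaptions",)):
    """Build (but do not send) an HTTP request for Azure's Image Analysis
    preview API. The path, api-version, and feature names are assumptions
    based on preview documentation; check the Azure docs before relying
    on them."""
    url = (f"{resource_endpoint}/computervision/imageanalysis:analyze"
           f"?api-version=2023-02-01-preview&features={','.join(features)}")
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers,
                                  method="POST")

# Example (not executed here): send the request and print dense captions.
# req = build_analyze_request("https://<resource>.cognitiveservices.azure.com",
#                             "<key>", "https://example.com/photo.jpg")
# with urllib.request.urlopen(req) as resp:
#     for caption in json.load(resp)["denseCaptionsResult"]["values"]:
#         print(caption["text"])
```

Keeping request construction separate from the network call makes the sketch easy to adapt once the exact endpoint and response schema are confirmed in Vision Studio or the API reference.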

Reddit

Tiffany Ong, a consumer product manager at Reddit, said that Microsoft's Vision technology makes it easier for users to discover and understand content on Reddit.

Newly created image descriptions will make Reddit more accessible to users. The descriptions help improve search results for posts, giving Reddit users more opportunities to explore images on the site, join conversations, and ultimately build connections and a sense of community.

Florence’s ability to generate up to 10,000 tags per image gives Reddit more control over the number of objects in an image and helps generate better image descriptions.

Microsoft 365

Beyond its data centers, Microsoft is also bringing improved Vision service capabilities to Microsoft 365 applications, including Teams, PowerPoint, Outlook, Word, Designer, and OneDrive.

With the help of image segmentation capabilities, Teams is driving innovation in the digital space and taking the virtual meeting experience to new heights.

PowerPoint, Outlook, and Word improve accessibility with automatically generated image descriptions used as alternative text.

Microsoft Designer and OneDrive are simplifying image discoverability and editing with improved image descriptions, image search, and background generation.

Microsoft data centers are leveraging Vision Services to enhance security and infrastructure reliability.

LinkedIn

Jennison Asuncion, director of accessibility engineering at LinkedIn, said that more than 40% of posts on LinkedIn contain at least one image, which matters especially for blind or low-vision users. Vision services give all users equal access to that content and enable them to participate in online conversations.


With Azure Cognitive Services for Vision, LinkedIn can provide automatic image descriptions and support for editing alternative text, which is a new experience.

Asuncion is not the only one excited about this: his colleagues just shared a photo of themselves attending an event, with LinkedIn CEO Ryan Roslansky in the picture.

Innovate Responsibly

Review Microsoft's Responsible AI Principles to learn how the company is committed to developing AI systems that make the world more accessible.


Microsoft is committed to helping organizations make the most of artificial intelligence, and is investing heavily in programs that provide technology, resources, and expertise to empower those working to create a more sustainable, safer, and more accessible world.

Multimodality is the future

Many technology giants, including Microsoft and Google, are surprisingly aligned on the direction of AI development: they believe "multimodal models" are the best way to improve the capabilities of AI systems. A single model that can simultaneously understand language, images, video, and audio can complete tasks that single-modal models cannot, such as adding text descriptions to videos.


Why not string together several "single-modal" models to achieve the same purpose, such as using one model to understand images and another to understand language?

The first reason is that, with the contextual information provided by other modalities, multimodal models can outperform single-modal models on the same task in some situations.

For example, an AI assistant that understands images, pricing data, and purchase history can provide better personalized product recommendations than an AI that “only understands pricing data.”

And from a computational perspective, multimodal models are often more efficient, which increases data-processing speed and reduces back-end costs.

There is no doubt that all business companies are eager to reduce costs and increase efficiency.

Florence can understand images, video, and language, as well as the relationships between these modalities, so it can do tasks a single modality cannot, such as measuring the similarity between images and text, or segmenting objects in a photo and pasting them onto another background.
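Image-text similarity of this kind is typically computed by embedding both modalities into a shared vector space and comparing directions. Florence's internals are not public, so the toy embedding vectors below are purely illustrative, but the cosine-similarity math is the standard technique:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors: the cosine of the
    angle between them, 1.0 for identical directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d embeddings; real multimodal embeddings are high-dimensional.
image_vec = [0.9, 0.1, 0.3]            # e.g. an image of a dog
text_match = [0.8, 0.2, 0.4]           # e.g. the caption "a dog"
text_other = [-0.7, 0.5, -0.1]         # e.g. an unrelated caption

# A matching caption should score higher than an unrelated one.
assert cosine_similarity(image_vec, text_match) > \
       cosine_similarity(image_vec, text_other)
```

Ranking candidate captions (or candidate images for a text query) by this score is what powers natural-language image retrieval.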

Almost all AI model training faces data-copyright issues. John Montgomery, corporate vice president (CVP) of Azure AI, did not reveal much when asked about Florence's training data; he said only that Florence used "responsibly acquired" data sources, including data from partners. Montgomery added that potentially problematic content has been removed from the training data, which is also a common practice for public training datasets.


Montgomery believes that when using a large base model, the most important things are to ensure the quality of the training dataset and to create a basis for adapting the model to each vision task. Microsoft tests the tuned models for each vision task against fairness, adversarial, and challenging cases, and implements the same content-moderation service as the Azure OpenAI Service and DALL-E.

In the future, customers will be able to use Florence for even more, such as detecting defects in manufacturing processes and enabling self-checkout in retail stores.

However, Montgomery points out that these use cases don’t actually require a multimodal vision model, but he asserts that multimodality can add something valuable in the process.

Florence is a "completely rethought" visual model that opens up a whole new world of unknown possibilities once a simple and high-quality translation process is achieved between images and text.

Customers can experience significantly improved image search, combine image and vision models with other model types such as language and speech to build entirely new kinds of applications, and easily improve the quality of custom models.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.