
Meta open-sources a multi-sensory artificial intelligence model that integrates six types of data, including text, audio, and vision

王林 (forwarded)
2023-05-16 09:43:05

Meta has released ImageBind, a new open-source artificial intelligence model that integrates multiple data streams, including text, audio, visual data, and temperature and motion readings. The model is currently just a research project with no direct consumer or practical applications yet, but it points toward future generative AI systems that could create immersive, multi-sensory experiences. The release also reflects Meta's open attitude toward artificial intelligence research at a time when competitors such as OpenAI and Google are becoming increasingly closed.


The core concept of the research is to integrate multiple types of data into a single multidimensional index, or in artificial intelligence terminology, an "embedding space". The concept may sound abstract, but it is the basis of the recent boom in generative artificial intelligence. For example, AI image generators such as DALL-E, Stable Diffusion, and Midjourney rely on systems that tie text and images together during the training phase: they look for patterns in visual data while connecting that information to descriptions of the images. This is why these systems can generate images from a user's text prompt, and many AI tools that generate video or audio work the same way.
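
To make the idea concrete, here is a minimal NumPy sketch of a CLIP-style contrastive objective, the general kind of mechanism that ties text and images together in a shared embedding space. It is illustrative only, not Meta's code: the batch size, dimensionality, and the 0.07 temperature are arbitrary choices, and the random vectors stand in for the outputs of real trained encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 64  # illustrative sizes, not real model settings

# Stand-ins for encoder outputs: in a real system a text encoder and an
# image encoder each produce one vector per item, all in the SAME space.
text_emb = rng.normal(size=(batch, dim))
image_emb = rng.normal(size=(batch, dim))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)

# Pairwise cosine similarities: entry [i, j] compares caption i with image j.
logits = text_emb @ image_emb.T / 0.07  # 0.07 is a commonly used temperature

# A contrastive loss pushes the diagonal (matching caption/image pairs) up
# and every mismatched pair down -- this is what "ties text and images
# together" during training.
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
print(f"contrastive loss for this random batch: {loss:.3f}")
```

Minimizing this loss pulls each caption's embedding toward its matching image and away from the other images in the batch, which is what later allows the trained space to be queried with text alone.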

Meta says its model ImageBind is the first to integrate six types of data into a single embedding space. The six types of data include: visual (including images and videos); thermal (infrared images); text; audio; depth information; and, the most interesting of all, motion readings produced by an inertial measurement unit (IMU). (IMUs are found in phones and smartwatches and are used to perform a variety of tasks, from switching a phone from landscape to portrait to distinguishing between different types of movement.)
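
Because every modality lands in one shared space, a query from any one of the six data types can be matched against items from any other. The sketch below shows that cross-modal retrieval mechanic with toy vectors; the embeddings, item names, and dimensionality are hypothetical stand-ins for what per-modality encoders such as ImageBind's would produce.

```python
import numpy as np

DIM = 32  # shared embedding dimensionality (illustrative)
MODALITIES = ["vision", "thermal", "text", "audio", "depth", "imu"]

def fake_embedding(seed: int) -> np.ndarray:
    # Toy stand-in for a trained encoder; the seed lets us pretend that data
    # describing the same moment maps to the same point in the shared space.
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

# A small "database" of audio clips indexed by their shared-space embeddings.
audio_db = {
    "waves": fake_embedding(0),
    "dog_bark": fake_embedding(1),
    "engine": fake_embedding(2),
}

def retrieve(query_emb: np.ndarray, db: dict) -> str:
    # Nearest neighbour by cosine similarity -- this works across modalities
    # precisely because everything lives in one embedding space.
    return max(db, key=lambda name: float(query_emb @ db[name]))

# Query with an "image" embedding (seed 0 stands in for a photo of the ocean)
# and get back the closest-sounding audio clip.
image_query = fake_embedding(0)
print(retrieve(image_query, audio_db))  # -> "waves"
```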

Future AI systems will be able to cross-reference this data in the same way current systems cross-reference text input. Imagine, for example, a future virtual reality device that generates not only audio and visual input but also your surroundings and movement on a physical stage. You could ask it to simulate a long sea voyage, and it would not only place you on a ship with the sound of waves in the background, but also let you feel the deck rocking under your feet and the sea breeze blowing.

Meta noted in a blog post that future models could also add other sensory input streams, including "tactile, speech, odor, and brain fMRI signals." The company also claims that this research "brings machines closer to the human ability to learn from many different forms of information simultaneously, comprehensively, and directly."

Of course, much of this is speculative, and the direct applications of this research will likely be very limited. Last year, for example, Meta demonstrated an AI model capable of generating short, blurry videos from text descriptions. Research like ImageBind shows how future versions of such systems could incorporate other data streams, for example generating audio that matches the video output.

The research is also interesting to industry observers because, as IT House noted, Meta has open-sourced the underlying model, a practice that is drawing increasing attention in the field of artificial intelligence.
