How does the Metaverse “feed” artificial intelligence models?
The virtual world is made up of many moving parts: multiple data types, interfaces and artificial intelligence models. 3D interfaces carry data types with time- and space-related attributes, which are essential for capturing and analyzing past trends and for predicting future ones.
This visual simulation technology has already been applied in major projects such as DeepMind’s AlphaFold, an AI research project that predicts the 3D structures of more than 200 million known proteins. Protein folding is fundamental to drug discovery, and AlphaFold has been used in medical research on COVID-19 treatments. In the field of high-performance computing, the Metaverse gives researchers the conditions to collaborate in virtual simulations.
Nvidia, one of the biggest proponents of the Metaverse, is promoting the concept through a product called Omniverse, which includes a suite of artificial intelligence, software and vision technologies for research and scientific modeling.
Nvidia has been vague about what its Omniverse products can do, but it recently revealed some details. The platform uses a complex set of technologies to collect, organize, translate and correlate data, which is ultimately gathered into datasets. Artificial intelligence models then analyze these datasets and produce visual models for scientific applications, such as models for understanding planetary trends or developing drugs.
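To make that collect–organize–translate–correlate flow concrete, here is a minimal, purely conceptual sketch. Every function name and the toy records are hypothetical stand-ins, not Omniverse APIs; the point is only the shape of the pipeline that turns heterogeneous readings into one dataset.

```python
# Conceptual sketch of the pipeline described above. All names and the toy
# records are hypothetical; real sources would be satellites and sensors.

def collect(sources):
    """Gather raw records from every source."""
    return [record for source in sources for record in source()]

def organize(records):
    """Group records by observation time."""
    by_time = {}
    for r in records:
        by_time.setdefault(r["t"], []).append(r)
    return by_time

def translate(by_time):
    """Normalize units so records from different sources are comparable."""
    for records in by_time.values():
        for r in records:
            if r.get("unit") == "F":
                r["value"] = (r["value"] - 32) * 5 / 9
                r["unit"] = "C"
    return by_time

def correlate(by_time):
    """Collapse each timestep into one dataset row: (time, mean value)."""
    return [(t, sum(r["value"] for r in rs) / len(rs))
            for t, rs in sorted(by_time.items())]

def satellite():  # hypothetical source reporting in Fahrenheit
    return [{"t": 0, "value": 59.0, "unit": "F"}]

def buoy():       # hypothetical source reporting in Celsius
    return [{"t": 0, "value": 14.8, "unit": "C"}]

dataset = correlate(translate(organize(collect([satellite, buoy]))))
print(dataset)  # -> [(0, 14.9)] (approximately)
```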
In the platform’s latest collaborative use case, the National Oceanic and Atmospheric Administration will use technology from Omniverse and Lockheed Martin to visualize climate and weather trend data, which will then be made available to researchers for forecasting and other work.
The information collected by Lockheed Martin’s OR3D platform is central to visualizing weather and climate data, and includes data from satellites, ocean observations, past atmospheric trends and sensors. That data is specific to the OR3D file format, so a "connector" converts it into file types based on the Universal Scene Description (USD) format, as sketched below.
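A connector of this kind might look like the following skeleton. OR3D is a proprietary format, so `parse_or3d()` is a hypothetical stub and the record fields are invented; the USD-writing half, however, uses the real open-source `pxr` (OpenUSD) Python API.

```python
# Hypothetical OR3D-to-USD connector skeleton. parse_or3d() and its record
# layout are invented; the pxr calls are the standard OpenUSD Python API.
from pxr import Usd, UsdGeom, Sdf, Gf

def parse_or3d(path):
    """Stub for the proprietary OR3D reader; yields hypothetical records."""
    yield {"name": "Station_042", "pos": (12.0, 0.0, -3.5), "temperature": 21.5}

def convert_to_usd(or3d_path, usd_path):
    stage = Usd.Stage.CreateNew(usd_path)
    UsdGeom.Xform.Define(stage, "/World")
    for rec in parse_or3d(or3d_path):
        xform = UsdGeom.Xform.Define(stage, f"/World/{rec['name']}")
        xform.AddTranslateOp().Set(Gf.Vec3d(*rec["pos"]))  # spatial attribute
        attr = xform.GetPrim().CreateAttribute("temperature",
                                               Sdf.ValueTypeNames.Float)
        attr.Set(rec["temperature"])  # custom data carried into the scene
    stage.GetRootLayer().Save()

convert_to_usd("observations.or3d", "observations.usda")
```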
The USD file format has operators that combine data such as position, orientation, color, materials and layers into a single 3D file. Converting to USD matters because it allows visualization files to be shared and multiple users to collaborate, an important consideration in virtual worlds (see the layering sketch after this paragraph). USD also acts as an intermediate representation that breaks the different types of data in an OR3D file down into raw input for artificial intelligence models.
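The collaboration claim rests on USD’s layer composition, which the following small sketch illustrates with the real `pxr` API; the file names are illustrative, and "observations.usda" is the hypothetical output of the connector above.

```python
# Sketch of USD layering: a shared base layer plus a per-user edit layer
# composed into one stage. File names are illustrative only.
from pxr import Sdf, Usd

# Each collaborator can keep their edits in a separate layer...
Sdf.Layer.CreateNew("forecaster_edits.usda").Save()

# ...and a root layer composes them. Earlier sublayers are stronger, so the
# forecaster's edits override the shared base data without modifying it.
root = Sdf.Layer.CreateNew("session.usda")
root.subLayerPaths.append("forecaster_edits.usda")
root.subLayerPaths.append("observations.usda")  # base layer from the connector
root.Save()

stage = Usd.Stage.Open("session.usda")  # every user sees the composed result
```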
Data types can include the temporal and spatial elements of 3D imagery, which is particularly important for visualizing climate and weather data. For example, past weather trends must be captured at second or minute resolution and mapped by temporal correlation.
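USD expresses exactly this with time samples: one attribute can hold a value per time code, so a past trend can be stored and replayed. Here is a small sketch using the real `pxr` API; the prim path, attribute name and readings are hypothetical.

```python
# Sketch of USD time samples for temporal data; names and values are invented.
from pxr import Usd, Sdf

stage = Usd.Stage.CreateNew("trend.usda")
stage.SetTimeCodesPerSecond(1)  # interpret one time code as one second
stage.SetStartTimeCode(0)
stage.SetEndTimeCode(2)

prim = stage.DefinePrim("/World/Station_042")
temp = prim.CreateAttribute("temperature", Sdf.ValueTypeNames.Float)
for t, value in [(0, 21.5), (1, 21.7), (2, 22.1)]:
    temp.Set(value, time=Usd.TimeCode(t))  # one sample per time code
stage.GetRootLayer().Save()
```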
An NVIDIA tool called Nucleus is the main engine of Omniverse: it converts OR3D files into USD files and handles the runtime, physics simulation, and data mapping from other file formats.
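The article does not describe Nucleus’s programming interface, so the following is only an assumption about access patterns: Nvidia’s Omniverse documentation addresses stages hosted on a Nucleus server by omniverse:// URL, opened through the standard OpenUSD API once the Omniverse client libraries are installed. The server and path here are invented.

```python
# Assumption: requires Nvidia's Omniverse client libraries to resolve the
# omniverse:// scheme; the server name and path are invented for illustration.
from pxr import Usd

stage = Usd.Stage.Open(
    "omniverse://nucleus.example.com/Projects/climate/observations.usd")
```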
The dataset for artificial intelligence can include weather data that is updated in real time and then fed into the model. NVIDIA's multi-step process for getting raw image data into USD is complex but scalable: it supports many data types and is considered more practical than API connectors, which are application-specific and cannot scale across the different data types in a single complex model.
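To make the "USD as raw input for AI models" step concrete, this last sketch reads the hypothetical time-sampled temperature attribute written earlier into a plain NumPy array, the form most training code expects. The file, prim and attribute names are assumptions carried over from the sketches above.

```python
# Sketch: extract a time series from USD into a tensor for model training.
import numpy as np
from pxr import Usd

stage = Usd.Stage.Open("trend.usda")
attr = stage.GetPrimAtPath("/World/Station_042").GetAttribute("temperature")

# Flatten the per-time-code samples into a feature vector.
times = attr.GetTimeSamples()
series = np.array([attr.Get(time=Usd.TimeCode(t)) for t in times],
                  dtype=np.float32)
print(times, series)  # e.g. [0.0, 1.0, 2.0] [21.5 21.7 22.1]
```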
The advantage of the USD file format is that it can handle the different types of data collected from satellites and sensors in real time, which helps build more accurate artificial intelligence models. USD files can also be shared, which makes their data extensible to other applications.