Just now, Google released a foundation world model: 11B parameters, capable of generating an interactive virtual world
Generate a playable game world with one click.
Just two weeks after the last major release, Google's world model has arrived as well, and its capabilities look even more powerful: the virtual worlds it generates are both autonomous and controllable. Just now, Google defined a new paradigm for generative AI: Generative Interactive Environments (Genie). Genie is an 11-billion-parameter foundation world model that can generate playable, interactive environments from a single image prompt.
We can prompt it with images it has never seen before, and then interact with the virtual world of our imagination.
Whether it’s composite images, photos or even hand-drawn sketches, Genie can generate endless playable worlds from them.
Genie consists of three parts: a latent action model that infers the latent action between each pair of frames; a video tokenizer that converts raw video frames into discrete tokens; and a dynamics model that predicts the next frame of the video given a latent action and the tokens of past frames.
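These three pieces can be sketched as a pipeline. The grid size (16x16), codebook size (1024), and number of latent actions (8) below are illustrative choices, not the paper's settings, and simple NumPy stubs stand in for the trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def video_tokenizer(frames):
    # stub standing in for the VQ-VAE: map each frame to a 16x16 grid of
    # discrete tokens drawn from a 1024-entry codebook
    return rng.integers(0, 1024, size=(len(frames), 16, 16))

def latent_action_model(frame_t, frame_t1):
    # stub standing in for the LAM: infer one of 8 discrete latent actions
    # from a pair of consecutive frames
    return int(np.abs(frame_t1 - frame_t).sum() * 100) % 8

def dynamics_model(token_history, action):
    # stub standing in for the dynamics model: predict the next frame's
    # token grid from past tokens and a latent action
    return (token_history[-1] + action) % 1024

frames = [rng.random((64, 64)) for _ in range(2)]
tokens = video_tokenizer(frames)                    # (2, 16, 16)
action = latent_action_model(frames[0], frames[1])  # int in [0, 8)
next_tokens = dynamics_model(tokens, action)        # (16, 16)
print(tokens.shape, action, next_tokens.shape)
```

At inference time only the tokenizer, dynamics model, and a user-chosen action are needed; the latent action model is used during training to discover the action vocabulary.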
Seeing the release of this technology, many people said: Google is coming to lead AI technology again.
Google also proposes that the latent actions learned by Genie can transfer to real, human-designed environments. Based on this hypothesis, Google trained a Genie model on robot videos as a proof of concept for world-model applications in robotics.
Disrupted gaming, design, XR, robotics industries...
We can understand the revolutionary significance of Genie from four dimensions.
First, Genie can learn control without action labels.
Specifically, Genie is trained with a large number of public Internet video data sets without any action label data.
This is challenging because Internet videos rarely carry labels indicating which action is being performed or which part of the image should be controlled, yet Genie is able to learn fine-grained control from Internet videos alone.
Genie not only learns which parts of an observation are generally controllable, it also infers latent actions that are consistent across the generated environments. Note how the same latent action produces similar behavior in different prompt images.
Secondly, Genie can cultivate the next generation of "creators".
Creating a completely new interactive environment from a single image opens the door to a variety of new ways of generating and entering virtual worlds. For example, one can use a state-of-the-art text-to-image model to generate a starting frame, then use Genie to produce a dynamic, interactive environment.
In the following animation, Google used Imagen 2 to generate images, then used Genie to bring them to life:
Genie can do more than that: it can also be applied to creative domains involving human design, such as sketches.
Or, applied to real-world images:
Third, Google believes that Genie is a cornerstone for achieving general-purpose intelligence. Previous research has shown that gaming environments can be effective testbeds for developing AI agents, but they are often limited by the number of games available.
Now with Genie, future AI agents can be trained in an endless curriculum of newly generated worlds. Google presents a proof of concept that the latent actions learned by Genie can transfer to real, human-designed environments.
Finally, Google stated that Genie is a general method that can be applied to multiple domains without requiring any additional domain knowledge.
Although the training data consists mostly of 2D platformer gameplay and robot videos, the method is general, applies to any type of domain, and can scale to ever larger Internet datasets.
Google trained a smaller 2.5B model on action-free robot videos from RT-1. As with the platformers, trajectories with the same latent action sequence usually exhibit similar behavior.
This shows that Genie can learn a consistent action space, which may be suitable for training robots to create generalized embodied intelligence.
Technology Revealed: The paper "Genie: Generative Interactive Environments" has been released
Google DeepMind has released the Genie paper.
Paper address: https://arxiv.org/pdf/2402.15391.pdf
Project homepage: https://sites.google.com/view/genie-2024/home?pli=1
The paper lists six co-first authors, including Chinese scholar Yuge (Jimmy) Shi. She is currently a research scientist at Google DeepMind and received her PhD in machine learning from the University of Oxford in 2023.
Method Introduction
Several components in the Genie architecture are built on the Vision Transformer (ViT). Notably, the quadratic memory cost of the Transformer poses challenges for video, where a single video can contain up to O(10^4) tokens. Google therefore uses a memory-efficient ST-transformer architecture (see Figure 4) in all model components to balance model capacity against computational constraints.
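The memory saving from factorizing attention into spatial and temporal passes can be made concrete with a little arithmetic. In this sketch the frame count and patch-grid size are illustrative, not the paper's exact settings:

```python
def full_attention_cost(num_frames, tokens_per_frame):
    # joint attention over all T*S tokens at once:
    # the score matrix has (T*S)^2 entries
    return (num_frames * tokens_per_frame) ** 2

def st_attention_cost(num_frames, tokens_per_frame):
    # ST-transformer style: spatial attention within each frame
    # (T blocks of S x S scores), then temporal attention per spatial
    # position (S blocks of T x T scores)
    return (num_frames * tokens_per_frame ** 2
            + tokens_per_frame * num_frames ** 2)

T, S = 16, 400  # e.g. 16 frames, each a 20x20 grid of patch tokens
print(full_attention_cost(T, S))  # 40960000 score entries
print(st_attention_cost(T, S))    # 2662400 score entries
```

With these (made-up) sizes, the factorized layout needs roughly 15x fewer attention-score entries per layer, which is what makes video-length sequences tractable.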
Genie contains three key components (as shown in the figure below):
1) a latent action model (LAM), used to infer the latent action between each pair of frames;
2) a video tokenizer, used to convert raw video frames into discrete tokens;
3) a dynamics model, used to predict the next frame of the video given a latent action and the tokens of past frames.
Specifically:
Latent action model: to achieve controllable video generation, Google conditions future-frame prediction on the action taken in the previous frame. However, such action labels are rarely available in Internet videos, and action annotation is costly to obtain. Instead, Google learns latent actions in a fully unsupervised manner (see Figure 5).
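The discrete bottleneck at the heart of such a latent action model can be illustrated with a plain vector-quantization step. The codebook size (8 actions) and embedding dimension (32) here are made up for the example:

```python
import numpy as np

def quantize_action(embedding, codebook):
    # straight-through VQ step: snap a continuous action embedding to the
    # nearest of the learned codes, yielding a discrete latent action id
    dists = np.linalg.norm(codebook - embedding, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 32))     # 8 latent actions, 32-dim codes
# a continuous embedding that happens to lie near code 3
embedding = codebook[3] + 0.01 * rng.normal(size=32)
idx, code = quantize_action(embedding, codebook)
print(idx)
```

Because the bottleneck forces every frame transition through one of only a handful of codes, the model is pushed to discover a small, reusable action vocabulary rather than memorizing per-video motion.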
Video tokenizer: following prior work, Google compresses videos into discrete tokens to reduce dimensionality and enable higher-quality video generation (see Figure 6). The implementation uses a VQ-VAE that takes T frames of video as input and generates a discrete representation for each frame, where D is the size of the discrete latent space. The tokenizer is trained over the entire video sequence with a standard VQ-VAE objective.
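One quantization step of such a tokenizer can be sketched as a nearest-codebook lookup over image patches. The convolutional encoder/decoder is omitted, and the patch size and codebook size are illustrative:

```python
import numpy as np

def tokenize_frame(frame, codebook, patch=4):
    # split the frame into non-overlapping patches and assign each patch
    # the id of its nearest codebook vector (the VQ step of a VQ-VAE,
    # with the learned conv encoder omitted for brevity)
    H, W = frame.shape
    tokens = np.empty((H // patch, W // patch), dtype=int)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            vec = frame[i:i + patch, j:j + patch].ravel()
            tokens[i // patch, j // patch] = np.argmin(
                np.linalg.norm(codebook - vec, axis=1))
    return tokens

rng = np.random.default_rng(2)
codebook = rng.normal(size=(1024, 16))  # codebook with D = 1024 entries
frame = rng.normal(size=(64, 64))       # grayscale stand-in for a video frame
tokens = tokenize_frame(frame, codebook)
print(tokens.shape)
```

A 64x64 frame collapses to a 16x16 grid of integers, which is what lets the dynamics model operate on short discrete sequences instead of raw pixels.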
Dynamics model: a decoder-only MaskGIT transformer (Figure 7).
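MaskGIT-style decoding generates a frame's tokens in a few parallel refinement steps rather than one token at a time. A toy sketch, with random logits standing in for the trained transformer:

```python
import numpy as np

rng = np.random.default_rng(3)

def maskgit_decode(logits_fn, num_tokens, steps=4, vocab=1024):
    # MaskGIT-style parallel decoding: start fully masked, and at each
    # step commit the most confident share of the remaining predictions
    tokens = np.full(num_tokens, -1)  # -1 marks a masked position
    for s in range(steps):
        logits = logits_fn(tokens)    # (num_tokens, vocab)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        pred = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        conf[tokens != -1] = -np.inf  # skip already-committed positions
        remaining = int((tokens == -1).sum())
        keep = max(1, int(np.ceil(remaining / (steps - s))))
        for i in np.argsort(conf)[::-1][:keep]:
            tokens[i] = pred[i]
    return tokens

# toy "dynamics model": random logits in place of the trained transformer
logits_fn = lambda tokens: rng.normal(size=(tokens.size, 1024))
frame_tokens = maskgit_decode(logits_fn, num_tokens=16)
print(frame_tokens)
```

In the real model the logits would also be conditioned on the past frames' tokens and the chosen latent action; here only the confidence-scheduled unmasking loop is shown.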
Genie’s inference process is as follows:
Experimental results
Extension results
To study the scaling behavior of the model, Google ran experiments on models ranging from 41M to 2.7B parameters, exploring the impact of model size and batch size; the experimental results are shown in Figure 9 below.
It can be observed that as the model size increases, the final training loss decreases, a strong indication that the Genie approach benefits from scaling. Increasing the batch size also improves model performance.
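A trend like this is usually summarized as a power law in model size. With made-up loss values shaped like the trend in Figure 9 (these are not the paper's numbers), the fit looks like:

```python
import numpy as np

# hypothetical (parameter count, final loss) pairs, illustrative only
params = np.array([41e6, 130e6, 410e6, 1.3e9, 2.7e9])
loss = np.array([2.10, 1.85, 1.62, 1.48, 1.40])

# fit loss ~ a * params**(-b) by linear regression on log-log axes;
# the slope of the fit is -b
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
print(f"fitted scaling exponent b = {-slope:.3f}")
```

A negative slope on log-log axes is exactly the "loss keeps falling as parameters grow" behavior the article describes; the fitted exponent quantifies how fast.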
Qualitative results
Google presents qualitative experimental results for the 11B-parameter Genie model trained on the Platformers dataset and for a smaller model trained on the Robotics dataset. The results show that Genie can generate high-quality, controllable videos across different domains. Notably, Google evaluates its platformer-trained model qualitatively using only out-of-distribution (OOD) image prompts, demonstrating the robustness of the Genie approach and the value of large-scale training data.
Agent training. Perhaps one day, Genie can be used as a base world model for training multi-task agents. In Figure 14, the authors show that the model can already be used to generate different trajectories in a novel RL environment given a starting frame.
The authors conduct evaluations in CoinRun, a procedurally generated 2D platformer environment, and compare against an oracle behavioral cloning (BC) model with access to expert actions as an upper bound.
Ablation studies. When designing the latent action model, the authors carefully considered the type of input to use. While the final choice was raw images (pixels), they evaluated this choice against the alternative of using tokenized images (replacing x with z in Figure 5). This alternative is called the "token-input" model (see Table 2).
Tokenizer architecture ablation. The authors compared three tokenizer choices: 1) a spatial-only ViT, 2) a spatial-temporal ST-ViViT, and 3) a spatial-temporal C-ViViT (Table 3).