Tsinghua University, Huawei, and others propose iVideoGPT, specializing in interactive world models

iVideoGPT meets world models' need for a high degree of interactivity.

Generative models have made significant progress in recent years, and video generation is becoming a new frontier. An important application of these generative video models is to learn, in an unsupervised manner, from diverse Internet-scale data in order to build predictive world models. Such world models are expected to accumulate common-sense knowledge about how the world works, enabling predictions of potential future outcomes conditioned on the behavior of agents.

By leveraging these world models, reinforcement-learning agents can imagine, reason, and plan inside the model, and thus acquire new skills in the real world more safely and efficiently with only a few real-world trials.

Despite the fundamental connection between generative models and world models, a significant gap remains between the development of generative models for video generation and of world models for agent learning. One of the main challenges is how to strike the best balance between interactivity and scalability.

In the field of model-based reinforcement learning, world models mainly use recurrent network architectures. This design facilitates interactive behavior learning by passing observations or latent states conditioned on the action at each step. However, these models mostly focus on games or simulated environments with simple data, and have limited ability to model large-scale, complex in-the-wild data.

In contrast, Internet-scale video generation models can synthesize realistic long videos that are controllable through text descriptions or future action sequences. While such models allow high-level, long-horizon planning, their trajectory-level interactivity does not give agents sufficient granularity to effectively learn precise behaviors as a fundamental skill.

Researchers from Tsinghua University, Huawei Noah's Ark Lab, and Tianjin University proposed iVideoGPT (Interactive VideoGPT), a scalable autoregressive Transformer framework. It integrates multi-modal signals (visual observations, actions, and rewards) into a sequence of tokens, enabling an agent to have interactive experiences by predicting the next token.

iVideoGPT uses a novel compressive tokenization technique to efficiently discretize high-dimensional visual observations. Leveraging its scalable architecture, the researchers were able to pre-train iVideoGPT on millions of human and robot manipulation trajectories, establishing a versatile foundation that can serve as an interactive world model for a variety of downstream tasks. This work advances the development of interactive general world models.
  • Paper address: https://arxiv.org/pdf/2405.15223
  • Paper title: iVideoGPT: Interactive VideoGPTs are Scalable World Models

Method

In this section, the research team introduces a scalable world-model architecture, iVideoGPT, which is highly flexible and can integrate multi-modal information, including visual observations, actions, rewards, and other potential inputs.

The core of iVideoGPT consists of a compressive tokenizer that discretizes video frames and an autoregressive transformer that predicts subsequent tokens. Through pre-training on diverse video data, the model can acquire extensive world knowledge and then be transferred efficiently to downstream tasks.
[Figure 3: overview of the iVideoGPT architecture]
Architecture

Compressive tokenization. Transformers perform particularly well on sequences of discrete tokens. VQGAN is a commonly used visual tokenizer that converts raw pixels into discrete tokens. The researchers proposed a novel conditional VQGAN consisting of dual encoders and decoders {(E_c, D_c), (E_p, D_p)} to tokenize videos.

As shown in Figure 3a, the initial context frame contains rich contextual information and is tokenized and reconstructed independently using N tokens:

[Equation: the context frame is encoded into N discrete tokens by E_c and reconstructed by D_c]

In contrast, because of the temporal redundancy between context frames and future frames, only the necessary change information, such as the positions and poses of moving objects, needs to be encoded for future frames. This is achieved with the conditional encoder and decoder:

[Equation: each future frame is encoded into n tokens by E_p and decoded by D_p, both conditioned on the context]

The researchers implement the conditioning mechanism with cross-attention between multi-scale feature maps. Overall, the tokenizer is trained with the following objective:

[Equation: tokenizer training objective, combining the losses of the context and conditional branches]
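To make the two-branch design concrete, below is a minimal PyTorch-style sketch of the tokenization flow, assuming a 64×64 input, a single cross-attention layer, and heavily simplified encoders; module names, resolutions, and token counts are illustrative assumptions and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup (training losses and the
    straight-through gradient trick are omitted for brevity)."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                                    # z: (B, L, D)
        diff = z.unsqueeze(2) - self.codebook.weight.view(1, 1, -1, z.size(-1))
        idx = diff.pow(2).sum(-1).argmin(-1)                 # (B, L) discrete token ids
        return idx, self.codebook(idx)                       # ids and quantized features

class ConditionalTokenizer(nn.Module):
    """Dual encoder/decoder tokenizer: the context frame is tokenized on its own,
    future frames are tokenized conditioned on context features (decoders omitted)."""
    def __init__(self, dim=64):
        super().__init__()
        self.E_c = nn.Conv2d(3, dim, kernel_size=4, stride=4)    # 64x64 -> 16x16 = 256 tokens
        self.E_p = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # 64x64 ->  4x4  =  16 tokens
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.quant = VectorQuantizer(dim=dim)

    def encode_context(self, x_ctx):                         # x_ctx: (B, 3, 64, 64)
        z = self.E_c(x_ctx).flatten(2).transpose(1, 2)        # (B, 256, D)
        return self.quant(z)

    def encode_future(self, x_fut, ctx_feats):               # x_fut: (B, 3, 64, 64)
        z = self.E_p(x_fut).flatten(2).transpose(1, 2)        # (B, 16, D)
        # condition future-frame features on the context via cross-attention
        z, _ = self.cross_attn(query=z, key=ctx_feats, value=ctx_feats)
        return self.quant(z)

tok = ConditionalTokenizer()
x0, x1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
ctx_ids, ctx_feats = tok.encode_context(x0)     # N = 256 tokens for the context frame
fut_ids, _ = tok.encode_future(x1, ctx_feats)   # n = 16 tokens for one future frame
print(ctx_ids.shape, fut_ids.shape)             # torch.Size([1, 256]) torch.Size([1, 16])
```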

The proposed tokenization has two main benefits:

  • First, it significantly reduces the sequence length of the tokenized video: the length still grows linearly with the number of frames, but at a much smaller per-frame rate n (see the worked example after this list);
  • Second, thanks to the conditional encoding, the transformer that predicts subsequent tokens can more easily maintain the temporal consistency of the context and focus on modeling the essential dynamics.
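As a rough, worked illustration of the first benefit (N, n, and T below are assumed values, not the paper's exact settings):

```python
# Illustrative token counts only; N, n, and T are assumed values.
N, n, T = 256, 16, 16               # context tokens, tokens per future frame, total frames
full = T * N                        # tokenizing every frame independently
compressed = N + (T - 1) * n        # one richly tokenized context frame + compressed future frames
print(full, compressed)             # 4096 vs. 496 -- roughly an 8x shorter sequence
```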

Interactive prediction with the Transformer. After tokenization, the video is flattened into a sequence of tokens:

[Equation: the flattened token sequence, interleaving the tokens of each frame]

Its length is the sum of the per-frame token counts: N for the context frame plus n for each subsequent frame. Special slot tokens [S] are inserted to delineate frame boundaries and to facilitate the fusion of additional low-dimensional modalities such as actions. As shown in Figure 3b, a GPT-like autoregressive transformer is used for interactive video prediction, generating next tokens frame by frame. In this work, the team used a GPT-2-sized model but adopted the LLaMA architecture in order to benefit from recent innovations in LLM architectures, such as rotary position embedding.
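A minimal sketch of this flattening step is shown below; the reserved slot-token id and the exact interleaving order are illustrative assumptions, not the authors' token layout.

```python
import torch

SLOT_ID = 0   # hypothetical id reserved for the special [S] slot token

def flatten_video(ctx_tokens, future_tokens):
    """ctx_tokens: (N,) ids of the context frame; future_tokens: list of (n,) id tensors.
    Returns a single 1-D sequence with [S] marking each frame boundary, ready for
    frame-by-frame next-token prediction by the autoregressive transformer."""
    pieces = [ctx_tokens]
    for frame in future_tokens:
        pieces.append(torch.tensor([SLOT_ID]))   # boundary; actions can be fused at this position
        pieces.append(frame)
    return torch.cat(pieces)

seq = flatten_video(torch.randint(1, 512, (256,)),
                    [torch.randint(1, 512, (16,)) for _ in range(3)])
print(seq.shape)   # torch.Size([307]) = 256 + 3 * (1 + 16)
```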

Pre-training

Large language models can acquire extensive knowledge from Internet text in a self-supervised way through next-word prediction. Similarly, the action-free video pre-training paradigm for world models uses video prediction as the pre-training objective, providing Internet-scale supervision for the physical-world knowledge that LLMs lack.

The researchers pre-trained iVideoGPT on this general objective, applying a cross-entropy loss to predict subsequent video tokens:

[Equation (3): cross-entropy loss for next-token prediction over the video token sequence]
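In code, this objective is ordinary shifted cross-entropy over the flattened video tokens. The sketch below assumes a generic transformer that returns per-position logits; whether the loss is restricted to future-frame tokens is a detail not specified here.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits, tokens):
    """logits: (B, L, V) transformer outputs; tokens: (B, L) ground-truth video token ids.
    Position t is trained to predict token t+1 (standard next-token cross-entropy)."""
    pred = logits[:, :-1].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(pred, target)

B, L, V = 2, 307, 512
loss = next_token_loss(torch.randn(B, L, V), torch.randint(0, V, (B, L)))
print(loss.item())
```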

Pre-training data. Although a large number of videos are available on the Internet, computational constraints led the researchers to pre-train iVideoGPT specifically for the domain of robotic manipulation. They used a mixture of 35 datasets from the Open X-Embodiment (OXE) collection and the Something-Something v2 (SSv2) dataset, totaling 1.5 million trajectories.

Fine-tuning

Action conditioning and reward prediction. The architecture is designed to flexibly integrate additional modalities to learn an interactive world model, as shown in Figure 3b. Actions are integrated via a linear projection and added to the slot-token embeddings. For reward prediction, instead of learning a separate reward predictor, a linear head is added on the hidden state of the last token of each observation.

This multi-task learning approach enhances the model's attention to task-relevant information, improving prediction accuracy for control tasks. In addition to the cross-entropy loss of Equation (3), a mean-squared-error loss is used for reward prediction.
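A hedged sketch of these two fine-tuning additions follows: a linear action projection added to the slot-token embeddings and a linear reward head on the last hidden state of each observation. Dimensions, module names, and the loss combination are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionRewardAdapters(nn.Module):
    """Illustrative adapters: a linear action projection added to the slot-token
    embeddings, and a linear reward head on the last hidden state of each observation."""
    def __init__(self, action_dim=7, hidden_dim=768):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, hidden_dim)
        self.reward_head = nn.Linear(hidden_dim, 1)

    def fuse_actions(self, slot_embeds, actions):
        # slot_embeds: (B, T-1, H) embeddings of the [S] tokens; actions: (B, T-1, A)
        return slot_embeds + self.action_proj(actions)

    def predict_reward(self, last_obs_hidden):
        # last_obs_hidden: (B, T-1, H) hidden state of the final token of each observation
        return self.reward_head(last_obs_hidden).squeeze(-1)

adapters = ActionRewardAdapters()
slot_embeds, hidden = torch.randn(2, 4, 768), torch.randn(2, 4, 768)
actions, rewards = torch.randn(2, 4, 7), torch.randn(2, 4)
fused = adapters.fuse_actions(slot_embeds, actions)        # fed back into the transformer
reward_loss = F.mse_loss(adapters.predict_reward(hidden), rewards)
# total fine-tuning loss = next-token cross-entropy of Eq. (3) + reward_loss
print(fused.shape, reward_loss.item())
```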

Tokenizer adaptation. The research team chose to update the full model, including the tokenizer, when adapting to downstream tasks, and found this strategy more effective than parameter-efficient fine-tuning methods.

Little prior work has explored adapting a VQGAN tokenizer to domain-specific data. In this work, since tokenization decouples dynamic information from contextual conditions, the researchers hypothesize that although the model may encounter unseen objects in downstream tasks, such as different types of robots, the basic physical knowledge the transformer learns from diverse scenarios, such as motion and interaction, is shared across domains.

This hypothesis is supported by experiments in which iVideoGPT was transferred from the mixed pre-training data to the unseen BAIR dataset: the pre-trained transformer generalizes zero-shot to predict natural motion, and only the tokenizer needs to be fine-tuned for the unseen robot gripper (see Figure 7). This property is especially important for scaling GPT-like transformers to large sizes, enabling lightweight cross-domain alignment while keeping the transformer intact.
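The adaptation strategy suggested by Figure 7 can be mimicked with a generic selective fine-tuning recipe: freeze the pre-trained transformer and update only the tokenizer on the downstream domain. This is a sketch under those assumptions, not the authors' training code; `tokenizer` and `transformer` stand for the modules sketched earlier.

```python
import torch

def tokenizer_only_optimizer(tokenizer, transformer, lr=1e-4):
    """Freeze the pre-trained transformer so its shared dynamics knowledge stays intact,
    and fine-tune only the tokenizer on the new domain (e.g., an unseen robot gripper)."""
    for p in transformer.parameters():
        p.requires_grad = False
    return torch.optim.AdamW(tokenizer.parameters(), lr=lr)

# usage sketch (names are hypothetical):
# opt = tokenizer_only_optimizer(tok, gpt)
# loss = tokenizer_reconstruction_loss(batch)   # downstream tokenizer loss
# loss.backward(); opt.step()
```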
Experiment

As shown in Table 1, iVideoGPT demonstrates performance competitive with state-of-the-art methods while enabling interactivity and scalability in its architecture. Although preliminary experiments were conducted at a low resolution of 64×64, iVideoGPT can easily be extended to 256×256 on RoboNet.
[Table 1: video prediction results compared with state-of-the-art methods]
See Figure 9 for qualitative results.
[Figure 9: qualitative video prediction results]
Figure 4 shows the success rates of iVideoGPT compared with the baselines. iVideoGPT significantly outperforms all baselines on both RoboDesk tasks and achieves average performance comparable to the strongest model, SVG'.
[Figure 4: success rates of iVideoGPT versus baselines]
Figure 6 shows that the model-based algorithm not only improves sample efficiency over model-free algorithms, but also matches or exceeds the performance of DreamerV3.
[Figure 6: learning curves of the model-based algorithm versus model-free baselines and DreamerV3]
The researchers then analyze the zero-shot video prediction ability of the large-scale pre-trained iVideoGPT on the unseen BAIR dataset. Interestingly, in the second row of Figure 7, iVideoGPT predicts the natural motion of a robot gripper without any fine-tuning, even though the gripper differs from those in the pre-training data. This shows that, although zero-shot generalization to entirely unseen robots is limited by insufficient diversity in the pre-training data, the model effectively separates scene context from motion dynamics. In contrast, with the adapted tokenizer, the non-fine-tuned transformer successfully transfers its pre-trained knowledge and predicts the motion of the novel robot in the third row, with perceptual quality similar to the fully fine-tuned transformer in the fourth row; quantitative results are shown in Figure 8a.
[Figure 7: zero-shot and tokenizer-adapted predictions on the BAIR dataset]
For more results, please refer to the original paper.
