A Tsinghua-based start-up releases the world's first 4D skeletal animation framework, which can convert real scenes into animation with one click and generate personalized characters
Apple recently announced that its first mixed-reality headset, Vision Pro, will launch on February 2. This XR device is expected to drive the rapid development of the next generation of terminals. As virtual display devices become widespread, digital interaction will move from two dimensions to three, and 3D models and 3D animation will become the mainstream content forms of the future. Multi-dimensional immersive interaction will also become a trend as the virtual and the real converge.
However, in terms of data scale, the content industry's accumulated data at this stage is still mainly 2D images and flat video, while the foundation of 3D models, 4D animations and other spatial data remains relatively weak. 4D animation adds a time dimension to the traditional 3D model (that is, a 3D model that changes over time), enabling dynamic three-dimensional effects. It is widely used in game animation, film special effects, virtual reality and other fields, but it is also currently the most difficult link in the content ecosystem to develop.
Therefore, for the upcoming multi-dimensional immersive experience, building three-dimensional digital content will become an important basic work.
Facing this cutting-edge field, the Tsinghua University spin-off Shengshu Technology has carried out a series of research and product development efforts. Together with Tsinghua University, Tongji University and other institutions, it recently released "AnimatableDreamer", the world's first 4D animation generation framework based on skeletal animation. It can convert 2D video footage directly into a dynamic three-dimensional model (i.e. 4D animation) with one click, supporting automatic extraction of skeletal motion, one-click conversion to animated styles, and personalized character generation from text input.
Paper address: https://arxiv.org/pdf/2312.03795.pdf
Project Address: https://animatabledreamer.github.io/
Paper title: AnimatableDreamer: Text-Guided Non-rigid 3D Model Generation and Reconstruction with Canonical Score Distillation
New content production method
Disrupting the three-dimensional animation production pipeline
In the past, producing three-dimensional animation in industry required complex processes such as 3D modeling, texture rendering, bone rigging, and animation production. It demanded the participation of professionals such as modelers and animators, making it low in efficiency and high in cost. By some estimates, producing a single static 3D model takes hours to days and can cost thousands of dollars; making it dynamic costs even more.
As the official demo video shows, uploading a 2D live-action video of a squirrel and entering the text description "A squirrel with red sweater" converts the real squirrel into an animated style with one click while perfectly preserving its poses, and simultaneously generates a 360-degree dynamic three-dimensional model. By switching text descriptions, users can customize the character, turning the squirrel into different cartoon figures such as a fox or Squirtle.
"AnimatableDreamer" can automatically extract the skeletal motion of target objects (characters, animals, etc.) in a video, and then convert the object into any skeletal animation model through a text description. The entire process is not limited by templates, supports videos of any length and any type of motion, achieves a high degree of temporal and multi-view consistency, and the exported dynamic three-dimensional model can be rendered in any 3D environment.
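Skeleton-driven models of this kind are conventionally animated with linear blend skinning, where each vertex follows a weighted mix of bone transforms. The following is a minimal sketch of that standard technique, not code from the paper; the shapes and the two-bone example are illustrative assumptions.

```python
import numpy as np

def blend_skinning(vertices, weights, bone_transforms):
    """Linear blend skinning: deform each vertex by a weighted
    sum of its bones' rigid 4x4 transforms."""
    # vertices: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4)
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    # Blend a per-vertex transform, shape (V, 4, 4)
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)
    deformed = np.einsum("vij,vj->vi", blended, homo)
    return deformed[:, :3]

# Toy rig: bone 0 is identity, bone 1 translates by +2 along x
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 2.0
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],   # first vertex follows bone 0 only
              [0.5, 0.5]])  # second vertex is split 50/50
out = blend_skinning(verts, w, np.stack([T0, T1]))
print(out)  # first vertex unmoved; second vertex moves halfway, to x = 2.0
```

Driving an exported model then reduces to supplying a new sequence of bone transforms per frame, which is why rigged models can be re-targeted to new motions.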
For post-production in film, television and animation, further editing of animations is usually required. "AnimatableDreamer" also supports replacing or editing animation files for models whose bones have already been rigged, offering a higher degree of freedom. As these technologies mature, the 3D modeling and animation pipelines for game development, film and television animation and other scenarios are expected to become far more efficient.
Facing the future
Expected to give rise to a new content ecosystem
In terms of implementation, the research team proposed a novel method, Canonical Score Distillation (CSD). It renders and denoises 3D models of different frames and viewpoints in the time-varying camera space, then uniformly propagates the gradients back into the canonical space shared by all camera spaces, where distillation is performed. This reduces the dimensionality of 4D generation to 3D; that is, it simplifies the 4D generation problem into generation in a single 3D space.
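The gradient flow described above can be illustrated with a toy sketch: per-frame renders are scored, and the gradients are warped back into one shared canonical parameter set. This is only an illustration of the accumulation pattern under strong simplifying assumptions; the `score` function here is a stand-in constant-target pull, whereas the real CSD uses a pretrained 2D diffusion model, and the `warp` is a cyclic shift standing in for skeleton-driven deformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the diffusion model's denoising score: it pulls a
# rendered image toward a fixed "text-conditioned" target (assumption).
target = np.full((8, 8), 0.7)
def score(noisy_render):
    return target - noisy_render

# Canonical-space parameters shared by all frames (a tiny 8x8 "texture").
canonical = np.zeros((8, 8))

def warp(canon, t):
    # Time-dependent map from canonical space to frame t's camera space;
    # a cyclic shift stands in for the skeleton-driven deformation.
    return np.roll(canon, shift=t, axis=1)

def unwarp_grad(grad, t):
    # The warp is a permutation, so gradients map back via the inverse shift.
    return np.roll(grad, shift=-t, axis=1)

lr, num_frames = 0.1, 4
for step in range(200):
    grad = np.zeros_like(canonical)
    for t in range(num_frames):
        render = warp(canonical, t)
        noisy = render + 0.05 * rng.standard_normal(render.shape)
        # Score-distillation-style gradient, pulled back into the
        # shared canonical space and accumulated over frames.
        grad += unwarp_grad(score(noisy), t)
    canonical += lr * grad / num_frames

print(float(canonical.mean()))  # converges near the target value 0.7
```

The point of the pattern is that every frame and viewpoint updates the same canonical parameters, which is what keeps the generated object consistent over time.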
"AnimatableDreamer" can extract joint motion directly from video. By decoupling the object's shape from its motion, the generated model achieves high temporal consistency and geometric plausibility, is not limited by video length, and effectively eliminates issues such as shape breakage, flickering, and multi-view inconsistency.
In scenarios with limited viewpoints and large motions, thanks to the prior knowledge of the diffusion model, "AnimatableDreamer" can automatically complete missing visual information even when the input video does not cover the complete object, achieving better generation quality.
In short, "AnimatableDreamer" connects text directly to 4D skeletal animation, handling generation, modeling, texturing, bone rigging and motion driving in one go. Input a natural-language description and it automatically outputs a three-dimensional animated video. No professional knowledge is required: ordinary users can get started and easily customize animated content.
Work built on "AnimatableDreamer" will greatly lower the barrier to producing 3D and 4D digital content, enrich interactive experiences, and enable everyone to generate and edit creative content, giving rise to new models of content entertainment and content consumption in the 3D era.
Imagine that in the future virtual world, users can quickly build customized digital spaces and create personalized interactive experiences. For example:
Every character in the digital space can be generated at will, such as dressing children in Superman outfits or switching to holiday-themed costumes for Halloween;
Pet owners can cartoonize their pets, for example into a virtual Mickey Mouse; their pets' daily lives would play out like a cartoon, making everyday interaction between owners and pets more fun;
Interaction between people will also become richer: a themed party can be held anytime, anywhere, with the desired party environment, character costumes and more generated in real time.
As a start-up founded less than a year ago, the Shengshu Technology team has long focused on multimodal large models for images, 3D and video. In September it released the 3D asset creation tool VoxCraft, officially launched on Discord, which supports text and image guidance, minute-level creation of 3D models, and custom replacement of 3D textures, streamlining the 3D modeling process for game development, film and television animation and other scenarios. The 4D skeletal animation generation released this time is another new exploration by Shengshu Technology and will be integrated into VoxCraft in the future.
VoxCraft tool address: https://voxcraft.ai/
The arrival of Apple Vision Pro is not only a major innovation at the hardware level; it also opens the prelude to a revolution in content and experience. Generative AI capabilities such as 4D animation generation will not only bring better visual presentation but also open up multi-dimensional digital experiences in new ways, bringing more possibilities to the next generation of human-computer interaction.