The eve of AIGC producing content for the Metaverse
From AI painting and AI music composition to AI-generated video, increasingly "smart" AI has brought a new content production model: AIGC.
Over the past few decades, the content humans consume has fallen roughly into two categories: PGC (professionally generated content) and UGC (user-generated content). The emergence of AIGC has once again diversified content production models, and at the same time it has subtly deepened humankind's dependence on the digital world.
According to IDC statistics, global VR/AR terminal shipments reached 11.23 million units in 2021. As the entry point to the metaverse, VR/AR sales in the tens of millions have also prompted a question: compared with the Internet, how can the far more complex content of the metaverse be produced?
The emergence of AIGC offers a new approach to content production for the Metaverse.
However, in 2022, with the Metaverse still in its infancy and AIGC not yet fully evolved, new problems have begun to surface amid the AIGC craze.
In 2016, AlphaGo defeated Go world champion Lee Sedol, and the third wave of artificial intelligence, led by deep learning, reached its peak. Afterwards, artificial intelligence fell quiet again; under the influence of the global economic downturn in particular, the flame of artificial intelligence began to dim.
"Some leading artificial intelligence companies that we were originally optimistic about (during this period) did not go smoothly when they were listed, and many artificial intelligence companies had to face operating pressure," looking back on the past few years The development history of artificial intelligence enterprises, said Shi Lin, deputy director of the Content Technology Department of the Cloud Computing and Big Data Institute of China Academy of Information and Communications Technology.
At this point, artificial intelligence urgently needed a phenomenal product to lift the entire industry, and AIGC's timely emergence became the "good medicine" that keeps artificial intelligence going.
The so-called AIGC (AI-generated content) is, in essence, technology that uses artificial intelligence algorithms to generate content automatically.
AIGC has actually been in use for a long time. As early as 2011, the Los Angeles Times began developing Quakebot, a news-writing robot focused on earthquake reporting. In March 2014, Quakebot drew public attention when it was the first to report a magnitude 4.4 earthquake in Southern California. Subsequently, Reuters, Bloomberg, The Washington Post, and The New York Times introduced writing robots, and automated news became the earliest application of AIGC.
At the 2022 Colorado State Fair art competition in the United States, a game designer named Jason Allen took first place in the digital art/digital photography category with an AI-generated work. As soon as the news was announced, it quickly drew widespread public attention.
And this was not the only time AIGC trended globally this year.
On December 5, 2022, OpenAI CEO Sam Altman posted on social media that ChatGPT, the large language model trained by OpenAI, had surpassed one million users as of that day. At that point, ChatGPT had been live for only five days; Facebook, one of the four Silicon Valley giants, took ten months to reach its first one million registered users.
Ma Zhibo, chief scientist at Peking Data, analyzed: "OpenAI itself is a non-profit organization, and even though the shocked capital market has no way to put a valuation on the ChatGPT it released, which gained millions of users within a week, if a company can run its technical services or technology business well, the capital market will still devise a valuation system to capture this wave of dividends."
Capital and technology have always developed hand in hand, and only capital can pave a fast path from technology to commercial application.
From automated news to ChatGPT, AIGC has evolved over roughly ten years. However, Li Xuan, director of digital learning at Tsinghua University's School of Continuing Education, believes that if AIGC's development is divided into five stages, namely prototype, standard, complete, superb, and ultimate, then today's AIGC is still only taking shape.
A very important reason for AIGC's surge in popularity this year is the open-sourcing of the Stable Diffusion model. When Stability AI released Stable Diffusion in August 2022, the company also open-sourced the model's weights and code.
Tang Kangqi, a senior solutions architect at NVIDIA, said, "The Stable Diffusion model is very small, only about a dozen gigabytes. It runs on just a 20-series GPU, and generating an image from text takes only about a minute (or just ten seconds if you deploy the open-source model yourself), which was unimaginable before."
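As a rough illustration of how low that barrier has become, here is a minimal text-to-image sketch built on the open-sourced Stable Diffusion weights. The article does not name a specific toolchain, so the use of Hugging Face's diffusers library, the v1.5 checkpoint ID, and all parameters below are our own assumptions for illustration.

```python
# Minimal text-to-image sketch using the open-source Stable Diffusion weights.
# Assumes the `diffusers` and `torch` packages are installed and a CUDA GPU
# (for example an RTX 20-series card) is available.
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released v1.5 checkpoint in half precision so it fits
# into the memory of a consumer GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A text prompt; more precise wording generally gets closer to what the user
# has in mind (see the "trigger word" limitation discussed below).
prompt = "a futuristic metaverse city at dusk, digital art, highly detailed"

# Generate one image; fewer inference steps trade quality for speed.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("metaverse_city.png")
```

On consumer hardware a single image typically takes seconds to around a minute, in line with the figures Tang Kangqi quotes above.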
However, Tang Kangqi also pointed out that before AIGC can be deployed commercially at scale, four limitations remain:
First, the limitation of computing power. Although Stable Diffusion is very convenient to use, training the full model is still very expensive: training a model of this type generally requires 516 top-of-the-line Ampere-architecture GPUs and hundreds of thousands of GPU-hours of training time, with costs typically on the order of millions of dollars (a rough estimate follows this list);
Second, the limitation of data sources. The data used to train Stable Diffusion is LAION-5B, currently the world's largest open image-text pair dataset, while ChatGPT's training data comes from Wikipedia and various question-and-answer forums. Who owns the property rights to this data? Will the data "producers" later impose restrictions on its use? These are also questions that need to be clarified in the future;
Third, the limitation of accurately using trigger words. Stable Diffusion requires input prompts ("trigger words") that are precise and unambiguous enough for the model to more easily produce the content users actually want;
Fourth, the limitation of three-dimensional model generation. Once content is truly being produced for the metaverse, three-dimensional models will inevitably be involved, and at present 3D model generation still has a great deal of room for improvement, including in the underlying CG (computer graphics) expertise.
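As a back-of-the-envelope illustration of the computing-power point above, the short sketch below multiplies the GPU-hour range quoted by Tang Kangqi by an assumed on-demand rental rate for Ampere GPUs; the hourly price and the exact GPU-hour figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope training-cost estimate for a Stable-Diffusion-class model.
# The GPU-hour range and hourly rate below are illustrative assumptions.

num_gpus = 516                                      # top-of-the-line Ampere GPUs, as quoted
gpu_hours_low, gpu_hours_high = 200_000, 600_000    # "hundreds of thousands" of GPU-hours
price_per_gpu_hour = 2.5                            # assumed on-demand price, USD per GPU-hour

for gpu_hours in (gpu_hours_low, gpu_hours_high):
    cost = gpu_hours * price_per_gpu_hour
    wall_clock_days = gpu_hours / num_gpus / 24     # if all GPUs run in parallel
    print(f"{gpu_hours:,} GPU-hours -> about ${cost:,.0f}, "
          f"roughly {wall_clock_days:.0f} days of wall-clock training")
```

Even under these optimistic assumptions the bill lands between roughly half a million and one and a half million dollars, broadly consistent with the "millions of dollars" order of magnitude cited above.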
These four limitations mean that AIGC still has a long way to go before it can truly reach large-scale commercialization, especially when it comes to producing content unique to the Metaverse.
Although large-scale commercialization of AIGC is still some way off, its path toward becoming a future productivity tool is starting to become clear.
Regarding the future development of AIGC, and of AI technology as a whole, Li Xuan believes: "Just as in science-fiction films, scenarios in which physical and mental work in the real world is taken over by robots, and physical and mental work in the virtual world is taken over by virtual humans, may arrive in the not-too-distant future. In the market of the future, only jobs that demand a sense of lived experience will still require human participation."
In addition, Li Xuan pointed out that as AIGC brings more and more AI tools, several kinds of "shielding" are emerging in our lives and work:
First, the "shielding" of information. While artificial intelligence helps us make "choices", information cocoons gradually form. In the apps we use every day, for example, the content you like to see keeps being pushed to you, more and more barriers build up around the information you encounter, and the information cocoon grows larger and larger;
Second, the "shielding" of the senses. In the future, the flow of time and space in media such as VR and AR will grow ever denser and richer in content, producing a kind of informational "colloid" in which information is refracted, distorted, and blurred;
Third, the "shielding" of interaction. With the development of AI and robots, more and more of our interactions are with platforms rather than with people, that is, interactions with non-human entities, and such interactions may lead to control by capital or to the maximization of platform control.
Facing such a new world, how should we break through the "cocoon", avoid being "shielded", and live better in a metaverse full of AI?
Li Xuan's answer: embrace change, commit to lifelong learning, break through the cocoon, and step beyond the shadow, relying on systematic thinking, open-source technologies and tools, and a lifelong-learning mindset to develop better in the future.