
OpenAI releases GPT-4. What technology trends are worth paying attention to?

WBOY · 2023-04-11

This article is Zhang Junlin's answer to the Zhihu question "OpenAI releases GPT-4: what are the technical optimizations or breakthroughs?" Zhang is head of new-technology R&D at Sina Weibo and a director of the Chinese Information Processing Society of China. The answer summarizes the three directions pointed out in the GPT-4 technical report and mentions two further technical directions.


At this historic moment, let me answer a question and leave my own footprint as a witness to history.

The GPT-4 technical report clearly points to three new directions:

First, the most cutting-edge LLM research is becoming closed, confined to small circles. The technical report states that, owing to competition and safety considerations, technical details such as model size were not disclosed. From GPT-2, which was fully open-sourced, to GPT-3, which had only a paper, to ChatGPT, which had no paper at all, to GPT-4, whose technical report reads more like a performance evaluation report, the trend is obvious: OpenAI has cemented its reputation as "CloseAI", and it will no longer publish papers on its cutting-edge LLM research.

In this situation, other companies with relatively advanced technology have two options. One is to go further with open-source LLMs; Meta appears to have chosen this path. This is generally a reasonable choice for a company at a competitive disadvantage, but the technology released this way is often not the most cutting-edge. The other option is to follow OpenAI and close off the technology as well. Google was previously regarded as the second tier in LLMs, but under the one-two punch of Microsoft plus OpenAI its position is now somewhat awkward. GPT-4 finished training in August last year, and GPT-5 is presumably being trained right now; with such a long lead for OpenAI, it is striking that Google ended up where it is, considering that some very critical research, such as the Transformer and CoT (chain of thought), came from Google itself. One wonders what its leadership makes of this outcome. If Google can follow up quickly, staying in the second tier should not be a big problem, and it will likely remain far ahead of whoever is third in technology. For competitive reasons, I suspect Google will most likely also take OpenAI's path of closing off its technology: the most advanced LLM techniques will be used first to refine its own elixir rather than written up in papers for the benefit of everyone, OpenAI especially. This will likely lead to the closure of the most cutting-edge LLM research.

Counting from now, after some period of time, China will inevitably be forced into a situation of independent innovation (reaching roughly 60 to 70 percent of ChatGPT's capability should come relatively quickly; fully matching it will presumably take much longer). Judging from various domestic developments over the past three months, what will that future look like? Most likely not optimistic. This hurdle is certainly difficult, but it must be cleared. I can only wish those with the ability and the determination all the best.

Second, the "capability prediction" mentioned in the GPT-4 technical report is a very valuable new research direction (it has in fact appeared in some earlier material; I remember reading about it, though I cannot recall exactly where): use small models to predict, under a given parameter combination, some specific capability of a large model. If the prediction is accurate enough, it can greatly shorten the "elixir-refining" (model-training) cycle and greatly reduce the cost of trial and error. So whether for its theoretical or its practical value, the specific technical methods here are definitely worth careful study.
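To make the idea concrete, here is a minimal sketch of one way such prediction can work: fit a power-law scaling curve to the final losses of several small training runs, then extrapolate to a much larger compute budget. The functional form, the sample numbers, and the units are illustrative assumptions on my part; the report does not disclose OpenAI's actual method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements from small training runs:
# training compute (PF-days) and final validation loss.
compute = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])
loss = np.array([4.10, 3.45, 2.95, 2.55, 2.22])

# A commonly assumed scaling-law form: L(C) = L_inf + a * C^(-b),
# i.e. loss decays as a power law toward an irreducible floor L_inf.
def scaling_law(c, l_inf, a, b):
    return l_inf + a * np.power(c, -b)

params, _ = curve_fit(scaling_law, compute, loss, p0=[1.0, 1.5, 0.1])
l_inf, a, b = params
print(f"fitted: L_inf={l_inf:.3f}, a={a:.3f}, b={b:.3f}")

# Extrapolate three orders of magnitude beyond the largest small run.
big_compute = 10.0 * 1000
predicted = scaling_law(big_compute, l_inf, a, b)
print(f"predicted loss at {big_compute:.0f} PF-days: {predicted:.3f}")
```

If the fitted curve tracks reality, an expensive full-scale run can be judged worthwhile (or not) before it is launched, which is exactly the trial-and-error saving described above.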

Third, alongside GPT-4 OpenAI open-sourced an LLM evaluation framework (Evals), and this is also a very important direction for the rapid advance of LLM technology. For Chinese in particular, building practical Chinese LLM evaluation data and frameworks is especially significant: good evaluation data quickly exposes an LLM's current weaknesses and points out directions for improvement, which is of great value. Yet this area is still largely blank at present. The resource requirements are actually not that high, so the work suits many organizations, though it is genuinely laborious.
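For a sense of what the skeleton of such a framework looks like, here is a minimal sketch: prompt/expected-answer pairs, a pluggable model-call hook, and an exact-match scorer. This is my own toy illustration, not the interface of OpenAI's Evals, and the two Chinese test cases are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

def exact_match(prediction: str, expected: str) -> bool:
    # Simplest possible scorer; real frameworks add normalization,
    # multiple-choice grading, model-graded rubrics, and so on.
    return prediction.strip() == expected.strip()

def run_eval(cases: list[EvalCase], call_model: Callable[[str], str]) -> float:
    # Returns the model's accuracy over the evaluation set.
    hits = sum(exact_match(call_model(c.prompt), c.expected) for c in cases)
    return hits / len(cases)

if __name__ == "__main__":
    # A tiny hypothetical Chinese evaluation set.
    cases = [
        EvalCase(prompt="“画蛇添足”是什么意思?只答四字关键词。", expected="多此一举"),
        EvalCase(prompt="北京是哪个国家的首都?只答国名。", expected="中国"),
    ]
    # Stand-in for a real model API call.
    dummy_model = lambda p: "多此一举" if "画蛇添足" in p else "中国"
    print(f"accuracy: {run_eval(cases, dummy_model):.2f}")
```

The hard part is not the harness but the data: assembling broad, discriminative Chinese test sets is exactly the "hard work" referred to above.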

Beyond the three directions clearly pointed out in the GPT-4 technical report, and given the flood of LLM news recently, I will add two more technical directions.

First, Stanford's Alpaca, built by taking Meta's open-source 7B LLaMA and applying Self-Instruct techniques, also represents a technical direction. If it needs a label, this direction could be called "low-cost reproduction of ChatGPT". Self-Instruct means using technical means to avoid manual annotation of instruction data: the instructions are instead pulled from the OpenAI API, a practice colloquially known as "distilling" ChatGPT. That is, no human labeling is required; ChatGPT acts as the teacher and labels your instruction data for you. This brings the cost of instruction annotation down to the level of a few hundred dollars, with an even smaller time cost; and since a 7B model is not large, the whole thing amounts to a technical route for "reproducing ChatGPT at low cost".
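A minimal sketch of this kind of bootstrapping, assuming the pre-1.0 `openai` Python client with an API key in the environment: seed instructions prompt the teacher model to invent new instructions, the teacher then answers them, and the pairs are saved as training data. The prompts, model name, and scale are illustrative; this is not Alpaca's actual pipeline.

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

SEEDS = [
    "Write a short poem about autumn.",
    "Explain recursion to a ten-year-old.",
]

def chat(prompt: str) -> str:
    # One call to the teacher model (ChatGPT playing annotator).
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,
    )
    return resp.choices[0].message.content

def generate_instructions(seeds: list[str], n: int = 5) -> list[str]:
    # Ask the teacher to invent new, diverse instructions from the seeds.
    prompt = (
        "Here are example task instructions:\n"
        + "\n".join(f"- {s}" for s in seeds)
        + f"\nWrite {n} new, diverse task instructions, one per line."
    )
    return [ln.strip("- ").strip() for ln in chat(prompt).splitlines() if ln.strip()]

dataset = []
for inst in generate_instructions(SEEDS):
    # The teacher also writes the response, so no human labeling is needed.
    dataset.append({"instruction": inst, "output": chat(inst)})

with open("distilled_instructions.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```

Scaled up to tens of thousands of instructions, a file like this is what the 7B model is then fine-tuned on.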

I expect many teams in China have already adopted this technical route. It is without doubt a shortcut, and shortcuts have both advantages and disadvantages, which I will not go into here. In the process of catching up with ChatGPT, I personally think that first cutting costs and reproducing 70 to 80 percent of ChatGPT's capability is feasible and worth supporting; after all, the poor have their own way of playing the game. And the pursuit of making models smaller without sacrificing quality is, if done in a down-to-earth way, very valuable work.

In addition, embodied intelligence will undoubtedly be a key LLM research direction in the next stage; the representative work here is PaLM-E, released by Google a while ago. With GPT-4 as it now stands, we might say humans have created a super-brain, but one still locked inside a GPU cluster. That super-brain needs a body: GPT-4 has to connect with, communicate with, and interact with the physical world, obtain real feedback from it, and use that feedback, for example through reinforcement learning, to learn how to act in the real world. This is bound to be one of the hottest LLM research directions in the near future.
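As a toy illustration of the loop this implies (not PaLM-E's actual architecture, which injects sensor readings directly into the model), here is a sketch in which a language model serves as the policy: it reads an observation, proposes an action, and the environment's feedback is collected for later learning. Both `llm_policy` and `env_step` are hypothetical stubs.

```python
from typing import Callable

def embodied_loop(
    llm_policy: Callable[[str], str],
    env_step: Callable[[str], tuple[str, float, bool]],
    first_obs: str,
    max_steps: int = 20,
) -> list[dict]:
    """Run one episode: the LLM maps observations to actions, and the
    (obs, action, reward) transitions are collected as real-world
    feedback for later learning (e.g. RL or fine-tuning)."""
    trajectory, obs = [], first_obs
    for _ in range(max_steps):
        action = llm_policy(obs)                    # the super-brain decides
        next_obs, reward, done = env_step(action)   # the body acts in the world
        trajectory.append({"obs": obs, "action": action, "reward": reward})
        obs = next_obs
        if done:
            break
    return trajectory
```

The interesting research questions sit inside the two stubs: how perception becomes text (or embeddings) the model can consume, and how rewards from the physical world are turned into updates to it.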

Multimodal LLMs give GPT-4 eyes and ears, while embodied intelligence gives it a body, hands, and feet. GPT-4 is starting to bear some resemblance to you and me, and given its powerful learning ability, such a thing can be expected to appear around us before long.

If you think about it carefully, there are many other promising directions. My personal judgment is that the next five to ten years will be the golden decade in which AGI develops fastest. If we stand thirty years from now and look back on these ten years, some of us will surely recall the lines of Dylan Thomas: "And learn, too late, they grieved it on its way / Do not go gentle into that good night."
