
Interview with Sam Altman: GPT-4 didn't surprise me much, but ChatGPT surprised me

WBOY
2023-04-04

ChatGPT and GPT-4 are without question the biggest sensations in the artificial intelligence industry at the start of 2023.

· I have no idea what the history books will say about the various versions of GPT. But if I had to pick one key node from what I have seen so far, it would still be ChatGPT. GPT-4 didn't surprise me much, but ChatGPT did.

· To a certain extent, the GPT-4 system enhances human intelligence and can be applied to a variety of scenarios.

· The ease of use of the system itself is sometimes more important than the capabilities of the underlying model.

· GPT-4 is not yet conscious and cannot replace good programmers. A truly conscious artificial intelligence should be able to tell others that it is conscious, express its own pain and other emotions, understand its own situation, have its own memory, and interact with others.

· Artificial intelligence will bring huge improvements to the quality of human life. We can cure diseases, create wealth, expand resources, and make people happy... It may seem that humans will no longer need to work, but humans also need social status, passion, creation, and a sense of their own value. So once the age of artificial intelligence arrives, what we need to do is find new jobs and ways of living, and embrace the enormous improvements the new technology brings.


Sam Altman is a co-founder and the CEO of OpenAI, the American artificial intelligence laboratory, and a former president of Y Combinator. He led OpenAI's development of the chatbot ChatGPT and has been called the "Father of ChatGPT" by the media.

(L refers to Lex Fridman, S refers to Sam Altman)

If the history of AI is written on Wikipedia, ChatGPT is still the most critical node

Q1

L: What is GPT-4? How does it work? What's the most amazing thing about it?

S: Looking back later, this will seem like a very rudimentary artificial intelligence system. It is slow, it has bugs, and it does many things poorly. Still, it points the way to a technology that will become truly important in the future (even if that takes decades).

Q2

L: 50 years later, when people look back at early intelligent systems, will GPT-4 be a truly huge leap forward? Is this a pivotal moment? When people write the history of artificial intelligence on Wikipedia, which version of GPT will they write about?

S: Progress is a continuous curve, and it is hard to pinpoint a single historic moment. I have no idea what the history books will say about the various versions of GPT. But if I had to pick one key node from what I have seen so far, it would be ChatGPT. What really matters about ChatGPT is not the underlying model itself, but how the model is used, which comes down to reinforcement learning from human feedback (RLHF) and the interface built around it.

Q3

L: How does RLHF make ChatGPT have such amazing performance?

S: We trained these models on large amounts of text data. In the process they learned low-level representations of knowledge and became able to do amazing things. But the base model straight out of pre-training, although it scores well on test sets, is not very easy to use. So we apply RLHF by bringing in human feedback. In its simplest form: show human raters two versions of an output, ask which one they prefer, and feed that preference back into the model through reinforcement learning. RLHF works surprisingly well. With remarkably little data it makes the model far more useful. We use this technique to align the model with what people want, so that it more readily gives correct, helpful answers. Whatever the underlying model's capabilities, the ease of use of the system is critical.
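The preference step described above can be sketched with the Bradley-Terry pairwise loss commonly used in RLHF pipelines (a generic illustration, not OpenAI's actual implementation): a reward model is trained so that the human-preferred output scores higher, and the loss shrinks as that margin grows.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected).
    Small when the reward model already ranks the human-preferred output
    above the rejected one; large when the ranking is inverted."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ranked pair is penalized less than a tied or inverted one.
good = preference_loss(2.0, 0.0)  # model agrees with the rater
tied = preference_loss(0.0, 0.0)  # model is indifferent
bad = preference_loss(0.0, 2.0)   # model disagrees with the rater
```

Minimizing this loss over many rated pairs fits the reward model; a second stage then fine-tunes the language model with reinforcement learning to maximize the learned reward.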

Q4

L: Why does RLHF require so much less human supervision?

S: To be fair, our research on this part is still at an early stage compared with the original science of building pre-trained large models, but it does require far less data.

L: Research on human guidance is very interesting and important. It teaches us how to make systems more useful, smarter, more ethical, and better aligned with human intent. How that human feedback is introduced matters too.

Q5

L: How large is the pre-training data set?

S: We and our partners put a lot of effort into pulling pre-training data from many open sources on the Internet and assembling a huge data set. Beyond Reddit, newspapers, and other media, there is far more content in the world than most people expect. Cleaning and filtering the data is harder than collecting it.

Q6

L: Building ChatGPT meant solving many problems: the scale and design of the model architecture, data selection, RLHF. What is the magic in making all these parts come together?

S: GPT-4 is the version we actually shipped inside the final ChatGPT product, and the number of pieces needed to create it is hard to count; it is an enormous amount of work. At every stage we had to either come up with new ideas or execute existing ones well.

L: Some technical steps in GPT-4 are relatively mature, such as predicting the performance a model will reach before the full training run is complete. How can the properties of a fully trained system be known from a small amount of training? It is like looking at a one-year-old baby and knowing how he will score on his college entrance exams.

S: That achievement is surprising. There is a lot of science behind it, and it ultimately reaches the level of predictability we hoped for. The process is far more scientific than I could have imagined. As with every new branch of science, we will find new phenomena that don't fit the data and come up with better explanations for them. That is simply how science develops. Although we have shared some information about GPT-4 publicly, we should still stand in awe of its magic.
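The prediction Altman describes is usually attributed to scaling laws: loss falls as a smooth power law in compute (or data, or parameters), so a handful of cheap small runs can be extrapolated to a much larger one. A toy sketch with synthetic numbers (the power-law form is the standard published shape; the constants here are invented):

```python
import math

def fit_power_law(compute, loss):
    """Least-squares fit of loss = a * compute**(-b), done as a
    straight-line fit in log-log space: log L = log a - b * log C."""
    xs = [math.log(c) for c in compute]
    ys = [math.log(l) for l in loss]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # (a, b)

# Four cheap runs (synthetic data following loss = 2 * C**-0.05) ...
compute = [1e3, 1e4, 1e5, 1e6]
loss = [2.0 * c ** -0.05 for c in compute]

a, b = fit_power_law(compute, loss)
# ... let us predict the loss of a run 1000x larger before paying for it.
predicted = a * 1e9 ** -b
```

In practice the extrapolation only works because the loss curve is empirically this smooth; the surprise Altman points to is that the fitted curve keeps holding at scales far beyond the runs it was fitted on.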

GPT-4 systematically enhances human intelligence

Q7

L: Language models like GPT-4 can learn from and draw on material from many fields. Are the researchers and engineers inside OpenAI gaining a deeper understanding of the wonders of language models?

S: We can evaluate the model in many ways. After training we test it on a wide range of tasks, and we have open-sourced our evaluation process on GitHub, which helps. More importantly, we invest a great deal of people, money, and time in analyzing how useful the model is, how it can bring people joy and help, how it can make the world better, and what new products and services it can generate. Of course, we still don't fully understand all the internal processes by which the model does its work, but we keep pushing in that direction.

Q8

L: GPT-4 compresses the vast information of the Internet into a black-box model with relatively few parameters, producing something like human intelligence. What kind of leap does it take to get from facts to wisdom?

S: We treat the model as a reasoning engine rather than merely a database that has absorbed human knowledge, and the system's capability improves almost magically as a result. Used this way, the system really can achieve some degree of reasoning, even if some scholars would object that the term is not rigorous. To some extent, the GPT-4 system enhances human intelligence and can be applied to a wide variety of scenarios.

L: ChatGPT seems to "possess" intelligence in its continuous interaction with humans. It admits its wrong assumptions and denies inappropriate requests in this dialogue.

GPT-4 is not conscious and will not replace good programmers

Q9

L: Some people enjoy programming with GPT, while others fear their jobs will be replaced by it. What do you make of this?

S: There are critical programming tasks that still require human creativity. GPT-like models will automate some programming work, but they still cannot replace a good programmer. Some programmers feel anxious about the uncertainty ahead, but more find that it improves their productivity.


Twenty or thirty years ago, when Deep Blue defeated the chess grandmaster Garry Kasparov, some people thought there was no longer any point in playing chess. Yet chess remains popular around the world.

Artificial intelligence will bring huge improvements to the quality of human life. We can cure diseases, create wealth, expand resources, and make people happy... It may seem that humans will no longer need to work, but humans also need social status, passion, creation, and a sense of their own value. So once the age of artificial intelligence arrives, what we need to do is find new jobs and ways of living, and embrace the enormous improvements the new technology brings.

Q10

L: Eliezer Yudkowsky has warned that artificial intelligence could harm humanity and has given examples; he argues it is almost impossible to keep a superintelligent AI "aligned" with human intent. Do you agree with him?

S: It's possible. If we didn't talk about that possibility, we wouldn't put enough effort into developing new techniques to address it. Problems like this exist in many emerging fields, and people are now paying attention to both the capabilities and the safety of artificial intelligence. Eliezer's article is well written, but some of his reasoning is hard to follow and contains logical problems, and I don't entirely share his views.

A lot of AI-safety work was done long before people believed in the power of deep learning and large language models, and I don't think it has been updated enough since. Theory matters, but we need to keep learning from how the technology's trajectory actually changes, and that loop needs to be tighter. Now is a good time to double down on AI safety and to explore how to "align" these new tools and technologies with human intent.

Q11

L: Artificial intelligence is advancing at a rapid pace, and some say we have now entered the "take-off" stage. When someone actually builds artificial general intelligence, how will we recognize the change?

S: GPT-4 didn't surprise me much, but ChatGPT did, slightly. As impressive as GPT-4 is, it is not yet AGI. A precise definition of AGI is becoming more and more important, but I think AGI is still quite far away.

Q12

L: Do you think GPT-4 is conscious?

S: No, I don’t think it’s conscious yet.

L: I think a truly conscious artificial intelligence should be able to tell others that it is conscious, express its own pain and other emotions, understand its own situation, have its own memory, and be able to interact with people. And I think these abilities are interface abilities, not underlying knowledge.

S: Ilya Sutskever, our chief scientist at OpenAI, once discussed with me how to tell whether a model is conscious. His idea: carefully train a model on a data set that never mentions consciousness, subjective experience, or any related concept; then describe that subjective experience to the model and see whether it can understand what we are conveying.

Artificial general intelligence: how far have we come?

Q13

L: Chomsky and others are critical of the ability of “large language models” to achieve general artificial intelligence. What do you think of it? Are large language models the right path to general artificial intelligence?

S: I think large language models are one part of the road to AGI, and we also need other very important parts.

L: Do you think an intelligent agent needs a "body" to experience the world?

S: I'm cautious about that. But in my view, a system that cannot add to the body of known scientific knowledge cannot be called a "superintelligence"; that would be like inventing new basic science. To reach "superintelligence" we need to keep extending the GPT-class paradigm, and there is still a long way to go.

L: I think that by changing the data used to train GPT, various huge scientific breakthroughs can already be achieved.

Q14

L: As prompt chains grow longer and longer, these interactions themselves will become part of human society, a foundation people build on together. How do you see this phenomenon?

S: More than the fact that the GPT system can complete certain tasks, what excites me is that humans take part in the tool's feedback loop. We can learn a great deal from the trajectories of multi-turn interactions. AI will extend and amplify human intention and capability, which in turn shapes how people use it. We may never build AGI, but making humans better is a huge victory in itself.

