
Sam Altman talks about OpenAI: Facing GPU shortage panic, GPT-3 may be open source

PHPz
2023-06-09 15:27:56

Since the advent of ChatGPT, large models and AI technology have attracted widespread attention around the world. On the one hand, people marvel at the emergent capabilities of large models; on the other hand, they worry about the controllability and future trajectory of artificial intelligence. This year, many experts in the AI field, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, have jointly warned that large AI models will bring a series of risks, and some have even called for a halt to the development of large AI models beyond GPT-4.

OpenAI, as the company behind large models such as ChatGPT and GPT-4, has undoubtedly been pushed to the forefront. OpenAI CEO Sam Altman is currently on a global speaking tour to dispel people's "fear" about artificial intelligence and listen to the opinions of developers and users of OpenAI products.


According to a Fortune report, in May Sam Altman met behind closed doors with a number of developers and startup founders to talk about OpenAI's roadmap and challenges. One of the participants in this closed-door meeting, Raza Habib, co-founder and CEO of Humanloop, recently summarized OpenAI's product plans and development bottlenecks in a blog post.

The original blog post has since been deleted, but some netizens have saved a snapshot (copy) of it. Here is its specific content:

OpenAI's biggest problem right now is that it is GPU-limited

OpenAI currently faces severe GPU constraints, which have delayed some of its short-term plans. The most common customer complaints these days concern the reliability and speed of the API. Sam acknowledged the problem, explaining that most of the issues customers complain about stem from the GPU shortage.

When it comes to text processing, the longer 32k context is not yet available to most people. OpenAI has not fully overcome the O(n^2) scaling problem of the attention mechanism. Although OpenAI seems likely to achieve a 100k-1M token context window soon (within this year), anything larger will require a further research breakthrough.
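To see why longer contexts are hard, here is a back-of-the-envelope sketch of the O(n^2) problem: standard full attention builds an n x n score matrix, so doubling the context quadruples the cost. The function and byte sizes below are illustrative, not OpenAI's actual numbers.

```python
# Sketch: why vanilla attention scales as O(n^2) in context length.
# The score matrix alone has n * n entries, so doubling the context
# quadruples the attention cost (assumption: standard full attention,
# 2 bytes per entry as in fp16).

def attention_cost(n_tokens: int, bytes_per_entry: int = 2) -> int:
    """Bytes needed just for one head's n x n attention score matrix."""
    return n_tokens * n_tokens * bytes_per_entry

for n in (4_000, 32_000, 1_000_000):
    gib = attention_cost(n) / 2**30
    print(f"{n:>9} tokens -> {gib:,.1f} GiB per head for scores")
```

At 32k tokens a single head's score matrix is already around 2 GB in fp16, which is why jumping to 100k-1M tokens needs algorithmic breakthroughs rather than just more hardware.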

Beyond that, the fine-tuning API is also limited by GPU supply. OpenAI does not yet use efficient fine-tuning methods such as Adapters or LoRA, so fine-tuning is very computationally expensive to run and manage. Sam revealed that better fine-tuning technology will be introduced in the future, and that OpenAI may even host a community dedicated to research models.
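For readers unfamiliar with LoRA, here is a minimal sketch of the idea (not OpenAI's code; shapes and constants are toy values): instead of updating a full d x d weight matrix during fine-tuning, you train a low-rank pair of matrices whose product is added to the frozen weights.

```python
# Hedged sketch of LoRA (low-rank adaptation): freeze the pretrained
# weight W and train only B (d x r) and A (r x d), adding B @ A to W.
# Trainable parameters drop from d*d to 2*d*r.

import numpy as np

d, r = 1024, 8                       # model width, adapter rank (toy sizes)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))      # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                 # B starts at zero: no change at init

def lora_forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """y = x @ (W + scale * B @ A).T without materializing the sum."""
    return x @ W.T + scale * (x @ A.T) @ B.T

full_params = d * d
lora_params = 2 * d * r
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With rank 8 at width 1024, the adapter trains under 2% of the parameters of a full update, which is why methods like this make fine-tuning far cheaper to run and manage.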

Additionally, dedicated capacity is limited by GPU supply. OpenAI offers dedicated capacity, giving customers a private copy of the model. To use the service, customers must be willing to commit $100,000 up front.

OpenAI’s near-term roadmap

During the conversation, Sam shared the near-term roadmap for the OpenAI API, which is divided into two stages:

Road to 2023:

  • A cheaper, faster GPT-4 — This is OpenAI's top priority. Broadly, OpenAI's goal is to reduce the cost of intelligence as much as possible, so the cost of the API will decrease over time.
  • Longer context windows — In the near future, context windows may reach as high as 1 million tokens.
  • Fine-tuning API — The fine-tuning API will be extended to the latest models, with its exact form determined by what developers actually need.
  • Stateful API — Today, calling the chat API means resending the same conversation history over and over and paying for the same tokens again and again. A future version of the API will be able to remember the conversation history.

Road to 2024:

  • Multimodality — This was demonstrated as part of the GPT-4 release, but it cannot be extended to everyone until more GPUs come online.
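The stateful API item in the roadmap above is easy to quantify: when every call must resend the full history, billed input tokens grow quadratically with conversation length. The simulation below uses made-up turn sizes to illustrate the difference.

```python
# Sketch of the stateless-vs-stateful billing difference: with today's
# chat API the full history is resent (and billed) every turn, so input
# token cost grows quadratically with conversation length.
# Turn sizes are illustrative, not real pricing.

def stateless_cost(turn_tokens: list[int]) -> int:
    """Total billed input tokens when each call resends all prior turns."""
    total, history = 0, 0
    for t in turn_tokens:
        history += t          # new turn appended to the conversation
        total += history      # the whole history is billed again
    return total

def stateful_cost(turn_tokens: list[int]) -> int:
    """Billed tokens if the server remembered history between calls."""
    return sum(turn_tokens)

turns = [200] * 10            # ten turns of ~200 tokens each
print(stateless_cost(turns), "vs", stateful_cost(turns))
```

For this ten-turn conversation the stateless scheme bills 11,000 input tokens against 2,000 for a stateful one, and the gap widens as conversations get longer.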

Plugins lack product-market fit and will not appear in the API anytime soon

Many developers are interested in accessing ChatGPT plugins through the API, but Sam said he doesn't think those plugins will be released to the API anytime soon. Usage of plugins other than browsing suggests they don't have product-market fit (PMF) yet. Sam pointed out that many people think they want their application inside ChatGPT, but what they really want is ChatGPT inside their application.

OpenAI will avoid competing with its customers, other than with ChatGPT

Many developers said they are nervous about building applications with the OpenAI API because OpenAI might eventually release a competing product. Sam said that OpenAI will not release more products beyond ChatGPT. He said that many great platform companies have a killer app, and that ChatGPT will let OpenAI make its API better by being a customer of its own product. The vision for ChatGPT is to be a super-intelligent work assistant, but there are many other GPT use cases that OpenAI will not get into.

Regulation is necessary, but so is open source

Although Sam advocates regulating future models, he does not believe existing models are dangerous and thinks it would be a huge mistake to regulate or ban them. He reiterated the importance of open source and said that OpenAI is considering open-sourcing GPT-3. Part of the reason OpenAI has been slow to open-source is that they feel not many individuals or companies are capable of properly managing such large language models.

The law of scaling still exists

Many recent articles have claimed that "the era of giant AI models is over." Sam said this does not accurately convey what he meant.

OpenAI's internal data shows that the law of scaling still holds, and increasing model size will continue to improve performance. However, model size cannot keep increasing at the same rate, because OpenAI has already scaled models up millions of times in just a few years, and continuing at that pace would be unsustainable. That doesn't mean OpenAI will stop trying to make its models bigger; it means they might double or triple in size every year rather than growing by orders of magnitude.
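The claim that scaling still holds can be illustrated with the kind of power-law curve reported in the scaling-law literature (the constants below follow Kaplan et al.'s published fit, not OpenAI's internal data): loss falls as a power of parameter count, so every doubling of model size buys a constant ratio of improvement.

```python
# Illustrative power-law scaling curve: loss ~ (N_c / N)^alpha for
# parameter count N. Constants are the published Kaplan et al. fit
# (alpha ~ 0.076, N_c ~ 8.8e13), used here purely for illustration.

def loss(n_params: float, alpha: float = 0.076, n_c: float = 8.8e13) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

for n in (1e9, 2e9, 4e9):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")

# Each doubling multiplies loss by 2**-alpha, a fixed ~5% improvement here.
ratio = loss(2e9) / loss(1e9)
print(f"per-doubling ratio: {ratio:.4f}")
```

The constant per-doubling ratio is why "double or triple every year" still yields steady gains, even if the million-fold jumps of the past few years are over.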

The fact that scaling still works has important implications for the development of AGI. The idea behind scaling is that we probably already have most of the pieces needed to build AGI, and that most of the remaining work will be taking existing methods and scaling them to larger models and larger datasets. If the era of model scaling were over, we would be much further from AGI. The fact that the law of scaling still holds implies that we will achieve AGI in less time.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.