Read half of "The Three-Body Problem" in one sitting! GPT-4's strongest rival suddenly upgrades to 100,000 tokens, polishing off papers and code demos in one go

PHPz | 2023-05-17 17:52:18

While GPT-4 32K was still in internal testing, OpenAI's strongest rival simply leapfrogged it on context length.

Just today, the startup Anthropic announced that Claude now supports a context length of 100K tokens, roughly 75,000 words.

What does that mean in practice?

An average person needs about 5 hours just to read that much text, and then still has to spend more time digesting, memorizing, and analyzing it.

Claude gets through it in under a minute.

Throw it the entire text of "The Great Gatsby", about 72K tokens, with a single sentence changed:

Mr. Carraway is a software engineer working on machine learning tools at Anthropic.

Can you believe it? It only took Claude 22 seconds to find the changed sentence.

Many netizens said that with Claude 100K out, the GPT-4 32K they had been coveting suddenly lost its shine.

Claude 100K, what a treat!

A while ago, the OpenAI developer community was abuzz with word that GPT-4 32K was being rolled out.

Moreover, many GPT-4 users could already see the GPT-4 32k option in their Playground.

One netizen who had unlocked this version fed GPT-4 32k hundreds of data points from users who had churned from HyperWrite, and GPT-4 told him exactly what improvements to make next.

He raved that GPT-4 32k is the best product manager in the world.

If 32k is already this powerful, wouldn't 100K be stronger still?

Evidently, Anthropic, OpenAI's formidable rival, has seized that ground first.

A context length of 100K tokens means you can upload hundreds of pages of material for Claude to analyze, and conversations can now stretch over hours or even days.

Beyond long-form reading, Claude can also quickly retrieve the information you need from documents.

You can drop multiple documents, or even the contents of an entire book, into the prompt and then ask questions about them.

The next time you run into a paper, however long, just hand it to Claude to summarize, which is great news for anyone slogging through a reading list.
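To make this concrete, here is a minimal, hypothetical sketch of long-document Q&A over the API. It assumes the anthropic Python SDK as it existed at the 100K launch and the "claude-v1-100k" model name from Anthropic's announcement; the file name and question are invented:

```python
# A hedged sketch, not an official Anthropic example: assumes the May-2023
# anthropic Python SDK; "paper.txt" and the question are placeholders.
import anthropic

client = anthropic.Client(api_key="YOUR_API_KEY")

with open("paper.txt") as f:
    paper = f.read()  # tens of thousands of tokens now fit in one prompt

prompt = (
    f"{anthropic.HUMAN_PROMPT} Here is a paper:\n\n{paper}\n\n"
    "Summarize its key contributions in five bullet points."
    f"{anthropic.AI_PROMPT}"
)

response = client.completion(
    prompt=prompt,
    model="claude-v1-100k",  # the 100K-context model named at launch
    max_tokens_to_sample=500,
    stop_sequences=[anthropic.HUMAN_PROMPT],
)
print(response["completion"])
```

The whole paper rides along in a single prompt, so no chunking or vector index is needed.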

Broad questions like these usually require understanding content spread across many parts of a text, and on such problems Claude arguably beats approaches based on vector search.

Claude can also serve as your "coding companion", turning out a working demo in minutes.

For example, upload the 240-page LangChain API documentation and ask it to build, grounded in that documentation, a simple LangChain demo that uses Anthropic's own language model (see the sketch below).
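As a rough illustration of that demo, here is a hypothetical sketch using LangChain's mid-2023 ChatAnthropic wrapper; the docs file name and the question are invented, and the exact interfaces may have differed:

```python
# A sketch under assumptions: LangChain's mid-2023 ChatAnthropic wrapper,
# with the LangChain docs exported to a local text file (hypothetical name).
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

llm = ChatAnthropic(model="claude-v1-100k", max_tokens_to_sample=1024)

with open("langchain_api_docs.txt") as f:  # the ~240-page API docs as text
    docs = f.read()

question = (
    "Using only the documentation above, write a minimal LangChain script "
    "that calls an Anthropic model to summarize a text file."
)
reply = llm([HumanMessage(content=f"{docs}\n\n{question}")])
print(reply.content)
```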

You can also feed Claude a company's 85-page annual report (a Form 10-K).

Then ask it to highlight the items that matter most to potential investors and explain why they matter.

On top of that, Claude 100K can take in the transcript of roughly 6 hours of audio.

For example, AssemblyAI transcribed a Carmack podcast into 58K tokens of text, then used Claude to summarize it and answer questions about it.

Finally, Claude summed up its own capabilities, and the coverage is comprehensive:

- Understand, summarize, and interpret dense documents such as financial statements and research papers

- Analyze a company's strategic risks and opportunities based on its annual report

- Assess the pros and cons of a piece of legislation

- Identify risks, themes, and forms of argument across legal documents

- Read hundreds of pages of developer documentation and answer technical questions

- Rapidly prototype by dropping an entire codebase into context and intelligently building on or modifying it

For now, Anthropic says the 100K context is still a beta feature, billed at standard API pricing during this period.

The official website also lists specific prices:

Claude Instant

Prompt: $0.00163 / 1K tokens

Completion: $0.00551 / 1K tokens

Claude-v1

Prompt: $0.01102 / 1K tokens

Completion: $0.03268 / 1K tokens

Compared with OpenAI's, these prices are quite affordable.

According to OpenAI's official site, GPT-4 32k costs $0.06 / 1K tokens for prompts and $0.12 / 1K tokens for completions.

In other words, prompting GPT-4 32k costs roughly 5-6 times as much per token as Claude-v1 (see the quick calculation below).
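A quick back-of-the-envelope check of that claim, using only the prices quoted above (a 100K-token prompt plus a 1K-token completion; GPT-4 32k could not actually accept 100K tokens, so this is purely a per-token comparison):

```python
# USD per 1K tokens (prompt, completion), from the prices quoted above.
prices = {
    "claude-instant": (0.00163, 0.00551),
    "claude-v1":      (0.01102, 0.03268),
    "gpt-4-32k":      (0.06,    0.12),
}
prompt_k, completion_k = 100, 1  # 100K-token prompt, 1K-token completion
for model, (p, c) in prices.items():
    print(f"{model}: ${prompt_k * p + completion_k * c:.2f}")
# claude-v1 ~ $1.13 vs gpt-4-32k ~ $6.12: roughly 5-6x on prompt tokens.
```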

Netizens said that Claude 100k is faster and cheaper than GPT-4 32k.

Netizens put it to the test

A blockbuster update like this naturally sent netizens rushing to try it.

Some netizens said 100K is simply incredible: it can hold multiple complete papers, sizable chunks of a codebase, or even a 250-page novel.

And many netizens who tried Claude early found that it works remarkably well.

Initially, 100K was limited to the API, with the Claude app still defaulting to 9K, but before long the app interface supported 100K as well.

One netizen tested it with the 100-page "GPT-4 Technical Report", and the results can only be described as amazing.

Someone fed Osamu Dazai's "No Longer Human" straight to Claude and asked about the plot in English; the answers were completely accurate.

The same netizen then threw in the complete source code of Toolformer Zero, which he had developed himself, and Claude accurately described what it is for.

Claude even praised the code's modularity and suggested adding some unit tests.

Feed in the epic poem "Beowulf" and ask for an analysis of Beowulf's character, and the result is likewise spot-on.

NVIDIA scientist Jim Fan said this is the trump card Anthropic has played, and that the coming arms race over context length is heating up fast.

On the significance of 100K support, one netizen exclaimed "so cool!", calling it a good demonstration of why long context matters to LLMs.

Many netizens also took the occasion to needle GPT-4.

The birth of Claude 100K makes Anthropic officially a real competitor to OpenAI.

"Many people are still waiting in line for 32k GPT-4. This time, Claude expanded the context window to 100,000 tokens, a huge jump.

It also means that companies including OpenAI and Google will have to compete in this arena, which is a huge win for users."

Some netizens marveled at how fast things are moving.

Less than a day after Google announced that PaLM 2 excels at advanced reasoning tasks, Anthropic's Claude can now digest 100,000 tokens in under a minute. The pace of AI progress really is impressive.

One caveat: if the input is under 9K tokens, Anthropic appears to route the request to the previous model.

A million tokens is not a dream

For the past few years, Stanford's Hazy Research lab has been pursuing an important line of work: increasing models' sequence length.

In their view, this will usher in a new era of foundation models.

The FlashAttention algorithm, proposed by researchers in 2022, demonstrated the feasibility of 32K.
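The core trick in FlashAttention is to compute attention block by block with a running ("online") softmax, so the full L x L score matrix is never materialized. Below is a minimal NumPy sketch of that idea, illustrative only: the real kernel also tiles the queries and fuses everything into one GPU pass:

```python
import numpy as np

def blockwise_attention(q, k, v, block_size=256):
    """Attention over key/value blocks with an online softmax: memory stays
    O(L*d) instead of O(L^2). Illustrates the FlashAttention idea only."""
    L, d = k.shape
    out = np.zeros_like(q)                  # running weighted sum of values
    row_max = np.full(len(q), -np.inf)      # running max score per query
    row_sum = np.zeros(len(q))              # running softmax denominator
    for start in range(0, L, block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        scores = q @ kb.T / np.sqrt(d)      # one tile of the score matrix
        new_max = np.maximum(row_max, scores.max(axis=1))
        scale = np.exp(row_max - new_max)   # rescale earlier accumulators
        p = np.exp(scores - new_max[:, None])
        row_sum = row_sum * scale + p.sum(axis=1)
        out = out * scale[:, None] + p @ vb
        row_max = new_max
    return out / row_sum[:, None]

# Sanity check against naive attention on a toy size.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((512, 64)) for _ in range(3))
s = q @ k.T / np.sqrt(64)
w = np.exp(s - s.max(axis=1, keepdims=True))
assert np.allclose(blockwise_attention(q, k, v),
                   (w / w.sum(axis=1, keepdims=True)) @ v)
```

Because the accumulators are just per-query scalars and vectors, sequence length enters memory only linearly, which is what makes 32K-plus contexts tractable.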

Even Sam Altman said that 32K tokens is what we want.

In fact, not only has 32K arrived, 100K is now a reality, and a million tokens are not far off.

"Absolutely wild! In a few years, will it be possible to support a context length of 1 million tokens?"

Some time ago, researchers from DeepPavlov, AIRI, and the London Institute for Mathematical Sciences released a technical report showing that the Recurrent Memory Transformer (RMT) can extend BERT's effective context length to "an unprecedented 2 million tokens" while maintaining high memory retrieval accuracy.

Paper address: https://arxiv.org/abs/2304.11062

The method stores and processes both local and global information, using recurrence to let information flow between segments of the input sequence.

However, although RMT adds no extra memory consumption and can scale to nearly unlimited sequence lengths, it still inherits the memory-decay problem of RNNs and requires longer inference time.

In fact, behind RMT is a brand new memory mechanism.

Concretely, special memory tokens are added to the input or output sequence without changing the original Transformer model, and the model is then trained to control both memory operations and sequence-representation processing (a minimal sketch follows).
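The following is a minimal, hypothetical PyTorch sketch of that mechanism, not the paper's actual code: a few learned memory tokens are concatenated to each segment, an unmodified Transformer encoder processes them together, and the updated memory states are carried into the next segment (all sizes are illustrative):

```python
import torch
import torch.nn as nn

class RecurrentMemorySketch(nn.Module):
    """Toy RMT-style wrapper: learned memory tokens ride along with each
    segment through a plain Transformer encoder, and their updated states
    are handed to the next segment. Hyperparameters are illustrative."""

    def __init__(self, d_model=256, n_memory=8, n_layers=2):
        super().__init__()
        self.n_memory = n_memory
        self.memory = nn.Parameter(torch.randn(n_memory, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, segments):
        # segments: list of (batch, seg_len, d_model) tensors
        batch = segments[0].shape[0]
        mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for seg in segments:
            x = torch.cat([mem, seg], dim=1)      # [memory; segment tokens]
            y = self.encoder(x)
            mem = y[:, :self.n_memory]            # memory flows onward
            outputs.append(y[:, self.n_memory:])  # segment representations
        return torch.cat(outputs, dim=1), mem

# A long input split into 4 segments of 128 tokens each.
model = RecurrentMemorySketch()
segments = [torch.randn(2, 128, 256) for _ in range(4)]
reps, final_mem = model(segments)
print(reps.shape, final_mem.shape)  # (2, 512, 256) and (2, 8, 256)
```

Because the base Transformer is untouched, context length grows with the number of segments while per-step attention cost stays fixed at the segment size.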

Compared with Transformer-XL, RMT requires less memory and can handle tasks over longer sequences.

Before anyone finally reaches a million tokens, Claude 100K is already a pretty big head start.
