Read half of 'The Three-Body Problem' in one sitting! GPT-4's strongest competitor suddenly upgrades to 100,000 tokens, breezing through paper summaries and code demos

While GPT-4 32K was still in limited internal testing, OpenAI's strongest rival went straight for a longer context window.

Today, startup Anthropic announced that Claude now supports a context length of 100K tokens, which is approximately 75,000 words.


What does that mean in practice?

An average person needs about 5 hours just to read that much text, and then even more time to digest, memorize, and analyze it.

Claude gets through it in less than a minute.

Feed it the entire text of "The Great Gatsby", about 72K tokens, with a single sentence changed:

Mr. Carraway is a software engineer working on machine learning tools at Anthropic.

Can you believe it? It only took Claude 22 seconds to find the changed sentence.

Many netizens said that next to Claude 100K, the GPT-4 32K in their hands suddenly looks a lot less appealing.


Claude 100K: seriously impressive

Not long ago, the OpenAI developer community was abuzz with discussion that GPT-4 32K was rolling out.

Moreover, many GPT-4 users could already see the GPT-4 32K option in their Playground.


One netizen who had unlocked this version fed it hundreds of data points from users who had uninstalled HyperWrite, and GPT-4 told him exactly what improvements to make next.

He praised GPT-4 32K as the best product manager in the world.


If 32K is already this powerful, wouldn't 100K be even stronger?

Clearly, OpenAI's powerful rival Anthropic has seized the advantage first.

A context length of 100K tokens means you can upload hundreds of pages of material for Claude to analyze. Conversation length has also been greatly extended, to hours or even days.

Beyond long-form reading, Claude can also quickly retrieve the information you need from documents.

You can drop multiple documents, or even an entire book, into the prompt and then ask questions.

The next time you run into a paper, however long, you can simply ask Claude to summarize it; great news for graduate students buried in reading.
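As a rough illustration of that workflow, here is a minimal sketch using Anthropic's Python SDK as it existed at the time (the completions endpoint). The model identifier claude-v1-100k, the file name, and the question are assumptions for illustration, not details from the article.

```python
# Minimal sketch: ask Claude (100K context) to summarize a long paper.
# Assumes the Anthropic Python SDK of that era and an ANTHROPIC_API_KEY
# environment variable; the model name and file path are illustrative.
import os
import anthropic

client = anthropic.Client(os.environ["ANTHROPIC_API_KEY"])

with open("long_paper.txt", encoding="utf-8") as f:   # hypothetical input file
    paper_text = f.read()

prompt = (
    f"{anthropic.HUMAN_PROMPT} Here is a research paper:\n\n{paper_text}\n\n"
    "Summarize the key contributions and limitations in five bullet points."
    f"{anthropic.AI_PROMPT}"
)

response = client.completion(
    prompt=prompt,
    model="claude-v1-100k",            # 100K-context model name (assumed)
    max_tokens_to_sample=500,
    stop_sequences=[anthropic.HUMAN_PROMPT],
)
print(response["completion"])
```

The only real difference from an ordinary Claude call is that the whole document travels inside the prompt, so no chunking or retrieval step is needed.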

Read half of The Three-Body Problem in one sitting! The strongest competitor of GPT-4 suddenly upgraded to 100,000 tokens, and the paper code demonstration was completed

Questions like these usually require synthesizing content from many parts of the text. For this kind of problem, Claude arguably works better than approaches based on vector search.

Claude can also act as your "code companion", putting together a demo in minutes.

For example, upload the 240-page LangChain API documentation and ask Claude to build a simple LangChain demo, based on that documentation, that uses Anthropic's language model.
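For a sense of what such a demo might look like, here is a hedged minimal sketch using LangChain's Anthropic wrapper from that era; the import paths, parameters, and model name are assumptions about the then-current API, not anything taken from the 240-page documentation mentioned above.

```python
# Minimal LangChain demo backed by Anthropic's Claude.
# Class names and the model identifier reflect the langchain API of that
# era and are best-effort assumptions, not details from the article.
from langchain.llms import Anthropic
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = Anthropic(model="claude-v1-100k", max_tokens_to_sample=300)

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in three sentences for a new engineer.",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="context windows in large language models"))
```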

You can also feed Claude an 85-page company annual report (a 10-K filing).

Then ask it to highlight the items most important to potential investors and explain why they matter.

In addition, Claude 100K can digest the transcript of roughly 6 hours of audio.

For example, AssemblyAI transcribed a John Carmack podcast into 58K tokens of text, then used Claude to summarize it and answer questions.
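Sketching that pipeline, assuming the transcript has already been produced (for example by AssemblyAI) and saved locally: the count_tokens helper, the model name, and the file name are assumptions about that era's SDK rather than details from the article.

```python
# Minimal sketch: confirm a podcast transcript fits in the 100K window,
# then ask Claude about it. The transcript file is assumed to have been
# produced separately (e.g. by AssemblyAI); all names are illustrative.
import os
import anthropic

client = anthropic.Client(os.environ["ANTHROPIC_API_KEY"])

with open("carmack_podcast.txt", encoding="utf-8") as f:
    transcript = f.read()

n_tokens = anthropic.count_tokens(transcript)     # SDK token counter (assumed)
assert n_tokens < 100_000, f"transcript is {n_tokens} tokens, too long"

prompt = (
    f"{anthropic.HUMAN_PROMPT} Below is a podcast transcript:\n\n{transcript}\n\n"
    "Summarize the main topics discussed, with one sentence per topic."
    f"{anthropic.AI_PROMPT}"
)

response = client.completion(
    prompt=prompt,
    model="claude-v1-100k",
    max_tokens_to_sample=400,
)
print(response["completion"])
```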


Finally, Claude itself summarized what it is now capable of, and the coverage is quite comprehensive:

- Understand, summarize, and interpret dense documents such as financial statements and research papers

- Analyze a company's strategic risks and opportunities based on its annual report

- Evaluate the pros and cons of a piece of legislation

- Identify risks, themes, and different forms of argument across legal documents

- Read hundreds of pages of developer documentation and answer technical questions

- Rapidly prototype by putting an entire codebase into context and intelligently building on or modifying it

For now, Anthropic says the 100K context is still a beta feature, billed at standard API pricing during this period.


The official website lists the specific prices:

Claude Instant

- Prompt: $0.00163 / 1K tokens

- Completion: $0.00551 / 1K tokens

Claude-v1

- Prompt: $0.01102 / 1K tokens

- Completion: $0.03268 / 1K tokens

Compared with OpenAI, these prices are quite affordable.

According to OpenAI's website, GPT-4 32K costs $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.

In other words, prompting GPT-4 32K costs roughly 5-6 times as much as prompting Claude-v1.
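As a quick sanity check on that ratio, here is a small worked example comparing the cost of one long-document query under both price lists; the token counts (a 30K-token prompt plus a 1K-token answer, small enough to fit in either context window) are illustrative assumptions.

```python
# Worked example: cost of one long-document query under each price list.
# Token counts are illustrative; prices are per 1K tokens as quoted above.
PROMPT_TOKENS = 30_000       # a long report that fits in both context windows
COMPLETION_TOKENS = 1_000

def query_cost(prompt_price_per_1k: float, completion_price_per_1k: float) -> float:
    return (PROMPT_TOKENS / 1000) * prompt_price_per_1k + \
           (COMPLETION_TOKENS / 1000) * completion_price_per_1k

claude_v1 = query_cost(0.01102, 0.03268)   # ~$0.36
gpt4_32k = query_cost(0.06, 0.12)          # ~$1.92
print(f"Claude-v1: ${claude_v1:.2f}, GPT-4 32K: ${gpt4_32k:.2f}, "
      f"ratio: {gpt4_32k / claude_v1:.1f}x")   # roughly 5x
```

On these numbers, the per-query gap lands close to the 5-6x prompt-price ratio quoted above.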

Netizens said that Claude 100k is faster and cheaper than GPT-4 32k.

Netizens put it to the test

A blockbuster update like this naturally drew netizens eager to try it out.

Some netizens said that 100K is simply incredible: it can handle multiple complete papers, sizable chunks of a codebase, and even a 250-page novel.


Many netizens who tried Claude early on found the results impressive.

Initially, 100K was limited to the API, and the Claude app still defaulted to a 9K-context model. But soon afterwards, the Claude app interface supported 100K as well.


One netizen tested it with the 100-page "GPT-4 Technical Report", and the results can only be described as amazing.


Others fed Osamu Dazai's "No Longer Human" directly to Claude and asked about the plot in English; the answers were completely accurate.


Meanwhile, one netizen gave Claude the complete source code of Toolformer Zero, a project he had developed himself, and Claude accurately described what it was for.

Claude even praised the code's modularity and suggested adding some unit tests.


Throwing in the epic poem "Beowulf" and asking for an analysis of Beowulf's character also produced very accurate results.


NVIDIA scientist Jim Fan said this is the trump card Anthropic has played, and that the arms race over context length is heating up fast.


On the significance of 100K support, netizens exclaimed that it is just too cool, and a great demonstration of why long context matters for LLMs.


Many netizens also took the opportunity to take a jab at GPT-4.

The birth of Claude 100K makes AnthropicAI officially a real competitor to OpenAI.

"Many people are still waiting in line for 32k GPT-4. This time, Claude expanded the context window to 100,000 tokens, which was a huge jump.

This also means that companies including OpenAI and Google have to compete in this field, which is a huge victory for users."


Some netizens marveled at how fast things are moving.

Less than a day after Google announced that PaLM 2 excels at advanced reasoning tasks, Anthropic's Claude can now digest 100,000 tokens in under a minute. The pace of progress in AI is indeed impressive.


However, if you enter fewer than 9K tokens, Anthropic appears to route the request to the previous model.


A million tokens is not a dream

Over the past few years, Stanford's Hazy Research lab has been working on an important problem: increasing the sequence length that models can handle.

In their view, this will usher in a new era of machine learning foundation models.

The FlashAttention algorithm, proposed by its researchers in 2022, demonstrated that 32K contexts are feasible.


Even Sam Altman said: we want 32K tokens.


In fact, not just 32K: 100K has now been achieved, and a million tokens may not be far away.

"Absolutely too wild! In a few years, will it be possible to support a token context length of 1 million?"


Not long ago, researchers from DeepPavlov, AIRI, and the London Institute for Mathematical Sciences released a technical report showing that the Recurrent Memory Transformer (RMT) can increase BERT's effective context length to "an unprecedented 2 million tokens" while maintaining high memory retrieval accuracy.


Paper address: https://arxiv.org/abs/2304.11062

The method stores and processes both local and global information, and uses recurrence to let information flow between segments of the input sequence.

However, although RMT does not increase memory consumption and can scale to nearly unlimited sequence lengths, it still suffers from the memory decay problem of RNNs and requires longer inference time.

In fact, behind RMT is a brand new memory mechanism.

Concretely, special memory tokens are added to the input or output sequence without changing the original Transformer model, and the model is then trained to control both memory operations and sequence representation processing.

Compared with Transformer-XL, RMT requires less memory and can handle tasks over longer sequences.
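To make the mechanism concrete, here is a minimal PyTorch-style toy sketch of segment-level recurrence with memory tokens. It illustrates the general idea on top of a stock Transformer encoder; it is not the authors' implementation, and every name and size in it is an assumption.

```python
# Toy sketch of the Recurrent Memory Transformer idea: a fixed set of
# memory token embeddings is prepended to each segment, the segment is
# processed by an unmodified Transformer encoder, and the updated memory
# slots are carried over (recurrently) to the next segment.
import torch
import torch.nn as nn

class ToyRMT(nn.Module):
    def __init__(self, d_model=256, n_mem=8, n_layers=4, n_heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_mem, d_model))  # initial memory tokens
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)    # unchanged Transformer
        self.n_mem = n_mem

    def forward(self, segments):
        """segments: list of [batch, seg_len, d_model] embedding tensors."""
        batch = segments[0].size(0)
        mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for seg in segments:
            x = torch.cat([mem, seg], dim=1)      # prepend memory tokens
            h = self.encoder(x)
            mem = h[:, :self.n_mem]               # updated memory flows to the next segment
            outputs.append(h[:, self.n_mem:])     # per-segment representations
        return outputs, mem

# Usage: split a long sequence of embeddings into segments, process in order.
model = ToyRMT()
segs = [torch.randn(2, 128, 256) for _ in range(4)]   # 4 segments of 128 tokens each
outs, final_mem = model(segs)
```

The key point is that the encoder itself is unchanged; only the prepended memory slots carry information from one segment to the next.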

Of course, on the road to one million tokens, Claude 100K is already a pretty big step.
