Watch 'Harry Potter' in one sitting: AI large model 'Quantum Speed Reading', one minute is equivalent to five human hours
In recent times, OpenAI has been the most closely watched research institution thanks to its GPT series models and ChatGPT. But over the past two years, an AI start-up has also been coming into view: Anthropic. Founded in 2021, the company focuses on developing general AI systems and language models, and adheres to the principle of responsible AI use.
You may recall the collective resignation of OpenAI's core employees at the end of 2020, which caused quite a stir in AI circles at the time. Anthropic was founded by these departed staff, including Dario Amodei, former vice president of research at OpenAI, and Tom Brown, first author of the GPT-3 paper, among others.
Anthropic raised more than $700 million in funding in January, with its latest round valuing the company at $5 billion. Meanwhile, just two months after the release of ChatGPT, the company launched Claude, an AI system positioned as a direct rival to its founders' former employer's ChatGPT.
Claude application access address: https://www.anthropic.com/earlyaccess
Claude uses a mechanism called "constitutional AI" developed by Anthropic itself, which aims to provide a "principles-based" approach to align AI systems with human intentions.
Claude can handle tasks such as summarization, search, writing assistance, Q&A, and coding. According to user feedback, Claude is less likely to produce harmful output, easier to converse with, and easier to steer. In addition, Claude's personality, tone, and behavior can be set through instructions.
However, Anthropic has not disclosed many technical details about Claude; instead, it describes the approach in the paper "Constitutional AI: Harmlessness from AI Feedback". Interested readers can check out the technology behind it there.
Paper address: https://arxiv.org/pdf/2212.08073.pdf
Back in January, some researchers tested the maximum amount of text Claude could process at once, finding that it could recall information within a window of about 8k tokens.
Not long after, Anthropic announced 100K Context Windows on Thursday, expanding Claude's context window from 9k tokens to 100k, equivalent to about 75,000 words. This means businesses can submit hundreds of pages of material for Claude to digest and interpret, and conversations with it can last for hours or even days. The 100K context windows are now accessible via the Anthropic API.
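For readers curious what calling the API looks like, below is a minimal sketch of the request body for Anthropic's completions-style API as it existed when the 100K window launched. The model name "claude-v1-100k", the endpoint, and the prompt wrapping are assumptions based on that era's API conventions; check Anthropic's current documentation before relying on them.

```python
import json

# Endpoint of the completions-style API at the time (an assumption).
API_URL = "https://api.anthropic.com/v1/complete"

def build_request(document: str, question: str, max_tokens: int = 1024) -> dict:
    """Pack a long document and a question into a single prompt payload."""
    # The Human/Assistant turn markers follow the old completions format.
    prompt = f"\n\nHuman: {document}\n\n{question}\n\nAssistant:"
    return {
        "model": "claude-v1-100k",       # illustrative model name
        "prompt": prompt,
        "max_tokens_to_sample": max_tokens,
    }

payload = build_request("<hundreds of pages of material>", "Summarize this text.")
print(json.dumps(payload)[:60])
```

The payload would then be POSTed to the API with an authentication header; the point is simply that the entire document travels inside one prompt.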
As we know, the more advanced a large model is, the longer the text it can process at once. What does 75,000 words mean in practice? Roughly, it is enough to summarize the first "Harry Potter" book in one click.
It looks like most essays, news reports, and short stories are well within its reach.
Research suggests that an average person needs about 5 hours to read 100,000 tokens of text, and may need even longer to digest, remember, and analyze that information. Claude can now do this in under a minute.
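The numbers above can be checked with back-of-the-envelope arithmetic, assuming the common heuristic of roughly 0.75 English words per token and a typical adult reading speed of about 250 words per minute (both figures are assumptions, not from the article):

```python
TOKENS = 100_000
WORDS_PER_TOKEN = 0.75       # rough heuristic for English text
READING_SPEED_WPM = 250      # assumed typical adult reading speed

words = TOKENS * WORDS_PER_TOKEN        # 75,000 words, matching the article
minutes = words / READING_SPEED_WPM     # 300 minutes
hours = minutes / 60                    # 5.0 hours

print(f"{words:.0f} words ≈ {hours:.1f} hours of reading")
```

Under these assumptions the arithmetic lands exactly on the article's figures: 100k tokens is about 75,000 words, or about 5 hours of human reading.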
General calculations for context windows.
Anthropic first loaded the entire text of "The Great Gatsby" (about 72k tokens) into Claude Instant. (Claude comes in two versions: Claude, the most advanced high-performance model, and Claude Instant, a lighter, cheaper, faster option.) One line was modified to read "Mr. Carraway is a software engineer working on machine learning at Anthropic." When Anthropic asked the model to find the difference from the original text, it gave the correct answer in 22 seconds.
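This "find the modified line" test can be framed as a single prompt over the whole book. The sketch below shows one way to construct such a prompt; the instruction wording and the `<book>` delimiters are illustrative assumptions, not Anthropic's actual prompt.

```python
def diff_prompt(full_text: str) -> str:
    """Wrap an entire (modified) book in a prompt asking for the changed line."""
    instruction = (
        "The following is the full text of The Great Gatsby, with one line "
        "modified. Identify the line that differs from the original novel."
    )
    # Delimiters help the model separate the instruction from the document.
    return f"{instruction}\n\n<book>\n{full_text}\n</book>"

modified_book = (
    "...Mr. Carraway is a software engineer working on "
    "machine learning at Anthropic...."
)
prompt = diff_prompt(modified_book)
```

With a 100k-token window, the ~72k-token novel plus the instruction fits in one such prompt with room to spare.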
In addition to reading long texts, Claude also helps retrieve information from documents, which benefits business operations. Users can drop multiple documents, or even a book, into the prompt and then ask Claude questions that require synthesizing knowledge from many parts of the text. For complex questions, this may be far more efficient than vector-search-based approaches. Claude can follow the user's instructions and return the information they are looking for, just like a human assistant.
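The multi-document workflow described above amounts to "prompt stuffing": concatenating several documents with separators so one question can draw on all of them, instead of retrieving snippets via vector search. A minimal sketch, where the separator format and function names are assumptions for illustration:

```python
def stuff_documents(docs: dict[str, str], question: str) -> str:
    """Concatenate titled documents into one prompt ending with a question."""
    parts = [
        f'<document title="{title}">\n{body}\n</document>'
        for title, body in docs.items()
    ]
    return "\n\n".join(parts) + f"\n\nUsing all documents above, answer: {question}"

prompt = stuff_documents(
    {"Q1 report": "Revenue rose 12%...", "Q2 report": "Revenue fell 3%..."},
    "How did revenue change across the two quarters?",
)
```

The trade-off is simple: prompt stuffing guarantees the model sees every document in full, while vector search only surfaces the chunks it judges relevant; a 100k-token window makes the former viable for surprisingly large document sets.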
Anthropic then fed the 240-page API developer documentation for LangChain, a popular tool for building large language model applications, into the model and asked it questions; in response, Claude produced a working LangChain demo that uses the Anthropic language model.
Meanwhile, 100k tokens corresponds to roughly 6 hours of audio. AssemblyAI gave a great demonstration of this: it transcribed a long podcast into almost 58k words and then used Claude for summarization and Q&A.
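The AssemblyAI figures quoted above can be sanity-checked: a roughly six-hour podcast transcribed to about 58,000 words implies a plausible speaking rate, and the transcript fits inside the 100k-token window (again using the rough 0.75-words-per-token heuristic, which is an assumption):

```python
transcript_words = 58_000   # words in the transcribed podcast
audio_hours = 6             # approximate podcast length

# Implied speaking rate, ~161 words per minute: typical for conversation.
words_per_minute = transcript_words / (audio_hours * 60)

# Tokens needed for the transcript, ~77k: well under the 100k window.
tokens_needed = transcript_words / 0.75

assert tokens_needed < 100_000  # the whole transcript fits in one prompt
```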
Source: AssemblyAI
In short, with 100K context windows, users can do the following:
Source: Twitter @nathanwchan
As for pricing, Anthropic engineer Ben Mann said the per-million-token price of 100K Context Windows is the same as for previous models.