Hype and reality of AI agents: Even GPT-4 cannot support it, and the success rate of real-life tasks is less than 15%
With the continuous evolution and self-iteration of large language models, their performance, accuracy, and stability have improved substantially, as verified by various benchmark suites.
However, the comprehensive capabilities of current LLMs do not yet seem sufficient to fully support AI agents.
Multimodal, multi-task, and multi-domain reasoning have become necessary requirements for AI agents in public discourse, but their actual performance on specific functional tasks varies greatly. This seems to once again remind AI robotics startups and large technology companies to face reality: stay down-to-earth, avoid overextending, and start with AI-enhanced features.
Recently, a blog post about the gap between the hype and the real performance of AI agents emphasized one point: "AI agents are giants in the hype, but fall short in reality." This sentence accurately captures many people's view of AI technology: as the technology advances, AI has been endowed with many eye-catching features and capabilities, yet real problems remain.
The prospect of autonomous AI agents performing complex tasks has generated considerable excitement: by interacting with external tools and features, LLMs could complete multi-step workflows without human intervention.
But it turned out to be more challenging than expected.
The WebArena leaderboard, which benchmarks LLM agents on real-world tasks in a realistic, reproducible web environment, shows that even the best-performing model achieves a success rate of only 35.8%.
WebArena leaderboard results for LLM agent performance on real-world tasks: the SteP model performs best on the success-rate metric, reaching 35.8%, while the well-known GPT-4 reaches only 14.9%.
The term "AI agent" has no settled definition, and there is considerable controversy over what exactly constitutes an agent.
An AI agent can be defined as "an LLM that is given the ability to act (usually by making function calls in a RAG environment) and to make high-level decisions about how to accomplish tasks in its environment."
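To make this definition concrete, here is a minimal sketch of such an agent loop, with the LLM's decision step stubbed out as a rule-based planner. All tool names and the planner policy are illustrative, not from any real agent framework:

```python
def search_docs(query):
    # Hypothetical retrieval tool (the "RAG" part): look up snippets in a tiny corpus.
    corpus = {"refund policy": "Refunds are accepted within 30 days."}
    return corpus.get(query, "no results")

def finish(answer):
    # Terminal "tool": the agent declares the task done.
    return answer

TOOLS = {"search_docs": search_docs, "finish": finish}

def plan(task, observations):
    # Stand-in for the LLM's high-level decision: choose the next function call
    # based on the task and what has been observed so far.
    if not observations:
        return ("search_docs", task)
    return ("finish", f"Answer based on: {observations[-1]}")

def run_agent(task, max_steps=5):
    # The agent loop: decide, act, observe, repeat until finished or out of steps.
    observations = []
    for _ in range(max_steps):
        tool, arg = plan(task, observations)
        result = TOOLS[tool](arg)
        if tool == "finish":
            return result
        observations.append(result)
    return "gave up"
```

In a real system the `plan` step is an LLM call and the tools are browsers, APIs, or databases; the article's point is that it is precisely this decision step that current models get wrong too often.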
Currently, there are two main architectural approaches to building AI agents: single-agent systems and multi-agent systems.
In theory, a single agent with infinite context length and perfect attention would be ideal; because each agent works with a shorter context, a multi-agent system will always perform worse than such a single agent on a given problem.
After witnessing many attempts at AI agents, the author believes they are still premature: too costly, too slow, and unreliable. Many AI agent startups seem to be waiting for a model breakthrough before racing to productize their agents.
The performance of AI agents in real applications is not yet mature, reflected in problems such as inaccurate output, disappointing performance, high cost, liability risk, and inability to win user trust:
Currently, the following startups are entering the AI agent field, but most remain experimental or invite-only:
Among them, only MultiOn seems to be pursuing the "give instructions and observe their execution" approach, which is more consistent with the promise of AI agents.
Every other company is taking the RPA (record-and-replay) route, which may be necessary at this stage to ensure reliability.
Meanwhile, some major companies are also bringing AI capabilities to the desktop and browser, and it looks like we will get native AI integration at the system level.
OpenAI has announced their Mac desktop application that interacts with the operating system screen.
At Google I/O, Google demonstrated Gemini automating shopping returns.
Microsoft announced Copilot Studio, which will allow developers to build AI agent robots.
These technical demonstrations are impressive; we can wait and see how these agent features perform once they are publicly released and tested in real scenarios, rather than in carefully selected demo cases.
The author emphasizes: "AI agents have been over-hyped, and most are not ready for mission-critical use."
However, as the underlying models and architectures advance rapidly, he said one can still expect to see more successful real-world applications.
The most promising path forward for AI agents may be this:
By combining tightly constrained LLMs, good evaluation data, human-in-the-loop supervision, and traditional engineering methods, reliable results can be achieved on complex automation tasks.
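The pattern described above can be sketched as follows: constrain the model to a narrow output format, validate that output with ordinary engineering checks, and escalate to a human only when validation fails. The function names and the stubbed "LLM" are hypothetical, used purely for illustration:

```python
import re

def fake_llm_extract(text):
    # Stand-in for a tightly scoped LLM call whose only job is to
    # return an ISO date string found in the input text.
    match = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return match.group(0) if match else text

def is_valid_date(s):
    # Traditional engineering: a hard, deterministic validation gate.
    return re.fullmatch(r"\d{4}-\d{2}-\d{2}", s) is not None

def extract_date(text, ask_human):
    # Constrained LLM + validation + human-in-the-loop fallback.
    candidate = fake_llm_extract(text)
    if is_valid_date(candidate):
        return candidate, "auto"      # model output passed the gate
    return ask_human(text), "human"   # escalate to a human reviewer
```

The design choice here is that the LLM is never trusted end to end: every output either passes a deterministic check or is routed to a person, which is what makes narrow automation reliable today.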
Will AI agents automate tedious and repetitive tasks such as web scraping, form filling, and data entry?
Author: "Yes, absolutely."
Will AI agents automatically book vacations without human intervention?
Author: "Unlikely at least in the near future."