
ChatGPT and others will not take over human work anytime soon. They are error-prone, and AI will not work for free.

PHPz (forwarded) · 2023-05-21 08:49:35

The release of large-scale models such as ChatGPT has left many people anxious that AI will soon take over their jobs. OpenAI has even published a study showing that ChatGPT's impact spans all income levels, and that high-income jobs may face greater risk. What are the facts?

Should we automate all jobs, even the satisfying ones?

This is one of several questions recently raised by the Future of Life Institute, which has called for a moratorium on large-scale artificial intelligence experiments; more than 10,000 people, including Elon Musk, Steve Wozniak, and Andrew Yang, have signed the initiative. There may be some hype involved, but the question still sounds serious. Setting aside whether it is desirable, how exactly could AI be used to automate all work? Is it even possible?

Douglas Kim, a researcher at the MIT Connection Science Institute, said: "I think the real obstacle is that the general AI capabilities we have seen from OpenAI and Google Bard are not in the same position the Internet or cloud infrastructure services were in when they became generally available. Generative AI is not yet ready for widespread use by hundreds of millions of workers."

Even researchers can’t keep up with the pace of AI innovation

Douglas Kim points out that while revolutionary technologies can spread quickly, they are usually not widely adopted until they have proven to be useful and easy to use. He noted that generative AI will need specific business applications before it can move beyond its core audience of early adopters.

Matthew Kirk, head of AI at Augment, holds a similar view: "I think what is happening in the AI industry resembles the early days of the Internet, when opinions were all over the place and there were no standards. It takes time and cooperation for people to settle on standards to follow. Even something as mundane as measuring time is very complex."

Standardization is a pain point in the development of artificial intelligence. The methods used to train the models and fine-tune their output are confidential, which makes fundamental questions about how they work difficult to answer. OpenAI has been touting GPT-4's ability to pass numerous standardized tests, but does the model truly understand the material, or has it simply been trained to reproduce the correct answers? And what does that mean for its ability to handle novel tasks? Researchers cannot seem to agree on the answer, nor on the methods that could be used to settle the question.


Figure: standardized test scores of GPT-3.5 compared with GPT-4.

OpenAI's GPT-4 can achieve good results on many standardized tests. Does it truly understand them, or is it trained on the correct answers?

Even if standards can be agreed upon, the physical hardware needed to design and produce widely used AI-powered tools built on large language models (LLMs) such as GPT-4, or on other generative AI systems, can also be a challenge. Lucas A. Wilson, head of global research infrastructure at Optiver, believes the AI industry is in an arms race to produce the most complex LLMs possible, which in turn rapidly increases the computational resources required to train them.

Like humans, AI doesn’t work for free

At the same time, developers must find ways to work within these limits. Training a powerful large language model (LLM) from scratch can create unique opportunities, but only large, well-funded organizations can afford it. It is much cheaper to build a service on top of an existing model; for example, OpenAI prices API access to GPT-3.5 Turbo at roughly $0.0027 per 1,000 English words. Even so, costs rise as an AI-driven service becomes popular. Either way, rolling out AI that can be used without restriction is unrealistic and forces developers to make difficult choices.
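To see how quickly those per-word prices add up, here is a back-of-the-envelope estimate in Python. It assumes the roughly $0.0027 per 1,000 English words figure cited above; the workload numbers are purely hypothetical.

```python
# Back-of-the-envelope cost estimate for an LLM-backed service, using the
# roughly $0.0027 per 1,000 English words figure cited in the article.
# The workload numbers below are purely hypothetical.
PRICE_PER_1000_WORDS = 0.0027  # USD, approximate figure from the article

def monthly_cost(requests_per_day: int, words_per_request: int, days: int = 30) -> float:
    """Estimate the monthly API bill for a fixed request volume."""
    total_words = requests_per_day * words_per_request * days
    return total_words / 1000 * PRICE_PER_1000_WORDS

if __name__ == "__main__":
    # e.g. 10,000 requests a day, about 500 words in and out per request
    print(f"${monthly_cost(10_000, 500):,.2f} per month")  # -> $405.00 per month
```

Modest per-word prices can still turn into a meaningful line item once a service handles millions of words a day, which is the tension the paragraph above describes.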

Hilary Mason, CEO and co-founder of Hidden Door, a startup building an AI platform for creating narrative games, said: "Generally speaking, startups built on AI should be very cautious about dependencies on any specific vendor's application programming interface (API). It is also possible to architect around the need for dedicated GPU capacity, but that takes considerable expertise."


Hidden Door is developing software that helps users create unique narrative experiences with artificial intelligence. This screenshot shows its AI-powered tool for generating narrative games; users can choose from a selection of included characters and prompts.
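Mason's caution about vendor dependencies can be made concrete with a thin abstraction layer between application code and whichever model happens to serve it. The sketch below is hypothetical: the class names and the `complete`/`run` calls are placeholders rather than any real SDK.

```python
# A minimal sketch of insulating application code from a specific vendor API.
# The wrapped clients and their methods are placeholders, not a real SDK.
from abc import ABC, abstractmethod

class TextGenerator(ABC):
    """The only interface the rest of the application is allowed to see."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class HostedLLM(TextGenerator):
    """Wraps some hosted vendor API (details omitted)."""
    def __init__(self, client):
        self._client = client  # vendor SDK object, injected at startup

    def generate(self, prompt: str) -> str:
        return self._client.complete(prompt)  # hypothetical vendor call

class LocalModel(TextGenerator):
    """Fallback that runs a smaller self-hosted model."""
    def __init__(self, model):
        self._model = model

    def generate(self, prompt: str) -> str:
        return self._model.run(prompt)  # hypothetical local inference call

def summarize(doc: str, llm: TextGenerator) -> str:
    # Application logic depends only on TextGenerator, so the vendor
    # behind it can be swapped without touching this code.
    return llm.generate(f"Summarize in three sentences:\n{doc}")
```

The design choice is simply dependency inversion: the application talks to an interface it owns, so a pricing change or outage at one provider does not ripple through the codebase.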

Most services built on generative AI put a fixed cap on the amount of content they will generate each month. Fees for these professional services can raise costs for businesses and slow the pace at which people's work tasks are automated. Even OpenAI, with its massive resources, limits paying ChatGPT users based on current load: as of this writing, the cap is 25 GPT-4 queries every 3 hours. That is a serious problem for anyone hoping to rely on ChatGPT for their work.
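One way developers cope with caps like these is a client-side budget that tracks how many queries have gone out in the current window. The sketch below is a hypothetical guard built around the 25-queries-per-3-hours figure mentioned above; it is not OpenAI's mechanism, only an illustration of working within a quota.

```python
# A hypothetical client-side guard for a usage cap such as "25 queries
# every 3 hours". It refuses to send a request once the budget for the
# current window is exhausted, so the caller can queue work or fall back.
import time
from collections import deque

class QueryBudget:
    def __init__(self, max_queries: int = 25, window_seconds: int = 3 * 3600):
        self.max_queries = max_queries
        self.window_seconds = window_seconds
        self._timestamps = deque()  # send times within the current window

    def try_acquire(self) -> bool:
        """Return True if another query may be sent in the current window."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) >= self.max_queries:
            return False
        self._timestamps.append(now)
        return True

budget = QueryBudget()
if budget.try_acquire():
    pass  # send the request to the model here
else:
    pass  # queue the work, or fall back to a cheaper model
```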

Developers of AI-powered tools also face a challenge as old as computers themselves: designing a good user interface. A powerful LLM that can accomplish many tasks should be an unparalleled tool, but if the person using it doesn't know where to start, that capability won't matter. Kirk noted that while ChatGPT is easy to use, the open-endedness of interacting with the AI via chat can prove overwhelming when users need to focus on a specific task.

Kirk said: "I know from past experience that making tools completely open tends to confuse users rather than help. You can think of it as an endless Hall of Porch. Most people would be confused, confused, and stuck there. We still have a lot of work to do to determine which door is best for users." Mason made a similar observation, adding: "Just Like ChatGPT, which is mainly a UX optimization of GPT-3, I think we have only just begun to create metaphors in UI design. We also need to effectively use AI models in products."

Training to use AI is a job in itself

Hallucination, a problem peculiar to LLMs, has long caused controversy, and it seriously hinders efforts to build AI tools for sensitive, high-stakes work. LLMs have an uncanny ability to generate original text, tell jokes, and invent stories about fictional characters. But when precision and accuracy are critical to the task, that same skill becomes a liability, because LLMs routinely present nonexistent sources or incorrect statements as fact.

Kim said: "In some highly regulated industries (banking, insurance, health care), it is difficult for specific business functions to reconcile very strict data privacy and anti-discrimination rules with other regulatory requirements. In these regulated industries, you can't have an AI make the kind of mistakes you could get away with in a course paper."

Businesses may soon be scrambling to hire employees with expertise in AI tools. AI safety and research company Anthropic recently made headlines with a job ad for a prompt engineer and librarian, specifying that, among other duties, the candidate would be responsible for building "a library of high-quality prompts or prompt chains to accomplish various tasks." The salary range: $175,000 to $335,000.
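For a sense of what a "library of prompts or prompt chains" might look like in practice, here is a toy sketch: named, reusable templates whose outputs can feed one another. Everything in it is hypothetical and does not reflect Anthropic's actual tooling.

```python
# A toy illustration of a "prompt library": named, reusable prompt templates
# that can be chained so one step's output feeds the next step's input.
# Entirely hypothetical; the templates and chain are made up for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptTemplate:
    name: str
    template: str  # uses str.format placeholders

    def render(self, **kwargs) -> str:
        return self.template.format(**kwargs)

LIBRARY = {
    "extract_claims": PromptTemplate(
        "extract_claims",
        "List the factual claims made in the following text:\n{text}",
    ),
    "check_claims": PromptTemplate(
        "check_claims",
        "For each claim below, say whether it needs a citation:\n{claims}",
    ),
}

def run_chain(text: str, ask_model: Callable[[str], str]) -> str:
    """Run a two-step prompt chain using some model-calling function."""
    claims = ask_model(LIBRARY["extract_claims"].render(text=text))
    return ask_model(LIBRARY["check_claims"].render(claims=claims))
```

Curating, versioning, and testing such templates is exactly the kind of new work the job ad describes.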

However, Wilson sees a tension between the expertise required to use AI tools effectively and the efficiencies AI promises to deliver.

"If you have to recruit people for the new job of training LLMs, how does that free up employees to focus on more complex or abstract work?" Wilson asked. "I haven't seen a clear answer yet."

Despite these issues, augmenting your work with artificial intelligence may still be worthwhile. That was clearly true of the computer revolution: many people needed training to use tools like Word and Excel, but few would argue that typewriters or graph paper were better alternatives. The future the Future of Life Institute's letter warns about, in which all jobs, including the satisfying ones, are replaced by automation, is still some way off at the very least. But the artificial intelligence revolution is beginning now, and its picture will keep unfolding over the next ten years.


Statement: This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.