Internal investigation finds: OpenAI doesn't know how its AI makes decisions
Reference News Network reported on May 14: According to a May 13 report on the website of the Spanish newspaper "Vanguard," OpenAI, the company behind ChatGPT, recently tried in an internal study to figure out how its language models make decisions, and reached the conclusion: "It works, but little is known about how it works."
According to the report, a recently published paper by OpenAI researchers states that "language models are now becoming more capable and more widespread, but we do not understand how they work." This admission lends weight to those who have publicly raised alarms about the safety of some artificial intelligence systems.
OpenAI also stated that "recent research has made progress in understanding a small number of circuits and narrow behaviors," but the company believes that fully understanding a language model would require "analyzing millions of neurons." Faced with this enormous workload, it has begun using a technique that automatically analyzes every neuron in a language model.
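The article does not spell out how that automatic analysis works. As a rough illustration only, here is a minimal sketch assuming an "explain, then score" loop: a stronger model proposes a natural-language explanation for a single neuron, and the explanation is scored by how well it predicts the neuron's activations. `explainer_model` and `simulate_activations` are hypothetical helpers, not a real OpenAI API.

```python
# Illustrative sketch of automated neuron analysis (assumed workflow, not
# OpenAI's published implementation). A stronger "explainer" model writes a
# one-sentence description of a neuron, and the description is scored by how
# well activations predicted from it alone correlate with real activations.

from typing import Callable, List
import numpy as np


def explain_neuron(
    snippets: List[str],
    activations: List[List[float]],          # per-token activations for each snippet
    explainer_model: Callable[[str], str],   # hypothetical: prompt -> explanation text
) -> str:
    """Ask a larger model to summarize what makes this neuron fire."""
    examples = "\n".join(
        f"text: {s!r} activations: {a}" for s, a in zip(snippets, activations)
    )
    prompt = (
        "Here are text snippets with per-token activations of one neuron.\n"
        f"{examples}\n"
        "In one sentence, describe what this neuron responds to."
    )
    return explainer_model(prompt)


def score_explanation(
    explanation: str,
    snippets: List[str],
    activations: List[List[float]],
    simulate_activations: Callable[[str, str], List[float]],  # hypothetical simulator
) -> float:
    """Correlate activations simulated from the explanation alone with the
    neuron's real activations; a higher score means a better explanation."""
    real, simulated = [], []
    for s, a in zip(snippets, activations):
        real.extend(a)
        simulated.extend(simulate_activations(explanation, s))
    return float(np.corrcoef(real, simulated)[0, 1])
```

Because both steps are just model calls plus a correlation, the loop can in principle be run over every neuron in a network, which is what makes the approach "automatic" rather than relying on researchers inspecting neurons by hand.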
Judging from the published research results, the company has not yet achieved this goal. Although it has yet to work out how the neurons operate, it hopes to do so in the future: "We hope that this approach of automating interpretability will eventually allow us to comprehensively audit the safety of a model before deployment." GPT-4, however, has already been widely deployed, and the company is currently unable to carry out such an audit. In other words, it has not yet completed this task. (Compiler/Su Jiawei)