
Stanford University's '2023 AI Index' interprets the prospects of artificial intelligence

王林 | 2023-05-02

Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) has released the 2023 AI Index, which analyzes the impact and progress of artificial intelligence. The data-driven report delves into hot topics related to AI, including research, ethics, policy, public opinion and economics.


Key findings from the study include how AI research has expanded into specialized areas such as pattern recognition, machine learning and computer vision. The report notes that the number of AI publications has more than doubled since 2010. At the same time, industry has overtaken academia in building AI systems: the report counts 32 significant machine learning models produced by industry, compared with only 3 from academia. The researchers attribute this to the massive resources required to train such large models.

Traditional AI benchmarks, such as the image classification benchmark ImageNet and the reading comprehension test SQuAD, are no longer sufficient to measure the technology's rapid progress, leading to the emergence of new benchmarks such as BIG-bench and HELM. Vanessa Parli, deputy director of HAI and a member of the AI Index steering committee, explained in a Stanford University article that many AI benchmarks have reached a saturation point with little room left for improvement, and that researchers must develop new benchmarks that reflect how society wants to interact with AI. She cited the example of ChatGPT, which passes many benchmarks yet still frequently gives incorrect information.

Ethical issues such as bias and misinformation are another aspect of artificial intelligence examined in the report. With the rise of popular generative AI models such as DALL-E 2, Stable Diffusion, and of course ChatGPT, ethical misuse of AI is increasing. The report notes that the number of AI incidents and controversies has increased 26-fold since 2012, according to the AIAAIC, an independent database that tracks cases of AI misuse. Concern about AI ethics is also growing rapidly: the study found that the number of submissions to FAccT, an AI ethics conference, has more than doubled since 2021 and increased tenfold since 2018.

Large language models keep growing in scale, and their cost has become sky-high. The report takes Google's PaLM model, released in 2022, as an example, pointing out that it cost 160 times more and was 360 times larger than OpenAI's GPT-2 from 2019. In general, the larger the model, the higher the training cost. The study estimates the training costs of DeepMind's Chinchilla model and Hugging Face's BLOOM at $2.1 million and $2.3 million, respectively.

Globally, private investment in AI fell 26.7% from 2021 to 2022, and AI funding for startups has also slowed. Over the past decade, however, investment in AI has grown dramatically: the report shows that private AI investment in 2022 was 18 times higher than in 2013. The number of companies adopting new AI initiatives has also plateaued. According to the report, the proportion of companies adopting AI doubled between 2017 and 2022 but has recently leveled off at around 50-60%.

Another topic of interest is governments' growing focus on artificial intelligence. The AI Index analyzed the legislative records of 127 countries and found that 37 bills containing the term "artificial intelligence" became law in 2022, compared with just one in 2016. The study also found that U.S. government spending on AI-related contracts has increased 2.5-fold since 2017. Courts are seeing a surge in AI-related legal cases as well: in 2022 there were 110 such cases, spanning civil, intellectual property and contract law.

The AI Index also delves into a Pew Research Center survey of Americans' views on artificial intelligence. In a survey of more than 10,000 panelists, 45% said they had mixed feelings about the use of AI in their daily lives, 37% said they were more concerned than excited, and only 18% were more excited than concerned. Among the main hesitations, 74% said they were very or somewhat concerned about AI being used to make important decisions for people, and 75% were uncomfortable with AI being used to understand people's thoughts and behaviors.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.