


The rapid development of AI over the past decade has benefited from cooperation among universities, enterprises, and individual developers, filling the field of artificial intelligence with open-source code, data, and tutorials.
Google has also been a leader in the AI industry. It has published papers across natural language processing, computer vision, reinforcement learning, and other fields, and has contributed foundational models and architectures such as Transformer, BERT, and PaLM.
But OpenAI broke the rules of the game. Not only did it build ChatGPT on the Transformer architecture, it also leaned on the advantages of a start-up: less exposure to legal and public-opinion pressure, no obligation to disclose training data, model size, or architecture, and the freedom to poach employees from major companies such as Google. Google is losing ground.
Faced with an OpenAI that plays by no code of honor, Google has been left taking the blows.
According to anonymous sources, in February this year Google AI head Jeff Dean said in a meeting:
Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products.
Google, which has shifted into a "defensive posture", may be hoping to shake off similar AI companies and better protect its core search business and stock price.
But if AI loses the open-source spirit of these large companies and turns toward monopoly, will the development miracles the field of artificial intelligence has seen still occur?

Google got blindsided because it was "too responsible"
For a company like Google, with billions of users, even a small-scale experiment can affect millions of people and draw a public-opinion backlash. This is why Google has been reluctant to launch chatbots and has held to the bottom line of "responsible AI."
In 2015, Google Photos launched an image-classification feature that mislabeled a Black man as "gorillas". Google immediately fell into a public-relations crisis, apologized quickly, and promised to fix the problem.
Google's fix was simply to delete the "gorillas" tag outright, along with categories such as chimpanzee and monkey. The result: the classifier could no longer mislabel Black people as gorillas, but it also could no longer recognize real gorillas.
Although Google has invested heavily in artificial intelligence for many years, the black-box inexplicability of neural networks means it cannot fully guarantee controllability once a model ships. That demands longer safety testing, and costs it the first-mover advantage.
In April this year, Google CEO Sundar Pichai still made clear on "60 Minutes" that people need to be cautious about artificial intelligence, which could do great harm to society, such as fake images and videos.
If Google chose to be "less responsible", it would inevitably attract the attention of more regulators, AI researchers, and business leaders.
But DeepMind co-founder Mustafa Suleyman has said the issue is not that they are too cautious, but that they are unwilling to disrupt existing revenue streams and business models; only when a real external threat appears do they begin to wake up.
And this threat has already arrived.

There is not much time left for Google
Starting in 2010, Google began acquiring artificial-intelligence start-ups and gradually integrating their technologies into its own products.
In 2013, Google brought in deep-learning pioneer and Turing Award winner Geoffrey Hinton (who has since resigned); a year later, it acquired the startup DeepMind for US$625 million.
Shortly after being appointed CEO of Google, Pichai announced that Google would adopt "AI first" as its basic strategy and integrate artificial intelligence technology into all of its products.

Years of intensive cultivation have allowed Google's AI research team to make many breakthroughs, but at the same time some smaller startups have also made a name for themselves in the field.
OpenAI was originally founded to check the monopoly that large technology companies were building through acquisitions in artificial intelligence. As a small company, OpenAI faces less scrutiny and regulation, and is more willing to put AI models into the hands of ordinary users quickly.
So an unsupervised artificial-intelligence arms race is intensifying, and in the face of competition, the "responsibility" of large enterprises may gradually erode.
Google executives also have to take a position on the general-intelligence visions championed by DeepMind, such as AI "matching" or even "surpassing" human intelligence.
On April 21 this year, Pichai announced the merger of Google Brain, previously run by Jeff Dean, with the earlier-acquired DeepMind, putting the combined unit under DeepMind co-founder and CEO Demis Hassabis in order to accelerate Google's progress in artificial intelligence.
Hassabis believes that within a few years, artificial intelligence may come closer to human-level intelligence than most experts predict.
Google enters "war readiness"
According to foreign-media interviews with 11 current and former Google employees, Google has overhauled its artificial-intelligence business in recent months. The main goals are to launch products quickly, lower the threshold for rolling out experimental AI tools to small user groups, and develop a new set of evaluation metrics and priorities in areas such as fairness.
Pichai emphasized that Google’s attempt to speed up research and development does not mean cutting corners.
We are establishing a new department aimed at building more capable, safer, and more responsible systems to ensure the responsible development of general artificial intelligence.
Former Google Brain AI researcher Brian Kihoon Lee, who was let go in the January wave of mass layoffs, described the shift as Google moving from "peacetime" to "wartime": once competition turns fierce, everything changes, and in wartime, competitors' market-share gains matter too.
Google spokesman Brian Gabriel said that in 2018 Google established an internal governance structure and a comprehensive review process, and has since conducted hundreds of reviews across its product areas. Google will continue to apply this process to AI-based technologies, and developing AI responsibly remains a top priority for the company.
But the shifting criteria for deciding when AI products are ready for market have caused unease among employees. For example, after deciding to release Bard, Google lowered the test-score standards for its experimental AI products, which drew opposition from internal employees.
In early 2023, Google announced about 20 policies for Bard set by two artificial-intelligence teams (Responsible Innovation and Responsible AI); employees generally considered these rules clear and complete.
Some employees also believe these standards are formulated more as a performance for the outside world; it would be better to publish the training data or open-source the models, so that users could understand the models' capabilities more clearly.
Publishing papers requires approval
Google’s decision to accelerate research and development is a mixed blessing for employees.
On the bright side, employees in non-research positions are generally optimistic, believing the decision can help Google regain the upper hand.
But for researchers, the requirement to obtain additional approvals before publishing AI research results may mean missing the chance to be first in the fast-moving field of generative artificial intelligence.
There are also concerns that Google may quietly suppress controversial papers, such as the 2020 study of the dangers of large language models led by Google's Ethical AI team and co-authored by Timnit Gebru and Margaret Mitchell.
Over the past year, many top AI researchers have been poached by startups, in part because of Google's insufficient emphasis on, and excessive scrutiny of, researchers' work.
Getting a paper approved can require rigorous rounds of review by senior researchers, one former Google researcher said. Google has promised many researchers that they can continue to participate in the field's broader research conversation, and publishing restrictions may drive yet another group of researchers away.
Should AI R&D be slowed down?
As Google accelerates its development, some less harmonious voices are calling on the major AI makers to slow down, arguing that the technology is advancing faster than its inventors expected.
Geoffrey Hinton, a pioneer of deep learning, left Google out of concern that superintelligent AI could escape human control.
Consumers are gradually beginning to understand the risks and limitations of large language models, such as AI's tendency to fabricate facts, but the small-print disclaimer on ChatGPT does not make these limitations clear.
Downstream applications built on ChatGPT have exposed more problems. For example, Stanford University professor Percy Liang found in a study that only 70% of the references provided by New Bing were correct.
On May 4, the White House invited the CEOs of Google, OpenAI, and Microsoft to meet to discuss public concerns about AI technology and how to regulate AI.
U.S. President Biden made it clear in the invitation letter that AI companies must ensure the safety of their products before they can be made available to the public.
The above is the detailed content of "Google is panicking! Publishing a paper now requires approval and product development takes priority: Is ChatGPT AI's swan song?", from the PHP Chinese website.