Big names change jobs as Microsoft and Google fight! An article written nearly a month ago turned out to be a prophecy
Less than a week after this analysis was published on January 27, Google and Microsoft announced in quick succession that they would integrate AI chatbots into their search engines.
Looking back, the predictions have come true one after another.
Technology giants have always been cautious about deploying AI. The explosive arrival of ChatGPT, however, has upended that caution.
The foundations of modern AI were laid by large companies such as Google, Meta, and Microsoft.
Yet now, enterprising small companies are pushing AI to the public at dizzying speed, catching these giants off guard.
Faced with the continued decline of technology stocks and the pressure created by ChatGPT's popularity, Silicon Valley's major players seem to have suddenly seen the light, becoming willing to take on more "reputational risk."
As today's biggest star, ChatGPT has helped OpenAI attract billions of dollars in investment from Microsoft.
ChatGPT itself is also likely to be integrated into Microsoft's popular Office suite and sold to enterprises.
According to anonymous interviews the Wall Street Journal conducted with six former Google and Meta employees, the surge of attention ChatGPT has received has put pressure on technology giants such as Meta and Google, making them less cautious about safety concerns.
Meta employees revealed that an internal memo shows the company is urging teams to speed up its AI approval process so it can start deploying the latest technology as soon as possible.
The New York Times reported that after Google saw ChatGPT's success, in addition to declaring a "code red," it also opened a "green channel" to shorten the process of assessing and mitigating potential harms.
In fact, Meta had released a similar chatbot three months before the debut of ChatGPT.
But unlike ChatGPT, which passed one million users in five days, this chatbot, called BlenderBot, was simply boring.
Even Yann LeCun, Meta's chief AI scientist, had to admit as much.
"Can I put the dog in the refrigerator on a hot day?" "I can't discuss this with strangers."
"The reason it is so boring is that we made it very safe," LeCun explained recently at a forum.
He said the reason for the tepid public response was that Meta was "too cautious" in moderating content.
There is even a conspiracy theory that BlenderBot was boring on purpose: Meta could have built, and perhaps already had, a better AI, but chose to release a deliberately bland one.
Indeed, one problem big tech companies have long faced is that review mechanisms for the ethical impact of AI are not as mature as those for privacy or data security.
Normally, teams of researchers and engineers want their technical innovations to be turned into products quickly, to reach the public, or to be integrated into the company's existing infrastructure.
It is easy to imagine how, in that push to ship, they come into conflict with the company's internal "responsible AI" teams.
For example, when Google signed Project Maven, a contract to provide computer vision for Pentagon drones, many employees protested.
Duplex, a feature that could call a restaurant to make a reservation without revealing that the caller was a robot, also triggered employee resistance.
In response, Google officially released its seven principles for artificial intelligence in 2018.
Article address: https://blog.google/technology/ai/ai-principles/
Cohere co-founder Nick Frosst, who worked at Google Brain for three years, said that large companies like Google and Microsoft focus more on using AI to improve their existing business models.
After all, the "reputational risk" of moving too aggressively is something the giants cannot afford.
Especially after Microsoft's Tay suffered a disastrous failure, they became extra cautious.
In 2016, Tay was pulled offline within a day of its launch, after users goaded the chatbot into calling for a race war and defending the Nazis. There was an uproar at the time.
In 2022, Meta's Galactica was taken down just three days after going online, after netizens criticized its summaries of scientific research as highly inaccurate and sometimes biased.
However, the big companies have their own defense.
Joelle Pineau, who leads fundamental AI research at Meta, said: "Artificial intelligence is advancing incredibly fast, and we need to ensure not only that we have efficient review processes, but also that we make the right decisions and release the AI models and products that best serve our community."
Google spokesperson Lily Lin said: "We believe that AI is a fundamental and transformative technology that plays a very, very important role for individuals, businesses, and communities. We need to consider the broader social impact these innovations may have. We will continue to test our AI technology internally to ensure it is helpful and safe."
Frank Shaw, Microsoft's head of communications, said that when Microsoft uses AI such as DALL-E 2, it works with OpenAI to build additional safety mitigations.
"Microsoft has worked for many years to advance the field of artificial intelligence, and we publicly guide users to create and use these technologies responsibly and ethically on our platforms."
They missed the wave: while they worried and hesitated, they were left behind in the blink of an eye.
The year 2022 can be called year one of AIGC (AI-generated content): we witnessed the explosion of DALL-E 2, Stable Diffusion, and ChatGPT.
Especially after OpenAI released ChatGPT, voices claiming that generative AI would kill Google emerged one after another.
As everyone knows, ChatGPT can deliver simple answers in an easier-to-understand format, without requiring users to click through various links.
However, a very interesting point is that the underlying technologies of these AIs were actually pioneered by giants like Google.
But in recent years, these big companies have become more and more secretive: they announce new models or show demos, but do not release complete products.
Meanwhile, research labs like OpenAI have steadily shipped the latest versions of their AI, which makes people wonder why the big companies are moving so slowly. Google's language model LaMDA, for example, had existed for several years without a public release.
In such an environment, engineers must live with the frustration of being unable to ship the new technologies they worked so hard to develop.
Some employees said they had been suggesting for years that chat functions be integrated into the search engine, but received no response.
However, they also understand that Google has legitimate reasons not to rush to change its search products.
After all, answering user searches with direct answers not only shrinks valuable online advertising space, but also creates additional liability for the company when users find problems with those answers.
Of course, Google is not the only company that encounters this problem.
Meta employees also had to deal with the company’s concerns about bad public relations, according to people familiar with the matter.
Specifically, before launching a new product or publishing a research report, employees must answer questions about the potential risks of disclosing their work, including what to do if they are misunderstood by the public.
On this point, Mark Riedl, a computing professor and machine learning expert at the Georgia Institute of Technology, said that at a basic technical level, ChatGPT is not necessarily better than what Google or Meta have.
But OpenAI has a key advantage: making the model available to the public.
So in the past two years, OpenAI has been able to continuously receive real feedback from users.
For example, users can give a "like" to answers they are satisfied with, or a "dislike" to those that feel inappropriate.
And this is the cornerstone of ChatGPT's power: Reinforcement Learning from Human Feedback (RLHF).
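The core idea behind the feedback loop described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation: "like"/"dislike" signals become preference pairs, and a reward model is fitted (here, a toy linear one with hypothetical feature functions) so that preferred answers score higher; the resulting reward model is what later steers the language model during reinforcement learning.

```python
import math

def pairwise_loss(r_preferred, r_rejected):
    """Bradley-Terry style preference loss: -log(sigmoid(r_preferred - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

def train_reward_weights(pairs, features, lr=0.1, epochs=200):
    """Fit a linear reward model r(x) = w . features(x) on preference pairs.

    pairs: list of (preferred_answer, rejected_answer) tuples collected from
    like/dislike feedback; features: maps an answer to a feature vector.
    """
    dim = len(features(pairs[0][0]))
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in pairs:
            fp, fr = features(preferred), features(rejected)
            rp = sum(wi * xi for wi, xi in zip(w, fp))
            rr = sum(wi * xi for wi, xi in zip(w, fr))
            # gradient step on the pairwise loss: push the preferred
            # answer's score above the rejected one's
            g = 1.0 / (1.0 + math.exp(rp - rr))  # sigmoid(-(rp - rr))
            for i in range(dim):
                w[i] += lr * g * (fp[i] - fr[i])
    return w
```

In the real system the reward model is itself a large neural network and the feedback comes from paid labelers ranking whole responses, but the training signal has this same pairwise shape.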
By contrast, Google Search, the incumbent, has after a quarter of a century grown bloated with advertising and with marketers trying to game the system.
Technology commentator Can Duruk put it bluntly: "Google's long-standing monopoly has degraded its once-compelling search experience into a spam-filled, SEO-driven hell."
Of course, there is another point that is also very important - the "reputational risk" we initially discussed.
One Google employee put it this way: in the public eye, OpenAI is newer and more exciting than big companies like Google, and it pays a smaller price for mistakes; even if it messes up, it faces far less criticism and scrutiny.
Indeed, Google was already the undisputed leader in this field 10 years ago.
In 2014, it acquired DeepMind, a top artificial intelligence laboratory.
In 2015, the machine learning framework TensorFlow was open sourced.
By 2016, CEO Sundar Pichai had pledged to make Google an "AI first" company.
However, the talents behind these developments are becoming increasingly restless.
Over the past year or so, many big names have jumped to nimbler startups such as OpenAI and Stability AI.
Startups such as Character.AI, Cohere, Adept, Inflection AI, and Inworld AI are all built around large language models, and all were founded by Google's top AI researchers.
In addition, there are search startups using similar models to develop chat interfaces, such as Neeva, run by former Google executive Sridhar Ramaswamy.
Among them, Character.AI founder Noam Shazeer and Cohere co-founder Aidan Gomez were key figures behind the Transformer and other core machine-learning architectures.
Nick Frosst, who worked at Google Brain for three years, said big companies like Google and Microsoft have generally focused on using AI to improve their massive existing business models. Frosst went on to found Cohere, a Toronto-based startup that builds large language models customized for enterprises; Aidan Gomez is one of his co-founders.
"This field is growing so rapidly, it's not surprising to me that the people leading the way are small companies," Frosst said.
Well-known research scientist David Ha quipped on Twitter: "If Google doesn't get moving and start releasing its own AI products, it will go down in history as the 'Whampoa Military Academy' that trained an entire generation of machine learning researchers and engineers" — a famed training ground whose graduates go on to serve elsewhere.
And Ha himself left Google Brain in 2022 to join Stability AI, the star startup behind Stable Diffusion.
Perhaps times may really be changing.
Rumman Chowdhury was in charge of Twitter’s machine learning ethics team until Musk disbanded it in November.
Chowdhury said that as big companies like Google and Meta scramble to catch up with OpenAI, critics and ethicists will be increasingly sidelined.