Microsoft is revealed to have disbanded its entire AI risk assessment team
News on March 14: On Monday, local time in the United States, Microsoft was reported to have disbanded its entire AI risk assessment team. The team, officially known as the Ethics and Society team, was charged with guiding Microsoft's AI innovation toward ethical, responsible, and sustainable outcomes. Most recently, it had been working to assess the risks of integrating OpenAI's technology into Microsoft's products.
Several former and current Microsoft employees said that, as a result of the move, Microsoft no longer has a dedicated team ensuring that its AI principles are closely tied to product design, at a time when the company is leading the industry's push to bring AI tools into the mainstream. Microsoft does still maintain a team called the Office of Responsible AI (ORA), which is tasked with setting rules to govern the company's AI initiatives, and the company said that despite the recent layoffs, its overall investment in responsible AI work has been increasing.
Microsoft said in a statement: "Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize these factors. Over the past six years, we have increased the number of people across our product teams and within ORA who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. We thank the Ethics & Society team for their groundbreaking work helping us continue our journey toward responsible AI."
Microsoft employees said the Ethics and Society team played a key role in ensuring that the company's products and services complied with its responsible AI principles. As one former employee put it: "People would see the principles coming out of ORA and not know how to apply them. Our job was to show them, and to create rules in areas where there were none."
In recent years, the team designed a role-playing game called "Judgment Call" to help designers imagine the potential harms AI could cause and discuss them during product development. The Ethics and Society team reached its largest size in 2020, with roughly 30 employees, including engineers, designers, and philosophers. But during a reorganization last year, the team was whittled down to just seven people.
In a team meeting after the reorganization, John Montgomery, a Microsoft vice president of AI, said company leadership had instructed them to move quickly. Montgomery reportedly said at the time: "There is a lot of pressure from CTO Kevin Scott and CEO Satya Nadella to take the most recent OpenAI models, and the ones that come after them, and move them into customers' hands at high speed."
Because of that pressure, Montgomery said, many members of the Ethics and Society team would be transferred to other parts of the company. He said at the time that the team would not be eliminated, but would need to focus "more on individual product teams building services and software." Subsequently, most of the team's members were moved to other departments, and the handful who remained were too understaffed to carry out many of the team's ambitious plans.
About five months later, the remaining employees were told that the entire team was being disbanded. One employee said the move leaves a huge hole in the user experience and overall design of AI products. "The worst part is that by doing this we have exposed businesses to risk and people to risk," they explained. Members of the Ethics and Society team said their work typically supported product development, but as Microsoft focused on rolling out AI tools faster than its competitors, company leadership grew less interested in the kind of long-term thinking the team specialized in.
Even as it was being disbanded, the Ethics and Society team had been turning its attention to anticipating the impact of Microsoft releasing OpenAI-powered AI tools to a global audience. In a memo last year, the team detailed the risks associated with Bing Image Creator, a tool that uses OpenAI's DALL-E system to generate images from text prompts. The image tool launched in several countries in October, making it one of the first public collaborations between Microsoft and OpenAI.
While text-to-image technology has proven enormously popular, Microsoft researchers correctly predicted that it could also threaten artists' livelihoods by allowing anyone to easily copy an artist's creative style. In addition, OpenAI updated its terms of service to give users "full ownership of images created with Dall-E," a change that worried Microsoft's Ethics and Society team. (Xiao Xiao)