According to Platformer, Microsoft has laid off its entire Ethics and Society team, the group that had been responsible for teaching employees how to build artificial intelligence tools responsibly. But what exactly happened?
Microsoft layoffs continue: the team overseeing AI is also cut
The decision to lay off the team leaves Microsoft without a dedicated group ensuring that its principles for the responsible use of artificial intelligence are actually followed.
Despite this, an Office of Responsible AI remains active at the company. It is tasked with creating the rules and principles that govern Microsoft's AI initiatives.
Microsoft and the layoffs: the work done by the Ethics and Society team
In recent years, the team designed a role-playing game called Judgment Call, which helped designers envision the potential harms that could come from AI so they could be discussed during product development. The game was part of a larger responsible innovation toolkit that the team later released publicly.
More recently, the team had been working to identify the risks posed by Microsoft's adoption of OpenAI technology across its product suite. The Ethics and Society team reached its peak in 2020, when it numbered around 30 employees, including engineers, designers and philosophers. In October, it was reduced to about seven people after a reorganization moved the others to different areas of the company.
This hollowing out of the Ethics and Society team made it difficult to carry out the ambitious responsible-AI plans Microsoft had in store. With the latest round of layoffs, the few remaining members of the team have reportedly been let go entirely.
The case of the Google layoffs
In 2020, Google fired artificial intelligence researcher Timnit Gebru after she co-authored a paper critical of large language models, the very technology that would soon power a new wave of AI products. The ensuing controversy damaged the company's credibility on questions of responsible AI.
Microsoft was the quickest to introduce AI into its tools
Microsoft has moved quickly and deliberately to be the first to build AI services into its tools. Not only that: the company may now have a unique opportunity to gain significant ground on Google in search, productivity software, cloud computing and the other areas where the two giants compete.
When it relaunched Bing with artificial intelligence, the company told investors that every 1% of search market share it could take from Google would translate into $2 billion in annual revenue.
That potential explains why Microsoft has invested $11 billion in OpenAI so far, and why it is in such a hurry to integrate the startup's technology into every product it owns. The push appears to be paying off: the company said last week that Bing now has 100 million daily active users, a third of whom are new since the search engine was relaunched with OpenAI technology.
The potential risks posed by artificial intelligence
On the other hand, virtually everyone involved in the development of AI agrees that this kind of technology poses powerful, even existential, risks, both known and unknown.
The tech giants have taken pains to signal that they are taking these risks seriously. Microsoft alone still has three other groups working on the issue, even after the Ethics and Society team was eliminated. But given the stakes, any cuts to teams focused on responsible AI seem likely to undermine the safe rollout of these new tools.
Although text-to-image technology has proved extremely popular, Microsoft researchers correctly predicted that it could also threaten artists and their work by allowing anyone to easily copy their style.
Indeed, "in testing Bing Image Creator, it was found that with a simple prompt including only the artist's name and a medium (painting, print, photography or sculpture), the generated images were almost impossible to differentiate from the original works," the researchers wrote in the memo.
Microsoft, for its part, has said that the tool will take into account the copyright of artists and their works. Even so, many legal issues remain unresolved: last month, Getty Images filed a lawsuit against Stability AI, the creator of the AI art generator Stable Diffusion, accusing the startup of misusing more than 12 million of its images.