Some AI experts have issued an appeal to address the risks posed by artificial intelligence, going so far as to say it could cause the extinction of humanity. Among the signatories are the head of Google DeepMind, the head of OpenAI, and two Turing Award winners. This call follows an earlier open letter calling for a pause in AI development.
“Risk of extinction” from AI, experts warn in 22 words
An urgent call to address the risk that AI could destroy humanity has been signed by some of the top AI scientists, engineers, and executives. Among them are Google DeepMind head Demis Hassabis, OpenAI CEO Sam Altman, and two Turing Award winners. Twenty-two words, to warn the public.
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Dan Hendrycks, who heads the Center for AI Safety, told the New York Times that the statement’s brevity was intentional, meant to avoid disagreement. “We didn’t want to put forward a long list of 30 possible interventions,” Hendrycks explained. “When that happens, the message loses its strength.”
The main points of this debate are well known, but the details are often poorly defined, resting on what-if scenarios in which AI systems rapidly increase in capability and can no longer be operated safely. Many experts point to rapid progress in systems such as large language models as evidence that further leaps in intelligence should be expected.
Critics, on the other hand, point to the errors of language models, the difficulties of autonomous driving, and other limitations of AI, arguing that such risks, if they exist at all, are still far off.
But both those who warn of danger and the AI skeptics agree that, even without further advances in capability, AI systems already pose a number of risks today: from enabling mass surveillance, to powering flawed “predictive policing” algorithms, to facilitating the creation of misinformation and disinformation.