Could this be the end of human civilization as we know it?
In the latest AI news, a coalition of top AI scientists is sounding the alarm, urging nations to create a global system of oversight to prevent what they describe as potentially “catastrophic outcomes” if humanity loses control over artificial intelligence.
In a statement issued on September 16, these experts voiced deep concerns about the very technology they helped build, warning that unchecked AI could pose serious threats to humanity’s future.
The message was clear: if we lose control of AI or it falls into the wrong hands, the consequences could be disastrous for everyone. The scientists admitted that, as of now, we lack the scientific tools to fully regulate and safeguard these advanced systems.
Their solution? Nations must set up dedicated authorities to identify and manage AI-related incidents, and a global contingency plan needs to be rolled out sooner rather than later.
In the long run, the group emphasized the need for a global governance system to prevent the rise of AI models that could unleash catastrophic risks. Their concerns are grounded in discussions held earlier this month at the International Dialogue on AI Safety in Venice, which was organized by the Safe AI Forum, a US-based nonprofit.
The statement itself puts it plainly: “In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks.”
Gillian Hadfield, a professor at Johns Hopkins University, shared the statement on social media and highlighted the urgency of a coordinated response, asking, “If we detect models starting to autonomously self-improve in six months, who’s going to step in?”
The scientists stressed that AI safety is a global public good, one that therefore demands international cooperation and shared governance.
They outlined three critical steps: establishing emergency preparedness institutions, developing a safety assurance framework, and investing in independent global AI safety research and verification.
The statement, signed by over 30 experts from countries including the US, Canada, China, the UK, and Singapore, counted several Turing Award winners among its signatories—recipients of the computing world’s equivalent of the Nobel Prize. The group also pointed to rising tensions between superpowers, particularly the US and China, as a barrier to the kind of global consensus needed to tackle AI risks.
Earlier this month, the US, EU, and UK signed the world’s first legally binding international AI treaty, focused on ensuring human rights and accountability in AI. However, some tech companies have voiced concerns that too much regulation, especially in the EU, could slow down innovation.