Leading AI Scientists Warn AI Could Escape Control at Any Moment

Experts Say

During an international meeting of the minds, some of the world's foremost artificial intelligence experts came together to write a definitive dispatch on the technology's dangers.

"Rapid advances in artificial intelligence systems’ capabilities are pushing humanity closer to a world where AI meets and surpasses human intelligence," begins the statement from International Dialogues on AI Safety (IDAIS), a cross-cultural consortium of scientists intent on mitigating AI risks.

With luminaries like Turing Award-winning computer scientist Geoffrey Hinton rubbing shoulders with the likes of Zhang Ya-Qin, the former president of the Chinese tech conglomerate Baidu, the letter's assorted signatories represent top AI thinkers from around the globe.

"Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently," the IDAIS statement continues. "Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity."

Penned at the group's third meeting, held in Venice, this "consensus statement" is geared not only towards outlining AI risk, but also towards steering AI development for the "global public good." At these meetings, dozens of experts have come together to issue a call to arms regarding the AI risks we're rushing towards.

Think Locally, Plan Globally

Because AI knows no international borders, the IDAIS signatories argue, it's of the utmost importance to think about this technology and its risks globally. While there have been "promising initial steps by the international community" towards AI safety cooperation at intergovernmental summits, they wrote, these efforts need to continue in the interest of developing "a global contingency plan" for if and when those risks become more severe.

Such contingency plans would include establishing international bodies to coordinate emergency preparedness (there's no word on whether this would happen within or outside an established organization like the United Nations) as well as reaching mutually agreed-upon "red lines" and deciding what to do when they're crossed.

With additional signatories including former Irish president Mary Robinson, Turing Award winner Andrew Yao, and several researchers and officials from academic institutions in Quebec and Beijing, the statement is big on what needs to be done to mitigate risks, but relatively vague on what these exact risks are and how they may come about.

All the same, IDAIS' recommendations are probably good ones, and fostering international dialogue on such an important topic is paramount in the face of the coming militarized AI race between the United States and China.

More on the future of AI: Scientists Preparing "Humanity’s Last Exam" to Test Powerful AI