Tuesday, August 22, 2023

Global Cooperation Urgently Needed to Govern Risks of Advanced AI, Warns New Report.

World Leaders in Artificial Intelligence Explain Future Possibilities.

 

A new report by The Millennium Project warns that advanced artificial intelligence systems could emerge sooner than expected, posing unprecedented risks unless prudent governance frameworks are rapidly put in place.

 

The report, titled International Governance Issues of the Transition from Artificial Narrow Intelligence to Artificial General Intelligence (AGI), distills interviews and collected insights from 55 AI experts from the United States, China, the United Kingdom, Canada, the European Union, and Russia on how to regulate AGI—AI that can handle novel situations as well as, or better than, humans. Among these experts are Sam Altman, Bill Gates, and Elon Musk.



AGI could arise in the next few years, potentially triggering an “intelligence explosion” that creates AI surpassing human abilities, the report states. A lack of governance could lead to catastrophic outcomes, including existential threats to humanity, if such systems are misaligned with human values and interests. The report finds that no existing governance models are adequate to manage the risks and opportunities posed by AGI. It calls for the rapid development of a new kind of flexible governance that can match and anticipate the pace of AI change and provide the necessary safeguards without stifling the promise of advanced AI.

 

"AGI is closer than any time before—the next advance could surpass human intelligence," the report quotes Ilya Sutskever, co-founder of OpenAI. "Alignment with human values is critical but challenging." Ben Goertzel, author of AGI Revolution added: “It is more about WHO controls the development and use of AGI than a list of ethics.”

 

Other key findings include:

  • Because the benefits of AGI are so great in medicine, education, management, and productivity, corporations are racing to be first. 
  • Because AGI will increase political power, governments are racing to be first. 
  • International cooperation is essential but threatened by competitive tensions among nations and corporations racing for AI supremacy. The shared risks may compel collaboration, but overcoming distrust poses an enormous challenge.
  • Extraordinary enforcement powers may be needed for governance to be trusted and effective globally, potentially including military capabilities.
  • Although controversial, proposals to limit research and development may be needed to allow time to design and implement management solutions.
  • The window for developing effective governance is short, demanding unprecedented collaboration.

 

"We’re all in this boat together—if it goes badly, we’re all doomed," the report quotes Oxford professor Nick Bostrom.

 

The Millennium Project is calling for urgent action to create AGI governance and alignment at national and international levels before advanced AI exceeds humanity's ability to control it safely. “If we don’t get a UN Convention on AGI and a UN AGI Agency to enforce rules, guardrails, auditing, and verification right, then various forms of Artificial Super Intelligence could emerge beyond our control and not to our liking,” says Jerome Glenn, CEO of The Millennium Project.

 

With stakes that potentially include human extinction, the report warns that we can ill afford to delay mobilizing global cooperation.



For the full study, visit https://www.millennium-project.org/transition-from-artificial-narrow-to-artificial-general-intelligence-governance/ 

 

This work was supported by the Dubai Future Foundation and general support from the Future of Life Institute. The Millennium Project is an international participatory think tank with 70 Nodes around the world and three regional networks; it was established in 1996 and has published over 60 futures research projects based on international judgments. 
