Dr. Richard Jordan Co-authors Article on AI Governance Regimes
The article “International Governance of Advancing Artificial Intelligence,” co-authored with Nicholas Emery-Xu and Robert Trager, was published online in September in the journal AI & Society: Knowledge, Culture and Communication.
Along with scholars from Oxford University and UCLA, Associate Professor Richard Jordan argues in favor of creating a Non-Proliferation Regime as a foundation for AI governance, while also weighing alternatives such as Verifiable Limits, International Monopoly, and International Hegemony.
The authors point to a variety of risks associated with the unregulated spread of transformative AI and note that “all of the qualities that make AI dangerous have been encountered before in other technologies, but rarely (perhaps never) all at the same time.”
This, they argue, makes it particularly vital to “keep it out of reach of malicious actors” and requires “setting up regimes and norms before these technologies proliferate.”
When comparing the advantages and downsides of the four regulatory regimes, the authors conclude that each may be difficult to adopt and implement given the competing interests of major powers. Still, they argue in favor of taking near-term actions to “investigate technical mechanisms that facilitate governance and increase governance options,” such as verification and monitoring of the production of chips capable of producing transformative AI.