Future regulation of AI could resemble that of dual-use materials: Experts

Moscow, Apr 12 (FN Agency) Kirill Krasilnikov – Governments may have to treat artificial intelligence (AI) systems as dual-use materials that can be employed for both civilian and military purposes, regulating the computing methods that could potentially be weaponized rather than the technology itself, experts told Sputnik.

In late March, an open letter signed by prominent computer scientists and tech entrepreneurs, including Elon Musk and Apple co-founder Steve Wozniak, called for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4. The letter was published amid growing public excitement and anxiety over advancing AI capabilities, exemplified by the popularity of the ChatGPT artificial intelligence chatbot.

In addition, a bug, later discovered by ChatGPT’s developer, OpenAI, exposed some users’ personal data. In total, around 1.2% of ChatGPT subscribers were affected by the data leak, the company said, adding that the figure was “extremely low.”

Nevertheless, the Italian Data Protection Authority has already restricted the chatbot’s operation in the country over data collection violations.

“I doubt the current call for a moratorium will achieve anything in the short term, but governments are becoming alive to the prospect of real damage when fast, autonomous, and opaque systems are deployed without proper guardrails,” Simon Chesterman, the David Marshall Professor of Law at the National University of Singapore, said.

The expert noted that AI technology is “inherently mobile and thus able to move away from strict regulation.” This makes enforcement more difficult and raises the costs for countries worried about driving innovation away.

In a similar vein, AI consultant Richard Batt observed that the fast-paced nature of the industry and the lack of global coordination render a six-month pause unfeasible. “AI development is driven by market forces, research interests, and geopolitical competition. It is unlikely that all actors will agree to stop or slow down their work for half a year,” Batt said. He added that a pause might also fail to resolve the underlying issues of AI regulation, given the vagueness of some of the open letter’s wording.

“I think a better approach would be to focus on creating a robust and adaptive framework for AI governance that can balance innovation and accountability. This framework should involve multiple levels of regulation, from self-regulation by developers and users to industry standards and best practices, to national laws and policies, to international norms and agreements,” Batt continued.

Meanwhile, Chesterman suggested treating some of the more advanced computing methods as dangerous materials and regulating them accordingly. “I’m wary of attempts to regulate new computing methods as computing methods, but there’s an analogy that can be drawn between some of the most sophisticated methods and dangerous materials. Rather than waiting for deployment in weaponised form, we control the creation and distribution of some dual-use materials (like nuclear materials, certain chemicals, biological agents, and so on). We may be nearing the point at which AI systems need to be treated similarly,” Chesterman stated.