Regulating artificial intelligence is a 4D challenge
The writer is founder of Sifted, an FT-backed site on European startups
The leaders of the G7 nations addressed many global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: the war in Ukraine, economic resilience, clean energy and food security, among others. But they also slipped an additional item into their parting bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence.
While acknowledging AI’s innovative potential, leaders worried about the damage it could cause to public safety and human rights. Launching the Hiroshima AI process, the G7 tasked a working group to analyze the impact of generative AI models, such as ChatGPT, and prepare the leaders’ discussions by the end of this year.
The initial challenges will be how best to define AI, categorize its dangers and frame an appropriate response. Is it better to leave regulation to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need the modern equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and discourage its military use?
One can debate how effectively the UN body has fulfilled that mission. Furthermore, nuclear technology involves radioactive material and massive infrastructure that is physically easy to detect. AI, on the other hand, is comparatively cheap, invisible, ubiquitous, and has infinite use cases. At the very least, it presents a four-dimensional challenge that needs to be approached in a more flexible way.
The first dimension is discrimination. Machine learning systems are designed to discriminate: to detect outliers in patterns. That’s good for spotting cancer cells on radiological scans. But it’s bad if black box systems trained on faulty data sets are used to hire and fire workers or authorize bank loans. Bias in, bias out, as they say. Banning these systems in areas of unacceptably high risk, as the EU’s forthcoming AI Act proposes, is a strict and precautionary approach. Creating independent, expert auditors might be a more adaptable way to go.
Second, misinformation. As academic expert Gary Marcus warned the US Congress last week, generative AI could endanger democracy itself. Such models can generate plausible lies and counterfeit humans at lightning speed and on an industrial scale.
The onus should fall on the tech companies themselves to flag content and minimize misinformation, just as they suppressed spam. Failure to do so will only amplify calls for more drastic intervention. The precedent may have been set in China, where a draft law places responsibility for the misuse of AI models on the producer, not the user.
Third, dislocation. No one can accurately forecast the overall economic impact AI will have. But it seems certain that it will lead to the “de-professionalization” of swaths of white-collar jobs, as entrepreneur Vivienne Ming told the FT Weekend festival in Washington, DC.
Computer programmers have widely adopted generative AI as a tool to improve productivity. By contrast, Hollywood’s striking screenwriters may be the first of many trades to fear that their core skills will be automated. This messy story defies simple solutions. Nations will have to adapt to the social challenges in their own ways.
Fourth, devastation. The incorporation of AI into lethal autonomous weapon systems (LAWS), or killer robots, is a terrifying prospect. The principle that human beings should always remain in the decision-making loop can only be established and enforced through international treaties. The same applies to the discussion of artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in all domains. Some activists dismiss this scenario as a distracting fantasy. But it’s surely worth paying attention to experts warning of possible existential risks and calling for an international research collaboration.
Others may argue that trying to regulate AI is as futile as praying for the sun not to set. Laws evolve only incrementally, while AI develops exponentially. But Marcus says he was encouraged by the bipartisan consensus for action in the US Congress. Perhaps fearful that EU regulators could set global rules for AI, as they did with data protection five years ago, US tech companies are also publicly backing regulation.
The G7 leaders have opened a competition for good ideas on AI regulation. Now they need to spark a regulatory race to the top, rather than preside over a terrifying slide to the bottom.