Julian Jacobs: National approaches to AI safety diverge in focus
This article first appeared on OMFIF on 25 June 2024.
Julian Jacobs is a senior economist at OMFIF.
Domestic initiatives in artificial intelligence safety are beginning to emerge in countries around the world. In the last 18 months, the UK, US, Canada and Japan have created national AI safety institutes that aim to address governance and regulatory challenges, including issues related to misinformation, human safety and economic equity. Although they are unified by a common goal of creating frameworks for safe AI innovation, they diverge in meaningful ways.
US: prioritising domestic developments
The US AI Safety Institute was launched in February 2024 within the National Institute of Standards and Technology. With a total funding package of $10m, the AISI aims to ‘facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts’. The AISI is focused on developing methods for the detection, tracking and potential watermarking of synthetic content.
Such objectives are focused on actionable policies and the development of safety frameworks that can avert significant risks to ‘national security, public safety and individual rights’. This includes co-ordinating with 200 companies on red-teaming exercises to identify vulnerabilities and develop mitigation strategies.
In its early existence, the AISI has been chiefly focused on US domestic safety concerns, with less public emphasis on the need for global collaboration. This may be changing, with a new partnership with the UK to develop safety tests for advanced AI models, as well as recent statements of intent to foster a global network of AI safety institutes, although these efforts remain preliminary.
UK: voluntary commitments, global collaboration
The UK AI Safety Institute evolved from the Frontier AI Taskforce, founded in April 2023 with an initial £100m investment, and receives ongoing funding as part of a £20bn research and development initiative. In contrast to the US, the UK AI Safety Institute focuses on a broader array of safety considerations and stakeholders.
Its mission is to ensure the safe development of advanced AI systems through evaluations, foundational research and information sharing. It places a large emphasis on collaboration with international partners, industry, academia, civil society and national security agencies to advance AI safety and foster global consensus and institution building. In practice, that has meant an approach aimed at making the UK central to the global AI safety discourse, without immediately creating regulatory obligations for AI firms.
The UK has remained overwhelmingly focused on voluntary commitments from AI companies, relying on existing regulations to address new risks. As Ellie Sweet, head of AI regulation strategy, engagement and consultation at the UK Department for Science, Innovation and Technology, remarked at OMFIF’s AI in finance seminar: ‘It’s better to have our existing expert regulators interpret and apply those principles within their existing remits, rather than necessarily standing up a whole new regulatory framework.’
Meanwhile, the UK has been very active in its development of international partnerships, including a new UK AI Safety Institute Office in San Francisco and a UK-Canada science of AI safety partnership.
Canada: investing in becoming an AI leader
In April 2024, Canada announced plans to develop its own AI Safety Institute as part of a broader investment in AI by the Canadian government. The institute is funded with C$50m and aims to protect against the risks posed by advanced AI systems while solidifying Canada’s position as a potential leader in AI development.
It will work under the broader Pan-Canadian Artificial Intelligence Strategy, which focuses on commercialisation, standards and research. The institute aims to help Canada better understand and mitigate the risks associated with AI technologies while also supporting international governance efforts. This includes aligning with international AI governance principles set by groups such as the G7 and the Global Partnership on AI to ensure that domestic AI innovation is responsibly conducted.
Japan: initiatives still in early phase
Japan has launched an AI Safety Institute that closely resembles the UK’s. The country’s institute – founded in January 2024 within the Information-technology Promotion Agency – involves decentralised AI governance spread across several government ministries, including those responsible for internal and foreign affairs. The exact investment amounts have not been publicly disclosed.
Current initiatives involve creating AI safety standards, conducting cross-departmental research on the implications and opportunities of AI, and developing international partnerships with other emerging AI governance leaders, such as those in Europe and the US, to co-ordinate global AI safety and risk standards. The details of many of these initiatives are still emerging.