69% ARE CONCERNED ABOUT CONFLICT IN AI STANDARDS BETWEEN COUNTRIES

4th September 2024

There is also a lack of clarity on the global agenda for AI standards

More than two-thirds of respondents in this week’s Airmic Big Question have indicated their concern that standards on artificial intelligence in other countries will conflict with those in their home country. The results underscore the proliferation of differing AI standards around the world, as the UK’s Department for Science, Innovation and Technology (DSIT) seeks views from industry on a draft Code of Practice on AI cyber security.

Julia Graham, CEO of Airmic, said: “We are supportive of the government’s stated aim of ultimately aligning AI standards with international ones, given concerns aired by Airmic members that a patchwork of competing and even conflicting standards on AI around the world is developing.

“Nevertheless, that alignment with international standards should not come at the expense of a pro-business, pro-innovation approach to AI for the UK.”

Other respondents in the Airmic survey said there is a lack of clarity as to the global agenda around the development of such AI standards, including their purpose and use.

Hoe-Yeong Loke, Head of Research, Airmic, said: “There is a sense that the global discussions around AI standards have so far been largely between governments and Big Tech, with little relevance for business at large at this stage. However, it is crucial that businesses and society are engaged with this process now, so that the resultant standards work for everyone.”

On 1 August, the European Union’s AI Act came into effect, with the aim of fostering responsible artificial intelligence development and deployment. In the US, the Biden administration produced what it called a “blueprint for an AI bill of rights”, while China has set a goal of formulating over 50 standards for the AI sector by 2026.

A report last year by Brookings, the US-based think tank, said that the EU and the US are on a path to “significant misalignment” in their AI risk management regimes.

In partnership with the Global Cyber Security Capacity Centre (GCSCC) at the University of Oxford, Airmic held a roundtable in early August to solicit views on DSIT’s draft code, following which it made a submission to the government.

Leigh-Anne Slade, Head of Media, Communications and Interest Groups, Airmic, said: “At a practical level, Airmic members are concerned that having a patchwork of AI standards around the world would pose significant challenges when implementing AI applications across different jurisdictions. While there are concerns that governments are not reacting fast enough to the rapid developments in AI, it would equally be a mistake to put in place standards without engaging all stakeholders.”

If you would like to request an interview, or have any further questions, please let me know.

We will be sharing the results of the Airmic Big Question with the press weekly.

You can also find the results here.

Media contact: Leigh-Anne Slade
Head of Media, Communications and Interest Groups, Airmic
Leigh-Anne.Slade@Airmic.com
07956 41 78 77