Airmic has called for the UK government’s proposed Code of Practice on AI cyber security to treat the business as a whole as a stakeholder, rather than focusing only on IT developers, system operators and data controllers.
In partnership with the Global Cyber Security Capacity Centre (GCSCC) at the University of Oxford, Airmic held a roundtable in early August to solicit views on the Department for Science, Innovation and Technology’s (DSIT) draft code, following which it made a submission to the government. Airmic acknowledged the contributions that Dr Jamie Saunders, Oxford Martin Fellow at the University of Oxford, made to the roundtable and the resulting submission.
“We commend the work of DSIT in preparing the draft AI Cyber Security Code of Practice, which will provide the much-needed steer in AI that is sought by Airmic members and the organisations they serve,” said Julia Graham, Airmic CEO.
“Given the pro-business, pro-innovation aims behind this Code of Practice, which we thoroughly support, the business as a whole should be considered as the stakeholder. Only then can the Code get the buy-in of board directors and the C-suite.”
The proposed Code of Practice identifies four stakeholder groups as primarily responsible for implementing it – developers, system operators, data controllers and end users.
“The draft Code of Practice looks to provide a great first step in moving the UK towards a more secure, consistent and professional approach to AI cyber security, and the associated risks,” said James Ingham, Airmic member and Group Risk Manager at Orange. “The code seems currently to aim at IT professionals at a quite technical level, and it is my belief that to really be effective, the next step will be to broaden this to include recommendations for senior managers, and businesses as a whole.”
Comprising 12 principles on issues ranging from staff awareness of AI risks to testing and evaluating AI models, the draft code is based on the National Cyber Security Centre’s (NCSC) Guidelines for secure AI system development, published in 2023 alongside the US Cybersecurity and Infrastructure Security Agency (CISA) and other international cyber agencies.
While the government envisages that the code will be adopted on a voluntary basis, an Airmic Big Question survey conducted in August showed that 74% of respondents believed the code should be made compulsory.
“There always tends to be a tension between regulation and innovation when it comes to emerging technologies such as AI,” said Hoe-Yeong Loke, Head of Research, Airmic. “But Airmic members recognise that AI is already being used in misinformation and disinformation campaigns, presenting clear threats to democratic societies.”
Airmic and DSIT have agreed to continue engaging with each other as standards and regulations develop in the critically important space that is AI.
“The regulation of AI today is a pressing issue that calls for more dialogue,” Leigh-Anne Slade, Head of Media, Communications and Interest Groups, Airmic, said. “Associations such as Airmic can play a major role in bringing all stakeholders to the table – risk professionals, business leaders and regulators – to ensure the regulatory landscape for this fast-evolving space is fit for the future.”