ISO/IEC 42001:2023 – ESTABLISHING STANDARDS FOR RESPONSIBLE AI
- Florence A. Ogonjo
- January 15, 2024
- Artificial Intelligence
Following the rapid and continued growth of Artificial Intelligence (AI), and noting both the positive and negative impacts of its use and the need to establish robust governance structures for responsible AI, the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC), responsible for the development of a specialised system for worldwide standardisation, adopted the first-ever international AI management system standard, ISO/IEC 42001:2023. Adopted on 18 December 2023, the standard intends to help organisations responsibly apply and use AI, particularly in addressing considerations brought about by the use of AI, such as:1
- The use of AI for automatic decision-making, sometimes in a non-transparent and non-explainable way, can require specific management beyond that of classical IT systems.
- The use of data analysis, insight and machine learning, rather than human-coded logic, to design systems both increases the application opportunities for AI systems and changes the way that such systems are developed, justified and deployed.
- AI systems that perform continuous learning change their behaviour during use. They require special consideration to ensure that their responsible use continues as their behaviour changes.
The standard provides a certifiable AI Management System (AIMS) framework within which AI systems can be developed and deployed as part of an AI assurance ecosystem.2 It is designed for organisations and entities providing or utilising AI-based products or services, ensuring the responsible development and use of AI systems. The standard adds to the growing set of instruments aimed at establishing responsible AI in response to the growing ethical issues and unintended consequences of leveraging AI. It is also a notable step in moving from principle to practice: responsible AI presupposes a process of defining policies and establishing accountability to guide the creation and deployment of AI systems in compliance with ethics and laws.3 Responsible AI encompasses principles such as fairness, accountability, inclusiveness, reliability, transparency, privacy and security.
ISO/IEC 42001:2023 not only establishes requirements for establishing, implementing, maintaining and continually improving AI management systems in organisations; it also sets the expectation that organisations focus on the additional safeguards required by certain AI features, such as the ability to continuously learn and improve, or a lack of transparency or explainability, in comparison with more traditional IT systems.4
Additionally, the standard covers understanding the organisation and the context within which AI is utilised, organisational needs, and the needs of interested parties in utilising AI. It also provides directives on determining the scope of:5
- the AI management system, and the leadership and commitment of the organisation in the AI policy, roles, responsibilities and authorities;
- actions to address risks and opportunities, organisational AI objectives and planning processes on how to achieve set objectives; and
- the resources required, competencies, raising awareness, communication and documenting information.
The standard equally sets out operational requirements, including operational planning and control, AI risk assessment and treatment, AI system impact assessment, performance evaluation criteria for AI systems, and improvement structures.
The prerequisites given in the standard address various responsible AI principles, particularly transparency, continuous learning (which speaks to reliability), accountability, and privacy and security.
Organisations that choose to implement the standard in their use of AI will set out a structured system for managing the risks and opportunities associated with AI. This will also ensure the creation of balanced governance structures in which the needs of interested parties are considered and innovation processes are not stifled. Through this, organisations will be able to create responsible AI governance frameworks that are grounded not only in principle but in practice, characterised by a demonstration of upholding responsible AI principles throughout their organisational management structures and by generating evidence of responsibility and accountability regarding their role with respect to AI systems.6
This standard serves as a particularly instrumental guide for SMEs in different sectors that may lack clarity on how best to implement responsible AI within their organisations. For governmental organisations that are yet to establish, or are in the process of establishing, governance frameworks, it equally serves as a guide to the considerations involved in streamlining processes that leverage and utilise AI in providing public services. Through this, governments may establish the responsible use of AI and build public trust in such technologies; public trust is paramount to establishing the use of AI and ultimately benefiting from it. The adoption of this standard may also present an opportunity for governments and sectors that are yet to develop AI regulations to create comprehensive and robust regulatory frameworks grounded in responsible AI.
1 ISO/IEC 42001 Certification – Artificial Intelligence (AI) Management System https://www.sgsgroup.com.cn/en-ke/services/iso-iec-42001-certification-artificial-intelligence-ai-management-system
2 ISO/IEC 42001:2023 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:42001:ed-1:v1:en
3 ISO/IEC 42001:2023 https://www.normadoc.com/english/iso-iec-42001-2023.html
4 ISO/IEC 42001:2023 https://www.normadoc.com/english/iso-iec-42001-2023.html
5 State of AI in Africa Report 2023 (CIPIT, 2023) https://cipit.strathmore.edu/wp-content/uploads/2023/05/The-State-of-AI-in-Africa-Report-2023-min.pdf
6 ISO/IEC 42001:2023 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:42001:ed-1:v1:en