### Artificial Intelligence: Regulations, Standardization, and Compliance Challenges
In recent years, artificial intelligence (AI) has made significant strides, bringing with it challenges and opportunities that require careful regulation. The rapid evolution of AI systems and their applications has created a growing need for guidelines and technical standards that ensure the safe and responsible use of these technologies and maintain trust in their deployment, especially in critical sectors such as education, employment, public services, and justice.
#### The AI Act and High-Risk Systems
One of the fundamental instruments for regulating AI is the AI Act. This legislative text focuses in particular on uses of AI defined as “high-risk,” imposing transparency obligations and specific requirements on systems that interact with humans, such as biometric recognition technologies and generative systems. These requirements also extend to systems that recognize and categorize emotions.
Although the compliance obligations will come into effect in the coming years, organizations should already begin preparing for these new rules, a complex challenge given the speed and breadth of AI adoption.
#### Compliance and Harmonized Standards
According to the AI Act, compliance with harmonized standards approved by the European Commission provides a presumption of conformity with the obligations set forth for high-risk systems. Technical standards therefore play a crucial role, serving as a reference for organizations that wish to operate within the legal parameters. However, because AI is a socio-technical technology, entailing strong interaction between human and algorithmic components, developing such standards is particularly difficult.
The AI Act not only emphasizes the importance of creating reliable and human-centered AI systems but also aims to protect fundamental rights and the environment, requiring a delicate balance in drafting standards.
#### The Demand for Standardization
In May 2023, the European Commission requested that European standardization bodies develop standards covering ten key themes, including risk management, dataset governance, transparency, and human oversight. These topics aim to ensure that AI solutions are not only effective but also ethical and respectful of individual rights. To achieve this goal, it is essential that European standards align with international ones, thus creating a common regulatory framework.
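In practice, an organization preparing for these standards might track its readiness against each theme internally. The sketch below is a hypothetical illustration of that idea, assuming an invented `ThemeAssessment` structure; it covers only the four themes named above (the Commission's request spans ten in total), and the statuses and notes are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class ThemeAssessment:
    """One standardization theme and an organization's readiness for it.

    Hypothetical structure for illustration; not taken from the AI Act
    or from any standard.
    """
    theme: str
    has_documented_process: bool
    notes: str = ""

# Only the four themes named in the text; the request covers ten in total.
assessments = [
    ThemeAssessment("risk management", True, "risk register maintained"),
    ThemeAssessment("dataset governance", False, "data lineage incomplete"),
    ThemeAssessment("transparency", True, "user-facing notices drafted"),
    ThemeAssessment("human oversight", False, "escalation path undefined"),
]

# Themes that still lack a documented process, i.e. compliance gaps.
gaps = [a.theme for a in assessments if not a.has_documented_process]
print(gaps)  # ['dataset governance', 'human oversight']
```

A simple gap list like this is only a starting point; the eventual harmonized standards will define what a documented process must actually contain.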
Coordination with international regulations not only promotes trade and production but could also raise protective standards in less regulated jurisdictions, supporting a global spread of responsible practices in AI management.
#### ISO/IEC 42001 Standard: Content and Application
An emerging standard in this context is ISO/IEC 42001, designed to address AI management systematically within organizations. This standard dovetails with the European Commission’s standardization requests and offers a framework for the responsible implementation of AI technologies.
ISO/IEC 42001 describes a process-oriented management system and serves as a useful tool for any entity using AI systems. It includes requirements covering quality, transparency of information, risk management, and record keeping.
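The record-keeping requirement, in particular, lends itself to a concrete sketch: each decision an AI system produces is appended to an audit log together with its inputs and the model version, so it can be reviewed later. The field names and log format below are assumptions for illustration, not prescribed by ISO/IEC 42001 itself.

```python
import json
from datetime import datetime, timezone

def audit_record(model_version, inputs, output):
    """Build one audit-log entry for a single AI system decision.

    Hypothetical schema: the standard requires records to be kept, but
    does not mandate these specific fields.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }

# Append-only log: entries are added, never modified, to preserve
# an auditable history of decisions.
log = []
log.append(audit_record("v1.2", {"applicant_id": "A-17"}, "review"))
print(json.dumps(log[-1], indent=2))
```

Keeping such records is also what makes the risk-management and transparency requirements actionable: without a decision history, there is nothing to assess or disclose.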
#### Compliance with the Obligations of the AI Act
While the ISO/IEC 42001 standard provides tools that help organizations comply with the AI Act, there are significant differences between the standard and the European regulation. For example, the AI Act requires specific impact assessments that are essential for managing the risks associated with AI. While the ISO standardization process offers a certain degree of flexibility, it does not always allow strict alignment with the regulatory requirements.
In this context, organizations must navigate the complexities of both the ISO standard and the regulatory requirements set forth by the AI Act to ensure comprehensive compliance.