AI ACT

AI Act – European regulation on artificial intelligence

by Sara Commodo

On 13 March 2024 the European Parliament approved by a very large majority (523 votes in favour, 46 against, 49 abstentions) the European regulation on artificial intelligence, the so-called AI Act, the first regulatory framework on artificial intelligence.

It is a historic document: it is the first of its kind worldwide, coming after the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence signed by US President Biden on 30 October 2023.

The Regulation will enter into force twenty days after publication in the Official Journal of the EU and will begin to apply 24 months after entry into force, with the following exceptions:

  • the prohibitions on prohibited practices, which will apply six months after entry into force;
  • the codes of good practice, which will apply nine months after entry into force;
  • the rules on general-purpose AI systems, including governance, which will apply 12 months after entry into force;
  • the obligations for high-risk systems, which will apply 36 months after entry into force.
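These staggered deadlines are simple date arithmetic from the entry-into-force date. As a minimal sketch in Python, assuming a hypothetical entry-into-force date of 1 August 2024 (the actual date depends on publication in the Official Journal) and an illustrative add_months helper of my own, the timeline could be computed as follows:

```python
from datetime import date

def add_months(start: date, months: int) -> date:
    # Shift `start` forward by a number of calendar months, clamping
    # the day to the 28th to avoid month-length edge cases.
    total = start.month - 1 + months
    year, month = start.year + total // 12, total % 12 + 1
    return date(year, month, min(start.day, 28))

# Placeholder: the real date depends on publication in the Official Journal.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on prohibited practices": 6,
    "Codes of good practice": 9,
    "General-purpose AI rules, incl. governance": 12,
    "General application of the Regulation": 24,
    "High-risk system obligations": 36,
}

for rule, months in milestones.items():
    print(f"{rule}: applies from {add_months(entry_into_force, months)}")
```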

The main objective of the AI Act is to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI systems, while promoting innovation and ensuring that Europe leads a field that is growing enormously.

According to the latest edition of the Artificial Intelligence Observatory of the School of Management of the Polytechnic of Milan, the Italian artificial intelligence market is growing rapidly: +52% in 2023, for a total value of 760 million euros.

In the Regulation, an artificial intelligence system is defined as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”.

The new Regulation will apply to all public and private entities that produce tools based on artificial intelligence technology aimed at the European market, regardless of whether the companies are European or not: if American companies want to continue to operate on the European market, they too will have to adapt.

Not only suppliers, but also users will have to ensure that the product is compliant.

Exceptions are foreseen: the Regulation will not apply to AI systems for military, defense or national security purposes, to those for scientific research and development purposes, to those released under free and open source licenses (without prejudice to the verification of the existence of a risk), to research, testing and development activities relating to artificial intelligence systems, or to natural persons who use AI systems for purely personal purposes.

AI systems are divided into four macro-categories of risk: minimal, limited, high and unacceptable. The greater the risk, the greater the responsibilities and limits for developers and users.

High-risk AI systems are those intended to be used as a safety component of a product, or those listed in Annex III. They will be admitted provided they do not present a significant risk of harm to the health, safety or fundamental rights of natural persons.

For limited-risk systems such as ChatGPT there are only transparency and information obligations.

Suppliers and importers of high-risk AI systems have, among other things, the obligation to:

  • guarantee compliance with the specific technical requirements indicated by the Regulation,
  • indicate essential information about the system,
  • have a quality management system,
  • draw up an EU declaration of conformity,
  • take corrective measures if necessary and provide all the information requested to the competent authorities.

Deployers of high-risk AI systems are required to:

  • adopt technical and organizational measures suitable for compliant use,
  • entrust human oversight of the systems to competent persons,
  • monitor the functioning of the systems and cooperate with supervisory and control authorities where necessary,
  • carry out an impact assessment on fundamental rights in certain cases, as required by Article 27.

Most of the obligations fall on suppliers, to whom importers, distributors and deployers are assimilated: any third party may be identified as the supplier of a high-risk AI system if it has affixed its name or trademark to a system already placed on the market or put into service, if it has made substantial changes to the system after its placing on the market or putting into service (provided the system remains high-risk), or if it has changed the intended purpose of the AI system, making it high-risk.

The law sets limits on the use of biometric identification systems by law enforcement authorities and lays down rules to combat manipulation and the exploitation of users' vulnerabilities; consumers will have the right to lodge complaints and to receive meaningful explanations.

The new rules prohibit certain AI applications that risk harming citizens' rights. Among these:

  • Biometric categorization systems based on sensitive characteristics: these systems can be used to discriminate against people based on race, ethnicity, sex, religion or other sensitive factors.
  • Indiscriminate extrapolation of facial images from the internet or CCTV footage to create facial recognition databases: this technology can be used to track people without their consent, violating their privacy.
  • Emotion recognition systems in the workplace and in schools: these systems can be used to monitor people and discriminate against them based on their emotions.
  • Social credit systems: these systems can be used to control people and limit their freedom.
  • Predictive policing practices based on profiling or on evaluating a person's characteristics: these practices can be used to discriminate against people and deprive them of their rights.
  • Systems that manipulate human behavior or exploit people's vulnerabilities: these systems can be used to harm people and deprive them of their free will.

Law enforcement authorities “will not be able to use biometric identification systems”, except in certain specific situations expressly provided for by law. These include, for example, the search for a missing person, the prevention of a terrorist attack or the investigation of a serious crime.

Identification “in real time” may only be used if strict safeguards are respected, for example if its use is limited in time and space and subject to judicial or administrative authorization.

Furthermore, artificial or manipulated images and audio or video content (so-called “deepfakes”) must be clearly labeled as such.

EU countries will have to establish and make accessible at national level regulatory sandboxes and mechanisms for testing in real-world conditions, so that SMEs and start-ups can develop and train innovative AI systems before placing them on the market.
