EU Guidelines for Article 50 of the AI Act
The draft EU Guidelines on Article 50 of the AI Act are a comprehensive attempt to translate the regulation’s transparency obligations into practical expectations for providers and deployers of AI systems. The guidelines open by situating Article 50 within the broader architecture of the AI Act, which entered into force on 1 August 2024 and adopts a risk‑based approach to regulating AI. Transparency risk is one of the four risk categories, and Article 50’s obligations will apply from 2 August 2026. The Commission stresses that these guidelines are non‑binding, but they are intended to help authorities, providers, and deployers implement the law consistently. As the document states, their purpose is to “serve as practical guidance to assist competent authorities, as well as providers and deployers of AI systems, in ensuring compliance with the transparency obligations”.
The guidelines begin by mapping the four transparency obligations in Article 50. The first concerns AI systems that interact directly with natural persons; the second concerns AI systems that generate or manipulate synthetic content; the third concerns emotion recognition and biometric categorisation systems; and the fourth concerns deep fakes and AI‑generated or manipulated text published to inform the public on matters of public interest. Each obligation has its own scope, responsible actor, and exceptions, and the guidelines emphasize that these obligations can apply cumulatively to the same system or output. The rationale behind all four obligations is to reduce risks of deception, impersonation, manipulation, misinformation, and fraud, and to protect democratic processes and societal trust. The guidelines quote the Act’s recitals to explain that transparency helps individuals “take informed decisions” and calibrate their trust in AI‑mediated interactions.
The guidelines clarify who is responsible for compliance. Providers are those who develop AI systems or place them on the market under their own name, regardless of where they are located, and they must ensure compliance with Article 50(1), (2), and (5) before the system is placed on the market. Deployers are those who use AI systems under their authority, unless the use is purely personal and non‑professional. The guidelines give examples to illustrate the distinction: a media outlet using AI to support its reporting is a deployer; an online platform merely transmitting AI‑generated content is not. The guidelines also explain that purely personal, non‑professional use is excluded from deployer obligations, but this exclusion is narrow. A person generating a deep fake of a mayor and posting it publicly cannot claim the personal‑use exemption, because the content affects public discourse. In the guidelines’ words, “an AI-generated or manipulated deep fake that is made publicly available… should not be considered a purely personal non-professional activity”.
Research and development activities are also excluded when the AI system is used solely for scientific research, but the moment the system or its outputs are used outside that context, Article 50 applies. Open‑source systems enjoy no blanket exemption either: the Act’s open‑source carve‑out does not extend to systems within the scope of Article 50, so open‑source providers and deployers must still comply whenever their systems fall within scope.
The guidelines emphasize that transparency obligations do not imply legality of the underlying system. A system may comply with Article 50 but still be prohibited under Article 5, such as emotion recognition in workplaces or schools. Similarly, systems subject to Article 50 may also be high‑risk and must meet additional requirements.
The guidelines then turn to the first major obligation: transparency for interactive AI systems. Providers must design systems so that natural persons are informed they are interacting with AI. The guidelines unpack what it means for a system to be “intended to interact directly with natural persons.” The system must be an AI system, must be designed for bidirectional exchange, must interact directly rather than through intermediaries, and must interact with natural persons rather than operating in closed industrial environments. Examples include chatbots, voice assistants, AI avatars, and social‑media bots. Systems like recommender engines, spam filters, or backend decision‑support tools do not qualify because they do not engage in direct interaction.
The obligation requires disclosure at or before the first interaction, and the guidelines emphasize that the disclosure must be clear, accessible, and adapted to vulnerable groups such as children or persons with disabilities. The guidelines give examples of acceptable disclosures, such as a chatbot stating “You are interacting with an AI system,” a voice assistant announcing its AI nature, or a visible AI label on an email generated by an AI agent. They also warn against disclosures buried in terms and conditions, ambiguous signals, or purely machine‑readable metadata. The guidelines stress that multimodal disclosure — combining text, audio, and visual cues — is often the most effective.
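To make the timing and visibility requirements concrete, the following minimal sketch shows one way a provider might surface the disclosure in a conversational interface. It assumes a simple command‑line chatbot in Python; the disclosure wording and the generate_reply placeholder are illustrative choices, not language prescribed by the guidelines.

    # Minimal sketch: disclose the system's AI nature before the first
    # exchange, visibly in the conversation itself rather than buried in
    # terms and conditions. Wording and helper names are illustrative.
    DISCLOSURE = "You are interacting with an AI system."

    def generate_reply(prompt: str) -> str:
        # Placeholder for the actual model call.
        return f"(model reply to: {prompt!r})"

    def chat() -> None:
        # Provide the notice at the latest at first interaction (Article 50(5)).
        print(DISCLOSURE)
        while True:
            user_input = input("> ")
            if user_input.strip().lower() in {"quit", "exit"}:
                break
            print(generate_reply(user_input))

    if __name__ == "__main__":
        chat()

The point is structural: the notice is part of the interaction flow itself, so every user encounters it before the first exchange rather than having to hunt for it in settings or legal text.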
Two exceptions apply. The first is when the artificial nature of the interaction is obvious to a reasonably well‑informed, observant, and circumspect person, taking into account the target audience and context. The guidelines explain that this standard is borrowed from EU consumer law. For example, developers interacting with a code‑assistant chatbot can reasonably be expected to know it is AI, but a highly realistic robotic pet or a human‑like avatar in a virtual environment would not qualify as obvious. The second exception applies when the system is authorised by law for detecting, preventing, investigating, or prosecuting criminal offences, except when the system is available to the public to report crimes. Police chatbots for public reporting must still disclose their AI nature.
The second major obligation concerns marking and detection of AI‑generated or manipulated content. Providers of such systems must ensure that outputs are marked in a machine‑readable format and that the content is detectable as AI‑generated or manipulated. Both marking and detection must be implemented; as the guidelines put it, “Fulfilling only one element… will not suffice”. The obligation applies to synthetic audio, image, video, or text content, including multimodal content and virtual or augmented reality. It applies to both generation and manipulation, and includes GPAI systems and agentic systems when their outputs are perceptible by humans.
The guidelines clarify what falls outside the scope: content that merely reproduces existing material, machine‑to‑machine outputs, sensor data, or industrial outputs not intended for human interpretation. They also explain that marking solutions may include watermarks, metadata, cryptographic provenance, fingerprints, or combinations thereof. Providers may implement marking at the model or system level and may rely on upstream solutions, but they remain responsible for compliance.
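As a concrete illustration of the metadata route, the sketch below embeds a machine‑readable marker in PNG text chunks using the Pillow library. The field names are hypothetical, not a standardised scheme, and metadata alone is fragile (it is easily stripped on re‑encoding), which is precisely why the guidelines anticipate combinations of techniques such as watermarks and cryptographic provenance alongside it.

    # Minimal sketch of metadata-based marking, one of the techniques the
    # guidelines list alongside watermarks, fingerprints, and cryptographic
    # provenance. Assumes the Pillow library; the field names below are
    # hypothetical, not a standardised scheme.
    from PIL import Image
    from PIL.PngPlugin import PngInfo  # noqa: illustrative import path below
    from PIL.PngImagePlugin import PngInfo

    def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
        """Embed a machine-readable AI-generation marker in PNG text chunks."""
        img = Image.open(in_path)
        meta = PngInfo()
        meta.add_text("ai-generated", "true")
        meta.add_text("ai-generator", generator)
        img.save(out_path, pnginfo=meta)

    mark_as_ai_generated("raw_output.png", "marked_output.png", "example-model-v1")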
Detection tools must be made available so that natural persons and relevant actors can verify whether content is AI‑generated or manipulated. The results must be human‑readable and available at first exposure. The technical solutions must be effective, reliable, robust, and interoperable. The guidelines explain each term: effectiveness means enabling humans to distinguish AI content; reliability means accurate identification; robustness means resilience to alterations and adversarial attacks; interoperability means compatibility across systems. Providers must implement technically feasible, state‑of‑the‑art solutions, and because no single technique currently satisfies all requirements, combinations of techniques are expected. The guidelines allow narrow exceptions for industrial applications where outputs are strictly technical and confined to professional users, or for ephemeral real‑time content in contexts like video games.
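A companion sketch for the detection side might read the marker back and present the result in human‑readable form, as the guidelines require. It assumes the same hypothetical field names as above; a real detector would also probe watermarks and signed provenance, since a missing metadata marker proves nothing about the content’s origin.

    # Companion sketch: surface the marker in human-readable form at the
    # point of exposure. Absence of the marker does not prove human origin.
    from PIL import Image

    def check_ai_marker(path: str) -> str:
        info = Image.open(path).info  # PNG text chunks appear in .info
        if info.get("ai-generated") == "true":
            generator = info.get("ai-generator", "an unknown system")
            return f"This content is marked as AI-generated (by {generator})."
        return "No AI-generation marker found; this does not prove human origin."

    print(check_ai_marker("marked_output.png"))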
The guidelines then describe exceptions: systems performing only standard editing (such as grammar correction, noise reduction, or minor colour adjustments) are exempt; systems that do not substantially alter input data or its semantics are exempt; and systems authorised by law for criminal‑offence purposes are exempt. The guidelines provide examples of minor edits versus semantic changes, noting that adding or removing objects, altering body shape, or changing skin colour are substantial manipulations requiring marking.
The third obligation concerns emotion recognition and biometric categorisation systems. Deployers must inform natural persons exposed to such systems, whether in real time or ex post. Emotion recognition is defined as identifying or inferring emotions or intentions from biometric data, and biometric categorisation involves assigning persons to categories based on biometric data. The obligation applies broadly, regardless of whether the system is high‑risk, though many such systems are high‑risk by definition. Deployers must inform all exposed persons, including children, in a clear and accessible manner at first exposure. The guidelines give examples such as pop‑up notices in games or signage at exhibition entrances. The only exception is when the system is authorised by law for criminal‑offence purposes.
The fourth obligation concerns deep fakes and AI‑generated or manipulated text published to inform the public on matters of public interest. Deployers must clearly disclose that deep fake content has been artificially generated or manipulated. A deep fake is defined as AI‑generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. The guidelines unpack each element: resemblance must be appreciable; the subject must be realistic; the content must depict persons, objects, places, entities, or events; and the content must be capable of misleading a person. The guidelines emphasize that the assessment must consider the actual audience, including vulnerable groups, not an abstract average person. Minor technical edits do not create deep fakes, but substantive manipulations do.
Deployers must label deep fakes in a clear and perceivable way. However, an attenuated regime applies to artistic, creative, satirical, fictional, or analogous works, where disclosure must be done in an appropriate manner that does not hamper enjoyment of the work. The guidelines explain each category and require that the artistic or fictional nature be evident. Even in these cases, deployers must safeguard the rights and freedoms of third parties, including image rights and intellectual property.
The guidelines then address AI‑generated or manipulated text published to inform the public on matters of public interest. The text must be published, meaning accessible to an indeterminate public; it must aim to inform; and it must concern matters of public interest such as public administration, health, environment, consumer safety, politics, or science. Deployers must disclose that the text is AI‑generated or manipulated unless two conditions are met: the text has undergone human review or editorial control, and a natural or legal person holds editorial responsibility. Human review must be substantive, not superficial, and editorial responsibility must be publicly identifiable. The guidelines give examples of qualifying and non‑qualifying cases.
The guidelines then explain the horizontal requirement in Article 50(5): all information must be provided clearly, distinguishably, and at the latest at first interaction or exposure, and must comply with accessibility requirements. Information must be noticeable, easy to understand, and not buried in manuals or menus. First exposure applies to each natural person encountering the content, not just the first person ever exposed. The guidelines give examples such as labelling deep fakes at the start of a video rather than in end credits.
The enforcement section explains that providers and deployers may demonstrate compliance by adhering to a code of practice assessed as adequate by the AI Office. Doing so simplifies supervision and may mitigate penalties. Those not adhering must demonstrate compliance through other means and may face more scrutiny. Market surveillance authorities, the AI Office, and the European Data Protection Supervisor enforce Article 50, with powers under the AI Act and Regulation 2019/1020. Penalties can reach €15 million or 3% of total worldwide annual turnover, whichever is higher. Article 50 applies from 2 August 2026, and all systems in scope must comply regardless of when they were placed on the market, except for a proposed transitional rule for marking and detection under Article 50(2). Existing AI‑generated content does not need retroactive marking, but actors are encouraged to label it voluntarily.
The guidelines conclude by noting that they will be reviewed as technology and enforcement evolve, and the Commission invites ongoing contributions from stakeholders.