The European Commission on Wednesday launched a formal consultation process to draft new transparency guidelines and a Code of Practice for artificial intelligence systems, particularly generative AI. The initiative, which runs until October 2, aims to clarify the obligations outlined in the EU’s Artificial Intelligence Act and to ensure that users are properly informed when interacting with AI-generated or AI-manipulated content.

The consultation calls on a wide spectrum of stakeholders, including AI system providers, public institutions, researchers, civil society representatives, supervisory bodies and citizens, to provide input on how transparency obligations should be applied in practice. The move comes as part of the European Union’s broader strategy to strengthen oversight of AI technologies and reinforce trust in digital systems across member states.
Under Article 50 of the AI Act, which came into force on August 1, 2024, providers and deployers of certain AI systems are legally required to inform individuals when they are engaging with AI, especially in contexts involving biometric categorization, emotion recognition, or exposure to synthetic content. These rules become enforceable on August 2, 2026, giving stakeholders a two-year window to prepare for full compliance.
The European Commission is developing two separate but related instruments. The first is a set of official guidelines aimed at providing legal and technical clarity on transparency provisions. These guidelines will interpret the scope, definitions, and potential exceptions within Article 50. The second is a voluntary Code of Practice that will offer practical and technical measures to detect, label, and disclose AI-generated or manipulated content.
Generative AI systems in focus under new framework
The Code will focus particularly on output-level transparency in generative AI systems. To support the drafting of the Code of Practice, the Commission has also issued a parallel call for expressions of interest. Selected stakeholders will be invited to participate in the co-creation process, beginning with a plenary session in November 2025. The drafting process is expected to conclude by June 2026.
The Commission emphasized that this inclusive, multi-stakeholder approach is essential for creating a workable and widely accepted framework for transparency. The new initiative builds on the General-Purpose AI (GPAI) Code of Practice introduced by the Commission in July 2025. While the GPAI Code focused on systemic issues such as data transparency, model documentation, and copyright, the current consultation targets more immediate user-facing concerns, particularly the identification and labeling of AI-generated outputs in public and private digital environments.
Transparency has become a central issue in the EU’s approach to AI governance as concerns mount over the misuse of generative models in media, education, advertising, and political communication. Officials say the guidelines and Code of Practice will play a critical role in safeguarding public trust, ensuring user autonomy, and preventing the unintentional spread of misinformation.
Stakeholder collaboration seen as essential to success
The European Commission has stressed that while the Code of Practice is voluntary, it is intended to serve as a blueprint for best practices and may influence future regulatory enforcement under the AI Act. Once finalized, the guidelines and Code will help create a consistent framework for AI transparency across the EU’s 27 member states, setting a global benchmark for responsible AI deployment.
The consultation is managed by the European AI Office and forms part of a broader push to align AI regulation with ethical standards, fundamental rights, and the EU’s Digital Decade objectives. With a tight timeline and active participation from key stakeholders, the European Commission aims to finalize a robust set of tools that can guide AI developers and users toward full legal and ethical compliance. – By EuroWire News Desk.
