26 Tech Giants Sign On to EU’s AI Code of Conduct for Safer, Smarter AI

In a landmark move to regulate artificial intelligence, 26 leading technology companies have signed the European Union's voluntary General-Purpose AI Code of Practice, the European Commission announced on July 31, 2025. The code, designed to align with the EU's AI Act, aims to ensure transparency, safety, and copyright compliance for advanced AI models, marking a significant step toward trustworthy AI in Europe.

A Framework for Responsible AI

The Code of Practice, finalized on July 10, 2025, provides a voluntary framework to help companies comply with the AI Act's obligations, which take effect on August 2, 2025; enforcement begins in 2026 for new models and in 2027 for existing ones. The code focuses on three key areas: transparency in AI model training data, copyright protections, and safety measures to mitigate systemic risks, such as the potential misuse of AI in developing harmful technologies like chemical weapons.

Signatories include industry giants such as Google, OpenAI, and Microsoft, the French AI startup Mistral, and European firms such as Airbus and Mercedes-Benz. Notably, Meta has declined to sign, citing "legal uncertainties" and the potential stifling of innovation; its global affairs chief, Joel Kaplan, called the code an "overreach" that could hinder European AI development.

Benefits and Challenges

The European Commission, led by Henna Virkkunen, executive vice president for tech sovereignty, security, and democracy, hailed the code as a tool to make AI “not only innovative but also safe and transparent.” Signatories benefit from reduced administrative burdens and greater legal certainty, while non-signatories must prove compliance through costlier alternative methods.

However, the code has faced pushback. Over 40 European companies, including some signatories such as Mistral, have urged a two-year delay in AI Act enforcement, arguing that complex regulations threaten Europe's competitiveness against the U.S. and China. Critics also note that the code's voluntary nature may limit its impact, and that enforcement gaps, such as how misinformation will be addressed, remain unresolved.

Europe’s AI Ambitions

The EU's broader AI strategy emphasizes excellence and trust, backed by €1 billion in annual investment through the Horizon Europe and Digital Europe programs, with a target of €20 billion per year once private and member-state contributions are included. The AI Act, complemented by initiatives like GenAI4EU, seeks to foster innovation while upholding safety and fundamental rights, positioning Europe as a global leader in trustworthy AI.

The participation of 26 tech firms signals strong industry support, but the absence of key players like Meta and ongoing debates over regulatory burdens highlight challenges. As the AI Act rolls out, the code’s success will depend on balancing innovation with rigorous oversight in a fiercely competitive global AI landscape.
