Meta has stirred up a new wave of debate around AI regulation by publicly refusing to sign the European Union’s voluntary Code of Practice for general-purpose AI models. The announcement came just weeks before the first phase of the EU’s landmark AI Act takes effect on August 2. While many tech companies have opted to sign ahead of the law’s enforcement, Meta has taken a bold stance, calling the code legally unclear and potentially harmful to AI innovation in Europe.

Joel Kaplan, Meta’s Chief Global Affairs Officer, announced the decision in a LinkedIn post criticizing the EU’s approach. According to Kaplan, the current draft of the Code introduces serious legal uncertainties for developers of large language models and goes beyond the scope of the AI Act itself. Although the document is not legally binding, Kaplan argues that the expectations it sets are overly rigid and could slow the progress of AI technologies across Europe.

The Code of Practice is structured around three key areas: transparency, copyright, and safety. It provides guidance on how companies should document their models, ensure compliance with EU copyright laws, and implement safety protocols to avoid potential risks to users and society. Although voluntary, the code is seen as an important step in preparing companies for the stricter AI regulations that will follow once the full AI Act comes into force.

Not everyone in the AI industry shares Meta’s assessment, however. OpenAI has confirmed its intention to sign the Code, aligning itself with the EU’s cautious but structured approach to AI deployment. The split underscores how differently major players view regulatory oversight.

Meta’s resistance comes at a time when several other European companies, including major firms like Airbus, Siemens Energy, and Mercedes-Benz, have also expressed concerns about the rapid implementation of the AI Act. In an open letter to the European Commission, these companies requested a temporary halt or delay in rolling out the law, warning that it could stifle innovation and discourage global AI leaders from investing in Europe.

Despite the growing pressure, the European Commission remains firm. In a recent statement, spokesperson Thomas Regnier said there will be no delay, no grace period, and no pause in enforcing the AI Act; the timeline remains unchanged, and the rollout will proceed as planned.

As the debate intensifies, the clash between innovation and regulation is becoming more visible. While the EU hopes to lead the world in ethical AI governance, companies like Meta are raising critical questions about how much control is too much, and at what cost to technological progress.

For more updates on AI policies, global tech decisions, and how they impact innovation, follow Tech Moves on Instagram and Facebook.