The article “The EU’s AI code: brave, but insufficient” first appeared on the online magazine Basic Thinking.
Where is Europe headed in regulating artificial intelligence? A new EU code of conduct for AI is supposed to create legal certainty, but has met with fierce criticism. So what does this voluntary code of conduct deliver – and what doesn’t it?
The GPAI Code of Practice is not a law. It is a voluntary code of conduct – a soft-law instrument intended to help providers of AI models comply with the European AI Act. That act is the heart of European AI regulation, and it places high demands on transparency, security and copyright conformity, especially for advanced models.
The GPAI code spells out these requirements in three chapters: “Transparency”, “Copyright” and “Safety and Security”. It covers documenting the models, strategies for complying with copyright, and procedures for dealing with systemic risks. And although the code is voluntary, the EU sends a clear signal: whoever adheres to it can hope for milder regulatory treatment – a kind of “presumption of conformity”.
For companies that have so far been navigating in regulatory fog, this is good news in principle. At last there are sample forms, practical checklists and an official guideline explaining what, for example, “sufficient transparency” means under Article 53 of the AI Act.
EU Code: Between Voluntariness and Political Pressure
But this is only one side of the coin. As soon as the code was published, critical voices emerged. The digital association Bitkom praised the code’s direction, but warns of a “bureaucratic monster” that could overwhelm many companies with its open-ended risk assessment – the so-called “open-ended risk identification”.
Large companies such as Siemens, SAP, Lufthansa and Airbus were even harsher in their judgment of the code: in its current form, they argue, the GPAI code is not only detached from practice but hostile to innovation. SAP CEO Christian Klein, for example, calls regulation in this form “toxic” – a remarkable term for a European digital strategy that is actually meant to create trust.
But there is also resistance from civil society. The NGO “The Future Society” criticizes that American technology groups had too strong an influence on the final guidelines. Precisely where transparency was supposed to be created, it is incomprehensible to outsiders why certain provisions were weakened – or are missing entirely.
The price of compromise
And indeed: in some places the GPAI code falls short of what experts and data protection advocates consider necessary. First, risk analyses and model reports only have to be submitted after market launch – and the principle of “publish first, regulate later” is anything but harmless with AI.
Second, real whistleblower protection is missing, although internal information on risk management would be extremely important. Third, there is no obligation to draw up emergency plans – surprising in a technological field that can have systemic effects on infrastructure, education or health.
And fourth, the code largely leaves the specific definition of risks to the providers themselves – including their evaluation and control. All of this is understandable in a liberal innovation climate. But the question remains whether this leap of faith is justified.
AI code: Europe between control and opportunity
Despite all the legitimate criticism, one thing should not be forgotten: in my eyes, the GPAI code is a courageous attempt to give technological progress a European framework – beyond American market logic and Chinese state control.
The EU could also have waited to see how global competition developed. Instead, it acted – and created a structured, publicly debated code that is ambitious, if not perfect. In my opinion, that deserves recognition.
Because especially in times when trust in technology is no longer a given, we need political initiatives that combine security, responsibility and innovative strength. Even if the GPAI code still has gaps, it can be an important building block for a values-based AI policy that does not deter European providers but strengthens them.
EU’s AI code: A first step – no more, no less
Of course, the GPAI code will have to be revised. The criticism voiced is too concrete, the weaknesses too obvious, for it to be carried into practice unchanged. But it is also a first foundation – an invitation to participate, a blueprint for standards that can evolve.
What becomes of it depends on whether the actors involved are willing to name the gaps without rejecting the whole idea. Perhaps a second version is needed. Perhaps additional protective mechanisms. And certainly more dialogue – between companies, governments, NGOs and civil society.
Because the code only makes sense if it not only sets rules but also creates trust. Trust in a technology that cannot do everything, but will change a lot – if we let it. And if we accompany it wisely.