On August 1, 2024, the AI Act came into force. It is intended to regulate artificial intelligence within the EU. However, the regulation is not only being implemented gradually; it also raises many questions. Anyone who thinks that everything has already been sorted out is therefore very much mistaken. A commentary.
What does the AI Act change? Is it a bureaucratic monster? Or does it even hinder progress? These and similar questions are currently being discussed in the media in connection with the AI Act, the EU’s first AI law, which officially came into force on August 1, 2024.
But many of these contributions only scratch the surface or are pure speculation, because there is still a great deal of uncertainty surrounding the AI Act. It comes into force gradually and initially changes nothing at all. The regulation should rather be understood as a process that still needs to be developed consistently.
What is the AI Act?
The AI Act is intended to create a legal framework for the regulation of artificial intelligence within the EU. To this end, it categorizes AI applications according to the risks they pose to the fundamental rights and safety of EU citizens. Depending on their classification, different deadlines and rules apply to these systems.
From February 2, 2025, the first applications posing an “unacceptable” risk will be banned. These include surveillance systems such as so-called social scoring, which evaluates people’s social behavior. Real-time biometric surveillance of public spaces by law enforcement authorities is also to be prohibited, at least without a court order.
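To make the regulation’s tiered logic a little more tangible, here is a minimal sketch in Python of how the four risk tiers and the first deadline mentioned above could be modeled. It is purely illustrative: the example systems and the `RiskTier` and `AISystem` names are assumptions made for this sketch, not terms from the law, and a real legal classification always depends on the specific use case.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the AI Act, heavily simplified."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency duties, e.g. labeling AI content
    MINIMAL = "minimal"            # largely unregulated


@dataclass
class AISystem:
    name: str
    tier: RiskTier


# Illustrative examples only; they do not constitute a legal assessment.
EXAMPLES = [
    AISystem("social scoring platform", RiskTier.UNACCEPTABLE),
    AISystem("real-time biometric surveillance in public spaces", RiskTier.UNACCEPTABLE),
    AISystem("AI image generator", RiskTier.LIMITED),
    AISystem("spam filter", RiskTier.MINIMAL),
]

# The ban on unacceptable-risk practices applies from February 2, 2025;
# the remaining obligations follow in staggered stages.
BAN_DATE = date(2025, 2, 2)

for system in EXAMPLES:
    if system.tier is RiskTier.UNACCEPTABLE:
        print(f"{system.name}: prohibited from {BAN_DATE.isoformat()}")
    else:
        print(f"{system.name}: tier '{system.tier.value}', obligations phased in later")
```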
The problem: the AI Act not only leaves a lot of room for interpretation; it is also riddled with loopholes. The EU has not fundamentally banned biometric facial recognition. Authorities are still allowed to use AI for retrospective identification. However, it is unclear where exactly the line between real-time surveillance and retrospective identification lies.
Another half-baked rule: in the future, providers of AI systems must label image, audio and video content that was generated with AI. So far, so good. But media outlets that take their craft seriously don’t want to mislead their audience anyway, and anyone who deliberately wants to spread disinformation and propaganda will hardly care about a labeling requirement.
AI law raises many questions
The situation is similar with copyright. According to the AI Act, companies will in the future have to ensure that they do not commit any copyright infringements when training their AI models. The problem: not only do opinions differ widely as to whether AI systems may be trained on copyrighted data; it is also unclear when exactly an infringement occurs, because copyright law in Germany does not yet contain clear rules on artificial intelligence.
Implementing the AI Act is now the responsibility of the individual EU member states, within their respective borders and legal systems. Due to the staggered transition periods, however, nothing will change for the time being. The first rules only become binding from February 2025, including the ban on AI systems posing an unacceptable risk.
Only then, at the earliest, will it become apparent whether and how effective the AI Act is in practice. The EU member states can and must determine the specific implementation themselves. Germany, for example, could ban biometric facial recognition altogether within the framework of its own laws, going beyond the ban on real-time surveillance.
In Germany, however, responsibility must first be clarified, because it is still unclear which authority is supposed to implement the AI Act here. Anyone who conjures up a bureaucratic monster against this background is out of touch with reality.
The complaints of those who decry the AI Act as a brake on innovation also lack concrete arguments, because the AI law could ultimately allow even more than before. After all, artificial intelligence was already subject to certain laws and rules. The AI Act should therefore be thought through consistently, as a process that is at its beginning rather than at its end.
Also interesting:
- Go with the FlowGPT, or: How far can AI go in art?
- AI as a judge: The advantages and disadvantages of artificial intelligence in the judiciary
- Apple brings AI to your iPhone: What is Apple Intelligence?
- Study: AI can deceive and cheat people
The article “Lots of ambiguity: AI Act comes into force – and doesn’t change anything at all” by Fabian Peters first appeared on BASIC thinking.
As a tech industry expert, my initial thoughts on the AI Act coming into force without creating any immediate changes are mixed. On one hand, it is understandable that such a complex and rapidly evolving technology as AI would require time for implementation and adaptation. On the other hand, it is concerning that there seems to be a lack of clarity and direction in how the AI Act will be enforced and what impact it will have on the industry.
It is crucial for policymakers to work closely with industry experts to ensure that the AI Act is both effective and feasible. Ambiguity in regulations can lead to confusion and hinder innovation, which is the last thing we want in a field as dynamic and competitive as technology.
I believe that clear and transparent guidelines are essential for the successful implementation of the AI Act. This will not only help protect consumers and promote ethical AI practices but also provide a level playing field for companies to compete in. As the tech industry continues to grow and evolve, it is important for regulations to keep pace and adapt to new technologies and challenges. I hope that the AI Act will eventually lead to positive changes and advancements in the industry, but for now, it is important for all stakeholders to work together to address any ambiguity and ensure a smooth transition.
Credits