Digital Economy
Towards Robust AI Governance in Europe
cepAdhoc
"AI is fundamentally changing the economy and society. A robust regulatory framework is therefore essential. Our technical recommendations aim to make EU regulation of AI future-proof, proportionate, and innovation-friendly," says Anselm Küsters, cep digital expert. The study welcomes the general principles of the AI Code of Practice but stresses that key aspects need to be fleshed out in greater detail to foster innovation and to address the specific challenges faced by small and medium-sized enterprises and open-source developers.
For example, the guidelines should be simplified to make it easier for companies to comply with AI rules in the future, and they should highlight practical examples. In particular, open-source providers should not be disadvantaged by excessive reporting requirements, as they often play a key role in democratising AI technologies. Beyond the existing risk categories, the taxonomy should also take into account cascading and spillover effects of AI systems. Another focus of the cep's recommendations is continuous risk assessment over the entire lifecycle of an AI model. This should be done not only at the technical level but also through the systematic collection of data on human-AI interactions, while protecting the privacy of users. Finally, as AI models increasingly become fundamental platforms for interaction on the internet, the Code must include rules to prevent information bias.
According to Küsters, the first draft of the AI Code of Practice provides a solid basis but needs to be sharpened in many areas. To this end, the cep, as an official member of Working Group 2, which deals with the risk taxonomy, submitted written comments during the plenary session on the Code of Practice. Three further rounds of drafting are planned by April 2025, and the final AI Code of Practice is expected in May 2025.
Download PDF
Towards Robust AI Governance in Europe (published 28 November 2024), PDF, 754 KB