AI Liability (cepPolicyBrief)
"The presumption of causality is appropriate in view of the complexity and lack of transparency of AI systems. Adapting the rules on the burden of proof puts the injured party on an equal footing with AI providers in court," says cep digital expert Matthias Kullas, who analysed the Commission's proposals.
Kullas welcomes the new direction. "Anyone who wants to claim in court that they have been harmed by artificial intelligence faces particular difficulties in providing evidence. The Commission wants to solve this problem through common minimum standards and thereby strengthen society's trust in AI. This is the right approach," emphasises the cep expert.
The directive also authorises national courts to order, upon request, the disclosure of relevant evidence about a specific high-risk AI system. Whether an AI system counts as high-risk depends largely on its function and purpose and on its potential impact on the health, safety and fundamental rights of natural persons. "The disclosure obligation can help to avoid hopeless lawsuits. However, it interferes disproportionately with the sovereign rights of the Member States, whose legal systems do not recognise such obligations," criticises Kullas.
Kullas also has serious concerns about the directive's legal basis: "It is doubtful whether the directive can be based on Art. 114 TFEU, as no positive effect on the internal market is to be expected. Neither the presumption of causality nor the disclosure obligation has any impact on the marketability of AI products. Nor do they contribute to eliminating appreciable distortions of competition."