Information Technology
In Search of “Laws of Robotics”
"Just like cars, which are equipped with brakes and airbags before being launched on the market, safety precautions must also be built into AI systems before we release them into everyday life," says cep digital expert Anselm Küsters, who authored the study with Manuel Wörsdörfer, professor of computer ethics at the University of Maine in the US. The authors call for quasi-constitutional regulation to prevent abuse. The concept of constitutional AI involves embedding ethical principles directly into AI functionalities. "Not all technology developers are pursuing the same goals," warns Küsters. "While some players, such as Elon Musk with xAI, are striving for AI without boundaries, we believe that rules and boundaries are also essential for AI models."
The model of "ordoliberalism 2.0" provides a framework for developing such constitutional rules. This approach calls for stable, forward-looking rules on the data economy, protection against misuse, transparency of decisions, and the protection of marginalised groups. Political action is also needed: citizen participation should ensure that the framework for AI systems is socially legitimised. "AI will affect the lives of millions of people, so the rules for its use must be determined by society," says Wörsdörfer.
"AI can get us to our destination faster and more efficiently, but without the right safeguards, it puts us at risk," says Küsters. "We need to act now to ensure that this engine runs safely, ethically, and in line with democratic values in the future." Technically, this can be ensured through so-called system prompts or methods such as non-fine-tunable learning.