Regulatory divergences in the draft AI Act: Differences in public and private sector obligations
We find that tools used by law enforcement to detect deep fakes are classified as high-risk, while deep fakes themselves fall into the low-risk category. This is a peculiar divergence, apparently grounded in the assumption that deep fakes (for the time being employed mostly by private actors) pose fewer risks than AI systems in the hands of a public actor for the purpose of detecting them.
This study identifies and examines sources of regulatory divergence within the draft AI Act regarding the obligations on, and limitations of, public and private sector actors when using certain AI systems. It reflects on possible impacts and consequences, and suggests a range of policy options for the European Parliament that could respond to the identified sources of divergence. The study focuses on three AI application areas: manipulative AI, social scoring and biometric AI systems. It describes how and when those systems are designated as prohibited or high-risk, the potentially diverging obligations imposed on public versus private sector actors, and the rationale behind them.
Publication type:
policy brief
European Parliament / Panel for the Future of Science and Technology (STOA)