

While the draft EU AI Act prohibits harmful ‘subliminal techniques’, it doesn’t define the term – we suggest a broader definition that captures problematic manipulation cases without overburdening regulators or companies, write Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding and Rafael A. Calvo.

Juan Pablo Bermúdez is a Research Associate at Imperial College London. Rune Nyrup is an Associate Professor at Aarhus University. Sebastian Deterding is a Chair in Design Engineering at Imperial College London. Rafael A. Calvo is a Chair in Engineering Design at Imperial College London.

If you ever worried that organisations use AI systems to manipulate you, you are not alone. Many fear that social media feeds, search, recommendation systems, or chatbots can unconsciously affect our emotions, beliefs, or behaviours. The EU’s draft AI Act articulates this concern, mentioning “subliminal techniques” that impair autonomous choice “in ways that people are not consciously aware of, or even if aware not able to control or resist” (Recital 16, EU Council version). Article 5 prohibits systems that use subliminal techniques to modify people’s decisions or actions in ways likely to cause significant harm.

This prohibition could helpfully safeguard users. But as written, it also runs the risk of being inoperable. It all depends on how we define ‘subliminal techniques’ – which the draft Act does not do yet.
