Joanna J. Bryson, Weizenbaum Journal of the Digital Society, 3(3), 2023.
Joseph Weizenbaum is famous for quitting AI after his secretary thought his chatbot, Eliza, understood her. But his ethical concerns went well beyond that, and covered not only the potential abuse of intelligent systems but also culpable failure to use them. Abuse includes making inhuman cruelty and acts of war more emotionally accessible to human operators, or simply solving the problems necessary to build nuclear weapons; negligent lack of use includes failing to address the social problems of inequality and resource distribution. I was honoured to be asked, for the Weizenbaum centenary, to explain how the EU's new digital regulations address his concerns. I first discuss whether Europe has the legitimacy or capacity to do so, and then (concluding it might) I describe how the Digital Services Act and the General Data Protection Regulation mostly do, though I also spare some words for the Digital Markets Act (which addresses inequality) and the AI Act, which in theory helps by labelling all AI as AI. But Weizenbaum's secretary knew Eliza was a chatbot, so the GDPR's and the DSA's transparency provisions may matter more than such labelling.