Posts in 2023
Human Experience and AI Regulation: What European Union Law Brings to Digital Technology Ethics

Joanna J. Bryson, Weizenbaum Journal of the Digital Society, 3(3), 2023. 

Joseph Weizenbaum is famous for quitting AI after his secretary thought his chatbot, Eliza, understood her. But his ethical concerns went well beyond that, covering not only the potential abuse of intelligent systems but also culpable failure to use them. Abuse includes making inhuman cruelty and acts of war more emotionally accessible to human operators, or simply solving the problems necessary to build nuclear weapons; negligent lack of use includes failing to address the social problems of inequality and resource distribution. I was honoured to be asked, for the Weizenbaum centenary, to explain how the EU’s new digital regulations address his concerns. I first discuss whether Europe has the legitimacy or capacity to do so, and then (concluding it might) describe how the Digital Services Act and the General Data Protection Regulation mostly do, though I also spare some words for the Digital Markets Act (which addresses inequality) and the AI Act, which in theory helps by labelling all AI as AI. But Weizenbaum’s secretary knew Eliza was a chatbot, so the GDPR’s and DSA’s transparency provisions may matter more than such labelling.

Going Nuclear? Precedents and Options for the Transnational Governance of AI (pdf)

David Backovsky and Joanna Bryson, Horizons: Journal of International Relations and Sustainable Development, Issue 24, a special issue on AI and AI governance, Summer 2023.

We argue that the current global governance regime for AI is deeply dysfunctional. GPAI, the OECD, UNESCO, and the ITU form a set of competing actors that do not presently provide the legitimacy and centralization that would best serve the global governance of AI. But we do have a history of technological governance to learn from, and we know the basic structure an organization capable of governing an emerging technology can take. Although nuclear and AI safeguards will never be completely comparable, many core lessons from the International Atomic Energy Agency (IAEA) are straightforwardly applicable. We need a centralized agency with political and technical capacity, internal expertise, and the right balance between accountability and political autonomy.

Do We Collaborate With What We Design?

Katie D. Evans, Scott A. Robbins, and Joanna J. Bryson, Topics in Cognitive Science, 2023. 

In this paper, we critically assess both the accuracy and desirability of using the term “collaboration” to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human–machine interaction, one which features not two equivalently autonomous agents but rather one machine that exists in a relationship of heteronomy to one or more human agents. In this sense, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. Finally, we consider the motivations and risks inherent in the continued use of the term “collaboration,” exploring its strained relation to the concept of transparency and its consequences for the future of work.

The European Parliament’s AI Regulation: Should We Call It Progress?

Meeri Haataja and Joanna J. Bryson, Amicus Curiae, Series 2, 4(3), 707-718, Spring 2023.

We describe the outcomes of the first round of legislative action by one of the EU’s two legislative bodies, the European Parliament, in modifying the Artificial Intelligence Act. The Parliament has introduced a number of changes we consider enormously important, some in a very good way and some in a very bad way. At stake is whether the AI Act really brings the power of product law to bear, continuously scaling improved practice across products with intelligent components in the EU, or whether the law becomes window-dressing aimed only at attacking a few elite actors post hoc. We describe the EU process, the changes, and our recommendations.

Spamming the Regulator: Exploring a New Lobbying Strategy in EU Competition Procedures (pdf)

Marlene Jugl, William A.M. Pagel, Maria Camila Garcia Jimenez, Jean Pierre Salendres, William Lowe, Joanna J. Bryson, and Helena Malikova, Journal of Antitrust Enforcement, April 2023.

We document and examine a novel lobbying strategy in the context of competition regulation, one that exploits the regulator’s finite administrative capacity. Companies whose merger cases are under scrutiny by the European Commission’s Directorate General for Competition appear to be employing a strategy of ‘spamming the regulator’ through the strategic and cumulative submission of economic expert assessments. The resulting procedural pressure may lead to an undeservedly favorable assessment of the merger. Based on quantitative and qualitative analyses of an original dataset of all complex EU merger cases from 2005 to 2020, we present evidence of this new strategy and of a possible learning process among private actors. We suggest remedies to ensure regulatory effectiveness in the face of this novel strategy.
