Posts tagged Regulation and Digital Governance
Human Experience and AI Regulation: What European Union Law Brings to Digital Technology Ethics

Joanna J. Bryson, Weizenbaum Journal of the Digital Society, 3(3), 2023. 

Joseph Weizenbaum is famous for quitting AI after his secretary thought his chatbot, Eliza, understood her. But his ethical concerns went well beyond that, and concerned not only potential abuse but also culpable lack of use of intelligent systems. Abuse includes making inhuman cruelty and acts of war more emotionally accessible to human operators, or simply solving the problems necessary to make nuclear weapons; negligent lack of use includes failing to solve the social issues of inequality and resource distribution. I was honoured to be asked, for the Weizenbaum centenary, to explain how the EU's new digital regulations address his concerns. I first discuss whether Europe has the legitimacy or capacity to do so, and then (concluding it might) I describe how the Digital Services Act and the General Data Protection Regulation mostly do so, though I also spare some words for the Digital Markets Act (which addresses inequality) and the AI Act, which in theory helps by labelling all AI as AI. But Weizenbaum's secretary knew Eliza was a chatbot, so the GDPR's and the DSA's lines about transparency might be more important than such labelling.

Going Nuclear? Precedents and Options for the Transnational Governance of AI (pdf)

David Backovsky and Joanna Bryson, Horizons: Journal of International Relations and Sustainable Development, Issue 24, a special issue on AI and AI governance, Summer 2023.

We argue that the current global governance regime for AI is deeply dysfunctional. GPAI, the OECD, UNESCO, and the ITU form a set of competing actors that do not presently provide the legitimacy and centralization that would best serve global governance of AI. But we do have a history of technological governance to learn from, and we know the basic structure of an organization that can govern an emerging technology. Although nuclear and AI safeguards will never be completely comparable, many core lessons from the International Atomic Energy Agency (IAEA) are straightforwardly applicable. We need a centralized agency with political and technical capacity, internal expertise, and the right balance between accountability and political autonomy.

The European Parliament’s AI Regulation: Should We Call It Progress?

Meeri Haataja and Joanna J. Bryson, Amicus Curiae, Series 2, 4(3), 707-718, Spring 2023.

We here describe the outcomes of the first round of legislative action by one of the EU's two legislative bodies, the European Parliament, in modifying the Artificial Intelligence Act. The Parliament has introduced a number of changes we consider enormously important, some in a very good way and some in a very bad way. At stake is whether the AI Act really brings the power and strength of product law to continuously scale improved practice for products with intelligent components in the EU, or whether the law becomes window-dressing aimed only at attacking a few elite actors post hoc. We describe the EU process, the changes, and our recommendations.

Spamming the Regulator: Exploring a New Lobbying Strategy in EU Competition Procedures (pdf)

Marlene Jugl, William A.M. Pagel, Maria Camila Garcia Jimenez, Jean Pierre Salendres, William Lowe, Joanna J. Bryson, and Helena Malikova, Journal of Antitrust Enforcement, April 2023.

We document and examine a novel lobbying strategy in the context of competition regulation, a strategy that exploits the regulator’s finite administrative capacities. Companies with merger cases under scrutiny by the European Commission’s Directorate General for Competition appear to be employing a strategy of ‘spamming the regulator,’ through the strategic and cumulative submission of economic expert assessments. Procedural pressures may result in an undeservedly favorable assessment of the merger. Based on quantitative and qualitative analyses of an original dataset of all complex merger cases in the EU 2005–2020, we present evidence of this new strategy and a possible learning process among private actors. We suggest remedies to ensure regulatory effectiveness in the face of this novel strategy.

Belgian and Flemish Policy Makers’ Guide to AI Regulation

Joanna J. Bryson, KCDS-CiTiP Fellow Lectures Series: Towards an AI Regulator?, October 11, 2022.

The regulation of AI is of pressing national and international concern, yet the debate is often distracted by arguments over definitions and by myths about the relevance of opacity to regulation. All software, and indeed all technological means of automating aspects of human industry and behaviour, are products of human action, and as such their production can be regulated to ensure sufficient transparency to hold their developers and operators accountable for mishaps. Indeed, the processes necessary to ensure such transparency, including process audits, will reduce harms by encouraging compliance with ever-increasing standards of best practice. In this paper, I discuss the social consequences of AI and digital technology, and both the social and industrial benefits of coordinating their production through good governance.

Transnational Digital Governance and Its Impact on Artificial Intelligence

Mark Dempsey, Keegan McBride, Meeri Haataja, and Joanna J. Bryson, Handbook of AI Governance, May 2022.

This chapter explores the extant governance of AI and, in particular, what is arguably the most successful AI regulatory approach to date, that of the European Union. The chapter explores core definitional concepts, shared understandings, values, and approaches currently in play. It argues that not only are the Union’s regulations locally effective, but, due to the so-called “Brussels effect,” regulatory initiatives within the European Union also have a much broader global impact. As such, they warrant close consideration. Open access version.

Reflections on the EU’s AI Act and How We Could Make It Even Better (pdf)

Meeri Haataja and Joanna J. Bryson, CPI TechREG Chronicle, March 2022.

Meeri Haataja and I wrote two papers (really, originally one long one) to inform the writing of the EU's AI Act (AIA). Because of the importance of getting the material out to policy makers while they were still writing, we published in what was essentially a newsletter, which promised to publish the second part, a supplement about the costs, in the next issue, but then didn't, so for now it is only available as a preprint. See What Costs Should We Expect From the EU's AI Act?, SocArXiv, 2021.

The main thrust of this article is that there is a lot of good work in the AIA that some people with vested interests are unjustly attacking, but there are also a few things that can be improved. This may be of interest even if you don't care about EU law, if you are trying to regulate AI or digital technology in your own country. See also my Wired article expanding on one aspect: the definition of AI used in the AIA, and how that relates to the purpose of AI regulation.

Is There an AI Cold War? (pdf)

Joanna J. Bryson and Helena Malikova, Global Perspectives, 2(1), 2021.

Regulation is a means societies use to create the stability, public goods, and infrastructure they need to thrive securely. This policy brief is intended both to document and to address claims of a new AI cold war: a binary competition between the United States and China that is too important for other powers either to ignore or to truly participate in directly, beyond taking sides.

Of, For, and By the People: The Legal Lacuna of Synthetic Persons

Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, Artificial Intelligence and Law, 25(3):273–291, Sep 2017.

Two professors of law and I argue that it would be a terrible, terrible idea to make something that is strictly AI (in contrast to an organisation that also contains humans) a legal person. In fact, the only good thing about the idea is that it gives us a chance to think about where legal personhood has already been overextended (we give examples). “Gold” open access, not because I think it's right to make universities or academics pay to do their work, but because Bath has some deal with Springer / has already been coerced into paying. Notice that you can read all my papers below going back to 1993 (when I started academia); I don't think “green” open access is part of the war on science.

Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems

Joanna J. Bryson and Alan F.T. Winfield, IEEE Computer, 50(5):116-119, 2017.

What do you do when technology like AI changes faster than the law can keep up? One option is to have the law enforce standards maintained by professional organisations, which are hopefully more agile and better informed. Invited commentary. Open access version, authors' final copy (pdf).
