Technology policy and digital governance are now my dominant areas of research. Given that I have worked both on improving AI and on understanding human society, I have long felt obliged to work on AI ethics and policy whenever presented with the opportunity. It has become clear that humans have a lot of trouble understanding AI: partly because AI is built without sufficient concern for accountability, partly because humans over-identify with the computational aspects of our intelligence and so raise a psychological block to AI transparency, and perhaps mostly because we just don’t understand ourselves, including our own ethics. Much of my group’s ethics work has to date concerned systems engineering and devops – making the use of AI more transparent and accountable. We also work on what makes AI comprehensible to ordinary users, and we do both experimental and theoretical psychology to understand what leads people to over-identify with AI. Of course, now at the Hertie School and its Centre for Digital Governance, I am working primarily on understanding the impacts of digital technology more generally on human lives, including our work, our security, our society, and our self-image.
Artificial Intelligence (AI) and robots often seem like fun science fiction, but in fact they already affect our daily lives. For example, services like Google and Amazon help us find what we want by using AI. Every aspect of how Facebook works is based on AI and Machine Learning (ML). The reason your phone is so useful is that it is full of AI — sensing, acting, and learning about you. These tools not only make us smarter; their intelligence is also based partly on what they learn from us and about us when we use them. So we make the tools smarter too.
The fact is, robots completely belong to us. We author AI, we don’t give birth to it. People, governments and companies build, own and program robots. Whoever owns and operates a robot is responsible for what it does.
Many people worry about the wrong things when they worry about AI. I hope I can help us worry about the right things.
Why Build AI?
If some people think robots might take over the world, or if machines really are learning to predict everything we do, or even if a president might try to blame a robot for the president’s own bad military decisions, then why would anyone work on advancing AI at all? My personal reason for building AI is simple: I want to help people think.
Our society faces many hard problems, like finding ways to work together yet maintain our diversity. Figuring out how to avoid war, and ending the wars that have already started. Learning to live truly sustainably — so our children consume no more space and time than our parents, and no more other resources than can be replaced in a lifetime — and doing all that while still protecting human rights, human dignity, and human flourishing. These problems are so hard they might actually be impossible to solve. But building and using AI is one way we might figure out some answers. If we have tools that help us think, they might make us smarter. And if we have tools that help us understand how we think, that might help us find ways to be happier, and to treat each other and everything else in our world better.
Of course, all knowledge and tools, including AI, can be used for good or for bad. This is why it’s important to think about what AI is, and how we want it to be used. This page is designed to help people (including me) think about the ethics of AI research.
To start out with the basics: here's a Definition of Artificial Intelligence I coauthored with Jeremy Wyatt for the Children’s Britannica. And here is an interview where an American high school student asks me about studying AI.
Do we need rights for robots? No.
My students and I are among the many researchers who work on building artificial consciousness and synthetic emotions. These aren’t any more magic or deserving of ethical obligation than artificial hands or legs. In humans, consciousness and emotions are associated with our morality, but that is because of our evolutionary and cultural history. In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings. This is one of the reasons we shouldn’t make AI responsible: we can’t punish it in a meaningful way, because good AI systems are designed to be modular, so the “pain” of punishment could always be excised, unlike in nature.
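To make the modularity point concrete, here is a minimal, purely illustrative sketch (the class and module names are hypothetical, not taken from any real system): an engineered “pain” signal is just one component a designer chose to include, so it can be removed without otherwise changing the agent, which is why punishment cannot bind an artefact the way it binds an evolved animal.

```python
# Hypothetical sketch of a modular agent: any "aversive" signal is just
# one plug-in module among others, and can be excised at will.

class PerceptionModule:
    def sense(self, world):
        return {"obstacle_ahead": world.get("obstacle", False)}

class AversiveSignalModule:
    """Stand-in for engineered 'pain': penalises certain percepts."""
    def evaluate(self, percept):
        return -10.0 if percept["obstacle_ahead"] else 0.0

class ModularAgent:
    def __init__(self, modules):
        self.modules = dict(modules)  # components are plug-ins, not essence

    def step(self, world):
        percept = self.modules["perception"].sense(world)
        # If the aversive module has been excised, nothing is "felt" at all.
        aversion = self.modules.get("aversion")
        penalty = aversion.evaluate(percept) if aversion else 0.0
        return percept, penalty

agent = ModularAgent({"perception": PerceptionModule(),
                      "aversion": AversiveSignalModule()})
print(agent.step({"obstacle": True}))   # penalty applied

del agent.modules["aversion"]           # the capacity to be "punished" removed
print(agent.step({"obstacle": True}))   # same pipeline, no penalty, no suffering
```

Nothing in the sketch changes when the aversive module disappears except the number it reports, which is exactly why “punishing” such a system carries no moral weight.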
Why We Shouldn't Fear AI or Robots – Machines Aren’t People (or Even Apes)
As I said, I think most people are worrying about the wrong things when they worry about robots and AI. First, here are some reasons not to worry.
1) AI has the same ethical problems as other, conventional artifacts.
In the mid-1990s I attended a number of talks that made me realize that some people really expected AI to replace humans. Some people were excited about this, and some were afraid. Some of these people were well-known scientists. Nevertheless, it seemed to me that they were all making a very basic mistake. They were afraid that whatever was smartest would “win,” somehow. But we already have calculators and phones that can do math better than us, and they don’t even take over our pockets, let alone the world.
My friend Phil Kime agreed with me, and added that he thought the problem was that people didn’t have enough direct, personal experience of AI to really understand whether or not it was human. So we wrote one of my first published papers, Just Another Artifact: Ethics and the Empirical Experience of AI. We argued that realistic experience of AI would help us better judge what it means to be human, and help us get over our over-identification with AI systems. We pointed out that there are ethical issues with AI, but they are all the same issues we have with other artifacts we build and value or rely on, such as fine art or sewage plants. (We first wrote it in 1996, and in 2010 we got back together and rewrote it for the leading international conference in AI. The updated version is called Just an Artifact: Why Machines are Perceived as Moral Agents and is in the proceedings of The Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI '11).)
2) It’s wrong to exploit people’s ignorance and make them think AI is human.
The people who will use and buy AI should know what its risks really are. Unfortunately, it’s easier to get famous and sell robots (or cars) if you go around pretending that your robot really needs to be loved, or otherwise really is human — or superhuman! In 2000, I wrote an article about this called A Proposal for the Humanoid Agent-builders League (HAL) for The Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights at AISB 2000. I proposed creating a league of programmers dedicated to opposing the misuse of AI technology to exploit people’s natural emotional empathy. The slogans would be things like “AI: Art not Soul” or “Robots Won’t Rule.”
In 2000, I didn’t know that the US military might try to give robots ethical obligations, so the whole paper is written with some humor. But as we’ve made better AI, these issues have gotten more serious. Fortunately, academics and other experts are also getting serious. In 2010, I was one of a couple dozen people invited by two United Kingdom research councils to work on making sure robots would fit into British society. We decided to write the Principles of Robotics, the world’s first national-level soft law on AI ethics. So a bunch of the ideas in my HAL paper are now at least informal UK policy. The five principles are:
Robots should not be designed as weapons, except for national security reasons.
Robots should be designed and operated to comply with existing law, including privacy.
Robots are products: as with other products, they should be designed to be safe and secure.
Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
It should be possible to find out who is responsible for any robot.
Notice that the first three correct Asimov’s laws — it’s not the robot that’s responsible.
Full legal versions of the principles and their explanations →
For an account of how they were written, see The Making of the EPSRC Principles of Robotics →
To understand why they were written the way they were (to minimise social disruption and maximise social utility), see The Meaning of the EPSRC Principles of Robotics →
3) Robots will never really be your friends.
In October 2007, I was invited to participate in a workshop called Artificial Companions in Society: Perspectives on the Present and Future at the Oxford Internet Institute. I took the chance to write my third ethics article, Robots Should Be Slaves. In 2010, this (finally) came out as a book chapter in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, edited by Yorick Wilks. The idea is not that we should abuse robots (and of course it isn’t that human slavery was OK!). The idea is that robots, being authored by us, will always be owned—completely. Building a robot is nothing like having a child. Robots are more like novels than children. There’s no chance involved unless you deliberately put the chance into it.
Fortunately, even though we may need robots to have and understand things like artificial emotions, it is perfectly possible not to make them suffer from neglect, a lack of self-actualization, or their low social status in the way a person would. In fact, I’d say it’s an obligation: what right do we have to make a person-like thing that would be owned, not human? Robots are things we build, and so we can pick their goals and behaviours. Both buyers and builders ought to pick those goals responsibly.
People have trouble believing this, so I’ve written about it a bunch of times, trying to clarify it in different ways.
See:
Building Persons is a Choice
I was asked to comment on an article by Anne Foerst called Robots and Theology. Foerst is a theologian; we both worked on the Cog project at the MIT AI Laboratory in the 1990s. Foerst has the interesting perspective that robots are capable of being persons and knowing sin, and as such are a part of the spiritual world. I argue in my commentary that while it is interesting to use robots to reason about what it means to be human, calling them “human” dehumanises real people. Worse, it gives people an excuse to blame robots for their actions, when really anything a robot does is entirely our own responsibility.
Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics
Sometime around 2012 I started working on a paper that was finally published in a journal in 2018. The idea is complicated: it’s that both AI and ethical systems are things we build together, so whether AI is a moral subject (that is, is responsible for itself, or is something we’re responsible to) is a matter of choice, not something scientists can discover. Since it’s easier to author AI than to totally overhaul our ethics, I recommend building AI to minimise human social disruption.
Of, For, and By the People: The Legal Lacuna of Synthetic Persons
In 2016, the European Parliament suggested that AI should sometimes be legally responsible for itself. This is a super bad idea. Two law professors (Mihailis E. Diamantis and Thomas D. Grant) and I dropped everything to explain why. We wanted to make sure the European Commission didn’t require, or even encourage, EU national governments to do this. An organisation that contains humans is sometimes considered “a legal person,” but that only works because real humans are held accountable if the organisation does bad things. In fact, if anything, there is already a problem of people not being held sufficiently to account for organisations. When that happens, the organisation is called a “shell company,” and shell companies are a major source of many forms of corruption, like money laundering. An AI legal person would be the ultimate shell company.
Why We Should Worry About AI Anyway
Being worried about the wrong things doesn’t mean that there’s nothing to worry about. Artificial Intelligence is not as special as many people think, but it is further accelerating an already rapidly accelerating phenomenon that’s been going on for about 10,000 years: human culture. Human culture is changing almost every aspect of life on earth, particularly human society.
4) Human culture is already a superintelligent machine turning the planet into apes, cows, and paper clips.
One of the reasons I object to AI scaremongering is that even where the fears are realistic, such as Nick Bostrom and his colleagues’ description of overwhelming, self-modifying superintelligence, making AI into the bogeyman displaces that fear 30 to 60 years into the future. In fact, AI is here now, and even without AI, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment. Bostrom writes (among other things) about how a future machine intelligence autonomously pursuing a worthwhile goal might incidentally convert the planet into paper clips. We might better think of our current culture itself as the superintelligent but non-cognizant machine — a machine that has learned to support more biomass on the planet than ever before (by mining fossil fuels), but that is changing all that life (at least the large animals) into just a few species (humans, dogs, cats, sheep, goats, and cows). No one ever specifically intended to wipe out the rest of the large animals and other biodiversity on the planet, but we’re doing it. Similarly, no one specifically decided that children weren’t sufficiently monitored by their parents up until the 1990s, but now childhood and parenthood have been entirely transformed in just a few decades. These are just two consequences of our expanding cognition, and AI is very much a part of that.
See:
Artificial Intelligence & Pro-Social Behaviour
An academic book chapter from 2015
Living with AGI
A shorter blogpost with the best part of the chapter (it also talks about "Artificial General Intelligence")
5) Big data + better models = ever-improving prediction, even about individuals.
AI and computer science, particularly machine learning but also HCI, are increasingly able to help out research in the social sciences. Fields that are benefiting include political science, economics, psychology, anthropology and business / marketing.
As science — and commerce, and government — learns more and more, our models of human behaviour get better and better. As our models improve, we need less and less data about any particular individual to predict what they are going to do. So just practising good data hygiene is not enough, even if that were a skill we could teach everyone. My professional opinion is that there is no going back on this, but that isn’t to say society is doomed. What we do matters.
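To make that concrete, here is a toy sketch (entirely synthetic data, hypothetical attribute names, and scikit-learn assumed as a dependency, so this is an illustration rather than anyone’s actual pipeline): once a model has been fit to many other people’s records, it can make a confident guess about a new individual from only a couple of coarse, easily observed attributes.

```python
# Toy sketch: population-scale data lets a model guess about an individual
# who has shared almost nothing. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A pretend population, described by two coarse, easily observed attributes.
age = rng.integers(18, 80, n)
urban = rng.integers(0, 2, n)              # 1 = lives in a city

# A hidden behaviour the modeller cares about (say, buying product X),
# generated so that it correlates with the coarse attributes.
p = 1 / (1 + np.exp(-(0.05 * (45 - age) + 1.2 * urban)))
buys = rng.random(n) < p

model = LogisticRegression().fit(np.column_stack([age, urban]), buys)

# A new individual who has "protected their data": all we know is a rough
# age and that they live in a city. The model guesses anyway.
individual = np.array([[27, 1]])
print(model.predict_proba(individual)[0, 1])
```

The point is not the particular numbers but the asymmetry: the predictive power comes from everyone else’s data, so individual caution cannot undo it, which is why the argument turns to regulating how data are used rather than only how they are collected.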
Think of it this way. We all know that the police, the military, even most of our neighbours could get into our house if they wanted to. But we don’t expect them to do that. And, generally speaking, if anyone does get into our house, we are able to prosecute them legally, and to claim any damages back from insurance. I think our personal data should be like our houses. First of all, we shouldn’t ever be seen as selling our own data, just leasing it for a particular purpose. This is the model software companies already use for their products; we should just apply the same legal reasoning to us humans. Then, if we have any reason to suspect our data has been used in a way we didn’t approve, we should be able to prosecute. That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments, corporations, and even friends.