I have a number of papers on both consciousness and emotions, and have built AI models of both. This page documents that work and how it relates to ethics.

I got involved in this branch of research because so many people think that consciousness or emotions have something to do with what makes an entity a moral patient — something that we have a moral duty to protect. Historically that may have been true, though very few religions or societies extend moral patiency to all the species of insects or even mammals that experience some form of consciousness or emotion. But AI changes that. 

If you are willing to say that (some) robots have hands, feet, or heads, then you should certainly be willing to say that (some) robots have consciousness or emotions. These are no more like our consciousness or emotions than robot feet and heads are like ours. For example, in a robot, having explicit awareness of your surroundings is not associated with being irreplaceable (a robot’s whole “mind” could and probably should be backed up) or with suffering (it’s probably impossible to design and build the kind of systemic aversion animals experience when they suffer, and even if we could build it, we probably shouldn’t).
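To make the replaceability point concrete, here is a minimal sketch (in Python, with entirely hypothetical state fields) of what backing up a robot’s “mind” amounts to: if everything that constitutes it is ordinary data, it can be copied and restored without loss.

```python
import json

# Hypothetical agent state: everything that constitutes the robot's
# "mind" is ordinary data, so it can be serialized, copied, restored.
agent_state = {
    "weights_checkpoint": "policy-v7.bin",   # learned behaviour
    "episodic_memory": ["saw charger at dock", "hallway blocked"],
    "goals": ["patrol", "recharge"],
}

# Back up the whole "mind"...
with open("mind_backup.json", "w") as f:
    json.dump(agent_state, f)

# ...and restore it, perhaps into a fresh robot body. Nothing is lost,
# which is one reason losing a robot is not like losing an animal.
with open("mind_backup.json") as f:
    restored = json.load(f)

assert restored == agent_state
```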

Key Publications Overviewing Ethics:


AI Outputs Are Like Dreams

Dreams are your brain running stimulation across bits of your memory. There are lots of theories about why our brains do this. Dreaming may help you explore new ways to handle problems you’ve seen, or help you consolidate individual memories of events into useful general principles: plans for intelligent behaviour. But for the moment, all I’m trying to draw attention to is that dreams can briefly look enough like real life to fool you into thinking you’re awake, yet the next minute they can turn a corner into total craziness. Why? Dreams look like real life because they are built out of memories of real life; they use the neural structures you’ve built up for understanding real life. But they are incoherent because they are not being driven by actual perceptions of the world.

When we use machine learning to program some part of AI — say a large language model (LLM) like ChatGPT — we are basically reusing computation humans have already done and recorded in text on the Internet. So of course the output of AI is about our experience and looks like our experience. That’s also why you can type a few words into a search engine, and generally get the document you want. It’s not because AI shares our experience of the world. It’s because the people who build generative AI do it by building giant indices into records of our human experience. Whatever you look up with such an index, of course you get back some record of human experience. But that doesn’t mean that an AI system has anything like a human experience itself. It just contains records of our experiences, and it can give parts of those back when we ask. (This also explains why AI has biases.)
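As a toy illustration of the index idea (my own sketch, nothing like a production system), here is retrieval over a tiny store of human-written text; whatever you look up, you get back something a person recorded:

```python
# Toy "index into records of human experience". Real search engines
# and LLMs are vastly more sophisticated, but the principle stands:
# everything returned was put there by people.
documents = [
    "how to fix a flat bicycle tyre",
    "recipe for lentil soup",
    "train timetable from Berlin to Hamburg",
]

def search(query: str) -> str:
    """Return the stored document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

print(search("fix my bicycle tyre"))  # -> "how to fix a flat bicycle tyre"
```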

Thinking AI needs rights (in itself) is like thinking dreams are always true. AI may contain records we need to protect, and dreams may contain facts you observed before but hadn’t really noticed yet. But that doesn’t mean dreams are a supernatural means to generate new truths, nor that AI is a moral being having human-like experiences.


Consciousness

Human minds include implicit aspects over which we have little conscious control, and explicit aspects which accompany the parts of our control we can talk about, and for which we are held responsible. For a long time, AI systems were much more analogous to explicit, conscious control than to unconscious learning and reactions, but machine learning has reversed that for many of the systems most people interact with every day, like LLMs or search. None of that makes AI a person.
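The implicit/explicit distinction shows up in AI architectures themselves. Here is a minimal sketch (my own illustration, with hypothetical functions) contrasting an explicit rule we can read and audit with an implicit learned mapping nobody authored directly:

```python
# "Explicit" control: the behaviour is stated in rules we can inspect,
# talk about, and be held responsible for.
def explicit_route(battery_level: float) -> str:
    if battery_level < 0.2:
        return "go to charger"
    return "continue patrol"

# "Implicit" control: the behaviour is encoded in learned weights.
# (Hand-set here as a stand-in for values a training run would produce.)
weights = [0.3, -1.7]

def implicit_route(battery_level: float) -> str:
    score = weights[0] + weights[1] * battery_level
    return "go to charger" if score > 0 else "continue patrol"

print(explicit_route(0.1), "|", implicit_route(0.1))
```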

Key Related Publications:


Emotions and Affect

As with consciousness, there are many, many systems for simulating emotions in AI, yet none of them changes the moral status of an AI system. A designer can add or remove them at will, which is why I say artificial suffering in particular isn’t really a coherent notion. Suffering isn’t just an emotional experience; it’s a way biology makes you necessarily attend to particular problems. Anaesthetics can reduce that for a while, but nothing can simply take it away. An aversive signal a designer can delete has no such necessity.
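A minimal sketch (a hypothetical class, not one of my published affect models) of why designed affect differs from biological suffering: here the aversive signal is just a component the designer can switch off entirely.

```python
# Hypothetical robot with "affect" as a removable module. The point is
# not that this is a good emotion model, but that the designer can
# delete the aversive signal, unlike the systemic aversion of biology.
class Robot:
    def __init__(self, use_affect: bool = True):
        self.use_affect = use_affect
        self.damage = 0.0

    def aversion(self) -> float:
        """A designed stand-in for suffering: rises with damage."""
        if not self.use_affect:
            return 0.0          # the builder simply switched it off
        return min(1.0, self.damage)

    def act(self) -> str:
        return "withdraw" if self.aversion() > 0.5 else "proceed"

r = Robot(use_affect=False)
r.damage = 0.9
print(r.act())  # -> "proceed": the aversive signal was never mandatory
```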

Key Related Publications: