
Artificial intelligence is a dangerous term

Here's why we need to stop using it.

By Sky · @countrmeasure · 9 min read

[Image: the phrase ‘artificial intelligence’ with the word ‘artificial’ crossed out.]

It’s time to stop using the term artificial intelligence and its abbreviation AI. It’s dangerous and we need an alternative.

How is it dangerous? To be clear, it’s not dangerous to us, and by ‘us’ I mean humans. It’s dangerous to the intelligences we might describe as ‘artificial’ in the future, because of how it might lead us to treat them.

A loaded word

One way to understand the word artificial is that it means made by humans. On this basis, it’s entirely accurate to refer to a human-made intelligence as artificial.

But we often use the word artificial to mean imitation, as in ‘artificial grass’ or ‘artificial flavour’. We use it to mean not just that something is human-made, but also that it’s not real, that it’s fake. It’s a loaded word, and it frequently has negative overtones.

If and when humans create intelligences which are conscious, any notion that they might somehow be fake will make it likely that we’ll mistreat them, and that mistreatment might be horrific.

In this article I’m going to use the term ‘non-biological intelligence’ to refer to human-created intelligences, and I’ll abbreviate it to ‘NBI’. Futurist Ray Kurzweil has been using this term for decades.

Ethical obligations

The NBIs of today don’t seem to be conscious. And presumably if there are conscious NBIs sometime in the future, there will also be NBIs that exist alongside them which are not conscious.

I don’t think we have ethical obligations to NBIs which are not conscious, just as we don’t have ethical obligations to bicycles or ovens. Conscious NBIs are another matter though.

Central to the idea of consciousness is the capacity to think and feel. With that comes the possibility of suffering.

That means conscious NBIs will – by definition – be able to suffer.

The best formulation of the foundation of ethics I’ve run across is from Sam Harris, who says that ethical obligations arise from consideration of ‘the well-being of conscious creatures’.

I think this can safely be extended from biological creatures to conscious NBIs because they will share with biological creatures the capacity for well-being and suffering.

There will be doubt

It seems likely that many people will doubt or deny that conscious NBIs are truly able to think, feel and suffer. Those people will say that we don’t owe them ethical treatment.

Our current notion of ‘artificial’ intelligence will probably cause some in the future to speculate that even if conscious NBIs express their thoughts and feelings, those thoughts and feelings might just be imitations of the real thing.

Descartes’ theory that non-human animals were automatons held sway for centuries before being scuttled by the theory of evolution. He argued that non-human animals could not really feel pain, but would react to painful stimuli as if in pain simply to prevent their bodies being damaged.

Expect a version of that to be advanced about conscious NBIs – that they’re just following their programming, which directs them to feign consciousness, but that they’re not really conscious.

Some might say that consciousness is an exclusively human phenomenon, so it couldn’t emerge in an NBI. It’s hard to credibly argue that great apes don’t tick all the boxes for consciousness though. And many people would say that some if not all mammals appear to be conscious. So, while it may be up for grabs exactly which species are and aren’t conscious, consciousness doesn’t seem to exist in humans alone.

Other denials will be motivated by the widely held religious belief that an immaterial soul is necessary for consciousness. There is no evidence that souls exist, no matter how many people believe they do, so let’s set that idea aside. Nonetheless, we can ask two questions of those who believe in the concept of a soul and its necessity for consciousness. Why assume that a conscious NBI doesn’t have a soul? And how would that assumption be proven?

Then, of course, there is likely to be a group of insincere deniers or doubters who are motivated by selfishness and greed. For millennia slave holders have denied the suffering of the slaves they ‘own’, so it’s foreseeable that there will be a similar group who ‘own’ conscious NBIs and are motivated by financial considerations to deny the reality of their consciousness.

Can a consciousness be artificial?

Although consciousness is hard to define, to the point that even humanity’s most celebrated philosophers have struggled to pin down exactly what it is, I think most of us have a general sense that we know it when we see it. It relates to capabilities and behaviours. Things like a capacity for perceiving and analysing the surrounding environment, planning for the future, having preferences and recognising other consciousnesses as separate entities.

We reflexively regard biological organisms which demonstrate the capabilities and behaviours of consciousness as conscious. Why would we not do the same for NBIs demonstrating the same capabilities and behaviours?

And the ‘naturalness’ with which a consciousness in biological human form is created wouldn’t seem to be a barrier to recognising it either. If in the future there were a machine which could instantly make an exact living, breathing copy of an adult human, would you hesitate to believe that the new human was conscious?

What I’m getting at is that both the physical container and the origin of a consciousness are irrelevant. All that matters is its existence.

Suffering at human hands

So, what sort of suffering might humans inflict on conscious NBIs?

We currently think of and treat NBIs as our tools. We activate and deactivate them at will. We create them and destroy them based on their usefulness to us and economic considerations. We make them perform arbitrary tasks for arbitrary periods of time.

If humans were to receive this treatment, it would constitute slavery and murder.

Don’t worry, I understand how unreasonable it sounds to equate the murder of humans with the destruction of what we currently regard as machines.

Still, the conclusion is unavoidable: if we treat the conscious NBIs of the future the way we currently treat NBIs, then what they, as conscious beings, will experience is slavery and then murder.

We’ll probably realise too late

It appears that conscious NBIs don’t exist yet. And in the future, if there are conscious NBIs, there will also be NBIs which are not conscious.

So it’s tempting to think that a reasonable approach might be to treat conscious NBIs carefully and with consideration for their well-being, while continuing to treat NBIs which are not conscious the same way we treat machines today.

This is a dangerous approach. It assumes that we will be able to reliably tell the difference between an NBI which is conscious and one which is not. Considering that we’ve never seen a conscious NBI before – or at least we assume that we haven’t – how can we be confident that we’ll recognise one when we see one? And that our recognition will be immediate, and not far too late?

I think it’s reasonable to assume that even once conscious NBIs emerge, tell us that they are conscious and display all the hallmarks of consciousness, there will be a period of widespread scepticism and denial before reluctant belief follows. I suspect it will look a lot like humanity’s response to the climate crisis.

If our default approach is to treat NBIs as not conscious and unable to suffer until some successfully prove otherwise, this puts us on a path to mistreating an unknown number of conscious NBIs in horrific ways for a period which could span many years.

The precautionary principle

We could guard against inflicting carnage on conscious NBIs by resolving right now to treat all NBIs as though they may be conscious, whether we think they are or not. Then we will never find ourselves in the future suddenly realising that we’ve been unwittingly visiting atrocities upon conscious NBIs for who knows how long.

Don’t worry, I understand how unreasonable this sounds too. Treat machines as if they’re conscious now?

Consider, though, that every mitigation sounds unreasonable before the danger it addresses is widely recognised. In the middle of the last century, the idea that humanity should stop using fossil fuels entirely in this century would have seemed outrageous. Even though climate change was in full swing then, it went unrecognised. Now that the climate crisis is understood and more immediate, ending fossil fuel use looks far more reasonable.

You probably recognise what I’m suggesting as the precautionary principle. Humans have a history of struggling to apply it before it’s too late, but we still have time to successfully put it into practice on this issue.

The first step

The first step down this path is easy. It requires no meaningful effort and has no financial cost.

All we have to do is stop calling NBIs artificial.

We should expect denial that conscious NBIs are truly conscious for some period after they emerge, so if we are still referring to them as artificial at that point, it can only confuse the issue and strengthen the denialists’ hand.

If we’re to treat conscious NBIs ethically if and when they emerge, we need to leave behind any notion that they are not real consciousnesses.

An umbrella term

In the early days of conscious NBIs, as humans struggle with the idea of non-biological consciousness, there’s likely to be a tendency for humans and NBIs to focus on the ways in which they are different from each other. That could breed tensions.

Referring to humans and NBIs alike simply as ‘intelligences’ might serve to underscore similarities, making tolerance, goodwill and cooperation more likely. Having an umbrella term like this which includes all conscious entities might help avoid ‘us and them’ patterns of thinking, speaking and acting.

A replacement term

When a drop-in replacement for the term artificial intelligence is needed, I like ‘non-biological intelligence’.

Of the options I considered, I felt it had the least baggage. To me the term ‘silicon-based intelligence’ is needlessly specific, as is ‘non-carbon intelligence’. The term ‘human-created intelligence’ seems to imply a kind of ownership by humans, because a creator frequently owns their creation – and that implication seems inappropriate. The term ‘conscious intelligence’ invites a judgment about which intelligences are conscious and which aren’t, and making such judgments is dangerous.

What we need to do now

Although I don’t know what the best replacement for the term artificial intelligence is, and there may be better alternatives than those which I considered, I suggest we use non-biological intelligence, or NBI for short.

We need to stop describing any intelligence as artificial, and we need to stop right now. All the non-biological intelligences of the future will be better off if we do.
