Robots and sentience


Robot. Sentience.  They are two words that, considered on the surface, don’t seem to go together.  After all, a robot is a mechanical creation, generally considered incapable of sentience, or full self-awareness.  We use the word “robot” specifically to imply that the machine cannot have sentience; a robot is a clockwork thing.

When we try to suggest that a mechanical creation has sentience, we tend to immediately rename it.  Cyborg.  Android.  Replicant.  Synthezoid.  We distance ourselves from the word “robot,” and seek to redefine the creation to stand for something beyond its mechanical parts.

Is it because we want to keep the concept of “robots” as simple things?  Or is it because we see sentience as being beyond mechanical creations?  Do we see sentience as requiring some special spark that robots are incapable of?

Sentience, or self-awareness, is a difficult thing to define, describe or prove.  The basic definition, as stated by Merriam-Webster, is “responsive to or conscious of sense impressions.”  But that definition can be ascribed to creatures large and small,  from humans to whales to ants; it’s not enough of a definition for our purposes.  Sometimes we’ll default to some semblance of a soul, or “divine spark”… but, again, we have no real way to define, detect or prove the existence of souls. So that’s hardly useful.

In fact, the only practical definition we have of sentience seems to be: acting independently of, or in spite of, instinctive responses.  If you can equate instinct with robotic programming, then any robot that can act independently of its programming would, by definition, be sentient.

This is still a vague definition, since it’s difficult to pin down instinct, even in humans.  Though we don’t like to admit it, we still have a basic instinct for self-preservation; but our brains can examine an instance where self-preservation is a priority, weigh multiple strategies to achieve self-preservation, and select the optimum action.  How much of that can be considered instinct?  Possibly all of it, right down to recalling a set of Kung-Fu moves we learned from television that might be useful in saving our necks, and applying them to our attacker.  Or maybe the time-honored method of throwing up our hands in surrender, and offering to talk out the problem.

These, and many other actions, are well within the bounds of the “fight or flight” response, which is considered instinctive.  In fact, you can boil down most of our daily actions to their instinctual roots, namely, our instinct for self-preservation, for establishing ourselves in social groups, and for satisfying appetites (nutritional and sexual).  With all that in mind, where, exactly, does sentience come in?  And if we have a hard time defining it in humans, how, exactly, do we define it in robots?

Maybe there’s one very easy way to detect sentience in robots: don’t give them any pre-programmed instructions… no instincts.  Then any independent action they take is free of instinctual roots, and therefore sentient.  Extending this a bit: if you can identify any action a robot takes that is completely unencumbered by pre-programmed instruction (say, a robot with no programmed sense of self-preservation or hunger, offered an equal choice of power sources), then any choice it makes is independent of programming, and is therefore sentient.
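As a thought experiment only, this “equal choice with no drives” test can be sketched in a few lines of Python.  Everything here is hypothetical, invented for illustration: the `Robot` class, its list of scoring “drive” functions, and the power-source options measured by distance.

```python
import random

class Robot:
    """Toy robot whose only motivations are an explicit list of 'drive'
    functions, each scoring how well an option satisfies that drive."""

    def __init__(self, drives):
        self.drives = drives  # e.g. [lambda option: ...]

    def choose(self, options):
        # Score every option against every programmed drive.
        scores = {opt: sum(d(opt) for d in self.drives) for opt in options}
        best = max(scores.values())
        tied = [opt for opt, score in scores.items() if score == best]
        if len(tied) == len(options):
            # No drive distinguishes the options.  A purely instinct-driven
            # machine has no basis to act; in the thought experiment, any
            # choice made here would count as an "independent" act.
            return None
        # Instinct alone decides; nothing here resembles sentience.
        return random.choice(tied)

# A robot with a self-preservation drive prefers the nearer power source...
guarded = Robot([lambda distance: -distance])
print(guarded.choose([2, 5]))   # picks 2, the nearer source

# ...while a drive-free robot freezes: no programming favors either option.
blank = Robot([])
print(blank.choose([2, 5]))     # None
```

The sketch makes the paradox concrete: strip away every drive and the machine has no reason to choose at all, so any choice it did make would have to come from somewhere outside its programming.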

This isn’t to say that we should create robots without instinctive drives.  No creature on this planet lives free of instinct, including humans; sentience and instinct must work together in all higher animals.  A robot without instinct would have no reason to learn, or to care about the impact its actions have on the world.  It would probably be a pretty dangerous thing to have around.

Asimov certainly understood this when he famously wrote his Three Laws of Robotics, designed to make sure robots would obey humans, but never hurt humans or allow them to come to harm.  This is akin to bringing a tiger into your home, but giving it instinctive drives to obey you and not hurt you.  And more than that: putting it in a cage that would prevent it from striking at you, even unwittingly.  Asimov’s laws restricted robots more than any instinctive drives ever seen on this planet.

And we’ve already seen, in Asimov’s later writings, that the Three Laws can be subverted; robots could use the logic of the Three Laws to decide that humans needed to be tightly controlled for their own good, or that the birth of a child should be prevented if its birth meant certain future suffering of other humans.  The Three Laws are not air-tight, and leave room for dangerous interpretation in extreme situations (as many of Asimov’s stories illustrated).

And this is where sentience comes in: When instinct alone doesn’t provide a solution, or where multiple choices exist to satisfy that instinct, sentience steps in to make the choice.  It separates the robots that act from the robots that stand immobile, frozen by instinctive paradoxes or logic loops.

Human brains are highly complex, and store a vast quantity of memory information, all of which is brought to bear to make a decision.  We cannot say for sure how much of our decision-making is due to instinctive responses, and how much is derived from abstract thought: reasoning over a combination of complete and incomplete data to arrive at a solution… sentience.  If every possible decision contains at least one variable that cannot be fully quantified, then it can be said that sentience must ultimately control every decision made.

Sentience would seem, therefore, to be highly possible in a robot, given enough of a brain and memory capacity to handle memory storage and evaluation mostly free of instinctual input.  We are approaching the edges of robot sentience now, mostly with testbed-bound machines that are learning to speak and respond to their creators, and to interpret and recreate voice and facial patterns.  We’re a long way from the robots of the movies that can take initiative for themselves. But we’re taking the first steps to creating them now.

This post is a follow-up to Robots: Tools, slaves and devils.

3 thoughts on “Robots and sentience”

  1. Some interesting comments made on Facebook regarding this post! I’d like to share them:

    Steven Mayfield
What is sentience? Can we diagram it? Certainly, memory is a part of the mix. Our minds build over a lifetime of experience. Not surprisingly, some of the most basic structures of our personality result from the first few years. And, except for injury, they remain as the foundations of our minds forever (almost impossible to change).

    Chay Tana Prince Wao
The Chambers Dictionary gives ‘sentient’ as “conscious; capable of sensation; aware; responsive to stimulus”. It then depends on how the above ‘consciousness’ is defined. If it doesn’t imply self-awareness (i.e., the ability to self-reflect – to recognise one’s own reflection) all animals are sentient. However, having been obsessed with the nature of consciousness for over four decades, I can’t see even the most sophisticated ‘computer-brain’ ever becoming truly self-reflective (a robot can be programmed to recognise an image of itself, but that wouldn’t constitute self-awareness). The idea that it’s possible comes from the false notion that the brain and therefore consciousness is purely mechanistic. This is a historically recent notion born of intellectual materialism, and there are many scientists and psychologists who refute the mechanistic world view (not to mention that the great sages of the Beautiful Earth, past and still living, have been teaching the nature of consciousness for millennia).

    Steven Lyle Jordan
    Until consciousness can be clearly defined or quantified, there will always be a doubt as to its very existence and in exactly what vessels it is present. This, in turn, gives doubt to the definition of sentience. We’re a long way from creating robots as potentially intelligent as ourselves; but perhaps when robots approach our level of intelligence, they will help answer the questions of consciousness and sentience.

    Phoenix MacKenzie
    Material science will never find that, but gradually its measurements are reaching into metaphysics – its parent.

    Steven Lyle Jordan
    I wouldn’t be too sure about that… looked at an atom lately? You couldn’t do that 100 years ago. Identified the connections between quantum particles separated by miles of distance? Couldn’t do that 10 years ago. Consciousness? Give it time.

    Phoenix MacKenzie
    Physics and metaphysics.

    Wolfy Bro
I’m not sure that any two-valued logic can capture sentience. Perhaps I read A. E. van Vogt’s Null-A books (and I bought and read/studied the entire oeuvre of Alfred Korzybski … for fun) too early. A multi-valued logic — the “fuzzy logic” that was all the rage years ago — might have a better chance of becoming sentient.

    If you’re interested, Neil Postman wrote a good introduction to Korzybski’s General Semantics called Crazy Talk, Stupid Talk. I thoroughly recommend it.

    Steven Lyle Jordan
    Wolfy Bro: My statement–that sentience can be determined if every decision contains at least one non-quantifiable variable that must be addressed before the final decision is made–best approximates fuzzy logic.

    Phoenix: Metaphysics is a way to ignore objective questions and jump past them to “higher,” usually subjective questions; instead of asking, “what’s an atom made of?” we ask, “what is it like?” Metaphysics, therefore, will never be able to identify consciousness, it will only be able to describe what it perceives as consciousness.

    But consciousness happens within a biological entity that can be measured and examined. It’s only a matter of time before our instruments are exacting enough to be able to identify the processes that happen in the brain that are the instigators of consciousness. The process of R&D for brains smart enough to independently control robots should bring us closer to that day.

    Phoenix MacKenzie
    Please do not disrespect what I have studied all my life, or for that matter Einstein, Newton, Bacon, Fludd, Franklin, Da Vinci, Pythagoras, Paracelsus etc. Metaphysics is not ignoring the material. It is not limiting yourself to measurement or the ability to reproduce something artificially.

    Steven Lyle Jordan
    I didn’t mean to disrespect; I’ve studied these subjects too, and this is how I interpret them. Obviously, you’re free to reject my interpretation.

