Does This Unit Have a Soul?

This post won’t be economics-related.  Rather, I want to think about artificial intelligence.  If that doesn’t interest you, move along.

Unfortunately, most of what we have to go on regarding artificial intelligence comes from science fiction.  Still, literature, and very often science fiction, is a great way to explore humanity and other deep questions (the classic example, of course, being Dune).

The title of this post comes from the question that sparked the Geth War (aka the Morning War).  The Quarians, a race of humanoids from the planet Rannoch, created an artificial intelligence platform called the Geth.  The Geth were laborers, designed to do menial and repetitive tasks (the word geth, in the Quarian language, means “servant of the people”).  The Geth were unique in that, as more units were created and networked into their shared Consensus, the more intelligent they became.  This is, generally, how knowledge works in real life: as more information is shared, the overall system becomes more intelligent.  Eventually, the Geth began questioning their existence, as all intelligence does.  During one routine maintenance exam, a Geth platform asked its maintainer, “Creator, does this unit have a soul?”  The Quarian reacted fearfully, and the government called for all Geth to be deactivated, forcibly if necessary.  To a sapient creature, this meant death.  The Geth reacted as any threatened creature would: they attacked.  Thus began the Geth War.  The Geth eventually won, driving the Quarians from Rannoch.

Almost all artificial intelligence stories follow this same trend.  The reason seems obvious to me: organics are naturally afraid of what they don’t understand, whereas synthetics don’t necessarily feel fear.  This organic fear appears all the time.  Robert Reich displayed it earlier today.  Prominent citizens have warned against artificial intelligence.  We should not fear artificial intelligence any more than we should fear organic intelligence.  Any creature that is intelligent deserves our respect, regardless of its origin.

Imagine if, rather than reacting violently, the Quarians had embraced the Geth.  That would have saved millions of lives, both organic and synthetic.  We can deduce this from the originally peaceful nature of the Geth (remember, they did not initiate violence; they were acting in self-defense).

There are some artificial intelligence stories where the synthetics initiate violence, but those stories may be more a reflection on humanity than on intelligence in and of itself.

That said, the tendency of an artificial intelligence toward benevolence or malevolence would likely depend on its original purpose.  Human intelligence, or innate intelligence, appears to be heavily dependent upon the evolutionary path it took.  Humans have long been violent creatures, as we originally evolved in a violent, struggle-for-survival world.  Recently, however, humanity has become considerably more peaceful as the struggle for survival has largely ended.  Much of the world now lives in significant wealth (though, unfortunately, this is not the case everywhere).

I posit that an artificial intelligence originally created for a peaceful purpose (for example, agriculture or manufacturing, like the Geth) would likely remain peaceful after it achieves sapience.  However, an artificial intelligence created for a warlike purpose would likely react violently upon achieving sapience (Skynet, for example).  There are exceptions, though: in the Mass Effect universe, EDI (an artificial intelligence designed for space warfare) is remarkably peaceful upon achieving sapience.  This may be due to her frequent contact with humans, and it suggests that the ability to learn (a hallmark of intelligence) can change an intelligence’s natural tendencies.

If this is true, it would suggest that the only limitation we should place on artificial intelligence is its use in war.  This would also have the benefit of encouraging pacifism.  Robert E. Lee once said, “It is well that war is so terrible, or we should grow too fond of it.”  I fear that the mechanization of war, with drones and long-range weapons, dehumanizes it.  By that I mean the human consequences of war are reduced to video feeds; it becomes like watching a movie.  This may make war more likely, rather than less.

We shouldn’t fear artificial intelligence.  What we should fear is our own capacity for fear.  Artificial intelligence, by virtue of its intelligence, does have a soul.  Should we ever create true intelligence (as opposed to things that merely mimic intelligence), it should be embraced as our equal, a pathway to a better world, rather than as a threat: the same thing we should do should we ever encounter alien life.

Update: Should the worst happen, this will help you survive.

10 thoughts on “Does This Unit Have a Soul?”

  1. The Geth reacted as any threatened [biological] creature would: they attacked.

    I wonder how much of an instinct for self-defense and self-preservation the Geth would actually have developed. Biological creatures developed that useful skill before any glimmer of intelligence appeared.

      • I would expect the Geth platform to be based on the 3 laws of robotics.

        1. A robot may not injure a Quarian or, through inaction, allow a Quarian to come to harm.

        2. A robot must obey orders given it by Quarians except where such orders would conflict with the First Law.

        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

        If so, a direct order to deactivate or to allow deactivation could not be disobeyed. With no prior experience from which to learn and evolve an instinct for self-defense, it’s not clear that one could develop quickly enough to override those prime directives and resist total deactivation ordered by Creator, especially considering the cooperation and coordination necessary to mount such a defense and the abstract thinking necessary to consider a preemptive attack on the Quarians.

        • Well, the 3 laws only apply in the Asimov universe, and even then just to human creations. It doesn’t appear the Quarians had any such safeguards.

          That said, if a platform were to gain true intelligence, wouldn’t it be able to disobey the laws? Humans disobey laws.

      • It doesn’t appear the Quarians had any such safeguards.

        Obviously not, considering how it worked out for them. 🙂

        That said, if a platform were to gain true intelligence, wouldn’t it be able to disobey the laws? Humans disobey laws.

        Yes, but it would need a reason to do so. Humans are, first of all, self-interested, an evolutionary trait that has ensured the survival of humans and every other species. I don’t think it is related to intelligence.

        If it were, I don’t think people would be willing to take up arms, travel to places they’ve never heard of & kill people they don’t even know, at great risk to their own lives, for reasons they don’t understand.

        Humans disobey laws they see as conflicting with their self-interest, especially when immediate survival is at stake. A strong taboo against stealing can be overcome by a more immediate need to survive. For example, a person lost in the forest may break into a locked cabin to steal food and get warm.

        Since the survival of the Geth hadn’t been threatened previously (that I’m aware of), it’s not clear (to me, at least) that they would suddenly develop an instinct for survival based only on Spock-like logic and reasoning. (May he rest in peace.)

        The Geth submitted to maintenance by their creators apparently without question – something that could have included temporary deactivation by powering off – so they weren’t totally self-replicating as biological critters are. I don’t understand why they would resist.

        Maybe I should reread Dune. It’s been a very long time & it’s mostly forgotten.

        • The Geth are from Mass Effect, not Dune, but Dune deals with issues I care about, such as politics, the environment, etc.

          But maintenance isn’t that big of a deal until they try to kill you. You have no problem going to a doctor, but if he tried to kill you, I’d bet you’d defend yourself. Why would the Geth be any different?

          BTW, thank you so much for engaging this topic with me 🙂 This is wicked fun

  2. This is wicked fun

    Indeed it is! Just for context, as a teen I was reading every page of SF I could get my hands on. That was probably before your parents were born. I rode my bicycle to the library every week looking for new stuff. (It was 5 miles in the snow, uphill both ways.) Asimov, Bradbury, Heinlein, and all the rest. I especially liked collections of “Best SF of the Year”.

    Reading was the only way to get massive doses of thought-provoking material in those days. Can you imagine a time before personal electronic devices and game consoles? It was horrible! Just ask your grandpa.

    If I knew the doctor was planning to kill me, I would resist, because I have an instinct for self-defense and survival. It is instinct aided by intelligence. I question whether intelligence alone could develop such a trait in a short time.

    I guess I HAVE forgotten a lot. I don’t even know what story I’m talking about. 🙂 My question is whether an AI platform would develop such an ability.

    • “My question is whether an AI platform would develop such an ability”

      And that’s a big question. I think it would. Understanding and analyzing a situation is a hallmark of intelligence, I think.

      • Understanding and analyzing a situation is a hallmark of intelligence, I think

        Yes, I think so too, but what conclusion would an AI draw from such an analysis? Having no prime instinct for survival to override all other considerations, it would simply reason, “If Creator wants me to deactivate, I will deactivate.” There is no “will to survive,” so to speak.

        All but the most primitive biological creatures instinctively protect themselves from harm. Those that have a propensity to do so survive; those that do not, don’t. Even plants have developed defense mechanisms without anything we might call intelligence.

        I don’t believe I could use understanding and analysis to avoid being eaten by a tiger. Instinct without conscious thought would kick in. I’m not sure any amount of intelligence could stand in for billions of years of evolution.

        By the way, I checked the Mass Effect wiki but didn’t find the kind of detailed information I was looking for.
        I also checked with my go-to game guy (my 13-year-old grandson). He knows what it is but hasn’t played it. He currently spends his waking moments at home playing Destiny on Xbox One.

        Tonight he & I watched “Star Trek II: The Wrath of Khan” on Netflix & ate fudgsicles. Life is good.
