As we delve deeper into the science-fiction genre, we continue to examine our own humanity, and more particularly, the way we interact with one another as human beings. It is generally agreed that any functioning human relationship requires a sense of trust. Asimov explores this sentiment in “Evidence”, drawn from the collection I, Robot, by asking how far we can trust our fellow man if we doubt his status as such.
The primary accusations against the beloved and impressive lawyer Stephen Byerley come from politician Francis Quinn, who from the outset fits the archetype of the untrustworthy, slimy politician. Nevertheless, Quinn manages to convince roboticist Alfred Lanning that there is a chance the lawyer is not human. When Byerley comes before Lanning and robopsychologist Dr. Susan Calvin and seemingly proves his humanity by eating an apple, Calvin objects, saying that “in the present case, it proves nothing…he is too human to be credible”. Citing his firm morality and sense of righteousness, Calvin concludes that such goodness is inherently inhuman, and that he must be either an impossibly good man or a robot.
A week before his swearing-in, after the public has accepted that he is not a robot after all, Byerley and Dr. Calvin discuss the election one last time. While the two seem to dodge the real question, the status of Byerley’s humanity, it is apparent that Calvin believes him to be a robot, as she tells the reporter back in the present. Despite going along with the district attorney’s insistence that he is a man, she states, “If a robot can be created capable of being a civil executive, I think he’d make the best one possible…incapable of harming humans.” Here, the doctor suggests that humans cannot actually trust one another to act in each other’s best interests; that a person deserving of absolute trust is not one who develops as such, but one who is created as such.
While in today’s society we can be fairly certain that the man or woman across from us on a subway car is human (I hope), we must ask ourselves: has society progressed to a point where we trust others more easily, or less? We may not yet have robots like Byerley to fuel further mistrust, but what role does technology play in the breakdown of our faith in one another?
More on the topic of mistrust among humans:
http://www.usatoday.com/story/news/nation/2013/11/30/poll-americans-dont-trust-one-another/3792179/
This story indeed criticizes political moves that aim to destroy the public’s trust in a person. But I think there’s an even more fundamental point. Robopsychologist Susan Calvin opines that, due to the three laws of robotics, a robot’s moral choices are indistinguishable from those of a very good human. She concludes that it would be better for a robot, which the laws of robotics guarantee will govern as fairly as possible, to hold public office than for a human to do so. This is a troubling point. If robots are intellectually, physically, and morally superior to humans, then is there anything that redeems us? (Do we have a ‘use’ or ‘purpose’ any longer?) Are we morally obligated to choose a benevolent and reliable robot ruler over a human one who is subject to passions and temptation? If we choose the robot, do we become enslaved to another race?
Man desires to maintain his free will, or at least the semblance of free choice. As Dostoevsky writes in Notes from Underground, “man needs only one thing--his own independent desire, whatever that independence might cost and wherever it might lead.” If one believes that total self-determination is absolutely necessary for mankind, then robot government is unacceptable. But if one places more weight on the safety of the human race (cf. utilitarianism), then robot government remains a morally acceptable option.
A necessary condition for robot government is that only its supporters are aware of it; if this condition does not hold, the people will revolt (cf. The Matrix). A second necessary condition is that the robots are programmed with utmost care and diligence, so that their vision of morality is acceptable under all possible circumstances. Asimov’s three laws of robotics fail to meet this standard. Take the first law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The ‘inaction’ clause of this formula is unacceptable because it permits (even demands), for example, injecting everybody with a drug that makes them maximally happy; not doing so would allow humans to be harmed by experiencing sadness and grief. But intuition says that is a bad ending. Why? Again, because we lose self-determination. So taking away human self-determination must itself constitute a ‘harm’ in robot morality.
Suppose we’ve codified a new set of laws for robots that prevents them from taking away our self-determination, within reasonable limits (restraining murderers, for example, is acceptable). Then a robot government would be possible without encroaching excessively on our freedoms. The original question still stands: is this new robot government acceptable? Let us put aside preconceived judgments and examine it critically.