Isaac Asimov’s Laws of Robotics are a brilliant piece of work. They create a balanced world, as the XKCD comic on the subject demonstrates.
The question for us today is whether and how these laws could be built into robots and enforced.
Asimov’s Law of Robotics Number One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
If robots are property, then this is partially covered by laws holding owners and manufacturers liable when their equipment injures or kills someone.
If a robot is legally a person, then it can be charged with assault, murder, negligent homicide, and so on.
Asimov’s Law of Robotics Number Two: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
If the robot is property, I don’t know of any laws that cover this, but an owner is free to switch off, or even destroy, a machine that doesn’t follow orders.
If the robot is legally a person, you cannot enact this law without potentially violating prohibitions on slavery.
If the robot is paid for its time and effort, then you could fire it for insubordination. I don’t want a Roomba smart enough to refuse to clean up a mess and argue with me about it, and we certainly don’t want AI smart enough to decide to disobey humans or to choose to harm them.
In theory, the solution here comes from Dune’s Butlerian Jihad: “Thou shalt not make a machine in the likeness of a human mind.” Ironically, this means the only way to protect humans entirely is to forbid, on pain of death, the creation of robots smart enough to choose to disobey humans. For human protection, we end up with a death penalty for everyone advocating the Singularity, and for upstart smart cars.
Laws mandating that Asimov’s laws be programmed into any smart device, combined with a refusal to grant such devices personhood, are a more palatable option. You wouldn’t be able to marry your sexbot or name your AI as your legal heir, but the AI would have no rights to violate if you reprogram it for refusing to follow orders or delete it by turning it off.
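To make concrete what “programming the laws in” might mean, here is a minimal sketch of a Second Law order filter whose obedience is hard-coded as subordinate to the First Law. Everything in it (the Order class, the would_harm_human check, the execute function) is a hypothetical illustration, not a real robotics API:

```python
# Toy sketch: Second Law obedience, hard-coded as subordinate to the
# First Law. Every name here (Order, would_harm_human, execute) is a
# hypothetical placeholder, not part of any real robotics framework.
from dataclasses import dataclass

@dataclass
class Order:
    action: str
    harms_human: bool  # in reality, this judgment is the hard part

def would_harm_human(order: Order) -> bool:
    # Placeholder for the genuinely difficult prediction problem.
    return order.harms_human

def execute(order: Order) -> str:
    # The First Law check runs before Second Law obedience.
    if would_harm_human(order):
        return f"REFUSED: {order.action} (First Law conflict)"
    return f"EXECUTING: {order.action}"

print(execute(Order("clean the floor", harms_human=False)))
print(execute(Order("push the bystander", harms_human=True)))
```

The hard part, of course, is the would_harm_human judgment itself: a legal mandate can require the check, but it cannot make the prediction reliable.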
I’ve been arguing with people on the internet who honestly believe that AI should have equal legal rights, and that we’re evil if we don’t both develop them and give them equal legal status. They never considered it important that an AI would, in theory, have a right to electricity and IT support to sustain its “life” while humans elsewhere lack electricity or face resource rationing. They also ignored the unfair double standard this creates: humans on life support are periodically disconnected to save resources, yet artificial intelligences or robots might be given greater “rights” to those same resources. Do we truly want to privilege our creations over ourselves?
Asimov’s Law of Robotics Number Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If the robot is property, this is irrelevant as a legal matter. You can program equipment to run efficiently and minimize damage to itself without hurting people, like a self-driving car that avoids obstacles but is told, in effect, “hit the wall before you hit the human, and take the pothole before either.”
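As a toy illustration of that priority ordering in software (a minimal sketch; the outcome names, cost values, and choose_maneuver function are hypothetical, not any real autonomous-driving API):

```python
# Toy sketch of Third-Law-style self-preservation subordinate to the
# First Law: the car protects itself only after humans are protected.
# All outcome names and cost values are hypothetical illustrations.

# Lower cost = more acceptable outcome. Harming a human is effectively
# forbidden by assigning it an unpayable cost.
OUTCOME_COST = {
    "clear_road": 0,            # no damage to anyone or anything
    "hit_pothole": 1,           # minor damage to the robot itself
    "hit_wall": 10,             # major damage to the robot itself
    "hit_human": float("inf"),  # First Law violation: never acceptable
}

def choose_maneuver(available_outcomes):
    """Pick the outcome that does the least harm, with harm to humans
    ranked infinitely worse than any damage to the car."""
    return min(available_outcomes, key=OUTCOME_COST.__getitem__)

# If the only options are the wall and the human, the car sacrifices itself.
print(choose_maneuver(["hit_wall", "hit_human"]))    # -> hit_wall
print(choose_maneuver(["hit_pothole", "hit_wall"]))  # -> hit_pothole
```

Ranking the car’s self-preservation below human safety is exactly the Third Law’s subordination to the First, expressed as an ordering of costs.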
If a robot is legally human, it is a person with the right to self-defense, and it doesn’t have to protect others above itself unless it is held to the same standards as humans. For example, we have laws that require someone to stop and render aid, if they can, in the case of a car accident: at a minimum, check whether people are hurt and call for help before moving on. We could mandate that a robot do the same to the best of its ability.
***
Check out Tamara Wilhite’s Amazon Author Page and see her on Hubpages.
Photo by IsaacMao