
A Look at “Blade Runner 2049 and Philosophy”

The “And Philosophy” series of books by Open Court Press generally involves two dozen philosophers giving their takes on science fiction books, TV shows, and movies. I’ve reviewed several of their more recent books, like “The Handmaid’s Tale and Philosophy” and “Blade Runner 2049 and Philosophy”.

The “Blade Runner” franchise is deep by design. The first movie makes you ask what makes you human while encapsulating a very Biblical narrative of a fallen angel rebelling against (and killing) his creator. Roy Batty kills not only his creator but a holy trinity of sorts: the creator himself, the maker of eyes/wisdom, and the creator’s rather innocent, Jesus-like protégé. “Blade Runner 2049” begins with the birth of a child, the revelation of which threatens to overturn the moral order and liberate an oppressed people. At the same time, a corporate king seeks to claim the child (and likely dissect him/her) while his right-hand angel kills, maims, and deceives to follow her false god’s will. Niander Wallace’s god complex is his only well-defined personal characteristic aside from being blind. He simply wants to possess, and likely corrupt, the child to build an army to storm heaven. One of the “Blade Runner 2049 and Philosophy” chapters compares him to the Demiurge, the false god of the material world.

34 Ethical Questions Raised by Elevating Artificial Intelligence to Human Legal Status

Elevating artificial intelligences to the same legal status as humans raises a number of ethical questions:

If an AI is legally “alive,” is turning it off considered sedation or murder? Are there limits on when one may be turned off?

Does an artificial intelligence have a right to access information? Is internet access for it a right akin to the freedom to walk down the street for a human?

Can We Make Asimov’s Laws of Robotics Actual Law?

Asimov’s Law of Robotics Number One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

If robots are property, then this is partially covered under laws holding owners/creators liable when their equipment injures or kills someone.

If a robot is legally a person, then it is guilty of assault, murder, negligent homicide, etc.

Asimov’s Law of Robotics Number Two: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.