As a "moral law" governing robots that will soon be serving humanity, does Asimov's Three Laws of Robotics have jurisdiction over Artificial Intelligence software that will be working for us in the near future?
By: Vanessa Uy
Even though the world's "Legal Eagles" still perceive them as a mere literary device (sadly, for now that is all they are), science fiction literature guru Isaac Asimov's Three Laws of Robotics nevertheless serve as a theoretical legal template, a guide for formulating the future "moral guidelines" of the humanoid, bipedal, autonomously self-aware robots that will be working for and serving humanity in the foreseeable future. But do Asimov's Three Laws of Robotics have jurisdiction over the Artificial Intelligence / self-aware computer programs and software that will be aiding us with our intellectual drudgery? Before we go any further, let's first review Asimov's Three Laws of Robotics in detail.
Asimov's Three Laws of Robotics - that is, the version formulated by the famed science fiction literature guru himself - state the following (a short illustrative sketch of how this hierarchy might look in software follows the list):
1) The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) The Second Law states that a robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3) The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
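To make the hierarchy of the three Laws concrete, here is a minimal sketch, in Python, of how they might be encoded as a precedence-ordered rule check, where an action is vetoed the moment it violates a higher-priority Law. This is purely illustrative: the Action fields and the permitted() function are hypothetical stand-ins, not any real robotics standard or API.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False              # would carrying out the action injure a human?
    allows_harm_by_inaction: bool = False  # would *not* acting let a human come to harm?
    ordered_by_human: bool = False         # was the action commanded by a human?
    endangers_robot: bool = False          # would the action damage or destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being or,
    # through inaction, allow a human being to come to harm.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey orders given by human beings, except where such
    # orders would conflict with the First Law (already vetted above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, as long as such protection
    # does not conflict with the First or Second Law.
    return not action.endangers_robot

print(permitted(Action("fetch the coffee", ordered_by_human=True)))          # True
print(permitted(Action("push a human out of a window", harms_human=True)))   # False
print(permitted(Action("walk into the incinerator", endangers_robot=True)))  # False

Note how the ordering alone does the moral work in this toy version: a human order that would harm someone is rejected before the Second Law is ever consulted.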
Conservative Christians who are into science fiction literature view Asimov's Three Laws of Robotics as "the parts of the Ten Commandments that atheists arbitrarily follow": in short, moral laws that are self-serving to them. Maybe they got the idea after reading Professor Richard Dawkins' New York Times bestseller The God Delusion. Still, it does seem that self-awareness demands some semblance of morality, whether the entity in question is organic or artificial.
From my own perspective, Asimov's Three Laws of Robotics are ideologically anathema to Soviet-era socialism, given that Asimov's robot-related science fiction masterpieces were written and published at a time when the Soviet Union was busy proving to the world the superiority of Marxist-Leninist socialism over the Western capitalist system. The Laws give weight to the self-preservation of the sentient entity even as it serves the greater whole of society and state, in contrast to the Soviet-era citizen's be-all and end-all subservience to the state. Add to that the bumper crop of science fiction literary classics that has since sprung forth from the Three Laws of Robotics and the moral and legal dilemmas implied by this particular moral construct.
If future global corporate jurisprudence ever embraces Asimov's Three Laws of Robotics, it would point us toward a utopian society that could become a reality if we took steps now to establish the requisite technological and legal foundations. Just imagine your computer software making your Internet experience less stressful than it is at present, especially for students doing online research who experience first hand the quirky nature of "unintelligent" search engines. "Give Me Convenience or Give Me Death" could become the software peddlers' marketing slogan of the future.
Unfortunately, at present, our still-fledgling artificial intelligence software systems can't pass that dreaded Turing Test with flying colors. You know, the test formulated by the late, great World War II-era computer scientist Alan Turing, in which a machine is judged intelligent if its typed conversation cannot be told apart from a human's. Asimov's Three Laws of Robotics should cover this largely ignored aspect of our increasingly technological world, because in the movie I, Robot it was an artificially intelligent software system, not the bipedal mechanized humanoid robots, that was doing all the evil work.
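For readers unfamiliar with how the test actually works, here is a minimal, purely illustrative Python sketch of Turing's "imitation game": a judge reads typed answers from two hidden respondents, one human and one machine, and must guess which is which. The respondent and judge functions here are hypothetical placeholders, not real chatbot code.

import random

def human_respondent(question: str) -> str:
    # Placeholder: in a real test this would be a live human typing replies.
    return "Hard to say; it reminds me of summers at my grandmother's house."

def machine_respondent(question: str) -> str:
    # Placeholder: in a real test this would be the AI program under evaluation.
    return "Rain smells like wet dust; it reminds me of summers long past."

def imitation_game(questions, judge) -> bool:
    # Randomly hide the human and the machine behind the anonymous labels "A" and "B".
    pair = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(pair)
    assignment = {"A": pair[0], "B": pair[1]}
    transcript = {label: [(q, responder(q)) for q in questions]
                  for label, (_, responder) in assignment.items()}
    guess = judge(transcript)  # the judge names the label it believes is the machine
    return assignment[guess][0] == "machine"  # True means the machine was unmasked

coin_flip_judge = lambda transcript: random.choice(list(transcript))
print(imitation_game(["What does rain smell like?"], coin_flip_judge))

A machine is said to "pass" when even careful judges can do no better than this coin-flip judge at telling it apart from the human.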
Monday, July 27, 2009