Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or the Second Law.
"3 LAWS UNSAFE" ARTICLES
Is it possible to create ethical AI based on the Three Laws? Is it even ethical to create AI based on the Three Laws? What other solutions have been proposed for the problem? These questions are explored in the Articles section. The articles offer perspective on why the field of AI ethics is crucial, and why Asimov's Laws are only its beginning.
We have watched a couple of episodes of "Lost in Space" (the 1965 original) and the film I, Robot. We are now working on building a couple of robots and loading programs onto them. To do this, however, we need batteries and a virtual machine with USB support, something that neither Microsoft Virtual PC 2004 nor 2007 provides.