Monday, 10 March 2008


I have been working on my robotics unit plan for the term. We have a number of Lego RCX robots and an NXT robot available to us. I have been working on the theory behind the assessment. So far we have looked at robots, where they are used, how they work, and the three laws of robotics.

Asimov’s Three Laws of Robotics:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or the Second Law.
But are these laws safe? There is a website that looks at these issues:

Is it possible to create ethical AI based on the Three Laws? Is it ethical to create ethical AI based on the Three Laws? What other solutions have been proposed for the problem? These questions are explored in the Articles Section. The articles give perspective on why the field of AI ethics is crucial, and why Asimov’s Laws are simply its beginning.

We have watched a couple of episodes of "Lost in Space" (the 1965 original) and the film "I, Robot". We are now working on building a couple of robots and getting some programs into them. However, to do this we need batteries and a virtual machine with USB support, something that neither Microsoft Virtual PC 2004 nor 2007 provides.
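Once the hardware is sorted, a first program for the NXT in NXC (Not eXactly C, the free C-like language used with the BricxCC tools) might look something like this. It's just a minimal sketch, and it assumes the drive motors are plugged into output ports A and B:

```
// Minimal NXC sketch: drive forward for one second, then stop.
// Assumes drive motors on output ports A and B.
task main()
{
    OnFwd(OUT_AB, 75);  // run both motors forward at 75% power
    Wait(1000);         // keep driving for 1000 ms
    Off(OUT_AB);        // stop both motors
}
```

Something this small is still enough for the students to see the whole edit-compile-download-run cycle before they try sensors and loops.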
