Oh, are any other robot laws used by different machines? The Three Laws run into some... problems... some questions I should ask before I play a machine intelligence that at least pretends to follow them:

How is injury defined? How is human defined? How is harm defined?

A space trolley is heading down the tracks at 1,000 miles per hour. It can take one of two paths: one runs over one person, the other runs over five people. Which track will the robot choose?

A robot witnesses a person fooling around with alien technology of unknown function. The robot calculates a 34% chance that this device could cause catastrophic damage and kill millions of humans. It also calculates that if it fired its disintegration beam at the human, the human would be killed with 100% certainty, and the weapon has a 0% chance of killing everyone. What does the robot do?

A gangster holds hostages and orders a robot to go kill people, or the hostages will perish. The robot has no means of killing the gangster.

An alien species is evolving on another planet; this species could conceivably, in thousands of years, become a threat to humanity. Is the robot obligated to destroy all life on the planet?

Here is an interesting article on AI ethics, certainly not the only one; it's a topic of much debate. [url]http://intelligence.org/files/CEV-MachineEthics.pdf[/url]
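The disintegration-beam dilemma is, at bottom, an expected-harm comparison. A minimal sketch of that arithmetic, assuming the robot naively minimizes expected deaths (the 2,000,000 figure just stands in for "millions" and is purely illustrative):

```python
# Hypothetical sketch: a naive expected-harm reading of the First Law.
# The victim count for "millions" is an assumed placeholder, not from canon.

def expected_deaths(probability, victims):
    """Expected number of deaths for one possible action."""
    return probability * victims

# Do nothing: 34% chance the alien device kills millions (say 2,000,000).
risk_of_inaction = expected_deaths(0.34, 2_000_000)   # 680,000 expected deaths

# Fire the beam: exactly one human dies, with certainty.
risk_of_action = expected_deaths(1.0, 1)              # 1 expected death

# A harm-minimizing robot fires the beam, yet firing still directly
# "injures a human being", so either choice violates the First Law as written.
best_action = "fire" if risk_of_action < risk_of_inaction else "wait"
```

The point of the sketch is that the Laws give no tie-breaking rule: both branches cause harm, so a literal-minded robot has no lawful move at all.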