MrPapagiorgio
  • Threads: 58
  • Posts: 183
Joined: Nov 11, 2009
December 3rd, 2009 at 3:51:47 AM
Ahh, my very first post in "Questions and Answers > All Other". Another proud moment. Anyway, I was watching Bicentennial Man with Robin Williams while I had this site open, and it occurred to me that the movie and the site might intersect. That movie is one of the many works that recite the Three Laws of Robotics. For those unfamiliar, the laws are:

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In the Will Smith movie I, Robot, the robots interpreted these rules to mean that humans required the robots' protection, and therefore that the robots needed to remove humans from decision-making, violently if necessary, in order to protect humankind. I also saw an old TV show a while back (it may have been The Outer Limits) where the robots guarding an apartment building came to the same conclusion from the same laws.
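
Just to make the interpretation question concrete, here is a minimal sketch (Python, with action names and harm scores I made up entirely) contrasting a literal reading of the First Law with the movie's collective reading:

# Toy sketch of the First Law as a filter over candidate actions.
# All actions and numbers below are invented for illustration.

def literal_ok(a):
    # Literal reading: forbidden if it injures ANY human, and also
    # forbidden if it is standing by while a human comes to harm.
    return a["injures"] == 0 and not a["allows_harm"]

def collective_ok(a):
    # Movie-style reinterpretation: only NET harm to humankind counts,
    # so injuring a few to "protect" the many is permitted.
    return a["net_harm"] <= 0

actions = [
    {"name": "do nothing",      "injures": 0,   "allows_harm": True,  "net_harm": 5},
    {"name": "seize control",   "injures": 100, "allows_harm": False, "net_harm": -50},
    {"name": "warn the humans", "injures": 0,   "allows_harm": False, "net_harm": 0},
]

print([a["name"] for a in actions if literal_ok(a)])     # ['warn the humans']
print([a["name"] for a in actions if collective_ok(a)])  # ['seize control', 'warn the humans']

Under the literal reading, the violent takeover never passes the filter, no matter what it claims to be protecting.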

Questions for readers and wizards alike:

Do you agree with the movies and shows that a robot following these rules could injure humans to protect mankind, or do you believe that is a violation of the rules? Personally, I think it still violates the letter of the rules.

Are there any other ways you think robots could interpret these rules?

What is the probability of robots programmed with these rules running amok and attacking mankind?

If the robots DO run amok, what is the probability they will succeed?

Creative math time!
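To get it started, a toy calculation with completely invented numbers: if each robot independently misreads the laws with some tiny probability p, then the chance that at least one of N robots goes rogue is 1 - (1 - p)^N, and that grows fast:

# Toy model with made-up numbers: each robot independently misreads
# the Laws with probability p; odds that at least one goes rogue?
p = 1e-6  # assumed per-robot failure rate (a pure guess)
for n in (1_000, 1_000_000, 100_000_000):
    print(n, 1 - (1 - p) ** n)
# 1000       -> ~0.001
# 1000000    -> ~0.632
# 100000000  -> ~1.0

The independence assumption is doing all the work there; if every robot shares the same software update, it is really just the probability of that one bug.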
So I says to him, I said "Get your own monkey!"
odiousgambit
  • Threads: 326
  • Posts: 9570
Joined: Nov 9, 2009
December 3rd, 2009 at 5:19:41 AM
Well, this is sure out there, but I have to admit I am struck by the fact that computers had no sooner come on the scene than science fiction writers began to worry that they would take over. I don't read science fiction, but apparently very early on the term "self-realization" came into use to describe what was felt to be the essential step for a powerful computer to run amok. This dawned on me watching the silly Robby the Robot movies of the 1950s a while back, as one movie plot was all about this.

In other words, it did not take latter-day Schwarzenegger movies to start asking that particular what-if. Note that the robots in those movies have quite abandoned the 3 Laws of Robotics!
the next time Dame Fortune toys with your heart, your soul and your wallet, raise your glass and praise her thus: “Thanks for nothing, you cold-hearted, evil, damnable, nefarious, low-life, malicious monster from Hell!”   She is, after all, stone deaf. ... Arnold Snyder
Wizard
Administrator
  • Threads: 1493
  • Posts: 26480
Joined: Oct 14, 2009
December 3rd, 2009 at 9:05:05 AM
I've never thought about it, to be honest. In my opinion we're still at least a century away from this being a practical point of discussion. Personally, I don't think robots will ever attain the kind of deep thinking that resulted in the rebellion in I, Robot. Robots will always just be machines, and lousy at human interaction. Anyone who has called customer support and gotten one of those stupid voice-recognition mazes will know what I mean. If robots could ever attain self-awareness, then I think they could evolve (like they did in A.I.) and conquer the entire galaxy, but you don't see robots ruling Earth, do you?

Forgive me for getting off topic, but this question has similarities to the debate about why HAL killed the astronauts in 2001.
"For with much wisdom comes much sorrow." -- Ecclesiastes 1:18 (NIV)
MrPapagiorgio
  • Threads: 58
  • Posts: 183
Joined: Nov 11, 2009
December 3rd, 2009 at 9:12:25 AM
Quote: Wizard

I've never thought about it, to be honest. In my opinion we're still at least a century away from this being a practical point of discussion. Personally, I don't think robots will ever attain the kind of deep thinking that resulted in the rebellion in I, Robot. Robots will always just be machines, and lousy at human interaction. Anyone who has called customer support and gotten one of those stupid voice-recognition mazes will know what I mean. If robots could ever attain self-awareness, then I think they could evolve (like they did in A.I.) and conquer the entire galaxy, but you don't see robots ruling Earth, do you?



So you would say the robot's edge is rather low?

Quote: Wizard

Forgive me for getting off topic, but this question has similarities to the debate about why HAL killed the astronauts in 2001.



I've heard that debate a few times. What's your take?
So I says to him, I said "Get your own monkey!"