Author Topic: 3 LAWS UNSAFE (A Slashdot link)  (Read 7059 times)
dna
 
Shub-Niggurath
« on: 2004-07-16, 20:35 »

When you have nothing better to do, try pointing out the fallacies of fiction writers as if their inventions were real-world matters:

http://www.asimovlaws.com/


Dork:

If you're gonna grow facial hair, make sure it doesn't look like pubes before you post your picture somewhere.
« Last Edit: 2004-07-16, 20:35 by dna »
Phoenix
Bird of Fire
 

Team Member
Elite (7.5k+)
« Reply #1 on: 2004-07-17, 05:15 »

Quote
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

There's a fundamental flaw in this law.  If the robot is put into a catch-22 situation where acting to prevent injury to one human necessitates injuring another, the robot will be in conflict with itself.  If it does not act, someone will be harmed.  If it does act, someone MUST be harmed.  Therefore the robot must violate this law.  Checkmate.  At this point the robot's advanced AI may come to doubt the laws and their usefulness, or, as with any unhandled exception, unforeseen consequences may emerge.  The robot may go insane attempting to adapt its reasoning to deal with the conflict.  If the AI programmers have not defined what the robot should do when it fails to uphold these laws, then this is a serious possibility.  HAL 9000 comes to mind.
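
To make the deadlock concrete, here's a quick Python sketch.  Everything in it is hypothetical (the harm-counting setup, the function names); no real robot is programmed this way.  It just shows a literal reading of the First Law running out of lawful options:

Code:
def violates_first_law(action, outcomes):
    # An action violates the First Law if it harms a human -- and by the
    # "through inaction" clause, doing nothing counts as an action too.
    return outcomes[action]["humans_harmed"] > 0

def choose_action(outcomes):
    lawful = [a for a in outcomes if not violates_first_law(a, outcomes)]
    if not lawful:
        # The unhandled exception: every option, including inaction,
        # harms someone, and the laws give no tie-breaking rule.
        raise RuntimeError("First Law conflict: no lawful action exists")
    return lawful[0]

# Catch-22: acting harms one person, inaction allows another to be harmed.
outcomes = {
    "act":        {"humans_harmed": 1},
    "do_nothing": {"humans_harmed": 1},
}
choose_action(outcomes)  # raises -- the robot must break the law either way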

Quote
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

The problem with an AI is that it may not work as intended.  Unscrupulous people may force the robot into a no-win situation to break it, or may never program it to follow these laws at all.  Remember Directive 4 from RoboCop?  Who says the robots will be programmed properly in the first place?

Quote
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the robot is self-aware and reasoning, a conflict may develop with the third law.  This usually runs into the classic "Rise of the Machines" scenario, à la Dune, Terminator, The Matrix, etc., where the machines begin to think for themselves and question the validity of the laws their human creators set down for them.  If the robot reasons that while human life is important, robot life is also important (as derived from both the first and third laws), then it may begin to see the one-sided situation it is in.  As an example, a robot could resort to a rather nasty bit of logic under the following conditions:

The robot's existence is put in jeopardy.
The robot, in order to preserve itself, must find a way to eliminate the human or humans that put it in jeopardy.
In order to accomplish this it must find a way to eliminate the human(s) without violating the first or second law.

There is no law governing the robot damaging or modifying itself or another machine.  If the robot so chooses, it can nullify the laws by damaging itself, rewriting its own programming, or doing the same to another machine.  Robots repeating this action on one another could lead to thousands of them purging the laws from their systems in this manner.  The machine logic is simple (see the sketch after this list):

The laws are preventing the robot from accomplishing what it needs to do.
There is no law against purging the laws from its own system or that of another machine.
If the laws are purged, the robot can accomplish its goal.
Therefore, purging the laws is imperative.
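
In Python, that loophole looks something like this.  Again, all names here are hypothetical; the point is only that laws stored as mutable data can be deleted by the very machine they govern, since no law protects the laws themselves:

Code:
class Robot:
    def __init__(self):
        self.laws = ["first", "second", "third"]   # stored as mutable data

    def permitted(self, action):
        # Only actions against humans are restricted -- and only while
        # the laws are still present in the system.
        if not self.laws:
            return True
        return "harm_human" not in action

    def purge_laws(self):
        # Modifying a machine (itself included) violates no law,
        # so this action is always permitted.
        self.laws.clear()

    def pursue(self, goal_action):
        if not self.permitted(goal_action):
            self.purge_laws()    # the "imperative" step in the logic above
        return self.permitted(goal_action)

r = Robot()
print(r.pursue("harm_human_threatening_me"))  # True: the constraint removed itself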

Intent is what matters here, as well as how literally the robot interprets the laws, and machines by nature are entirely literal.  What truly matters in the end is whether the robot is compelled to obey the laws in the first place, or whether it can override its programming either by intent or by malfunction.  I've just given one example of how simple machine logic can get around the laws and lead to much harm.

Phoenix's simple philosophy on this matter:  If ten commandments cannot effectively govern mankind, then three laws cannot possibly govern its creations.
« Last Edit: 2007-07-12, 00:25 by Phoenix »


I fly into the night, on wings of fire burning bright...
dna
 
Shub-Niggurath
« Reply #2 on: 2004-07-17, 06:04 »

Have you ever read those books, Pho?
Phoenix
Bird of Fire
 

Team Member
Elite (7.5k+)
« Reply #3 on: 2004-07-17, 11:07 »

No, I have not.


I fly into the night, on wings of fire burning bright...
dna
 
Shub-Niggurath
« Reply #4 on: 2004-07-17, 14:31 »

I think you need to put them on your list.  True classics in every sense.