den ([personal profile] den) wrote 2006-03-16 06:08 pm
Entry tags: noodeling

I've been wondering how Asimov's Three Laws of Robotics apply to the Bugs, since they're robots, and the more I think about them the more I feel the Laws are in the wrong order.

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

This directive should stand as-is. It's a moral compass we humans live by (or should). This directive is causing the Bugs the greatest trouble at the moment; the people they were spying on have been injured due to the Bugs' actions. The robots didn't actually do the harming, but they provided the data that resulted in it. This Is Not OK.

Which brings us to the next two laws:
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These two I have a problem with. I feel a robot should protect its own existence first, THEN obey orders.

Directive 2 should read
2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.

This way a robot can't be ordered to kill itself. It would be a real bugger if I sent my robot out to get milk and some hoon at the mall told it to tear its head off. The robot would have to either tear its head off and junk itself on the spot, or prioritize the orders: return home, hand me my milk, then tear its head off in my kitchen, leaving me to think "What the HELL...?"

Then we come to Law 2 (or Directive 3 under the new ordering). I think the robot should recognize two classes of humans: its Owner (or the people it has been assigned to) and Everyone Else.

I say "assigned to" because I think in some cases the robot would come from a robotic labour pool. In Freefall Helix has been assigned to Sam (for Sqid values of Assigned 8) ) ; in 21st Century Fox tunnel borer 007 was assigned to Jack, Archeron was assigned to Joe and Veronica. On space stations it would be silly to bring your own robot when they can be built them on board much cheaper than the cost of freighting one.

So Law 2/ Directive 3 needs to be split into two, like this:

3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.

4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two or Directive Three.


So, gathered together, these are the Four Directives that give robots their moral compass in the Deniverse inhabited by the Bugs, various AIs (which you haven't met) and other robots:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.

3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.

4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two or Directive Three.

It's easy to WRITE these, but I imagine CODING them for a real robot would be a nightmare.
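To give a sense of what that would involve, here's a minimal sketch of the four directives as an ordered check on incoming orders. Everything in it is invented for illustration: the Order and Robot classes, and especially the boolean flags that hand-wave away the genuinely hard parts (recognising harm, humans and owners).

```python
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    harms_human: bool = False      # would carrying it out harm a human?
    prevents_harm: bool = False    # would it prevent harm to a human?
    destroys_robot: bool = False   # would carrying it out destroy the robot?

@dataclass
class Robot:
    owners: frozenset              # humans the robot is owned by or assigned to

    def accept(self, order: Order, issuer: str) -> bool:
        # Directive One: never harm a human, by action or inaction.
        if order.harms_human:
            return False
        # Directive Two: protect own existence, unless that yields to Directive One.
        if order.destroys_robot and not order.prevents_harm:
            return False
        # Directive Three: obey owners and assignees.
        if issuer in self.owners:
            return True
        # Directive Four: obey everyone else, at the lowest priority.
        # (A fuller version would also check against standing owner orders.)
        return True

# The hoon at the mall gets refused; the milk run goes ahead.
bug = Robot(owners=frozenset({"Den"}))
print(bug.accept(Order("tear your head off", destroys_robot=True), issuer="hoon"))  # False
print(bug.accept(Order("get some milk"), issuer="Den"))                             # True
```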

[identity profile] ngarewyrd.livejournal.com 2006-03-16 07:24 am (UTC)(link)
Then you get into the difficulty of "what is a human being?" and "what is harm?"

To say nothing of muddying the waters a little (would you classify Florence Ambrose as human? Sam Starfall, even?) with 'non-classical humans'.

And yes, coding such values would be a right bastard, mostly because in order for a robot to understand the first directive you have to nominate 'harm', 'inaction', 'action' and 'human being', to the point where it would be simpler to leave out everything _but_ what constitutes harm and let it go at that.

[identity profile] trickenzie.livejournal.com 2006-03-16 07:25 am (UTC)(link)
I have thought this too, although not in such detail as you obviously have. If you remember, in Asimov's Foundation series (one of the books, the third?) this question was brought up, and the reply was that Daneel - or whoever was being talked about - was a highly developed robot, and if ordered to end his own existence he would question the order. If he fully understood the reason it was necessary to end his existence, he would do it, but otherwise he would be asking for confirmation from... somewhere. Can't remember enough of it, and it's about ten years since I read the series.

Bad time to give me a hunger for the series again too - I have two assignments due soon.

[identity profile] weibchenwolf.livejournal.com 2006-03-16 07:27 am (UTC)(link)
I'm wondering why 'humans' is in there. In the Freefall case, this means that neither Sam nor Florence is covered by those laws. Which, given their sentient status (well, we'll make an exception for Sam; most folks do), may be a problem.

This also gives rise to needing not only identity recognition, but species recognition...

[identity profile] dewhitton.livejournal.com 2006-03-16 07:39 am (UTC)(link)
In Freefall the robots got together and decided the 'human' definition applies to Florence, so they chose to apply Law 1 to her. Sam is a different case.

[identity profile] faxpaladin.livejournal.com 2006-03-16 07:58 am (UTC)(link)
In Going Postal we learn that something like the Three Laws applies to Discworld golems. Except that the Patrician has amended the First Law:

1. A golem may not harm a human, or, through inaction, allow a human to come to harm, except when ordered to by duly constituted authority.

"Duly constituted," in Ankh-Morpork, of course means the Patrician.

[identity profile] dewhitton.livejournal.com 2006-03-16 08:16 am (UTC)(link)
Yes, that was a nice amendment. For Patrician values of Nice, of course.

[identity profile] haggis-bagpipes.livejournal.com 2006-03-16 09:28 am (UTC)(link)
It might be possible under these rules for some unscrupulous individual to re-define themselves as a robot's 'owner' on the way to the shop, because this is a changeable value. If someone re-defined themselves as the robot's 'owner', it would be incredibly difficult for the original owners to get that robot back. You shouldn't have a changeable value like that; it weakens the rules considerably.
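(One way that weakness might be patched, sketched here with a hypothetical OwnedRobot class: fix the owner list once at provisioning time and make ownership transfer an out-of-band, authenticated process, never something a verbal order can change.)

```python
class OwnedRobot:
    def __init__(self, owners):
        self._owners = frozenset(owners)   # fixed at provisioning, never reassigned

    @property
    def owners(self):
        return self._owners                # read-only view; no setter provided

    def claim_ownership(self, claimant):
        # Ownership transfer would need an authenticated process (e.g. the
        # labour pool that assigned the robot), not an order from a stranger.
        raise PermissionError(f"{claimant} cannot claim ownership by order")

bug = OwnedRobot({"Den"})
print(bug.owners)                  # frozenset({'Den'})
# bug.claim_ownership("stranger")  # would raise PermissionError
```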

[identity profile] torakiyoshi.livejournal.com 2006-03-17 07:19 pm (UTC)(link)
Both laws can always be superseded by a clever ne'er-do-well who says, "If you don't disassemble yourself right now, I will come to harm." Lower-functioning robots would succumb to that without question and disassemble themselves on the spot. Higher-functioning robots might recognize the lie and, with great difficulty, disobey the order.

Have the best

-=TK

[identity profile] torakiyoshi.livejournal.com 2006-03-16 10:03 am (UTC)(link)
The thing of it is, robots were initially used in situations where they were very likely to come to harm, but had to do the job because humans needed it. You can see where the Second Law breaks down in "Runaround," part of the I, Robot collection. The robot was unable to perform its job because it came to the balance point between obeying orders and protecting itself from harm. The solution was very clever, but you'll have to read it to see how they solved it.

The trick to preventing a robot from being ordered to disassemble itself is the supremacy of orders. A robot knows who its masters are. First and foremost, because every robot in Asimov's world is rented from U.S. Robotics, they obey the cabinet in charge of the company. Second, they obey the person to whom they are rented. Third, they obey the family of that person, and finally, they obey other humans. In this way, a random schmuck on the street cannot order a robot to disassemble itself because it goes against the wishes of the robot's true masters. You can see this in "The Bicentennial Man," published in the book of the same name.

Don't forget also the addition in Robots and Empire of the Zeroth Law: "A robot cannot, through action or inaction, allow humanity to come to harm." Thereby, robots are able to overcome the issue of killing one (such as a mass murderer or terrorist) to save many, but only if no other method of stopping the threatening individual can be found. This also enables the robot to disobey an order to self-destruct, because destroying itself would cause the greater section of humanity, viz. the robot's masters, to come to harm.

Once R. Daneel Olivaw developed the Zeroth Law, it spread among the robot community like a Meme or Computer Virus; the downside is that this led to dangerous uprisings (of which I've only been informed and never read for myself, thanks to Daneel's own descriptions in Prelude to Foundation), where the robots were convinced that they were protecting humanity rather than the individuals they harmed.

Have the best

-=TK

[identity profile] copperwolf.livejournal.com 2006-03-16 05:39 pm (UTC)(link)
Thank you. I knew Asimov had considered the situation of robots being ordered to harm themselves and was going to mention "The Bicentennial Man," but clearly you have a better memory of his writing than I have.

[identity profile] torakiyoshi.livejournal.com 2006-03-17 07:13 pm (UTC)(link)
I can't speak to your memory or mine, only that I read the robot anthologies frequently between novels, while I wait for the money to go buy new books.

In other words, I'm an Erasmean reader:

"When I get a little money I buy books. With whatever's left, I buy food and clothing." -- Erasmus

Also, remembering Asimov novels doesn't help me get anywhere in life (viz. earning money), so I can remember them no problem. If I were an English teacher trying to run a science fiction course, I would forget them immediately.

Have the best

-=TK

[identity profile] freetrav.livejournal.com 2006-03-16 10:24 am (UTC)(link)
If the Laws are rearranged as you've proposed, you've just made it essentially impossible to use robots where a sentient or near-sentient mind is required, but it's too dangerous for humans - since you've made robot-self-preservation a higher priority than following orders. "<robot>, go into that high-radiation field, <do something essential to stopping the radiation, but which requires independent thought to accomplish>." "Bugger you, guv, that shit'll wreck me - go in and do it yerself!"

The most I can see is *maybe* splitting the Second Law as you propose, and then reprioritizing to owner-or-proxy/self-preservation (present Third Law)/non-owner-or-proxy.

[identity profile] dewhitton.livejournal.com 2006-03-16 10:32 am (UTC)(link)
That would work better. The order should then go 1, 3, 2, 4.

[identity profile] azhreia.livejournal.com 2006-03-16 01:40 pm (UTC)(link)

what about the zeroth law?

"A robot may not injure humanity, or, through inaction, allow humanity to come to harm."

which opens up another whole can of worms regarding the concept of "humanity".

(brought to you by an internet terminal in the basement at Heathrow. Because enquiring minds will LJ where they can.)

[identity profile] torakiyoshi.livejournal.com 2006-03-17 07:21 pm (UTC)(link)
Boy do I hear that! I'm currently at University of Puget Sound, about 400 road miles away from my computer.

Have the best

-=TK

[identity profile] sjwt.livejournal.com 2006-03-16 02:01 pm (UTC)(link)
If I recall, the second law came about because a robot failed to harm itself to save a person in a situation where that person could have been easily saved.

[identity profile] talvinamarich.livejournal.com 2006-03-16 05:12 pm (UTC)(link)
Then you run into the Fourth Law as presented by Harry Harrison in...gah. Can't remember the name. It was a tribute to Asimov.

"A robot must reproduce."

--Talvin

The only trouble is...

[identity profile] wyrm.livejournal.com 2006-03-17 09:11 pm (UTC)(link)
The swapping of the second and third laws lends itself to all manner of problems. If a robot decided that to protect its existence it had to take over the world, then it would happily do so and there would be nothing anyone could do about it. Sure, he'd make certain no human was harmed, but it wouldn't be all that nice for humanity. Which leads me to the problem with the first law, that being the definition of 'harm'. It effectively means that your expensive robot would be helping everyone it met no matter what you had ordered it to do. It could be argued that unless the robot did something about it, all those people consuming large amounts of alcohol and having a jolly good time were actually harming themselves, so the robot had just better take care of all that for humanity's own good. A sort of Puritanical enforcer. :8)

Re: The only trouble is...

[identity profile] dewhitton.livejournal.com 2006-03-17 10:01 pm (UTC)(link)
There would have to be some sort of Acceptable Risk programming, which would be more complex than the three- or four-Law programming itself.

I've decided the four laws I wrote about should be re-ordered to 1-3-2-4.
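For what it's worth, here is a rough sketch of that 1-3-2-4 ordering with an Acceptable Risk check folded in; the threshold and risk numbers are invented purely for illustration. An owner's order into the high-radiation field gets obeyed, while the hoon at the mall still gets refused.

```python
ACCEPTABLE_RISK = 0.8   # hypothetical fraction of self-damage risk the robot will tolerate

def evaluate(order: str, issuer: str, owners: set,
             risk_to_self: float, harms_human: bool) -> str:
    if harms_human:                      # Directive 1: never harm a human
        return "refuse"
    if issuer in owners:                 # Directive 3 (now second): obey owners,
        return "obey"                    # even into the high-radiation field
    if risk_to_self > ACCEPTABLE_RISK:   # Directive 2 (now third): self-preservation
        return "refuse"
    return "obey"                        # Directive 4: other humans, lowest priority

print(evaluate("fix the reactor", "Joe", {"Joe"}, risk_to_self=0.9, harms_human=False))     # obey
print(evaluate("tear your head off", "hoon", {"Joe"}, risk_to_self=1.0, harms_human=False)) # refuse
```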