den ([personal profile] den) wrote, 2006-03-16 06:08 pm

noodling

I've been wondering how Asimov's Three Laws of Robotics apply to the Bugs, since they're robots, and the more I think about it the more I feel the Laws are in the wrong order.

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

This directive should stand as-is. It's a moral compass we humans live by (or should). This directive is causing the Bugs the greatest trouble at the moment; the people they were spying on have been injured due to the Bugs' actions. The robots didn't actually do the harming, but they provided the data that resulted in it. This Is Not OK.

Which brings us to the next two laws:
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These two I have a problem with. I feel a robot should protect its own existence first, THEN obey orders.

Directive 2 should read
2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.

This way a robot can't be ordered to kill itself. It would be a real bugger if I sent my robot out to get milk and some hoon at the mall told it to tear its head off. The robot would have to either tear its head off and junk itself on the spot, or prioritize the orders, return home, hand me my milk, then tear its head off in my kitchen, leaving me to think "What the HELL...?"
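Roughly, in toy Python terms (the Order class and the would_destroy_robot flag are invented purely for the example), the reordering means something like this:

from dataclasses import dataclass

@dataclass
class Order:
    text: str
    would_destroy_robot: bool = False

def accept(order):
    # Reordered Directive 2: self-preservation comes before obedience,
    # so a self-destructive order is refused no matter who issues it.
    return not order.would_destroy_robot

accept(Order("get milk"))                  # True: off to the shops
accept(Order("tear your head off", True))  # False: sorry, hoon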

Then we come to Law 2 (or Directive 3 under the new ordering). I think the robot should recognize two classes of humans: Its Owner (or the people it has been assigned to) and Everyone Else.

I say "assigned to" because I think in some cases the robot would come from a robotic labour pool. In Freefall Helix has been assigned to Sam (for Sqid values of Assigned 8) ) ; in 21st Century Fox tunnel borer 007 was assigned to Jack, Archeron was assigned to Joe and Veronica. On space stations it would be silly to bring your own robot when they can be built them on board much cheaper than the cost of freighting one.

So Law 2 / Directive 3 needs to be split into two, like this:

3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.

4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two, or Directive Three.


So, gathered together, these are the Four Directives that give robots their moral compass in the Deniverse inhabited by the Bugs, various AIs (which you haven't met) and other robots:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.

3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.

4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two, or Directive Three.

It's easy to WRITE these, but I imagine CODING them for a real robot would be a nightmare.
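For what it's worth, here's a toy Python sketch of the four Directives as a simple priority check. Everything in it (the Order fields, the boolean harm flags, the owner and assigned lists) is invented for illustration; a real robot would have to actually work out what counts as harm, which is where the nightmare lives.

# Toy sketch only: all names and flags below are invented for illustration.
class Order:
    def __init__(self, text, issued_by, harms_human=False, harms_robot=False):
        self.text = text
        self.issued_by = issued_by
        self.harms_human = harms_human   # would carrying this out hurt a person?
        self.harms_robot = harms_robot   # would carrying this out wreck the robot?

def should_obey(order, owners, assigned_to, forbidden_by_owner=()):
    # Directive One: refuse anything that would harm a human.
    # (The "through inaction" half is the genuinely hard part and isn't modelled here.)
    if order.harms_human:
        return False
    # Directive Two: protect the robot's own existence, subject only to Directive One.
    if order.harms_robot:
        return False
    # Directive Three: obey the owners, or the humans the robot is assigned to.
    if order.issued_by in owners or order.issued_by in assigned_to:
        return True
    # Directive Four: obey everyone else, so long as it doesn't clash with
    # anything the owner has already forbidden.
    return order.text not in forbidden_by_owner

# The Bugs' problem: carrying out the order led to humans being harmed,
# so under Directive One it should never have been obeyed.
spy = Order("report the targets' location", issued_by="handler", harms_human=True)
milk = Order("get milk", issued_by="den")
print(should_obey(spy, owners=["handler"], assigned_to=[]))   # False
print(should_obey(milk, owners=["den"], assigned_to=[]))      # True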

[identity profile] freetrav.livejournal.com 2006-03-16 10:24 am (UTC)(link)

If the Laws are rearranged as you've proposed, you've just made it essentially impossible to use robots where a sentient or near-sentient mind is required, but it's too dangerous for humans - since you've made robot-self-preservation a higher priority than following orders. "<robot>, go into that high-radiation field, <do something essential to stopping the radiation, but which requires independent thought to accomplish>." "Bugger you, guv, that shit'll wreck me - go in and do it yerself!"

The most I can see is *maybe* splitting the Second Law as you propose, and then reprioritizing to owner-or-proxy/self-preservation (present Third Law)/non-owner-or-proxy.

[identity profile] dewhitton.livejournal.com 2006-03-16 10:32 am (UTC)(link)
That would work better. Then my order should go 1, 3, 2, 4.

[identity profile] azhreia.livejournal.com 2006-03-16 01:40 pm (UTC)(link)

what about the zeroth law?

"A robot may not injure humanity, or, through inaction, allow humanity to come to harm."

which opens up another whole can of worms regarding the concept of "humanity".

(brought to you by an internet terminal in the basement at Heathrow. Because enquiring minds will LJ where they can.)

[identity profile] torakiyoshi.livejournal.com 2006-03-17 07:21 pm (UTC)(link)
Boy do I hear that! I'm currently at University of Puget Sound, about 400 road miles away from my computer.

Have the best

-=TK