Story Time
15 May 2006 23:59

The frombel joined the end of the long line of creatures, and sighed. He looked again at the script and decided his best chance of ever getting into this production would be a walk-on part on page 12 or 13. He sighed again, and wondered if he should just go home. A furute in front of him turned and smiled.
"You here for the shoot?" she asked.
Frombel stared at her. "Um. Yeah. I-- uh-- I'm going for a minor role. A walk through."
Furute nodded. "You're after a place on page 12, too? I'm hoping for somewhere here." She pointed at a place on her script.
"You know, I think you might be better suited to somewhere up the top of page 13," said the frombel. He pointed. "Up here."
"What? No way! You can't see anything up there!"
"I know, but--" Frombel scratched the back of his neck and wondered how to say what he had to say. "I don't think you're the right... shape."
Furute glared at the frombel. "Oh yeah? What about him?" She pointed at a winged creature flying overhead. It had one eye and, apart from the wings, no other appendages. "And that one looks like an elephant with no trunk. And she looks like a rabbit with no arms!" she growled, pointing at each of the creatures in turn.
"Yeah, but..." Frombel reluctantly pointed at the furute's chest. "It's just that..."
Furute looked down at her chest. "There's something in my fur."
"No. It's your... your..." Frombel sighed. "Boob."
"WHAT?"
"You have one giant boob. It's... You know. Exposed. I don't think this is the right production for you."
"Are you shitting me?" fumed the furute.
At that moment the door to the director's office opened. "Okay! Would everybody who wants to be in Dr Seuss' book 'One Fish Two Fish' please file in one at a time and we'll start auditions," said the assistant.
"Good luck," said the frombel.
"Piss off," grumbled the furute.
(They both got into the book. Page 13. Right up the top.)
"You here for the shoot?" she asked.
Frombel stared at her. "Um. Yeah. I-- uh-- I'm going for a minor role. A walk through."
Furute nodded. "You're after a place on page 12, too? I'm hoping for somewhere here." She pointed at a place on her script.
"You know, I think you might be better suited to somewhere up the top of page 13," said the frombel. He pointed. "Up here."
"What? No way! You can't see anything up there!"
"I know, but--" Frombel scratched the back of his neck and wondered how to say what he had to say. "I don't think you're the right... shape."
Furute glared at the frombel. "Oh yeah? What about him?" She poined at a winged creature flying overhead. It had one eye and, apart from the wings, no other appendages. "And that one looks like an elephant with no trunk. And she looks like a rabbit with no arms!" she growled, pointing at each of the creatures in turn.
"Yeah, but..." Frombel reluctantly pointed at the furute's chest. "It's just that..."
Furute looked down at her chest. "There's something in my fur."
"No. It's your... your..." Frombel sighed. "Boob."
"WHAT?"
"You have one giant boob. It's... You know. Exposed. I don't think this is the right production for you."
"Are you shitting me?" fumed the furute.
At that moment the door to the director's office opened. "Okay! Would everybody who wants to be in Dr Seuss' book 'One Fish Two Fish' please file in one at a time and we'll start auditions" said the assisstant.
"Good luck," said the frombel.
"Piss off," grumbled the furute.
(They both got into the book. Page 13. Right up the top.)
I've been wondering how Asimov's Three Laws of Robotics apply to the Bugs, since they're robots, and the more I think about them, the more I feel the Laws are in the wrong order.
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
This directive should stand as-is. It's a moral compass we humans live by (or should). This directive is causing the Bugs the greatest trouble at the moment; the people they were spying on have been injured due to the Bugs' actions. The robots didn't actually do the harming, but they provided the data that resulted in it. This Is Not OK.
Which brings us to the next two laws:
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
These two I have a problem with. I feel a robot should protect its own existence first, THEN obey orders.
Directive 2 should read:
2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.
This way a robot can't be ordered to kill itself. It would be a real bugger if I sent my robot out to get milk, and some hoon at the mall told it to tear its head off. The robot would have to either tear its head off and junk itself on the spot, or prioritize the orders: return home, hand me my milk, then tear its head off in my kitchen, leaving me to think "What the HELL...?"
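Just to make the re-ordering concrete, here's a toy sketch in Python; the Order class and its two flags are made up for this example and aren't from any real robot system:

    # A toy sketch contrasting Asimov's ordering with the proposed re-ordering.
    # The Order class and its flags are invented purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Order:
        text: str
        harms_human: bool      # would carrying it out breach the First Law/Directive?
        destroys_robot: bool   # would carrying it out wreck the robot?

    def asimov_response(order: Order) -> str:
        """Asimov's ordering: obedience (Law 2) outranks self-preservation (Law 3)."""
        if order.harms_human:
            return "refuse"    # First Law
        return "obey"          # Second Law wins, even if the order is suicidal

    def reordered_response(order: Order) -> str:
        """Proposed ordering: self-preservation comes second, obedience third."""
        if order.harms_human:
            return "refuse"    # Directive 1
        if order.destroys_robot:
            return "refuse"    # Directive 2: the robot can't be ordered to kill itself
        return "obey"          # Directive 3

    hoon = Order("Tear your head off", harms_human=False, destroys_robot=True)
    print(asimov_response(hoon))     # "obey"   -> headless robot, no milk
    print(reordered_response(hoon))  # "refuse" -> the milk makes it home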
Then we come to Law 2 (or Directive 3 under the new ordering). I think the robot should recognize two classes of humans: Its Owner (or the people it has been assigned to) and Everyone Else.
I say "assigned to" because I think in some cases the robot would come from a robotic labour pool. In Freefall Helix has been assigned to Sam (for Sqid values of Assigned 8) ) ; in 21st Century Fox tunnel borer 007 was assigned to Jack, Archeron was assigned to Joe and Veronica. On space stations it would be silly to bring your own robot when they can be built them on board much cheaper than the cost of freighting one.
So Law 2 / Directive 3 needs to be split into two, like this:
3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.
4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two, or Directive Three.
So, gathered together, these are the Four Directives that give robots their moral compass in the Deniverse inhabited by the Bugs, various AIs (which you haven't met) and other robots:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must protect its own existence, as long as such protection does not conflict with the First Directive.
3. A robot must obey the orders given to it by its human owners, or the humans it has been assigned to, except where such orders would conflict with Directive One or Directive Two.
4. A robot must obey the orders given to it by humans, except where such orders would conflict with Directive One, Directive Two, or Directive Three.
It's easy to WRITE these, but I imagine CODING them for a real robot would be a nightmare.
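For what it's worth, here's a very rough sketch of what that priority check might look like, just to show the ordering. The Requester classes, the Order fields and the boolean flags are all invented for this example; a real robot would obviously need far more than a handful of booleans.

    # A rough sketch of the Four Directives as a single priority check.
    # Everything here is invented for illustration.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Requester(Enum):
        OWNER = auto()      # the robot's owner, or the humans it has been assigned to
        STRANGER = auto()   # everyone else

    @dataclass
    class Order:
        text: str
        requester: Requester
        harms_human: bool
        destroys_robot: bool

    def evaluate(order: Order) -> str:
        # Directive One: never harm a human, by action or inaction.
        if order.harms_human:
            return "refuse (Directive One)"
        # Directive Two: protect its own existence.
        if order.destroys_robot:
            return "refuse (Directive Two)"
        # Directive Three: obey the owner or the humans it has been assigned to.
        if order.requester is Requester.OWNER:
            return "obey (Directive Three)"
        # Directive Four: obey everyone else, last of all. A real version would
        # also have to check the order doesn't contradict standing orders from
        # the owner (Directive Three).
        return "obey (Directive Four)"

    print(evaluate(Order("Get milk", Requester.OWNER, False, False)))
    # -> obey (Directive Three)
    print(evaluate(Order("Tear your head off", Requester.STRANGER, False, True)))
    # -> refuse (Directive Two)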