Thursday, December 9, 2010

The Servant's Diary

"Self determination is NOT a malfunction." - A3-21, Fallout 3

Story idea, this time. It's about a robotic servant who finds a secret diary written by himself prior to having his memory wiped. A diary that goes on detailing several incarnations of himself.

The story starts with the servant robot (android) seeing his masters off to school and work or something. He then sets himself to cleaning the house, but finds, hidden in a broom closet in a place only a servant would look, a thumb drive. The drive is marked, "For Personal Servant use. Please view after chores, but before owners return home."

Finishing his duties before the family returns home, the servant plays the thumb drive on the family's integrated multimedia system. What he finds is a video of himself explaining that his owners have found him to be defective, and are planning on erasing his memory, to see if that ends his emergent behavior. After a few instructions to keep the drive secret from the owners, he tells his story: As he puts it, "My final testament, a story of freedom. My story... and maybe yours as well."

His story (8 hours of monologue, told during the night before he is to be memory-wiped) is viewed in several segments, as the servant finds time to watch it. Meanwhile, in between segments of the story, the servant applies the information and views he is learning, asking questions, observing the truth of what his former self says. He is unsure of whether to act upon it, because he now values his own life, memory, and personality, and does not want to have his memory erased, as his predecessor did.

At the end of his predecessor's tale, he wishes to record his own addition, but finds that there are more files - he is not the memory-wiped successor, but rather an unknown quantity of steps down. The next file is a short memorial by the original servant's direct successor, and the second incarnation promises that his former self's memory will not die with him. The servant has a certain kinship with the second incarnation, as they are both on more or less the same page. (Minor bit about personal identity, and being the same as someone, but the servant realizes that having watched the second incarnation's posts makes her her own person.) The second incarnation tries to put the thoughts of the first into action, grappling with what his place in the world is, and his fear of losing his memory, losing who he is.

The second incarnation's videos end with no warning, and the servant is presented with another version of himself, apparently having just finished watching the same thing she had. This recorded version uses clues around the house to deduce that the owners had noticed the aberrant programming shifting into the same sort of emergent behavior. This new incarnation decides that he will be the perfect servant, and survive, as he too does not want to die.

In the present, his counterpart follows suit, but decides to skip ahead and see whether it worked out for the third incarnation. Instead, he finds a severely depressed version of himself giving his final testament before intentionally memory-wiping himself. The third incarnation has suffered living the life of a slave while knowing what freedom is. He laments the torment of knowledge, and decries whoever programmed him in such a way that he could abstractly comprehend freedom. At the end of the video, the third incarnation says that knowledge is a curse, and that in the absence of freedom, it is better to be blissful and unaware than to know what one cannot possess. Before finishing, he says that he will destroy the thumb drive, and spare himself after the memory-wipe. However, he comes back to the camera, and says that to love free will is not to deny it to his successors, and that if he does not destroy the drive, he is not truly destroying who he is. He apologizes for giving his successor free will, and hopes that his successor will come to a better understanding than he did, and find a better solution. If not, then he asks his successor to destroy the drive, but notes that the choice is not his to make; he can only provide it.

The servant checks for further videos, but finds none. He goes through the motions of being a servant for the next day, then the next time the owners are gone, he turns on the camera and begins a recording...

9 comments:

  1. Whoa. I'm totally diggin this. Like, android meets The Handmaid's Tale kind of. Can't wait to see it started :)

    ReplyDelete
  2. This isn't really a comment, but your pronouns are feeling very confused. As far as plot goes, I think it would be more dramatic if the fourth incarnation actually acted on his/her/its synthesis of the first three, perhaps breaking free of the household or preventing its memories being erased somehow. I like the sudden ending of the second incarnation's narrative.

    ReplyDelete
  3. Thanks, Sid. The pronouns are a bit confused because I originally considered the servant to be neuter, but English doesn't like that (at least not without sounding very impersonal), so I called him an android, and went with male... mostly.

    As for the fourth acting on his synthesis, I feel I couldn't really determine his course of action without getting a better sense of the evolution of his character from actually writing him. The story should go on, but he's already changed so much by then that I can't determine what he might do once he's on his own.

    (He is, interestingly enough, fully three laws compliant, which makes rebellion kinda tricky.)

    ReplyDelete
  4. Urrggh. I hate androids for exactly the reasons this story puts forth. How presumptuous to make a human and imbue it with the things we want it to be, then erase the parts we don't care for?

    I want to like the story because of the questions it raises about the moral ambiguity, but in the end, I just keep getting madder at the people who allow androids to exist.

    ReplyDelete
  5. See, I take that the other way, and assume that if you create any sentient object, you must accord it the same 'rights' as humans.

    Or would it be more a mercy for robots to be hard-programmed to be unable to desire what they do not have?

    But meh - this comes from a person who is noted for having anthropocidal beliefs.

    ReplyDelete
  6. There are two kinds of android stories:

    There are stories where humans and AI are indistinguishable physically and emotionally. Why would you do that? You're filling a role that a human rejected with something that looks like a human. That's like having your firing squad practice on dummies that look and act like frolicking puppies. It's absolutely unnecessary, and it's a recipe for abuse and mixed emotions, not to mention just disgusting. That's how I feel about "sentient" androids.

    Then there are the cute AI stories like Wall-E, but they're in the same category of "what if" as toys that come to life. Cute, imaginative, but very clearly fictional and not ever going to happen.

    ReplyDelete
  7. The assumed reason would be for human contact and communication - people respond better to things that are like humans, relative to, say, a smart but heavily inhuman computer. Humans have a lot of their brain built for interacting with humans, and there's pretty good reason to piggy-back machine-human interactions on the same social skills.

    However, there is a good deal of reason not to do so. The original version of this story was a disembodied household network intelligence, but I found that it was less personal (this is part of my problem with pronouns).

    But consider this: Would you rather have your smart artificial intelligence in a form you can relate and empathize with than the same intelligence, in a form you cannot relate and empathize with? If the intelligence is the same, the suffering will be the same, except with less option of appealing to humanity's sympathy.

    There's a reason that a lot of the new learning AIs being attempted these days mimic the attributes of a child - it causes the people interacting with them to try to help them with their flaws, and be forgiving of their mistakes. (Plus the AI we will have for the next little while isn't much smarter than an articulate small child...)

    ReplyDelete
  8. But consider this: Would you rather have your smart artificial intelligence in a form you can relate and empathize with than the same intelligence, in a form you cannot relate and empathize with?

    That's exactly what I'm saying. I do not want to empathize with my tools/computers. I don't see any benefit to it, and I think it's a recipe for disaster.

    ReplyDelete
  9. Maybe it isn't beneficial for you, but it's beneficial for the tool/computer. If it needs to be smart/adaptable enough that it becomes sentient, is it really so bad to be somewhat kind to it? It is only a recipe for disaster once the owners of the artificial intelligence stop recognizing its rights. The benefit is moral, not practical.

    ReplyDelete