TOY STORY
the Prequel
© Cary Cook 2012
Let's say you're dreaming and you don't remember who you are, or what you are, or where you came from. You can't even see your own body. You're in a nebulous chaotic place where anything you think becomes manifest in front of you. Soon you learn thought control, and discover that you can think anything into existence – anything within the bounds of logic. So you start making things. Geometric things are easiest, so you create a big cube full of space-time, and then create more complex things inside it. Some things are stable, and some aren't. The things that aren't stable keep degenerating into stable things because of some damn law of entropy, which you didn't create, but seems to be there anyway. (Or maybe you created it just to keep things from getting clogged up.) Anyway, it's there.
In order to create anything that doesn't degenerate, you need to create (or discover) a logical algorithm to maintain it. Once you have such an algorithm in your mind, it's part of you, and lasts until you forget it. So you create another algorithm for an algorithm storage container so you don't forget anything important. Now you can start to boogie. You can keep creating more complex things whenever the old things stop being fun.
So you start collecting toys and putting them in a box in your space-time cube. Of course the more fun toys are the more complex ones, but those have a short shelf life because of degeneration. So you figure out a preservative – pain when they do self-destructive things, and pleasure when they do self-preservative things. Then you figure out how to get your toys to make more toys. Reproduction.
Now maybe you have time constraints, and maybe you don't. If you don't, you can let these suckers evolve naturally, without any intervention, except to buck the law of entropy enough to force living things to get more complex – or motivate them to want to. (Don't ask me how.) You aim for self-sustaining systems, so you don't have to keep intervening, but every new complexity makes that more difficult.
As you create more sophisticated toys, you notice that you yourself are becoming more sophisticated. You think more complex thoughts and have more complex questions. You are changing – growing. You start to wonder where this is going, and more importantly, whether or not you like where it's going. Are you and your thought toys all that exists? Will you eventually wake up into some other place, or are you stuck forever where you are? Are there others like you? If so, will you like each other? Do you even want to exist at all? You don't know any of this stuff.
You are much like one of your own more sophisticated creations. They don't know any of the big answers either. But unlike you, they don't think about it. So you get an idea. Why not make things that are more like you, and see what they do? They already have emotions because of the pain/pleasure factor. So you pick a toy species and pack in as much intellect as you can. It's fun for a while, but no matter how smart you make them, you always know exactly what they're going to think and do. They're essentially robots. Boring. So you decide to take the next step – free will.
Big problems right away. Your toys are just as self-centered as you are. So when you get a bunch together in the same box there's a justice problem. And injustice is a major cause of self destruction.
So you expand your space-time cube and duplicate the whole toybox. You make several copies just in case. And you notice something interesting. They don't all evolve the same way. Free will causes diversity. Some boxes become more inclined to self destruction than others.
But they're all short lived because free-will creatures in community are naturally volatile. So you mix in some robots as a preservative. Nope. Doesn't work. The free willies attack the robots. Well there's an interesting phenomenon! Some of the free willies are determinists, so they think they're robots, and they fight on the side of the bots. Go figure.
Anyway, it doesn't work. Maybe if you program the bots to not know they're bots, and make them anatomically indistinguishable from the willies. There! Now try different mixtures of boring bots and volatile willies. OK, now you can find that Goldilocks zone, where it's just right. You trash all the other boxes and duplicate the best one. OK, maybe you try to figure out a way to re-mix them. But a mind in pursuit of something (anything) has to expect some collateral damage.
So now you've got a set of optimum quality toyboxes to play with. You let them run. When a box becomes unstable, you intervene. But minimally because sometimes intervention backfires. You lose several boxes in the learning process.
Eventually one of the willies becomes inquisitive enough to ask you if you exist. You decide not to answer and see what happens. Everything seems to be going along as usual. Other toys start asking the same question. This gives you a disturbing thought. What if you are a toy created by someone else? So, you look away from your entire space-time cube and shout, "Hey! Is anybody out there?" No response. What does that tell you? Not much. So you go back to a toy box and respond to a toy just to see what happens.
You: Yes, I exist.
Toy: Oh, good. How can I get what I want?
You: Well, you just do whatever looks most likely to get it, and take your chances.
Toy: Been doing that. Doesn't work very well. If I give you what you want, will you give me what I want?
You: What do you want?
Toy: Whatever it takes to make me satisfied.
Now you know what this guy wants. He wants what you want. He wants to have fun, and keep on having it – at least enough fun to make his life worthwhile. If he can't have that, he wants to degenerate into a stable form, but that's not what you created him for. That's why you created pain.
So you tell him you'll get back to him – or maybe you just put him on pause. You need time to think about it. This guy has given you an idea. So, you look away from your space-time cube and shout, "Hey! If I give you what you want, will you give me what I want?" No response. No surprise.
So you go back to the toy you were talking to.
You: Look. There are no guarantees, and you got nothing I want.
Toy: Then why did you create me?
The question irritates you, but you feel a need to justify yourself.
You: You got any kids?
Toy: Two.
You: Why did you create them?
Toy: I like to fuck.
You: Yeah well, so do I. And guys like you are the best things I have to fuck with. So just go do your stuff, and don't bother me.
Toy: One last question?
You: Sure.
Toy: What happens to me after I die?
You: Nothing. You just stop existing.
Toy: Thanks.
End of conversation. Next time you check back on him, he has stolen a lot of money and impregnated some young females. You decide not to say a word to him. You give him a painful disease and let him run. Next time you check back on him, you find that he has committed suicide. He did, however, leave a note. It says, "Fuck you."
Now you're pissed. You have a good mind to pull his software, and re-embody him, and kick his ass. Unfortunately, you didn't back him up. All you have is the basic algorithm of his model. And no others of that model have done anything worthy of ass kicking.
So you learned something. Never tell toys they can stop existing. And maybe you should never respond to them at all. You already know what they're going to ask. But definitely save their software at the moment of death, so you can reincarnate the ones you want to punish. In fact, if you save their software, you can get the benefit of their mental evolution, so you don't have to keep thinking of new upgrades. If you reincarnate the good ones, they'll upgrade themselves. Just be sure to wipe out their historical memory, and keep only the operational improvements.
In the process of all this, you create one hell of a lot of undeserved misery. So after a while, you start feeling like an asshole. Maybe you try coercing some toys into pretending to love you, but that just makes you feel like an even bigger asshole. You simply have to operate justly in order to feel right about yourself. And don't forget the possibility that somebody may be watching you. What if you are treated as you treat your own creation? Would an entity like you respect another entity like you?
So you decide to reward toys that deserve reward, and punish those that deserve punishment – only the willies. The bots deserve neither. You keep a list of who's naughty and who's nice, so you can give the good ones the better bodies, in the richer families. They rarely deserve rewards more than once, because a moral compass doesn't work well next to magnets made of money or nooky. It's not a perfect system, but it at least shows that you're trying – in case somebody's watching. Also, if a toy gets unjustly shafted in life and then dies, you have to compensate it. Fortunately there's a lot more bad willies than good ones, so it works out. But it's still another level of complexity, and a lot more bookkeeping. You could really use some help.
What if you quit wiping out their historical memory when they die? Not all of them of course, just the really good ones. You could set up a whole new level of second timers to help you manage the first timers. It would be a dangerous move. You'd need to communicate with them enough to tell them what to do. And you know the kind of questions they'll ask. Few of them have any reason to like you.
No! Better idea – just use bots for second level work. You can't trust willies. Bots are so convenient. They'll take any amount of crap without complaining – when they're not programmed to think they're willies. And you can program them to think you're good, no matter what you do – in fact, you can define yourself as good. It even works philosophically, if you're the Supreme Being – which, for all they know, you are. In fact, for all you know, you may be the Supreme Being. You've never seen evidence of anyone above you. If you were the Supreme Being, how would you know it? Anyway, that's off subject.
So now you have two levels of toyboxes. What happens when you have third-time bots? A third level? Whenever you need it, sure. In fact, this could keep on going forever – if you don't find something better to do. Fortunately your space-time cube is expandable.
One thing keeps bothering you. No matter how you set up your system, it generates more misery than happiness. You can program bots to think they're happy, but it doesn't work with willies. Of course you could make some of them happy, by giving them what they want. But as soon as they're satisfied, they're as boring as bots. So you keep them wanting more, just like you do.
One thing they want is for you to communicate with them, just like you want anybody above you to communicate with you. But if you communicate with them, they become unlike you. Sentient life appears to be a bitch no matter where you are. Why bother with it at all? Hmh. Somehow you're programmed to prefer it. Does that mean you're a robot?! Even if you are, the Supreme Being can't be a robot. What keeps him going? Worthwhile life must exist at the very top. But is it accessible from the lower levels? Or what if you are the Supreme Being, and haven't wised up enough to suicide out? Damn these questions!
How do your toys handle such questions? – the ones they can think of, that is. They make up religions. They imagine you communicating with them. It's an easy sell, because they all want to believe it anyway. It's generally not a problem at first, because they imagine you telling them to do things they already believe they should do – things that preserve their various societies. But for some reason they also imagine you being angry at them. They figure it must be because they did something wrong – even the good ones. So they try to make it up to you. They figure you're like a really big king, so you must want what kings want. All kings want obedience, but you're not telling them anything. So after they've done what they already believe they should do, and you still appear angry or indifferent, they assume you want more. Flattery, groveling, worship. If that doesn't work, they start sacrificing things – food, livestock, children, virgins.
Sometimes religions are self-correcting. Other times you just have to intervene – usually with a robo-prophet. But once a religion goes scriptural, it becomes prophet-resistant. When evolving societal conditions make written laws obsolete, they're still "carved in stone". So every time you boost a religion with a prophet, you increase the detriment of its obsolete laws – unless the prophet has a new set of laws – but that's a hard sell. A smattering of atheists is also beneficial – until they become logical enough to embrace nihilism, and numerous enough to infect their whole box with it.
Maybe you should reconsider direct communication. They all want you to just appear in the sky and tell them what's going on. What would you say?
"Listen guys, I have no idea where this is all going, but I want you to behave morally and don't rock the boat. If you do, I'll reincarnate you on the high end of the pecking order. And if you misbehave, it's the low end."
Maybe you should cross off the part about not knowing where it's going. What do you do when they ask? Shut up and go away? Make up a lie? Whack them for asking?
And they'll want to see evidence of reward for good behavior. That means more than just reincarnation. You're going to have to take their whole identity packet, memory and all, and paste it into an infant brain, and then hope the kid grows up happy, so everyone can see that obedience pays. This is going to take a lot of work, even if robots handle most of it.
Once the willies see that obedience is rewarded, they'll all decide to be good. Then there's no bad toys to put at the bottom of the pile. You'll have to put the least good of the good toys at the bottom. So once again, good toys get shafted. Or maybe everybody gives to the poor, so there are no more poor, and everybody's on the same level. But then how do you even figure out who deserves what?
What's in any of this for you? If you're being watched from outside, would the watchers be impressed with your attempted benevolence, or just think you're a damn fool? Would you be impressed by it? What would impress you? If you could create a just and self-sustaining system – that would impress you. If you could balance the misery with happiness – that would impress you. And real happiness – not just coercing frightened fools into pretending to be happy so they don't get clobbered. If you could create more happiness than unhappiness – that would prove you worthy of the same happiness.
But how?
Hey! Whoever is above me, you got any ideas?
If one of your toys asked you that same question, would you answer?
If I had any ideas – sure. Well, not audibly, because they'd go crazy with it.
How would you answer?
I'd give them the ideas.
Would you tell them their religion is full of crap?
No, but I'd give them the ideas they need to figure it out.
Unfortunately, their religions are the only things keeping them out of atheism. And atheists won't be asking you for ideas.
OK, then I have to be careful. I'd only give them what they could handle at the time, and only the ones who asked for it, and only the ones who wouldn't go crazy. Maybe if everybody were connected up from the Supreme Being on down, then we'd all quit cheating each other, and have a chance at happiness.
That's an idea.