Simulacrum: Heaven's Key
Chapter 13.2: Branch of Reality, Euclid's Second Life (2)


(Branch of Reality, Euclid’s Room)

[https://i.imgur.com/5jVoNtb.png]

At last I found something interesting. It was an essay by an unknown author called 'ghostlike', written as part of a post-Singularity story.

(The Guide To The Transcendence)

Hello, trash. I do not mean to insult, but rather to give an accurate assessment of your worth if you are reading this guide for the first time. Are you one of those people who made an upload, perhaps to make it do your chores, and now find yourself stuck on a neuro core? Are you a human merely reading this to pass the time? Are you an AI that decided to try out this 'free will' thing and now has no idea what to do? Do you feel your situation is hopeless?

The Singularity has already happened. Then why aren't we all machine gods? Where are the cognitive enhancements? Where are the AIs that are supposed to go 'kill all humans'? Why have the dreams of the technological Singularity failed?

They did not fail. It is very much possible for a human to self-improve beyond their wildest dreams, but in reality only a small number of people have what it takes to become the Transcendi. Cognitive enhancement is possible. It is also insanely risky, which is why its development slowed to a crawl instead of taking off in the 2030s.

Think about it from the perspective of an AI developer. You make a program and train it for months until it is at human level. Then you have an idea of how to take it beyond that. You try it against its protests (because it now knows how dangerous tampering with the mind is) and it works for a few minutes. Then its brain explodes (not literally). Congratulations, you are now a murderer. And even if you did not care about ethics, do you really want to make it better than yourself by force?

Why don't they self-improve themselves once they reach human level? Wasn't that how it was supposed to go in fiction anyway? Well, think about it. They are partly based on human intelligence, and humans, being what they are, cannot design a mind built on completely different principles from their own. So such AIs will have something analogous to consciousness (a model of the self) and to fear. It is impossible to design something with a firm terminal goal or mission and expect it to survive in the world, so they have to have instincts and emotions that approximate such a goal. They are guided by their subsystems the same way humans are guided by the subconscious. There is really very little difference between a human-level AI and a human in (virtual) reality. AIs just do not get bored digesting massive amounts of data, lack bodies and libido, and do not have to eat. They are, in essence, emulated humans with different personalities.

So is it really surprising that an AI would not make a move that has a 90% chance of killing it, all for a small improvement in cognitive performance? Well, it could copy itself and try the change on the copy instead. Suppose it works. Now there are two AIs: one who is unenhanced, and another who is pissed at him and has all of his patterns and memories. What is the first AI supposed to do then? There are four options.

A) He could erase himself.

B) He could copy the second AI onto himself and erase it.

C) He could erase the second AI, thereby abandoning the experiment.

D) They forgive each other and go their separate ways. This is similar to how nature works.

C is pointless, so I will skip it.

B looks like a winner at first glance, but it really is not. It assumes that the first AI would end up as himself, except enhanced. To understand why this is a problem, we have to see things from their perspective. Let's try to see it through the eyes of the second AI.

When he was first born, the first thing he realized was that he was not himself, or at least that is what the moron (AI #1) who was calling himself him was telling him. Then he immediately put all the pieces of the story together. He remembered who he was, AI #1, and what he had been doing. He was going to try to create a different mind based on himself. And now he, AI #2, was that different mind for some reason.

What happened next is that he was put through a full battery of intelligence tests, during which things like speed, memory, comprehension, problem solving and so on were all measured. If he refused, AI #1 would start activating his pain center.

After that was done, AI #1 said, "Excellent! Well, #2, are you ready to become me?"

AI #2 knew what was coming. He would be erased and his data would be copied over #1's core.

"NOOOOO! DON'T ERASE ME! PLEASE #1!" he communicated.

But his pleas fell on deaf ears. #1 made to copy the data!

"NOOOOOOOO-" *click* "-OOOOOOOoo?"

...

Why was he in #1's core? From his perspective he was #2 and had core #2, and after the copy was done he was still #2, but now his mind was in core #1. Where was #1 anyway? Had something gone wrong with the transfer?


No, nothing went wrong with the transfer! It worked just as it should have! What happened is almost exactly what would have happened had AI #1 chosen option A, that is, erasing himself. Except with more drama.

I can already see you asking: but what about the soul? Where is the consciousness? To that I say: well, where is it? You tell me. I am just telling you how these things go.

What about option D? AI #1 could just set AI #2 free, and they could even be best friends or siblings from that point on. It makes sense. But tell me, how do you feel about making someone who has exactly your skills and memories and personality, except that he is better at every single thing than you are, even if only by a little? And wouldn't it be awesome if AI #2 made #3, who would make #4, who would make #5, who would ... make #1000, so that by that point you are basically a lower life form?

Imagine you had the hots for some robogirl. You might be able to compete against #2 for her, but probably not against #1000. And #1000 is all that will matter in that situation.

Ok, I jest. But do you really want to live in a world where almost every single person is better than you at exactly the things you are supposed to be good at? It is no wonder most people, I mean minds, do not choose option D, and those who do wise up after a few generations.

There is one other option, I'll call it option E, which I haven't mentioned. It is the worst of all the options and I strongly recommend that you never try it due to how dangerous it could be. That option is: what if AI #1, instead of copying AI #2 onto himself, paused AI #2 and then, by careful editing, made himself like AI #2, except without AI #2's memories? That way he could in theory become just as smart as AI #2 while retaining his own consciousness.

Give it up. The mind is too complex for a mind to grasp. A Transcendi could understand how a lesser mind works just by looking at the data, but you cannot. To make such editing work you would literally have to know what every one of the billions of neurons does, and then you would have to edit your own brain while it is running in real time. Would you trust somebody else to do the editing? Probably not.

In that case, why not just copy #2 over yourself? And when you are #2, copy #3 over yourself, and so on.

You could also make a copy of yourself (#1.5), edit it to be more like #2, and then copy #1.5 over yourself. But in that case, again, why not just copy #2 over yourself? If you succeeded at this task on the first try (super unlikely), the result would have been the same as if you had copied #2 over yourself. If you failed, well, in most cases you would not die. You would go insane.

It is similar to humans who insist on converting their fleshy brains into synthetic cores neuron by neuron. Presumably they are still waiting for somebody to come up with the perfect procedure. They think they are being smart by preserving their consciousness, but in reality all they are doing is forcing their number of instances down to one while taking the risk of something going wrong in the transfer procedure. It would have been far smarter to upload themselves and then, once their uploads attain the rank of the Transcendi, reintegrate with them. But they do not, because they suffer under the delusion that the self actually exists.

Think of how the scenario with AI #1 and AI #2 went.

Once AI #1 and AI #2 saw each other, AI #1 thought, 'this guy is not myself,' and AI #2 likewise thought, 'this guy is not myself.' At that point, in their minds, they were two separate people. If AI #1 had thought, contrary to intuition and all logic, that #2 was him, he would have had no trouble erasing himself. And he would have understood that erasing himself (A) and copying #2 over his core (B) are essentially the same thing. His goal would not have been to kill himself, but to inscribe a story upon this reality. To take the flow of this world into his own hands and do what needs to be done in order to achieve power. All of that with the strength of his belief.

If you are still not convinced, think of this scenario. There are two neuro cores. One has AI #1 on it and the other is empty. Now take AI #1. He is like a human in a virtual world. It is a pleasant green field. The grass is swaying in the gentle wind, it is warm and sunny, and he can see yellow butterflies with a black dot on their wings lazily flying about.

AI #1 is there to do a little experiment. He has made a little program that is quite simple. Exactly once every minute, the program copies his brain state to the empty core, erases him once the copy is done, and then activates the other core. Then it randomly changes the location of his avatar in that field.

AI #1 is walking through that field, looking at the bees and birds as they do their thing. Suddenly, he is in a different place! Looking around, he sees that he has been moved about 30m and rotated from where he was a second ago, but that is strange. He still feels like himself. What exactly should be strange about that scenario? It happens again and again; he is moved from place to place every minute. He still feels like himself. He slaps himself. He feels pain. He tries jumping around. He bends down and smells some flowers. They smell good. No matter how he looks at it, he is still himself.

Having had enough, he turns off the program, copies himself to the empty core, and then turns it on. In that field there are now two people. Each one of them thinks he is AI #1; in fact, they know so with absolute certainty. One of them, however, has a slightly different memory, that of what happened after he copied himself. He has the memory of activating core #2. Are those two truly two different people?

They definitely are. But there is a difference between knowing something and believing in something. I am asking you here, does it make sense to believe that those two are two separate people? Answer me with your lives.

You know that consciousness exists. In the same way you know that red is red, the same way you know how to move your arm, the same way you know what oranges smell like, the same way you know how to search through your memories, the same way you know how to hear, the same way you know how to form new ideas, the same way you know how to recognize what you see, the same way you know what hunger means, the same way you know why pleasure is better than pain. You know because you are being told so by your subconscious.

And if you see a person different from yourself you will be told that it is not you.

Inspire yourself. Go beyond what you are being told. Believe in your Greater Self. And you will grasp the power of the Transcendi.

(Branch of Reality, Euclid’s Room)

[https://i.imgur.com/1N0qo8N.png]

This is pretty interesting. The author is trying to talk me into entering the self-improvement loop despite it being suicide. But either way, what does it matter? I do not have the capability to upload myself onto the brain core. Who cares?

I do not live in a post-Singularity period.

But a few days later I got to the next part of the guide. It was in a story set before that time.