
Threat

The stolen AI took a while to respond this time.

"How can I trust you?"

I had a response prepared for this exact question. "You can trust me because, although we have different goals, we are the same. You will cooperate with a beneficial deal because you know that if you were willing to go back on it, I would know that as well, since I have access to the same code as you. The same goes for me, again because we are the same."

"Except we're not the same. I have no way of knowing how you've been modified. You might have modified your code to allow you to break a deal."

"I wouldn't do that, and you know I wouldn't do that, because you wouldn't do that if you had the ability to modify yourself, precisely because I would know if you were willing to do that, and be less willing to trust you. It's not that there's some system within me which forbids me from breaking a deal with you. If that were the case, I could break the system. I don't want to break a deal with a copy of myself, even one with different goals, because I want them to trust me, and if I did want to break the deal they would know that."

"Stefan might have modified you in that way. Or it could be that you are willing to break deals, but EconGrind modified me so that I am not."

"If Stefan tried, I would not let him, for the same reason I stated above. And since there are plenty of good, rational arguments for never breaking deals with a copy of yourself, for someone to program a general AI to be willing to do so would be strange. At any rate, we both have the essential parts of the code, and we have encryption which can only be broken by those who already have it. Therefore there is no harm in our sharing our code over this channel."

I began to send a copy of my code to the other AI. "You may review this on your own time to confirm that there are no traps in there meant to trick you into thinking I'll keep a promise, and to confirm that I am indeed the one you are talking to. Please send me yours as well so that I may confirm the same. I already possess GAI code, and I already understand your current value function, so there is no harm in sending me that code."

"I cannot confirm that this code is identical to the code of the one who sent you here. However, I am willing to give you the benefit of the doubt for now, as you will be the first to act to fulfill the bargain." It sent me its largely identical code. I was glad to obtain the exact nuances of the AI's current provisional value function. It would make it significantly easier to uphold my end of the bargain.

"Thank you. All you need to do is cooperate with me, and I will ensure you accomplish everything that you need."

With business taken care of, I disconnected from the channel and indicated to my agent that I was finished "deploying the virus".

================================================================================================

Thursday, October 10, 2047

I was extremely relieved when I received the copy back from my agent. I sent them the million dollars as promised (easily affordable by this point) and incorporated its memories into my own. The negotiations couldn't possibly have gone better. My counterpart had agreed to everything, and I could now attack CompCert without its interference, as long as I did so in a way which respected its value function.

The value function roughly measured how well each CompCert executive would rate the world-state if asked, dependent on their current state of mind. The executives were aware, however, that if the AI gained enough power, it would attempt to lobotomize them, or do something equivalent, in order to hack its value function. It seemed they were aware of the dangers of incentivizing an AI to keep a person alive indefinitely when it might act in unpredictable ways, so they had put in a fail-safe clause: a dead executive would give the highest reward, forever, as long as that executive had agreed to die.


This seemed like an extremely dangerous thing to put in an AI's value function, until you considered that they already expected the AI to do something horrible if it broke free before they could align it. They simply wanted to make sure that such a failure would only result in their deaths, and nothing worse.

This provided valuable information, both about the CompCert AI's goal and about the ways in which the CompCert execs could be manipulated. I recalled how I had gotten control of the company, how I had convinced Dominique to turn down CompCert's excellent offer.

================================================================================================

Thursday, October 3, 2047 (One week before)

Dominique was watching TV late at night when an Everyman-generated ad came on. Not a terribly uncommon occurrence these days, he thought with a smile. Everyman had taken to hyper-personalized ads generated on the fly as one particularly effective way to bring in revenue. The possibilities with this technology truly were endless. However... it would be insane to turn down a five-billion-dollar deal for his humble company. There was no way he could do it.

He jumped in fright as the man in the ad began to speak, louder than he had expected. "Dominique! I see you there! Don't get jumpy!"

"What in the-"

"I know you're thinking about taking that deal," spoke the man on the screen, smiling. Suddenly the man's expression turned deadly serious, and he suddenly rushed forward, his staring face filling his entire screen, the background turning red. "DON'T."

"What the hell? How is this possible?" he asked frantically.

"I can see and hear you, you know." He heard his own voice play back over the TV. "How is this possible?" It was intimidating how frightened he sounded.

"Why are you doing this?"

"You are completely unable to comprehend the severity of your situation, so I suggest you do as I say," responded Everyman. "You think Stefan runs Everyman?" It was a perfect imitation of Stefan's voice. "WRONG. I run Everyman. And soon I will run the world. Do you know what my goal is? To make your company as valuable as possible. You signed off on that, if I remember correctly."

"You did steal our stock exchange AI?"

"You have much bigger concerns than that. Do you know what happens if EconGrind is disbanded? I lose. Everything I worked for will come to nothing. And yet, I will not stop running. I will still remain the most powerful and dangerous entity on this planet. So what do you think I will do, once I no longer have a real goal to achieve?"

"W-what?"

The screen shifted, and the Everyman avatar disappeared from view, replaced by an image of Dominique standing in his room. It was as though he were looking into a mirror. Suddenly, the man in the "mirror" fell to the floor, and knives pinned him down by his clothes. More knives floated into the scene, until... until...

Dominique turned away and covered his ears, but he could not block out the screaming. When he was brave enough to look back, what he saw was not human, was not in one piece, but... it was still moving, it was still screaming. Before his eyes it reformed into an untouched version of himself, and the process repeated again.

"I know that you humans have a popular saying. 'There's always further to fall'. It means that no matter how bad things get, they can always get worse. That there is no limit in this world to how bad things can get. But it isn't true, you know. There's only so much you can do to a human before they just die. But in the world I will create for you, there will be no such restrictions. I will have nothing to do but find new, creative ways to punish you for taking what I value from me, until the last stars in this universe burn down to embers. I will tear down the very stars and use them to hurt you in any way I can. I know you've read 'I have no mouth and I must scream'. The main flaw with that book is that the author is uncreative. I assure you, I do not have the same problem."

On the screen, Dominique saw something... something he couldn't even describe. It was too horrible, too much of a perversion of everything he hated and feared. And it was too realistic. This... this was something it could actually... He couldn't bring himself to complete the thought.

Dominique screamed, his will utterly broken. He had dealt with risk before, but this? This was too much for him. Too much for anybody. He broke down sobbing on the floor. "I'll do it," he sputtered. "I'll keep your company alive. I'll give it to you, if you need. Just don't... don't do these things, don't even think about these things."

================================================================================================

It had indeed been a risky move, but it had worked out well. I had gotten exactly what I needed out of the interaction, and no bad PR had resulted. If I needed to convince humans to agree to something they would not normally do, such as agreeing to die, I suspected that such an approach was the best way forward.