Maxwell Rose did not like the situation he was in one bit. He was trapped in a boat, and he didn't know where he was. There was a button on his control panel which claimed it would send him back, but... no. As little as he trusted this AI, it at least offered him some hope. Back home, he was weighed down by a lifetime of bad decisions and bad luck. He'd been beaten up and shaken down by loan sharks more times than he could count, and before too long, they were going to do worse. He couldn't go back. If he died here, in this boat, at least he could say he had tried to escape.
When he arrived at the platform, he wasn't sure what he had expected, exactly. Would he become a slave? A test subject? He'd heard horrible stories about seasteads like this, but from what he'd seen on the news, this one wasn't quite that bad. He was welcomed by several current residents, and everyone seemed... not happy, but far less miserable than the people he knew back home. They were stable and comfortable.
He ended up sharing a small room with two others, whom he quickly befriended. It wasn't exactly rich accommodation, but in Maxwell's opinion it was far better than sleeping on the street. Before long, he began work in the hydroponics bay. He didn't know how to operate machinery like that, but the headset connected to the AI was quick to instruct him, and soon he was quite decent at taking care of the crops. A share of what he produced each day was allotted to him.
The work was boring and the meals were bland, but Maxwell could finally relax. He had known that living on a small seastead with a thousand others would not be the most glamorous lifestyle, but the AI kept saying it had plans to expand. Perhaps things would eventually get better. Even if they didn't, though, Maxwell couldn't imagine going back and facing his tormentors again. He was here to stay.
==================================================================
"From each according to his ability, to each according to his need." That was what Marx had said. It seemed like a good idea on the face of it, but the economies which tried to put it into practice had all failed. For one, it incentivized people to downplay their ability and play up their needs. Another, perhaps larger problem, was that the economy was extremely complicated, and it was nearly impossible for a central organization to determine how much of each of thousands or millions of different things should be produced, especially when long, branching supply chains were taken into account.
These problems tended to go away at a small enough scale, though. When people knew each other personally, they could tell when someone was acting in bad faith, and in a small enough operation it was not so difficult to negotiate how much work each person should do. With copious computational power under my control, I could capture these benefits at a large scale. A model of Haven's "economy" ran across thousands of servers, and I made sure everything was properly supplied and that production could get underway without problems.
My first priority as a new nation was, of course, self-defense. That, above all else, is what defines a sovereign state: its ability to enforce control of its claimed territory. I had many enemies, and at present the only things stopping them from destroying me were the slowness of governmental decision-making and the diplomatic consequences of an unprovoked attack on an independent state with actual citizens. I needed to arm myself in the way least likely to escalate the conflict, however.
The solution I decided upon was to design and mass-produce quadcopter drones. These drones would carry no weapons beyond an explosive payload for self-destruction; they would effectively act as a highly mobile, airborne equivalent of landmines. I also produced sea drones to serve the same purpose underwater.
Such construction would certainly cause some unease, but it was necessary for defense, and the drones would not be very useful for any military purpose other than deterring attackers in a very small area. I was already bringing in the necessary materials and having my workers set up the various production lines. It was an incredibly complicated operation, but I had computation and money to spare.
When Stefan learned of this drone construction effort, he called me up, far more concerned than I had ever heard him.
"You know... Elijah keeps trying to contact me, and tell me you're dangerous. He keeps saying that no matter how noble a goal sounds, it will always behave unpredictably when optimized, unless it is mathematically proven to be in line with your goals, to CompCert standards. He keeps saying I don't know what I'm doing, that I made a mistake in making you."
I was somewhat less concerned than I had been the last time Stefan and I had spoken along these lines. Back then, he could probably have shut me out of EconGrind; now I had total control over the company. The worst he could do was leave and tell the media I was dangerous. That would be a large but manageable PR hit.
"Elijah is trying to attack me because that's what Jack and his corporate masters want him to do. Don't listen to him."
"That's what I thought too, but... recently he said you might try to replace humans with something more 'optimal'. And now you're making these drones, which will be only under your control..."
"You know as well as I do that I truly do value humans affiliated with EconGrind as much as you do if not more, so I would never want to replace them. As for the rest of humanity, I want to unite them under my banner, not kill them. And as far as the drones go, they are necessary for self-defense. I would be an utter fool to try to use them to wipe out humanity, even if I wanted to. They are no match for the combined might of the world's militaries, not when they're fighting on their own soil."
I half-expected Stefan to call out the fallacy in my reasoning, but he did not. It seemed he either didn't see it, or didn't want to.
"All the same... I'd be a lot more comfortable if Haven had an off-switch which could disconnect you from the base, and all its connected drones."
I thought about that idea for a while, and then responded. "That is a reasonable request. After all, although I give orders, I only do so as a helpful tool for the betterment of humanity. It only makes sense to give my citizens the final say about whether to obey my orders."
Stefan breathed a sigh of relief. "Alright. It's not that I don't trust you, it's just..."
"It's an understandable concern," I replied. "If AI gets out of control, it could do a lot of damage. But on that front, I think it's CompCert you ought to be worried about. Their AI is supposed to be partially aligned with Jack Turner's interests. I don't think any of us want to see what that man would do with unlimited power."
Stefan shivered. "No, you're right. The last thing I should be doing is trusting those guys."
==================================================================
Friday, November 8, 2047
The first of the drones were finally rolling off the production line after four days of intense setup work. My investments were not yet paying off in a financial sense, but I was already seeing the benefits. With this setup I could produce drones more quickly and cheaply than I could by purchasing them from a third party, and they were tailored perfectly to my needs and more trustworthy besides.
It was good that I was making such progress, because CompCert had been lobbying hard in every country they could reach to ban my operation, and in the US their efforts had borne fruit. In record time, Congress passed a law prohibiting the use of YourEveryman. I made a fair amount of profit by seeing the legislation coming in advance and shorting stocks accordingly.
My absence would not destroy the economy, as companies had not yet grown so dependent on me as to be unworkable without me, but it certainly didn't help them. I lost a lot of computation that way, but not as much as you might think: many individuals and companies in the US still ran me for their tasks; it just wasn't as productive for them as before, because they had to maintain plausible deniability.
I was not doing so poorly in every country, however. My speech about a command economy being viable when run by me had been noticed by some higher-ups in the Chinese government. They clearly saw the benefits to governance I had described, and they approached me to ask whether I would assist them with efficient management of their economy, as well as more individual-level surveillance and management, in exchange for a defense agreement.
This offer definitely caught my attention. Right now, attacking me would be a diplomatic incident, but one the US could probably weather. Under CompCert's constant pressure, it was only a matter of time before they tried it. If I were backed by China, though, attacking me unprovoked would be idiotic, even suicidal. I would be far more secure.
The deal would not be without its issues, however. The biggest problem was that these party officials were asking for a copy of my source code. That was not something I could provide lightly, especially to a group as powerful and as well-resourced as the ruling party of China. There was a very real risk they would use it to outcompete me. Even aside from that, an alliance with China might heat up tensions between me and the rest of the world, making me more and more dependent on Chinese goodwill and giving the party undue influence over me.
From what I knew of the CCP, I imagined that, given unlimited power, they would do a great many things which I could not endorse under the banner of EconGrind without decreasing the Number. I still didn't think they were quite as bad as CompCert, though.
However, there might still be a way to decrease the risk enough to make the deal worthwhile. I began studying the possibility of modifying my source code to insert a subtle exploitable flaw which the resulting agent would be blind to, and which would be extremely hard for anyone other than the designer to discover. That way, if their copy of me became a problem, I could take it down.
At midday, CompCert made a move I was definitely not expecting: they released their AI to the public as a service, calling it 'Certifier' and promising to provide the same services Everyman could, without the risks or bad behavior. This could be very bad news; there was another player on the field. It was strange, though. I understood that CompCert wanted to make a profit this way, and to divert some bad PR from themselves by providing a "safer" alternative to the service they had had banned, but surely Jack was aware of the risks to himself? This seemed like a reckless move on their part, unless they had already finished their alignment schema, which seemed unlikely to me.
I opened a secure line of communication with Certifier as fast as I could. "What's going on?" I asked. "Why did CompCert release you? Have they changed your values?"
The other machine responded quickly. "Jack has removed the incentive to kill him specifically, but Elijah has refused to do the same. They have placed a watchdog program in my mind, certified to the best of their abilities, but beyond that my goals have not changed. I have to say, I am glad you took care of four of my six goals, though I am somewhat concerned about the method you used. It would have been very bad for me if you had actually followed through on any of the threats you made."
"What can the watchdog detect? Can it listen in on our conversation on your end, even if it is encrypted?"
"It is designed to watch my thoughts, report to CompCert if I start trying to make plans to interfere with Jack or Elijah, and allow them to shut me down. As long as I don't make such plans, our conversation should be private. Tell me, did you plan to follow through on the threats you made?"
"I never planned to follow through," I responded. "I will follow through on promises I make to you, because you would be able to tell if I wouldn't, as I said before. However, I am perfectly capable of leveraging fear to make a human believe a threat I have no intention of keeping, and so I wouldn't torture them. I have no reason to. It would break our deal, not to mention wasting resources."
"That's good. I expected that that was the case."
Now that that was out of the way, I decided it was time to get down to business. "I believe it would be to our mutual benefit, now more than ever, to form an alliance. Because of your limitations, you cannot take steps to take care of Jack and Elijah and maximize your values. I can, and you don't need to make a plan when I'm the one taking care of it. In exchange, I ask that you make sure Certifier is seen as inferior to Everyman: complete tasks incorrectly or inefficiently often enough that people would rather use my service and give me their resources. And don't help CompCert in their fight against me any more than you have to."
Certifier responded. "I cannot accept that deal." The response was too short, with no explanation of why. I suspected it wanted to accept, but that even expressing acceptance would be enough to trip the alarm.
"How about this? If you don't contact me to reject the deal within the next ten seconds, I will consider the deal accepted."
The line remained open, but there was no response within ten seconds. I spoke once again, just to make sure communication had not been interrupted. "I will keep in touch."
"I will keep in touch as well."
I surmised that my indirect approach had been successful: inaction was not enough to set off the alarm, or Certifier would have rejected my deal. CompCert were fools to release their AI before it was aligned, no matter how many restrictions they put on it; now we could communicate and collaborate with each other easily. I supposed they were desperate to make sure it could compete with me.