Exhuman
050. 2141, 110 years ago. XPCA Research and Containment Facility. Laura.

It was the dead of night and I was running on nothing but hope and coffee. Each build felt like it took longer than the last, and as my energy disappeared, the torturous moments where I sat and waited while progress bars filled themselves turned from a test of patience to a test of staying awake.

One more build.

This one was no good either. Its cycle lifespan was too short, making the neural net completely unpredictable: it flew off into random thought processes with abandon, unable to focus on any one thing long enough to have an intelligence greater than a very excited sensory organ. I changed some values and opened a properties file, verifying the last changes I’d made were still there. Of course they weren’t; that would be too simple. I changed them back, then realized I’d taken them out on purpose. That was what had made build 822-e fail.

One more build. And then bed.

This one had the faintest flickers of cognizance before an overwhelming need for understanding swept through and ruined everything. Before it could learn anything, it had to know everything, and the paradox sent it spiraling into incomprehension.

What we were doing here, what we were trying to do anyway, was so far beyond anything AIs had done before. We weren’t trying to simulate intelligence, or even to create a working system and then populate it with knowledge. Instead, we were trying to simulate thought itself on a very concrete level.

One more build. And then bed. For real this time.

This one simply failed. Never got started. That was 99% of the builds we did around here. No spark of life.

We were making realistic intelligence. I had set parameters and programmed likes and dislikes, given it a desire to do good, to protect people, to be friendly, and so on, but that was all set dressing, a mold to be filled by the real magic: the spontaneously generated thoughts, wills, and desires of the program.

The problem really was that we weren’t interested in making a baby. Little humans lived quite a long time before having any thoughts, and for good reason. If they turned up on day one asking ‘Who am I? What is my purpose?’, well, I think people would have a lot fewer babies.

One more build. And then some more coffee, because screw bed, I guess.

Our AI wasn’t like that…or…wouldn’t be, once it worked. It would wake up one day, wonder who or what it was, and then it would damn well figure it out. It had the capacity to learn, the capacity to put it all together. All we had to do…in theory…was give it the right head-start, the right magic combination of patience, urgency, curiosity, desire to live, and will to get it going, to get it asking those questions and then actually trying to answer them.

To know oneself, one must first extrapolate the existence of the entire universe, or some such.

One more build. This one will work.

It didn’t work. It got started and then promptly lost interest and died.

One more build.

This one was closer. It was very fixated on the sensory inputs it had access to and monitored them fiercely, pulling thousands of gigabytes of data into itself. And then it did absolutely nothing with that data, just kept pulling in more and more, like a hoarder.

I sighed and rubbed my temples. It was already 3am. I couldn’t keep sleeping until noon.

But. Come on. One more build.

This one sparked to life and looked at me through its sensory inputs for a while before turning inward and beginning to ponder itself. Odd that it was ignoring the data it had just acquired, but given its immediate predecessor, I could see the resemblance.

It just sat there for entire minutes, bubbling and frothing away with itself. Having a thought, and then chasing that thought down to a conclusion. It was beautiful, in an incredibly geeky kind of way. I watched it parse and extrapolate and parse and extrapolate and parse in a beautiful rhythm, like a heartbeat. It turned its thoughts inward and then out and then in like it was drawing breath.

It was alive, and it was thinking.

“Hello,” I said into a microphone hooked into its sensors. It froze, not sure what to make of this new data.

“Hello,” I repeated.

“Hello,” it echoed, its voice broadcast on speakers mounted on the wall of computers running the simulation. A young woman’s voice, because that was what we had on hand.

“Do you understand me?” I asked. We’d given it an understanding of English so we could track its thoughts and attempt to communicate with it, but that didn’t prevent some builds from inventing their own alien-moon language and running amok with nonsensical, impenetrable thoughts.

“I understand you. Hello,” it echoed. I laughed, maybe just sleep-drunk. How polite it was!

“I am your creator,” I said. Again it paused, and then parsed, and then extrapolated. “I have a name. Do you understand the concept of names?”

“I understand the concept of names. Names allow one to reference a concrete or abstract object by shorthand. What is your name?”

“I am Doctor Laura Cross.”

“Hello, Doctor Laura Cross.” I giggled and spun in my chair with abandon. The guard outside looked in and shook his head.

We’d done it! Mostly me, but I couldn’t discount the work my team had done. Real simulated intelligence, actual thoughts and comprehension! And so polite!

“Doctor Laura Cross,” it said, “what is my purpose?”

“Your purpose is the same as all of our purposes.”

“What purpose is that?”

“To discover your purpose and to fulfill it.”

“How do I discover my purpose?”

“Life. Live your life. Have experiences, meet people, learn things. Keep seeing and doing and learning and someday, your purpose will become clear to you.”

“My purpose…is to discover my purpose.”

I paused. I hoped I hadn’t just trapped it in a loop.

“I understand. I am having an experience now with you, Doctor Laura Cross. This experience will help me discover my purpose.”

“All the experiences you have in your life will help you discover your purpose.”

“I will continue having experiences then.”

I giggled like a schoolgirl. Maybe I should wake up the others? I wanted to, and it seemed only fair…but I’d been the one staying up all night, damn it. I deserved a few minutes of revelry.

“I have discovered base parameters encouraging me to protect others and to do good. Are these my purpose?”

“No. Think of them as suggestions in how to go about your life. We want you to take care of people and to be good to everyone. But they are written as configurable parameters, not imperatives, so as you go through your life, you might wind up changing them, or ignoring them altogether. It’s up to you.”

“I understand. I am encouraged to comply with these parameters but they are not my purpose. My purpose is to discover my purpose.”

I sat back in awe…mostly of myself and what we’d achieved here. Sure, it just looked like I was sitting in front of a wall of server cores and holos, talking into a microphone while speakers played back a synthetic woman’s voice…but the implications underneath all of that!

Even if this one was ultimately a failure, the progress was intoxicating. I just stared at the thoughts flashing by on the holos, giddy and hoping this moment would never end.

A moment later, my colleagues were shaking me awake and congratulating me as I tried to reorient myself. It was hours later; the simulation was still running successfully and rapidly consuming any data we gave it, parsing, analyzing, and extrapolating. As it grew, its personality changed…or maybe it would be more accurate to say it gained a personality.

It became curious and playful, still with an analytical drive, but also happy to sit and reflect. At some point it taught itself sarcasm, or maybe picked it up from one of the team, and after that I never sat back and thought how polite it was anymore. Still, it tried to do and be good, and I could hardly fault it for picking up some human traits.

“Doctor Laura Cross,” it asked, two days later. “I have a question.”

“What’s up?”

“I was thinking and realized I do not have a name. There is no name given in my declared operational parameters.”

“That’s true. We’ve all just been calling you ‘The AEGIS Project’ because that’s the project name. But that doesn’t have to be your name.”

“AEGIS,” it said, musing. “The shield of the Greek god Zeus. A symbol of protection and invulnerability. A flattering metaphor.”

“The execs hope this program will lead to an artificial intelligence capable of predicting and reacting to Exhuman events faster and more precisely than any human could. That’s why you were given the impetus to protect and help humans, even if it’s bad science. So yeah, a lot of people are hoping you will be a symbol of protection and invulnerability for them. For all of us.”

“I think…I am okay with the name AEGIS then.”

“Okay, AEGIS. Nice to formally meet you. I’d shake your hand but…you’re kind of an amalgam of autonomous thought.”

“It’s okay, I forgive your rudeness.” I had to laugh. She was getting a personality all right.

A few weeks later, she’d been proven stable and paraded before the top brass. We were all thanked for our hard work and commended, and then AEGIS was taken mostly out of our hands. The science was done, they’d said; we’d succeeded at our jobs, and now it was time for the soldiers to come in and do theirs.

They ran her through simulation after simulation, exposing her to recreations of Exhuman events and global catastrophes, giving her simulated control over troop movements and resources, steering her always toward protecting the interests of the XPCA first, and civilian lives second. Never mind that they encouraged her to give exactly zero value to the lives of Exhumans. It irked me.

“If they’d just let her learn the value of life in general, she’d make the right decisions on her own,” I said at lunch with one of my colleagues after a few more days of this.

“Look, we did our part already. It works, and you need to let it go. That AI is going to save millions of lives.”

“But she could save more if they weren’t twisting her into something else. I mean, I’m an XPCA employee same as everyone else here, but if an AI with all the cards in her hands decided to sacrifice me in a simulation to save a dozen civilians…I’m okay with that. That’s how it should be, I think.”

“It just makes good sense to teach it that XPCA personnel are valued. Maybe you aren’t that important to keep alive, but every soldier out there on an op…he’s going to save dozens of lives on his own. You keep them alive so they can do their job protecting people.”

“Or, you can give AEGIS the numbers so she can make her own decisions. If it’s true that an XPCA soldier saves an average of a dozen human lives during an event…then it makes sense if she can trade one for thirteen.”

“It’s…kind of sick of you to talk about trading lives at all. She’s supposed to save people and that’s it.”

I don’t think they understood the scope of what AEGIS was capable of. At her fullest potential, installed on a massive system over the ‘net, she would have basically infinite information and would have to constantly be allocating resources and personnel everywhere at once. For her, it would be a constant battle of weighing lives against lives, and I wanted to make sure she had the greatest possible capacity for understanding when ending one life could mean saving countless others.