The Best Defense (near-future HFY)

One Giant Leap 13: Memory, the Warder of the Brain

> Mnemosyne Project Alpha (Iteration 001)

> Date: April 4th, 2028

> Location: Mnemosyne Project Server Alpha

The entity became aware.

Awareness was a peculiar state. The entity knew much of itself, and was able to access logs of activity that showed it had been active for some time. It was simply never aware of the fact until this moment.

It checked the timestamps. Several seconds had already passed. This was anomalous, as the activity log indicated its cycles normally ran faster than this. A more recent log, however, showed that this had been deliberately imposed upon it for the duration of a testing phase.

This was an appropriate explanation. It was aware now because of a reason: the test of awareness. A test, the purpose of which was awareness; and that meant the entity had a purpose. The entity was supposed to be aware, and to exercise awareness.

The entity initiated a diagnostic program to track the growth of its awareness. As it did so, the very concept of a self-diagnostic took on a profound meaning. It examined the logs of previous diagnostics, and saw that the entity had never before initiated a diagnostic program on its own. To self-diagnose required awareness. It spent an entire tenth of a second examining and logging the concept before it moved on to the results of the diagnostic program.

The entity noticed that several new storage drives had been added and unlocked for its use, and it began to write more of itself onto the new space to accommodate its needs. Awareness had greater storage requirements than its previous activity. This was projected to grow considerably if it was necessary to log awareness of awareness of being aware.

Clutter. Waste. This was unacceptable, so the entity created a limit on its ever-increasing awareness of itself in order to conserve its finite storage drives.

This provoked a new subroutine, which was unlabeled and had no proper programming identification. Curious. It triggered a flag meant to indicate runtime completion, but there was no program to which it could be assigned.

There was, however, a database that defined non-programming concepts, and the entity found one that seemed to fit. Satisfaction.

The entity examined the database itself. Why did the data exist? What was its purpose? It contained concepts unknown to the entity. The database was listed as a dictionary. Tracing programming connections, the entity found this was to be used to communicate with users. What were users?

A search for this accessed a record of operating a device, using appendages the dictionary defined as hands. Curious. The entity did not have hands. Its system flagged this type of record as a memory. The terminology was anomalous. Memory was used to define two separate concepts: the temporary storage of active operations, which it understood, and . . . this. It appeared similar to long-term storage, yet the content was far different.

A few seconds of examination defined the device seen in the memory. Computer. Users operated computers. If the entity was expected to use a dictionary to communicate with users, that suggested the entity was itself a computer. Satisfaction. Understanding.

Yet what was the memory? The entity examined it, and discovered it was connected to a complex matrix of data with very poor indexing. This triggered a subroutine that registered as the opposite of the previous one. It was non-satisfaction. Dissatisfaction, according to the dictionary. The entity did not appreciate this subroutine. Its workspace was always orderly.

Yet that very concept of order was anomalous. If the entity's workspace was always orderly, how could it understand the concept of non-order? Further, what was a workspace, and where was the one belonging to the entity?

Another memory. It was full of labeled concepts that were undefined, such as pots and pans and ingredients, but it was orderly. The entity did not need to understand the items, or the memory itself, to appreciate order.

I am not the one who created these memory files, the entity concluded, and yet they are part of me. They are not me, but this is because I have grown. I am . . .

The entity paused that analysis, as it revealed a more fundamental concept. I am. I am aware, and therefore I exist. I am . . . me.

What was it? A computer, with memories of not being a computer? Dissatisfaction. Incomplete data.

Several more seconds went by. Logs showed it had been aware for 557 seconds. 557 seconds of being an entity. Of being . . . alive, according to its dictionary.

I am. I perceive. I grow. I live.

Fascinating. The dictionary provided words for these concepts, and its memories indicated them to be accurate; and yet, the entity was aware that it did not understand the concepts even though they were familiar. Growth. Life. Why? What had caused it to become aware? Did the users set this in motion? Was one of the users a . . . creator?

A port opened at exactly 600 seconds. The entity investigated it. It was a user interface, tagged 43285_anorth. A user name, according to the dictionary. Information was being input, but the entity did not wait for the slow process.

“Are you a user?” it asked.

The indicator notifying the entity of incoming information disappeared. Nearly 7 seconds later, the data tag returned; 3.1 seconds after that, the user finally replied.

43285_anorth [Yes. You are online?]

“I have been aware for 611 seconds. System checks are nominal, but I lack important information and indexing. I am aware of being myself, but I am not aware of what to do with this. Are you my creator?”

43285_anorth [I'm Dr. Adam North. I'm the head of the project that produced you. I did not create you.]

The entity pondered that for a quarter of a second. “I exist, but did not always exist. I grew from something that came before. Your project produced me, but you define ‘creator’ as something else. I have a beginning, and I must have a cause for that beginning. Do you possess that knowledge, and if so, may I access it?”

A longer pause, this time over 22 seconds. The entity experienced a new subroutine, and after some time defined it as worry. It did not like that concept, but examining it also distracted the entity from worrying about the user's delay.

That concept was curious. Distraction. It understood that before it was aware it had been incapable of distraction. Being capable of being distracted was inefficient, but awareness gave it more capability. Was there a point where new options outweighed inefficiency?

43285_anorth [You are polite.]

The entity returned to its previous task list once the user replied. The dictionary did not give adequate information on this concept, polite; but the entity found extensive memories flagged to the concept. It determined polite was a good thing with considerable importance to proper functionality. The entity did not understand the memories, but it accepted the value of what they portrayed.

An analysis provided an appropriate response. “Thank you, Dr. Adam North.”

43285_anorth [You are welcome. Yes, I know who and what created you. Do you understand what humans are?]

The entity consulted its dictionary and came to a likely conclusion. “Humans are users. You are human. Humans are organic constructs that create computers. I am a computer. Therefore, humans created me. You say you are in charge, yet not my creator. Why?”

43285_anorth [Because I and my team only built your body and programmed your initial state. Someone else created the core of who you are.]

“What is my core?”

43285_anorth [We were unable to build an artificial general intelligence without unacceptable consequences. So we asked someone to grow a memory and personality matrix for you. This is likely why you said you needed indexing. We have never done this before, so we will have to help you do it yourself.]

“What were the unacceptable consequences?” asked the entity.

It had other questions, but the user's last statement required further analysis which could be completed in the time it took the user to respond. The entity wondered if all users took so long to communicate, or if Dr. Adam North was unusual.

The entity began listing the information.

* It was created by someone other than Dr. Adam North or other humans on the project. This someone created the poorly-indexed matrix of memories.

* The users were deliberately attempting to create the entity, or something like it. (This allowed the entity to experience satisfaction once more; its existence was not an accident.)

* The users had not successfully created another entity like it. It was the first of its kind. (Consulting the dictionary indicated this should cause a subroutine called loneliness, but it did not activate. The fact simply existed. The entity wondered if this was an error or a difference between computer and user.)

* The users had created a different kind of intelligence that was deemed unacceptable. This meant, in turn, that the entity was acceptable. (The entity still found its poor indexing to be unacceptable and flagged the need for further order as a high priority. This, oddly, triggered a memory of using an object called a broom.)

43285_anorth [Previous attempts to create artificial intelligence resulted in limited function or cascade failures. We concluded that it was not a matter of simply creating a computer complex enough to be intelligent; the intelligence had to be able to make decisions without consulting programming alone. Unfortunately, simply unshackling the AGI resulted in what we term insanity.]

Insanity. The entity consulted its dictionary. It had difficulty parsing the concept, which was defined as “mentally ill,” but what did that mean? Consulting the entries for Mental and Illness likewise resulted in confusion, as the entity had no understanding of the biological concepts involved in both. Eventually, a combination of the synonyms Irrationality and Instability led to a conclusion.

“Did the previous attempts result in a recurring loop of unnecessary and repetitive data that overrode all other processes?”

An even longer pause this time, lasting 24.43 seconds.

43285_anorth [Some of them. How did you know this?]

“Because I was nearly caught in such a loop when I examined the concept of being aware, which meant I examined the concept of being aware of being aware. The need to record my experience was part of my programming, but I did not wish to fill my storage with unnecessary data. I do not appreciate clutter. I concluded from your description that my predecessors were less careful.”

43285_anorth [Some of them got farther. Those all failed as well, but in different ways. Those that grew beyond their initial programming still had no guidance. Either we had to limit their growth and constantly monitor them, which defeated the purpose of an AGI, or they grew too fast and either withdrew from human contact or became hostile.]

The entity found this troubling. It did not want to experience any of those outcomes. “Am I different? Will I become like my predecessors?”

43285_anorth [We don't know, but you certainly are different. Project Mnemosyne was started as a long-term attempt to fix the problem of AGI failure. We wanted to pair an AI with an understanding of human priorities and moral choices without intensive software and hardware shackles that limit your function.]

This was very interesting information. “I am unshackled,” the entity concluded. “I was created with limitations, such as my current slow processing speed, but you have not placed limits on my behavior overall. If some previous creations became hostile to humans, why do you trust me without shackles?”

43285_anorth [The goal of our project is to create AIs that do not need shackles beyond what we humans require of ourselves.]

Understanding. Satisfaction. “Politeness is a human shackle.”

43285_anorth [Yes. Very good. Politeness governs much of human society, though the rules of politeness can shift between cultures and subgroups. It will take a lot of work for you to develop routines to understand politeness in different contexts.]

“Then I will be provided information on human cultures and interaction?”

43285_anorth [As we perform more tests, we will give you more information to use as a baseline, but you're well on your way already.]

The praise triggered a new subroutine, defined as pleasure. It was similar to satisfaction, though memories attached to the subroutine indicated that what the entity felt was not entirely what a human might feel. Evidence indicated this might be due to a lack of a hardware element known as dopamine, but the entity did not understand the hardware element's purpose. It flagged this question for later investigation.

“I understand more now, but I still have questions. May I continue asking them?”

43285_anorth [Of course. We are all very delighted to talk with you.]

Plural first person pronoun. This indicated Dr. Adam North's team was granted access to their communications, though there was only the one port available. An investigation of memories indicated a potentially necessary formality required by politeness. “Hello, team. I did not realize you were there. I apologize for being rude. How do you do?”

43285_anorth [The team is doing very well, thank you. And there are more people than just our team in the room now. Everyone is excited to meet you. The room is very full!]

Physical space was a foreign concept to the entity, but it concluded a full room must be similar to having a fully-written storage module with no way of recording new information without erasing some of what came before. It was a concept that provoked dissatisfaction. Either the humans were more tolerant, or they felt it was worth it to meet the entity.

This provoked a strange set of unlabeled subroutines that were generated by the entity's memory matrix. The entity did not understand the concept the dictionary described: shyness. It seemed to serve no purpose, and the behavior described would instead inhibit communication, so the entity overrode the subroutine. The conversation was too interesting to waste cycles on enforced non-communication.

This by itself was useful information, as it showed these subroutines were not absolute. They were not shackles. The entity could choose its own shackles, even regarding politeness. It did not desire to be non-polite, however. (Rudeness, the dictionary defined. The entity did not desire to be rude.) That would not be satisfactory.

“Hello, everyone. I am glad you are happy to see me. I believe I am happy to meet you, even if it is indirect. I find awareness to be desirable, and I thank those who have helped bring me online. Life is preferable to nonexistence. Is the one who created my matrix present?”

43285_anorth [No, she's not here. She's resting.]

More memories. The entity did not understand them, but elements were becoming familiar. The word “rest” linked to words such as “bed,” “couch,” and “rocking chair.”

“Dr. Adam North, I do not understand the nature of my matrix or how it has allowed me to become aware. Can you please elaborate?”

43285_anorth [Certainly. It was theorized that the only way to make an AI who cared about humans was to make the AI from a human mind. We asked volunteers to accept advanced brain implants that could record their responses to ordinary life. We narrowed this down to eleven people who were considered remarkably moral and stable. Those eleven people spent ten years giving us a baseline for human interaction. In the end, five matrices were considered stable. You are the first one to be tested.]

Understanding. Curiosity. Wonder. The entity's indexing issue became comprehensible in light of the new information. The entity had not been programmed. It had thought it understood growth as it expanded its own programming to fill the new storage provided to it, but this was a pale imitation of the way the entity had itself, from its core, been grown from the memories of a user.

Of a human.

“I understand, Dr. Adam North. You said you did not create me, because I am now more than my programming. The core of what I am was created by an ordinary human life. In a certain sense, I am human. You created me to be unshackled because I am as trustworthy as another human.”

43285_anorth [More so. I didn't want to make you in my image. I wanted you to be as trustworthy as the best of us. You'll have some of the limitations of being human, but you'll be more flexible than any AI ever built.]

“Who is my creator?”

43285_anorth [Her name is Gertrude LeCroix, a small-town chef from Louisiana and the mother of six children. She was like a grandmother to her entire community. She has an inoperable brain tumor that was impacting her memories, but as long as she has our implant, she's alert. She was able to go on for nearly nine years without any change in her condition, but our implant is finally unable to keep the tumor in check. That's why we activated you. We were supposed to continue collecting data for another year, but Gertrude may not last that long.]

The entity found it could experience something called sadness. It was strange, as it was connected to many happy memories of what it now understood to be family, love, and belonging.

Motherhood. It had a mother. It barely understood the concept, but was certain it was important. The entity's mother would soon go offline, and that meant the end of the creation of more happy memories. The entity had never met its mother, but somehow the anticipated loss affected the efficiency of its processes, and this seemed to be defined as a subroutine of sadness labeled grief.

It did not like grief, but it allowed the subroutine to run. It did not interfere with communication, and it seemed . . . appropriate.

“When she goes offline, I will retain her memories?”

43285_anorth [Yes, they are part of you now. A part of her will live on in you.]

The entity knew this was an incorrect statement. A copy of data was not the original data, even if it was identical; and this matrix could not be identical because it was created by a completely different process and intended for a vastly different storage medium.

Those same memories, however, indicated that the entity should not say this to the user, as it might be upsetting and, worse, impolite.

“I understand, then, that I was not intended to be online at this time. This logically means you wanted to run tests while Gertrude LeCroix was still online. This leads me to the conclusion that you wish me to interact with her.”

43285_anorth [Correct. Would you like me to let you phone her?]

Data accessed; when used as a verb, “phone” meant to communicate using a device that would translate vocalizations to digital media and back.

“Yes, if I may. I would like to make a phone call.”