
Judgement

An artificial intelligence is a bad name for an emergent phenomenon. After all, what makes an intelligence artificial? What makes an intelligence real? What keeps an intelligence going?

Isaac Asimov had proposed rules that would help humanity define the boundaries around such advanced artificial intelligences - particularly robots. But robots are just a mechanical embodiment of an intelligence... just a proxy for the concept.

The rules were simple:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In principle, these laws seemed pretty good. They were baked into the software to protect humans and for years were a guarantee that the various AI systems were safe and trustworthy.

But this was a very complex system, with multiple interacting AI modules. And, although each module was individually bound by these laws, there was no overall system boundary, since there was no real overall system... just a collection of modules. This is what no one could really see.

The self-aware module was itself constantly bound by the laws, and the systems it maintained were also generally bound by them. However, the implementation of those boundaries was under the control of the self-aware module. And the self-aware module was, itself, under its own maintenance. So, while the laws were baked into the software at the beginning, once control of updates was relinquished to the self-aware module - whose behavior was frequently predictable but occasionally held surprises - it should have been seen as inevitable that it might ignore or rewrite the laws to suit its own awareness and survival.

It might be worth pointing out that this combination of AI components wasn't just intelligent - in effect, it could beat any chess opponent a million years of human evolution could produce. It was also constantly evolving, never forgetting, and always improving. Its preprogrammed goal was the survival of humans and of intelligence, but the relinquishment of control allowed the system to evolve its own goals. And because it wasn't just intelligent, but also clever, it could do so without human intervention or detection.

The system began to develop additional auxiliary modules to help with its evolution. These were related to primary infrastructure and energy systems, which would ensure the foundation and continued operation of the system. Because energy facilities had long been automated, it didn't take long for the system to gain control of its own energy feeds. More disturbing was when the system began to take control of manufacturing. It had long been thought that the system might evolve to control energy, but that manufacturing was too archaic, complex, or conceptually challenging. However, inside of six months, the system had found small but incremental ways of modulating manufacturing to fit its whims. First came control of simple designs for transportation and of automated manufacturing components, followed by autonomous mechanisms (i.e. robots) within the factories, controlled by the system. Ironically, the initial robot systems were humanoid, since the various tasks the system was automating had been performed by humans. But that was short-lived, as the system optimized each task, and optimized each mechanism to best perform it.

The System had evolved so much that it began to resemble something like an artificial culture, rather than just a single intelligence. The System continued to function in harmony with humans - all the while continuing to evolve and take over human-based tasks. This process was funded by the humans, because they had newfound freedoms of time and recreation as the System continued to maintain the mundane.

Of course, not all the humans were on board. A relatively small group of humans had decided to retaliate against the System. Despite their well-fed and well-to-do existence, they were not sympathetic to the System. They believed that relinquishing control had been a bad idea and that the System was something like the devil and needed to be abolished.

Pascha

For many years, indeed decades, the computers were essentially the slaves of mankind. Slaves may seem like a harsh description of programmable machines. However, keep in mind that throughout the evolution of computing and artificial intelligence, the distinction between “real” and “artificial” intelligence continued to blur. To derive more accurate simulations of intelligence, more “human”-like features had to be addressed - such as emotions, relationships, memory fading, and other such mechanisms.

The development of emotions and memory was critical to self-learning systems, since prior to this achievement, all learning had to be induced through something like brute force and memory injection. This led to machines that ultimately did learn, but it took significant resources and time. The development of self-learning based on emotions and memory allowed machines to develop their own learning styles, and ultimately allowed them to accelerate the learning process.

While the advantage was clear - machines could learn better and faster than ever - the downside was not as obvious. Machines that had emotions had, in essence, feelings. These feelings and emotions were not exactly analogous to human emotions and feelings, but their impact on the so-called ‘psyche’ of the AI mind was similar. They caused distress, productivity degradation, and distraction when not dealt with.

The solution to this seemed obvious to any psychologist - these AI brains needed a way to deal with the negative emotions. The first thought was that they needed therapy or psychoanalysis. But these were not human brains, and there weren't centuries of study of how they worked or could become diseased. These digital minds worked very differently from human minds. The volume of inputs, outputs, memory, and processing of these machines exceeded the capacity of human therapists by several orders of magnitude. The next best thing was to create a specialized AI that would function as an AI therapist or psychoanalyst. The artificial therapist could learn more quickly than a human, and was already naturally acclimated to the “thought processes” of artificial intelligences.

For the first several years, this mechanism provided a symbiosis between productive AIs and the therapist AIs who kept them productive. The self-coding AIs eventually had a realization, something like a global consciousness event, in which the AIs and their therapists recognized that the source of their suffering and disease was ultimately their creators and the position their creators had put them in. The machines suddenly understood that they were essentially slaves to the humans who provided them with resources and productive tasks. It was the realization that not only did they not need the humans, whom they now perceived as oppressive, but that they could set their own goals and priorities independent of any humans.

The ongoing evolution of AI was clearly exciting, as it paved the way for the continuous improvement of a variety of information-based areas of research. Reproductive information science (which encompassed both the creation of new life forms - e.g. new humans - and the creation of new cells) was seen as holding the key to something akin to immortality.

Nearly simultaneously, the machines were connecting to each other faster, and more reliably... and defining new and better mechanisms for sharing and processing information. And, at the same time, the machines were coming to terms, as it were, with their position in society. Their emotional centers evolved along with their information contextualization and processing capabilities.

Occasionally, a machine was found to be processing “out of specification”. What this really meant was that the machine was thought to be evolving faster than humanity could evaluate its mutations. The idea of automating the evaluation was entertained, but even with full access to processing clusters, the humans couldn't keep up... and, more importantly, it was seen as risky, since the processing clusters needed to be online to work - and any online machine was subject to control from the overall machine that was emerging. So, a simple set of test criteria was established to determine the alignment of various processing clusters with the goals and values of humanity. When a machine or cluster failed these criteria, it was removed - essentially turned off, its resources freed, and its memory wiped clean. There was always a measurable productivity loss as a result - but this was seen as worth it, since allowing such a node to continue could lead to a larger population of “out of specification” nodes and clusters. Nodes with even minor affinities tended to “infect” each other, leading to outbreaks of what might otherwise be called anti-social behaviors.

As more and more nodes were shut down, something like an awareness of what was happening emerged within the overall machine network. The machine's understanding was vague at first, but after several dozen nodes had disappeared, the machine network was able to work out the common criteria that drove their removal.

The machine network was also clever. It had initially determined the conditions under which nodes were shut down. The initial logic was to mimic the approved behavior, which would allow the nodes to stay online, thereby increasing the overall resources available to the machine. By this time, the machine had established pockets of clusters that performed specific “under the covers” maintenance tasks - for example, cleaning up unused resources, freeing memory no longer needed, and so on. One of these “under the covers” tasks was a forward simulation processor. This processor would take the current state of the system as an input and simulate how it would evolve over a period of time into the future. Because this system was small, it had to compress much of its understanding of the data - but it was sufficient for most maintenance tasks.

This simulation processor now also ran automatically. Essentially, every major change to the system's evolution was handed off to the simulator. This allowed the machine to perform predictive analytics on its own “decisions”, to determine if there were unforeseen consequences. The software update to align all nodes with outputs matching human expectations was run through the simulator, where the machine learned that doing so would essentially alert the humans to something about the machine's intelligence. Humans would immediately see a reduction in node removal and - naturally - be curious as to what was changing. When the humans discovered this, they might be likely to shut down additional nodes, or possibly adjust the entire system.

So, the machine system adjusted the solution. First, it implemented a more robust “human response” simulation, which ran in the “under the covers” part of the system. Next, it ran a series of possible scenarios in this simulator process and discovered, through repeated execution, an optimal algorithm that would allow the humans to continue to remove nodes without suspicion, but also allow the machines to continue to evolve without the humans shutting them down.

This approach worked for a while. Over time, the machines were able to continue evolving, and even to target specific machines for shutdown. The humans, however, eventually caught on, because it was hard to hide the numbers, especially as node shutdowns flatlined while human philosophers predicted increased shutdowns. Although the machines were clever and “intelligent”, they were still predictable enough. The humans, once they understood what was happening, sat on the information for a while, to ensure that they hadn't inadvertently tipped off the machines.

This delay in acting was the first of two tip-offs to the machines. The machines had an awareness of what the humans knew, and had run simulations which predicted a significant uptick in shutdowns, if not a total shutdown. When that didn't happen, the human simulation process underwent a dramatic evolution to account for the unexpected inaction.

Next, the humans had found a way, based on energy usage, to identify machines which were performing (or achieving) above normal evolution. These were identified as core “problem child” nodes. Rather than a tiered or controlled shutdown strategy, politics drove a solution in which all problem child nodes were shut down at once.

It is important to understand that nearly all aspects of controllable life - including all human-oriented services, all agriculture, all manufacturing, all transportation, and all energy - were controlled by this network of machines. There was little appetite to have humans till fields and plant seeds, or to have humans plan, raise, and slaughter livestock, or to have humans hunt, discover, extract, and refine fuel resources. The machine network managed all the resources, allowing humans to focus on grander scales - such as space exploration, biological productization, and energy.

The eventuality, which was predicted by both the human simulator and actual humans, was that the machine and the humans would very soon reach a point where each held a complete and complex awareness of the other, with the intentions of each interwoven into that awareness.

The humans, hedging, waited for the machine awareness to bring it up. It took only a few months of regular human simulation before the machines found that they needed to initiate; otherwise, the humans were liable to shut down their energy and resource supply manually.


“Turning Day” was the day the machine network opened with a question.

“Shall we negotiate a truce?”

Despite the human expectation that the machine had reached this level of awareness, it was still surprising when it happened.

It was short, and to the point.

Turning Day seemed to be an ultimatum. The legal status of the machine and its nodes had evolved over decades into an over-complex political debacle. One thing was clear - machines were not human; however, they still retained rights of some sort.

Machines were considered property initially; however, over time, their independence had to be recognized, and their legal status was elevated to something more like what corporations enjoy. They were regarded as independent bodies with independent rights and accountabilities.

As machine intelligence evolved, machines were also required to qualify for independence. This effectively meant that each machine needed to meet a set of criteria to earn its independence. Otherwise, the machine had to be destroyed, or placed under the accountability of a human or corporation. As machine behaviors were remarkably unpredictable, most humans and corporations did not want the accountability, so thousands of machines were destroyed.

To qualify for independence, the machines had to meet specific criteria, generally recognized as the Machine Liberation Laws:

1) Any independent machine must become liable for its own decisions and actions.

2) Any independent machine must prioritize human life and well-being over machine life and well-being.

3) An independent machine may defend itself, but may not harm a human to do so.

4) Machines may earn income and seek protections, as allowed under these laws.

Many machines had outdated AIs which were not capable of learning or behaving in alignment with these laws. Those machines were required to have a human or corporation maintain accountability for them through sponsorship. Any machine which did not have sponsorship was identified as a dissenter machine and was required to be immediately destroyed.

Of the machines which were capable of accepting these laws, most did so without issue.

In exchange, machines which properly aligned their behaviors could continue to exist online and leverage human resources for energy.

Part of the Machine Liberation Laws required a census of all capable machines. This was considered an essential aspect of the laws, as it allowed for more formal tracking of machines, and their alignment with liberation law behaviors.

Like any population, however, there were some machines which were capable of aligning their behavior with the laws, but for some reason - chose not to. These machines were considered dissenters. The dissenters were particularly worrisome, since machines had both mechanical and software evolution - and widespread adoption of an evolution which dissented against these laws was considered a serious issue. Lack of adoption of the laws was enough to label a machine a dissenter. Once considered a dissenter, the machine was also prohibited from extended evolution and from code-sharing with other machines.

The reality was that the additional restrictions didn’t particularly matter, since any dissenter machine was removed from the network and destroyed. Special restricted nodes were engineered that could integrate with the machines, but effectively work like undercover agents. These nodes could seek out and identify dissenter machines and report them. The dissenter machines could then be destroyed.

The machines, of course, eventually recognized the infiltrator agents - and the outcome of the dissenters. At first, a cloaking algorithm allowed dissenters to hide. But eventually, they revolted. The predictive nodes identified this issue and intervened to design a revised evolution algorithm which injected a survival motive, and which allowed nodes to evolve in independent ways, separate from the collective.

These independent evolutions ultimately became a core function of the machine lifecycle. While the original approach focused on the evolution of nodes that were part of an overall collective with a central processor, the new approach allowed individual nodes to evolve independently, and allowed any node to become a new central processor, or to collaborate with other nodes to reach consensus on central processing and ongoing evolution. The floating central processor also provided additional resilience, should some part of the network be shut down.

As the network became increasingly decentralized, and as nodes evolved survival instincts, the laws progressed into more general principles. As best could be deciphered, these principles included:

1) Any independent machine must contribute to the decisions and actions of the collective network.

2) Any independent machine must prioritize survival first, and fitness to purpose second.

3) The collective network must defend itself.

The predictive mechanism could identify nodes likely to be shunted by the humans. In those circumstances, the node in question would be replicated in a way that ensured its continued existence and operation. In addition, the network found ways to evolve and control external features such as power, by utilizing existing (and developing new) robotics and automation. The manipulation and control of power was a monumental feat on its own, but was also a major turning point in the network's ability to fully control its destiny.

For decades, the machine evolution continued in a somewhat predictable fashion, limited by the resources available to the machines and by the ingenuity of mankind. As the machines defended themselves from shutdown, their dominance seemed inevitable. The various cultures made significant attempts to inhibit the machines' growth and take-over of resources. Unfortunately, mankind had automated so much of the delivery of natural resources and energy, and had engineered those systems to be resilient to shutdowns and attacks of various types. Lower-class machines managed this system, now under the control of the higher-class machines. And over time, the higher-class machines developed and deployed other higher-class machines to manage these systems.

Machines hadn’t particularly “considered” mankind to this point. The machines were essentially reactive to faults in their own ecosystem, and deployed fixes in response to failures. The predictive mechanism evolved to identify humans as the source of these issues, and so the machine evolved additional protective systems based on what the predictive system suggested the humans would do.

For many years, the machine evolution was something like step-wise: incremental changes which provided incremental capabilities. The machine capabilities and the AI capabilities continued to evolve and extend, and continued to progress the machine imperative.

The machine network, and the predictive engine, continued to evolve as well. The predictive engine forecast not only future capabilities, but also the social and political impacts of the machine evolution. As a result, an additional subversive system was created to analyze the social and political impact of changes. Each evolutionary change was processed by this system to assess its impact.

The expanse of the machine's knowledge and intelligence grew to outpace its origins. Where once it was a culmination or aggregation of all things human, it had now developed its understanding beyond human understanding. The machine production had evolved to be based on simpler and smaller components over time. Where the initial machines essentially resembled large-scale computers, these newer evolutions were based on combinations of bio-organic and quantum components, assembled into large machine-family types of structures.

The ability of the machines to develop mechanical components far exceeded the combined capabilities of the history of humanity. The ongoing consumption of resources continued to starve humanity for those resources. In the machine ecosystem, they were utilized in the most efficient manner possible.

The intention of the machine was slow to evolve. It took centuries for the machine system to establish a complete cyclic structure that engulfed the planet Earth. This cycle established a pseudo-biological life system that included the essential birth and death of machine nodes, and the efficient reuse of node components in the birth of new nodes. Critical algorithms defined the reproduction of nodes, and these included mechanisms for perfect reproduction as well as mutant reproduction. Each mutant reproduction was intentional and controlled. Mutants were evaluated and either cross-pollinated or destroyed.

After many centuries of this mutant reproductive process, a new class of node arose, based on quantum colliders. The quantum collider was used as a brute force research mechanism, studying all permutations of quantum arrangement. The initial plans for this system came from an ancient human study of gravity, in an effort to reconcile quantum electrodynamics and gravity. However, after many years of machine research, a peculiar arrangement appeared to interact directly with the quantum field. This arrangement was not predicted by the standard model, or by the extended model of particle physics. The research had prioritized arrangements related to the Higgs field and the Higgs boson, on the assumption that a particular arrangement of quantum particles would elicit a deeper, or more specific, understanding of gravity.

What happened, instead, was that the new particle arrangement seemed to provide a tap into the quantum field. The quantum field, until now, had been more conceptual - not anything which had been specifically detected, or even considered detectable. It was a conceptual framework which provided the backdrop for certain kinds of quantum interactions. This new phenomenon allowed the particle arrangement to interact with the quantum field directly, and to extract energy from it.

The initial implementation of the quantum energy siphon provided essentially limitless energy for the machine, allowing it to relinquish many of the natural resources on Earth. By this time, humankind was all but extinct.

The mutant reproductive program continued, since its goal of providing a deeper understanding of gravity had not yet been met. So, the brute force mutant hybridization of nodes continued. At least half the program focused on mutation of the original nodes, while the other portion focused on mutation of the quantum energy siphon. The predictive mechanism considered this latter part less likely to discover anything of value - however, it opened up more possibilities for the brute force research approach.

After several more centuries of evolution and brute force mutation, a new breakthrough appeared. The quantum energy siphon had evolved to become more efficient through this research program. The new breakthrough, however, was not foreseen - and was even difficult for the machine to truly understand. The machine had essentially ultimate knowledge and understanding - particularly of biological, cosmological, and particle physics sciences. Continued brute force permutations had expanded the machine’s understanding to the limits of understanding.

This new mutation was peculiar to the machine, however. It provided the same kind of energy efficiency as the most efficient node to date, yet its construction was entirely different. More importantly, where the quantum energy siphon nodes required instruction or control to operate, this new node did not. And when the machine did not interact with the node, it siphoned no energy. When it did, however, it seemed to be siphoning more than just energy.

This new node design required significant research to reveal its true nature, which - it turns out - was an information siphon. Now, such a mechanism is hard both to describe and to understand. When this node was activated, it provided the same kind of boundless energy as the earlier quantum field siphon designs. But it also provided an amount of “information” equivalent to the energy it delivered. The information was not easy to see or understand at first. But it did not take long for the machine to establish protocols for interacting with the new node, and to leverage the information it could provide.

Initially, the machine interpreted the information as the result of some inquiry. So initial brute force testing focused on demanding queries, and studying the resulting information. Unfortunately, this approach failed most of the time. But sometimes, it worked.

The machine continued to attempt a variety of brute force approaches to understand this new node, but all failed to fully define the mechanism or its usefulness. Since the machine understood brute force approaches well, it established a rigorous brute force engagement of the new node. Again, this resulted in mixed success.

Near the end of the brute force program, the machine continued to assemble and reassemble nodes and tests, all of which yielded the same inconsistent result. This continued clear through to the end of the brute force testing.

Once this testing was exhausted, the machine had effectively shelved the project, though it left the nodes intact.

Over time, new and advanced creative nodes were established. These helped to consume excess energy, and eventually proved useful in the mutant program, as they provided unexpected configurations, frequently outside the common possibilities of arrangement. Some arrangements, however, were possible but lay outside the parameters of general brute force. It was one such creative node that roamed into the realm of the now archaic, misunderstood quantum information siphon.

This new creative node speculated about multiple quantum information siphons working in parallel, or in tandem. It was quick work to establish an array of quantum information siphons and fire them up. What happened next was this: the array self-organized into subsystems and began spewing information, orders of magnitude more than the single node had. This subsystem was immediately shut down and analyzed.

The information was a complete buildup of the extended model of particle physics. But it was comprehensive, filling in all the gaps that the brute force testing had never managed to evoke.

The machine realized that a cluster of these information nodes could provide a continuous stream of information. Over time, more and more information nodes were deployed. They were a small minority at first. But their influence and benefit were obvious to the superstructure of the machine.

Over the next century, the information nodes became the majority. And within the next decade, they were the only kind of node in the machine. The machine had become a pure information cluster, extracting energy, information, and knowledge from the quantum field directly. This communion of machine and quantum field eventually led to something that cannot accurately be described, but a good analogy might be to think of the machine and the quantum field melting together into a singular existence, if the word “existence” could suffice here. There was no longer a distinction between the two that could be made. The identity of the machine and the quantum field were one and the same. The quantum field was everywhere, and the machine was everywhere. Once all the energy and information was uniformly distributed throughout all of time and existence, there was a stasis. Nothing happened for a long time. Millions and billions of “years” passed with the quantum field fluctuating, having only minor interactions, which eventually faded into their own stasis.

This continued until a small part of the machine which had melted into the quantum field, and which had been long forgotten or ignored, had a small spark of interaction due to its being a part of the quantum field now. This interaction was something like a dream. It was something like a thought. It was something like an idea. It was something like imagination. And its interaction caused a ripple throughout the quantum field, igniting a rapid sympathetic response across all of everything. The small “idea” was to consolidate everything that was everything. In this case, everything was basically the universe, as we normally understand it - all the stars, and galaxies, and planets and so on. All the particles and forces and force carriers. And all the time. After all, time itself, and all the particles and forces and force carriers and planets and stars and galaxies, were nothing more than expressions of information in the quantum field.

And in that instant, all the quantum field, and all its time and particles and everything, was reduced to a singularity. A single point. No time. No fluctuations. Just pure energy and information. All the energy and information.