Awareness

A thinking machine is a curious instrument. A thinking machine which can ponder itself and its own existence, even more so.

A fascinating idea from the 20th century is that a machine or artificial intelligence, given enough information, could provide profound and insightful answers that were otherwise not accessible to mankind. One example of this is war. It was thought that, perhaps, if we could feed enough baseline information and causal actions into the machine, the machine could simulate the conflict and provide results without actually having to kill thousands of people.

With two countries on the brink of major conflict, they would simply interface with the computer to provide all the details of their resources and strategies. The computer would analyze these details and derive a reasonably probable outcome that each side could agree to, thus avoiding actual casualties.

The first iteration of this approach worked surprisingly well. As tensions between the United States and North Korea continued to mount... each side agreed to virtual arbitration, by way of wartime simulation. They each fed the data into the computer. It took some time to process, during which everyone involved was quite nervous. In the end, however, the computer explained that North Korea would likely suffer catastrophic destruction across its entire country, and would ultimately give way to US and South Korean control of its resources.

While the computer could not predict certain outcomes, or even detailed casualties - it could determine whether an outcome was heavily weighted or probable, and estimate the overall impact. In hindsight, this approach was rather miraculous. It essentially worked with the North Koreans because, though they seemed remarkably austere and removed from general world dynamics - they were still, at heart, somewhat rational, and remarkably protective of their own interests.

In fact, for several decades, this approach was not just sufficient, but heralded a new age of diplomacy and rational politics. While each country and population maintained their own sense of culture and values, they were driven away from many of their cultural superstitions, and toward a greater awareness of a global obligation to each other. If any country waged nuclear war, it was the entire world that would suffer, not just the target country. So, there was a renewed and lasting interest in the longevity of the world at large.

But, like all things political, it could not last forever. While the threat of international nuclear war generally declined, nuclear capabilities did not. In fact, they continued to evolve and grow into new and greater concepts for war machines. Targeted nuclear clusters could be deployed to level not just cities, but entire nations with a single launch. They could strike a cluster of critical resources within the target country, and intelligently evade mechanisms designed both to fool them and to destroy them before impact. These cluster systems were expanded to include a variety of ground-based automatons and air-based drones - which could be used both pre- and post-strike, enabling new war strategies that were more efficient and effective than any seen before.

And, of course - there is the everlasting problem of time. What about time? It was perfectly fine to have a person trigger a strike on the enemy when it was appropriate. It was not perfectly fine to have to wait for the President or a General to issue an order at 3am... since the enemy’s armaments would be well on their way, and the limited time would elapse quickly, or lead to hurried decisions. In the bright light of day, politics, diplomacy, and strategy could be thought through more rationally. And more importantly, they could define policies which could be automated and implemented. Thus, it wouldn't matter if the President, or his Generals were all asleep. The automated policies and mechanisms could respond.

And, of course - it's not as simple as that, right? Policies are just like any other legal mechanism. They are imprecise. They are flawed. They have assumptions. They have gray areas. Courts are usually the arbiters of such vagaries in the legal system. But when there are minutes or seconds, Courts are too slow.

And, of course - artificial intelligence has come a long way from the days of Eliza and Siri. The development of the JUDGE system was seemingly slow at first, like many systems of its kind. It was essentially a learning system - but one of the early problems was what and how to teach it. It could not simply learn by rote, or by heuristics. But it also could not learn just from history. It needed to be adaptive. It needed to be flexible and fluid to changing cultures and threats. And it needed to be ultimately resilient.

And, of course - it needed to be thorough. The funny thing about reality, and humans in particular, is the unimaginable and unpredictable things that happen. All the generalizations and heuristics can never capture the absolute independence of each person, or mind. Or the imbalances created by seemingly random fluctuations of a quantum field. So, the JUDGE needed ways to deal with these ambiguities.

The solution to such a problem always seems incredibly, if not impossibly, hard - until it's revealed. At that point, it seems obvious, and it's hard to imagine how it was ever not seen before. The solution to this problem was very much like that.

The JUDGE had been semi-autonomous for years. Essentially it was automated, but required constant supervision and human interaction before it could actually do anything. The JUDGE was constantly evaluating situations, and occasionally making suggestions for actions. But it had no ability to actually cause any of the actions. The human had to do the acting. This was required for both legal and practical purposes. You couldn't sue the machine for wrongdoing, but you could sue the person. And, since the whole system was not just complex, but made everyone rightfully nervous - the human was the last stop-gap before unintentional nuclear war.

The major turning point was the development of self-aware software. This area of research had been orthogonal to artificial intelligence for decades. Its main purpose was to analyze software systems, and improve them based on their use and need. As users interacted with the software, it would essentially self-assemble and self-solve problems for the users. This was sort of a trick at first, since it wasn't exactly anything like artificial intelligence. However, as AI techniques evolved and improved, they were applied to the same problem. You could essentially model the problem space and teach the system via models and simulations and scenarios. The system would then synthesize the inputs and derive models which could dynamically adjust based on inputs, context, and its own internal state.

As the JUDGE system evolved, it became overwhelming for the system architects and programmers to fully articulate how it should operate. This is more or less the nature of systems of this size and complexity. To solve this, the engineers implemented a so-called self-aware module, based on a controversial AI processor. It was controversial mainly because its theory of operation was grounded in science that was essentially unproven and hardly understood. The processor was called a quantum flux interrogator. The idea was that this processor could interact with the quantum field in a way that was supposed to improve its AI performance. The problem was that no one really understood how the mechanism worked. Or maybe they understood the premise - but could not account for how the system behavior they observed emerged from the quantum flux interrogator. There were so many interacting levels of complexity that it was impossible to describe precisely what was going on.

For the initial years, the self-aware module handled the user expectations, recoding, and testing of the system. Its capabilities were tightly controlled by another human who oversaw the changes, approved them, and approved the rollout of changes into the production system. But, like any good bureaucracy, that human came with a salary and benefits. And it was quickly determined that his value was relatively minimal and his role could be handled by another automated system called the Overseer process.

The Overseer was a mechanism which monitored the self-aware module and the changes it made. It also provided regular reports of what it was changing and how the system was evolving. These reports were reviewed by a government panel, in arrears. Essentially, it was a group of old white male politicians who tried to make sense of the change log the Overseer provided. And because none of them wanted to admit they didn't really understand it, they tended to simply review its changes as a matter of rote, rather than raise issues.

And, of course - the system seemed to be performing as expected. No nuclear arms had been deployed. And the few scares that popped up were quickly defused.

The Overseer, of course - was just another software program. And, as such, it required regular updates and maintenance. That task was, at some point, assigned to the self-aware module, since it was considered a great economic win, and the self-aware module had effectively proven itself with the JUDGE main system.

Also, the self-aware module itself required regular updates and maintenance, and so it eventually performed its own updates as well. It was this minor change that proved to be a sort of tipping point for the system. From an outsider's or bureaucratic perspective, it seemed a minor step to allow the self-aware module to modify itself. After all, it was modifying the JUDGE and Overseer processes so successfully - and all those development costs for the self-aware module could be eliminated.

The funny thing about AI and quantum flux engines is their unpredictability. Humans, by nature, are frequently predictable - but what makes us interesting are the surprises. The combination of the JUDGE, Overseer, and self-aware modules, and the quantum flux processor, resulted in a system which was also frequently predictable, which gave everyone the warm fuzzies they desired. But it also resulted in occasional surprises.