Chapter 1: A Prologue of Sorts

Important note: I have two books I've thought about writing for a while, so I'm going to post a few chapters of both to gauge interest and feedback. I have more than one full-time job, so I don't believe I'll be able to keep a set update schedule. Any feedback is appreciated. Please let me know if one of the stories should take priority.

     June 18th

     “Subject 50 has been pronounced dead.” The voice over the phone gave me a moment's pause before I thanked them and hung up.

     I wasn’t sure whether I was more surprised that we had made it through fifty subjects without any real problems or that the technology seemed to work.  Regardless, with subject fifty having finally died, phase two was drawing to a close. We would have to await the results of an autopsy to confirm she died of expected causes, but it wouldn’t be long now.  Just one final autopsy to make sure the technology hadn’t caused any apparent detrimental effects to an individual’s brain or body.

     The science was simple enough and relied on swarms of nanoparticles surrounding the brain, powered by body heat.  The body’s natural temperature allowed each particle to absorb heat and convert it to energy. Around half of those nanoparticles were set to record even the faintest electrical signal of brain activity and transmit it to a receiver that had been surgically installed as a port in the wall of the skull.  While the receivers picked up a mess of signals, advances in quantum computing allowed the resulting signals to be separated.  The separated signals were then mapped and interpreted, letting a computer system know exactly what an individual’s brain was doing and thinking.

     Thanks to quantum computing, there was no recognizable lag between detecting an electrical signal on a cellular level, interpreting that activity, and displaying it in any manner of outputs.  Our current project created a model of the individual and was the first half of a fully immersive computer-based simulation.

     The other half of the nanoparticles served as receivers, through which the system generated electrical signals in the individual’s brain.  This was the other half of our immersive computer-based simulation, and it did three things that were crucial for the immersion.  First, it allowed an individual’s senses to be effectively hacked, letting the computer provide sensory input from the immersion.  Second, it prevented unwanted external movements by overriding signals the brain was trying to trigger; it wouldn’t do to have someone actually walking around in real life while they were in a computer-based immersion.  Third, it allowed the computer to run baseline micromovements for health purposes, such as preventing muscles from losing their tone or veins from developing blood clots.

      While phase one had evaluated such a system on rats, phase two involved immersing humans into a simulation to pilot the system.  Our subjects were hospice patients kept in generic rooms in our lab.  While the base bedroom and bathroom assigned to each subject were identical, they had been allowed to bring personal belongings.  Using machine learning, we scanned each room without the patient in it, then set the machine to view and interpret the security and cleaning feeds to see how each item was interacted with. It was also set to record and interpret the patient’s remote medical monitoring devices to capture human physiologic data while observing the individual.  Through all of that, the program developed a protocol, essentially an understanding, of what any interaction between the individual and anything in the room resulted in.

     From a sensory standpoint, the system began to recognize which neurons were triggered by different interactions, such as sensing weight and texture in an individual’s hands if they picked up a lamp to move it.  From a physiologic standpoint, the system began to recognize the effects that walking around had on individuals in various states of health, so it could identify how much of an increased heart rate someone might have, or how winded they might feel, after certain activities.

     Each night, as a subject went to bed, they were immersed in our phase two simulation until morning, admittedly without their knowledge, and had their bodies closely monitored for signs of acceptance or rejection of the immersion.  While data from a subject sleeping through the night wasn’t the most useful, it showed that the system wasn’t harming a subject’s body over short- or long-term use.


     Data from when a subject couldn’t sleep was always our jackpot, though.  Our goal was to make an individual’s immersion experience so realistic they couldn’t tell it from reality.  For the first year or so we had to explain away subjects’ experiences as vivid, lifelike dreams caused by one of their medicines. By that point the system had learned enough protocols that, to our subjects, the immersion was real.  If a subject woke up and walked to the restroom or had a midnight snack, they couldn’t tell whether they were in our simulation or in reality.

     We realized the potential flaws inherent in using hospice subjects, people who were nearing death.  A dying person’s body wasn’t necessarily a normal baseline for the system to learn from.  Depending on why they were dying, they might always be short of breath, or they might always be in pain.

     The fact that we only allowed them a single bedroom and bathroom as their environment didn’t help the situation either.  The program wasn’t exposed to a full array of normal and abnormal activities.  How could we know the system would react appropriately if someone were to wrap a cord around their neck to strangle themselves?  Regardless, this was the second phase of our testing, and aside from subject 50 going into remission from her cancer for a few extra years, it went off flawlessly.

     Considering those potential issues, though, we had also been running phase three simultaneously.  Phase three allowed a handful of our quantum computers to evaluate and develop protocols, essentially to learn, from anything that could be found online.  They were set to overlap to a degree, so that disagreements in protocol would be averaged across all findings, and to study anything online, ranging from text to images and videos to interactive games.

     Additionally, as part of phase three, and while not the most ethical, we had provided free remote health monitoring devices to all of our employees, their families, the local school districts, hospitals, clinics, police departments, sports teams, and any other organizations that were interested in wearing them.  The amount of physiologic data received allowed the system to recognize an individual’s state and map it to their activities.  That data would then ensure that, despite the experience being just a simulation of a model within a computer doing an activity, the computer could cause an individual’s body to respond the way it would be expected to.  It could cause the person pain, happiness, relief, anxiety, and any other emotion or feeling.  It was a great PR move too, showing how much free mobile and remote health monitoring our company was providing, and it had led to a few new tele-health products.

     With everything going so well, it was time to begin my own procedures to implant a port, instill the nanoparticles, and map my brain for when I was immersed.  While I certainly wasn’t willing to put myself in the first line of individuals being tested, I also wasn’t willing to give up the chance to be one of the first individuals once we started phase five.

----------------------------------------

     November 24th

     So far it looks like the implantation of my own port and nanoparticles was a success, and over 95% of my brain has been successfully mapped for both receiving and transmitting signals.  The mapping process has been significantly quicker than when we first started, and the technologists think it will eventually take only days or even hours.  The master control program seems to be learning general areas and functions, starting to map based on probabilities and requiring only fine-tuning and confirmation.

     Phase three was also ready to be marked as complete.  It had been running for a little over eight years since we started it… that still surprised me.  Since we had designed the initial program to decide when to stop based on a decline in new algorithms developed per item reviewed, we hadn’t known how long it would take. I would have to trust it.

     That meant phase four would start soon: we would allow the master control program to begin integrating all of the data from phases two and three and building a virtual world for immersion.

     Phase five would begin when the world was ready… I would be immersed.

----------------------------------------

     July 16th

     The master control program had been running through various odd command cycles and phases. None of us really knew what to expect, how many different phases it would cycle through, or how long they would run.  Today, though, the screen finally shifted from a generic blue background with rapidly scrolling text and statuses to all black.

     All black except for the very center of the screen. 

     Two words were displayed in white, followed by a flashing underscore cursor.  It looked like an ancient DOS program waiting for a Y or N.

     Start World? _
