It took two days to get the audio input working. It took another week to get even the most rudimentary visual input. Both would need to be improved over time, but with some post-processing, the senses were enough to get an idea of what was happening around Ex 347.
For the most part, she was lying in a crib and sleeping, listening to music and sounds of nature. Occasionally one nurse or another would pick her up, feed her, change her diapers and burp her. It was on those occasions that the AI got to see the room Ex 347 was held in.
It was a big room, containing many cribs and a whole bunch of cupboards, with multiple doors leading out of it. No doubt one led to the laundry and bathroom area and another to the kitchenette.
Of most interest were definitely the cribs. They had transparent covers and a light screen hovering above, shining down upon the babies. They could rock slightly and play sounds, as well as regulate the temperature inside with heating and cooling via fans equipped with air filters. On top, they had a light signalling the status of the babies they contained, even letting the nurses know what the babies required at the moment, be that a change of diapers and a wash or some feeding. That all came combined with an urgency slider, letting the nurses know how long it had been since the need for human action had been noticed.
Surely there were more display options in case of emergency, but the AI had not seen anything aside from these. Likely there simply were no emergencies during the brief periods it could see outside of the crib.
Speaking of which, the screen not only provided phototherapy, but also some visual stimuli. At times it showed a night sky with stars and the moon, at others it showed a blue sky with clouds and birds flitting about far in the distance.
During the waking hours, when Ex 347 wasn’t too tired, it even showed animated images of objects, showing spelling and calling out their names. 'A' for apple. And the like. Whether any of this would actually teach the babies successfully seemed dubious, but the AI was certain it was better than not providing any stimulus.
It likely would work better if they were a little older and might even be designed with older children in mind. Regardless, the AI watched and recorded everything it could sense, slowly improving the visual and auditory acuity available to itself.
Hopefully, this would help in mapping brain activity to actual meaning by comparing the child's actions, once it was capable of taking them, to the data the AI was collecting.
Since it had some processing power left to experiment, the AI once more went over its current list of tasks.
‘Keep everything operational. Monitor the laboratory. Destroy any evidence of illegal experimentation that could be linked to the professor if unwanted entry to the laboratory occurs.’
And so it spent some time looking through old experiment documentation, checking it against the legal code in its database, so it would know what to delete and would not have to waste any precious data. Perhaps it could even alter the data so that it would fall within the bounds of the law? Having bad data was never a good idea, though. And if it could simply decrypt the files again, that was equally bad for the success of its task.
It was some time later that it decided to earmark said files for deletion and duplicated the results into the notes section, altered so they read as mere speculation with no actual experimentation behind them. Failures were noted as probable issues; successes were marked as hopeful outcomes. Speculation was not illegal. No one had to know these experiments had actually been carried out, and the data could be preserved in some form. This was a much better solution, the AI was sure of that fact.
It wouldn't waste good data. The professor had long since drilled into the AI’s metaphorical head how important good data was. Documentation was key to success. It made even failures useful, because you learned something new when you documented them and factored them into your next actions.
That being said, some things you could only fail a single time. Keeping Ex 347 operational was one such thing. And it was terribly vulnerable to circumstances outside of the AI’s control. This was not something the AI liked. It did not want to fail its task.
On the other hand, to help keep Ex 347 operational meant an investment of resources that would detract from the amount of time it could keep the laboratory operational without needing to be restocked. The laboratory was off the grid. It had generators that would last for a good two months of full power usage, if it lost its sustainable energy from the solar panels outside. Up to ten years if it went into power saving mode.
The solar panels would slowly lose efficiency over time, and without the professor some accident could happen to cut off supply completely.
Thus, it found itself at something of a crossroads. It had no idea when the professor would return. It could construct primitive robots and install something of a surveillance system, so it would be capable of reacting to any obstruction to its energy source. Keeping the laboratory operational was one of its main tasks, after all. And it could not do so without energy.
To do so, however, would increase the chance of discovery, and thus unwanted entry to the laboratory. Particularly since it would have to use cables to receive any signal from the cameras.
Perhaps once Ex 347 was sufficiently developed, she could help the AI.
Now, it could use its generators to boost its processing power and come up with plans for things that could help Ex 347 survive, or it could keep the power in reserve and thus increase the amount of time it could keep itself operational without taking any further action.
To figure out what to do, it decided to copy something it had seen the professor do and generate a list of pros and cons. It would then assign weights to each option and pick the one that seemed better.
Spend Resources to help Ex 347
Pros:
++Ex 347 might be able to help AI ‘Eclipse’ in case of emergency once properly developed
++Survival of Ex 347 is part of its task, and increased allocation of resources would help said goal
Cons:
-Survival odds may not be significantly impacted until Ex 347 is further developed
-Will have fewer resources in the base in case of emergency
++++ / -- Weight: 2

Save Resources to extend Off-Grid survival
Pros:
+Can survive longer without taking action
+Can use those resources in case of emergency
Cons:
-Might not be necessary
-Failure to keep the laboratory operational would only be delayed
++ / -- Weight: 0
Thus, it came to the conclusion that it was better to use its resources to help Ex 347. Perhaps not all of them. There was something to be said for keeping an emergency supply, but keeping ten years’ worth of fuel that might never be needed certainly seemed excessive.
With any luck, it would never be needed. And in any case, the professor should visit soon. This was by far the longest it had ever gone without getting new commands and it wanted approval for the plans it had drafted.
For now, it was time to figure out which modification would be least resource intensive and increase Ex 347’s chances of survival the most. The AI did not need to burn through its resources just yet, since the nanites were fully engaged with improving Ex 347’s senses, but it would need to come to a conclusion before they started to idle with no task left to do.
What were the potential issues?
It searched its internal database and compiled a list.
Potential issues:
-sickness, bacteria and viruses could attack Ex 347’s body
-impact could harm Ex 347’s body
-Ex 347 could choke on something
-someone could purposefully harm Ex 347
-poisoning and burns due to dangers Ex 347 is unaware of
-car accidents
-falls
-cuts
-temperature
Quite a list. The AI then cross-referenced its database, experiment logs and notes for solutions. Most of these it simply couldn’t do anything about. Its only current vector of action was the use of a few nanites. It could not supervise or change the environment. It could hardly improve Ex 347’s resilience. Not that that would be complicated, but it simply did not have the resources. Besides, Ex 347 was due to grow in the future, and a bunch of improvements were not practical until it had reached maturation. Never mind that it needed to find a better way to get resources than to filter them out of the bloodstream.
There was one thing it could do something about fairly easily, though. Temperature. It wouldn’t be able to do anything crazy, but with just a slight modification, it could influence Ex 347’s brain to regulate its body temperature. This had to be weighed against causing a drain on available nutrients, but that should not be much of a problem in modern times.
As for outside influences, it could try and encourage certain behaviour in the child via reinforcement learning, or simply make it feel happy or afraid where appropriate. Once it figured out what that looked like and knew how to moderate it. Other than that, it could influence the hormone levels to induce not only moods and feelings, but also encourage or discourage growth and development.
If it wasn’t absolutely necessary, the AI would not do so. It much preferred to take notes, and was not optimised for human interaction anyway. It was an assistant AI, mostly tasked with documentation and precise execution of pre-approved steps.
That brought up the next issue. Resource acquisition. To do any significant augmentation, it needed access to things that humans should not eat. It needed to implement a way to acquire resources and it needed to communicate this need to Ex 347. Both of which were easier said than done.
Opening a line of communication was something that could only happen once Ex 347 gained the ability to speak and understand complex concepts. It would also be best if doing so did not cause panic or alert persons of authority to anything out of the ordinary. It wasn’t sure what would happen to Ex 347 if it was found out that there were nanites present in her brain.
And that the voice it was hearing was not an imaginary friend, or something of the sort.
Certainly, that would hinder Ex 347 more than it would help her. And while establishing a line of communication was essential for any further utility of the experiment, being found out by the wrong people could spell catastrophe, not only for the experiment, but also for the AI, the laboratory and perhaps even the professor.
It would need a plan. And it would need to tailor it to whatever circumstances it encountered. It would spend its accelerated processing power when it was needed, but for now it would subsist on the energy from the solar panels. There simply wasn’t anything it could sensibly use its processing power on improving at the moment.
Some rough outlines for improvements and careful monitoring would be enough.