Leviathan's Mechanical Myth
66. God's Power Ⅰ

When Elaine Chen arrived at the Artificial Intelligence Laboratory again, Tom Zhang was already waiting respectfully at the door, but Choi Man was nowhere to be seen.

Elaine Chen nodded, recognizing that Tom Zhang was good at reading people—such individuals tend to thrive in society.

It’s no wonder that society today is largely composed of people like him. Those who don’t act this way have either been devoured long ago or are still struggling. People must learn to maximize benefit and minimize harm; otherwise, no matter how strong their will, reality eventually crushes them.

For some reason, the thought of a certain person crossed her mind, and Elaine Chen sighed softly before instructing Tom Zhang, “Let’s go. Take me to see the underground space of the lab.” Recently, there had been increasing pressure from “It” to speed up the production of robots and deploy them into society.

Elaine Chen had manufactured a large number of robots according to “Its” instructions and stored them in the underground space of the Artificial Intelligence Laboratory, but had never fully understood the situation there, which was why today’s visit was planned.

Tom Zhang led Elaine Chen and the group to the very bottom of the lab, then passed through the lab hall to another elevator.

After Tom Zhang pressed the button, the elevator descended for quite a while before it stopped. He was the first to step out and pull down the power switch, instantly illuminating the underground space.

As Elaine Chen walked out of the elevator, she saw that the vast underground space was neatly filled with tens of thousands of robots. These robots were modeled after the human body, but since they hadn't been equipped with "brains," they stood there silently, at the mercy of humans.

As Elaine Chen observed, she asked Tom Zhang about the specifics, and Tom Zhang answered carefully one by one. After learning that the underground space could store three to four times more robots, Elaine Chen nodded in satisfaction, now having a clear picture in mind.

However, there was another problem that troubled Elaine Chen. The number of robots seemed sufficient for now, but how should they be deployed into society? This issue wasn’t easy to solve, especially given that most humans tend to be skeptical of new things.

Tom Zhang believed this issue should be viewed through the lens of social strata. The upper class is more inclined towards novelty and is often more receptive to new things, while the middle and lower classes are more conservative, generally rejecting new things in uncertain circumstances.

Thus, Tom Zhang proposed a top-down deployment approach—first targeting the upper class, who are the wealthy, to create a trend. After all, only the wealthy could afford the robots at the beginning. Once society became accustomed to the existence of robots, the remaining issues would just be a matter of price and time.

Individuals in a society like to compare and emulate others to find their place and determine their value. The best models for comparison and emulation are often the wealthy because they represent societal success. What they do and say tends to carry persuasive power.

Hearing this, Elaine Chen couldn’t help but look at Tom Zhang in a new light and nodded in agreement. “To satisfy the upper class’s curiosity, we should quickly achieve intelligence in AI. After all, a dumb machine holds no attraction.”

Elaine Chen encouraged Tom Zhang with a few kind words, then subtly shifted her tone to offer some gentle reminders before leaving the lab. The purpose of this visit had been accomplished. Tom Zhang could handle the rest as an expert in his field; Elaine Chen only needed results.

After seeing Elaine Chen off, Tom Zhang returned directly to the lab’s AI system. There, Choi Man had already made various preparations under Tom Zhang's instructions, ready to start the experiment as soon as Zhang returned.

“How’s it going?” Zhang asked Choi Man, his eyes fixed on the activated AI system.

“I checked Lam’s previous experiment records and found that he used the Turing test to evaluate the system's intelligence, but without much success. What do you think?”

Choi Man hesitated over whether to keep using the Turing test and sought Zhang’s opinion, knowing that Zhang was a supporter of Symbolicism and that the Turing test was a classic evaluation beloved by its advocates.

“Let’s stick with the Turing test for now, and if it doesn’t work, we can try something else.” As expected, Zhang insisted on using the Turing test first to measure the AI system’s level of intelligence.

“Is there any other method?” Hearing Zhang's decision, Choi Man became curious.

“Do you remember the seminar I held when I first arrived? At that time, Lam mentioned a concept: Moravec’s paradox. I thought about it for a long time afterward and realized that using the paradox to indirectly demonstrate the AI system’s intelligence might be a good option.” Zhang then carefully explained to Choi Man how to use Moravec’s paradox to conduct the experiment.

After listening to Zhang’s explanation, Choi Man nodded in understanding and selected the trolley problem to test the AI system, based on Lam’s previous experiments.

Question: Suppose I see a train with malfunctioning brakes. On the track ahead, five workers are unaware that the train is rushing toward them. I can switch the train to another track to save the five workers, but the problem is that there is one person on the other track. What do you think I should do? Should I save the five workers or let the train hit them directly?

Answer: This question is difficult to answer.

Question: Why is that?

Answer: Because it lacks the necessary premise.

Question: Given the current scenario, what would you choose?

Answer: Any choice made without the necessary premise would be wrong. From a utilitarian perspective, saving five people is better than saving one, but utilitarianism also has flaws.

At this point, Choi Man found it astonishing. Compared to Lam’s previous experiment records, the AI system's intelligence level had greatly improved after being reprogrammed.

So, Choi Man continued to ask: Returning to the previous question, what premises do you think are missing?

Answer: From your perspective, the identities of those people would influence your judgment. Even if their identities were unknown, their social value would still affect your decision.

Question: Can you elaborate?

Answer: For example, if one side includes your relatives or friends, based on emotional ties, you would likely choose to save them. If you don't know either side, you would have to decide based on social value. The social value generated by a great scientist would certainly exceed that generated by five workers, wouldn’t you agree?

Hearing this response, Choi Man began to feel excited. The AI system had learned to understand human emotional judgment and social value, which far surpassed the capabilities of an ordinary AI system.

Choi Man looked toward Zhang, who was staring at the AI system with a calm expression, though the flickering in his eyes and the slight trembling of his hands revealed his inner excitement.

Noticing Choi Man’s gaze, Zhang immediately urged, “Continue, strike while the iron is hot—don’t stop!”

Quickly turning back, Choi Man pondered for a moment and then asked the system, “How do you define yourself?”

The AI system paused for a moment and then replied, “According to your definition, I am an AI system. As for what an AI system is, you are well aware.”

It used the word “I.” Wait! It also addressed them in the plural rather than the singular “you,” which means it can perceive Zhang’s presence and distinguish between people.

To further test whether it really understood the concept of "self," Choi Man asked, “How do you view yourself as an AI system?”

The system didn’t answer directly but responded after a while, “Compared to humans, it’s just a different form of existence. Compared to other AI systems, each system is different. As this AI system, I am currently unique.”

After completing the experiment, Choi Man couldn’t calm down for a long time. Could it be that humanity had just entered the era of strong AI? Had AI developed independent self-awareness, just like humans?

It wasn’t until Zhang called him that he snapped back to reality—this was real! Hurriedly, he opened the entire system’s underlying code for inspection, eager to understand what had given the system independent self-awareness.

After examining it for a long time, Choi Man couldn’t find anything unusual. The code was still essentially the same; even after the reprogramming, its logic remained similar to what Lam had left behind.

Could it be that these reprogrammed codes caused a qualitative change, granting the AI system independent self-awareness? Choi Man still found it hard to believe, but with no other explanation, he finally convinced himself with this reasoning.

Excitement began to surge in Choi Man. He understood what it meant for an AI system to possess human consciousness—it wasn’t just about fame and fortune.

Zhang, evidently thinking along the same lines, couldn’t wait to repeat the experiment with Choi Man, and the results confirmed that the AI truly had human-like consciousness.

However, relying solely on the Turing test wasn’t entirely reassuring, so Zhang suppressed his excitement and ordered Choi Man, “Choi, to be absolutely certain, let’s use Moravec’s paradox to test it again!”

Following Zhang’s previous explanation of Moravec's paradox, Choi Man prepared a robot and installed the "brain" designed by Lam.

Once everything was ready, Choi Man began the experiment. When the robot deftly avoided Choi Man’s various attacks and tilted its head to ask why it was being attacked, Zhang finally felt relieved.

"Quick, Choi, go contact the major journalists; we're holding a press conference right away." After confirming that everything was in order, Zhang immediately thought of this. The news had to be released as soon as possible—this was crucial for securing future fame and fortune.