In 1950, Alan Turing, the father of artificial intelligence, introduced the famous Turing Test: if a machine can converse with humans over a teleprinter without being identified as a machine, it can be considered intelligent.
In 1956, a group of young scientists, including McCarthy, Minsky, Rochester, and Shannon, convened to discuss issues related to simulating intelligence with machines, introducing the concept of "artificial intelligence" for the first time.
Since then, artificial intelligence (AI) has developed rapidly and worked its way into nearly every aspect of everyday human life.
The ultimate goal of AI is to create a machine that thinks like a human. However, achieving human-like intelligence in machines has been a challenge for decades. Countless scientists have approached this from different angles, resulting in diverse theories and schools of thought.
Traditional AI is divided into three major schools: Symbolism, Connectionism, and Behaviorism. After extensive debate and practice, these schools have continued to evolve and integrate.
Symbolism holds that AI originates in mathematical logic, using formal logical symbols to replicate human thought processes through extensive "if...then..." rules.
Connectionism asserts that AI derives from biomimicry: intelligence arises from the connections among neurons, and the goal is to simulate the neural networks and connection mechanisms of the human brain with computers.
Behaviorism maintains that AI stems from cybernetics, relying on control systems built on perception and action to let basic units optimize and adapt on their own.
The Gats Computational Neuroscience Center in Longland, the capital of the Great Kingdom, is a renowned hub of Connectionism. Founded by Jeffrey, the father of neural networks and the godfather of AI, it focuses on deep learning, an approach that uses multi-layered neural networks to carry out machine learning.
Stanley Lam, a visiting scholar, had studied at the Gats Center for more than two years, gaining deep insight into cutting-edge technologies but also accumulating ever more questions.
Technological progress has driven long-term human advancement, but overcoming technical challenges often involves struggle. Some technological laws and truths seem to preexist, waiting to be discovered, while others defy explanation despite generations of wisdom.
Stanley's greatest takeaway was the realization that humans should always maintain a sense of awe toward the unknown.
Today marked the end of Stanley's visit, and he returned to his dormitory to pack for his trip back to China. He had promised his work unit that he would continue an unfinished project upon his return.
"Lam, packing up?" asked Great Wei, biting an apple and wearing flip-flops as he walked in.
"The plane leaves for home this afternoon. What about you? Aren't you coming?" Lam glanced at Wei while continuing to pack.
Great Wei had come to the Gats Center four years earlier to pursue a Ph.D. in computational neuroscience. Having developed a new model simulating human memory, he had become an academic star and graduated this year. Over the past two years, their shared background and interests had forged a deep friendship.
"No, you go ahead. There are some breakthroughs at the center. They're trying to solve Gödel's theorem problem. I want to stay and see," Wei said seriously, patting Lam's shoulder with a smile. "Go back and do well. When you succeed, take me back with you. I'll work for you, you bastard."
The June sunshine was intense. Although it was afternoon, the roads were empty and quiet. At the roadside, Lam took his luggage from Wei, tossed it into the trunk, told Wei quietly to take care of himself, and got into the car bound for the airport.
In the car, Lam, slightly dazed from the sun, slowly came back to himself, and his thoughts began to drift.
Wei had mentioned Kurt Gödel, the famous mathematician who proposed the incompleteness theorems in 1931. The first theorem states that any consistent formal system that contains first-order predicate logic and elementary number theory includes propositions that can neither be proved nor disproved within that system.
The second theorem states that if a system S contains elementary number theory and is consistent, then the consistency of S cannot be proven within S itself.
Applying Gödel's incompleteness theorems to AI suggests that every computation-based AI system has fundamental limitations. The Oxford philosopher John Lucas argued, on the basis of the first theorem, that a machine cannot possess a human mind.
Nobel laureate Roger Penrose drew on the second theorem to argue that the mind is not computable in the Turing-machine sense, and he regarded the incompleteness theorems as the dividing line between weak AI and strong AI.
If the Gats Center makes a breakthrough in this area, AI could truly be considered intelligent. But what exactly is this intelligence, and what does it represent? Can robots really think like humans?
Lam shook his head, feeling uneasy but unsure why. As he looked out the window, he realized something was wrong.
This was not the usual route to the airport. He had traveled it many times over the past two years and could not be mistaken. Lam cautiously asked the driver whether they were going the wrong way.
In response, the driver suddenly accelerated. Caught off guard, Lam was thrown heavily onto the back seat.