There's a mad dash to create a working artificial intelligence, though what that actually means is still pretty vague. In 1950, Alan Turing proposed what is arguably the first set of criteria for judging artificial intelligence. These criteria are evaluated in what is called a Turing Test. The essence of the test is to have a third-party judge converse, via text chat, with both a human and a machine. The judge knows that one of the parties is a human and the other a machine. If the judge cannot reliably determine which is which, then the machine is said to have passed the test.
The so-called imitation game was, for its time, a remarkable concept, something like imagining a way to travel to the moon when all you have are horses and carriages. But the concept, however intriguing, is fairly short-sighted and even trivial by modern computing standards.
Advances not only in computing and information theory, but also in neurology and the study of consciousness, have produced dramatically new ideas in machine learning and machine intelligence. Most current concepts of intelligence center on learning in the context of neural networks. A neural network is basically an interconnected set of neural nodes that can build and strengthen connections by firing together.
You can imagine how this works in a fairly simple way by thinking about a checkerboard: eight squares by eight, with alternating black and white squares. Now, imagine you touch the squares in all four corners, and when you do, two things happen. First, all four squares light up. Second, all four squares make a connection to each other. Now imagine you touch the squares again. Again, they light up, and this time their connection grows stronger. Next, imagine you touch just one of the corners, and it lights up. This causes the other three corners to also light up, but not as strongly as they did earlier. That is the most basic concept of how a neural network operates: a set of nodes that can fire (that is, light up) and build or reinforce connections.
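If it helps to see that in code, here is a minimal sketch of the checkerboard in Python. It is purely illustrative: the touch and recall functions and the connection increments are my own stand-ins for the fire-together, wire-together idea, not any particular learning algorithm.

```python
# A toy version of the checkerboard: nodes that fire together build and
# strengthen connections (a crude Hebbian rule). All names are illustrative.
import numpy as np

N_SQUARES = 64                        # one node per square on the 8x8 board
weights = np.zeros((N_SQUARES, N_SQUARES))
corners = [0, 7, 56, 63]              # the four corner squares

def touch(nodes, strength=0.5):
    """Fire a group of nodes together, strengthening every mutual connection."""
    for i in nodes:
        for j in nodes:
            if i != j:
                weights[i, j] += strength

def recall(node):
    """Fire one node and report how strongly each other node lights up."""
    return weights[node]

touch(corners)                        # first touch: connections form
touch(corners)                        # second touch: connections strengthen
glow = recall(0)                      # now touch just one corner...
print(glow[[7, 56, 63]])              # ...and the other three light up: [1. 1. 1.]
```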
This model of neural firing and connections is based on our understanding of the brain's neural configuration. And for some specific tasks, such a neural network can function pretty well as a learning machine. In a learning machine, you have a large neural network (much larger than a 64-square checkerboard). Next, you must train, or educate, the network. This is done by feeding the network a model or set of models. The models represent different configurations of how the nodes in the network should fire; training does the job of touching the squares in the checkerboard example, thereby creating the connections in the network. Thus, the neural nodes and their connections become a representation of the information in the models. Often this is done with basic logical definitions. For example, the first model might teach the network that A + B = C. The next might teach it that A + D = C. Because the network is effectively a logical induction machine, it can then infer that D = B without having to be taught that explicitly. And this, my friends, is what artificial intelligence and machine learning mean in the current world.
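To make that induction concrete, here is a small sketch, again my own construction rather than anything resembling a real training system: treat A, B, C, and D as adjustable values, teach the network only the two facts, and see whether D = B falls out on its own.

```python
# A hedged sketch of the induction step: learn values for A, B, C, D from
# just the two taught facts (A + B = C and A + D = C), then check whether
# B equals D even though that was never taught directly.
import numpy as np

rng = np.random.default_rng(seed=0)
vals = {name: rng.normal(size=4) for name in "ABCD"}   # random starting values
facts = [("A", "B", "C"), ("A", "D", "C")]             # the taught "models"

lr = 0.1                                               # learning rate
for _ in range(500):
    for x, y, z in facts:
        err = vals[x] + vals[y] - vals[z]              # how far x + y is from z
        vals[x] -= lr * err                            # nudge each value to
        vals[y] -= lr * err                            # shrink the error
        vals[z] += lr * err

print(np.allclose(vals["B"], vals["D"]))               # True: D = B was inferred
```

The equality of B and D was never taught; it emerges because it is the only way both taught facts can hold at once.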
Not to say that such a mechanism isn't incredibly useful, but it is not what most people think intelligence is. Consciousness is required for true intelligence. Id, Ego, and Super-ego are probably also required. So are an appropriate set of senses and inputs, and an appropriate set of ways of expressing output. Human understanding, based on current neurology, is extremely complex and layered. The way the conscious brain assembles and assimilates inputs resembles a computer only in a highly superficial way.
Here's a good example to think about. When you sleep and dream, your brain... your neural network... is firing away. It is triggering memories of recent events, which trigger other memories, and so on. When you are awake, this process of remembering is controlled and filtered, and you know the difference between the memory and your current reality. In the dream state, however, it's different. You don't know it's a dream. And, strangely, the rules of consciousness don't apply. You can move from one scene to another. A person can be one person in one moment, and then immediately another person. Things you know to be untrue or impossible are possible, and happen in your dream. And while it is happening, you believe it. This is because when you are dreaming, essential parts of your consciousness are shut down, and you're left with something like the random firing of your neural network.
What is particularly interesting about this is that your brain makes sense of, and understands, the things that happen in your dream, even if they would be nonsensical in your waking mode of thinking. A computer, on the other hand, tends to fail when it can't "understand"; it crashes when the program stops making logical sense. The computer cannot understand what it means for 2 + 2 to equal 5, but your brain, especially while dreaming, can readily accept this. This is because your brain always understands. It always has a way of assembling a coherent story from the inputs and neural firing.
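As a contrived toy (my example, and a deliberate exaggeration of how real software fails): a rigid program simply halts when handed a contradiction, where the dreaming brain weaves the contradiction into the story.

```python
# A rigid program rejects a "fact" that contradicts its arithmetic outright,
# rather than absorbing it. Purely illustrative.
def accept_fact(a, b, claimed_sum):
    """Accept 'a + b = claimed_sum' only if it is arithmetically true."""
    assert a + b == claimed_sum, f"cannot understand {a} + {b} = {claimed_sum}"
    return claimed_sum

accept_fact(2, 2, 4)   # fine
accept_fact(2, 2, 5)   # AssertionError: the machine rejects 2 + 2 = 5 outright
```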
But that's not the only thing. What about interactions? Intelligence isn't just having a book or reference of knowledge. A database isn't intelligent. An intelligent person interacts. He or she listens. He or she responds. He or she considers. He or she analyzes, and potentially synthesizes new, related ideas. Where is the mechanism in artificial intelligence that addresses these things? What about emotions? Can the machine understand the emotions in the context of a conversation? Shouldn't an intelligent machine refrain from speaking coldly about death when its conversational counterpart has just lost a loved one? This may seem superfluous to using machines to solve problems, and perhaps it is when the problem can be reduced to a finite set of logical models and outcomes. But real-world problems are much more complex. Problems of people are much more complex.
And it doesn't end there. The way a brain works is very specialized to the individual. Your brain doesn't work like mine, and vice versa. Your neural network, while physically similar to mine at birth, is very different from mine, in that the nodes that have fired and connected to each other have done so in a way that is unique to you and your experiences. You couldn't take my eyes and ears and such and connect them to your brain and expect to be able to control my body. You couldn't interconnect our two neural networks and expect them to have any mutual understanding. It would be like writing a book that consists of Arabic and Chinese characters intermingled to make words: essentially, gibberish. Because each network is unique, there's no Rosetta Stone to translate between them.
But worst of all is the idea that you could create such an intelligence with different inputs and outputs and streams of network excitation. Terribly, such conditions do exist in humans. There are conditions where inputs are routed wrong or interrupted. Where outputs are routed wrong or distorted. Where processing doesn't happen as expected. These are generally characterized as mental disorders, and frequently there is a physiological cause. The thing is, the human brain has co-evolved with the rest of the human body, and is therefore highly co-dependent on it. So you cannot take the neural network model of memory and "thinking" out of the brain and expect it to work on its own. And worse, if you were to have multiple streams of input trying to create excitations in the same neural network, wouldn't the streams have to interact and create interference? In the same way a human can't learn calculus and French at exactly the same time, the machine also cannot. The best case would be confusion; the worst case is some kind of artificial insanity.
Imagine a human, but with unlimited neural capacity. But also limited types of sensory input. And delayed feedback. But also many parallel streams of the same kind of input (not just two eyes, but something like a million eyes and a million ears). And scatter those eyes and ears all over the Earth. What would that human's subjective experience be like? What would constitute "consciousness" for this human? How would they synthesize all of the sensory input, even if they had the capacity to synthesize it? What would amount to understanding? What kind of output would you expect? The closest model I can imagine is something like a million-fold schizophrenic. Instead of two or five or ten personalities, what if there were millions of parallel personalities, working from a mix of the same inputs and memories? Obviously, the subjective experience of such an entity would be unrelatable for most of humanity. But what about objective experience and understanding? Would such an existence provide any better insight into the mysteries of the universe? Of why anything exists, rather than nothing? Of where everything comes from, and where it's going? Would it understand how to solve world hunger and world peace? Or would it be like humans trying to explain such concepts to ants and earthworms? Or would such an existence be missing some critical feature, such as emotions, or an amygdala, and ultimately be a giant, universal psychopath?