Can AI think? The short answer is no, at least not in the way humans think. AI does not have incentives, opinions, or empathy. Even two-year-olds possess something that our artificial systems lack - the capacity to think in terms of cause and effect, according to Peter Gärdenfors, professor of Cognitive Science at Lund University.
Since ChatGPT was introduced to great fanfare in 2022, the debate around AI as a future threat has intensified. Researchers such as Nick Bostrom, Max Tegmark and Olle Häggström argue that it is possible, perhaps even likely, that future AI systems will start beating humans in most aspects of cognition - what is known as Artificial General Intelligence, or AGI. While AGI would be able to manage all kinds of cognitive tasks, the "narrow" AI that exists today is focused on specific ones, such as playing chess or analysing data.
Peter Gärdenfors is a professor of Cognitive Science at Lund University in Sweden and has been interested in thinking, and in AI, for more than fifty years. He finds the rapid development surprising, and points to several programmes in the field of medicine that could lead to major breakthroughs.
Nevertheless, he argues, there is a long way to go before we reach AGI. Even the most advanced AI systems around today are highly specialised and lack the breadth and flexibility of human intelligence. They are also entirely dependent on us: if we were to switch off an AI system, or refuse to cooperate with it, it would not be able to carry on working independently.
"A two-year-old is capable of many things that an artificial system is not. Causal thinking, for example - understanding that this action will have that consequence. That is something that young children learn in preschool. What happens when you bite your friend, or if you say certain words. So, a two-year-old is better at causal thinking, says Peter Gärdenfors.
Young children learn a lot in their early years by falling, building with blocks, and throwing things. This is known as "embodied cognition." A two-year-old can understand why someone is waving or why someone is swatting away a fly, while AI only "sees" two different hand movements. AI has no capacity to interpret social signals or understand intentions. That is why sarcasm is particularly difficult for AI systems.
AI also lacks creativity, feelings and adaptability, even if programmes can simulate them. A two-year-old quickly learns new things and adapts to new surroundings through play and experience, while AI is limited to the data it was trained on.
"Even if AI systems have done well in tasks such as reviewing mammograms with precision, they lack judgment. That is why human supervision is required, so that they don't go wrong in strange situations that they have not been trained for," argues Peter Gärdenfors. This might, for example, be the case when reviewing unusual images that deviate from earlier patterns.
The systems become backward-looking because they work exclusively with material they have been trained on, something that limits their creativity and their capacity to deal with completely new situations. When highlighting the successes of an AI system or robot, it is easy to focus on the problems the system manages to solve, and perhaps be impressed by this, while forgetting all the things it cannot do, Peter Gärdenfors argues.
"Artificial systems manage routine situations much better than we do. But the systems have more difficulty in dealing with new problems. A pilot who finds themself in a new situation is often able to deal with it based on judgment and prior experience.
What do we mean by intelligence?
It might be misleading to use the concept of intelligence at all, regardless of whether you are talking about humans, animals or artificial systems.
"The concept of intelligence is inane and limited. I prefer the concept of common sense. People think that you measure intelligence using an intelligence test. IQ tests are very narrow, and they don't measure how people act in the world. They are also very dependent on education: if you are highly educated, you raise your IQ," says Peter Gärdenfors.
Women performed better than men on the first IQ tests. The mathematical element was therefore increased and the language element reduced, since women tend to be stronger in language, while men are generally stronger in maths. The whole point was for the test to show that men and women are, on average, equally intelligent. When it comes to IQ tests, it is no problem for an AI system to get the highest scores - provided it is allowed to train on similar material.
Exaggerated risks
So, the risks of AI are not so much to do with the systems becoming too intelligent, but about people using them in the wrong way, argues Peter Gärdenfors.
"If AI is used irresponsibly or for destructive ends, it could be dangerous, but ultimately it is humans who determine how the systems are used.
The fact that the systems train on data that may be racist or sexist - since such distortions exist in the underlying material - is a problem, one that today is managed through manual review. Human common sense is needed to correct what AI has learned. Another example is disinformation. Fake news, however, exists regardless of artificial intelligence, Peter Gärdenfors argues, and the systems that are already affecting our decisions and commanding our attention are simple ones.
"The risk is not that AI and robots become too intelligent, but rather that we humans become too stupid. We are already steamrollered by many of these systems when we let them make choices on our behalf. It does not take an advanced system for YouTube to choose what video to show us based on previous interests, and to make a selection that is more and more spectacular," says Peter Gärdenfors.
Another problem that has been raised in the debate is that AI is going to put a lot of people out of work. This, however, is nothing new: jobs have always disappeared when a technological breakthrough changes the playing field.
"Those who worked lighting gas streetlamps became unemployed when electricity arrived. But there are more electricians today than there are gaslighters. All new technologies take away jobs, but they also create new jobs," says Peter Gärdenfors.
But why, then, do people feel sympathy for four-legged robots, christen their lawnmowers and become friends with ChatGPT? And why do we say that a computer is thinking when it is slow to give us a response?
We humans like to read far more into the way our machines and pets behave than is actually there. Yet a computer cannot think. Unlike animals, it has no consciousness or intentions. Chat systems do not understand what they are writing; they are not friends; they are merely simulating how people produce language.
"We read too much into our pets' behaviours and far too much into that of robots. All pet owners exaggerate their pets' abilities. We have a similar view of ChatGPT. We treat the programme as if we were chatting to a human. That is our own fault. We are too uncritical," says Peter Gärdenfors.