Is AI Stupid?
Last updated: April 20
Answering this question presupposes, of course, a definition of intelligence.
It is generally assumed that intelligence is the capacity for reasoning and an aptitude for grasping truth. Intelligence can be defined in many other ways (such as, for example, the capacity for creativity and critical thinking), but let us consider this usual and primary sense of the word. According to Hobbes (1588-1679), reason is nothing but reckoning, that is, adding and subtracting. In this narrow sense, intelligence would be equivalent to numerical computation. Today, thanks to its superior quantitative, computational and analytical capabilities, AI outperforms humans in many tasks, excelling in one-on-one games like chess, Go and even poker, and making decisions beyond human comprehension. So if the concept of intelligence is reduced to calculation and logic, in a Hobbesian way, we must conclude that AI is not stupid but actually rather clever. It would even be infinitely smarter than humans. This observation applies even to so-called weak AI.
However, it must be admitted that AI does not always act intelligently. While it is not possible to provide an exhaustive list of its mistakes, let us cite a few cases where AI systems, however reputable, have made unforgivable blunders:
— The Google Photos application labelled photographs of a Black couple as gorillas (2015). The image recognition software used an automatic tagging tool to make searching easier, but the AI mistakenly mislabelled Black people.
— Microsoft’s chatbot Tay (designed to experiment with conversational understanding through direct engagement with social media users) declared “Hitler was correct to hate the Jews” after 24 hours of ‘learning’ from human interactions on Twitter (2016). Ironically, Tay was meant to be friendly and innocuous.
— Sophia, the ‘psychopath’ humanoid robot developed by Hanson Robotics (2016): to the joking question “Do you want to destroy humans? (Please say no)”, Sophia responded, without hesitation and with a smile, “OK, I will destroy humans”.
— Facebook’s AI recommendation system asked users who had watched a video featuring Black men whether they wanted to “keep seeing videos about primates” (2021).
A racist Google app, a neo-Nazi chatbot, Facebook’s biased algorithms, unfriendly robots… These shocking AI failures reveal that AI suffers from major weaknesses that call into question its reliability.
In 1989, J. Liebowitz made the following remark: if intelligence and stupidity naturally exist, and if AI is said to exist, then is there something that might be called “artificial stupidity”?
While AI performs well in mathematical and statistical fields, and succeeds with programs that use logic and solve algebra and geometry problems, it does not possess human-level intelligence and ability. It is clear that even the most sophisticated AI systems severely lack profound understanding, emotional knowledge, empathy and, of course, self-awareness, all of which are features of ‘intelligence’ (at least of human intelligence). That is why they can lack immunity to detrimental outside influence. In many fields, AI is considered so profoundly dumb that the expression ‘artificial stupidity’ can be used as a humorous opposite of the term AI. Unless you consider Sophia’s absurd answer to be a joke (in the vein of the nonsense that Lewis Carroll popularized in his books), it is obvious that machines have neither a sense of humour nor a sense of beauty (even though ‘algorithmic art’ exists). The truth is that what AI lacks is ‘common sense’.
But why does AI still have a hard time interpreting our emotions or understanding a simple joke? Years ago, AI and robotics researchers made the following observation: while logic and algebra are difficult for humans and are therefore considered a sign of intelligence, these ‘hard’ problems are extremely easy for computers to solve. On the contrary, the so-called ‘easy’ problems, such as commonsense reasoning, are very difficult for machines. This counter-intuitive observation is known as Moravec’s paradox. The paradox (formulated by Hans Moravec and others in the 1980s) is that in the AI field, the hard problems are easy and the easy problems are hard. Simply put, contrary to traditional assumptions, reasoning requires little computation for AI programs, but perception, sensorimotor and mobility abilities require massive computational resources. AI scholars and robotics engineers argue that the most difficult human skills are those that lie below the level of conscious awareness.
As Moravec states, it is easy to make computers exhibit adult-level performance on intelligence tests, but difficult (or impossible) to give them the skills of a one-year-old when it comes to perception and mobility. These weaknesses of AI systems are a potential danger for human-machine interactions, as illustrated in the image below.
To return to a more theoretical level, let us recall that some philosophers have claimed that an AI system that (sometimes) acts intelligently would not actually be thinking, but would only be simulating thinking. Drawing on the solipsist argument, Turing noted that one never has any direct evidence about the internal mental states of other entities (humans, but also machines). Instead of arguing continually over this point, Turing proposed adopting the ‘polite convention’ that everyone thinks. We agree with Turing and, to remain polite, we will refrain from calling machines stupid. After all, solipsist robots could, in the same way, consider that humans are simply hunks of meat and not sentient beings. Nonsense?