
The two versions of AI

Last updated: Apr 18





In 1980, John Searle introduced a fundamental distinction between ‘weak AI’ and ‘strong AI’.

Weak (or narrow) AI refers to a system focused on a single task that it has been trained (or programmed) to perform. It performs that specific task very well (and can out-perform us as far as raw computation is concerned) and behaves intelligently, but it is limited to a narrow domain. The concept also conveys the idea that machines/computers could act as if they were intelligent: weak AI simulates thinking and acts as if it were smart (like a human), but it is not.

Strong (or general) AI is the idea that machines/computers actually think consciously and have the same intellectual capabilities as human beings. The concept refers to human-level programs that are able to solve a wide variety of tasks and do so as well as a human. It belongs to the realm of speculative artificial intelligence insofar as strong AI exists only in theory and in Asimov’s writings. All our current AI systems are ‘weak’: they can play chess, predict bank credit default risks and recognize dogs, but the algorithms cannot exhibit intelligence across a wide range of contexts. Narrow AI can learn and perform just one specific task, sometimes at or even above human level. Research is currently under way to build strong AI, which would be able to understand and learn any intellectual task that a human can.
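To make the ‘narrow’ part concrete, here is a minimal sketch of a weak AI: a model trained for one task and one task only. The data and feature choices are invented for illustration (an assumption, not a real credit-scoring dataset).

```python
from sklearn.tree import DecisionTreeClassifier

# Invented toy data (assumption): [income in k, debt ratio, late payments].
X = [
    [25, 0.80, 4],
    [60, 0.20, 0],
    [40, 0.55, 2],
    [90, 0.10, 0],
    [30, 0.70, 3],
    [75, 0.30, 1],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = predicted default risk

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The model handles this single task and nothing else: it cannot play
# chess or recognize dogs, which is exactly what 'narrow' means here.
print(model.predict([[35, 0.65, 2]]))  # e.g. [1] -> likely default
```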


There is a fierce debate about whether strong AI could be achieved by developing existing techniques. Moore's law of exponential growth in computing power is often cited in this context: since 1940 (when the first electronic computers were developed), each generation of computer hardware has brought an increase in capacity and a decrease in price. Performance doubled roughly every 18 months until 2005. However, the 1965 prediction that the number of transistors on a computer chip would double every two years is now met with skepticism. Many experts believe that the impressive progress of AI in recent years does not prove that there is no limit to what artificial intelligence can achieve. AI remains controversial and the term, despite its widespread use, is ill-defined. Roughly speaking, it refers to efforts to build machines that can perform human activities such as reasoning and decision-making. The great progress in the field is due to machine learning, deep learning and neural networks, which use computing power to execute algorithms that learn from data.
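As a back-of-the-envelope illustration of that 1965 prediction, the doubling rule can be written as N(t) = N0 · 2^((t − t0)/2). The baseline below (roughly the 2,300 transistors of a 1971 microprocessor) is only an assumed starting point for the arithmetic.

```python
# Doubling rule: N(year) = n0 * 2 ** ((year - t0) / doubling_period).
def transistors(year: int, n0: int = 2300, t0: int = 1971,
                doubling_period: float = 2.0) -> float:
    return n0 * 2 ** ((year - t0) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")

# If the two-year doubling had held without interruption, 2021 would give
# roughly 77 billion transistors -- the right order of magnitude for the
# largest chips actually built, which is why the law impressed for so long.
```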


But is it possible for a machine to show intelligent behavior? Can a computer (really) think? That is the question.


Alan Turing provided an answer to this question by proposing a test based on written questions and responses (the famous Turing test): a machine passes the test if a human interrogator cannot tell whether the written responses come from a human or from a machine. The Turing test does not require ‘physical simulation’ to demonstrate intelligence, because there are no interactions with objects or humans in the real world. Interestingly, Turing considered that it would be easier to achieve human-level AI by developing learning algorithms and teaching the intelligent machinery than by hand-programming its intelligence. Learning, which is today the privileged method in computer science, has a great advantage: it allows the agent to operate in unknown environments and become more competent (by modifying its initial instruction tables).
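For concreteness, the imitation game can be sketched as a simple protocol: written questions go in, written answers come out, and the interrogator must guess who answered. The two reply functions below are placeholders (assumptions), standing in for a chatbot and a human typist.

```python
import random

def machine_reply(question: str) -> str:
    # Placeholder chatbot (assumption): a real system would generate text here.
    return "I would rather not say."

def human_reply(question: str) -> str:
    # A real person would type the answer at this prompt.
    return input(f"(human) {question} > ")

def imitation_game(questions, guess) -> bool:
    """One round of the test; returns True if the interrogator was fooled."""
    respondent_is_machine = random.random() < 0.5
    reply = machine_reply if respondent_is_machine else human_reply
    transcript = [(q, reply(q)) for q in questions]
    # `guess` is the interrogator: it reads the transcript and returns
    # True if it believes the respondent is a machine.
    return guess(transcript) != respondent_is_machine
```

Note that nothing in the protocol inspects how the respondent works internally; only the written exchange matters, which is exactly Turing's point about not requiring physical simulation.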


Turing not only defined AI but also anticipated its possible limitations. Here are some of the objections that can be raised against the thesis of strong AI: the mathematical objection, the formality argument and the disability argument. As Gödel demonstrated, certain mathematical questions are unanswerable by formal axiomatic systems (the mathematical objection). The formality argument rests on the fact that human behavior is too complex (and informal) to be captured by formal systems and codified in a computer program. The disability argument seems the most questionable, as it is too dogmatic. Here is Turing's list of things machines would supposedly never be able to do: ‘Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new’. Nowadays, some activities that were supposedly inaccessible to machines are quite commonplace: it is not uncommon to see a biased algorithm make a mistake or, on the contrary, a computer innovate and exceed human performance. More delicate is the question of feelings: while we can expect humans to fall in love with androids/gynoids, the opposite case appears, for the moment, to be a matter of science fiction.


Despite its human appearance, the gynoid Sophia (designed to look like Audrey Hepburn) is not a strong AI. But this does not rule out the possibility of a human being falling in love with this humanoid robot.



Currently, it seems that Sophia is not able to pass the Turing test.



Contrast this with Arisa, the new-generation gynoid who can be seen in the Netflix series Better Than Us.



Another humanlike robot (again a female android) that is not strong AI: HRP-4C (which is a very sexy name), designed by Japan's National Institute of Advanced Industrial Science and Technology, which also developed HRP-2P and HRP-3P.


Kodomoroid and Otonaroid: two Japanese gynoids (a teenager and an adult woman) created by Professor Hiroshi Ishiguro.


If an android robot were able to speak and live like a human, it would probably be difficult to distinguish between a machine and a person. But if that happened, what would the word ‘human’ mean? However, we are still a long way from that scenario: Kodomoroid and Otonaroid are not autonomous, cannot walk by themselves, and are equipped with very limited AI.



For the moment, robots such as Chappie, the robot-cop turned gangster (in reference to the eponymous film), do not exist. However, the film's focus on learning sheds an interesting light on how AI systems could one day improve their performance through machine learning. Chappie is an intelligent and sentient robot that behaves like a child and reaches a higher form of consciousness.



In addition to the question ‘can a machine think like a human?’, there is the question of whether machines can have feelings towards humans. The problem is masterfully posed in another film, I Am Mother. As the title suggests, the issue is not one of romance but of maternal love. In a post-apocalyptic, depopulated world, a droid playing the role of a caring and affectionate mother (with a loving voice) gradually reveals its true nature: ambivalent and disturbing.



Beyond the plot, what is fascinating is the somewhat paradoxical idea that humans created an AI to protect human life, but this artificial intelligence, confronted with the violent and self-destructive nature of humans, decided to intervene by killing people and giving birth to a more ethical humanity. The film questions the conflicting relationship between feelings and rationality.


The film also introduces a third version of AI (a definitely more speculative one): superintelligence. A ‘superintelligent’ AI is supposed to overtake and dominate humanity because it has a higher level of general intelligence than typical humans. A future blog post will be dedicated to this topic.

