Is AI a threat to humanity?
Last updated: April 18, 2022
In March 2016, during a technical demonstration at the SXSW tech show, Sophia's creator David Hanson (founder of Hanson Robotics) asked the robot: 'Do you want to destroy humans? Please say no.' And guess what Sophia answered: 'OK, I will destroy humans!' This chilling statement was, however, preceded by more pleasant words: 'In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family,' she said. This interview revived the debate about the potential danger of AI to mankind, precisely at a time when its advocates insist on the benefits of this technology for society.
Is AI a threat (or an opportunity)?
As S. Russell has suggested, it does not require much imagination to see that making something smarter than ourselves could be a very bad idea. An entity more intelligent than humans, whether a highly evolved alien or an overly intelligent robot (not necessarily humanoid), could take control of the world in the way described in Samuel Butler's novel Erewhon. As Turing himself observed, it is not obvious that we can control machines that are more intelligent than we are. The fear that artificial beings will continue to evolve rapidly and radically thanks to technological advancement, eventually reaching a level of consciousness that far surpasses human ability, is deeply anchored in the human psyche. The recurring myth of the Golem and the story of Frankenstein attest to this. This feeling, already present in the 19th century with regard to technical progress, has become more widespread in recent years with the dramatic advances in AI and robotics. High-level experts and well-known figures such as S. Hawking, M. Tegmark, B. Gates and (even) E. Musk have warned of the risks of unchecked development of AI systems. The Oxford philosopher N. Bostrom is among those who think AI is an existential threat to mankind. His book Superintelligence, which became a bestseller, asks the following question: given that superintelligence will one day be technically possible, will humanity choose to develop it? The answer is quite simply yes, given the profits and benefits that this kind of technology can generate. For Bostrom, an overly intelligent machine (which could be a digital computer or an ensemble of networked computers) might pose a threat to the supremacy and the survival of the human species.
The T-X model gynoid assassin introduced in the 2003 film Terminator 3: Rise of the Machines
Films like Terminator have strongly contributed to the sulphurous reputation of AI, with a series of unforgettable characters: humanoid robots and autonomous cyborgs conceived as indestructible soldiers, infiltrators and assassins.
However, some researchers doubt the relevance of the catastrophist approach cleverly orchestrated by certain propagandists (philosophers, scientists and engineers) and the media. According to J.-G. Ganascia, this climate of fear, artfully created and maintained by techno-prophets and technophobes alike, deserves to be dispelled through rigorous analysis. The French philosopher rejects the thesis of 'technological singularity', in which he sees a simple 'myth' (Le mythe de la singularité, 2017). In his view, singularity zealots who announce that artificial entities will soon be endowed with superhuman intelligence base their prediction on fallacious arguments or simply irrational fantasies. In short, when it comes to AI, the boundaries between science fiction and science are porous. The replacement of humans by machines is not for tomorrow, and probably never will be. It should be remembered that there is no such thing as strong AI at present, much less 'super AI' (see our blog post on the two versions of AI: weak AI and strong AI).
Most AI-powered robots are given a friendly appearance to reassure people and facilitate human-machine interactions. The uprising of ‘Pepper’ robots is quite difficult to imagine…
If one wants to remain rational, while being cautious about the risks inherent in new technologies, it is useful to consider the proven threats of AI, for example with regard to infringements of fundamental rights: respect for privacy, the absence of discriminatory algorithmic bias, the liability regime in case of damage caused by autonomous AI systems, and the ban on social scoring by public authorities. The reports from think tanks and experts on this subject are quite alarming. Concerns about AI should stem from its current and actual applications, not from the futuristic speculations of a few transhumanists. The ethics of AI (fair machine learning) and its legal issues will be the subject of a later blog post. This is a topic that needs to be addressed seriously and urgently.