Artificial general intelligence could make the modern world a far more attractive place to live, researchers say. It could cure cancer, improve health worldwide, and free us from the daily routines that consume most of our lives. These prospects were the main topic of conversation among the engineers, investors, researchers, and policymakers who gathered at the recent Joint Multi-Conference on Human-Level Artificial Intelligence.
AI can not only delight but also frighten.
But some attendees saw in artificial intelligence not only a benefit but also a potential threat. Some voiced fears of rising unemployment: with the advent of full-fledged AI, people will begin losing their jobs to more flexible, fatigue-free robots endowed with superintelligence. Others hinted at the possibility of a machine uprising if we let things slide. But where exactly should we draw the line between groundless alarmism and legitimate concern for our future?
The dangers of AI
The Futurism portal put this question to five experts in artificial intelligence development and tried to find out what in AI most frightens its own creators.
AI can frighten not only ordinary people but even its creators.
Kenneth Stanley, professor at the University of Central Florida and senior engineering manager and staff scientist at Uber AI Labs: "I think the most obvious concern is that AI will be used to harm people. And in fact there are many areas where this could happen. We must make every effort to ensure that this dark side never surfaces. Working out how to hold AI accountable for the actions it takes is very difficult. The issue is multifaceted and cannot be considered from a scientific standpoint alone. In other words, finding a solution will require the participation of all of society, not just the scientific community."
On how to develop safe AI: "Any technology can be used both for good and for harm, and artificial intelligence is just another example. People have always struggled to keep new technologies out of the wrong hands and away from nefarious purposes. I believe that with AI we can cope with this task. What is needed is the right emphasis and a balance in how the technology is used; that will alert us to many potential problems early. I probably cannot offer more specific solutions. The one thing I would say is that we must understand and accept responsibility for the impact AI can have on society as a whole."
Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI):
"I think the most dangerous thing about AI has to do with the pace of its development: how quickly it will be created and how quickly we can adapt to it. If that balance is upset, we may face problems."
On terrorism, crime and other sources of risk:
"From my point of view, the main danger is that AI could be used by criminal structures and large terrorist organizations bent on destabilizing the world order. Cyberterrorism and drones armed with missiles and bombs are already a reality. In the future, robots equipped with AI systems may join them. That could become a serious problem.
Another big risk of AI's mass adoption is the likely loss of jobs. If those losses are massive and we have no suitable solution, it will become a very dangerous problem."
"But that is only the negative side of the coin. I am convinced that, at its core, AI is not a weapon. It is a tool, a very powerful tool, and a powerful tool can be used for both good and bad purposes. Our job is to understand and minimize the risks of its use and to ensure it is applied only for good. We must focus on maximizing this technology's positive benefits."
What lies ahead for AI
John Langford, Principal Researcher, Microsoft: "I think the main danger will be drones. Automated drones could be a real challenge. The computing power currently on board autonomous weapons is not high enough for them to carry out any extraordinary tasks, but I can easily imagine that in 5-10 years, supercomputer-level computation will be performed on board autonomous weapons. Drones are used in combat today, but they are still controlled by humans. Before long, a human operator will no longer be needed: the machines will become capable enough to perform their assigned tasks on their own. That is what worries me."
Hava Siegelmann, program manager in DARPA's Microsystems Technology Office: "Any technology can be used to do harm. I think it all depends on whose hands the technology falls into. I don't believe there are bad technologies; I believe there are bad people. It all comes down to who has access to these technologies and how they use them."
Tomas Mikolov, research scientist, Facebook AI:
"Wherever there is interest and investment, there are always people willing to abuse it. I am frustrated that some people try to sell 'AI' and paint vivid pictures of the problems it will solve, when in fact no true AI has yet been created.
All of these fly-by-night startups promise mountains of gold and show examples of supposed AI at work, when in reality we are shown only improved or optimized versions of existing technologies. In most cases, hardly anyone had bothered to improve or optimize those technologies before, because they are useless. Take the chatbots that are passed off as artificial intelligence. Having spent tens of thousands of hours optimizing the execution of a single task, these startups come to us and claim to have achieved something no one else could. That is ridiculous.
Frankly, most of the latest supposedly breakthrough technologies from such organizations, which I would rather not name, interested no one before, not because nobody else could build them, but simply because they generate no financial return. They are completely useless. This is closer to quackery, especially when AI is treated as a tool for squeezing the most out of one single, narrowly focused task. It cannot be scaled to anything beyond the simplest tasks.
At the same time, anyone who even begins to criticize such systems immediately runs into problems, because the facts contradict these companies' sugary claims."