What can we expect from an AI?
A guest contribution by Michael Brendel
Artificial intelligence is ubiquitous in our lives. Thanks to the technology, Alexa, Siri and Google's voice assistant can understand us despite background noise, service robots can perceive their surroundings, and camera apps can automatically recognize their subjects. Yet we never hear about most AI-based applications.
IBM alone filed an incredible 1,600 AI patents in 2018. Google released 47 products with AI algorithms between 2013 and 2017, but issued no press release for most of them. Have you ever trained an AI? You probably have: the unavoidable online queries in which we prove to a machine that the person in front of the screen is not a machine help another machine get better. Bizarreness aside: who knows what exactly these systems are doing?
Algorithms are always finding new areas of application
The smart algorithms are also penetrating areas that previously required human skills. Many job applications are already pre-screened by computer systems based on deep learning; the hotel chain Hilton and the cosmetics and food company Unilever even have applicants' gestures and facial expressions analyzed by an AI.
Some insurance companies rely on the technology when processing claims. Clinics use artificial intelligence to analyze X-rays and CT images. In police work, it is used in facial-recognition and predictive-policing programs, which are rightly controversial. And by 2020, the Associated Press news agency wants 80 percent of its content to be generated by computers.
Artificial intelligence is about to permeate our lives and our society. There is no doubt that the technology offers tremendous possibilities. The prospect of a self-determined life that voice assistants give older or physically impaired people, the support robots provide in dangerous work environments, and smart diagnostic devices in medicine alone make it worth giving the new technology a chance. But please, no free pass!
Not just politics, we all have to decide
It's time to ask how much AI we actually want. This question sounds banal, but it is probably one of the most important questions of our time. Because how far the capabilities of artificial intelligence go has so far been only a question of feasibility: whatever is technically possible and promises a market gets implemented. Progress cannot be stopped, can it?
It is good and right that philosophers and sociologists keep asking critical questions and reminding AI companies and politicians, who are often negligent and naive when it comes to artificial intelligence, of their social responsibility. It is good and right that lawyers point out where case law reaches its limits in the face of increasingly powerful algorithmic systems.
There is a lot at stake.
But in which areas AIs should decide independently, where they should merely support human decision-making, and where they simply have no business is not a legal, philosophical, sociological or political question. It is a question of the limits that we humans want to set for artificial intelligence.
If a clever algorithm in the service of an insurance company already handles claims processing, could it not also replace customer advisors in the future, equipped with Alexa-style skills? Or manipulate evidence with the help of AI-fueled deepfake technology? Should intelligent systems only help us analyze CT scans, or should they also, as the US startup Aspire Health offers, forecast treatment costs based on calculated life expectancy? And if facial-recognition algorithms can already recognize criminals, why shouldn't they also recognize cheats, people who skip washing their hands, collection-bag boycotters, owners who don't pick up after their dogs?
Some AI applications of the near future make the question of the limits we want even more pressing. Weapon systems are already in use today that can not only move autonomously but, in theory, also make the decision to fire on their own. Anyone who knows the short film Slaughterbots knows how varied the possibilities for abuse are when such things fall into the wrong hands. Which hands count as wrong is, of course, a question of perspective.
The question of responsibility is even more sensitive. Who answers for it when AI weapons are wrongly programmed or hacked and kill the wrong people? If no one can be held responsible for human deaths, the threshold for using autonomous weapons could become dangerously low. Then there is the black-box problem: applications based on neural networks, and thus most AI systems, cannot explain how they came to their decision. That means nobody knows why the weapon killed this particular person at this particular moment.
The crucial question of the 21st century
The development of self-driving cars was also not preceded by any social discourse. Nobody asked us whether we want to drive autonomously (that is, ourselves) or automatically in the future. Of course, most road accidents are caused by human error, so it seems obvious to take the wheel out of people's hands, doesn't it? But that too opens up a vacuum of responsibility. Which family member can and wants to live with the fact that a loved one was killed by an inscrutable computer decision because the car chose this particular evasive maneuver?
And another question: are cars really just a means of transport? Isn't driving yourself an expression of human self-determination, isn't your driving style also an expression of character? Are we handing our passion for automobiles over to a technology that has never asked us how we feel about it?
Do we trust or distrust technology?
It is time for the crucial question of the 21st century. But before we can find our position on artificial intelligence, our relationship to modern technology must first be fundamentally examined, because that relationship is characterized by several paradoxes the current situation demands we become aware of. Let us start with this question: do we trust or mistrust technology? According to several surveys, artificial intelligence makes most people feel uncomfortable or afraid; for many, the risks outweigh the opportunities.
And yet, in a test carried out at the Georgia Institute of Technology in 2016, the subjects obediently followed a self-proclaimed rescue robot, even when it led them into a windowless room during a simulated fire. Anyone who has experienced similar situations with their navigation device may now feel caught out. Overtrust is what scientists call this phenomenon of excessive trust.
But shouldn't we assume that technology simply knows many things better than we do? We don't distrust a calculator either! Aren't we justified in assuming that our voice assistant is telling the truth? Perhaps we should ask ourselves what distinguishes the smart program code in our Alexa from the code that will one day take over the world.
We hand over tasks to machines voluntarily
Which brings us to the second paradox. It relates to the concern of many people that machines are becoming too powerful. But is the feared seizure of power not really much more of a transfer of power? How many tasks do we voluntarily hand over to technology today?
We use our smartphones as an electronic extension of our brain that manages phone numbers, upcoming tasks and private photos for us. We let fitness trackers log our steps, our pulse and our blood pressure, so that when someone asks us how we are, we first have to check our fitness status. And to switch on the lights or the heating, no hand movement is needed anymore: a "Hey Siri" or "OK Google" is enough. And at the same time we are afraid that machines are gaining too much power. Crazy, right?
The third paradox arises from our image of machines. On the one hand, we expect an (intelligent) machine to reveal itself as such when it interacts with people. The Australian AI researcher Toby Walsh made this demand back in 2015, and the outcry after the premiere of Google Duplex in May 2018 echoed it: an automatic system should not pretend to be human. But in fact, we ourselves turn intelligent-seeming technical devices into people over and over again.
Let's think about the success of ELIZA in the earliest days of AI research: a truly simple-minded question-and-answer program that simulated a psychotherapy session and actually helped people. Let's think of "Frau Scholz", "Horst Dieter" and all the other nicknames that owners of robot vacuum cleaners give their devices. Let's think of the users of voice assistants, a quarter of whom admit to including Alexa, Siri, Cortana, the Google Assistant and Co. in their sexual fantasies: anthropomorphisms everywhere!
Why do we humanize technical devices over and over again?
It is hardly surprising, then, that human-like robots are also ascribed human behavior. In an experiment at the University of Duisburg-Essen, for example, test subjects hesitated when they were supposed to switch off the AI robot Nao, which pleaded with them: "No! Please don't turn me off! I'm afraid it won't get light again!" With success: of 43 test subjects, 14 complied with Nao's plea and did not switch it off, and all the others at least hesitated. The subjects cited compassion ("As a good person, you don't want to put anything or anyone in a position to experience fear") or respect for the robot's will ("I would have felt guilty if I had done this to him") as their reasons.
In another experiment, test subjects hesitated when asked to touch Nao on the buttocks or the inside of the thighs. A robot. Why do we humanize technical devices over and over again?
We need a debate now!
We should keep in mind that all of the AI applications mentioned here, indeed all AI applications currently available, are weak AIs: systems designed for a single task. We are still a long way from a strong AI that can think as abstractly and act as flexibly as a human, let alone from a superintelligence that far exceeds human capabilities.
Postponing the question of whether we want these scenarios, however, would be negligent. For who can conclusively refute that simply carrying on with AI development will lead exactly there: to machines one day being the most intelligent beings on this planet? And who can conclusively refute that this "one day" might come very soon? It is better to question our relationship to technology now than to have it dictated to us by a highly developed AI at some point.
So let's press the pause button. Let us pause and ask ourselves, our families and friends, our colleagues, our regulars'-table companions and fellow club members, and of course our 1E9 community: do we want what artificial intelligence will be able to do?
Michael Brendel is a trained theologian and journalist. His latest book, "Future Intelligence. Being Human in the Age of AI", was recently published. He also runs the blog "Spahggypt - We and the power on the net". His day job is as head of studies for politics and media at the Ludwig-Windthorst-Haus in Lingen. He is also on Twitter.
Teaser image: Getty Images