- Most robots function through artificial intelligence (AI), which enables them to make human-like decisions.
- The more complex the decisions robots are capable of making, the more difficult and important it is to understand why accidents happen.
- The RoboTIPS project has created an ethical black box, which records all of a robot's inputs and corresponding actions, to ensure the same mistakes don't happen twice.
Copyright: weforum.org – “We need to investigate when robots cause accidents – here’s why and how”
Robots are featuring more and more in our daily lives. They can be incredibly useful (bionic limbs, robotic lawnmowers, or robots which deliver meals to people in quarantine), or merely entertaining (robotic dogs, dancing toys, and acrobatic drones). Imagination is perhaps the only limit to what robots will be able to do in the future.
What happens, though, when robots don’t do what we want them to – or do it in a way that causes harm? For example, what happens if a bionic arm is involved in a driving accident?
Robot accidents are becoming a concern for two reasons. First, the increase in the number of robots will naturally see a rise in the number of accidents they’re involved in. Second, we’re getting better at building more complex robots. When a robot is more complex, it’s more difficult to understand why something went wrong.
Most robots run on various forms of artificial intelligence (AI). AIs are capable of making human-like decisions (though they may make objectively good or bad ones). These decisions can be any number of things, from identifying an object to interpreting speech.
AIs are trained to make these decisions for the robot based on information from vast datasets. The AIs are then tested for accuracy (how well they do what we want them to) before they're given the task.
AIs can be designed in different ways. As an example, consider the robot vacuum. It could be designed so that whenever it bumps into a surface it redirects in a random direction. Alternatively, it could be designed to map out its surroundings to find obstacles, cover all surface areas, and return to its charging base. While the first vacuum is simply reacting to input from its sensors, the second is feeding that input into an internal mapping system. In both cases, the AI is taking in information and making a decision based on it.[…]
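The contrast between the two designs can be sketched in a toy simulation. This is a minimal illustration, not an actual vacuum controller: the class names, the grid world, and the movement rules are all assumptions made for the example. The first robot picks a random new heading whenever its sensor reports a bump; the second builds an internal map of visited cells and steers toward unvisited ones.

```python
import random

# Toy grid world: the vacuum moves one cell per step; cells outside
# the grid count as obstacles (walls).
GRID_W, GRID_H = 5, 5
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]


def in_bounds(x, y):
    return 0 <= x < GRID_W and 0 <= y < GRID_H


class RandomVacuum:
    """Design 1: on bumping into an obstacle, redirect randomly."""

    def __init__(self):
        self.pos = (0, 0)
        self.heading = random.choice(DIRECTIONS)

    def step(self):
        x, y = self.pos
        dx, dy = self.heading
        if in_bounds(x + dx, y + dy):
            self.pos = (x + dx, y + dy)           # sensor says "clear": keep going
        else:
            self.heading = random.choice(DIRECTIONS)  # bump: pick a random direction


class MappingVacuum:
    """Design 2: track visited cells in an internal map, seek unvisited ones."""

    def __init__(self):
        self.pos = (0, 0)
        self.visited = {self.pos}                 # internal map built from sensor input

    def step(self):
        x, y = self.pos
        # Prefer a neighbouring cell the map says is unvisited.
        for dx, dy in DIRECTIONS:
            nxt = (x + dx, y + dy)
            if in_bounds(*nxt) and nxt not in self.visited:
                self.pos = nxt
                self.visited.add(nxt)
                return
        # All neighbours visited: fall back to any in-bounds move.
        for dx, dy in DIRECTIONS:
            nxt = (x + dx, y + dy)
            if in_bounds(*nxt):
                self.pos = nxt
                return
```

Both robots receive the same sensor information (clear or blocked), but the mapping design turns it into persistent state: on this 5×5 grid its greedy sweep visits every cell, while the random design revisits cells indefinitely. More internal state makes behaviour more capable, and also harder to reconstruct after an accident, which is exactly what a recorder like an ethical black box is meant to help with.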
Read more: www.weforum.org
The article We need to investigate when robots cause accidents – here's why and how first appeared on SwissCognitive, World-Leading AI Network.
Source: SwissCognitive