Part 4
IN THIS PART …
Engaging in robotic mayhem.
Flying everywhere with drones.
Letting an AI do the driving for you.
Chapter 12
IN THIS CHAPTER
Distinguishing between robots in sci-fi and in reality
Reasoning about robot ethics
Finding more applications for robots
Looking inside how a robot is made
People often mistake robotics for AI, but the two fields differ. Artificial intelligence aims to solve difficult problems related to human abilities (such as recognizing objects or understanding speech or text); robotics aims to use machines to perform tasks in the physical world in a partially or completely automated way. It helps to think of AI as the software used to solve problems, and of robotics as the hardware for making these solutions a reality.
Robotic hardware may or may not run AI software. Humans remotely control some robots, as with the da Vinci robot discussed in the “Assisting a surgeon” section of Chapter 7. In many cases, AI provides augmentation, but the human remains in control. Other robots autonomously perform assigned tasks without any human intervention. Between these extremes are robots that take more or less detailed instructions from humans (such as going from point A to point B on a map, or picking up an object) and rely on AI to execute the orders. Integrating AI into a robot makes the robot smarter and more useful in performing tasks, but robots don’t always need AI to function properly. Human imagination, fed by sci-fi films and novels, has made the two overlap.
This chapter explores how this overlap happened and distinguishes between the current realities of robots and how the extensive use of AI solutions could transform them. Robots have existed in production since the 1960s. This chapter also explores how people are employing robots more and more in industrial work, scientific discovery, medical care, and war. Some AI discoveries are accelerating this process because they solve problems that are difficult for robots, such as recognizing objects in the world, predicting human behavior, understanding voice commands, speaking correctly, learning to walk upright and, yes, doing backflips, as you can read in this article on recent robotic milestones: https://tinyurl.com/6smshpfk. Note that things have progressed since that first backflip; there’s now a backflipping robotic cheetah (https://tinyurl.com/2av7r6be).
Defining Robot Roles
Robots are a relatively recent idea. The word comes from the Czech word robota, which means “forced labor.” The term first appeared in the 1920 play Rossum’s Universal Robots, written by Czech author Karel Čapek. However, humanity has long dreamed of mechanical beings. The ancient Greeks developed the myth of a bronze mechanical man, Talos, built by the god of metallurgy, Hephaestus, at the request of Zeus, the father of the gods. The Greek myths also contain references to Hephaestus building other automata apart from Talos. Automata are self-operated machines that execute specific, predetermined sequences of tasks (in contrast to robots, which have the flexibility to perform a wide range of tasks). The Greeks actually built water-hydraulic automata that worked like algorithms executed in the physical world. Like algorithms, automata incorporate the intelligence of their creator, thus providing the illusion of being self-aware, reasoning machines.
Differentiating automata from other human-like animations is important. Holograms, for example, are not automata (although AI can power them, too); they’re just light projections with no mechanical parts. Likewise, some myths inspire robotic thoughts without describing automata: the Golem (https://tinyurl.com/4m33pw7x) is a mix of clay and magic. No machinery is involved, so like the hologram, it doesn’t qualify as an automaton or resemble any device discussed in this chapter.
You find examples of automata in Europe from Greek antiquity through the Middle Ages, the Renaissance, and modern times. Many designs come from the Middle East, such as those of the mathematician and inventor Al-Jazari (see https://tinyurl.com/e7yjh557 for details). China and Japan have their own versions of automata. Some automata are complex mechanical designs, but others are complete hoaxes, such as the Mechanical Turk, an eighteenth-century machine that was said to play chess but hid a human player inside.
The robots described by Čapek were not exactly mechanical automata, but rather living beings engineered and assembled as if they were automata. His robots possessed a human-like shape and performed specific roles in society meant to replace human workers. Reminiscent of Mary Shelley’s Frankenstein, Čapek’s robots were what people today would call androids: bioengineered artificial beings, as described in Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (the inspiration for the film Blade Runner). Yet the term robot also came to describe autonomous mechanical devices made not to amaze and delight, but rather to produce goods and services. In addition, robots became a central idea in sci-fi, both in books and movies, further contributing to a collective image of the robot as a human-shaped AI designed to serve humans — not too dissimilar from Čapek’s original idea of a servant. Slowly, the idea transitioned from art to science and technology and became an inspiration for scientists and engineers.
Čapek created both the idea of robots and that of a robot apocalypse, the kind of AI takeover you see in sci-fi movies that, given AI’s recent progress, is feared by notable figures such as Microsoft cofounder Bill Gates, physicist Stephen Hawking, and inventor and entrepreneur Elon Musk. At the end of Rossum’s Universal Robots, Čapek’s robotic slaves rebel against the humans who created them and eliminate almost all of humanity. Cooler heads are debunking such extreme thinking, however, as in the Scientific American article at https://tinyurl.com/4zbjcesu.
Overcoming the sci-fi view of robots
The first commercialized robot, the Unimate (https://tinyurl.com/442x33mw), appeared in 1961. It was simply a robotic arm — a programmable mechanical arm made of metal links and joints — with an end that could grip, spin, or weld manipulated objects according to instructions set by human operators. It was sold to General Motors for use in automobile production. The Unimate had to pick up die castings from the assembly line and weld them together, a physically dangerous task for human workers. To get an idea of the capabilities of such a machine, check out this video: https://tinyurl.com/jzt5w2hh. The following sections describe the realities of robots today.
Considering robotic laws
Before Unimate appeared, and long before other robot arms began working alongside human workers on industrial assembly lines, people already knew how robots should look, act, and even think. Isaac Asimov, an American writer renowned for his works of science fiction and popular science, produced a series of stories and novels, starting in the 1940s, that suggested a completely different concept of robots from those used in industrial settings.
Asimov coined the term robotics and used it in the same sense as people use the term mechanics. His powerful imagination still sets the standard for people’s expectations of robots today. Asimov set his robots in an age of space exploration, having them use their positronic brains to help humans daily with both ordinary and extraordinary tasks. A positronic brain is a fictional device that makes the robots in Asimov’s novels act autonomously and be capable of assisting or replacing humans in many tasks. Apart from providing human-like capabilities in understanding and acting (a clear display of a strong AI), the positronic brain implements the three laws of robotics in hardware, controlling the behavior of robots in a moral way:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Later, the author added a zeroth law, which takes priority over the others, to ensure that a robot acts to favor the safety of the many:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Central to all Asimov’s robot stories, the three laws allow robots to work with humans without any risk of rebellion or AI apocalypse. Impossible to bypass or modify, the three laws execute in priority order and appear as mathematical formulations in the positronic brain’s functions. Unfortunately, the laws have loopholes and ambiguities, from which arise the plots of most of his novels. The three laws come from a fictional Handbook of Robotics, 56th Edition, 2058 A.D., and rely on principles of harmlessness, obedience, and self-survival.
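The phrase “execute in priority order” has a concrete algorithmic meaning. The following Python sketch is purely hypothetical (real robots can’t evaluate these conditions, and the dictionary flags stand in for unsolved perception problems), but it shows how ranking candidate actions by a tuple of law violations makes a higher law always outweigh a lower one:

```python
# Hypothetical sketch of Asimov-style priority ordering. Each law is
# scored 0 (respected) or 1 (violated); comparing actions by the tuple
# (first, second, third) makes any First Law violation worse than any
# combination of lower-law violations.

def law_violations(action):
    return (
        int(action.get("harms_human", False)),     # First Law
        int(action.get("disobeys_order", False)),  # Second Law
        int(action.get("endangers_self", False)),  # Third Law
    )

def choose_action(candidates):
    """Pick the action that violates the highest-priority laws least."""
    return min(candidates, key=law_violations)

# The robot accepts danger to itself rather than disobey an order, and
# disobeys an order rather than harm a human.
candidates = [
    {"name": "obey", "harms_human": True},
    {"name": "refuse", "disobeys_order": True},
    {"name": "shield", "endangers_self": True},
]
print(choose_action(candidates)["name"])  # prints "shield"
```

Asimov’s plots show exactly where such a scheme breaks down: the hard part isn’t the ordering but deciding whether an action harms a human in the first place.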
Asimov imagined a universe in which the moral world can be reduced to a few simple principles, though with risks that drive many of his story plots. In reality, Asimov believed that robots are tools to serve humankind and that the three laws could work even in the real world to control their proper use (read this 1981 interview in Compute! magazine for details: https://tinyurl.com/227352ff). Defying Asimov’s optimistic view, however, current robots don’t have the capability to
· Understand the three laws of robotics
· Select actions according to the three laws
· Sense and acknowledge a possible violation of the three laws
Some may think that today’s robots really aren’t very smart because they lack these capabilities, and they’d be right. However, the Engineering and Physical Sciences Research Council (EPSRC), the UK’s main agency for funding research in engineering and the physical sciences, promoted a revisiting of Asimov’s laws of robotics in 2010 for use with real robots, given the technology of the time. The result is much different from Asimov’s original statements (see https://tinyurl.com/5cmr7bdr), and the conversation is ongoing, as described in the article at https://tinyurl.com/ztnwk4jk. The revised principles admit that robots may even kill (for national security reasons) precisely because they are tools. As with any other tool, complying with the law and existing morals is up to the human user, not the machine; the robot is merely an executor. In addition, a human being should always be accountable for the results of a robot’s actions.
The EPSRC’s principles offer a more realistic point of view on robots and morality, considering the weak AI technology in use now, but they could also provide a partial solution in advanced technology scenarios. Chapter 14 discusses problems related to using self-driving cars, a kind of mobile robot that drives for you. For example, in the exploration of the trolley problem in that chapter, you face possible but unlikely moral problems that challenge the reliance on automated machines when it’s time to make certain choices.
Defining actual robot capabilities
Existing robots not only fall far short of the human-like machines found in Asimov’s works, they also come in different categories. The kind of biped robot Asimov imagined is currently the rarest and least advanced.
The most frequent category of robot is the robot arm, such as the previously described Unimate. Robots in this category are also called manipulators. You find them in factories, working as industrial robots, where they assemble and weld at a speed and precision unmatched by human workers. Some manipulators also appear in hospitals to assist in surgical operations. Manipulators have a limited range of motion because they’re integrated into their location: they might move a little, but they lack the motors needed for real mobility and typically require a fixed electrical hookup, so relocating one takes specialized technicians. In addition, manipulators used for production tend to be completely automated (in contrast to surgical devices, which are remote controlled, relying on the surgeon to make the medical decisions). More than 2.7 million manipulators were operating in factories throughout the world as of September 2020, 42 percent of them located in Japan or China, according to https://tinyurl.com/s99cj96n. (These statistics don’t account for other kinds of robots.)
The second-largest, and growing, category of robots is that of mobile robots. Their specialty, in contrast to manipulators, is moving around by using wheels, rotors, wings, or even legs. The major use of these robots is in industry, as described in “Mobile robotics applications” at Robotnik.eu. Mobile robots are mostly unmanned (no one travels with them) and remotely controlled, but autonomy is increasing, and you can expect to see more independent robots in this category. Two special kinds of mobile robots are flying robots, called drones (see Chapter 13), and self-driving cars (discussed in Chapter 14).
The last kind of robot is the mobile manipulator, which can move (as mobile robots do) and manipulate (as robot arms do). The pinnacle of this category is a robot that doesn’t just move and wield a mechanical arm but also imitates human shape and behavior. The humanoid robot is a biped (it has two legs) with a human-like torso that communicates with humans through voice and expressions. This kind of robot is what sci-fi dreamed of, but it’s not easy to build. The Covid-19 pandemic has spawned greater use of humanoid robots like Sophia (see https://tinyurl.com/2ve3479k).
Being humanoid can be hard
Human-like robots are hard to develop, and scientists are still at work on them. Not only does a humanoid robot require enhanced AI capabilities to make it autonomous, it also needs to move as humans do. The biggest hurdle, though, is getting humans to accept a machine that looks like a human. The following sections look at various aspects of creating a humanoid robot.
Creating a robot that walks
Consider the problem of having a robot walk on two legs (a bipedal robot). This is something that humans learn to do adeptly and without conscious thought, but it’s very problematic for a robot. Four-legged robots balance easily and don’t consume much energy doing so. Humans, however, consume energy simply by standing up, as well as by balancing and walking. Humanoid robots, like humans, have to balance themselves continuously, and do it in an effective and economical way. Otherwise, the robot needs a large battery pack, which is heavy and cumbersome, making the problem of balance even more difficult.
A video provided by IEEE Spectrum gives you a better idea of just how challenging the simple act of walking can be. The video shows robots involved in the DARPA Robotics Challenge (DRC), a challenge held by the U.S. Defense Advanced Research Projects Agency from 2012 to 2015: https://tinyurl.com/xsatxdfp. The purpose of the DRC was to explore robotic advances that could improve disaster and humanitarian operations in environments that are dangerous to humans (https://tinyurl.com/p2ndh952). For this reason, you see robots walking on different terrains, opening doors, grasping tools such as an electric drill, and trying to operate a valve wheel. A robot called Atlas, from Boston Dynamics, shows promise, as described in this article: https://tinyurl.com/6smshpfk. The Atlas robot truly is exceptional but still has a long way to go. DARPA’s robotics challenges continue, as described at https://tinyurl.com/kwhdhkkj.
SEE SPOT DANCE! THE ROBOTS OF BOSTON DYNAMICS
Boston Dynamics, a company that dates to 1992, has gained fame and a reputation as the pioneer of agile robots inspired by humans and animals. Google’s X division purchased the company in 2013 and then sold it in 2017 to the Japanese company SoftBank, which controlled it until Hyundai recently acquired an 80-percent stake. The South Korean industrial conglomerate intends to leverage the company’s capabilities for autonomous vehicles and smart factories. Boston Dynamics’ most renowned robots are Spot (https://www.bostondynamics.com/spot), a four-legged canine robot, and Atlas (https://www.bostondynamics.com/atlas), a bipedal humanoid robot that represents the most human-like robot on the market at the moment. Atlas was developed under the supervision of the Defense Advanced Research Projects Agency (DARPA; https://www.darpa.mil/), and it participated in two DARPA Robotics Challenges, placing second in 2015. It was revamped in 2016 (with a new, less menacing look). You see the newer version in most of the videos you find online (such as the one in which Atlas dances together with all the other Boston Dynamics creations: https://tinyurl.com/2sckhjmk). The new Atlas is designed to operate easily both outdoors and indoors thanks to its advanced sensors and hydraulic actuators. Despite public fears that the robot might be used for warfare, in 2013 DARPA confirmed its intention to employ the Atlas robot exclusively for emergency search-and-rescue operations in environments dangerous to human beings.
A robot with wheels can move easily on roads, but in certain situations, you need a human-shaped robot to meet specific needs. Most of the world’s infrastructure is made for a person to navigate. Obstacles such as narrow passages, doors, and stairs make using differently shaped robots difficult. For instance, during an emergency, a robot may need to enter a nuclear power station and close a valve. The human shape enables the robot to walk around, descend stairs, and turn the valve wheel.
Overcoming human reluctance: The uncanny valley
Humans have a problem with humanoid robots that look a little too human. In 1970, Masahiro Mori, a professor at the Tokyo Institute of Technology, studied the impact of robots on Japanese society. He coined the term Bukimi no Tani Genshō, which translates to uncanny valley. Mori realized that the more realistic robots look, the greater affinity humans feel toward them. This increase in affinity continues until the robot reaches a certain degree of realism, at which point people start disliking it strongly (even feeling revulsion). Affinity recovers only when the robot becomes a near-perfect copy of a human being. You can find this progression depicted in Figure 12-1 and described in Mori’s original paper at https://tinyurl.com/5zxepyux.
FIGURE 12-1: The uncanny valley.
Various hypotheses have been formulated about the reasons for the revulsion that humans experience when dealing with a robot that is almost, but not completely, human. Cues that humans use to detect robots include the tone of the robotic voice, the rigidity of movement, and the artificial texture of the robot’s skin. Some scientists attribute the uncanny valley to cultural reasons, others to psychological or biological ones. One experiment found that monkeys might undergo a similar experience when exposed to more or less realistic photos of monkeys rendered with 3-D computer-generated imagery (CGI) technology (see the story here: https://tinyurl.com/c69vbhzt). Monkeys participating in the experiment displayed a slight aversion to the realistic photos, hinting at a common biological reason for the uncanny valley. An explanation could therefore relate to a self-protective reaction against beings perceived as unnatural-looking because they’re ill or even possibly dead. The research is ongoing, as described in the article at https://tinyurl.com/ye9dshs4.
The interesting point about the uncanny valley is that if we need humanoid robots to assist humans, we must also consider their level of realism and key aesthetic details in order to achieve a positive emotional response that allows users to accept robotic help. Some observations show that even robots with little human resemblance generate attachment and create bonds with their users. For instance, many U.S. soldiers report feeling a loss when their small tactical robots for explosive detection and handling are destroyed in action. (You can read an article about this in the MIT Technology Review: https://tinyurl.com/6chj2m3a.) At the other extreme, some robots have been cancelled because people thought they were too creepy, such as Spot’s stint as a New York police dog (https://tinyurl.com/79dkb8vt). Even though Spot doesn’t look much like a dog, the headless aspect of the robot seemed to make people uneasy. Perhaps it would have received a better reception if it had had a head of some sort.
An interesting experiment in overcoming the uncanny valley is SEER, the Simulative Emotional Expression Robot, a humanoid robotic head developed as an artistic work by the Japanese artist Takayuki Todo. (You can see the robotic head in action on the artist’s website: http://www.takayukitodo.com/.) To overcome the uncanny valley, Todo worked on the child-like aspect of the robotic head and on its gaze. Because SEER’s head has a camera that records the reactions of its human counterpart, the robot can reciprocate by imitating human expressions through the movement of its metallic eyebrows, or by simply maintaining eye contact.
Working with robots
Different types of robots have different applications. As humans developed and improved the three classes of robots (manipulator, mobile, and humanoid), new fields of application opened to robotics. It’s now impossible to enumerate exhaustively all the existing uses for robots, but the following sections touch on some of the most promising and revolutionary uses.
Enhancing economic output
Manipulators, or industrial robots, still account for the largest percentage of operating robots in the world. However, you see them used more in some countries than in others. The article “Robot Race: The World’s Top 10 Automated Countries” at IFR.org is enlightening because robot use has increased even faster than predicted, and where robots have increased (mostly in Asia) is as important as the fact that usage has increased. In fact, entire factories are using robots to become smarter, a concept dubbed Industry 4.0. Thanks to widespread use of the Internet, sensors, data, and robots, Industry 4.0 solutions allow easier customization and higher product quality in less time than factories can achieve without robots. No matter what, robots already operate in dangerous environments, and for tasks such as welding, assembling, painting, and packaging, they operate faster, with higher accuracy, and at lower costs than human workers can.
Taking care of you
Since 1983, robots have assisted surgeons in difficult operations by making precise and accurate cuts that only robotic arms can deliver. Apart from offering remote control of operations (keeping the surgeon out of the operating room to create a more sterile environment), the increase in automation is steadily opening the possibility of completely automated surgical operations in the near future, as speculated in this article: https://tinyurl.com/c2j4cabm. You might also want to check out the Society of Robotic Surgery (SRS) page at https://tinyurl.com/4vwcjrab to discover more about the human end of this revolution.
Providing services
Robots provide other care services, in both private and public spaces. The most famous indoor robot is the Roomba vacuum cleaner, a robot that vacuums the floor of your house by itself (it’s a robotic bestseller, having exceeded 30 million units sold according to https://tinyurl.com/3hyp8kkd), but there are other service robots to consider as well.
The definition of a service robot is changing almost daily because of a combination of worldwide events like the Covid-19 pandemic, the overall aging of the population, and the ability of technology to meet certain needs. The article at https://tinyurl.com/2cjzune7 provides some insights into how people view service robots today, but it’s guaranteed that the definitions will change in the future.
Many service robots today are specifically targeted at home use (rather than a combination of industrial, institutional, and home use), like the Roomba, but you can use them in many different areas, as described in the articles at https://tinyurl.com/2a8j2wby and https://tinyurl.com/6dxnhfbt. Robots also serve in elder care, as described at https://tinyurl.com/aj2zxwc6, with an eye toward keeping elderly people in their homes as long as possible.
Venturing into dangerous environments
Robots go where people can’t, or where people would be at great risk if they went. Some robots have been sent into space (with the NASA Mars rover Perseverance, https://tinyurl.com/5f8aj2cv, being one of the most notable examples), and more will support future space exploration. (Chapter 16 discusses robots in space.) Many other robots stay on earth and are employed in underground tasks, such as transporting ore in mines or generating maps of tunnels in caves. Underground robots are even exploring sewer systems, as Luigi does. Luigi (a name inspired by the brother of a famous videogame plumber) is a sewer-trawling robot developed by MIT’s Senseable City Lab to investigate public health in a place where humans can’t go unharmed because of high concentrations of chemicals, bacteria, and viruses (see https://tinyurl.com/da4hwucw). There are now armies of these sewer-crawling robots that get rid of fatbergs, nasty conglomerations of nonbiodegradable substances like wet wipes (see the story at https://tinyurl.com/9pu9bwzc).
Robots are even employed where humans would certainly die, such as in nuclear disasters like Three Mile Island, Chernobyl, and Fukushima. These robots remove radioactive materials and make the area safer. High-dose radiation affects even robots, because radiation causes electronic noise and signal spikes that damage circuits over time. In addition, unless the circuitry is hardened, radiation can do physical damage to the robot. Only radiation-hardened electronic components allow robots to resist the effects of radiation long enough to carry out their jobs. One example is the Little Sunfish, an underwater robot that operates in one of Fukushima’s flooded reactors where the meltdown happened (as described in this article: https://tinyurl.com/yes4duwd).
In addition, warfare and crime scenes present life-threatening situations in which robots see frequent use for transporting weapons or defusing bombs. These robots can also investigate packages that could contain many harmful things other than bombs. Robot models such as iRobot’s PackBot (from the same company that manufactures the Roomba house cleaner) and QinetiQ North America’s Talon handle dangerous explosives by remote control, meaning that an expert in explosives controls their actions at a distance. Some robots can even act in place of soldiers or police in reconnaissance tasks or direct interventions (for instance, police in Dallas used a robot to take out a shooter: https://tinyurl.com/fk85wpsb). Of course, this practice raises a lot of questions, as detailed at https://tinyurl.com/56xk4pzw.
People expect the military to increasingly use robots in the future. Beyond the ethical considerations of these new weapons, it’s a matter of the old guns-versus-butter model (see https://tinyurl.com/8jsws3c8 and https://tinyurl.com/pta7ufdr), meaning that a nation can exchange economic power for military power. Robots seem a perfect fit for that model, more so than traditional weaponry that needs trained personnel to operate it. Using robots means that a country can translate its productive output into an immediately effective army of robots at any time, something that the Star Wars prequels demonstrate all too well.
Understanding the role of specialty robots
Specialty robots include drones and self-driving (SD) cars. Drones are controversial because of their usage in warfare, but unmanned aerial vehicles (UAVs) are also used for monitoring, agriculture, and many less menacing activities, as discussed in Chapter 13.
People have long fantasized about cars that can drive by themselves. Most car producers have realized that being able to produce and commercialize SD cars could change the actual economic balance in the world. At one point, it seemed as if the world were on the cusp of seeing SD cars, as described in this article at WashingtonPost.com: https://tinyurl.com/dnh7k48y. However, for a whole host of reasons, SD car technology has stalled today, as described at TheGuardian.com (https://tinyurl.com/4sfju66h). Chapter 14 discusses SD cars, their technology, and their implications in more detail.
Assembling a Basic Robot
An overview of robots isn’t complete without discussing how to build one, given the state of the art, and considering how AI can improve its functioning. The following sections discuss robot basics.
Considering the components
A mobile robot’s purpose is to act in the world, so it needs effectors: moving legs or wheels that provide the locomotion capability, plus arms and pincers to grip, rotate, and translate (change an object’s position without rotating it), thus providing manipulating capabilities. When talking about the capability of the robot to do something, you may also hear the term actuator used alongside effector. An actuator is one of the mechanisms that compose an effector, allowing a single movement. Thus, a robot leg has different actuators, such as electric motors or hydraulic cylinders, that perform movements like orienting the foot or bending the knee.
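The effector/actuator relationship is simple composition, as the following Python sketch shows. (The class names, joints, and angle limits are invented for illustration; real robot-control software is far more involved.)

```python
# Hypothetical sketch: an effector (a leg) is a bundle of actuators,
# each responsible for exactly one movement.

class Actuator:
    def __init__(self, name, min_angle, max_angle):
        self.name = name
        self.min_angle, self.max_angle = min_angle, max_angle
        self.angle = 0.0

    def move_to(self, angle):
        # Clamp the command to the actuator's mechanical limits.
        self.angle = max(self.min_angle, min(self.max_angle, angle))

class Leg:
    """An effector: several actuators driven together as one unit."""
    def __init__(self):
        self.knee = Actuator("knee", 0, 150)     # bends the knee
        self.ankle = Actuator("ankle", -45, 45)  # orients the foot

    def step_pose(self, knee_angle, ankle_angle):
        self.knee.move_to(knee_angle)
        self.ankle.move_to(ankle_angle)

leg = Leg()
leg.step_pose(knee_angle=60, ankle_angle=80)  # ankle clamps to 45
print(leg.knee.angle, leg.ankle.angle)        # 60 45
```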
Acting in the world requires determining the composition of the world and understanding where the robot resides in it. Sensors provide input that reports what’s happening outside the robot. Devices like cameras, lasers, sonars, and pressure sensors measure the environment and report to the robot what’s going on, as well as hint at the robot’s location. The robot therefore consists mainly of an organized bundle of sensors and effectors, all designed to work together by means of an architecture; the architecture is what makes the bundle a robot. (Sensors and effectors are mechanical and electronic parts that you can also use as stand-alone components in different applications.)
The common internal architecture is made of parallel processes gathered into layers, each specializing in solving one kind of problem. Parallelism is important. As human beings, we perceive a single flow of consciousness and attention; we don’t need to think about basic functions such as breathing, heartbeat, and food digestion because these processes go on by themselves, in parallel with conscious thought. Often, we can even perform one action, such as walking or driving, while talking or doing something else (although doing so may prove dangerous in some situations). The same goes for robots. For instance, in the three-layer architecture, a robot has many processes gathered into three layers, each one characterized by a different response time and complexity of answer (a code sketch follows the list):
· Reactive: Takes immediate data from the sensors (the channels for the robot’s perception of the world) and reacts immediately to sudden problems (for instance, swerving when an unseen wall appears around a corner to avoid crashing into it).
· Executive: Processes sensor input data, determines where the robot is in the world (an important function called localization), and decides what action to execute given the requirements of the previous layer, the reactive one, and the following one, the deliberative.
· Deliberative: Makes plans on how to perform tasks, such as planning how to go from one point to another and deciding what sequence of actions to perform to pick up an object. This layer translates into a series of requirements for the robot that the executive layer carries out.
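Here’s a minimal Python sketch of that division of labor. It’s hypothetical (real implementations run the layers as genuinely parallel processes; here, simple counters stand in for the different response times), but it shows how reflexes can override the executive layer, which in turn carries out the deliberative layer’s plan:

```python
# Hypothetical three-layer control loop. The reactive layer runs every
# tick, the executive layer consumes the current plan step by step, and
# the deliberative layer replans only rarely.

def reactive(sensors):
    # Immediate reflex: brake when an obstacle is too close.
    return "brake" if sensors["obstacle_distance"] < 0.5 else None

def executive(plan):
    # Localize (omitted here) and execute the next step of the plan.
    return plan.pop(0) if plan else "idle"

def deliberative(goal):
    # Slow, global planning: compute a route to the goal.
    return ["turn_left", "forward", "forward"]  # placeholder route

plan = deliberative(goal="charging_station")
for tick in range(200):
    sensors = {"obstacle_distance": 2.0}           # stand-in reading
    action = reactive(sensors) or executive(plan)  # reflexes win
    if tick % 100 == 99:                           # rarely: replan
        plan = deliberative(goal="charging_station")
```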
Another popular architecture is the pipeline architecture, commonly found in SD cars, which simply divides the robot’s parallel processes into separate phases such as sensing, perception (which implies understanding what you sense), planning, and control.
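Arranged as a pipeline, the same ideas look like a chain of stages, each consuming the previous stage’s output once per cycle. Again, this is only an illustrative sketch with invented stage outputs:

```python
# Hypothetical sensing-perception-planning-control pipeline of the
# kind common in self-driving stacks.

def sense():
    return {"camera": "frame", "lidar": "point_cloud"}  # raw readings

def perceive(raw):
    # Turn raw readings into meaning: detected objects plus a pose.
    return {"objects": ["pedestrian"], "pose": (10.0, 4.2)}

def plan(world):
    return "slow_down" if "pedestrian" in world["objects"] else "cruise"

def control(action):
    print(f"actuators <- {action}")  # send the command to the effectors

for _ in range(3):  # stand-in for "repeat until shutdown"
    control(plan(perceive(sense())))
```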
Sensing the world
Chapter 14 discusses sensors in detail and presents practical applications to help explain SD cars. Many kinds of sensors exist, with some focusing on the external world and others on the robot itself. For example, a robotic arm needs to know how far its arm has extended or whether it has reached its extension limit. Furthermore, some sensors are active (they actively look for information based on a decision by the robot), while others are passive (they receive information constantly). Each sensor provides an electronic input that the robot can use immediately or process in order to gain a perception.
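Those distinctions translate naturally into a small class hierarchy. The sensor types and readings below are invented placeholders, just to make the internal/external and active/passive categories concrete:

```python
# Hypothetical sensor taxonomy: internal versus external, and
# passive versus active.

class Sensor:
    def read(self):
        raise NotImplementedError

class JointEncoder(Sensor):
    """Internal and passive: reports how far the arm has extended."""
    def __init__(self, limit=150.0):
        self.angle, self.limit = 0.0, limit

    def read(self):
        return {"angle": self.angle, "at_limit": self.angle >= self.limit}

class Sonar(Sensor):
    """External and active: emits a ping and measures the echo."""
    def read(self):
        return {"distance_m": 1.8}  # stand-in measurement

for sensor in (JointEncoder(), Sonar()):
    print(type(sensor).__name__, sensor.read())
```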
Perception involves building a local map of real-world objects and determining the location of the robot within a more general map of the known world. Combining data from all sensors, a process called sensor fusion, creates a list of basic facts for the robot to use. Machine learning helps in this case by providing vision algorithms that use deep learning to recognize objects and segment images (as discussed in Chapter 11). It also puts all the data together into a meaningful representation using unsupervised machine learning algorithms, a task called low-dimensional embedding, which means translating complex data from all the sensors into a simple flat map or other representation. Determining the robot’s location while building the map is called simultaneous localization and mapping (SLAM); it’s much like looking at a map to figure out where you are in a city.
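At its simplest, sensor fusion can be a weighted average in which the more reliable sensor counts for more. The following sketch fuses two noisy position estimates by inverse-variance weighting, a standard textbook technique; the sensors and numbers are invented:

```python
# Minimal sensor-fusion sketch: merge two noisy estimates of the
# robot's position, weighting each by the inverse of its variance.

def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1 / (w_a + w_b)  # fusion reduces uncertainty
    return fused, fused_var

# GPS says x = 10.0 m (variance 0.25); odometry says x = 10.6 m
# (variance 1.0). The fused estimate leans toward the GPS reading.
position, variance = fuse(10.0, 0.25, 10.6, 1.0)
print(round(position, 2), round(variance, 2))  # 10.12 0.2
```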
Controlling a robot
After sensing provides all the needed information, planning provides the robot with the list of the right actions to take to achieve its objectives. Planning is done programmatically (by using an expert system, for example, as described in Chapter 3) or by using a machine learning algorithm, such as Bayesian networks, as described in Chapter 10. The technology that appears to hold the most promise today, though, is reinforcement learning, as described at https://tinyurl.com/346x9tut. This has become feasible only recently because of advances in how reinforcement learning works.
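To make the reinforcement learning idea concrete, here is a toy Q-learning example in which an agent learns, from rewards alone, a plan for reaching the end of a five-cell corridor. This is a generic textbook algorithm, not any particular robot’s planner:

```python
# Toy planning by reinforcement learning: tabular Q-learning on a
# five-cell corridor whose last cell is the goal.
import random

n_states, actions = 5, [-1, +1]      # actions: step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else -0.1
        # Q-learning update rule.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)  # [1, 1, 1, 1]: always step toward the goal
```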
Finally, planning is not simply a matter of smart algorithms, because when it comes to execution, things aren’t likely to go as planned. Think about this issue from a human perspective. When you’re blindfolded, even if you want to walk straight ahead, you won’t unless you have a constant source of corrections; instead, you start going in loops. Your legs, which are the actuators, don’t always execute instructions perfectly. Robots face the same problem (the sketch after this list simulates the effect). In addition, robots face issues such as delays in the system (technically called latency) and instructions that don’t execute exactly on time, which messes things up. Most often, however, the issue is a problem with the robot’s environment, in one of the following ways:
· Uncertainty: The robot isn’t sure where it is, or it can observe the situation only partially and can’t figure it out exactly. Because of this uncertainty, developers say that the robot operates in a stochastic environment.
· Adversarial situations: People or moving objects are in the way. In some situations, these objects even become hostile (see an earlier article from Business Insider at https://tinyurl.com/r3mkw23y). This is the multiagent problem. An ongoing study in this area is described in the article in iScience at https://tinyurl.com/f8zsdwm4.
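The blindfolded-walker problem mentioned before this list is easy to simulate. In the hypothetical sketch below, each step adds random heading error; without feedback the error accumulates without bound, while even a simple proportional correction keeps it small:

```python
# Why constant correction matters: accumulate random heading noise
# with and without a simple proportional feedback correction.
import random

random.seed(42)  # fixed seed for reproducibility

def walk(correcting, steps=1000, gain=0.5):
    heading = 0.0
    for _ in range(steps):
        heading += random.gauss(0, 1.0)  # actuator noise at each step
        if correcting:
            heading -= gain * heading    # sensor-based correction
    return heading

print(f"open loop drift: {walk(False):7.1f} degrees")
print(f"with feedback:   {walk(True):7.1f} degrees")
```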
Robots have to operate in environments that are partially unknown, changeable, and mostly unpredictable, and that present a constant flow of events, meaning that all actions are chained and the robot has to continuously manage the flow of information and actions in real time. Adjusting to this kind of environment can’t be fully predicted or programmed; it requires learning capabilities, which AI algorithms increasingly provide to robots.