Part 5
IN THIS PART …
Determine when an application won’t work.
Consider the use of AI in space.
Create new human occupations.
Chapter 15
IN THIS CHAPTER
Defining AI usage scenarios
Understanding what happens when AI fails
Developing solutions to non-existent problems
Previous chapters in this book explore what AI is and what it isn’t, along with which problems it can solve well and which problems are seemingly out of range. Even with all this information, it’s still easy to come up with a potential application that won’t ever see the light of day because AI simply can’t address that particular need. This chapter explores the nonstarter application. Perhaps the chapter should be retitled “Why We Still Need Humans,” but the current title is clearer.
As part of this chapter, you discover the effects of attempting to create nonstarter applications. The most worrisome of those effects is the AI winter. An AI winter occurs whenever the promises of AI proponents exceed their ability to deliver, resulting in a loss of funding from governments and investors.
AI developers can also fall into the trap of creating solutions to problems that don’t really exist. Yes, the wonders of the solution really do look quite fancy, but unless the solution addresses a real need, no one will buy it. Technologies thrive only when they address needs that users are willing to spend money to satisfy. This chapter finishes with a look at solutions to problems that don’t exist.
Using AI Where It Won’t Work
Table 1-1 in Chapter 1 lists the seven kinds of intelligence. A fully functional society embraces all seven kinds of intelligence, and different people excel in different kinds of intelligence. When you combine the efforts of all the people, you can address all seven kinds of intelligence in a manner that satisfies society’s needs.
You’ll quickly note from Table 1-1 that AI doesn’t address two kinds of intelligence at all, and provides only modest capability with three more. AI excels only when it comes to math, logic, and kinesthetic intelligence, and that narrow range limits its ability to solve many kinds of problems that a fully functional society needs to address. The following sections describe situations in which AI simply can’t work because it’s a technology — not a person.
Defining the limits of AI
When talking to Alexa, you might forget that you’re talking with a machine. The machine has no idea of what you’re saying, doesn’t understand you as a person, and has no real desire to interact with you; it only acts as defined by the algorithms created for it and the data you provide. Even so, the results are amazing. It’s easy to anthropomorphize the AI without realizing it and see it as an extension of a human-like entity. However, an AI lacks the essentials described in the following sections.
Creativity
You can find an endless variety of articles, sites, music, art, writings, and all sorts of supposedly creative output from an AI. The problem with AI is that it can’t create anything. When you think about creativity, think about patterns of thought. For example, Beethoven had a distinct way of thinking about music. You can recognize a classic Beethoven piece even if you aren’t familiar with all his works because the music has a specific pattern to it, formed by the manner in which Beethoven thought.
An AI can create a new Beethoven-like piece by treating his thought process mathematically, learning the underlying patterns from examples of Beethoven’s music. The resulting basis for creating a new piece is therefore mathematical in nature. In fact, because of the mathematics of patterns, in “Hear AI play Beethoven like The Beatles,” at TechCrunch.com, you can hear an AI play Beethoven from the perspective of the Beatles, as well as other music genres.
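To make the mathematical nature of this kind of “creativity” concrete, here’s a minimal sketch of the general idea (not any particular product’s method): count which note tends to follow which in existing music, and then generate a new sequence from those counts. The tiny melody below is invented purely for illustration; a real system would learn from a large corpus of actual scores.

```python
import random
from collections import defaultdict

# Invented training data: a tiny melody encoded as note names.
# A real system would learn from thousands of scored measures.
melody = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C"]

# Count which note follows which (a first-order Markov chain).
transitions = defaultdict(list)
for current, nxt in zip(melody, melody[1:]):
    transitions[current].append(nxt)

# "Compose" by sampling from the learned transition counts.
random.seed(42)
note = "C"
generated = [note]
for _ in range(8):
    note = random.choice(transitions[note])
    generated.append(note)

print(" ".join(generated))
```

The output sounds statistically like the input, but the program never invents a pattern that the input doesn’t already contain, which is precisely the limitation under discussion.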
The problem with equating creativity to math is that math isn’t creative. To be creative means to develop a new pattern of thought — something that no one has seen before (see “What is creativity?” at CSUN.edu for more details). Creativity isn’t just the act of thinking outside the box; it’s the act of defining a new box.
Creativity also implies developing a different perspective, which is essentially defining a different sort of dataset (if you insist on the mathematical point of view). An AI is limited to the data you provide. It can’t create its own data; it can only create variations of existing data — the data from which it learned. The “Understanding teaching orientation” sidebar in Chapter 13 expounds on this idea of perspective. To teach an AI something new, something different, something amazing, a human must decide to provide the appropriate data orientation.
Imagination
To create is to define something real, whether it’s music, art, writing, or any other activity that results in something that others can see, hear, touch, or interact with in other ways. Imagination is the abstraction of creation, and is therefore even further outside the range of AI capability than creativity. Someone can imagine things that aren’t real and can never be real. Imagination is the mind wandering across fields of endeavor, playing with what might be if the rules didn’t get in the way. True creativity is often the result of a successful imagination.
From a purely human perspective, everyone can imagine something. Imagination sets us apart from everything else and often places us in situations that aren’t real at all. The Huffington Post article “5 Reasons Imagination Is More Important Than Reality” provides five reasons that imagination is critical in overcoming the limits of reality.
Just as an AI can’t create new patterns of thought or develop new data without using existing sources, it must also exist within the confines of reality. Consequently, it’s unlikely that anyone will ever develop an AI with imagination. Not only does imagination require creative intelligence, it also requires intrapersonal intelligence, and an AI possesses neither form of intelligence.
Imagination, like many human traits, is emotional. AI lacks emotion. In fact, when viewing what an AI can do, versus what a human can do, it often pays to ask the simple question of whether the task requires emotion.
Original ideas
To develop an idea is to imagine something, create something real from what was imagined, and then put that real-world example of something that never existed before to use. To successfully create an idea, a human needs good creative, intrapersonal, and interpersonal intelligence. Creating something new is great if you want to define one-off versions of something or to entertain yourself. However, to turn it into an idea, you must share it with others in a manner that allows them to see it as well.
Data deficiencies
The “Considering the Five Mistruths in Data” section of Chapter 2 tells you about data issues that an AI must overcome to perform the tasks that it’s designed to do. The only problem is that an AI typically can’t recognize mistruths in data with any ease unless there is an accompanying wealth of example data that lacks these mistruths, which might be harder to come by than you think. Humans, on the other hand, can often spot the mistruths with relative ease. Having seen more examples than any AI will ever see, a human can spot the mistruths through both imagination and creativity. A human can picture the mistruth in a manner that the AI can’t because the AI is stuck in reality.
Mistruths creep into data in so many ways that listing them all isn’t even possible. Humans often add these mistruths without thinking about it. In fact, avoiding mistruths can be impossible because they sometimes stem from perspective, bias, and frame of reference. Because an AI can’t identify all the mistruths, the data used to make decisions will always have some level of deficiency. Whether that deficiency affects the AI’s capability to produce useful output depends on the kind and level of deficiency, along with the capabilities of the algorithms.
The oddest sort of data deficiency to consider, however, is when a human actually wants a mistruth as output. This situation occurs more often than most people think, and the only way to overcome this particular human issue is through the subtle communication provided by interpersonal intelligence, which an AI lacks. For example, someone buys a new set of clothes. They look hideous — to you, at least (and clothes can be amazingly subjective). However, if you’re smart, you’ll say that the clothes look amazing. The person isn’t looking for your unbiased opinion; the person is looking for your support and approval. The question then becomes not one of “How do these clothes look?” — which is what the AI would hear — but one of “Do you approve of me?” or “Will you support my decision to buy these clothes?” You can partially overcome the problem by suggesting accessories that complement the clothes or by using other means, such as subtly getting the person to see that they might not even wear the clothes publicly.
There is also the issue of speaking a hurtful truth that an AI will never be able to handle because an AI lacks emotion. A hurtful truth is one in which the recipient gains nothing useful, but instead receives information that causes harm — whether emotional, physical, or intellectual. For example, a child may not know that one parent was unfaithful to another. Because both parents have passed on, the information isn’t pertinent any longer, and it would be best to allow the child to remain in a state of bliss. However, someone comes along and ensures that the child’s memories are damaged by discussing the unfaithfulness in detail. The child doesn’t gain anything, but is most definitely hurt. An AI could cause the same sort of hurt by reviewing family information in ways that the child would never consider. Upon discovering the unfaithfulness through a combination of police reports, hotel records, store receipts, and other sources, the AI tells the child about the unfaithfulness, again causing hurt by using the truth. However, in the case of the AI, the truth is presented because of a lack of emotional intelligence (empathy); the AI is unable to understand the child’s need to remain in a blissful state about the parent’s fidelity. Unfortunately, even when a dataset contains enough correct and truthful information for an AI to produce a usable result, the result can prove more hurtful than helpful.
Applying AI incorrectly
The limits of AI define the realm of possibility for applying AI correctly. However, even within this realm, you can obtain an unexpected or unhelpful output. For example, you could provide an AI with various inputs and then ask for a probability of certain events occurring based on those inputs. When sufficient data is available, the AI can produce a result that matches the mathematical basis of the input data. However, the AI can’t produce new data, create solutions based on that data, imagine new ways of working with that data, or provide ideas for implementing a solution. All these activities reside within the human realm. All you should expect is a probability prediction.
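As a minimal sketch of what “a probability prediction” looks like in practice, the following example uses scikit-learn’s logistic regression with invented numbers; the features, data, and threshold question are purely illustrative, not a recommended model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical inputs: [hours of training, prior failures]
# and whether the event of interest occurred (1) or not (0).
X = np.array([[2, 5], [10, 1], [4, 4], [12, 0], [6, 2], [1, 6]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# The AI's entire contribution: a probability, nothing more.
new_case = np.array([[8, 1]])
print(model.predict_proba(new_case)[0, 1])  # probability the event occurs
```

Deciding what to do about that number, or whether the question was even worth asking, remains a human job.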
Many of the results of AI are based on probability or statistics. Unfortunately, neither of these mathematical methods applies to individuals; they work only with groups. In fact, using statistics creates myriad problems for just about any purpose other than concrete output, such as driving a car. The article “The Problems with Statistics” at public.wsu.edu discusses these issues in more detail. When your AI application affects individuals, you must be prepared for the unexpected, including complete failure to achieve any of the goals you set out to reach.
Another issue is whether the dataset contains any sort of opinion, which is far more prevalent than you might think. An opinion differs from a fact in that the fact is completely provable and everyone agrees that a fact is truthful (at least, everyone with an open mind). Opinions occur when you don’t have enough scientific fact to back up the data. In addition, opinions occur when emotion is involved. Even when faced with conclusive proof to the contrary, some humans would rather rely on opinion than fact. The opinion makes us feel comfortable; the fact doesn’t. AI will nearly always fail when opinion is involved. Even with the best algorithm available, someone will be dissatisfied with the output.
Entering a world of unrealistic expectations
The previous sections of this chapter discuss how expecting an AI to perform certain tasks, or applying it in less than concrete situations, causes problems. Unfortunately, many humans don’t seem to grasp that some of the tasks we imagine an AI performing will never come about. These unrealistic expectations have many sources, including
· Media: Books, movies, and other forms of media all seek to obtain an emotional response from us. However, that emotional response is the very source of unrealistic expectations. We imagine that an AI can do all sorts of things, but it truly can’t do them in the real world.
· Anthropomorphization: Along with the emotions that media generates, humans also tend to form attachments to everything. People often name their cars, talk to them, and wonder if they’re feeling bad when they break down. An AI can’t feel, can’t understand, can’t communicate (really), can’t do anything other than crunch numbers — lots and lots of numbers. When the expectation is that the AI will suddenly develop feelings and act human, the result is doomed to failure.
· Undefined problem: An AI can solve a defined problem, but not an undefined one. You can present a human with a set of potential inputs and expect the human to come up with the right question to ask based on extrapolation. Say that a series of tests keeps failing for the most part, but some test subjects do achieve the desired goal. An AI might try to improve test results through interpolation by locating new test subjects with characteristics that match those who succeeded. However, a human might improve the test results through extrapolation by questioning why some test subjects succeeded and finding the cause, whether the cause is based on test subject characteristics or not (perhaps environmental conditions have changed, or the test subject simply has a different attitude); see the sketch after this list for the difference between interpolation and extrapolation. For an AI to solve any problem, however, a human must be able to express that problem in a manner that the AI understands. Undefined problems, those that represent something outside human experience, simply aren’t solvable using an AI.
· Deficient technology: In many places in this book, you find that a problem wasn’t solvable at a certain time because of a lack of technology. It isn’t realistic to ask an AI to solve a problem when the technology is insufficient. For example, the lack of sensors and processing power would have made creating a self-driving car in the 1960s impossible, yet advances in technology have made such an endeavor possible today.
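To see the difference between interpolation and extrapolation mentioned in the “Undefined problem” bullet, here’s a minimal sketch with invented numbers: a line fitted to observations is reasonably trustworthy inside the observed range but turns into guesswork far outside it.

```python
import numpy as np

# Invented observations: dosage (0-10 units) versus measured response.
dosage = np.array([0, 2, 4, 6, 8, 10], dtype=float)
response = np.array([1.0, 2.1, 2.9, 4.2, 5.0, 6.1])

# Fit a straight line to the observed range.
slope, intercept = np.polyfit(dosage, response, 1)

def predict(d):
    return slope * d + intercept

print(predict(5))    # interpolation: inside the observed 0-10 range
print(predict(50))   # extrapolation: far outside anything ever measured
```

An AI happily produces both numbers with equal confidence; a human is the one who asks whether the second number means anything at all.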
Considering the Effects of AI Winters
AI winters occur when scientists and others make promises about the benefits of AI that don’t come to fruition within an expected time frame, causing funding for AI to dry up and research to continue at only a glacial pace. (Scare tactics employed by those who have no idea of how AI works have likely had an effect on AI winters as well.) Since 1956, the world has seen two AI winters. (Right now, the world is in its third AI summer.) The following sections discuss the causes, effects, and results of AI winter in more detail.
Understanding the AI winter
It’s hard to say precisely when AI began. After all, even the ancient Greeks dreamed of creating mechanical men, such as those presented in the Greek myths about Hephaestus and Pygmalion’s Galatea, and we can assume that these mechanical men would have some sort of intelligence. Consequently, one could argue that the first AI winter actually occurred sometime between the fall of the Roman Empire and the point in the Middle Ages when people dreamed of an alchemical way of placing the mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus (see ancient-origins.net), and Rabbi Judah Loew’s Golem (see “Golem: A myth of perfection in an imperfect world” at blogs.timesofisrael.com). However, these efforts are the stuff of legend rather than the scientific sort of work that would appear later, in 1956, with the founding of government-funded artificial intelligence research at Dartmouth College.
An AI winter occurs when funding for AI dwindles. The use of the word winter is appropriate because, like a tree in winter, AI didn’t stop growing altogether. When you view the rings of a tree, you see that the tree does continue to grow in winter — just not very fast. Likewise, during the AI winters from 1974 to 1980 and again from 1987 to 1993, AI did continue to grow, but at a glacial pace.
Defining the causes of the AI winter
The cause of an AI winter could easily be summarized as resulting from outlandish promises that are impossible to keep. At the outset of the efforts at Dartmouth College in 1956, the soon-to-be leaders of AI research predicted that a computer as intelligent as a human would take no more than a generation. Sixty-plus years later, computers still aren’t nearly as smart as humans. In fact, if you’ve read previous chapters, you know that computers are unlikely to ever be as smart as humans, at least not in every kind of intelligence (and by now have exceeded human capability only in a very few kinds and only in limited situations).
Part of the problem with overpromising capabilities is that early proponents of AI believed that all human thought could be formalized as algorithms. In fact, this idea goes back to the Chinese, Indian, and Greek philosophers. However, as shown in Table 1-1 of Chapter 1, only some components of human intelligence can be formalized. The best possible outcome is that human mathematical and logical reasoning could be mechanized. In the 1920s and 1930s, David Hilbert challenged mathematicians to prove that all mathematical reasoning can be formalized. The answer to this challenge came from Gödel’s incompleteness proof, Turing’s machine, and Church’s lambda calculus. Two outcomes emerged: Formalizing all mathematical reasoning isn’t possible; and in the areas in which formalization is possible, you can also mechanize the reasoning, which is the basis of AI.
Another part of the problem with overpromising is excessive optimism. During the early years of AI, computers solved algebra word problems, proved theorems in geometry, and learned to speak English. The first two outputs are reasonable when you consider that the computer is simply parsing input and putting it into a form that the computer can manipulate. The problem is with the third of these outputs. The computer wasn’t truly speaking English; instead, it was converting textual data into digital patterns that were in turn converted to analog and output as something that seemed like speech, but wasn’t. The computer didn’t understand anything about English, or any other language for that matter. Yes, the scientists did indeed hear English, but the computer simply saw 0s and 1s in a specific pattern that the computer didn’t see as language at all.
Even the researchers were often fooled into thinking that the computer was doing more than it really was. For example, Joseph Weizenbaum’s ELIZA (described at psych.fullerton.edu) appeared to hear input and then respond in an intelligent manner. Unfortunately, the responses were canned, and the application wasn’t hearing, understanding, or saying anything. Yet ELIZA was the first chatterbot and did represent a step forward, albeit an incredibly small one. The hype was simply far greater than the actual technology — a problem that AI still faces today. People feel disappointed when they see that the hype isn’t real, yet scientists and promoters continue to set themselves up for failure by displaying glitz rather than real technology. The first AI winter was brought on by predictions such as these:
· H.A. Simon: “Within ten years, a digital computer will be the world’s chess champion” (1958) and “machines will be capable, within twenty years, of doing any work a man can do.” (1965)
· Allen Newell: “Within ten years, a digital computer will discover and prove an important new mathematical theorem.” (1958)
· Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved” (1967) and “In from three to eight years, we will have a machine with the general intelligence of an average human being.” (1970)
Oddly enough, a computer did become chess champion in 1997, though not within ten years (see “How 22 Years of AI Superiority Changed Chess” at towardsdatascience.com), but the other predictions still aren’t true. In viewing these outlandish claims today, it’s easy to see why governments withdrew funding. The “Considering the Chinese Room argument” section of Chapter 5 outlines just one of many counterarguments that even people within the AI community made against these predictions.
The second AI winter came as a result of the same issues that created the first AI winter — overpromising, overexcitement, and excessive optimism. In this case, the boom started with the expert system (see “Leveraging expert systems” in Chapter 3 for more details on expert systems), a kind of AI program that solves problems using logical rules. In addition, the Japanese entered the fray with their Fifth Generation Computer project, a computer system that offered massively parallel processing. The idea was to create a computer that could perform a lot of tasks in parallel, similar to the human brain. Finally, John Hopfield and David Rumelhart resurrected connectionism, a strategy that models mental processes as interconnected networks of simple units.
The end came like the bursting of an economic bubble. The expert systems proved brittle, even when run on specialized computer systems. Those specialized systems ended up as economic sinkholes that newer, common computer systems could easily replace at a significantly reduced cost. In fact, the Japanese Fifth Generation Computer project was also a casualty of this economic bubble; it proved extremely expensive to build and maintain.
Rebuilding expectations with new goals
An AI winter doesn’t necessarily prove devastating. Quite the contrary: Such times can be viewed as an opportunity to stand back and think about the various issues that came up during the rush to develop something amazing. Two major areas of thought benefited during the first AI winter (along with minor benefits to other areas):
· Logic programming: This area of thought involves presenting a set of sentences in logical form (executed as an application) that express facts and rules about a particular problem domain. Examples of programming languages that use this paradigm are Prolog, Answer Set Programming (ASP), and Datalog. This is a form of rule-based programming, which is the underlying technology used for expert systems (see the sketch after this list for the general idea).
· Common-sense reasoning: This area of thought uses a method of simulating the human ability to predict the outcome of an event sequence based on the properties, purpose, intentions, and behavior of a particular object. Common-sense reasoning is an essential component in AI because it affects a wide variety of disciplines, including computer vision, robotic manipulation, taxonomic reasoning, action and change, temporal reasoning, and qualitative reasoning.
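To give a flavor of the rule-based style that logic programming formalizes, here’s a minimal sketch written in Python rather than Prolog; the facts, names, and the single rule are invented for illustration only.

```python
# Facts about a made-up problem domain, stored as (relation, a, b) tuples.
facts = {("parent", "ada", "byron"), ("parent", "byron", "clara")}

# A rule: if X is a parent of Y and Y is a parent of Z,
# then X is a grandparent of Z.
def apply_grandparent_rule(facts):
    derived = set()
    for rel1, x, y1 in facts:
        for rel2, y2, z in facts:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(apply_grandparent_rule(facts))
# {('grandparent', 'ada', 'clara')}
```

In Prolog, the same rule is a single clause; the point is that the program reasons over explicitly stated facts and rules rather than learning patterns from data.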
The second AI winter brought additional changes that have served to bring AI into the focus that it has today. These changes included:
· Using common hardware: At one point, expert systems and other uses of AI relied on specialized hardware because common hardware didn’t provide the necessary computing power or memory. However, these custom systems proved expensive to maintain, hard to program, and extremely brittle when faced with unusual situations. Common hardware is general purpose in nature and less prone to becoming a solution that’s searching for a problem (see the upcoming “Creating Solutions in Search of a Problem” section of the chapter for details).
It’s important to realize that common hardware means hardware that you can buy anywhere and that other groups also use. For example, machine learning benefits greatly from the inclusion of a Graphics Processing Unit (GPU) in the host system (see “What is a GPU and do you need one in Deep Learning?” at towardsdatascience.com for details). However, gaming and other graphics-intensive tasks also rely on these devices (find out more at https://www.hp.com/us-en/shop/tech-takes/gpu-vs-cpu-for-pc-gaming), so the hardware is theoretically common, but not every system has one.
· Seeing a need to learn: Expert systems and other early forms of AI required special programming to meet each need, thereby making them extremely inflexible. It became evident that computers would need to be able to learn from the environment, sensors, and data provided.
· Creating a flexible environment: The systems that did perform useful work between the first and second AI winters did so in a rigid manner. When the inputs didn’t quite match expectations, these systems were apt to produce grotesque errors in the output. It became obvious that any new systems would need to know how to react to real-world data, which is full of errors, incomplete, and often formatted incorrectly.
· Relying on new strategies: Imagine that you work for a government and have promised all sorts of amazing things based on AI, yet none of them has materialized. That was the problem during the second AI winter: Some governments had tried various ways of making the promises of AI a reality. When the current strategies obviously weren’t working, these same governments started looking for other ways to advance computing, some of which have produced interesting results, such as advances in robotics.
The point is that AI winters aren’t necessarily bad for AI. In fact, these occasions to step back and view the progress (or lack thereof) of current strategies are important. Taking these thoughtful moments is hard when one is rushing headlong into the next hopeful achievement.
When considering AI winters and the resulting renewal of AI with updated ideas and objectives, an adage known as Amara’s law, coined by American scientist and futurist Roy Charles Amara, is worth remembering: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Amid all the hype and disillusionment, people often fail to perceive the long-term impact of a new technology clearly and to appreciate the revolutions it brings with it. As a technology, AI is here to stay and will change our world for better and worse, no matter how many winters it still has to face.
Creating Solutions in Search of a Problem
Two people are looking at a mass of wires, wheels, bits of metal, and odd, assorted items that appear to be junk. The first person asks the second, “What does it do?” The second answers, “What doesn’t it do?” Yet, the invention that apparently does everything ends up doing nothing at all. The media is rife with examples of the solution looking for a problem. We laugh because everyone has encountered the solution that’s in search of a problem before. These solutions end up as so much junk, even when they do work, because they fail to answer a pressing need. The following sections discuss the AI solution in search of a problem in more detail.
Defining a gizmo
When it comes to AI, the world is full of gizmos. Some of those gizmos really are useful, but many aren’t, and a few fall between these two extremes. For example, Alexa comes with many useful features, but it also comes with a horde of gizmos that will leave you scratching your head when you try to use them. The article at https://www.digitaltrends.com/home/what-is-amazons-alexa-and-what-can-it-do/ provides you with a balanced view of what Alexa is and how it can be helpful in a gizmo sort of way. The review at https://www.prologic-technologies.com/blog/pros-cons-of-amazon-alexa/ points out some interesting aspects of using Alexa that you might not consider at first. However, what you might find even more interesting is actual user reviews of Alexa (outside Amazon), which you can find at https://www.trustradius.com/products/alexa/reviews?qs=pros-and-cons.
An AI gizmo is any application that seems on first glance to do something interesting but ultimately proves unable to perform truly useful tasks. Here are the common aspects to look for when determining whether an application rises above gizmo status. (The first letter of each bullet in the list spells the acronym CREEP, meaning, don’t create a creepy AI application):
· Cost effective: Before anyone decides to buy into an AI application, it must prove to cost the same or less than existing solutions. Everyone is looking for a deal. Paying more for a similar benefit will simply not attract attention.
· Reproducible: The results of an AI application must be reproducible, even when the circumstances of performing the task change. In contrast to procedural solutions to a problem, people expect an AI to adapt — to learn from doing, which means that the bar is set higher on providing reproducible results.
· Efficient: When an AI solution suddenly consumes huge amounts of resources of any sort, users look elsewhere. Businesses, especially, have become extremely focused on performing tasks with the fewest possible resources.
· Effective: Simply providing a practical benefit that’s cost effective and efficient isn’t enough; an AI must also provide a solution that fully addresses a need. Effective solutions enable someone to allow the automation to perform the task without having to constantly recheck the results or prop the automation up.
· Practical: A useful application must provide a practical benefit. The benefit must be something that the end user requires, such as access to a road map or reminders to take medication.
Avoiding the infomercial
Needing to bedazzle potential users of your AI application is a sure sign that the application will fail. Oddly enough, the applications that succeed with the greatest ease are those whose purpose and intent are obvious from the outset. A voice recognition application is obvious: You talk, and the computer does something useful in exchange. You don’t need to sell anyone on the idea that voice recognition software is useful. This book is filled with a number of these truly useful applications, none of which requires the infomercial approach of the hard sell. If people start asking what something does, it’s time to rethink the project.
Understanding when humans do it better
This chapter is all about keeping humans in the loop while making use of AI. You’ve seen sections about things we do better than AI, when an AI can master them at all. Anything that requires imagination, creativity, the discernment of truth, the handling of opinion, or the creation of an idea is best left to humans. Oddly enough, the limits of AI leave a lot of places for humans to go, many of which aren’t even possible today because humans are overly engaged in repetitive, boring tasks that an AI could easily do.
Look for a future in which AI acts as an assistant to humans. In fact, you’ll see this use of AI more and more as time goes on. The best AI applications will be those that look to assist, rather than replace, humans. Yes, it’s true that robots will replace humans in hazardous conditions, but humans will need to make decisions as to how to avoid making those situations worse, which means having a human at a safe location to direct the robot. It’s a hand-in-hand collaboration between technology and humans.
CONSIDERING THE INDUSTRIAL REVOLUTION
The human/AI collaboration won’t happen all at once. In addition, the new kinds of work that humans will be able to perform won’t appear on the scene immediately. However, the vision of humans just sitting around waiting to be serviced by machines is farfetched and obviously not tenable. Humans will continue to perform various tasks. Of course, the same claims of machines taking over were made during all the major human upheavals of the past, with the Industrial Revolution being the most recent and most violent of those upheavals (see “The Industrial Revolution” at History Doctor.net). Humans will always do certain things better than an AI, and you can be certain that we’ll continue to make a place for ourselves in society. We just need to hope that this upheaval is less violent than the Industrial Revolution was.
Looking for the simple solution
The Keep It Simple, Stupid (KISS) principle is the best idea to keep in mind when it comes to developing AI applications. You can read more about KISS at Techopedia.com, but the basic idea is to ensure that any solution is the simplest you can make it. All sorts of precedents exist for the use of simple solutions. However, of these, Occam’s Razor is probably the most famous (https://science.howstuffworks.com/innovation/scientific-experiments/occams-razor.htm).
Of course, the question arises as to why KISS is so important. The easiest answer is that complexity leads to failure: The more parts something has, the more likely it is to fail. This principle has its roots in mathematics and is easy to prove.
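As a quick, hedged illustration of that math: if you assume each part works independently and with the same probability, the chance that the whole system works is that probability multiplied by itself once per part, so reliability falls off quickly as parts are added. The numbers below are purely illustrative.

```python
# Probability that a system of n independent parts all work
# when each individual part works 99% of the time.
part_reliability = 0.99

for n in (1, 10, 50, 100):
    system_reliability = part_reliability ** n
    print(f"{n:3d} parts: {system_reliability:.1%}")
```

With 100 parts at 99 percent each, the whole system works only about a third of the time, which is why every extra moving part in an application needs to earn its keep.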
When it comes to applications, however, other principles come into play. For most people, an application is a means to an end. People are interested in the end and don’t really care about the application. If the application were to disappear from view, the user would be quite happy because then just the end result would be in view. Simple applications are easy to use, tend to disappear from view, and don’t require any complex instructions. In fact, the best applications are obvious. When using your AI solution requires all sorts of complex interactions, you need to consider whether it’s time to go back to the drawing board and come up with something better.