AI IS TOUTED as the next hero in our ever-changing society. Heroes are used in movies to reveal the qualities that we are missing, and, in this, AI is no exception. Indeed, driven by a limitless ambition to optimize our own abilities, our society has moved quickly towards adopting algorithms. In fact, we have invented so many of them that today they seem to be used for every application imaginable.
The primary strength of algorithms is their ability to process a vast amount of data at high speed and reveal trends and insights that humans without technological support would not necessarily see. For obvious reasons, this is an excellent skill when it comes down to improving (or some may say perfecting) how our society works. In 2009, Lloyd Blankfein, chairman and chief executive of Goldman Sachs, said that bankers do the work of God. Well, this time around, it seems like we have catapulted algorithms to a level where they are not only taking over the role of bankers, but maybe even God!
This God-like status of AI has not escaped the attention of organizations either. Algorithms are rational, focused and systematic in their processing, and more accurate than any human can ever be. In the eyes of corporate executives, algorithms make the perfect advisor to the decision-making process within organizations. One could even say that algorithms are being recognized as tools that maintain a cool head, helping organizations to make decisions void of sentiment: the perfect decision-making machine! While all of this may be true, I think we all realize that, at the end of the day, organizations are likely to need more than a cool approach to succeed. Instead, more often than not, they are also in need of a warm approach.
A so-called warm approach comes along with the right sentiments to assess the importance of different stakeholders’ interests in the decisions that need to be made. Today, more than ever, organizations need to be very much aware of the kind of social value they want to create by means of their decisions.
In fact, in the summer of 2019, the Business Roundtable, led by JPMorgan Chase CEO Jamie Dimon and including leaders from some of America’s biggest companies, announced that decisions made by organizations should not only reflect the interests of shareholders, but of all stakeholders. With this statement, companies have now officially left behind the ideas developed by the famous economist Milton Friedman, who argued in his article, ‘The social responsibility of business is to increase its profits,’ that the sole purpose of an organization is to maximize profit for the shareholder.203 In a way, Friedman is saying that promoting shareholder value is the ethical thing to do!
Today, we know better. Confronted with the possibility of becoming a more tech-driven and thus cool society, we increasingly feel that we need more, rather than less, humanity. In addition to rational calculations and optimal functions derived from further data analysis, we feel the need to emphasize the importance of sound and ethical judgments that satisfy the concerns and needs of all those implicated by the decisions taken. Such a realistic perspective stresses the necessity for organizations and their respective leaders to be constantly aware, during their digital transformation journeys, of both the opportunities and limitations that algorithms bring to the table.
Too good to be true?
The first thing we need to establish is that digital transformation does not signal the end of human activity in the workplace. Yet feelings of fear have nevertheless grown since algorithms started penetrating organizational reality. By now, it should be clear that the omnipresence of algorithms, in combination with deep learning processes, is regarded by today’s company leaders as the most effective business model. This model allows organizations to pursue amazing possibilities that both drive execution costs down and promote effectiveness and work performance. In fact, evidence abounds that corporate executives today are fully embracing the idea that it is an automated workforce that should be relied upon to excel in the future. And why wouldn’t they? Just think about it: paying less and getting more! It does sound like a great deal, does it not? But let us ask ourselves: could it be too good to be true? Or, stated differently, could it be that, in our corporate enthusiasm, we are forgetting something?
That little thing we may be forgetting concerns the realization that, in embracing the idea of optimization and, hence, the desire to create a perfect world, we run the risk of losing our human identity in the process. In fact, this is not really a little thing at all. The initial idea behind our efforts to let algorithmic presence grow in our organizations is, and should always be, to optimize the workings of a humane society. The idea is not, and should never be, to upgrade and implement AI at such a level that we ultimately forget to serve the human aspect of our quickly developing society.
The end user is human – not technology. We do not innovate AI-applications to create a society where the end user becomes the algorithm itself. If we were to do so, then we would have to conclude that the goal driving today’s AI-revolution is to develop the most perfect technology possible – without regard for the kind of society we will be creating. Even more so, if this is really the case, then we are developing technology solely for the sake of making technology perfect, and nothing else. And, then, dear reader, the end user of this AI-driven revolution is indeed the algorithm itself.
The end user is technology, or am I wrong?
Some of you may say that it will never go this far. But allow me to offer a parallel example to show that the question I just raised may be more real than we think. Without a doubt you will remember the controversy in 2018, when it was revealed that the data of as many as 87 million Facebook users had been accessed and used for both research and political purposes. The founder and CEO of Facebook, Mark Zuckerberg, received much criticism, and people wondered whether he realized the importance of privacy to human users.
Indeed, the end users here are humans, like you and me, so why was he so inconsiderate in how their data was used? Was he not aware that he – as creator – was responsible not only for ensuring technological progress, but also for the welfare of his customers? To answer this question, let us rewind to June 2017, when Mark Zuckerberg, in an interview with Freakonomics Radio, said: “Of course, privacy is extremely important, and people engage and share their content and feel free to connect because they know that their privacy is going to be protected [on Facebook].”
Great! The creator of Facebook knew that privacy matters to the human end user and should be treated with care. But, only one year later, it became clear that he was not really treating privacy, and the value it carries for humans, with much care. What became clear instead is that people like Mark Zuckerberg were primarily driven by the ambition to innovate and show off rapid technological development (remember how cool it makes your company), rather than by thinking about the consequences it may have for the kind of society that will be created. Zuckerberg suffers from what I call an innovation-only bias.204 He wants to innovate to serve the goal of innovation itself. Innovate to innovate! Such a bias can be called a blind spot, because it shows that when we are obsessed with creating something unique that seems to have almost unlimited possibilities, we only come to see the value of the innovation itself. And the consequences, once revealed, are often to the detriment of the end user and society at large.
This is exactly what happened in the case of Facebook. It was only when Mark Zuckerberg was forced by his court hearings to think about his own role in the technological innovation, that he realized he was also responsible for the consequences that a platform such as Facebook reveals to the welfare and interest of its human users. As he said: “I started [Facebook], I run it, I’m responsible for what happens here. I’m not looking to throw anyone else under the bus for mistakes that we’ve made here.”
Eventually, he got it. In his striving for the next technological improvement, he did not consider the perspective of the human and the values (privacy) involved. Only when tragedy hit his company was he forced to take the perspective of the human end user as well.
I believe that at this moment the parallel between the Facebook case and our current obsession with developing an ever more perfect algorithm, powered by deep learning processes we no longer understand, has become painfully clear. We are innovating for the sake of innovation, which may actually serve technology more than humanity. The risk may well be (as it clearly was in the Facebook case) that we only realize we have handed over control to a perfect technology system, and thus lost our humanity, when it is too late. To prevent this, it is necessary that we open up this kind of debate as soon as possible. We must also ensure that any organization, government or society implementing new technology in an effort to optimize its functioning remains aware (and, to some extent, is reminded) that human welfare is the ultimate goal of our efforts to automate.
What about the human?
Let me put it this way: although these concerns were not a consideration when AI started its upsurge, today serious doubts are arising around this issue. In fact, when we listen carefully, most opposition to the increasing trend to automate almost always involves typical human concerns and fears. Very few concerns are being raised about potential limitations of the technology itself. The sky is the limit when it comes down to forecasting how technology may develop. However, alongside this optimistic view of the potential and abilities of technology itself, fears and doubts arise regarding the future of our human identity.
The more mature and autonomous algorithms become, the more humanity seems to become a challenging issue. For example, we notice that although automation is setting in, promising opportunities we can hardly imagine today, many employees continue to be afraid. They are afraid of working with the new technology, afraid to recognize the algorithmic functioning of their new co-workers, and afraid that all of this may lead to their redundancy. In essence, people are starting to say: “What about me, the human?”
This existential fear has come to the surface as a sense of algorithm aversion. As mentioned, algorithms are perceived as black boxes, meaning that humans have a hard time trusting them. After all, if your new co-worker is increasingly having a say, but you do not know how this co-worker thinks, then fear will automatically set in. And these negative feelings will not be easy to dismiss, especially as design engineers begin to have an increasingly hard time explaining how the algorithms themselves work. This reality opens the door to the possibility that AI-driven technology may well grow from a rather narrow type of intelligence – only focusing on the task at hand – to a type of intelligence that will not only match the unique strength of human intelligence, but surpass it.
If the designer, who is human, is limited in their understanding of how the technology functions, then it is inevitable that human dependence on new technology will not only increase, but become a fact of life. Such a reality will also ensure that technology’s dependence on human input will decrease at an alarming rate. This will manifest itself in a change in the position and influence of humans within organizations, societies and the world at large.
Although a reality where a super type of AI liberates itself entirely from human influence still sounds very futuristic to many, and is unlikely to happen in the next few decades, it should make us aware that we need to start thinking about our own position towards algorithms in the next stage of organizational development.
We will survive… again!
It is these kinds of innate reflections – activated whenever humanity comes under threat – that make people increasingly worry about algorithms being part of the workforce. As these human fears seem to be specific to our biological make-up, we could well say that opposition to the massive use of algorithms in organizations is nothing really new. Indeed, as critics will argue, every time a revolution starts to challenge our way of working, our own biological reality leads humans to experience existential fears.
But whenever these existential fears come to the surface, humans nevertheless seem to be able to adapt and move on. As such, one may suggest that the truly unique ability of humans is to survive. And we do so, again and again, because of our ability to adapt. It is an ability so dear to our biological make-up that Charles Darwin wrote an entire book on it.205 One of the key arguments that Darwin introduces in his book, On the Origin of Species, is that individuals who can adapt most to the environment at hand will be most likely to survive and thus reproduce.
According to Darwin, humans (not all of them, though) have the capacity to adjust to any change in the environment, and it is this innate desire and motivation that allows the human species to survive over time. Building on this, we should be relatively confident that the human species will find a way to adapt to an increasingly automated environment. However, what shape this adaptation will take is less clear. Will humans occupy a submissive role towards algorithms, or a role where man becomes part of machine? At this moment, no one knows. But the main message seems to be one of cautious optimism: humans are likely to survive anything.
But, before we ride out to victory with an unshaken belief that man will survive whatever the threat, let me issue a warning. I believe that what we are talking about today is something other than the average threat humanity has encountered before. A discussion is taking shape about a revolution that may, in reality, hold little future for humans. In its most extreme form, the end of the road may well be near for humans.
After all, do the popular press and tech gurus worldwide not speak of a technology that will grow to become a better and stronger version of the human species? If these forecasts materialize, then it will be the first time in history that we are not equipped to remain the dominant species on this planet. We will become followers of a system superior to ourselves and, perhaps, in time, even be eliminated. The fact that such possibilities are seriously being envisioned means that we owe it to ourselves to be responsible enough to discuss and reflect upon how we want to use this new technology within our societies.
Co-operation above all
One aspect of this task involves looking at how both humans and algorithms simultaneously (and thus in co-operation) play a role in the organization of the future. Co-operation is the one thing that has allowed the human species to survive while it found new ways of dealing with change. In other words, the act of co-operation implies a sense of awareness that for future welfare to emerge, all parties are needed with all their unique abilities and strengths. This will create greater opportunities to come up with new and innovative ways to face the future, but in a way that is directed and guided by human concerns.
It is this kind of innovation that will give humanity the best chance to adapt and thus survive. As such, we are today faced with the important task of implementing new ways to co-operate that make the best use of both humans and technology. This new diversity will therefore aim to pave the way to a better, more efficient society, one that preserves and stimulates human identity.
In this context, such optimization does not then aim to develop technology beyond human boundaries. Instead, it involves applying new technology in such a way that it promotes and enhances human survival in the most efficient way possible. And, because of that, development of new technology is guided, even tempered, by its main goal: to help advance humanity. In light of this, we need to determine limits for the tech revolution before it is too late and the distance between algorithms and humans becomes a gap impossible to bridge.
Of course, to achieve such a collaborative situation, all parties involved need to have the right attitude. A sense of realism is needed to pinpoint the problems that exist today and address them with human concern as the primary focus. So, where are we today with this particular ambition?
The current state of affairs seems to be that a disconnect exists between the perceptions of those wanting to use the new technology (corporate leaders and shareholders) to promote their company’s interests, and the employees working in that same company. This disconnect not only creates tension, which makes algorithm implementation even more difficult, but brings to the fore the view that the automation process itself is perceived as a real (existential) threat to the human face of organizations. These are serious matters which need to be dealt with cautiously. Research conducted by PwC revealed that 90% of C-suite executives indicated that their companies are paying attention to people’s needs when introducing new technology.206
At this moment, we could conclude that even though automation of the workforce is the business model to work with, corporate leaders do seem to take the human cause seriously. But, before we breathe a sigh of relief, PwC’s findings also revealed another side to the story. The same survey found that only 53% of employees agreed with the assessment of the C-suite executives. As such, these findings underscore a discrepancy between the perception of our corporate leaders and their employees when it comes down to how new technology should be treated.
How to move on
What to think of this discrepancy? In my view, rather than launching an investigation into who’s right and who’s wrong, a constructive debate is needed in which both perceived realities are embraced and brought together. On the one hand, the law of innovation – that new ideas and trends are inevitable – dictates that automation has set in and is here to stay. The best thing, then, would appear to be to embrace the potential of new technology and explore further how it can function in optimal ways within the organizational context.
On the other hand, anyone who has children also knows that simply forcing others to accept things and do as they are told does not help either. Employees therefore need to be given the opportunity to learn and test the innovative potential of new technology in light of the progress that humanity seeks. Such a testing phase should allow for corrections to take place. Importantly, for the cause of striving for an improved sense of human identity, it should also assure us that human leadership (and not followership) will remain in place.
As mentioned, organizations need to take the first step towards building a work culture that allows co-operation between humans and algorithms to take place. Co-operation is important to organizational functioning as it brings together the expertise and contributions of different parties. So, in today’s digital society, organizations should see no problem in fostering co-operation between algorithms and human employees. The problem we are facing, however, is that, on average, people prefer to work with other humans rather than with algorithms. Organizations and their leadership are therefore forced to create the right culture by fostering a digital mindset. This will help to motivate employees to think about the opportunities created by employing algorithms, such as fostering more innovative ideas to help companies grow.
With such a culture in place, we should be able to initiate a virtuous innovation cycle that is defined by the process of co-creation between algorithms and humans. A report by Accenture (2018) suggests that the new technology (AI) is best used in collaboration with humans, since the merging of the unique abilities of both parties will generate better outcomes.207 Accenture illustrated this idea nicely by referring to the employment of AI techniques by doctors at Harvard to detect breast cancer cells. AI itself scored 92% accuracy, but human pathologists nevertheless did better by achieving 96% accuracy. It was only when both partners found a way to collaborate that the detection rate rose to 99.5%.
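The arithmetic behind such a collaboration can be sketched with a toy simulation. To be clear, the error rates, the assumption of independent errors and the 95%-reliable joint-review step below are my own illustrative assumptions, not the method of the Harvard study Accenture describes; the point is only that two imperfect detectors whose mistakes rarely overlap can, together, beat either one alone.

```python
import random

random.seed(42)
N = 50_000
truth = [random.random() < 0.5 for _ in range(N)]  # ground-truth labels

def noisy(label, error_rate):
    # flip the true label with probability error_rate
    return (not label) if random.random() < error_rate else label

ai_preds = [noisy(t, 0.08) for t in truth]     # assumed ~92% accurate
human_preds = [noisy(t, 0.04) for t in truth]  # assumed ~96% accurate

def accuracy(preds):
    return sum(p == t for p, t in zip(preds, truth)) / N

# Collaboration policy (one of many possible): accept answers the two
# agree on, and send disagreements to a costly joint review that we
# assume resolves them correctly 95% of the time.
combined = []
for a, h, t in zip(ai_preds, human_preds, truth):
    if a == h:
        combined.append(a)
    else:
        combined.append(t if random.random() < 0.95 else (not t))

print(round(accuracy(ai_preds), 3),
      round(accuracy(human_preds), 3),
      round(accuracy(combined), 3))
```

Because the two detectors rarely err on the same case, most disagreements flag exactly the cases where one of them is wrong, which is why the combined accuracy lands well above either individual score.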
Building the right culture
If we want to survive this new revolution, then organizations should not hesitate to build a culture that ensures that the implementation of algorithms will lead to successful co-creation with humans. This task does not seem too difficult, does it?
Well, unfortunately, it does. First of all, a recent survey from Microsoft in Asia revealed that organizations today are failing to instill a mindset that leads employees to accept and engage with AI.208 In fact, the survey stated explicitly that the Asia Pacific region is not yet ready for AI. And why is this?
Isn’t it the case that we are witnessing a technology race between the US and China which is predicted to expose the differences between Eastern and Western technology approaches? As such, the world should be ready for the arrival of new technology. So, what is missing? It turns out that building the right culture seems to be a key barrier. Among the Asia Pacific business leaders interviewed for the survey, more than half indicated that companies today are failing to build a culture and corresponding mindset that recognizes AI as a way to promote innovation. If this is the case, then, obviously, the much-needed co-creation between humans and algorithms will not be facilitated, but rather made more difficult.
Second, this failure to build the right culture is actually not such a big surprise. Organizational scholars have been discussing the challenge of building the right work culture for decades. The fact that this debate has been going on for so long is an indication of just how hard it is to reach a consensus on what it takes to change culture. In fact, an abundance of models exists outlining what to do when it comes down to cultural transformations. Nevertheless, organizations keep knocking on the doors of both scholars and consultants to help drive their cultural change projects. If there is one thing, however, that seems common to all the cultural change models out there, it is that one party is indispensable to making change happen. And that one party is leaders who display the right kind of leadership to guide change in ways that are meaningful to others.
Leadership is a process of influence that can drive change in how people think and act. For this reason, effective leadership is needed to influence the installation of a mindset that helps humans to recognize the value of the new technology along with its goals. If organizations can think along these lines, a more fertile ground will exist for co-creation between humans and algorithms to take place in more optimal ways. And, with this leadership suggestion, we have arrived at a very interesting point in our discussion on leadership by algorithm.
As you might remember, earlier in this book, I mentioned that human superiority in leading organizations is not immune to the threat of algorithms. The whole construct is advanced by a business model that advocates the use of new technology to help organizations find a better fit with a business environment that is dynamic, complex and volatile.
This idea is recognized by the corporate world and its leaders, to the extent that my executive students are increasingly worried about how automated their leadership position will become in the future. They fear a future in which algorithms will take over their leadership. Seeing how dependent we are becoming on algorithms fuels the idea that human leadership will soon be a thing of the past. In fact, the fear exists that future leadership will be stripped of its human elements. But, if it is true that we need leaders to create a culture that can lead human employees to engage with algorithms, surely the leadership needed for this cannot also be algorithmic in nature?
Human employees need to be inspired to see the value of the new technology. This is unlikely to happen if the source of inspiration is the new technology itself. As I have discussed extensively, algorithms can provide more optimal and systematic ways of looking at reality by analysing data at a speed and accuracy no human can match. But these same algorithms are not able to provide the authentic sense of leadership required to make decisions, and subsequent meaningful changes, for the humans being led. Building a culture that affects employees’ ways of thinking and acting requires a process of logic that connects with our human identity and ambitions.
Humans lead, algorithms manage
What this discussion makes clear is that the leadership of the future should be able to create a culture in which it becomes meaningful for humans to collaborate with non-humans. This empowerment of the new diversity requires guidance in such a way that both parties know their position within the work setting and accordingly create value that serves a society defined by humanity rather than by technological innovation.
To achieve this outcome, we need human-driven leadership. In fact, I would go even further and say that it requires a human leader to provide us with a vision, and a corresponding judgment, that help us stay close to an authentic sense of humanity in an increasingly automated reality. Having said this, at the same time, I do believe (and evidence is pointing in the same direction) that the execution of our ideas, by means of adopting agreed-upon procedures and strategies, does not necessarily need a human hand.
The effective management of our procedures – once a vision is communicated and accepted – relies less on the ability to create meaning than on the ability to establish order. For this reason, as I argued earlier, the execution of the management function is very likely to become automated up to a level that management by algorithm will become the default.
Management by algorithm is more or less in line with what we expect from our human management. Today, we expect (human) managers to create order and stability by employing procedures we have designed for this purpose. Algorithms can be used in ways that provide a more optimal and accurate way of managing than humans are capable of. Management by algorithm is also less likely to trigger the more existential concerns of humans: when it comes down to how to run an organization, depending on an algorithm poses little threat to our sense of human identity. Why am I saying this?
A recent report by the Boston Consulting Group revealed that a majority of employees do not aspire to become a manager.209 Only one in ten Western non-managers said that they aspire to become a manager, and, of the existing managers, only 37% of Western managers said they would like to remain a manager in five to ten years. This trend, in fact, may be good news in light of our argument that management by algorithm is more or less waiting to happen.
As illustrated earlier, most management tasks are likely to become fully automated. However, if employees today are no longer interested in those management positions, then it seems less likely that humans will be plagued by the existential fear of losing their human identity to the increasing trend of automation. In fact, the situation, as described by the Boston Consulting Group, suggests that there is a close fit between the desires of human managers and the opportunities offered by an increase of automation in organizations. Humans want to do less managing, and algorithms are increasingly equipped to do the managing. A win-win situation seems to be emerging here!
This win-win situation, however, may only materialize if, at the same time, we, as a society, are successful in motivating humans to foster and train the unique human abilities that can allow us to think like leaders and not simply managers. In light of our ambition for continuous education, we can accept management by algorithm, but only if we, as humans, grow our leadership abilities to provide direction and give a sense of meaning to the decisions that we take and will have to take – at all levels of life. It is only then that the combination of algorithms (managing) and humans (leading) will create sustainable value. Why am I saying this so forcefully?
Well, it should be clear by now that we are mistaken if we look to algorithms for direction. They are machines and, as such, not guided by outcomes that are value-driven or meaningful in light of our unique human identity. Rather, algorithms operate within a utilitarian framework which optimizes their actions. A good example of just how utilitarian companies can be in employing their algorithms is the discussion surrounding how the YouTube algorithm makes recommendations to viewers. The metric that YouTube uses to decide on its recommendations for you as a customer (i.e. watch time) is not aimed at helping customers get what they want, but rather at maximizing their engagement – and, hence, making them addicted – without any other consideration.210
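The mechanics of such a metric choice can be sketched in a few lines. The data below is entirely made up and this is not YouTube's actual system; the sketch only illustrates the general point that a recommender ranking purely by predicted watch time can surface exactly the items its viewers value least.

```python
# Hypothetical catalogue: (title, predicted watch minutes, viewer
# satisfaction on a 0-5 scale). All numbers are invented for illustration.
videos = [
    ("outrage compilation", 42, 2.1),
    ("documentary",         25, 4.6),
    ("tutorial",            12, 4.8),
    ("autoplay filler",     55, 1.5),
]

# An engagement-optimizing recommender ranks by predicted watch time...
by_watch_time = sorted(videos, key=lambda v: v[1], reverse=True)

# ...whereas a viewer-centred ranking would sort by satisfaction.
by_satisfaction = sorted(videos, key=lambda v: v[2], reverse=True)

print([v[0] for v in by_watch_time])    # low-satisfaction items lead
print([v[0] for v in by_satisfaction])  # high-satisfaction items lead
```

The two rankings are nearly inverted: the single line choosing the sort key is where the "utilitarian framework" lives, and nothing in that line asks whether the outcome is good for the viewer.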
Despite all the promising narratives floating around that AI will soon surpass human competence across the board, and as such that any human task will come into reach of algorithms (including leadership), reality shows a different face. As Melanie Mitchell writes in her book, Artificial Intelligence: A Guide for Thinking Humans, algorithms can achieve excellent levels of performance within the context of narrowly defined tasks, but in essence they are useless when it comes down to providing meaning within the broader context.211 According to Mitchell, this is the case because algorithms do not have common sense. If we adopt this point of view, then it becomes clear that algorithms cannot provide advice or any judgment about what the value of each outcome means to our human society. Indeed, algorithms are simply not able to assess the value created by means of their decisions and actions in light of its contribution to humanity.
Who knows where to go?
Leaders are required to think in terms of how contributions by the collective can create value for humanity. Leadership is there to drive change, and this process does not simply imply a transactional process without any emotional connection. Rather, leadership builds cultures that are able to achieve progressive outcomes but at the same time preserve a sense of moral and social harmony that allows for co-operation to emerge. Without the ability to create this kind of harmony, leadership cannot drive transformation in successful ways. This realization brings us back to the earlier mentioned idea that it is – and can only be – human leaders who are equipped to create the right mindset for co-operation between humans and algorithms to take place today. So, leadership in the organization of tomorrow remains human at its core, exactly because unique human traits bring to the table what is needed to employ algorithms in ways that preserve humanity.
The importance of training, educating and promoting humans to lead in a tech-driven society cannot be overestimated; both because humans are the most effective leaders, and because human virtues should be used as the basis of our society and organizations in the future. With respect to having the most effective leaders in place, it is clear that the main objective will be to establish the right kind of co-operation between humans and algorithms to ensure co-creation of the highest quality. This process requires leadership that is able to judge the value of the outcomes that result from such co-operation. It is those judgment calls that cannot be replicated by machines and so are considered uniquely human. Indeed, the ability to establish co-operation requires that the one leading the process is knowledgeable and aware of the importance of contributing to the collective good. This collective good is meant to bring welfare and benefits to the members of our organizations and society.
So, how do we ensure that leaders are able to foster such ways of working? This requires leaders to be curious and to explore how to extend the boundaries of how our organizations work without sacrificing the human DNA of the society in which those organizations operate. Similarly, leaders should be able to imagine what future human welfare should look like and hence be able to take the perspective (understand the emotions, desires and needs of the stakeholders) of the organization we need to create to achieve this purpose. This future-oriented thinking, combined with a strong sense of emotional intelligence and imagination, sets the stage for working in creative ways.
Continuous education
Although in theory nothing should be impossible to imagine, it seems safe to argue that algorithms cannot be fitted within the framework of a human-driven type of leadership. On the contrary, I would say that if we do try to imagine algorithms taking over leadership positions, fear and worry about our human condition will quickly surface. Specifically, when thinking about algorithms performing leadership tasks, many of us fear that this new technology is not sensitive and understanding enough to be able to contribute to the collective good and ensure humanity’s progress. Rather, algorithms are better off providing the necessary input to bolster technological developments to speed up human progress. The evaluation of whether the progress achieved is for the benefit of humanity is, however, another question, and requires a deeper understanding of what it means to be human.
Indeed, even though algorithms in several areas seem to be on their way towards matching or even surpassing human intelligence (e.g. detecting tumors, automated driving and so forth), it is undeniable, as Melanie Mitchell writes, that in order to create value for humanity, we still require our human ingenuity. It is therefore no surprise that when it comes down to shaping the leadership of the future, we have an obligation and responsibility to safeguard the human DNA of our leaders. The human DNA in this case stands for the ability to show compassion, forgiveness, empathy, ethical awareness, curiosity and imagination: all virtues that are needed to design tech-driven environments that ensure the promotion of our human identity.
Leadership is needed to bring human sense into today’s technology race, where we are employing algorithms at an increasing rate. It requires leaders of the future to find a balance between endorsing a tech-infused efficiency model as the new way of working, and promoting awareness that creativity, empathy and ethical judgments ultimately matter most when assessing the value created by employing algorithms. To achieve this balance requires continuous education and training. Indeed, leaders of tomorrow will have to participate in continuous education in two ways.212
Education should first address employees on the topic of new technology. Too often, digital transformations fail because of people's lack of knowledge. This ignorance leads to a lack of understanding of how the technology can be used to drive the performance of the organization. It is therefore imperative that organizations train and continuously update their workforce so that they have a basic understanding of, for example, coding and its potential use for task execution. The Wall Street Journal reported that Amazon plans to spend $700m over the next six years to train 100,000 of its employees in new technology skills.213 Similarly, Microsoft has built the AI Business School to share knowledge and insights from top executives and thought leaders on how to strategically use AI in organizations.214 It is this tech savviness that can make leaders more effective in employing algorithms in the most optimal ways. At the same time, it enables human employees to understand why this new (non-human) employee is needed.
A second type of education should promote the human skills considered necessary for future leaders. Indeed, we do not want leaders to get so wrapped up in the most sophisticated technological advancements that they lose sight of building a vision that has the human identity at its core. Our workforce will therefore have to be educated continuously in the emotional and creative skills required to build a tech-driven organization where questions around ethics, privacy and innovation for humankind are frequently shared and discussed.
Humanity in AI as a guiding tool
It is important to emphasize that the need for continuous education brings with it the question of what it is that the leaders of tomorrow need to know. Indeed, the two-fold approach to continuous education makes clear that organizations empowering algorithms in their search for efficiency do not require leadership thinking dominated by technology insights. Rather, a basic understanding of new technology will need to be complemented more than ever with insights from other, more humane fields, like philosophy and psychology. After all, even in a tech-dominated business world, organizations are still more likely to succeed if they act in a more, as opposed to less, human manner.
Consider the following assertion made by the intellectual fathers of AI, that AI is not developed with the purpose of replacing the human race – and thereby reversing the power dynamics – in a society that is becoming increasingly automated. Instead, it aims to contribute to the optimization and well-being of a human society defined by the unique values that make it human. The following passage from a 1960 article by Norbert Wiener, the founder of cybernetics, is particularly compelling in this respect: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.”215
With this peculiar challenge in mind, I would like to return to the story that I told you earlier, about my executive students asking whether soft skills will have a place in the leadership of tomorrow. As you may recall, many of them were worried about their own position and were contemplating devoting all their energy and time to becoming more skilled in coding. What is the truth here? Is it a valid concern and should it be addressed?
Yes, it should be addressed, and it is fine that leaders in organizations devote more resources to learning programming. However, it is not fine for leaders to think that the leadership skills of yesterday will no longer be required tomorrow. It is not alright if leaders today are pushed into thinking that our soft skills will have to be replaced with newly acquired tech skills. In fact, this line of thinking could not be more wrong. Rather than becoming more machine-like in their thinking, leaders should endeavor to become even more human!
As I have pointed out in this chapter, understanding the workings and sensitivities of the new technology is not an end in itself. Indeed, leaders today should not be asked to transform themselves into data analysts. The skill of being tech savvy is required because it represents a means of becoming more effective in augmenting the performance of the organization, but always from a human perspective. As such, the education of our leaders should be based on an understanding of new technology, but elevated by the skill of critical thinking and enhanced with reflection on what this technology means for our human identity.
Other voices echo the same idea. For example, Peter Thiel (an American entrepreneur and venture capitalist) announced that "People are spending way too much time thinking about climate change, and way too little thinking about AI." Equally, the late Stephen Hawking (Cambridge University professor) warned in The Independent that success in our search for AI domination, while it would be "the biggest event in human history," might very well "also be the last, unless we learn to avoid the risks."
Even Bill Gates has publicly admitted his disquiet, speaking of his inability to "understand why some people are not concerned." Whether or not these concerns, voiced by the famous, are justified, they are shared by many more. For example, in November 2018, The Guardian published an article titled, 'The truth about killer robots,' and a month later, The Economist wrote: "There are no killer robots yet – but regulators must respond to AI in 2019."216,217 What these examples make clear is that the education of our future leaders needs to include a strong emphasis on thinking about the consequences that algorithms in our organizations and society bring to our own human identity.
Tolerance for imperfection
By prioritizing our human identity, leaders of tomorrow will be able to drive algorithm-based transformations in ways that enable us to lead the development of technology rather than being led by technology itself. One key aspect of a humane society is the fact that we are able to be compassionate and forgive mistakes. In other words, we can be tolerant of imperfect behavior while we search for progress. Indeed, as the famous French writer and philosopher Voltaire once said: “What is tolerance? It is a necessary consequence of humanity. We are all fallible, let us then pardon each other’s follies. This is the first principle of natural right.”
Algorithms and deep learning techniques are focused on being more accurate and effective than ever before. In other words, a new technology focus brings with it a focus on perfection. If we allow this kind of thinking to dominate and take the place of leadership, then it is a legitimate concern that we may grow into organizations and societies that do not respect personal freedom. In fact, a shift towards perfection, and consequently a rejection of being tolerant of human failures, which would slowly remove humanity from the algorithm equation, may already be happening.
Think, for example, about the use of facial recognition. China is currently using 170m cameras empowered by AI to fight crime and increase general security. Obviously, these are good reasons to use this kind of technology. But, think about it: if we adopt this facial recognition technology without spending at least some time pondering what it does to human values, such as our right to privacy and human freedom, then we may quickly develop a society where opposition to the new technology will no longer be tolerated. If this is the case, submission to the machine will be total, and the only way forward will then be to follow the logic of the machine, which advocates complete transparency without the opportunity to challenge it.
However, how much humanity does such a system represent? It is very likely that any human desire to avoid being screened will then only elicit suspicion: why oppose an infringement of personal freedom if you have nothing to hide? Opposition, so the logic goes, only makes sense if you have something to hide. Of course, if this kind of thinking becomes the default, then we have already moved into a society that does not allow humans to make their own choices. After all, from the perspective of the perfect society as orchestrated by the perfect machine, human choices are imperfect and prone to failure.
In fact, under such a machine-driven regime, allowing individual freedom of choice only brings the risk of imperfection and therefore represents a threat to the dream of constructing the perfect society system. Hence, the use of algorithms on such a large and intimate scale may well be leading to a less humane society. It is in light of these concerns that we must view the recent decision of San Francisco to ban facial recognition, or the comment of Brad Smith, president of Microsoft, that it would be best for us to take a principled approach (driven by values like fairness, non-discrimination, notice and consent) toward the use and development of facial-recognition technology.218
Central to this challenge of keeping leaders close to the mission of ensuring humanity in our technological pursuits is the realization that the task of automation requires the building of moral communities. The moral communities I talk about represent the ethical values that we, as humans, see as crucial to our identity. They represent the ethical values that we would like to see pursued through the application of new technology.
It cannot be that by introducing the new technology, we create a work culture where moral obligations and responsibilities – which were considered normal in the past – are pushed aside (and ultimately forgotten) in the organization of tomorrow. For example, replacing nurses with care robots may be effective in responding to labor shortages in hospitals, but, at the same time, if this new technological innovation is not led by leaders applying a humane perspective, we may forget that one important human value that needs to remain is to take care of each other. If technological revolutions lead to a shallow approach to humanity, innovations like care robots may ultimately lead to an increase in social isolation among humans. And that should not be an outcome that any human leader would wish to see happen.
203 Friedman, M. (1970). ‘The social responsibility of business is to increase its profits.’ The New York Times Magazine, September 13.
204 De Cremer, D. (2018). ‘Why Mark Zuckerberg’s Leadership Failure was a Predictable Surprise.’ The European Business Review, May-June, 7-10.
205 Darwin, C. (2006). On the Origin of Species. Dover Publications Inc.
206 PwC (2018). ‘PwC data uncovers disconnect between C-suite perception and employee experience with workplace technology.’ Retrieved from: https://www.pwc.com/us/en/press-releases/2018/c-suite-perception-employee-experience-disconnect.html
207 Accenture (2018). 'The big disconnect: AI, leaders and the workforce.' Retrieved from: https://www.accenture.com/us-en/insights/future-workforce/big-disconnect-ai-leaders-workforce and Accenture (2018). 'Realizing the full value of AI.' Retrieved from: https://www.accenture.com/_acnmedia/pdf-77/accenture-workforce-banking-survey-report
208 Microsoft (2019). 'Microsoft – IDC Study: Artificial Intelligence to nearly double the rate of innovation in Asia Pacific by 2021.' Retrieved from: https://news.microsoft.com/apac/2019/02/20/microsoft-idc-study-artificial-intelligence-to-nearly-double-the-rate-of-innovation-in-asia-pacific-by-2021/
209 Boston Consulting Group (2019). ‘The death and life of management.’ Retrieved from: https://www.bcg.com/d/press/18september2019-life-and-death-of-management-229539
210 Maack, M.M. (2019). ‘Youtube recommendations are toxic, says dev who worked on the algorithm.’ Retrieved from: https://thenextweb.com/google/2019/06/14/youtube-recommendations-toxic-algorithm-google-ai/
211 Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux.
212 Davies, B., Diemand-Yauman, C., & van Dam, N. (2019). 'Competitive advantage with a human dimension: From lifelong learning to lifelong employability.' McKinsey Quarterly, February 2019. Retrieved from: https://www.mckinsey.com/featured-insights/future-of-work/competitive-advantage-with-a-human-dimension-from-lifelong-learning-to-lifelong-employability
213 The Wall Street Journal (2019). ‘Amazon to retrain a third of its U.S. Workforce.’ Retrieved from: https://www.wsj.com/articles/amazon-to-retrain-a-third-of-its-u-s-workforce-11562841120
214 Microsoft. AI Business School. Retrieved from: https://www.microsoft.com/en-us/ai/ai-business-school
215 Wiener, N. (1960). ‘Some moral and technical consequences of automation.’ Science, 131(3410), 1355-1358.
216 Vasquez, Z. (2018). ‘The truth about killer robots: The year’s most terrifying documentary.’ Retrieved from: https://www.theguardian.com/film/2018/nov/26/the-truth-about-killer-robots-the-years-most-terrifying-documentary
217 The Economist (2019). ‘There are no killer robots yet – but regulators must respond to AI in 2019.’ Retrieved from: https://www.economist.com/the-world-in/2018/12/17/there-are-no-killer-robots-yet-but-regulators-must-respond-to-ai-in-2019
218 Smith, B. (2018). ‘Facial recognition: It’s time for action.’ Retrieved from: https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/