Chapter 7: The Day that Empowerment Changed

IT HAS BEEN said that no two humans are alike. When it comes to leadership, we do not expect the next leader to be a replica of the previous one. However, when we talk about the trend to automate our organizations, the idea of replication is often raised. Indeed, algorithms observe and learn the most consistent trends and act accordingly. Algorithms therefore aim to replicate the best procedures available. This means that they can be more accurate (due to a reduced error rate) and faster to reach a conclusion than any human. But is the running of organizations really just about promoting replicability and consistency in operations and performance management?

To some extent, we do like replicability and consistency in our actions. Most humans are averse to uncertainty and feel more comfortable when things are predictable. In a world driven by financial incentives, humans also like to cut costs, and the use of consistent procedures helps to achieve that goal. Humans also do not like to waste time. Time is money, and this is especially true when it comes to leading people. Many leaders are put under pressure to avoid spending too much of their time on developing their own people at the expense of financial gains. Reading all of this, we may conclude that algorithms could well be the answer to these three issues.

However, as we have seen so far, the most effective leaders do work differently. They appeal to a completely different set of human needs and values to transform companies into organizations that act wisely, responsibly and in line with their purpose, while creating value for all their stakeholders. Wise leaders critically evaluate the decisions that need to be made in light of different perspectives. They are driven by curiosity and are able to imagine how to deal in creative ways with problems. It means that wise leaders are aware of their responsibilities and as such try to treat all stakeholders fairly and respectfully.

This kind of leadership requires a range of abilities that we consider to be uniquely human. These are skills that algorithms cannot deliver, making them less suitable to replace humans in leadership roles. Parry and colleagues (2016), for example, noted that “while automation of organizational leadership decision making might improve the optimality and transparency of decisions by organizational methods, important moral and ethical conundrums remain unaddressed” (p. 573).146 And von Krogh (2018) emphasized the important value of our unique human qualities by stating that, “In the long run, outsourcing ‘intelligence’ to machines will neither be useful nor morally right. Although such technologies have many attractive features, they merely emulate cognitive processes and cannot substitute the great flexibility, adaptability, and generativity we associate with human intelligence” (p. 408).147

Empowerment makes the world go round

We expect our human leaders to make meaningful decisions for the future in light of any given situation. The best way to ensure that others perceive your decisions as legitimate is to connect people’s personal experiences with the reasons why you have decided to act in a specific way. When people can connect to what you have decided, they will internalize the direction taken and support it. To achieve such an emotional connection, leaders need the abilities identified in chapter six. When leaders use those unique human abilities in effective ways, they are said to be able to empower others. Interestingly, decades of research show that, by empowering others, leaders can shape how meaningful people find their jobs and how motivated they are to perform well. Empowering leadership provides others with a sense of autonomy by including them in the process of decision making.

Empowerment also enhances people’s self-esteem, confidence and sense of control over the execution of their job. These are all outcomes at the level of the individual employee that need to be nurtured by our leaders. These days, we work in volatile and complex business ecosystems, and the increase of automation further adds to this complexity. As a result, people today, perhaps more than ever, experience uncertainty, loss of control and anxiety. Leaders, therefore, have never had a greater responsibility to empower their people and ensure that their teams are motivated to deliver high-quality, innovative solutions.

Interestingly, with the introduction of algorithms, teams will be composed of humans and algorithms working together. A consequence of this new work reality will be that leaders need to be better equipped to empower both humans and algorithms. What might such empowerment look like?

Leadership empowers humans

Empowering human employees so that they can manage their work environment better and improve their performance is a crucial leadership requirement. This need for empowerment will become even more important in an age where employees interact with autonomously working algorithms. With the introduction of algorithms as new co-workers, the psychological experience of human employees will become more complex. Leaders of tomorrow need to account for this and should be informed about how to empower human employees in this new age (see Table 1).

Table 1: The empowering abilities for humans by the leaders of tomorrow

Empowering ability | Actions taken
Managing aversive emotions | Reduce emotions of fear and anxiety; eliminate feelings of uncertainty; avoid feelings of loss of control
Managing distrust | Avoid power struggles; increase transparency
Managing technology education | Make leaders tech savvy; promote continuous education
Managing employees’ expectations | Promote contact opportunities; create feelings of familiarity
Learning to explain the how and why | Be sincere; provide legitimate reasons; provide adequate reasons

1. Managing aversive emotions

One consistent finding that the academic literature reveals is that humans are not very eager to prefer algorithms’ forecasts over those made by humans.148 Generally speaking, humans assign more weight and importance to human input than to algorithmic input.149 As we saw earlier, this tendency for people to prefer human input over algorithmic input – even if the latter proves to be more accurate – is called “algorithm aversion.”150 This aversion to relying on algorithms is somewhat irrational because, in specific tasks, algorithms actually perform faster and more accurately than humans.

A smart human should then surely be motivated to use the information generated by an algorithm to their own best interest, right? Well, often we do not! In fact, people act quite irrationally – especially when situations are experienced as new, uncertain and complex.151 Within the automation age, leaders therefore see a new empowerment challenge emerge: managing the irrational judgments of employees towards the operation of algorithms in their work setting.

Why aversive?

First of all, we need to realize that algorithm aversion is real and motivated by irrational behavior. For example, if algorithms perform as well as, or even better than, humans, then the range of solutions they identify can be considered useful. However, people will discount these ideas more easily when they find out they were offered by an algorithm. As a result, none of these solutions will be used. This example teaches us that our biased attitude towards algorithms carries the significant risk of making us less informed, and hence, less smart.

We lack open-mindedness in those cases. But, unfortunately, it is open-mindedness that drives creativity. And this is not where it stops. What will happen to those employees who do comply with the advice offered by algorithms? Well, research shows that if the bias of algorithm aversion is a shared mindset within the company, then these employees may risk becoming stigmatized and even excluded from the team.152 These findings clearly underscore the need for leaders to guide human employees more effectively (and less irrationally) in their interactions with algorithms.

But how do they do that?

Leaders will have to devote time and energy towards carefully assessing whether this aversion exists within their team, and, if so, where it originates from. Understanding this root cause will be helpful, since leaders will then be able to present a counter-argument through which employees can see the benefits of using algorithms. This will challenge the team perspective and hopefully ensure that team members become more effective and make a more significant contribution to the organization as a whole.

Tackle the fear

What are the reasons for algorithm aversion to exist? One reason is that most people are basically afraid of algorithms making autonomous decisions. An algorithm is a different kind of animal and, as such, is a relative unknown. Not being familiar with algorithms makes people feel uncomfortable. It leads to their being fearful about this new work situation. And with fear, comes uncertainty; whenever people feel uncertain, they distance themselves from the source of their fear. It is this kind of psychology that makes humans avoid or even discount the input of algorithms in the decision-making process.

The uncertainty experienced with respect to algorithms is particularly related to feeling uncomfortable with a new colleague who is not human. Just think about it! As humans, we have a strong drive for our society to adhere to human values, and, therefore, for decision makers to bring human qualities to the table. This is obviously not the case when we talk about algorithms. This technology does not have feelings and, in our view, is unable to address and deal with human experiences.153 Hence, we would rather avoid this new co-worker.

Leaders therefore should be trained to minimize the fears that impact their employees’ work experience. One way to do so is by making algorithms more human. Another is to ensure that digital transformations are not executed simply because everyone else is doing it. Companies motivated in such a way have usually not done their homework and are unable to assess the true value that algorithms can bring to the achievements of the company. It is therefore no surprise that many digital transformation processes fail.

In fact, digital transformations are more likely to succeed if organizations can help their employees to have a clear understanding of how and why algorithms should be used. Leaders thus need to better explain the use of technology to their employees. Creating a well-informed workplace allows a company to make better use of the best technology to fulfil their own unique purpose. Because, really, if the technology used is not up to the task, then there is no point in automating the workplace, since it will not bring any advantage. In fact, it may result in disadvantages, as employees will not engage out of fear of the new technology.

Tackle loss of control

A second reason underlying algorithm aversion is the fact that by allowing algorithms to be involved in the decision-making process, humans experience a loss of control in their job. The need to control one’s environment is considered to be one of the most basic concerns known to humans.154 People value control in almost all aspects of their life – especially so when decision making is involved. The concern to feel in control is so strong that people are even willing to make financial sacrifices, up to the level that it hurts their own financial wellbeing.155

In fact, my own research reveals that employees are willing to sacrifice a large part of their work budget to keep algorithms out of the decision-making loop, so that they can stay in control.156 Obviously, such behavior is detrimental to the functioning of any organization.

We do not want our employees to use their work budgets to avoid using digital improvements. When this happens, employees devote less time to their job and underperform because they lack resources. What we can learn from all of this is that leaders in the 21st century need to become better at managing the emotions of employees when algorithms become their new co-workers. In this process, the unique human abilities of empathy and emotional intelligence will play a very important role and therefore need to be inculcated.

2. Managing distrust

One problem raised by using algorithms as advisors is that we will not always know why advice is offered or how a conclusion has been reached. People shy away from using algorithms because they think of them as a black box. No matter how optimally designed the advice may be, if we do not know how the advice came about, humans quickly withdraw and have doubts about it. We do not consider advice to be legitimate if the procedure behind it is unclear.157 It is only when a source is considered legitimate, that results are easily accepted and used.

With respect to the use of machines in our work setting, the issue of distrust is a major one. Companies may be equipped to explain what kind of data is used and what kind of calculative principle the algorithm runs on. But, this is where, at best, it stops. What exactly happens inside the machine is unknown to most of us, and if it is known, it is often very difficult to explain. It is therefore no surprise that one of the major difficulties that companies experience is communicating the use of data.158

So, if we do not know the internal workings of the algorithm, how can we be sure that the advice is right? Even if the machine seems to be very accurate in making predictions, the lack of transparency turns it into a black box that humans have difficulty trusting.159 At the heart of the problem lies the fact that humans find it difficult to trust algorithms as advisors. If knowledge is power, then an algorithm perceived as a black box creates work situations in which people attribute power to the machine, because only the machine knows (not the human!). Under those circumstances, people will feel they are losing ground to the algorithm.

It’s all about power

With such a power struggle lined up, humans are likely to stand firm and discount the advice offered by an algorithm. The well-known case of IBM trying to use its supercomputer program, Watson, to help doctors diagnose and treat cancer (called Watson for Oncology) is a good example of this phenomenon. Watson was also seen as a black box, which led doctors to arrive at some peculiar conclusions. When Watson arrived at the same diagnosis as the doctors, they judged the program to be useless (it was not adding anything to what they already knew). When the computer program, however, arrived at a different conclusion, the doctors reasoned that the program was wrong.

You may ask yourself now, why did the doctors not think that they were wrong?

We know from research that humans have the tendency to evaluate themselves in more positive ways than others. Such a self-serving tendency is definitely more likely to occur when a human is compared to a non-human entity. So the doctors clearly considered their professional experience to be more reliable than the advice provided by Watson. And, of course, this helped them to be confident and trust themselves above the machine, whose internal workings were not even transparent.

Create transparency

So, what to do? Leaders need to create a feeling among employees that they do not have to be engaged in a power struggle with the algorithm. This can be achieved by opening the black box. By making the internal workings of the machine more transparent and easier to understand, the power balance may shift again. By giving more power to the human (employee), trust may be achieved.
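As a minimal illustration of what “opening the black box” can mean in practice, consider a model whose decision rule is a plain weighted sum, so every prediction can be decomposed into visible per-feature contributions. The feature names and weights below are hypothetical, chosen only for illustration:

```python
# A sketch of a transparent scoring model: instead of returning only a score,
# it also reports how much each input contributed to that score.
# Feature names and weights are invented for illustration.

WEIGHTS = {
    "years_experience": 0.4,
    "training_hours": 0.2,
    "error_rate": -0.6,
}

def score(employee: dict) -> float:
    """Plain weighted sum: the whole decision rule is visible."""
    return sum(WEIGHTS[f] * employee[f] for f in WEIGHTS)

def explain(employee: dict) -> dict:
    """Per-feature contributions, so a user can see *why* the score arose."""
    return {f: WEIGHTS[f] * employee[f] for f in WEIGHTS}

candidate = {"years_experience": 5, "training_hours": 10, "error_rate": 2}
print(round(score(candidate), 2))  # weighted sum of the three features
print(explain(candidate))          # contribution of each feature to the score
```

Even when a production model is far more complex, surfacing contributions in this spirit gives employees something concrete to interrogate, which is one way of shifting power back towards the human.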

Of course, I hasten to mention that this task of promoting transparency is not getting easier. Technology is becoming more complex and can be applied to almost anything. It makes it more difficult to truly understand the internal workings of the machine. For this reason, it is a requirement that leaders today are tech-savvy enough to understand and explain, at least, the basic reasons why the algorithm has become a player in today’s workforce.

3. Managing technology education

Humans are known to work together with others more easily if they have a shared sense of purpose. So, when algorithms enter the collaborative work circle as the new co-worker, an important job for any leader will be to explain the purpose that they serve. Leaders will need to know the answer to the simple question: why do we use algorithms in the first place?

The importance of addressing this question underscores the need for leaders in the 21st century to be reasonably tech savvy. Before you start sweating and pondering whether you are tech savvy enough, let me assure you right away that you will not need to know all the inner workings of an algorithm. As I mentioned earlier, there is no need for you to become a coder. What leaders do need to do is stay abreast of the major trends in technology. This kind of awareness is needed to understand why digital transformation processes are currently happening in the business world, whether your own company needs to do the same, and if so, in what way.

As leaders set out visions for the future of the company, they need to understand how the use of new technology can help develop strategies that help the competitiveness and long-term sustainability of the company. This type of leadership action in an era dominated by automation clearly requires leaders to be aware of the most recent technology trends. They also need to acquire sensitivities to the challenges that the introduction of algorithms brings to a human work force.

These sensitivities can be trained and maintained by, for example, being personally involved in the recruitment of data scientists and engineers, and in introducing these new recruits to their business colleagues in other departments. In fact, in the company of tomorrow, collaboration between teams of data scientists and teams in the other business areas (sales, marketing, finance, HR and so forth) will be needed more than ever, in order to create more effective solutions to future challenges.

Continuous education

Companies thus need to prepare their leaders to think in ways that combine their business knowledge and expertise with a greater awareness of the opportunities and challenges that technology brings. This requirement to familiarize yourself with recent technological developments is linked closely with today’s global mission of lifelong learning. Governments and companies alike have embraced the idea that their citizens and workforces need to be better prepared for a future where technology will bring many disruptions.

To meet these demands, training programs must be developed, and an attitude of learning continuously encouraged, at all levels of society. A report published by MIT Sloan Management Review and Deloitte shows that an increasing number of companies are investing significantly to achieve digital maturity in their workforce.160 This is done by engaging in activities of continuous learning.

There is a good reason for this goal. Companies are shown to perform better and create more innovation when the cultural mindset is one of learning. Many companies consider it an important responsibility for leaders to educate and promote learning.161 Of course, the best way to do this is to set an example and show enthusiasm for lifelong learning. Enthusiasm initiated at the top will cascade down and inspire others at lower levels in the organization. The first step for leaders is therefore to educate themselves about the technology employed within the company.

4. Managing employees’ expectations

The fact that many employees distrust algorithms indicates that they have specific expectations on how an algorithm should function. These kinds of expectations may well be biased, thereby further complicating how humans and algorithms mingle. Research has demonstrated that humans tend to expect algorithms to deliver advice that is perfect.162 In general, we expect any use of technology to be perfect. No mistake should be made or allowed once algorithms are brought into the workforce. And, if a mistake does take place, then trust in the automated advisor is lost immediately and completely.

Of course, we also do not like our human co-workers to make mistakes, but, contrary to when algorithms make mistakes, trust is not lost right away. The reason for this is that we know that as humans we are not perfect. And, because we are not perfect, we are able to forgive other humans and maintain trust to some extent. Algorithms, on the other hand, are supposed to be perfect, so why should we forgive them?

And, here lies the danger. If forgiveness cannot be granted, conflicts will build and create a non-co-operative work environment between humans and algorithms. In addition, because humans do not have much experience in dealing with algorithms as part of the work context, they are not yet at ease when working with them. Our biases and stereotypical expectations will then have a major impact on how we behave towards them.

One solution is suggested by the contact hypothesis. According to this hypothesis, the longer humans work together with algorithms, the more comfortable they will feel in this specific relationship. Consequently, the advice of algorithms will gradually be taken more seriously and, hence, used more often.

5. Learning to explain the how and why

It should be clear by now that when employees lack an explanation for what algorithms do and why they do it, trust will suffer. It is not only the trust in algorithms that will decline, but also trust in the company. In particular, companies that decide to make use of algorithms without making sense of them to their employees will be trusted less. As such, it is no luxury but a requirement that leaders explain clearly the value and operation mode of algorithms to their employees. With better explanations and more transparency, we can expect less resistance from employees towards algorithms. This will pave the way to more integrative and fruitful collaborations between humans and algorithms.

Interestingly, this issue of explaining the use of technology has not gone unnoticed and is surfacing everywhere in business. For example, Jack Dorsey, CEO of Twitter, noted, “We need to do a much better job at explaining how our algorithms work. Ideally opening them up so that people can actually see how they work. This is not easy for anyone to do.”

So, how to deliver the best and most adequate explanation? The Merriam-Webster dictionary defines an explanation as “the act or process of telling, showing,” or “being the reason for or cause of something.” Explanations are thus meant to provide meaning, justification and transparency about why something is needed or available.163 And, with explanation comes common understanding, more trust and co-operation, and less conflict.164

How to explain?

To achieve these positive outcomes, explanations need to include a few features. First, explanations need to be perceived as sincere.165 An explanation is sincere when you are seen as being an honest and authentic person. Such an impression shows that you feel personally involved in making sure everyone knows what is going on. Coming across as authentic lets people know you will take their issue seriously.

When it concerns humans, we like to see people who are able to explain their actions and decisions. There should be no need to ask for someone else to explain the decisions you made. However, this is not the case for algorithms. Even if an algorithm were able to explain its actions, it would still be limited in impact, simply because it does not have an authentic sense of intelligence.

Second, explanations need to present legitimate reasons for the situation that has occurred.166 They need to make clear that logical and fair reasons exist for why certain decisions and actions have been undertaken. In a sense, almost anything can be explained, but if the explanation is perceived to be unfair or illegitimate, it may backfire. Finally, explanations need to provide adequate reasons for why the situation needs to be like this.167 The reasons need to be high-quality, easy to verify and come across as believable. This is more easily achieved if your reasons are backed up by empirical evidence and illustrated by examples.

Leadership empowers algorithms

Algorithms will be part of the team. Leaders are expected to guide teams in this transformative process, which means that leaders in the age of automation must learn to empower not only humans, but algorithms too. In this way, the right circumstances need to be created so that algorithms can fulfil their full potential and help reveal new business opportunities and value.

Which abilities do leaders need to show to achieve this kind of empowerment?

Table 2: The empowering abilities for algorithms by the leaders of tomorrow

Empowerment abilities | Actions taken
Delegating work to the algorithm | Give autonomy in task execution; accept full responsibility; create a loop of feedback meetings
Identify which data is most important | Set priorities; use purpose to select the right dataset
Use the right frame to ask questions | Use purpose to frame the right questions for each business area

1. Delegating work to the algorithm

The first important task for leaders is to put the algorithm to work: algorithms need to be given jobs to do. They need to be put into the work circle and placed in charge of a specific number of tasks. Only then can an algorithm truly become a co-worker.

However, there is more to the task than simply having the algorithm analyze data that you have selected. An algorithm also delivers output and this output will determine future steps to be taken. Can algorithms take these steps?

Yes, they can – at least in the context of the simpler tasks. It is fair to say that algorithms will only become more sophisticated over time. This means that, at some point in the future, algorithms will be ready to be given the autonomy to determine and execute their own decisions. This is the moment when (simple) tasks will become fully automated. Interestingly, this scenario depicts a reality in which leaders have to empower both algorithms and humans.

How do leaders empower humans?

Humans feel most motivated when they experience a sense of autonomy in the tasks they conduct. This can be empowering for employees because they can decide on how to use their abilities to produce the best possible outcome. Through learning from and mastering tasks, employees grow into their job. Such processes allow them to tackle more difficult and complex problems in the future. In a somewhat similar vein, in the near future, algorithms will also have to be given a certain sense of autonomy.

What we expect from algorithms is that they work fast, produce accurate numbers and identify trends that, if simple enough, can be applied immediately. Knowing this, leaders then need to decide which tasks to delegate to an algorithm, and in what way. Delegating work to algorithms frees up the time and work schedule of other employees, who can then focus on more complicated tasks. At the same time, it also allows for implementing the algorithm in the most optimal way possible. After all, if organizations invest significantly in their automation efforts, then it would be a waste of money if the algorithm cannot be trusted to fulfil its role.

In theory, this all sounds fine, but we do need to be cognizant of a limitation that algorithms carry with them into the work context. This limitation concerns the fact that they are usually developed in settings where co-creation with humans has not been investigated beforehand. Algorithms have not been tested in advance on how to function and how to optimize their value in a social context.

Therefore, algorithms do not possess an intuitive sense of working together with a human. Leaders will need to experiment with how best to integrate an algorithm in a social context. Leaders will also have to be sensitive about how to communicate that certain tasks are to be delegated to an algorithmic co-worker, because the mere fact of delegation in this case will mean that the algorithm will work autonomously. This process has two sides.

The art of delegation

First of all, leaders will have to decide when to tell human employees that they have to defer to algorithmic judgment. This is not an easy task! Indeed, humans will usually resist such delegation efforts and may even resort to strategies of sabotage. In this context, the right kind of explanation is needed, where the leader expresses deep sympathy for the human employee and the impact on his/her work experience. At the same time, the leader needs to explain why it is necessary that the human employee complies with the advice delivered by an algorithm.

Put simply, the first side of the delegation process requires that a leader creates the room needed for the algorithm to perform well, while at the same time also taking full responsibility for the decision to have algorithms and humans work together in co-creation. Responsibility here means that the leader is liable for the outcomes that the algorithm reveals, in both its autonomous and collaborative tasks. As the leader, you should be willing to correct the work situation if those outcomes prove unsatisfactory.

How can leaders determine whether work outcomes are unsatisfactory in the collaborative context between algorithms and humans? With this question, we touch upon the second side of the delegation process. Even though leaders can grant more autonomy to algorithms, boundaries regarding their execution power need to be drawn. The goal of delegating tasks to algorithms is not to replace human employees, but to augment their abilities. Therefore, as in any delegation process, feedback meetings are needed to continuously evaluate the performance of the algorithm. Based on these meetings, boundaries to what an algorithm can do will be negotiated, set and re-set.

This feedback will have to be given by the peers of the algorithm, which, in the case of an algorithm, will be the human employee. The human employee is the end user of the algorithm here and therefore a source of feedback. An important point to stress is that the input from the human employee in evaluating the algorithm needs to be tested for the earlier-mentioned biases as well. As such, leaders need to build new work cultures in which biases and distrust are reduced as much as possible, to ensure that giving feedback will not be seen by employees as an opportunity to simply oppose the algorithm.
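As a hypothetical sketch of such a feedback loop, imagine the peer ratings collected in those meetings being used to widen or narrow the algorithm’s autonomy one step at a time. The autonomy levels, rating scale and thresholds below are invented purely for illustration:

```python
# A sketch of negotiating and re-setting an algorithm's boundaries:
# human peers rate recent algorithmic decisions (0 to 1), and the leader
# moves the algorithm one autonomy level up or down accordingly.
# Level names and thresholds are hypothetical.

AUTONOMY_LEVELS = ["advice_only", "simple_tasks", "routine_decisions"]

def review_autonomy(level: str, peer_ratings: list) -> str:
    """Return the next autonomy level based on the average peer rating."""
    avg = sum(peer_ratings) / len(peer_ratings)
    i = AUTONOMY_LEVELS.index(level)
    if avg >= 0.8 and i < len(AUTONOMY_LEVELS) - 1:
        i += 1  # consistently good feedback: delegate more
    elif avg < 0.5 and i > 0:
        i -= 1  # poor feedback: pull tasks back to humans
    return AUTONOMY_LEVELS[i]

print(review_autonomy("simple_tasks", [0.9, 0.85, 0.8]))  # routine_decisions
print(review_autonomy("simple_tasks", [0.4, 0.3, 0.5]))   # advice_only
```

The point of the sketch is the shape of the process, not the numbers: boundaries are not fixed once, but re-set at every feedback meeting, with the human end users supplying the ratings.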

2. Identify which data is most important

Algorithms can make external data transparent so that we can more easily identify significant trends to help optimize business processes and value created. However, an algorithm cannot achieve any of this if it is not fed with data. So, who decides what kind of data is given to the algorithm?

This is an important question because intelligent machines will produce better insights and identify more useful trends if the data is of high quality. If the data quality is poor, one cannot expect optimal use of algorithms. In fact, when using low-quality data, algorithms will underperform and may even perform worse than humans. As a result, they will face serious opposition from their co-workers, and the investments the company made in automation may quickly become futile. This is a scenario that many companies take seriously, but nevertheless have difficulties in solving.

What is interesting to observe is that in cases where the algorithm fails because of poor data quality, organizational leadership does not so much blame themselves and their own costly investments, but rather the human workforce. In my consultancy efforts, I hear such comments very often. When things go wrong, blame the humans!

If such accusations are made, it is easy to see why organizations might be tempted to think it easier to get rid of the human employee than the automated one. But, of course, we all know by now that it is not that simple. A more plausible explanation is that too little guidance and clarity has been provided, to both humans and algorithms, about what needs to be achieved. In other words, if the organization is not clear about its purpose and the goals that come with it, it will be difficult to identify the kind of data that should be used as input to the automated workforce.

Purpose makes for data

If it is unclear or ill-defined what kind of data should be used, the outcome of any algorithmic effort will be far less useful to the pursuit of company goals. Dewhurst and Willmott (2014) call this failure of algorithms to act in optimal ways the “garbage in/garbage out” principle.168 If algorithms are to assist in making decisions and the appropriate data is not employed, why should we expect business processes and outcomes to be optimized? Algorithms cannot solve the data-quality issue themselves, so it is up to leaders to make those choices. Leaders set the priorities, and those priorities will guide the collection of high-quality data. This reality also requires leaders to be clear about, and embrace, the purpose of the organization they are leading.

Decision making is key to effective leadership and this is no different in the age of automation.169 Leaders of tomorrow will still have to make strategic decisions, use soft skills in dealing with employees, and decide on the opportunities on offer for all stakeholders. The one big difference, however, is that in the future, algorithms will have to be included in the decision-making process. One important consequence of this change is that leaders will more than ever need to consider the importance of data quality for the value they want to create. Rather than processing and analysing the massive amounts of data available, leaders need to decide which type of data to focus on and for what reason.

Leaders today should be trained to recognize the purpose of the company and translate this knowledge into asking the right kinds of questions. Organizations and their leadership need to envision the kind of business situation they want to grow into. They have a specific reason for pursuing that vision, and this is their purpose. The why of their business.

Purpose, as such, provides meaning to the vision a leader articulates. And, if we let our purpose dictate the goals we strive for and the actions we undertake, we are on the path to achieving value for the various stakeholders. From this point of view, it is clear that a purpose-driven approach to data selection is needed if we want to optimize the use of algorithms in decision making.

3. Use the right framework to ask questions

Purpose helps to identify the priorities of a company, which will then direct the selection process of data for analysis. As such, purpose provides a framework that can be used to look at the external reality (data). The benefit of a framework is that it can also help in guiding the specific questions that need to be answered. Frameworks shape the kinds of questions you need to ask to promote the development of innovative solutions. Asking the right kinds of questions also helps to ensure that we continue to make the best use of the highest quality data.

A focus on purpose helps to select data regarding each stakeholder (customer, employee, supplier, shareholder and so forth) involved. But, of course, not every business department (marketing, sales, HR, finance, operations and so forth) is required to solve the same problems. It is therefore important that leaders use the purpose-driven framework to infer the type of questions that should be asked in each business department.


146 Parry, K., & Cohen, M. (2016). ‘Rise of the machines: A critical consideration of automated leadership decision making in organizations.’ Group and Organization Management, 41(5), 571-594.

147 von Krogh, G. (2018). ‘Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing.’ Academy of Management Discoveries, 4(4), 404-409.

148 Diab, D.L., Pui, S.-H., Yankelevich, M., & Highhouse, S. (2011). ‘Lay perceptions of selection decision aids in US and Non-US samples.’ International Journal of Selection and Assessment, 19(2), 209-216.

149 Promberger, M. & Baron, J. (2006). ‘Do patients trust computers?’ Journal of Behavioral Decision Making, 19(5), 455-468.

150 Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). ‘Algorithm aversion: People erroneously avoid algorithms after seeing them err.’ Journal of Experimental Psychology: General, 144(1), 114-126.

151 Ariely, D. (2009). ‘Predictably irrational: The hidden forces that shape our decisions.’ HarperCollins.

152 Shaffer, V.A., Probst, A., Merkle, E.C., Arkes, H.R., & Medow, M.A. (2013). ‘Why do patients derogate physicians who use a computer-based diagnostic support system?’ Medical Decision Making, 33(1), 108-118.

153 Gray, H.M., Gray, K., & Wegner, D.M. (2007). ‘Dimensions of mind perception.’ Science, 315(5812), 619.

154 White, R. W. (1959). ‘Motivation reconsidered: The concept of competence.’ Psychological Review, 66(5), 297-333.

155 Bobadilla-Suarez, S., Sunstein, C.R., & Sharot, T. (2017). ‘The intrinsic value of choice: The propensity to under-delegate in the face of potential gains and losses.’ Journal of Risk and Uncertainty, 54, 187-202.

156 De Cremer, D., McGuire, J., Mai, M.K., & Van Hiel, A. (2019). ‘Sacrificing to stop autonomous AI.’ Working paper NUS Business School.

157 Tyler, T.R. (1997). ‘The psychology of legitimacy: A relational perspective on voluntary deference to authorities.’ Personality and Social Psychology Review, 1(4), 323-345.

158 Berinato, S. (2019). ‘Data science and the art of persuasion.’ Harvard Business Review. Retrieved from: https://hbr.org/2019/01/data-science-and-the-art-of-persuasion

159 Castelvecchi, D. (2016). ‘Can we open the black box of AI?’ Nature, 538, 20-23.

160 MIT Sloan Management Review and Deloitte (2018). ‘Coming of age digitally: Learning, leadership and legacy.’ Retrieved from: https://sloanreview.mit.edu/projects/coming-of-age-digitally/?utm_medium=pr&utm_source=release&utm_campaign=dlrpt2018

161 De Cremer, D. & Mancel, P. (2018). ‘Leadership is about making others smarter to better serve customers.’ The European Financial Review, October-November, 57-60.

162 Madhavan, P., & Wiegmann, D.A. (2007). ‘Similarities and differences between human-human and human-automation trust: An integrative review.’ Theoretical Issues in Ergonomics Science, 8(4), 277-301.

163 Shaw, J.C., Wild, E., & Colquitt, J.A. (2003). ‘To justify or excuse: A meta-analytic review of the effects of explanations.’ Journal of Applied Psychology, 88(3), 444-458.

164 Holtz, B. C., & Harold, C. M. (2008). ‘When your boss says no! The effects of leadership style and trust on employee reactions to managerial explanations.’ Journal of Occupational and Organizational Psychology, 81, 777-802.

165 Bies, R. J., Shapiro, D. L., & Cummings, L. L. (1988). ‘Causal accounts and managing organizational conflict: Is it enough to say it's not my fault?’ Communication Research, 15, 381-399.

166 Mansour-Cole, D. M., & Scott, S. G. (1998). ‘Hearing it through the grapevine: The influence of source, leader-relations, and legitimacy on survivors’ fairness perceptions.’ Personnel Psychology, 51, 25-54.

167 Bobocel, D. R., & Zdaniuk, A. (2005). ‘How can explanations be used to foster organizational justice?’ In J. Greenberg & J. A. Colquitt (Eds.), Handbook of Organizational Justice. Mahwah, NJ: Lawrence Erlbaum.

168 Dewhurst, M., & Willmott, P. (2014). ‘Manager and machine: The new leadership equation.’ McKinsey Quarterly, 1-8.

169 House, R.J. (1996). ‘Path-goal theory of leadership: Lessons, legacy, and a reformulated theory.’ The Leadership Quarterly, 7(3), 323-352.
