RECOGNIZED AS THE new big thing, algorithms are poised to penetrate many of our daily activities and tasks. The reality, however, is that algorithms are not merely preparing to dominate our lives; they already do.
Algorithms drive machines by telling them what to do in order to produce what humans want to see. Yes, algorithms are not only the eagerly awaited, superintelligent promise of the AI hype, but also the driver of the ubiquitous machines we use routinely today, such as computers. Algorithms are thus already a pivotal part of our society, and their influence is only expected to increase over time.
I am emphasizing this omnipresence of algorithms, today and in the future, because many people in reality have no clear idea of what algorithms are and, as a result, cannot properly assess their real value and application in our work settings. This observation matters because businesses seem happy to embrace the idea that leadership by algorithm will be the next evolutionary step. And if we plan to delegate the influence that leaders have over others to an algorithm, then we also need to understand who these leaders are.
If we think about leadership, we quickly arrive at specific types of individuals with specific skills; in other words, we have clear expectations and views on the identity of the people we consider to be leaders. Because we hold those expectations, we are able to decide quickly what kind of leader we need when situations change. Or, to put it differently, human psychology works in such a way that when a situation with specific demands presents itself, we can quickly infer the kind of leader that is needed. Given these situational demands, we will readily accept a leader with the right skills as the one in charge and comply with their directives.
Business today is confronted with much uncertainty, a volatile market, and rapid changes requiring leadership that can deal with complex and ever-changing situations. Algorithms, with their capacity to be rational and consistent when dealing with complex and highly ambiguous events, seem to fit the bill to lead in such situations. In fact, as I have documented earlier, today’s changing business environment is making the algorithm a prime candidate for tomorrow’s leader.
But is this really true? Can we make such a decision based on our perceived understanding of what the situation demands and the abilities an algorithm presents? Especially if we are not even clear how the algorithm is to be defined within the context of our society?
When it comes down to putting algorithms on the world stage as leaders-to-be, we need to become better informed about the real leadership potential that this new technology has before we blindly commit to automation. For effective leadership to emerge, one necessary condition is that any future leader is able to offer an identity that can be trusted. It is the presence of trust that makes people voluntarily engage in an open and co-operative relationship with the leader. It is only when relationships are characterized by open-mindedness and collaborative behavior that the influence of an effective leader can kick in. Put simply, if the actor placed in the leader role is not trusted, no leadership can emerge.
Being wise, not smart
So, how do algorithms fare in this respect? The late Peter Drucker once noted that “the computer makes no decisions; it only carries out orders. It’s a total moron, and therein lies its strength. It forces us to think, to set the criteria. The stupider the tool, the brighter the master has to be – and this is the dumbest tool we have ever had.”53
If Peter Drucker’s wisdom still holds true today, then we need to worry, because a potential leadership disaster may be circling above our heads. If algorithms move into leadership roles and take on the tasks of dealing with a rapidly changing world, then a problem is likely to occur. That problem centers on whether algorithms possess the skills to acquire the influence needed to lead others. After all, leaders today need to be influential, as they have to develop truly global organizations that operate effortlessly across borders. To achieve such influence, scholars have argued, a sense of wise leadership is needed.54 Are algorithms capable of such wisdom? If they are not, then we may have made algorithms appear to be wiser than they really are. And if that is the case, we need to be more careful in our assessment of how and when algorithms can be used in matters of authority.
Building on this logic, an important question to address is whether algorithms can really be wise while not being human. Again, research can help here. What we know so far is that studies have shown that people perceive machines in general, and algorithms more specifically, as non-human. We perceive them this way for the simple reason that we are unable to attribute a “complete mind” to a machine.55 We do not consider machines and algorithms to possess the fully fledged emotional (experiential) and thinking (agency) capabilities of humans. You may ask yourself whether machines really need to possess the entire range of human emotions and cognitions to assume the role of leaders. Given that people only follow leaders they perceive to be legitimate, and that legitimacy is inferred from our perceptions of whether someone is wise, fair and mindful, the answer is yes: it is indeed necessary.
Research in psychology shows that – as humans – we only consider someone or something to have a mind when we can attribute both agency and experience to them.56 As algorithms are perceived to be limited in their ability to show empathy, or even to understand the true meaning of human emotions, we regard them as not having a complete mind. Furthermore, if we consider another party to lack a complete mind and the ability to recognize and understand emotions, it is safe to assume that we also do not want that party to make ethical choices on our behalf. This consequence obviously complicates the idea of algorithms taking up any leadership position, because leaders are expected to serve our interests and make the appropriate decisions to do so.
So, this should be the end of the story and algorithms should simply not move into leadership roles.
Right?
The leadership of today will not be the leadership of tomorrow
Maybe it is not the end of the story, but rather a new beginning. If we listen to popular media, the business press and visionary keynotes, algorithms may still be in the leadership game – maybe even more than ever. Indeed, although science has identified several important limitations that impede algorithms from taking up decision-making responsibilities, this has not prevented discussion of whether leadership by algorithm should still occur. The idea that algorithms can run organizations is not dead and buried, but alive and kicking.
How then can this discussion about automated leadership survive and even be envisioned as the future leadership model? One possible reason may be that this leadership-by-algorithm hype in essence indicates a frustration with today’s (human) leadership. As a result, business and society at large may be looking for different forms of leadership. So, it seems likely that we are looking for a different kind of wisdom in our leaders of tomorrow. And, that kind of new wisdom could well be provided by an algorithm.
What I am saying is that we may have entered an era where we do not consider it necessary for our future business leaders to possess the kind of wisdom that we so dearly attribute to humans. Rather, it may be that we define the wisdom for our future leaders in terms of other attributes and skills. It could well be that we are looking for a kind of leadership that is best equipped to provide the most accurate and, at the same time, fastest decisions. If we want those decision-making qualities to be reflected in our future leaders, then it should not be a surprise that we are ready to embrace the idea of leadership by algorithm. After all, isn’t it the case that leaders able to make fast and accurate decisions should also be able to best manage a volatile business environment?
Interestingly, when we look at the leadership literature, scholars have portrayed good leaders as those who “make good decisions in a timely way.”57 We know that leaders have to make decisions on a daily basis. We also know that those decisions carry important social consequences that can benefit or harm the organization and its employees.58,59 For these reasons, today’s focus in the digital era may be more on selecting leaders who are able to deal with data in the most optimal way. And, subsequently, we suddenly recognize the beauty of an algorithm as a likely candidate to make decisions and, hence, lead.
If we move from the theoretical exercise above to what we see in practice, we may find some evidence in favour of leadership by algorithm. One thing that has not gone unnoticed is that jobs are increasingly being automated, with algorithms integrated into decision-making processes. This trend could be interpreted as a signal that a new kind of automated leadership may well be on its way.
And why should this be? Well, the faster, more accurate and more consistent self-learning algorithms become, the more likely it is that humans will gradually transfer the power to lead to those same algorithms. Today’s reality is that companies operate in complex and volatile business environments where the need for faster and more accurate decision-making is increasingly pressing. To meet this need, people are beginning to seek a new kind of leader to get the job done. This new kind of leader seems to be one that is automated, thinks rationally and, hence, offers the accuracy and precision to provide the most optimal choice in any kind of situation.
Are we ready to accept?
At this moment, you may wonder whether our need and desire for this new kind of leader is not simply wishful thinking. On the one hand, humans surely would not so easily transfer the power and influence of their leaders to an automated entity. On the other hand, maybe we are too sentimental about these questions. It is this emotional reaction that prevents us from turning leadership as we know it today into a more optimal form that may well be less human.
After all, from a rational point of view, we should do anything possible to optimize our way of doing business. Such efforts, without a doubt, should also include thinking about how we can run our organizations and make decisions in better, more optimized ways. If we are serious about thinking in a rational fashion about how we want to approach the future of work, it should only be a matter of time before automated leadership will happen.
Some research actually suggests that people today are ready to accept this idea. A 2019 study by Logg, Minson and Moore examined the attitudes of humans towards the judgments and advice offered by algorithms, and arrived at some powerful conclusions:60 “Our studies suggest that people are often comfortable accepting guidance from algorithms, and sometimes even trust them more than other people. … It may not be necessary to invest in emphasizing the human element of [the] process. Maybe companies that present themselves as primarily driven by algorithms, like Netflix and Pandora, have the right idea.”
This research clearly underscores the idea that humans may not be so sentimental about the future leadership question and are actually more willing than we expected to trust the input and direction that algorithms provide. And, importantly, this tendency is not confined to experimental studies; it shows up in real life as well. We know that Uber riders respond more negatively to a price increase if it is set by a human than if it is set by an algorithm. If a human sets out new guidelines that harm the self-interest of other humans, those others will be less forgiving than when an algorithm initiates the same policy.
After all, if a decision made by a human leads to negative consequences for another human, it is usually evaluated as intentional, whereas when an algorithm makes a decision with negative consequences for a human, people do not perceive the algorithm to have acted intentionally. Indeed, as discussed earlier, people do not perceive an algorithm to have a mind, and it is therefore difficult to see how an algorithm could have bad intentions.
This shows that, under certain circumstances, humans actually prefer algorithms to make choices, because algorithms are not considered threatening to us. A recent study published in Nature Human Behaviour consolidated this idea.61 Its studies revealed that human employees showed a stronger preference for being replaced by an algorithm than by another human. This is quite a surprising finding in light of the current debate about people’s fear of being replaced by AI. But from the point of view that people may be more lenient toward having rationally acting algorithms make decisions on their behalf, it may not be that surprising. The researchers found that the reason behind this result was that humans experienced being replaced by another human as more harmful to their self-interest. More precisely, they considered being replaced by another human to be more threatening to their public image (how they are perceived by others) and self-esteem.
In fact, if you are replaced by another human, people may quickly reason that this other person is better than you. This is a judgment people obviously do not like, because it implies that others will have a negative view of you and your abilities (that is, of your public image). If, on the other hand, you are replaced by an algorithm, you are being replaced by a non-human, and this event is experienced as less threatening to your public image. A human and a non-human are, after all, completely different species and therefore cannot be compared along the same dimension.
Both the Uber example and the findings in Nature Human Behaviour illustrate circumstances in which people prefer an algorithm to be in charge, as opposed to a human. These observations emphasize that humans are more inclined to rely on the actions and advice offered by algorithms precisely because they are non-human and, as such, remove the emotional and biased side of human decision makers. It may well be that humans are getting ready to see the benefits of letting a rationally acting, non-human machine fill positions that require more optimal and responsible decision making. Leadership positions, for example.
If our leaders do not let their intentions and biased judgments play a role in the decisions they take, we, as humans, will take those decisions less personally and will be more assured that they are fair. From that point of view, we see algorithms gaining ground quickly in areas where they can advise customers and act as experts to higher-level managers in organizations. For example, recent surveys of customer support and service operations report up to 85% willingness to use virtual assistants by 2020, leading to the expectation that we will soon witness a significant increase in companies’ use of chatbots.62,63
Who qualifies?
What to make of all of this? Well, if we take into account that businesses today operate in volatile and complex environments, and therefore require fast and optimal decisions; the recent scientific evidence; and the optimistic vision among business people that automation will reach all levels of authority, then we can only conclude that leadership by algorithm is the best way forward.
It will happen, and apparently for good reason. In fact, online work environments are becoming increasingly popular, and algorithms are already widely applied to monitor, co-ordinate and evaluate employee performance in these settings.64 What is more, businesses today admit that they are increasingly relying on algorithms to co-ordinate work relationships. Delegating the power of authority to algorithms definitely does not seem like science fiction anymore. But is this really the end of the story? Is the final conclusion that leadership by algorithm will happen, and that it will be for the better?
In my view, it is not! Yes, it would be the end of the story if we define leadership in a narrow way. So, what do I mean by narrow? To answer this question, let us first look at how we approach the whole issue of replacing humans with algorithms. Many people, and especially the popular press, look only at the skills required to execute the job. As we all know, however, doing a job well entails more than being able to tick the boxes on a list of skills. Of course, skills are important and are recognized as relatively good predictors of how well employees will perform. But that is not the only thing we need to become influential and effective at work. Another important requirement for effective job execution is that meaning is given to the job. For people to stay motivated in their job, it is crucial that their function is understood within the broader organizational setting.
Jobs are part of a broader social context in any organization. And it is because of this broader social context that employees are also required to possess the social skills to talk, negotiate, lobby and collaborate with others. Unfortunately, it is also this element of giving meaning to the job in a broader work environment that is hardly ever a focus in the discussion of whether or not jobs should be automated. I argue that we are facing the same problem when we are talking about whether algorithms should and can move into leadership roles.
In today’s discussions, a trend has emerged in which leadership is looked upon only as a set of required skills. If all the boxes are ticked, a person should be ready to assume a leadership role. The consequence of looking at the possible automation of leadership in this way is that organizations become too narrow in their thinking about what it takes for automated systems to run the organization. Specifically, this narrow way of defining leadership means that organizations will make the simple calculation that if any actor (human or non-human) delivers the skills needed to make decisions in fast and data-driven ways, then that actor is fit to lead. And, looking at the leadership challenge ahead of us, algorithms can then be considered very worthy candidates for the job.
Will others follow?
But, let’s face it, this is not how leadership works! One key aspect of effective leadership is the influence leaders exert to motivate, inspire and direct others. Leaders are needed to drive change, and for that they need to be able to influence others. A leader can only bring about this kind of change, however, if those others are willing to accept and support the decisions the leader takes.
You know from your own work experience that you expect your leaders to be able to explain why change is needed. We all want our leaders to be able to show what change will look like and what kind of value it can create for us. It is those abilities that motivate others to follow and, as such, make change happen. Leaders are then considered influential. However, if no one is willing to buy in to the ideas of change communicated by the leader, then nothing will happen. Under those circumstances, we say that leaders are not influential and unable to change anything!
Why will nothing happen if people do not accept the ideas of a leader? Isn’t it the case that someone in a leadership position has the power to make things happen anyway, even without support? The simple truth is that, as a leader, you need others to get things going.65 If no one follows, there will be no one to help make change happen. Think about it: how many successful leaders do everything themselves? They don’t, because they rely on others to make their vision materialize. So, the key to being an effective leader is to be influential and to drive change through those who follow. This process is potentially more important today than ever before.
In today’s business environment, leaders have to deal with the challenge of digital disruption. To do so, they are required to bring a vision and identify the possible directions the organization can take to make this vision happen. To deal with digital disruption successfully, simply communicating what needs to be done is not enough – although many business leaders today get stuck in this phase. The job of being a leader does not stop at providing direction. People need to be willing to follow the strategic direction of their leader. So, even if a leader possesses all the technical and analytical skills imaginable, they cannot do everything alone. Leaders need others in the organization (and preferably as many as possible) to take up the challenge of change. Transformation can thus only happen if it is both supported and enacted by all within an organization.
The black-box problem
What does this kind of thinking about leadership teach us? It makes clear that if we want to know whether leadership by algorithm is a reality waiting to happen, we need to assess whether this technology is also able to create a culture where employees are willing to be influenced by a leader. So, if an algorithm wants to move into a leadership role, it will need to have the skills to play the influence game as well. And this is where it becomes interesting. If we look at the issue of leadership by algorithm from this point of view, we suddenly see a completely different story emerge. Now, algorithms may not seem to be the best leadership choice after all. Indeed, are algorithms able to touch the heart of human followers and mobilize them to make a vision happen?
To understand whether this is the case, we need to know whether algorithms have the ability to influence humans into adopting an open and trusting attitude towards automated leadership. And there may be a problem with that. For example, data provided by Davenport and Bean reveal that 41.5% of US consumers do not trust AI with their financials, and only 4% trust AI in the employee hiring process.66
Giving authority to algorithms is actually seen by many as a frightening prospect. In 2015, for example, the late Cambridge scholar Stephen Hawking and some 3,000 other researchers signed an open letter calling for a ban on autonomous weapons. They felt that algorithms should not be given powers in ways that could threaten our well-being and, indeed, humanity in its entirety. In fact, most surveys reveal that, at this moment in time, humans feel concerned, suspicious and uncomfortable when dealing with algorithms that make decisions on their behalf.
So, if people are not too optimistic about algorithms being able to motivate humans to be open to their directives, what does science say? An important finding emerging from a large number of studies concerns the general trend that people perceive the functioning of autonomous algorithms as something of a black box. It is this black-box perception that leads humans to be suspicious of autonomous algorithms in our work setting.
This suspicion arises primarily because the information generated by algorithms lacks transparency (how was it generated?) and is difficult to explain – even the engineers who designed an algorithm can struggle to understand its processes.67,68 So it should be no surprise that human employees hesitate to delegate tasks and execution powers to algorithms: they do not understand what it is that these advanced algorithms do. Such perceptions and associated feelings then create fertile ground for humans to distrust the employment of algorithms in work settings.69,70
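To make the black-box point concrete, here is a minimal, purely illustrative sketch. The feature names, weights and the tiny network itself are all invented for this example; real systems involve millions of learned parameters, which only deepens the problem.

```python
import numpy as np

# A toy, already-"trained" network scoring a job candidate on two inputs.
# All weights are invented for illustration; in a real system they would
# be learned from data, and there could be millions of them.
W1 = np.array([[0.8, -1.2, 0.3],
               [0.5,  0.9, -0.7]])   # input layer -> hidden layer
W2 = np.array([0.6, -0.4, 1.1])      # hidden layer -> output score

def score(candidate):
    hidden = np.tanh(candidate @ W1)  # non-linear mixing of the inputs
    return float(hidden @ W2)         # a single, opaque number

# Hypothetical inputs: [years_of_experience, test_score], both rescaled.
print(score(np.array([0.7, 0.2])))
# The score is exact and reproducible, but why *this* score? Reading W1
# and W2 gives a manager nothing to cite as a reason: the "explanation"
# is smeared across every weight at once.
```

Even in this ten-line toy, the decision logic is not located anywhere a human can point to; scaling it up only intensifies the opacity that the studies above describe.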
With such a state of distrust, we see a situation emerging that makes algorithms obviously less suited to lead. Since algorithms are developed within the safe boundaries of an engineering lab, the reality seems to be that they are simply not equipped to deal with a volatile business environment where social pressures exist. One clear pressure, which introduces additional complexity in the decision-making process, is that employees expect explanations to be given for the decisions that automated systems make. And this social pressure will not decline over time – rather the opposite.
Take, for example, the European Union’s requirement (in force since 2018) that organizations be able to communicate transparently about how their algorithms work; meeting such a requirement is proving increasingly problematic.
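For readers curious about what such transparency efforts can look like in practice, here is a hedged sketch of one common post-hoc technique, permutation importance: shuffle one input at a time and measure how much the model’s accuracy drops. Everything in it (the data, the feature names, the stand-in model) is fabricated for illustration, not drawn from any real compliance tool.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 200 cases, 3 input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)   # outcome driven mostly by feature 0

def opaque_model(X):
    # Stand-in for a trained black-box model; here secretly the same rule.
    return (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)

baseline = np.mean(opaque_model(X) == y)        # accuracy before any shuffling
for j, name in enumerate(["experience", "test_score", "referrals"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # destroy feature j's signal
    drop = baseline - np.mean(opaque_model(Xp) == y)
    print(f"{name}: accuracy drops by {drop:.2f}")  # bigger drop = more important
```

Such rankings offer a partial answer to “how was it generated?”, but note what they do not provide: a reason an individual employee could contest. This is one reason the trust problem discussed next persists.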
Trusting algorithms
Does this sense of distrust towards algorithms develop over time, or does it pre-exist in the human mind? If it is the latter, then distrust towards algorithms cannot be avoided; it will be the default position we have to work around in an increasingly automated world. Research reveals findings consistent with the idea that humans are negatively biased towards the ability of algorithms to make predictions and decisions, which induces distrust. For example, scholars have identified biases whereby people show a strong preference for forecasts made by humans, weigh human input as more important than algorithmic input, and judge professionals who seek advice from an algorithm more harshly than those who seek it from another human.71,72,73,74 This tendency to avoid using advice from algorithms has been named algorithm aversion.75
This phenomenon of algorithm aversion can be explained by our earlier observation that humans have a deep sense of distrust towards the use of algorithms, especially in matters that are important to human interests. In my view, the reason this distrust is the default attitude towards algorithms is that the human touch is missing. In other words, even though algorithms can be very accurate and focused on identifying the most optimal decision available, it becomes a different issue when algorithms make decisions on our behalf. And this is exactly what leaders do: they make decisions on behalf of others. The challenge, then, is not that we doubt the rational and optimizing abilities of algorithms, but that we do not trust a non-human entity to make decisions that have implications for the interests of humans. Even though we feel that this new technology gives much, nevertheless “humanity is taken away.”76
As I see it, we have two important pieces of knowledge. Today, we know that algorithms are already champions at doing one thing very well. They may even outperform any human in that specific task or area.77 However, at the same time, we also know that leading an organization represents a more complex reality; a reality where any decision taken has implications for different human stakeholders. From that point of view, we, as humans, need decision makers to be closely aligned with the notion of humanity.
These two pieces of knowledge bring forward two contradictory views of the prospects for leadership by algorithm. On the one hand, algorithms possess superior skills when it comes to dealing with complex data, and are therefore able to identify trends and possibilities that can optimize our future decisions and strategies. For those who consider leadership as a way to make the most validated and accurate choice for others to follow, this may be music to their ears.
Leadership, however, entails more than just being able to identify accurate judgments and optimize decision making. First of all, even though algorithms can identify new and original ways of looking at challenges, this knowledge still needs to be interpreted in light of the values and goals a leader wishes to achieve. In other words, the output of any algorithm still requires a form of interpretation before deciding what needs to be done – a human touch is certainly needed at this point. Second, leaders need to decide how to motivate people and allow them opportunities to serve their own interests while working for the collective. It seems clear that algorithms may fall short in providing content and guidance in these two areas.
So, where are we now in the discussion on whether leadership by algorithm is a possibility?
The idea that emerges is that the two camps clearly differ in their existential views and perspectives on what leaders need to do and what kind of meaning they should create. Such differences may be very difficult to resolve (if, indeed, they can be resolved at all). I would argue, however, that we need to look at the question of leadership by algorithm in a different way. We should focus on exploring the conditions that allow us to use algorithms in organizations in the most optimal way, while keeping the human component of leadership alive to its fullest extent.
As I will discuss in the remainder of this book, the guidance of our organizations is in human hands, but takes place at two different levels of authority. The first type of authority is what we call managerial influence: the influence of the person in the role of manager. The second type concerns leadership influence: the influence of the person taking up the role of a leader. Management and leadership differ in their primary function and in how they execute their responsibilities. And it is in this functional value that, in my view, the solution lies to understanding the extent to which algorithms can and will be part of the influence process of authorities.
53 Drucker, P. (1967). ‘The manager and the moron.’ McKinsey Quarterly, 4. mckinsey.com
54 Nonaka, I., & Takeuchi, H. (2011). ‘The wise leader.’ Harvard Business Review, May, 89(5), 58-67.
55 Bigman, Y. E., & Gray, K. (2018). ‘People are averse to machines making moral decisions.’ Cognition, 181, 21-34.
56 Gray, H.M., Gray, K., & Wegner, D.M. (2007). ‘Dimensions of mind perception.’ Science, 315(5812), 619.
57 Hogan, R., & Kaiser, R. B. (2005). ‘What we know about leadership.’ Review of General Psychology, 9, 169.
58 Finkelstein, S., Hambrick, D.C., & Cannella, A.A. (2009). ‘Strategic leadership: Theory and research on executives, top management teams, and boards.’ New York: Oxford University Press.
59 Messick, D. M., & Bazerman, M. (1996). ‘Ethical leadership and the psychology of decision making.’ Sloan Management Review, 37, 9-22.
60 Logg, J., Minson, J.A., & Moore, D.A. (2019). ‘Algorithm appreciation: People prefer algorithmic to human judgment.’ Organizational Behavior and Human Decision Processes, 151, 90-103.
61 Granulo, A., Fuchs, C., & Puntoni, S. (2019). ‘Psychological reactions to human versus robotic job replacement.’ Nature Human Behaviour, 3, 1062-1069.
62 Gartner (2018). ‘Gartner says 25% of customer service operations will use virtual customer assistants by 2020.’ Retrieved from: https://www.gartner.com/en/newsroom/press-releases/2018-02-19-gartner-says-25-percent-of-customer-service-operations-will-use-virtual-customer-assistants-by-2020
63 Capgemini Research Institute (2018). ‘Conversational commerce. Why consumers are embracing voice assistants in their lives.’ Retrieved from: https://www.capgemini.com/resources/conversational-commerce-dti-report/
64 Curchod, C., Patriotta, G., & Cohen, L. (in press). ‘Working for an algorithm: power asymmetries and agency in online work settings.’ Administrative Science Quarterly.
65 Shamir, B. (2007). ‘From passive recipients to active co-producers: followers’ role in the leadership process.’ In B. Shamir, R. Pillai, & M.C. Bligh (Eds.), Follower-centered perspectives on leadership: A tribute to the memory of James R. Meindl. Greenwich, CT: Information Age Publishing.
66 Davenport, T.H., & Bean, R. (2018). ‘Big Companies Are Embracing Analytics, But Most Still Don’t Have a Data-Driven Culture.’ Harvard Business Review, 15 February. Retrieved from: https://hbr.org/2018/02/big-companies-are-embracing-analytics-but-most-still-dont-have-a-data-driven-culture
67 Castelvecchi, D. (2016). ‘Can we open the black box of AI?’ Nature, 538, 20-23.
68 Zeng, Z., Miao, C., Leung, C. & Chin, J.J. (2018). ‘Building more explainable Artificial Intelligence with argumentation.’ Association for the Advancement of Artificial Intelligence, 8044-8045.
69 Frick, W. (2015). ‘Here’s why people trust human judgment over algorithms.’ Harvard Business Review, February 27. Retrieved from: https://hbr.org/2015/02/heres-why-people-trust-human-judgment-over-algorithms
70 Diab, D. L., Pui, S. Y., Yankelevich, M., & Highhouse, S. (2011). ‘Lay perceptions of selection decision aids in US and non‐US samples.’ International Journal of Selection and Assessment, 19(2), 209-216.
71 Eastwood, J., Snook, B., & Luther, K. (2012). ‘What people want from their professionals: Attitudes toward decision‐making strategies.’ Journal of Behavioral Decision Making, 25(5), 458-468.
72 Önkal, D., Goodwin, P., Thomson, M., Gönül, S., & Pollock, A. (2009). ‘The relative influence of advice from human experts and statistical methods on forecast adjustments.’ Journal of Behavioral Decision Making, 22(4), 390-409.
73 Promberger, M., & Baron, J. (2006). ‘Do patients trust computers?’ Journal of Behavioral Decision Making, 19(5), 455-468.
74 Shaffer, V.A., Probst, C.A., Merkle, E.C., Arkes, H.R. & Medow, M.A. (2013). ‘Why do patients derogate physicians who use a computer-based diagnostic support system?’ Medical Decision Making, 33(1), 108-118.
75 Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). ‘Algorithm aversion: People erroneously avoid algorithms after seeing them err.’ Journal of Experimental Psychology: General, 144(1), 114-126.
76 Dimitrov, A. (2018). ‘The digital age leadership: A transhumanistic perspective.’ Journal of Leadership Studies, 12(3), 79–81.
77 Grove, W. M., & Meehl, P. E. (1996). ‘Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical–statistical controversy.’ Psychology, Public Policy, and Law, 2(2), 293.