PART THREE
5
Why You Can’t Just Build a Castle
On a warm spring day in 2012 in Dhahran, Saudi Arabia, a computer technician on the information technology team of Saudi Aramco (officially the Saudi Arabian Oil Company) opened an email and clicked on a link. Nothing happened, and he went back to work. A few months later, during the Islamic holy month of Ramadan, when most of the company was away on holiday, a few people at the company began to notice that their computers were acting strangely. Files were disappearing and some machines shut down without any explanation. Within a few hours, thirty-five thousand of the company’s computers were partially destroyed. In a rush to mitigate damage, Saudi Aramco’s technicians began ripping cables out of the back of computers and servers at data centers across the world. Every office was physically unplugged to prevent the virus from spreading further. The financial toll of lost connectivity on the world’s seventh-highest-revenue company began to stack up quickly. Although oil production continued, domestic trucks seeking refills had to be turned away, and the company’s ability to supply 10 percent of the world’s oil was suddenly at risk. For weeks, the company resorted to analog communication—typewriters, snail mail, faxing—and eventually began giving away oil for free to keep it flowing within the region. In the end, it took Saudi Aramco five months to build out a new system that they felt confident could be opened to the world. During this time, they flew representatives around the world to buy every computer hard drive coming off production lines. The company paid higher prices to cut in line ahead of all other potential buyers, temporarily halting the supply of hard drives to the rest of the world and driving up prices.1
Saudi Aramco’s crisis was due to a computer virus named “Shamoon” that was ultimately traced to a fairly routine security breach that we will explain later in the chapter. Although not every security problem will have consequences as dire as putting 10 percent of the world’s oil at risk, our research and work with companies have taught us that a digital mindset must accept a broad lesson about security in the digital age: You will experience security problems. Full stop. Although there are ways to plan in order to reduce the likelihood that computer viruses will infiltrate your systems, that your data or the data of your users will be harvested by third parties, or that records of your transactions will be manipulated, sooner or later there will be a failure and you will have to deal with it. People with a digital mindset will understand this and be prepared to deal with the inevitable security crisis when it occurs. Two of the major questions about digital security you should be asking are: “When will a problem happen?” and “How can I be prepared to respond to it?”
When we ask people to describe how to keep valuables secure, they usually suggest something akin to building a castle. If you want your belongings—say, your gold and silver or most cherished documents—to stay secure in the castle, you make sure the castle walls are tall and well fortified. You need a moat and a drawbridge. You should have armed guards manning a strong portcullis over the main entrance. If you anticipate a security threat, you can deepen the moat or double the guards. In short, if you are like most people, you can easily envision beefing up the castle’s defenses and—most importantly—you assume there are a finite number of entrances and that securing them against most threats is just a matter of good planning.
However, the castle analogy is not useful or accurate when thinking about cybersecurity. For one, data are stored on digital networks that have far more ways in and out than any castle ever had.
Second, the castle analogy might be useful in an analog world where the castle builder owns the thing. But digital ecosystems don’t operate that way. You might have tilted up the castle walls by yourself, but you likely bought prefabricated panels from one vendor and placed them atop a foundation that you bought from another vendor. You are likely to employ some of the castle guards, but others you contract from a security vendor who trains them differently. And the water in your moat isn’t in your control. As it ebbs and flows, over time it changes the contours of the shore and, sometimes, begins to waterlog the soil on which your foundation rests. In other words, the digital castle is so complicated and the infrastructures on which platforms are built are changing so quickly that even if you could create the ideal defense plan for today, it would be useless tomorrow.
Having a digital mindset means retiring the castle analogy and learning about the dynamism and decentralization of digital ecosystems. It also means recognizing that for all of your best intentions to keep things secure, you have much less control over your own security measures than you think. To understand this process and to develop the right mindset to deal with it, we discuss three key areas of security concern and how you should approach each one in ways that will increase your ability to navigate data security in the uncertain digital world. They are:
· Embrace ecosystem interdependence
· Design for privacy
· Assure data integrity through blockchain
Although you don’t have to become a security expert, you do need to understand the different ways that data can become vulnerable to outside threats, so that you can make smart decisions that reduce the likelihood of breaches and respond quickly and effectively when they do happen. We will spend most of this chapter discussing the third area of security concern—data integrity and blockchain—because this is one of the areas in which you have the most control to make proactive changes, and it’s also a digital technology about which many people are confused.
Embrace Ecosystem Interdependence
As we discussed in chapter 2, most of the technologies in your stack and all those in the broader ecosystem access your data. It’s like your castle has tens or hundreds of entrances that all need to be manned and monitored in different ways. What’s more, because the technologies are always changing (and also changing in relation to other technologies they work with), new doors can accidentally open where you didn’t expect them, and often they open without anyone noticing. And all of those problems don’t even account for the fact that roughly 20 percent of all security vulnerabilities are accidentally created by a company’s very own employees who change something in the software configuration or in the stack without fully understanding the repercussions of what they’re doing.
Appreciating security requires you to retire another common misconception: digital systems aren’t just pieces of hardware or products that are final once they’re in use. We’ve found that most nontechnical people think of digital systems as if they are like desks. Once a desk is built, you probably assume it will remain useful, without the need for change, for ten to fifteen years, no matter the room it’s in, how it’s used, or how much stuff you put on top of it. Digital technologies are different. Imagine if the material for one leg of the desk was changed every year. If the wooden leg became plastic, the fastener that attaches the leg to the slab would likely not work anymore. Also, the new plastic leg might be less sturdy, which would require installing a brace on the table to keep it steady. What if you wanted to keep your wooden leg to avoid all that bother? You might not own the leg and would therefore not have much choice. You could look for a new leg provider, but they probably wouldn’t build legs in your exact size, so you’d have to make other adaptations to the desk anyway. You get the idea!
All digital systems constantly evolve, often in ways that you cannot control.2 That’s why it’s important to think of a digital system as an ecosystem consisting of a technology stack that changes and is managed by you and your partners, who change, too. When different companies or businesses control different parts of the stack or different technologies upon which your stack is dependent, you will see the technologies provided by those companies change often—they have to in order to stay relevant and to keep up with the competition. That means that everything you’ve built on those technologies or built to connect to those technologies will need to be updated too.
To give you an example of how these interdependencies play out, let’s look at Twitter, which has had a remarkably strong record for securing its own data and the data of its users but nonetheless has run into a number of problems because of the complexity of its technology ecosystem.
When you take a photo, your smartphone tags that photo with geolocation data indicating where the picture was taken. For obvious security reasons, Twitter does not post these geolocation data with your pictures or share them with partners. But a bug in the company’s iOS app, accidentally introduced by programmers who were trying to fix another problem in the software, temporarily sent the precise location data of a Twitter photo to a third party.3 Once the bug was discovered, it was quickly fixed. A second vulnerability occurred when engineers who work on the Android mobile platform were updating code to adjust for the leap year; the documentation for how to do so wasn’t correctly followed, and the updated code was misconfigured. Twitter’s own internal security measures dictated that if an improper date and time were used in a server call, it should be rejected for fear that the system was being hacked. So, when the Android update went live, Twitter rejected anyone on Android who tried to connect. For sixteen hours, more than two-thirds of the Twitter user base on Android was logged out and barred from logging back in.
It would be easy to conclude that Twitter simply didn’t fortify its grounds well enough, but consider just some of the players in the ecosystems that contributed to these security problems: Twitter’s own technology and developers; iOS, controlled by Apple; Android, controlled by Google; the people who wrote the documentation for the leap year change; the companies who make the photo apps; and the engineers who developed the GPS technology used to tag photos. And on and on.
The reality is that digital companies like Twitter can never secure all their entrances because those entrances are constantly moving. Again, to be digitally minded is to accept that the data you work hard to secure are inevitably going to become vulnerable at some point. It’s not a question of if, but when. When you come to terms with this fact, you have to begin to treat security as a risk assessment problem. How much risk are you willing to take on if you rush the development of your product? How much risk are you willing to take on if you buy a component (hardware or software) from a vendor instead of building it yourselves? How much risk are you willing to take on if you don’t prioritize updating certain components of your software in favor of building out new functionality?
In addition to making calculated decisions at these choice points, it is important to conduct periodic risk audits to determine where vulnerabilities are or might arise. Making calculated risks is half of the equation. The other half is being prepared to respond to security breaches once they do occur. Developing a plan ahead of time for how resources will be allocated to fix problems, how security breaches will be communicated to customers and stakeholders, and how to be transparent about the problems are all best practices we’ve seen in our work with successful digital companies to assure that when security problems do eventually arise, you can deal with them responsibly. Which brings us to technical debt.
Budget for Technical Debt
People with a digital mindset understand that you need to constantly update and integrate software with changes made in other parts of the ecosystem. The maintenance of these components in the ecosystem is what engineering managers often refer to as technical debt.4 Technical debt is a phrase originally coined by software developer Ward Cunningham, who, in addition to being one of seventeen authors of “The Agile Manifesto,” is also credited with inventing the wiki.5 He first used the term to explain to nontechnical stakeholders at his company why they needed to budget resources for reworking their older technology investments.
That kind of budgeting is not very exciting for most companies, which would rather invest the money in building new features or products than continually upgrade or future-proof the current offering. It’s like home renovations. We’d much rather install new countertops and appliances than spend our money on updating the plumbing or electrical. But if we keep spending our money on the fun stuff and don’t invest in the maintenance, eventually pipes will break and circuits will short, and we’ll have to take on debt to fix the infrastructure emergency. Technical debt is the plumbing and electrical. Good product and engineering managers have long budgeted time and money to update core components of the product on a regular basis so that they continue to work well and the company doesn’t one day find itself encumbered by so much technical debt that it has to stop building out the product to fix things that have been neglected for too long.
To stay on top of technical debt, AppFolio, a successful SaaS company that builds property management software, regularly holds “demolition derbies” where product engineers pitch projects for updating (usually proactively) portions of the stack that they control and fixing known problems in their core products. These demolition derbies, which can sometimes last up to two weeks, are a tool to stay on top of updates that are key to a healthy system. But as VP of engineering Eric Hawkins observes, “Nearly every time we do a demolition derby, several of the teams say they don’t need to participate because they’ve kept on top of their technical debt. They’ve just been taking care of it in the normal flow of work. That’s music to my ears. We make sure to budget time to constantly stay ahead of changes we know we’re going to need to make even if it sometimes means slowing down the pace of new software development. If you don’t pay your technical debt you’re going to have major problems down the road.”6
Additionally, you have to constantly think about how you are going to stay on top of changes in the systems provided by others on which you’re dependent. If you need an example of how technical debt can quickly accrue, look no further than Apple’s App Store. Each year thousands of apps introduced in the prior few years fail and vanish from the platform. That’s because you can’t just build an app and expect it to live forever. As the platform provider, Apple constantly changes its requirements for application performance. The mobile operating systems on which the apps run change, too. So does the hardware in the phones or computers on which those operating systems run. All of this means that app developers have to constantly refactor (a term software developers use when they refer to rewriting parts of the code in an application) and redesign their systems in order to keep the app working.7 Many app developers aren’t cognizant of the need for these changes or don’t budget to keep up with them all along. They eventually incur so much technical debt that they have to abandon their app like a house condemned because the pipes burst and the electrical isn’t up to code.
The speed with which the Shamoon computer virus was able to wreak so much havoc on Saudi Aramco was partly attributable to issues of unresolved technical debt. The virus’s creators found and exploited weaknesses in code controlling the interoperability between computers—they created their own entrances into the castle Saudi Aramco thought was so heavily fortified. Since that attack, the virus’s creators have become more sophisticated. In 2017 the same virus was responsible for shutting down several petrochemical plants in Saudi Arabia. The virus exploited vulnerabilities that arose because the systems operating between multiple companies had not been simultaneously updated—one company had made changes that affected the systems of another company, leaving unnoticed gaps in security that the hackers infiltrated. The hackers attempted to shut down controllers across the various plants that regulate tasks like voltage, pressure, and temperatures. Security experts believe that these attacks could have resulted in massive explosions.8
The higher up the problem is in the technology stack, the easier it is to understand and appreciate the consequences of technical debt on product or business performance. But for changes that occur lower in the stack, it’s often more difficult to conceptualize because the changes normally don’t have an immediate effect. However counterintuitive or unglamorous it may seem, it’s extremely important to understand that in a digital world you often have to make investments to set your product or platform up for future success by making changes to products that today work just fine. The time between today and that future for most digital businesses is constantly shrinking. Leaders of digital companies need to be proactive in allowing budget and time to make updates in the technology stack that will allow them to get ahead of ecosystem changes. In the world of digital business, if you wait until you need to make a change in your technology infrastructure, you’ve waited too long.9
Design for Privacy
In 2013, Cambridge Analytica, a British political consulting firm, announced the release of an app called “This is Your Digital Life,” which was developed in conjunction with data scientist Aleksandr Kogan and his company Global Science Research.10 The app presented users with a series of questions about their behaviors and digital technology use. Cambridge Analytica used those data to build psychological profiles that could be used, among other things, to predict voting behavior. To get access to the data they needed to construct these profiles, the company teamed up with Facebook, which required them to follow an informed consent process for research. The agreement stipulated that Facebook users would complete a survey in return for payment and that their data would be used for academic purposes. In reality, the app did much more than simply collect surveys from informed users; it also collected personal information from other people in the survey respondent’s network (their group of Facebook friends), without the knowledge of the respondents or their friends. Cambridge Analytica then used these data to construct a massive database of psychological profiles that were used to create targeted political ads. When news of the data leak went public, Facebook shares fell more than 24 percent within ten days, about a $134 billion loss. In 2019, the US Federal Trade Commission announced that Facebook would be fined $5 billion for its privacy violations.11 The UK’s Information Commissioner’s Office also found that Facebook had exposed users to a “serious risk of harm” and levied a substantial fine.
Privacy is a security concern. Data are valuable to a company for many reasons, but above all because they often contain confidential or proprietary information. Sometimes that information was generated or assembled by the company; other times it was given by customers in exchange for services. Either way, a digital business must secure those proprietary data not only to protect its customers, but to protect its own competitive assets. Our own individual data are also becoming increasingly valuable and important. Technology companies like Google, Facebook, and LinkedIn make the lion’s share of their profits by collecting data about our behaviors, using AI to make predictions about what we are likely to want to buy, and selling those predictions to third parties who try to sell us those things through targeted advertisements.12 Thus, making sure that the data we deem valuable remain secure is of the utmost importance.
Data privacy is linked to one of the most profound shifts enabled by digitization—the intensification of behavioral visibility. Each activity you conduct through digital technologies leaves traces that make your behavior visible to people you never intended to see it. In the digital age, visibility is tied to the amount of effort people must expend to locate information. As former director of Xerox PARC John Seely Brown has argued, if people perceive that information is difficult to access, or they do not know what information exists for them to access, they will likely not seek it out. In this regard, information about people’s work behaviors, tasks, knowledge, or whatever else, though it may be theoretically available for people to uncover, may be, for all intents and purposes, invisible.13 Yet digitization reduces the effort required to make one’s own behaviors visible, or to see the behaviors of others. Although digital technologies by no means cause behavioral visibility, the increasingly intense amounts of data about people’s behaviors means that we no longer live in a world where people are invisible and need to work to make themselves visible. Rather, we are always—through traces of us left in data—already visible to others. How visible and in what ways we are visible are the important questions about privacy and consent.
Facebook made behavior visible to Cambridge Analytica even though users had been expressly told that their data would remain the property of Facebook. Privacy is a tricky concept in the digital world. No user of Facebook or any social tool can expect that their online behaviors will only be seen by people in their friend network. All it takes is a simple click by one of those friends, and a post, a reaction, a photo, or a meme that you send can be propagated to an entirely new network made up of people you may not know.14 You probably understand that this risk exists on sites like Facebook. But what you may not count on is that companies like Facebook might harvest your data and transfer it to other companies.
One major way that companies make our behaviors visible is through what’s called algorithmic ordering.15 At the simplest level, algorithms sort, rank, recommend, and categorize information so that it is more easily understood and useful. But algorithms also produce visibility. Microsoft researcher Tarleton Gillespie argues that algorithms do not make all behavior visible, nor do they make different kinds of behavior visible in the same way.16 For one, not all data are technically available to be inputs to algorithmic processing. Also, an algorithm’s human programmers decide what is likely to be relevant. As a result, the algorithms position some behaviors as visible while others are not. Algorithms do not treat everyone equally; behaviors made visible through algorithmic ordering are not uniform for all viewers. Search engines like Google regularly present different rankings to different selected classes of users to determine what kinds of content users will respond to most favorably (via clicks) and then, through comparative assessment, make further determinations about how to modify the core algorithm.
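To make algorithmic ordering concrete, here is a minimal sketch in Python with entirely made-up results and scoring weights: the same set of items is ranked differently for different classes of users, much as a search engine might test which ordering draws the most clicks.

```python
# A toy illustration of algorithmic ordering; the data and weights are invented.
results = [
    {"title": "Article A", "relevance": 0.9, "recency": 0.2},
    {"title": "Article B", "relevance": 0.7, "recency": 0.9},
    {"title": "Article C", "relevance": 0.5, "recency": 0.5},
]

def rank(results: list, user_class: str) -> list:
    """Class 'B' users see recency weighted in; class 'A' users see relevance alone."""
    def score(r):
        return r["relevance"] + (0.5 * r["recency"] if user_class == "B" else 0.0)
    return sorted(results, key=score, reverse=True)

print([r["title"] for r in rank(results, "A")])  # ['Article A', 'Article B', 'Article C']
print([r["title"] for r in rank(results, "B")])  # ['Article B', 'Article A', 'Article C']
```

Which items float to the top, and for whom, is a design decision made by the algorithm’s programmers, not a neutral fact of the data.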
In practical terms, algorithmic ordering impacts your privacy and consent for how your individual data will be used. For example, if you actively provide information about yourself in hiring forms, surveys, and the like, you probably know the company has and can use the information. But when you use digital tools that collect seemingly innocuous records of what you do online to create a behavioral profile, you often don’t know that your data are being collected and made visible—even if you agreed to it when you signed an employment agreement or a technology use consent form.
In our extensive work with companies who collect and analyze digital data, we’ve distilled some best practices about how to appropriately respect employees’ or customers’ rights to privacy when they are using digital tools. Understanding when and how your private data can and cannot be harvested is part of developing a digital mindset. If you are a manager or involved in setting privacy protocols, the best practices below function as general guidance. If you are an employee who is concerned that your company is “spying on you” or encroaching into territory that should be private, these best practices can serve as a reality check on what your company may or may not know about your digital behaviors.
Follow best practices for privacy
First, transparency is a must. If your company amasses digital data, employees and customers should be asked to sign an agreement indicating they understand that their patterns of interactions on company-owned tools will be tracked for the purposes of analyzing their behavior. Full disclosure and employee consent are the only option. We’ve also uncovered some additional moves to get ahead of privacy concerns:
· Employees should have access to whatever digital data are collected about them. We recommend providing those data at least annually. They can include a map of the person’s own network, a summary of their transactions, or their key behaviors. For example, a report could give an employee a score showing how much time they spend using particular features of a software application compared to others.
· Provide clarity about the level of data collection. The level that is most basic—and the least prone to privacy concerns—is generic pattern analysis. The analysis might show, for example, that someone is an outlier in their online social network, but not identify specifically why that person isn’t better connected. Or the analysis could show that a certain percentage of teams within an organization are the most innovative, but not identify which teams. The second level identifies which specific people have certain kinds of preferences. Scores may provide predictions about whether a person’s behavior is likely to make them an influencer or whether their departure from a company would make the organization vulnerable. Although this level of analysis provides more value, it singles out individuals. The highest level pairs digital data collection with machine learning. In this scenario, data are collected about whom employees interact with and about the topics they discuss. A company might examine the contents of emails and posts on social networking sites to identify who has expertise in what domains. This information provides the most specific guidance for leaders—for example, about who is likely to develop good ideas in certain areas. This most advanced level obviously also comes with the most privacy concerns, and senior leadership must develop deeply considered strategies to deal with them.
If you work for a company building digital technologies, privacy should be about much more than simply having good policies. Having a strong digital mindset means incorporating concerns for privacy into everything you do, even how you design your digital tools. One popular approach to making sure user privacy is protected is a framework called “Privacy by Design” (PbD), developed by Dr. Ann Cavoukian, Ontario’s three-term Information and Privacy Commissioner. PbD is a framework for “protecting privacy proactively by embedding it often and early in technology” instead of treating it as an afterthought that simply responds to regulatory demands.17 PbD is based on seven foundational principles:
1. Be proactive, not reactive—preventative, not remedial.
2. Lead with privacy as the default setting.
3. Embed privacy into design.
4. Retain full functionality: “Both privacy and security are important, and no unnecessary trade-offs need to be made to achieve both.”
5. Ensure end-to-end security: “Data life cycle security means that all data should be securely retained as needed and destroyed when no longer needed.”
6. Maintain visibility and transparency.
7. Respect user privacy—keep it user-centric.18
The PbD principles are not only the right thing to do for individual privacy, but the research suggests that companies that follow a PbD process when building new digital technologies have fewer material data breaches and happier users. Here’s how Deirdre Mulligan of UC Berkeley and Jennifer King of Stanford’s Center for Internet and Society define the inherent privacy issues with which our data-driven, digital era must reckon:
Understanding privacy as a human process requires companies to solicit and understand the context-dependent privacy expectations of affected individuals. This requires a conceptual and empirical inquiry into privacy’s meaning. This form of PbD begins with value-centered and human-centered processes. It requires a new set of privacy experts. Ensuring that a company accurately describes its privacy-related activities in its terms of service and provides appropriate mechanisms to capture consumer acceptance of them is a task for lawyers. Understanding the values at play and privacy requirements in a given context requires a separate set of skills. It requires research to understand and document what individuals bring to the table—their naivete, their uninformed and ill-conceived notions of how technology works, their mental models based in prior brick and mortar interactions, and their cognitive biases, to name a few. It demands attentiveness to context and human experience, the very attributes that companies, through privacy notices, attempt to disavow and make irrelevant.19
A good example of using PbD for consent management comes from HERE Technologies, a Netherlands-based company that uses AI to provide companies with mapping and location data generated by GPS systems in people’s vehicles. HERE Technologies’ director of data science, Dr. Michael Kopp, says that just four location data points are enough to reveal a person’s identity in 80 percent of cases.20 Given the highly sensitive nature of such data, HERE has taken a proactive approach by building a platform that offers a neutral server (a remote and secure server where data can be accessed securely without allowing third parties access) for people to exchange location-related data across industries that can be used for machine learning models. The fact that the server is neutral hinges on a consent management system—which allows users to determine exactly what data to share and how. Users can thus use others’ location-based data without it ever revealing personal information on either side of the transaction.
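As a concrete illustration, here is a minimal sketch in Python of the kind of consent-gated access a neutral server implies: data are released only according to what the owner agreed to share, and with whom. The field names, parties, and logic are hypothetical; HERE’s actual platform is far more sophisticated and not public.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    owner: str
    allowed_fields: set = field(default_factory=set)   # what the owner agreed to share
    allowed_parties: set = field(default_factory=set)  # whom they agreed to share it with

def fetch(data: dict, consent: ConsentRecord, requester: str) -> dict:
    """Release only the fields the owner consented to share with this requester."""
    if requester not in consent.allowed_parties:
        return {}
    return {k: v for k, v in data.items() if k in consent.allowed_fields}

# Hypothetical vehicle data point and consent record.
trip = {"lat": 52.37, "lon": 4.90, "speed_kph": 42, "vehicle_id": "V-123"}
consent = ConsentRecord("driver-1", {"lat", "lon"}, {"traffic-model"})
print(fetch(trip, consent, "traffic-model"))  # {'lat': 52.37, 'lon': 4.9}
print(fetch(trip, consent, "advertiser"))     # {}: no consent, no data
```

The design choice worth noticing is that consent is enforced in code at the point of access, not promised in a policy document after the fact.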
What’s exciting about PbD and HERE’s approach is the proactivity. The conventional wisdom is that technology is and must be inherently invasive and therefore its invasiveness is curbed by regulation. Not in this case.
Incorporating PbD requires more than hiring a chief privacy officer or other compliance personnel, and far more than simply ensuring that the engineering staff have heard of the word privacy. Privacy by Design requires that you, your colleagues, and your employees have the mindset to understand users’ privacy concerns and expectations, and that someone has the authority to advocate for users even when their needs conflict with the company’s goals. To develop a digital mindset, it is essential to remember that privacy has to be at the core of everything that you do. It can’t be an afterthought; it must be architected into digital systems and developed into robust policies about how behavioral data will be collected, stored, analyzed, and used.
Assure Data Integrity through Blockchain
The global diamond market is valued at more than $90 billion a year. Experts estimate that each year up to 15 percent of that value—$13.5 billion—is lost to fraud. It’s not difficult to understand how. Between fifteen and twenty intermediaries touch a diamond between the time it is extracted from the earth and when it is purchased by a consumer. With so many handoffs and transactions, it’s easy for diamonds to disappear, get swapped, or be misclassified. Prior to the year 2000, about a quarter of all diamonds were traded illegally. Until the United Nations set up the Kimberley Process in 2003 to ensure that “diamond purchases were not financing violence by rebel movements and their allies seeking to undermine legitimate governments,” a diamond’s path from mine to retail store was not closely tracked.21 Although eighty-one countries have signed on to the Kimberley Process, it has not worked well because tracking is difficult: certification is done via paper ledgers that are shipped around the world. Although a good gemologist can discern a raw diamond’s provenance and quality, as diamonds move up the value chain and become more refined, it becomes difficult for even the most seasoned professionals to determine if the diamond mined from its place of origin is the same diamond that is being presented to consumers. As Wuyi Wang, director of research and development at the Gemological Institute of America, observes, “Despite the concern from the public and within the industry … there is no scientific or technical way to tell where diamonds came from once they are cut.”22
The diamond supply chain serves as an apt metaphor for the importance of data integrity. Without data integrity, too many things can go wrong when multiple people act on and manipulate data concerning an agreement, relationship, or asset without other people knowing what is being done. Data integrity is a global problem that affects a wide range of industries: luxury goods, clothing, food products, pharmaceuticals, and more. Proving or disproving the authenticity and quality of an asset can be a challenge because traditional supply chains are long, complex, and opaque.
This is where blockchain comes in as a potential solution to this age-old, stubborn problem.
A company called Everledger works with the major diamond certification houses in the United States, Israel, India, and Belgium that grade and certify each diamond for the market.23 Everledger takes these data and creates a digital “DNA” record comprising the 4Cs (color, cut, clarity, and carat weight), fourteen other metadata reference points, and a unique identification code for each stone. With this information, Everledger knows who owns which diamond and where it is. It can even trace each diamond as it moves into retail stores and onto ecommerce platforms. In short, it keeps the data about a diamond secure so that diamond fraud becomes far more difficult. A digital mindset means that, in addition to security and privacy, you must also focus on data integrity. To help keep digital systems secure, we have to make sure that the data in them are not corrupted or changed to benefit certain parties and disadvantage others.
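To make the idea concrete, here is an illustrative sketch in Python of turning a diamond’s attributes into a tamper-evident digital fingerprint. The field names and identifier are hypothetical; Everledger’s actual record format is proprietary and far richer than this.

```python
import hashlib
import json

# Hypothetical diamond record: the 4Cs plus a placeholder certificate ID.
record = {
    "color": "D",
    "cut": "Excellent",
    "clarity": "VS1",
    "carat": 1.02,
    "certificate_id": "CERT-0000000000",  # placeholder identifier
}
# Hashing the record yields a fixed-length fingerprint; change any field,
# even slightly, and the fingerprint changes completely.
fingerprint = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
print(fingerprint)
```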
Although still in its infancy, the development of blockchain technologies represents an important step forward in assuring data integrity, so understanding how blockchain works is important to building a strong digital mindset. We’re going to discuss in very simple terms what blockchain is, and we will provide examples of how it can be used to enhance data integrity. Overall, blockchain is a very good security technology for verifying the integrity of data. Processes where maintaining data integrity is crucial (say, real estate transactions, medical credentialing, or supply-chain tracking) that once took weeks or months to verify can be completed in seconds when blockchain is involved.
What is blockchain technology, and what is it used for?
You probably have heard the term blockchain. Perhaps you thought it was something too complicated to understand or something that only concerns coders. Neither is true, and you should get to know the basics because blockchain is only going to gain in importance.
A blockchain is simply a ledger, albeit a super secure and non-repudiable one. Like a traditional ledger, blockchains record data regarding transactions: who owns what, who bought what from whom, who is allowed to make certain decisions, and so on. In an office, a ledger is kept private: once it was locked in a safe and used only by certain people; today it’s likely a highly secure software system. But this is where blockchain cleverly differs: the digital ledger is distributed. This means that blockchains function as a network of shared databases that any authorized party—whether inside or outside of an organization—can use to verify the particulars of a transaction and terms of engagement.
An easy way to understand this is to imagine a magic world in which you could make records in a ledger, and when you did, your records magically showed up in a thousand ledgers around the world—with no one party controlling all of them. Anyone can walk up to any of those ledgers and access it, provided they know the password. If they don’t have it, the ledger literally won’t open. If they do, they can not only look; they can record their own transactions, too, which will show up in all the other magic ledgers.
Finally, once you add a transaction to the ledger, it can’t be undone, and it never goes away. The ledger just keeps growing.
That’s how blockchain works. The real keys to it are the immutability of the transaction history and its decentralization.
Unlike a more traditional database where data are stored in tables, with blockchain—as the name suggests—pieces of data (blocks) are linked to one another to form a chain. The amount of digital data that any one user has on the blockchain, then, is tracked across a web of interconnected transactions, with each transaction (or exchange of currency) building upon the last one. This chain structure makes it so that changing any one block of data would require altering each linked block as well, a process that would take an infeasibly high amount of computing power. This means that once a transaction has taken place using blockchain technology, there is no going back, except through a new transaction. Such permanency also ensures data security, as it’s nearly impossible to tamper with this system.
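Here is a minimal sketch of that chain structure in Python, using nothing but the standard library. It is illustrative only, with invented transactions, and it omits the consensus, networking, and signatures a real blockchain adds; but it shows why tampering with one block breaks the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the hash of the block before it."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transaction: str) -> None:
    """Append a block whose hash depends on the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"transaction": transaction, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain: list) -> bool:
    """Recompute every hash; tampering with any block breaks the chain."""
    for i, block in enumerate(chain):
        body = {"transaction": block["transaction"], "prev_hash": block["prev_hash"]}
        prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != prev or block["hash"] != block_hash(body):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                          # True
chain[0]["transaction"] = "Alice pays Bob 500"  # try to rewrite history
print(is_valid(chain))                          # False: the chain exposes the tampering
```

Because each block’s hash is computed over the previous block’s hash, rewriting history means recomputing every subsequent block, which is exactly what makes the ledger effectively immutable at scale.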
Decentralization means no one party has control over a blockchain ledger. This differs from how we typically operate, where a third party like a bank has to verify that everything is in order before we can purchase goods from a vendor, for example. With blockchain, the technology design itself is the guarantor, so multiple parties can exchange payment for services or goods directly, without the need for third-party intermediaries. This is because transactions over blockchain take place between blockchain addresses, which are unique to each user. In other words, virtual currency is held in a kind of digital, personalized wallet rather than in a bank.
Moreover, everyone can see these addresses, but they can’t unlock them. This is part of a system known as public-key cryptography and is also what makes blockchain so appealing for data security. Because each user has a unique address that is shared across the blockchain, other users know that they’re transacting with the user they actually think they are. Each user also has their own unique private access code that allows them to access the contents of what’s called their personal digital wallet. In this way, users can be confident in knowing their payment is going to the intended party—without the need for a central power to tell them so—and that their digital wallet is secure from outside parties.
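Here is a sketch of those public-key mechanics in Python, using the third-party cryptography package (our choice of library; none is specified here). Deriving an address as a bare hash of the public key is a simplification of what real blockchains do.

```python
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # stays in the owner's wallet
public_key = private_key.public_key()                  # shared with everyone

# Derive a public "address" from the public key (simplified).
public_bytes = public_key.public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
)
address = hashlib.sha256(public_bytes).hexdigest()[:40]
print("address:", address)

# Sign a transaction with the private key; anyone holding the public key can
# verify the signature, so the network knows the owner of this address approved it.
transaction = b"send 2 coins from my address to yours"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))  # raises if forged
```

The point of the design is that anyone can check the signature against the public address, but only the holder of the private key could have produced it.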
Because transactions occur in this way—between unique accounts and without the need for verification from a central authority—they can occur rapidly. The information pertaining to these transactions is also updated across the blockchain and made available to each party instantly. This makes transactions not only more transparent, but also more secure, as it adds further blocks to the chain, thus increasing irreversibility. This combination of immediacy, transparency, and security makes blockchain particularly useful for large transactions that need to be done quickly and safely (like a large asset exchange), or for foreign exchange transactions.
Because it is particularly well suited for tracking complex information across systems and assuring that these important data are not tampered with, organizations have begun using blockchain technology to create streamlined, internal record-keeping systems, which can keep track of things from IT equipment to tax and voting records.24 MedRec, which is part of the MIT Media Lab, is seeking to use blockchain technology to create a medical records system to give patients a transparent and easy-to-understand view of their medical records across providers.25 Blockchain technology is also being used to track, trace, and record the movement of goods across various industries. IBM, for example, is using blockchain as part of their Food Trust initiative to track the origins of food shipments not only to increase the efficiency of supply chains and decrease waste, but also to isolate potential outbreaks of contamination.26 A number of industry giants have joined since the project’s inception in 2017, including Walmart, Carrefour, Tyson, and more.27
Blockchain also promises to facilitate more revolutionary applications that will displace well-established models and institutions. For example, new peer-to-peer marketplaces may emerge that would make centralized hubs like Amazon obsolete. Such applications might someday take the place of existing models, but they will require a high degree of coordination before they can do so.28 Cryptocurrency systems like Bitcoin, Ethereum, and Ripple, the aspect of blockchain technology that has perhaps received the most media hype, also fall into this category. Although cryptocurrencies could technically replace physical currencies, doing so would require a high degree of complex coordination, as financial institutions across the world would have to agree to adopt them before they could be of widespread value.
One of the most transformational applications at present is the “smart contract,” which can clearly delineate and enforce in real time the breakdown of things like royalty payments for digital content. Smart contracts are already finding use in the music industry, where this process had otherwise become extremely tedious, murky, and costly. In 2018, the British musician Imogen Heap, for example, launched the blockchain project Mycelia, which aims to create a new way for musicians to manage their projects and metadata. She also released the first song that, with each listen, automatically sends out payment to each creative party involved through a smart contract.29 By quickly and transparently divvying out payment at the appropriate times, these smart contracts eliminate the need for lengthy negotiations that eat up both time and money. Beyond royalty payments, smart contracts can also be used for the execution of wills, deeds, leases, or even parking tickets.30 This sort of application is one of the most transformational because it will fundamentally change how we devise and handle all sorts of contracts by reducing or eliminating the need for attorneys, brokers, or other third-party mediators. It is also, however, one of the more difficult to implement, as it requires us to rethink how we go about these deeply entrenched processes.31
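As a sketch of the logic such a contract encodes, here is a plain-Python simulation of a per-listen royalty split. The parties and percentages are hypothetical, and a real smart contract would run on a blockchain platform rather than as ordinary code.

```python
# Hypothetical revenue shares; a real contract's splits come from its terms.
ROYALTY_SPLITS = {
    "songwriter": 0.40,
    "performer": 0.35,
    "producer": 0.15,
    "label": 0.10,
}

def settle_listen(payment_cents: float) -> dict:
    """Divide one listen's payment among the parties, as the contract would do
    automatically the moment the listen is recorded."""
    assert abs(sum(ROYALTY_SPLITS.values()) - 1.0) < 1e-9  # shares must total 100%
    return {party: round(payment_cents * share, 4)
            for party, share in ROYALTY_SPLITS.items()}

print(settle_listen(0.5))
# {'songwriter': 0.2, 'performer': 0.175, 'producer': 0.075, 'label': 0.05}
```

On a blockchain, this split would execute and be recorded automatically with each listen, with no royalty accountant or collecting society in the middle.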
Elsewhere, some companies, particularly in financial services, have started to create “localized” applications, such as private chains that facilitate quick and secure transactions between frequent partners.32 Interstellar, for example, has partnered with organizations like Nasdaq, Visa, and Citigroup to develop enterprise-level blockchain applications that will make it easier to track assets.33 One such network is Stellar, which IBM has used to create Blockchain World Wire, a blockchain-based payment system aimed at facilitating the transfer of value across different currencies and financial assets.34 A system like this would eliminate the transfer time needed to settle and clear transactions like large asset exchanges, which banks like Santander estimate will save up to $20 billion annually.35
Preparing for a future that will rely on blockchain
Our colleagues Marco Iansiti and Karim Lakhani of Harvard Business School argue that blockchain could create as profound a shift in how we live and work as another foundational technology: the internet. In retrospect, from our current digital age, it is easy to see how revolutionary the internet was, but at the time of its inception, it was met with widespread skepticism. In the 1960s and 1970s, for example, many telecommunications companies doubted that data could be sent—let alone voice or video connections—across the early architecture that would one day underlie the internet. In fact, they were in the business of operating in a non- (or yet-to-be-) digital landscape, and they made quite a bit of money doing so.
This skepticism, paired with the time it took for the proper technology to be developed, had a direct effect on the value of the early internet. Just like with any network, much of the utility of the internet is rooted in the size of its user base—more users means there is more you can do.36 It took years for many of the internet’s greatest benefits to be realized, because the user base grew gradually. Nevertheless, even in the early stages, simple applications like email drove an increase in interest and use, which thereby paved the way for further innovative applications. In this analogy, Bitcoin is like email: an application that wasn’t particularly novel and that didn’t require a lot of infrastructure to coordinate, but that provided a better, more efficient way of doing something that was immediately valuable to those who used it. And, to continue the analogy, it is applications like Bitcoin that will gradually convince more organizations to become users, which will allow the full potential of blockchain to be realized.
As blockchain is already beginning to revolutionize systems and pave the way for new solutions, Lakhani and another coauthor, Saïd Business School’s Teppo Felin, urge companies to start thinking about how they can best make use of the technology rather than passively become late adopters.37 To do so, they suggest considering three aspects that can make a company’s use of blockchain its own: the company’s strategy, its capabilities, and the problems it can solve for its stakeholders. One simple way to start is to itemize the everyday problems you face and brainstorm how blockchain’s fundamental bent toward recording, tracking, verifying, and aggregating data can best be put to use. At present, this might be a simpler, more straightforward application, something like an internal record system that does not need a large number of users. Over time, expect more innovative and disruptive solutions to emerge, but for now, the key is to anticipate the coming change and begin thinking about how to integrate the technology into your regular workflow. If you are in a position to do so, you may want to change your business model and invest in developing the technology that will enable this future transition, before it’s too late.
Strive to Reduce Rather Than Eliminate Security Problems
Computer viruses, Facebook profiles, and the journey that diamonds make from mines to retail stores each represent a particular kind of security threat that is common in today’s digital environment. You cannot avoid security problems, but a person with a strong digital mindset knows to ask, “When will a problem happen?” and “How can I be prepared to respond to it?” Although you do not have to be a security expert, if you can embrace ecosystem interdependence, pay off technical debt, design for privacy, and understand the emerging uses of blockchain to assure data integrity, you are well on your way to reducing, if not eliminating, security problems large and small.
GETTING TO 30 PERCENT
Security
Developing a digital mindset means letting go of the idea that data security is like securing a castle with fixed access points; dynamic digital environments are inherently interdependent, complex, and bound to change. Accept that it’s not a matter of if your data become vulnerable, but when. Privacy is another security concern. Plan proactively with security and privacy top of mind:
· Budget for technical debt continually and deliberately by updating old infrastructure and future-proofing existing infrastructure.
· Each activity you conduct online leaves traces—digital exhaust—that make your behavior visible to people you never intended to see it. These clues can be pieced together to form behavioral profiles.
· You should be able to opt in or out of the collection of your personal data.
· Companies need to design for privacy as the default setting, embed privacy into all aspects of design, and be transparent about data collection.
· Blockchain refers to an array of distributed ledger technologies.
· Blockchain technologies are capable of recording data for complex transactions quickly and securely. Transactions performed using blockchain are permanent and irreversible, and they can be processed immediately without the need for a third party.
· Speed, security, and the ability to facilitate direct peer-to-peer transactions make blockchain ideal for cryptocurrencies, efficient record-keeping systems, the tracking of goods across supply chains, and more.
· Blockchain functions like any network—its value increases as more users adopt it—and promises to be a foundational technology that will revolutionize business and society.
Understanding when and how your private data can and cannot be harvested is part of developing a digital mindset. Remember also that digital systems are always evolving, which can bring about both intended positives and unexpected vulnerabilities.