The Why of Privacy: 4 Reasons Privacy Matters to People, and Why Companies Need to Know Them

While almost all companies collect and use the personal information of their customers, visitors and users, primarily online and through in-person interactions such as point-of-sale transactions, the privacy landscape is in a near-constant state of turbulence and flux. There is a steady flow of data breach reports affecting companies of almost every size and market segment. New data privacy laws, rules and regulations continue to be introduced and enacted around the world, such as the EU-US Privacy Shield program, the EU General Data Protection Regulation (GDPR), and Argentina’s draft Data Protection Bill, placing new legal obligations and restrictions on the collection and use of personal information. Challenges continue to be raised against laws perceived to overreach or conflict with privacy rights, such as the ongoing challenges to the Privacy Shield program and the EU’s Model Contract Clauses.

The one constant in this turbulent landscape is that consumers’ awareness of data privacy and security continues to grow. Given this, it is worth stepping back from the day-to-day privacy developments to consider a more fundamental question. It is axiomatic in the world of privacy that privacy matters to people, but why it matters is more complicated. There is no “one-size-fits-all” answer: privacy matters to different people in different ways, and there are many equally valid reasons why it is important to individuals.

Understanding the “why of privacy” is also critically important to businesses and other organizations. By now, most companies understand the importance of providing notice of their data collection practices and choice with respect to the use of collected information. A company that also understands the reasons privacy matters to the data subjects whose information it collects, processes and/or controls can design more effective privacy practices and policies attuned to those data subjects’ needs, for example by creating customer privacy profiles for use in product design and testing. This follows the “privacy by design” framework advocated by the Federal Trade Commission and helps build trust in the company’s commitment to data privacy and security, which is critical to the success of every company today and can provide a competitive advantage.

The reason why privacy matters differs from person to person. However, I believe these reasons can be grouped into four core categories: (1) privacy is a right, (2) privacy is an entitlement, (3) privacy is an expectation, and (4) privacy is a commodity. I’ll explore each of them in turn.

Privacy is a Right

Persons falling into this first category value privacy as an irrevocable right guaranteed to all. People living in countries with constitutional data privacy protections often fall into this category. For example, the European Union Charter of Fundamental Rights recognizes the right to data protection and the right to privacy as fundamental human rights. In some countries, the right has been implied through interpretation of constitutional and legal rights, such as the right to privacy found by the U.S. Supreme Court and the right to privacy recognized under the Canadian Charter of Rights and Freedoms, even though neither document specifically mentions privacy. In August 2017, a unanimous Supreme Court of India held that privacy is a fundamental right, an integral part of the Right to Life and Personal Liberty guaranteed under Article 21 of the Constitution of India. The 1948 United Nations Universal Declaration of Human Rights states that people have a fundamental human right not to “be subjected to arbitrary interference with their privacy, family, home or correspondence.”

  • People in this category are more likely to take a rigid view that privacy trumps all other interests, including business interests, and may be less willing to “trade” any of their privacy for other benefits such as increased security.
  • People in this category tend to expect that any consent to use personal information must be clear, unambiguous, express, and fully revocable, and that use of the information must be limited to the specific grant of rights or as otherwise expressly permitted by law. This creates a significant burden for businesses and other organizations collecting and using personal information.
  • Privacy as a right is an individual view – the rights of individuals to protect their personal information take precedence over almost all other parties’ rights to use or access that information.

Privacy is an Entitlement

Persons falling into this second category value privacy as something to which they are entitled under the laws, rules and regulations applicable to them. There are many laws, whether comprehensive data privacy laws such as Canada’s PIPEDA or sectoral laws such as the privacy laws enacted in the United States, whose prohibitions or restrictions on privacy practices may be viewed by individuals as creating privacy protections to which they are entitled. An example is the U.S. Children’s Online Privacy Protection Act, which among other things prohibits the collection of personal information from children under 13 without verifiable parental consent. Some parents view COPPA as creating an entitlement for their children to be left alone unless the parent consents to the collection of personal information from their children.

  • Similar to privacy as a right, people in this category are likely to view privacy as trumping other interests, including business interests, and may be less willing to give up privacy for other benefits.
  • They tend to expect that any consent given to use personal information must be fully compliant with legal requirements, and that use of the information must be specifically limited to those use rights expressly permitted by law, which creates a burden for businesses and other organizations collecting and using personal information.
  • As with privacy as a right, privacy as an entitlement is an individual view, in which an individual’s entitlement to privacy outweighs others’ interests in that person’s personal information.
  • A key differentiator between privacy as a right and privacy as an entitlement is that an entitlement can be revoked, e.g., through changes to the law, whereas a right is irrevocable. While some might argue that a judicially recognized right to privacy should be treated as an entitlement, I believe that recognition by a country’s supreme court that privacy is a right, which is unlikely to be overturned or legislatively reversed, should be considered a right.

Privacy is an Expectation

Persons falling into this third category value privacy as something they expect to receive, whether or not they have a right or entitlement to it. New technologies (such as drones and biometric identifiers) and practices (such as marketing strategies) tend to be ahead of laws specifically governing them, and people in this category expect to receive privacy protections regardless of whether existing laws or other rights cover the technology or practice. They may also expect societal norms with respect to privacy to be followed by businesses and other organizations, whether or not stricter than applicable legal requirements. There are also certain expectations of privacy that are generally recognized within a given society. For example, in the United States, many people have an expectation of privacy in their own home and other private areas such as a public bathroom stall. If a person or organization interferes with this expectation of privacy, there may be legal liability for invasion of privacy under state laws. There are other expectations of privacy on a per-situation basis, such as a private conversation between two individuals.

  • People in this category believe that third parties, such as companies and government entities, should recognize that their expectation of privacy trumps those third parties’ desire (or rights) to access and use their personal information, but they also understand that the expectation of privacy has limits. For example, a person generally has no expectation of privacy in a public place (e.g., a public sidewalk), and no right of privacy extends to a person’s garbage placed on the street for collection. In the United States, employees also generally have a reduced expectation of privacy in the workplace.
  • An expectation of privacy can be overridden by a third party’s superior interest. For example, if a court approves surveillance of someone suspected of engaging in illegal activity, that person’s expectation that his conversations are private is superseded by the government’s interest in preventing and prosecuting crime.
  • People in this category generally do not question or challenge the terms of a privacy policy or other agreement granting rights to collect or use their personal information. They do, however, expect that businesses and other organizations will not unreasonably collect or use their personal information, and will respect usage opt-out requests.
  • Privacy as an expectation is a middle-of-the-road view, in which the individual view of privacy as paramount is tempered with the understanding that in some cases the general or specific value of allowing a third party to receive and use their personal information outweighs the personal interest.

Privacy is a Commodity

Persons falling into this fourth category value privacy as a commodity they are willing to exchange for other benefits, goods or services. We live in an information economy in which data has been commoditized. For many companies, a core or important part of the product or service offering (i.e., part of its general value) or business strategy is the ability to monetize personal, aggregate, and/or anonymous data collected through its use. These companies argue that the value derived from data monetization is factored into the value and cost of the product or service. Other companies offer something of specific value, such as an extended product warranty, in exchange for personal information such as an email address or demographic information. Many people grant businesses rights to use their personal information simply by visiting a webpage, requesting information, or purchasing goods or services, thereby agreeing to be bound by the company’s privacy policy or terms of use/terms of sale. We also live in a world where many people are willing to sacrifice some privacy in exchange for increased security against terrorism and other physical and cyber threats. People falling into this category have a strong understanding of the trade-off between privacy and other benefits.

  • People in this category are more willing to give third parties the right to use their information as long as the thing they receive in return is valuable enough to them – they view their personal information as currency. If a company or organization offers something of value, they are very likely to agree to share personal information with that company or organization. These are the kind of people who don’t really care that they’re receiving targeted ads while surfing online.
  • Conversely, if they do not believe they are receiving value in return for their personal information, people in this category are more likely not to share their information.
  • Privacy as a commodity is a transactional view: an individual is willing to allow a third party to receive and use their personal information if the general or specific value of doing so outweighs their personal interest in keeping that information private.
  • It may require a greater transfer of value to convince someone viewing privacy as a right, entitlement or expectation to treat it as a commodity.


As a closing thought, these four reasons why privacy matters are not mutually exclusive; there are sub-categories of people for whom two or more of these reasons are important. For example, someone may view privacy as both an entitlement and a commodity. Such a person would expect to be able to exchange their personal information for something of value, but would insist that the exchange always be voluntary, rejecting any requirement to trade away their personal information. Businesses that take the time to understand the “why of privacy” will be better positioned to create sample customer profiles based on their customers’ privacy values, leading to more robust privacy practices, processes and policies and a potential competitive advantage in the marketplace.

Eric Lambert has spent most of his legal career working in-house as a proactive problem-solver and business partner. He specializes in transactional agreements, technology/software/e-commerce, privacy, marketing and practical risk management. Any opinions in this post are his own. This post does not constitute, nor should it be construed as, legal advice. He is a technophile and Internet evangelist/enthusiast. In his spare time Eric enjoys reading and implementing and integrating connected home technologies and dabbles in voice-over work.

EU-US Privacy Shield Update – Roadworthy (For Now), But All Roads May Be Dead Ends

Last July, the new EU-US Privacy Shield framework became effective. The Privacy Shield replaced the International Safe Harbor Privacy Principles (commonly known as “Safe Harbor”), which had been in place since 2000. Under the EU Data Protection Directive, companies can only transfer data outside of the EU to a country deemed to have an “adequate” level of data protection, and the US (which takes a sectoral approach to data privacy and has no comprehensive national data privacy law) is not one of those countries. Given the importance of EU-US data transfers in the global economy, the Safe Harbor principles were developed as an entity-level, rather than country-level, adequacy mechanism, allowing a US company to achieve a level of adequacy (in the eyes of the EU) that permitted EU-US data transfers with that company to take place. Safe Harbor served as an alternative to two other entity-level adequacy mechanisms: standard contract clauses (SCCs, also known as model contract clauses), which are separately required for each EU company transferring data to a US entity, making them difficult to scale, and binding corporate rules (BCRs), which require Board of Directors approval and significant time and resources and have only been implemented by very large multinational companies. (There is also an individual-level adequacy mechanism – direct consent.)

Everything changed in October 2015, when the European Court of Justice (ECJ) released its decision in a case brought against Facebook by Austrian citizen Max Schrems. The ECJ held that the Safe Harbor framework did not provide adequate privacy protections to EU individuals, and was therefore invalid. Among other reasons for invalidation, the ECJ found that broad US government powers to access data (including data of EU citizens) held by private US companies directly conflicted with the EU’s declaration of data protection as a fundamental human right. Given the importance of the Safe Harbor program in facilitating EU-US data transfers, its invalidation had a far-reaching impact. While the EU agreed to wait a few months before bringing any actions against companies in the Safe Harbor program which did not move to an alternative entity-level adequacy mechanism, US companies faced a difficult choice – switch to an alternative and more difficult/costly approach, such as standard contract clauses, or wait and see whether the EU and US could quickly agree on a Safe Harbor replacement before the EU’s enforcement deadline.

Fortunately, the European Commission and the US government quickly accelerated existing talks on resolving the shortcomings of the Safe Harbor principles, leading to the announcement of the Privacy Shield program in February 2016. The European Commission quickly issued a draft adequacy decision for the Privacy Shield program, and despite some misgivings from certain groups, the European Union gave its final approval on July 12, 2016. The Privacy Shield program is made up of 7 core principles and 15 supplemental principles. Like Safe Harbor before it, it is a self-certification program, and a number of principles are common to both frameworks. The Privacy Shield program seeks to address several perceived shortcomings of the Safe Harbor principles, including protection for onward transfer of information by US companies to third parties such as their service providers, multiple ways for individuals to make a complaint about a Privacy Shield-certified company, stronger enforcement mechanisms, and an annual review mechanism. Its intent is to be a replacement entity-level mechanism which addresses the concerns around Safe Harbor cited by the ECJ in the Schrems decision, complies with EU laws, and respects EU citizens’ fundamental rights to privacy and data protection.

Challenges and Headwinds

Since the Privacy Shield program went live in July, over a thousand companies (1,234 as of December 10, 2016, according to the Privacy Shield List) have self-certified under the program. However, the Privacy Shield program, and EU-US data transfers in general, continue to face challenges and headwinds.

  • Legal challenges – déjà vu all over again? After the Privacy Shield program was announced in February 2016, some groups and individuals expressed concerns about it. When Privacy Shield was approved in July 2016, Max Schrems went on record stating his belief that the framework was fundamentally flawed and could not survive a legal challenge. Now that the first legal challenges against Privacy Shield have been filed, we will find out how prescient Mr. Schrems’ comments were. In September, the digital rights advocacy group Digital Rights Ireland filed an action in the EU courts arguing that the EU’s finding of adequacy for the Privacy Shield should be annulled because the program’s privacy safeguards are not adequate. In November, a similar challenge was brought by La Quadrature du Net, a French privacy advocacy group. These challenges could result in the Privacy Shield program being very short-lived. Additionally, the ECJ is considering another challenge against Facebook, referred to it by the Irish Data Protection Commissioner, this time to standard contract clauses. The challengers in that case argue that the same concerns behind the ECJ’s Safe Harbor decision should apply to standard contract clauses. The forthcoming decision has the potential to create a precedent that could bring down the Privacy Shield program as well.
  • Other public and private actions may erode Privacy Shield’s validity. On December 1, 2016, a change to Rule 41 of the Federal Rules of Criminal Procedure became effective. The change was intended to give investigators more power to obtain warrants against cyber criminals using botnets or otherwise masking their identity, such as through secure web browsers or virtual private networks. Under the amended rule, law enforcement seeking to use remote access to search media and obtain electronically stored information can obtain a warrant from a magistrate judge located in a district where “activities related to a crime may have occurred” if the actual location of the media or information has been “concealed through technological means.” Since the rule is not limited on its face to servers in the US, absent further clarification of its scope it could be used by law enforcement to obtain from a US magistrate judge a warrant to search and seize information from servers located in the EU. This global reach would likely be found in direct conflict with the concepts of privacy and data protection as fundamental human rights under the EU’s Charter of Fundamental Rights. Additionally, in early October, reports surfaced that Yahoo! had secretly scanned the email accounts of all of its users at the request of US government officials, which if true would likely be inconsistent with the terms of the Privacy Shield agreement. Opponents of Privacy Shield could use actions such as these as ammunition in their efforts to invalidate the program. In fact, there have already been calls for the European Commission and the EU’s Article 29 Working Party to investigate the Yahoo! scanning allegations, and according to a European Commission spokesperson statement on November 11, 2016, the EC has “contacted the U.S. authorities to ask for a number of clarifications.”
  • Can any EU-US framework be adequate? The legal challenges and public/private actions cited above all lead to one fundamental question that many parties involved in the Privacy Shield program have been hesitant to ask – is there a fundamental, irreconcilable conflict between (1) the United States’ approach to privacy and (2) the EU’s commitment to privacy and data protection as fundamental human rights? If so, the US’s sectoral approach to data privacy legislation, and the powers of its law enforcement agencies to obtain information from private companies and servers, may mean that no entity-level mechanism to facilitate EU-US data transfers is “adequate” in the eyes of the EU, and that EU-US data transfers are approaching a dead end. While the US government has imposed restrictions on its surveillance activities in the post-Snowden world, it remains very unclear whether anything short of concrete legislation protecting the rights of EU citizens (which would run counter to US counter-terrorism activities), or a modification of the EU’s principles, would be sufficient. I suspect there may be a difference between the view of those in the EU seeking a pragmatic approach (those who believe that the importance of EU-US data transfers, including their economic and geopolitical benefits, necessitates some compromise) and those seeking an absolute approach (those who believe that data protection, as a fundamental human right, must trump any other interests). The forthcoming decisions in the challenges to standard contract clauses and the Privacy Shield program will likely help shed light on whether this fundamental conflict is fatal to any entity-level mechanism.
  • Compliance, not certification, with the Privacy Shield principles is what matters. A number of US companies have chosen to tout their Privacy Shield self-certification via blog posting or press release. While a press release on Privacy Shield certification can be a useful way for a company to demonstrate its global presence and commitment to data privacy, remember that it’s a self-certification process (although some companies are using third-party services such as TRUSTe to help them achieve compliance). A company’s certification of compliance with the Privacy Shield principles is less important than the processes and procedures it has put in place to manage that compliance. To determine whether a company is self-certified under the Privacy Shield program, you can search the database of certified companies at http://www.privacyshield.gov/list, and check the company’s website privacy policy, which should contain disclosures affirming and relating to its commitment to the Privacy Shield principles. If you’re a company certified under the Privacy Shield, be prepared to answer questions from EU companies on how you comply with the Privacy Shield principles – you may be asked.

So, what does all this mean? At the moment, Privacy Shield may be a bit rickety, but unless your company can effectively use standard contractual clauses or binding corporate rules, short of direct consent it’s the only game in town for US companies which need to receive data from their EU clients, customers and business partners. Even SCCs may be a short-lived solution, meaning many companies may not want to invest the time, effort and expense required to adopt that entity-level approach. Given the current state of Privacy Shield and EU-US data transfers in general, US companies may want to consider the “herd on the African savanna” approach to compliance: the safest place in a herd on the African savanna is the center, as it’s almost always the animals on the outside which get picked off. Unless there is a compelling business reason to be on the outside of the herd (a desire to be viewed as a market leader, a willingness to risk doing nothing until clearer direction is available, etc.), the safest place from a compliance perspective is to stick with the pack. While that approach is not for everyone, many companies may feel that staying in the center of the herd of US companies dealing with EU-US data transfers is the safest course while the fate of the Privacy Shield, and EU-US data transfers in general, plays out.

Blockchain and Distributed Ledger Technology Will Change the World (Eventually)

Many people associate “blockchain” with the crypto-currency Bitcoin. However, they are not one and the same. Bitcoin is an application; blockchain and distributed ledger technology are the methods behind it. Given their widespread potential applications, blockchain and distributed ledger technology are poised to revolutionize many aspects of the world around us. They may prove to be as disruptive and innovative a force as augmented reality, or the Internet itself. New articles touting blockchain and distributed ledger technology appear every day, even as the technology remains unknown or confusing to many people. What is it? How might it change the world? And what legal and other risks does it bring?

What is Distributed Ledger Technology and Blockchain?

Centralized Ledgers

Let’s start with what we know – a centralized ledger. Ledgers (what we’ll call a database, list, or other information record) have played an important role in commerce for millennia, recording information about things such as physical property, intangible property including financial holdings, and other assets. The most recent innovation has been the move from physical ledgers (paper, tablets, etc.) to electronically stored ledgers. A “centralized ledger” is a ledger maintained and administered in a single, central location (e.g., a computer database stored on a server), accessible either by anyone without access controls (public) or through an access-control layer by persons or organizations with valid login credentials (permissioned). This is a “hub-and-spoke” system of data access and management. Centralized ledgers have historically had many benefits, such as minimized data redundancy, a limited number of access points to the data for security purposes, centralized administration, and centralized end user access. However, there are also disadvantages, such as greater potential for loss or inaccessibility if the central location suffers a hardware failure or connectivity outage, the inability to recover lost data elements, and a dependence on network connectivity for user access to the ledger.

Distributed Ledgers

One way to address these disadvantages is through a distributed ledger, where an electronic ledger is distributed (mirrored) to a network of participants (aka “nodes”) through a software program so that each participant has a complete and identical copy of the ledger. Nodes can be individuals, sites, companies/institutions, geographical areas, etc. There is no centralized administrator or “primary node” — if a change is made to one copy of the ledger, that change is automatically propagated to all copies of the ledger in the system based on the rules of the system (called a “consensus algorithm”) which ensures that each distributed copy of the ledger is identical. (For example, in Bitcoin, each node uses an algorithm that gives a score to each version of the database, and if a node receives a higher scoring version of the ledger, it adopts the higher scoring version and automatically transmits it to other nodes.) Since the distributed ledger software on each node validates each addition to the distributed ledger, it’s very difficult to introduce a fraudulent transaction (to put it another way, transactions are audited in real time). Essentially, each node builds an identical version of the distributed ledger using the information it receives from other nodes. The use of distributed models in computing goes back to the origins of the Internet itself — ARPANET, which evolved into what we know today as the Internet, used a distributed model instead of a linear model to manage the transfer of data packets between computer networks.
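
The score-and-adopt behavior described above can be sketched in a few lines of Python. This is a toy illustration, not any production implementation: the `Node` class and the length-based `score` function are stand-ins of my own (Bitcoin, for example, scores competing chains by total accumulated proof-of-work).

```python
def score(ledger):
    # Stand-in scoring rule: prefer the longest version of the ledger.
    # Real systems use stronger measures, e.g. accumulated proof-of-work.
    return len(ledger)

class Node:
    """A toy distributed-ledger participant."""

    def __init__(self):
        self.ledger = []   # this node's local copy of the ledger
        self.peers = []    # other Node objects this node gossips with

    def receive(self, candidate_ledger):
        # Adopt the incoming version only if it outscores the local copy,
        # then propagate it so all copies converge on the same version.
        if score(candidate_ledger) > score(self.ledger):
            self.ledger = list(candidate_ledger)
            for peer in self.peers:
                peer.receive(self.ledger)
```

With two peered nodes, handing either node a higher-scoring ledger causes both copies to converge; the strict `>` check also stops the propagation from looping forever, since a node never re-adopts a version that does not beat its own.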

The software on each node uses cryptographic signatures to verify that it is authorized to view entries in, and make changes to, the distributed ledger. If a participant with rights to modify the ledger makes an addition to the ledger using the participant’s secure keys (e.g., a record of a change in ownership of an asset or recording of a new asset), the addition to the ledger is validated by the consensus algorithm and propagated to all mirrored copies of the ledger, which helps to ensure that the distributed ledger is auditable and verifiable.

Thus, the four central tenets of a distributed ledger are:

  1. distributed copies among nodes via client software;
  2. cryptographic signatures to allow nodes to view, or add to, the distributed ledger in an auditable and verifiable fashion;
  3. a consensus algorithm to ensure distributed copies of the ledger match among participants without the need for a centralized administrator; and
  4. record permanency, so that a verified entry accepted to the ledger via the consensus algorithm becomes permanent (it can be corrected via a later addition to the ledger but never removed).

Unlike a centralized ledger such as a database, where the data records and access/usage logs are maintained separately, a distributed ledger maintains data records within a validated structure that captures access and changes within the data store itself. Whereas the server holding a centralized ledger is distinct from the computers which retrieve data from it, each node in a distributed ledger is an equally trusted “peer” of every other node. Another key difference is that copies of a distributed ledger cannot drift apart: if you make a copy of a centralized ledger and store it somewhere else, it will fall out of sync with the original, whereas each copy of a distributed ledger is kept identical by the client software.

Blockchains

A “blockchain” is a specific way of implementing distributed ledger technology – or more precisely, a specific type of distributed ledger. In a blockchain ledger, each record of new value added to the ledger and each transaction affecting existing entries (which we will collectively call “blocks”) includes a timestamp and a cryptographic verification code, called a “hash,” computed from the contents of the previous block, linking it to that block and forming a block “chain.” Because each block is cryptographically tied to the previous block via a one-way hash, the entire chain is secure – a client can verify that a block validates against the previous block, but the hash cannot be reversed to reveal the underlying contents. If a block in the chain is altered, its hash value changes and no longer matches the hash stored in later blocks, and the alteration will be rejected by the nodes on the blockchain network. In a blockchain, transactions entered into the system during a specified period of time are bundled together and added to the blockchain as a new block.
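
The hash-linking just described can be illustrated with a short Python sketch. This is a minimal, illustrative model only; the field names (`prev_hash`, etc.) are my own, and real blockchains add much more (Merkle trees of transactions, proof-of-work, network rules).

```python
import hashlib
import json
import time

def block_hash(block):
    # Serialize the block deterministically, then take its SHA-256 digest.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_block):
    # Each new block stores the hash of the block before it,
    # which is what forms the "chain."
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(prev_block) if prev_block else "0" * 64,
    }

def chain_is_valid(chain):
    # Verify each block's stored prev_hash against the actual hash of
    # the block before it -- the check a node runs before accepting
    # a copy of the ledger.
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True
```

Tampering with any block breaks validation of every later block, which is exactly why alterations are rejected by the other nodes on the network.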

Bitcoin is an early example of a blockchain application. Participants add new blocks to the blockchain by solving a cryptographic puzzle (this is called “mining,” takes a lot of computing power, and is rewarded with new bitcoins). Transactions for the purchase and sale of bitcoins are also recorded in blocks in the Bitcoin blockchain – the blockchain is the public ledger of all Bitcoin transactions.
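The cryptographic puzzle behind mining can be shown with a toy proof-of-work search: find a nonce that makes the block's hash start with a required number of zeros. Real Bitcoin mining works on the same principle but at a vastly higher, self-adjusting difficulty; this sketch is illustrative only.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash has `difficulty` leading
    zero hex digits; a toy stand-in for Bitcoin's proof of work."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra required zero multiplies the expected work by 16, which is
# why mining "takes a lot of computing power" at real-world difficulty.
nonce = mine("alice pays bob 5", difficulty=4)
digest = hashlib.sha256(f"alice pays bob 5{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```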

Blockchain applications can be grouped into three categories: Blockchain 1.0 applications (cryptocurrencies such as Bitcoin); Blockchain 2.0 applications (financial applications); and Blockchain 3.0 applications (other emerging applications). Blockchain applications, like other distributed ledgers, can be either public (the client software has no access control layer, meaning anyone with access to the software can access the blockchain network) or permissioned (the client software has an access control layer restricting access to the blockchain network). A blockchain can be optimized to handle transactions (such as currency transactions) or logic (such as managing business and governance rules). Public blockchain networks can serve as a permanent public record of transactions, whereas permissioned blockchain networks can protect against external hacking attempts.

How might blockchain and distributed ledgers change the world?

Roy Amara, a former president of the Institute for the Future, said that people overestimate a technology’s effect in the short term and underestimate it in the long run, a statement known as “Amara’s Law.” However, I think a corollary is in order – the impact of new technology presents at first as rapidly disruptive (both positively and negatively), but often manifests organically and transparently to change the world over time, at a rate proportional to the maturity of commercially available applications, to consensus on technological standards, and to decreasing costs to implement (and increasing ROI from implementing) the technology in practical business and consumer situations. For example, RFID technology was touted early on as a “change the world” technology, and it has — but most prominently through organic and innovative integration of the technology into supply chain and inventory management. Social networking is viewed by many as a “killer app” — a catalyst that accelerates the adoption of a new technology — which helped usher in the third Age of the Internet, and it has changed the world by changing how we connect with others: we now post updates instead of sending letters or emails to our friends. Both took years to become pervasive in society and industry.

Blockchain and distributed ledger networks have the potential to change the way many systems and business processes work across industries. Since blockchain and distributed ledger networks are platform-agnostic, a distributed ledger could be stored in different hardware/software configurations across different nodes, reducing the need for expensive and time-consuming upgrades to support the distributed model. For example, a permissioned blockchain model could help an organization such as the US Veterans Administration better manage appointment scheduling across a large number of hospitals and clinics (in fact, a resolution was recently passed in the US House of Representatives promoting just that, “to ensure transparency and accountability”). Financial and currency transactions are a major focus of practical applications of distributed ledger networks and blockchain technology. The technology could also be used in applications such as better and more secure management of governmental records and other services; tracking tax collection and receipts; managing assets; identity verification; decentralized voting; managing and tracking inventory levels and B2B/B2C product fulfillment; tracking the “data supply chain” for the flow of data among systems; managing system access controls; protection of critical public and private infrastructure; tracking royalties due to artists for the use of their works; and “smart contracts” (aka “blockchain contracts”) that create, execute, and enforce agreements between parties when certain pre-arranged conditions occur. Distributed ledger networks also have the advantage of being more secure, as the consensus algorithm makes it considerably more difficult for a cyber-attacker to successfully alter the distributed ledger. They can also allow for greater access transparency, a central tenet of many privacy principles, by allowing individuals to access records in the ledger relating to them or containing their information.
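To make the “smart contracts” idea concrete, here is a minimal sketch of an escrow agreement that settles itself when a pre-arranged condition occurs. The class and all names are hypothetical; real smart contracts execute on a blockchain platform rather than as ordinary application code.

```python
# Hypothetical escrow "smart contract": value is released automatically
# once a pre-arranged condition is reported. Plain Python, not an actual
# blockchain platform; all names are illustrative.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid_out = False

    def confirm_delivery(self) -> None:
        """A participant (or data feed) reports the condition occurred."""
        self.delivered = True
        self._settle()

    def _settle(self):
        # Settlement is pure code: no manual approval step once the
        # coded condition holds, and it runs at most once.
        if self.delivered and not self.paid_out:
            self.paid_out = True
            return f"release {self.amount} to {self.seller}"
        return None

contract = EscrowContract("alice", "bob", 100)
assert not contract.paid_out          # nothing moves before the condition
contract.confirm_delivery()
assert contract.paid_out              # funds released automatically
```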

The companies that immediately benefit from a new disruptive business method such as blockchain are those which seek to innovate applications of the method to monetize it, obtain a first mover advantage, and ideally seize significant market share for as long as possible. Industry groups and trade associations will form to promote it, and regulators will begin to take notice. Blockchain and distributed ledger technology is already following this pattern. In late September 2016, two members of the US House of Representatives formed the Congressional Blockchain Caucus as a bipartisan group, and a “Blockchain Innovation Center” opened in Washington, DC. A coalition of lawyers and academics has founded the Digital Currency and Ledger Defense Coalition (DCLDC), whose mission, per its website, is “to help protect individual constitutional rights and civil liberties” with respect to the emerging technology. Groups such as the Hyperledger Project seek to promote the adoption of distributed ledger networks and blockchain applications. As distributed ledger and blockchain technology matures, and start-ups present intriguing new products and services coupled with a strong value proposition for businesses to adopt the technology early, companies will begin to implement blockchain and other distributed ledger technologies in a variety of ways.

Risks and Challenges Associated with Blockchain and Distributed Ledger Technology

As companies evaluate the adoption of blockchain and distributed ledger applications, they will need to focus on the risks and challenges raised by the technology. These include:

  • Ensuring the ROI and business case is there. Blockchain and distributed ledger technology is not intended to replace existing centralized ledgers such as databases. If a number of parties using different systems need to track something electronically that changes or updates frequently, a distributed ledger may be a good solution. If those needs are not there, or if there is a continuing need to rely on paper transaction records, a centralized ledger continues to be the better choice. Companies need to ensure there is a compelling ROI and business case before implementing the technology.
  • Record retention risks. One of the features of blockchain and distributed ledger networks is record permanency. This may be incompatible with the requirements for data to be destroyed and deleted after a period of time, such as credit/debit card data under PCI rules, HR data under various regulatory requirements, and the limitations of a company’s own record retention policy.
  • Data Privacy. Distributed ledger technology such as blockchain is inherently designed to share information among every participant/node. If a ledger transaction or block contains private information, such as an account number or company confidential information, it will be visible to every user of every node. This is one of the reasons permissioned and private distributed ledgers are a focus of many companies seeking to innovate in the space. Additionally, as nodes in a distributed ledger network can be geographically disparate, rules and requirements for the transfer of data between geographies may play a major role. It is also likely that at some point decryption technology will evolve to the point where today’s cryptographic signatures may no longer be considered safe.
  • Loss of Control. Companies routinely implement controls (processes and procedures) to manage their systems and operations, which controls may be audited by customers/partners or certified under standards such as SOC 2. But who is accountable for a database distributed across geographies and companies? Use of a distributed ledger system with nodes outside of a company’s systems means ceding some control to an automated process and to a decentralized group of participants in the distributed ledger/blockchain. An erroneous record in a distributed ledger is permanent: it can be corrected by a later entry, but never removed.
  • A Square Peg in a Legal and Regulatory Round Hole. As is often the case, one of the challenges for lawyers and others is determining how existing laws and regulations will likely be interpreted to fit new technologies such as blockchain and distributed ledger; where new laws may be required and how permissive or restrictive they may be; and how enforcement and penalties under both new and existing laws will play out. Moreover, a distributed ledger network may cross multiple jurisdictions, raising cross-border regulation and enforcement issues. All but the earliest adopters often take the “herd on the savanna” approach (staying in the center of the herd as the safest point, and migrating to one edge or another once the risks to the outliers have been better gauged). Additionally, contract law requires, at its core, offer, acceptance and consideration between the contracting parties. The emergence of “smart contracts” that rely on computer algorithms to establish the formation and performance of contracts may challenge traditional legal principles of contract law, such as contract formation and termination, and the traditional focus of laws on the acts of persons (not automated technologies).
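One common approach to the data privacy risk noted above is to keep raw personal information off the shared ledger entirely, recording only a salted one-way hash that any node can verify against data held privately. The sketch below illustrates the idea; the function and field names are my own.

```python
import hashlib
import os

def redact_for_ledger(record: dict, private_fields: set) -> dict:
    """Replace private values with salted one-way hashes before writing
    a record to a shared ledger, so every node can verify the record
    without being able to read the underlying personal data."""
    out = {}
    for key, value in record.items():
        if key in private_fields:
            salt = os.urandom(16)
            digest = hashlib.sha256(salt + str(value).encode()).hexdigest()
            out[key] = {"salt": salt.hex(), "sha256": digest}
        else:
            out[key] = value
    return out

entry = redact_for_ledger(
    {"txn_id": "T-1001", "account_number": "4111-0000-1234"},
    private_fields={"account_number"},
)
assert entry["txn_id"] == "T-1001"                         # public field kept
assert set(entry["account_number"]) == {"salt", "sha256"}  # raw value gone
```

The party holding the raw value (and the salt) can later prove to any node that it matches the ledger entry, without the value itself ever being visible to every node.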

Finally, any technology brings both benefits and potential risks. If the benefits outweigh the risks on the whole, the public interest is not served when the legal, regulatory and privacy pendulum swings too far in response. The spread of blockchain and other distributed ledger technologies and applications will depend on the creation of a legal, regulatory, and privacy landscape that fosters innovation in the space.

The Fourth Age of the Internet – the Internet of Things

We are now in what I call the “Fourth Age” of the Internet.  The First Age was the original interconnected network (or “Internet”) of computers using the TCP/IP protocol, with “killer apps” such as e-mail, telnet, FTP, and Gopher, mostly used by the US government and educational organizations. The Second Age began with the creation of the HTTP protocol in 1990 and the original static World Wide Web (Web 1.0). The birth of the consumer internet, the advent of e-commerce, and the ’90s dot-com boom (and bust in the early 2000s) occurred during the Second Age. The Third Age began in the 2000s with the rise of user-generated content, dynamic web pages, and web-based applications (Web 2.0). The Third Age has seen the advent of cloud computing, mobile and embedded commerce, complex e-marketing, viral online content, real-time Internet communication, and Internet and Web access through smartphones and tablets. The Fourth Age is the explosion of Internet-connected devices, and the corresponding explosion of data generated by those devices – the “Internet of Things,” through which the Internet further moves from something we use actively to something our devices use actively, and we use passively. The Internet of Things has the potential to dramatically alter how we live and work.

As we move deeper into the Fourth Age, there are three things which need to be considered and addressed by businesses, consumers and others invested in the consumer Internet of Things:

  • The terms consumers associate with the Internet of Things, e.g., “smart devices,” should be defined before “smart device” and “Internet of Things device” become synonymous in the minds of consumers.  As more companies, retailers, manufacturers, and others jump on the “connected world” bandwagon, more and more devices are being labeled as “smart devices.”  We have smart TVs, smart toasters, smart fitness trackers, smart watches, smart luggage tags, and more (computers, smartphones and tablets belong in a separate category). But what does “smart” mean?  To me, a “smart device” is one that not only can collect and process data and take general actions based on the data (e.g., sound an alarm), but can be configured to take user-configured actions (e.g., send a text alert to a specified email address) and/or can share information with another device (e.g., a monitoring unit which connects wirelessly to a base station). But does “smart device” automatically mean one connected to the Internet of Things?  I would argue that it does not.

Throughout its Ages, the Internet has connected different types of devices using a common protocol, e.g., TCP/IP for computers and servers, HTTP for web-enabled devices. A smart device must do something similar to be connected to the Internet of Things. However, there is no single standard communications protocol or method for IoT devices. If a smart device uses one of the emerging IoT communications protocols such as Zigbee or Z-Wave (“IoT Protocols”), or has an open API that allows other devices and device ecosystems such as SmartThings, Wink or IFTTT to connect to it (“IoT APIs”), it’s an IoT-connected smart device, or “IoT device.” If a device doesn’t use IoT Protocols or support IoT APIs, it may be a smart device, but it’s not an IoT device. For example, a water leak monitor that sounds a loud alarm if it detects water is a device.  A water leak monitor that sends an alert to a smartphone app via a central hub, but cannot connect to other devices or device ecosystems, is a smart device.  Only if the device uses an IoT Protocol or supports IoT APIs, allowing it to interconnect with other devices or device ecosystems, is it an IoT device.
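The device / smart device / IoT device distinctions drawn above can be codified directly. The function and field names below are my own, and the protocol list reflects only the Zigbee and Z-Wave examples given in the text:

```python
# Codifying the article's taxonomy: "smart" requires configurable actions
# or device-to-device sharing; "IoT device" additionally requires an IoT
# Protocol or an open IoT API. Field names are illustrative.

IOT_PROTOCOLS = {"zigbee", "z-wave"}

def classify(device: dict) -> str:
    smart = bool(device.get("user_configurable_actions")
                 or device.get("shares_data"))
    iot = (device.get("protocol", "").lower() in IOT_PROTOCOLS
           or bool(device.get("open_api")))
    if smart and iot:
        return "IoT device"
    if smart:
        return "smart device"
    return "device"

# The water leak monitor example from the text:
alarm_only = {}                                    # just sounds an alarm
hub_monitor = {"user_configurable_actions": True}  # alerts your phone
zwave_monitor = {"user_configurable_actions": True,
                 "protocol": "Z-Wave"}             # interconnects

assert classify(alarm_only) == "device"
assert classify(hub_monitor) == "smart device"
assert classify(zwave_monitor) == "IoT device"
```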

“Organic” began as a term to define natural methods of farming.  However, over time it became overused and synonymous with “healthy.”  Players in the consumer IoT space should be careful not to let key IoT terminology suffer the same fate. Defining what makes a smart device part of the Internet of Things will be essential as smart devices continue to proliferate.

  • Smart devices and IoT devices exacerbate network and device security issues. Consumers embracing the Internet of Things and connected homes may not realize that adding smart devices and IoT devices to a home network can create new security issues and headaches. For example, a wearable device with a Bluetooth security vulnerability could be infected with malware while you’re using it, and infect your home network once you return and sync it with your home computer or device.  While there are proposals for a common set of security and privacy controls for IoT devices, such as the IoT Trust Framework, none has yet been adopted industry-wide.

Think of your home network, and your connected devices, like landscaping.  You can install a little or a lot, all at once or over time.  Often, you have a professional do it to ensure it is done right. Once it’s installed, you can’t just forget about it — you have to care for it, through watering, trimming, etc. Occasionally, you may need to apply treatments to avoid diseases. If you don’t care for your landscaping, it will get overgrown; weeds, invasive plants (some poisonous) and diseases may find their way in; and you will ultimately have a bigger, harder, more expensive mess to clean up later on.

You need to tend your home network like landscaping, only the consequences of a neglected home network can be much worse than overgrown shrubbery. Many consumers are less comfortable tinkering with computers than they are tinkering with landscaping.  Router and smart device manufacturers periodically update the embedded software (or “firmware”) that runs those devices to fix bugs and address security vulnerabilities. Software and app developers similarly release periodic updates. Consumers need to monitor for firmware and software updates regularly, and apply them promptly once available.  If a device manufacturer goes out of business or stops supporting a device, consider replacing the device, as it will no longer receive security updates. Routers need to be properly configured, with usernames and strong passwords set, encryption enabled, network names (SSIDs) configured, etc.  Consumers with a connected home setup should consider a high-speed router supporting a standard with sufficient bandwidth, such as 802.11ac or 802.11n.

The third party managed IT services industry has existed since the Second Age. As connected homes proliferate resulting in complex connected home infrastructure, there is an opportunity for “managed home IT” to become a viable business model.  I expect companies currently offering consumer-focused computer repair and home networking services will look hard at adding connected home management services (installation, monitoring, penetration testing, etc.) as a new subscription-based service.

  • Smart device companies need to think of what they can/can’t, and should/shouldn’t, do with data generated from their devices.  IoT devices and smart devices, and connected home technologies and gateways, generate a lot of data.  Smart/IoT device manufacturers and connected home providers need to think about how to store, process and dispose of this data.  Prior to the Internet of Things, behavioral data was gathered through the websites you viewed, the searches you ran, the links you clicked – “online behavioral data.”  The IoT is a game-changer. Now, what users do in the real world with their connected devices can translate to a new class of behavioral data – “device behavioral data.” Smart/IoT device manufacturers, and connected home providers, will need to understand what legal boundaries govern their use of device behavioral data, and how existing laws (e.g., COPPA) apply to the collection and use of data through new technologies. Additionally, companies must look at what industry best practices, industry guidelines and rules, consumer expectations and sentiment, and other non-legal contours shape what companies should and should not do with the data, even if the use is legal.  Companies must consider how long to keep data, and how to ensure it’s purged out of their systems once the retention period ends.

IoT and smart device companies, and connected home service and technology providers, should build privacy and data management compliance into the design of their devices and systems by adopting a “security by design” and “privacy by design” mindset. Consumers expect that personal data about them will be kept secure and not misused. Companies must ensure their own privacy policies clearly say what they do with device behavioral data, and must not do anything outside the boundaries of those policies (“say what you do, do what you say”). Consider contextual disclosures to make sure the consumer clearly understands what you do with device behavioral data.  Each new Age of the Internet has seen the FTC, state Attorneys General, and other consumer regulatory bodies look at how companies are using consumer data, and make examples of those they believe are misusing it. The Fourth Age will be no different. Companies seeking to monetize device behavioral data must maintain a focus on data compliance.

Key Security Provisions for Vendor/Partner Contracts

One of the most important lessons from the 2013 Target breach was that hackers will look for the weakest link in a company’s security chain when seeking a point of entry. Often, that weakest link is the vendors and partners which integrate with your IT infrastructure or have login credentials to your systems. Target’s HVAC vendor suffered a phishing attack that resulted in hackers obtaining access credentials to Target’s network which they used as their point of entry. Companies are increasingly doing security diligence on their vendors and partners to ensure that if they have access to the company’s network or systems, they will meet minimum security requirements.  It’s critical that your vendors and partners agree to minimum contractual security commitments as well. I often use a “security addendum” with controlling language to ensure that my standard provisions control over any conflicting provisions in the vendor/partner agreement, but will sometimes embed them directly into the contract.

Here are some of the provisions I like to include in vendor and partner agreements:

  • Definitions of Personal Information and Financial Account Information.  It’s important to define what “personal information” and “financial account information” mean.  In many cases, your vendor/partner’s definition of these terms may differ from yours. Ensuring you’re on the same page (e.g., you may consider IP addresses to be personal information, they do not) can be critical in the event there is an unauthorized release of information.  Be careful using a list of information types as the list may change over time; instead, consider a broad definition with examples.
  • Credentials. If you are providing credentials to your vendor/partner to access your network or systems, or that of a third party (e.g., a marketing service, a cloud hosting environment, etc.), ensure they will only use them as required by the contract.  Ensure they fall under the contractual definition of Confidential Information and will be treated as such.  Access to credentials should be limited to those with a “need to know.”
  • Safeguards.  I like to include a requirement to implement and follow administrative, physical and technical safeguards (no less rigorous than industry standard) designed to protect information and credentials.  This can be a good catch-all that can be leveraged if the vendor/partner has a problem later on and did not use industry standard security safeguards.  I also like to call out the importance of installing security software patches immediately to reduce the risk of an exploitable security hole.  If the vendor/partner has obtained security certifications (e.g., SSAE16, ISO 27001, etc.) that you are relying on, ensure they provide evidence of current certification upon request and do not let certifications lapse during the term of the Agreement.
  • Anti-Phishing Training.  Over 90% of hacking attacks start with a “phishing” attack. Consider specifically requiring your vendors/partners to provide anti-phishing training to all employees.
  • Payment Account Information.  If the vendor/partner will not be handling payment account information, add an affirmative obligation that the vendor/partner will not access, use, store, or process payment account information. If you are afraid that information might be inadvertently provided to the vendor/partner, consider adding a provision stating that if any payment account information is inadvertently provided to the vendor/partner, as long as they destroy it immediately and notify your company the vendor/partner will not be in breach of the affirmative obligation not to use payment account information.  If your vendor/partner will handle payment account information, ensure you have appropriate language that covers both current and future PCI-DSS (Payment Card Industry Data Security Standard) versions.  If appropriate, add language making clear that payment account information will be stored in active memory only, and not stored or retained on the vendor/partner’s servers (e.g., where the payment information is “tokenized” and/or securely transmitted to your company’s own servers at the time the transaction is processed).
  • Information Security Questionnaire.  Include the right to have the vendor/partner complete a written security questionnaire once a year, signed by a corporate officer. Requiring an annual questionnaire can help identify whether your vendors/partners are on top of emerging threats and risks. If you have limited resources to conduct audits, the responses to the questionnaires can help you identify which vendors/partners may be best to audit.  As part of the questionnaire, ask for copies of the vendor/partner’s disaster recovery plan and business continuity plan, and a certificate of insurance for the vendor/partner’s cyber security policy verifying that your company is named as an additional insured.
  • Audit Rights.  Include a right to conduct a security audit of a vendor/partner’s information technology and information security controls. This should include the right to conduct penetration testing of the vendor/partner’s network, ideally on an unannounced basis.  Make sure the vendor/partner is obligated to correct any security deficiencies found at their expense; if they don’t make corrections to your reasonable satisfaction, you should be able to exit the contract.  Ensure you can use both internal and third party resources to conduct the audit. In addition to a right to audit on a regular basis (e.g., once per year), allow the right to audit after a security breach so you can do your own analysis of how well the vendor/partner has bulletproofed their systems in light of the breach.
  • Security Breach.  Define what a “security breach” is (consider a broad definition that includes security incidents as well).  Ensure the vendor/partner promptly notifies your company in the event of a security breach, ideally by email to a “role” mailbox or to your CIO/CTO.  The vendor/partner should take any triage steps necessary to close the immediate security hole and then thoroughly review and bulletproof its systems and networks.  The vendor/partner should agree to work with your company and any government entities in any investigation of the breach.  Ensure that your company, not the vendor/partner, decides whether and how to communicate with affected individuals.  Ensure the vendor/partner bears the costs associated with a security breach.
  • Preservation Notices and E-Discovery.  If the records of the vendor/partner may be important if litigation is brought against your company, consider adding a clause ensuring that the vendor/partner will comply with any document preservation/litigation hold notice you provide, and that the vendor/partner will reasonably assist with electronic discovery requests.  A “friendly” clause like this can help avoid issues and strain on the partnership if litigation occurs.
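The “stored in active memory only” tokenization pattern mentioned in the Payment Account Information provision can be sketched as follows. This is an illustration of the concept, not a PCI-compliant design; a real deployment would rely on a dedicated tokenization service, and all names here are hypothetical.

```python
import secrets

# Sketch of the "active memory only" tokenization pattern: the raw card
# number exists only in process memory during the transaction, and only
# an opaque token is retained afterward.

class TokenVault:
    def __init__(self):
        self._memory_only = {}  # never persisted to disk in this sketch

    def tokenize(self, card_number: str) -> str:
        """Swap the raw card number for an opaque, random token."""
        token = "tok_" + secrets.token_hex(8)
        self._memory_only[token] = card_number
        return token

    def charge(self, token: str, amount: int) -> bool:
        # Look up the card number, use it once, and discard it.
        card = self._memory_only.pop(token, None)
        return card is not None  # amount unused in this sketch

vault = TokenVault()
token = vault.tokenize("4111-0000-1234")
assert token.startswith("tok_")
assert vault.charge(token, 100)      # first use succeeds
assert not vault.charge(token, 100)  # the raw card number is already gone
```

The contractual point is that only the token, never the card number, survives the transaction, so a later compromise of the vendor/partner’s stored data exposes nothing chargeable.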

Once you have these provisions in your agreement, don’t forget to tie them into your risk allocation provisions. If the vendor/partner carries insurance to protect against security breaches, ensure you are an additional insured and ask for a certificate of insurance annually. Ensure your indemnification section fully covers any breach of security obligations, and consider excluding these from your limitation of liability to the greatest extent possible.