The Promise of, and Legal Issues and Challenges With, Blockchain and Distributed Ledger Technology

[Originally published in December 2016. Updated on April 7, 2018 to clarify the explanation of blockchain and distributed ledger technology and to add more information on the legal risks and challenges.]

Blockchain and distributed ledger technology is poised to revolutionize many aspects of the world around us. It may prove to be as disruptive and innovative a force as augmented reality. Many people associate “blockchain” with “Bitcoin,” whose meteoric rise as a cryptocurrency has been well reported. However, the two are not one and the same. Bitcoin is an application; blockchain and distributed ledger technology are the methods behind it.  But what is the technology? How might it change the world? And what legal and other risks does it bring?

What is Distributed Ledger Technology and Blockchain?

The Old – Centralized Ledgers

Centralized ledgers (a database, list, or other information record) have played an important role in commerce for millennia, recording information about things such as physical property, intangible property including financial holdings, and other assets. The most recent innovation in centralized ledgers has been the move from physical ledgers (paper, stone tablets, etc.) to digital ledgers stored electronically. A “centralized ledger” is a ledger maintained and administered in a single, central location (e.g., a computer database stored on a server) accessible by anyone without use of access controls (public) or through an access control layer by persons or organizations with valid login credentials (permissioned). This is a “hub-and-spoke” system of data access and management. Centralized ledgers have historically had many benefits, such as minimized data redundancy, a limited number of access points to the data for security purposes, centralized administration, and centralized end user access. However, there are also disadvantages, such as greater potential for loss or inaccessibility if the central location suffers a hardware failure or connectivity outage, inability to recover lost data elements, and a dependence on network connectivity to allow access to the ledger by its users.

The New – Distributed Ledgers

Distributed ledgers seek to address these disadvantages by distributing (mirroring) the ledger contents to a network of participants (aka “nodes”) through a software program so that each participant has a complete and identical copy of the ledger, and by ensuring all nodes agree on changes to the distributed ledger. Nodes can be individuals, sites, companies/institutions, geographical areas, etc. There is no centralized administrator or “primary node” — if a change is made to one copy of the ledger, that change is automatically propagated to all copies of the ledger in the system based on the rules of the system (called a “consensus algorithm”), which ensures that each distributed copy of the ledger is identical. For example, in Bitcoin, each node uses an algorithm that gives a score to each version of the database, and if a node receives a higher-scoring version of the ledger, it adopts the higher-scoring version and automatically transmits it to other nodes. Since the distributed ledger software on each node validates each addition to the distributed ledger, it’s extremely difficult to introduce a fraudulent transaction (to put it another way, transactions are audited in real time). Essentially, each node builds an identical version of the distributed ledger using the information it receives from other nodes. The use of distributed models in computing goes back to the origins of the Internet itself — ARPANET, which evolved into what we know today as the Internet, used a distributed model instead of a linear model to manage the transfer of data packets between computer networks.
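As a rough illustration of this score-and-adopt behavior, consider the following sketch (a simplified model only; in Bitcoin the “score” is effectively the chain’s cumulative proof-of-work, and the class and method names here are hypothetical):

```python
# Simplified illustration of score-based consensus: each node keeps the
# highest-scoring ledger version it has seen and forwards it to its peers.

class Node:
    def __init__(self, name):
        self.name = name
        self.ledger = []          # this node's current copy of the ledger
        self.peers = []           # other nodes this node transmits to

    @staticmethod
    def score(ledger):
        # Toy scoring rule: longer ledgers win. Real systems use
        # cumulative proof-of-work or similar measures.
        return len(ledger)

    def receive(self, candidate):
        # Adopt a higher-scoring version and propagate it onward.
        if self.score(candidate) > self.score(self.ledger):
            self.ledger = list(candidate)
            for peer in self.peers:
                peer.receive(self.ledger)

a, b, c = Node("A"), Node("B"), Node("C")
a.peers, b.peers = [b], [c]

a.receive(["tx1", "tx2"])        # A adopts, forwards to B; B forwards to C
print(c.ledger)                  # ['tx1', 'tx2'] -- all copies now match
```

The point of the sketch is the last line: no central administrator pushed the update, yet every node converged on the same copy.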

The software on each node uses cryptographic signatures to verify that it is authorized to view entries in, and make changes to, the distributed ledger. If a participant with rights to modify the ledger (e.g., a digital token giving the participant the right to record a transaction) makes an addition to the ledger using the participant’s secure keys (e.g., a record of a change in ownership of an asset or recording of a new asset), the addition to the ledger is validated by the consensus algorithm and propagated to all mirrored copies of the ledger, which helps to ensure that the distributed ledger is auditable and verifiable. A key difference between centralized and distributed ledgers is that a distributed ledger cannot be forked — if you make a copy of a centralized ledger and store it somewhere else, it will be out of sync with the original copy, whereas each copy of a distributed ledger is kept identical by the client software.

Thus, the five typical characteristics of a distributed ledger are:

  1. distributed copies among nodes via client software;
  2. cryptographic signatures, or “keys,” to allow nodes to view, or add to, the distributed ledger in an auditable and verifiable fashion;
  3. a digital token (better known as a cryptocurrency) used within many distributed ledger networks to allow participants to record ledger entries;
  4. a consensus algorithm to ensure distributed copies of the ledger match among participants without the need for a centralized administrator; and
  5. record permanency, so that a verified entry accepted to the ledger via the consensus algorithm becomes permanent (it can be corrected via a later addition to the ledger but never removed).


While most press reporting around blockchains equates blockchain with distributed ledgers, a “blockchain” is a specific type of distributed ledger. Each record of new value added to the ledger and each transaction affecting entries in the ledger (which we will collectively call a “block”) includes a timestamp and a cryptographic verification code based on a data signature or “hash” of the previous block, which “chains” it to the previous block, forming a “chain of blocks,” or “blockchain,” within the nodes hosting the blockchain. Because each block is cryptographically tied to the previous block via a one-way hash, the entire chain is secure – a client can verify that a block in the blockchain validates against the previous block, but the one-way hash does not allow someone to trace the blockchain forward. If a block in the chain is altered, its hash value changes and no longer matches the hash stored in later blocks, and the alteration will be rejected by the nodes on the blockchain network. In a blockchain, transactions entered into the system during a specified period of time are bundled together and added to the blockchain as a new block.
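The hash-chaining described above can be illustrated with a short sketch (a toy model assuming SHA-256; it omits mining, consensus, and real transaction encoding):

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Bundle transactions into a block chained to its predecessor."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,   # ties this block to the one before it
    }
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify each block's stored prev_hash matches the prior block's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

genesis = make_block(["genesis"], prev_hash="0" * 64)
block2 = make_block(["Alice pays Bob 5"], prev_hash=genesis["hash"])
chain = [genesis, block2]
print(chain_is_valid(chain))     # True

# Tamper with the first block: even with a recomputed hash, the next
# block still stores the ORIGINAL hash, so the chain no longer validates.
genesis["transactions"] = ["genesis (tampered)"]
genesis["hash"] = hashlib.sha256(b"recomputed").hexdigest()
print(chain_is_valid(chain))     # False
```

Note how altering one block invalidates everything after it: the stored prev_hash values no longer line up, which is exactly why the other nodes would reject the altered copy.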

There are three primary types of blockchain networks – public, private, and permissioned.

  • Public blockchains allow anyone to participate, and therefore rely more heavily on a strong consensus algorithm to ensure the requisite level of trust between blockchain participants.
  • Private blockchains are limited to a discrete and specified group of participants, are usually small, and may not require use of a cryptocurrency given the inherent level of trust among private blockchain participants. Private blockchains often do not require a strong consensus algorithm.
  • Permissioned blockchains function much like public blockchains, but require that participants have permission to access, transact on, or create new blocks within a blockchain.

Tennessee’s recent state law on blockchain, Tenn. Code Ann. § 47-10-201, contains a good summary definition.  It defines “blockchain technology” as “distributed ledger technology that uses a distributed, decentralized, shared and replicated ledger, which may be public or private, permissioned or permissionless, or driven by tokenized crypto currencies or tokenless.  The data on the ledger is protected with cryptography, is immutable and auditable, and provides an uncensored truth.”  Arizona’s statutory definition (which predates Tennessee’s) is almost identical, except that “crypto currencies” is replaced with “crypto economics.”

Bitcoin is an early, and famous, example of a public blockchain application. Nodes on the Bitcoin blockchain network earn new bitcoins as a reward for solving a cryptographic puzzle through computing power, or “mining.” Transactions for the purchase and sale of bitcoins are also recorded in a block in the Bitcoin blockchain – the blockchain is the public ledger of all Bitcoin transactions. In other blockchain applications, the cryptocurrency is used as payment for blockchain transactions.

Blockchain and distributed ledger technology is not intended to fully replace existing centralized ledgers such as databases. If a number of parties using different systems need to track something electronically that changes or updates frequently, a distributed ledger may be a good solution. If those needs are not there, or if there is a continuing need to rely on paper transaction records, a centralized ledger continues to be the better choice. Companies need to ensure there is a compelling ROI and business case before embarking on a blockchain development and implementation program.

Smart Contracts

An important concept in blockchain technology is the “smart contract.”  Tennessee’s blockchain law defines a smart contract as “an event-driven program, that runs on a distributed, decentralized, shared and replicated ledger and that can take custody over and instruct transfer of assets on that ledger.” Arizona’s definition is identical, other than an additional reference to the program running with state.  In other words, a smart contract is a computer program encoded into a blockchain that digitally verifies, executes, and/or enforces a contract without the need for human intervention. Where a traditional contract involves the risk that a party will fail to perform (e.g., a shipper delivers products but the recipient fails to make payment for the products), smart contracts are self-executing and self-verifying.  In a smart contract for the purchase of goods tracked via blockchain, the seller and buyer would program a smart contract into the blockchain.  Once the delivery record is added to the blockchain, the smart contract automatically validates the shipper’s performance and automatically triggers payment from the buyer.  Since execution of a smart contract is part of the blockchain, it is permanent once completed. Blockchain protocols such as Ethereum have developed programming languages for smart contracts.
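To make the delivery-triggers-payment flow concrete, here is a toy sketch (an illustration only; real smart contracts run on-chain in languages such as Ethereum’s Solidity, and the function names, field names, and ledger structure here are hypothetical):

```python
# Toy model of the delivery-triggers-payment smart contract described above.
# Everything here is a hypothetical illustration, not an on-chain program.

def smart_contract(ledger_entry, escrow):
    """Event-driven: fires when a matching delivery record hits the ledger."""
    if ledger_entry["type"] == "delivery" and ledger_entry["order_id"] in escrow:
        amount = escrow.pop(ledger_entry["order_id"])
        # Payment is triggered automatically -- no human intervention needed.
        return {"type": "payment",
                "order_id": ledger_entry["order_id"],
                "amount": amount}
    return None  # entry didn't match any escrowed order; nothing happens

escrow = {"PO-1001": 2500}   # buyer's funds held pending delivery
delivery = {"type": "delivery", "order_id": "PO-1001"}

payment = smart_contract(delivery, escrow)
print(payment)   # {'type': 'payment', 'order_id': 'PO-1001', 'amount': 2500}
```

The key design point is that the program itself, not either party, verifies performance and releases payment once the triggering event appears on the ledger.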

How Might Blockchain and Distributed Ledgers Change the World?

Roy Amara, a former president of the Institute for the Future, said that people overestimate a technology’s effect in the short term and underestimate it in the long run, a statement known as “Amara’s Law.” However, I think a corollary is in order – the impact of new technology presents at first as rapidly disruptive (both positively and negatively), but often manifests organically and transparently to change the world over time, at a rate proportional to the maturity of the commercially available applications, to consensus on technological standards, and to decreasing costs to implement (and increasing ROI from implementing) the technology in practical business and consumer situations. For example, RFID technology was touted early on as a “change the world” technology, and it has been – but most prominently through organic and innovative integration of the technology into supply chain and inventory management. Social networking is viewed by many as a “killer app” (a catalyst that accelerates the adoption of a new technology) which helped usher in the third Age of the Internet, and it has changed the world by changing how we connect with others. Both took years to become pervasive in society and industry.

Blockchain and distributed ledger networks have the potential to change the way many systems and business processes work across industries. Financial and currency transactions are a prominent emerging application of distributed ledger networks and blockchain technology. Since blockchain and distributed ledger networks are platform-agnostic, a distributed ledger could be stored in different hardware/software configurations across different nodes, reducing the need for expensive and time-consuming upgrades to support the distributed model. For example, a permissioned blockchain model could help an organization such as the US Veterans Administration better manage appointment scheduling across a large number of hospitals and clinics (in fact, a resolution was recently passed in the US House of Representatives promoting just that, “to ensure transparency and accountability”). Industry groups, such as the Blockchain in Transport Alliance (BiTA), have sprung up to help develop and promote industry-specific blockchain standards and applications.

The technology could also be used in applications such as better and more secure management of governmental records and other services; tracking tax collection and receipts; managing assets; identity verification; decentralized voting; managing and tracking inventory levels and B2B/B2C product fulfillment; tracking the “data supply chain” for the flow of data among systems; managing system access controls; protection of critical public and privacy infrastructure; tracking royalties due to artists for the use of their works; and use of smart contracts to digitally create, execute, and enforce agreements between parties via blockchain transactions. Distributed ledger networks have the advantage of being more secure, as the consensus algorithm makes it considerably more difficult for a cyber-attacker to successfully alter the distributed ledger. They could also allow for greater access transparency, a central tenet of many privacy principles, by allowing individuals to access records in the ledger relating to them or containing their information.

Blockchain and Distributed Ledger Legal Risks and Issues

As with any new technology, blockchain creates some interesting conflicts with existing laws and regulations and raises interesting and complex legal and compliance issues.  These include:

Data privacy issues. Distributed ledger technology such as blockchain is inherently designed to share information among every participant and node. If information in a ledger transaction or block contains private information, such as an account number or company confidential information, it will be visible to every user of every node. This is one of the reasons permissioned and private distributed ledgers are a focus of many companies seeking to innovate in the space. Additionally, as nodes in a distributed ledger network can be geographically disparate, rules and requirements for the transfer of data between geographies may play a major role. It is also possible that at some point in the future decryption technology will evolve to the point where cryptographic signatures used in blockchain and distributed ledgers may no longer be considered safe.

EU personal data and the “Right to be Forgotten.”  In the EU, personal privacy is considered a fundamental human right under the Charter of Fundamental Rights of the European Union. The General Data Protection Regulation (GDPR) is Europe’s new comprehensive data protection framework that as of May 25, 2018 has the force of law in every EU member state.  Under Article 17 of the GDPR, EU data subjects have a “right to be forgotten” which requires companies to erase personal information about that data subject if certain conditions are met (e.g., the personal data is no longer necessary in relation to the purposes for which they were collected or otherwise processed). This right has cropped up in the United States as well, for example, in California for minors under 18 with respect to websites, social media sites, mobile apps, and other online services under Cal. Bus. & Prof. Code § 22580-81.  The “right to be forgotten” creates a direct conflict with the permanency of blockchain.  Companies should factor the “right to be forgotten” into their blockchain development planning, e.g., consider hashing technologies to pseudonymize personal data before encoding it into a blockchain, or other ways to avoid this conflict.  Developments in blockchain and distributed ledger technology may also arise to address this issue.
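One possible shape for the pseudonymization approach mentioned above, sketched here under stated assumptions: store only a salted hash of the personal data on-chain, keeping the salt and the underlying data in a deletable off-chain store. The names are hypothetical, and whether hash-based pseudonymization actually satisfies Article 17 erasure remains an open legal question.

```python
import hashlib
import os

# Hypothetical off-chain store: personal data and per-record salts live
# here, where they CAN be deleted (unlike entries on the blockchain).
off_chain = {}

def pseudonymize(record_id, personal_data):
    """Return a salted hash suitable for the ledger; keep the rest off-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain[record_id] = {"salt": salt, "data": personal_data}
    return digest            # this is all that would go on-chain

def forget(record_id):
    """Honor an erasure request: delete the salt and data off-chain.

    Without the salt, the permanent on-chain hash can no longer be
    linked back to the data subject (the intent, at least).
    """
    off_chain.pop(record_id, None)

on_chain_value = pseudonymize("rec-1", "jane.doe@example.com")
forget("rec-1")
print(len(on_chain_value), "rec-1" in off_chain)   # 64 False
```

The design choice here is that the immutable ledger never holds the personal data itself, only a value that is useless once the off-chain material is destroyed.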

Jurisdictional issues. The nodes in a blockchain are often in multiple jurisdictions around the country and/or around the world.  As each is a perfect copy, this can create issues from a jurisdictional perspective.  Legal concepts such as title, contract law, regulatory requirements, etc. differ from jurisdiction to jurisdiction. Does a blockchain network need to comply with the laws of every jurisdiction in which a node is operated?  Cross-border enforcement may become an issue – will one jurisdiction seek to impose its laws on all other nodes of a blockchain network? Blockchain network operators should consider how to specify, in a binding manner, a single choice of law and venue to govern disputes arising from the blockchain network and provide specificity as to compliance requirements.  This jurisdictional issue will likely lead to races between jurisdictions to establish themselves as a “blockchain and distributed ledger friendly” jurisdiction, just as Delaware established itself as a “corporation-friendly” jurisdiction in which many corporations choose to incorporate.  Jurisdictional issues will also impact discovery of data within the distributed ledger network, e.g., through subpoenas.  The rules regarding document discovery differ from state to state.  A company seeking to obtain blockchain data through judicial process may have the ability to engage in “forum shopping” to find the most convenient, and friendly, jurisdiction in which to file a document discovery request.

Record retention risks. One of the features of blockchain and distributed ledger networks is record permanency. This permanency may be incompatible with statutory requirements for data to be destroyed and deleted after a period of time, such as credit/debit card data under PCI rules and HR data under various regulatory requirements, and under privacy frameworks such as the GDPR.  It also likely conflicts with a company’s existing record retention policies.  Given these factors, companies looking to introduce blockchain technology should review their record retention policies and create a separate “permanent” category for data stored in blockchain applications.  At the same time, data in a blockchain is permanent only so long as the blockchain itself continues to exist.

Service Level Agreements.  Many companies include a service level agreement (SLA) in their service agreements, which provides committed minimum service levels at which the service will perform, and often includes remedies for a breach of the SLA.  SLAs are relatively easy to offer when they are limited to a company’s own systems and infrastructure.  However, a blockchain (other than perhaps a small private blockchain) may by its very nature be distributed beyond a company’s own network.  SLAs often exclude from downtime issues outside of the vendor’s control, e.g., downtime caused by a third party’s hardware or software.  Does a third-party node still fit within this exclusion? Many SLAs also address latency, i.e., the time it takes for a system to respond to an instruction. Companies will also need to think about what measure of latency (if any) should apply to transactions via blockchain and other distributed ledgers, and how to address blockchain in their SLAs.

Liability and Force Majeure issues. Companies routinely implement controls (processes and procedures) to manage their systems and operations, which controls may be audited by customers/partners or certified under standards such as SOC 2. But who is accountable for a database distributed across geographies and companies? Use of a distributed ledger system with nodes outside of a company’s systems means ceding some control to an automated process and to a decentralized group of participants in the distributed ledger/blockchain. An error in a record in a distributed ledger becomes permanent and can be corrected but never removed. Is an issue with a third-party node considered a force majeure event which excuses performance under an agreement? Is the type of network (public, private or permissioned) a factor?  Companies will need to think about how blockchain should tie into an agreement’s general force majeure provision, and how to allocate blockchain risk within a contract (through indemnities, limitation of liability, etc.).

Insurance issues.  Any new technology is quickly tested under insurance policies.  Companies will begin to tender claims under their electronic errors and omissions policies, commercial general liability policies, and possibly specialized cyber policies.  As insurance companies build up experience with blockchain claims, companies will likely see new endorsements and exclusions limiting insurance carriers’ liability under standard policies for blockchain-related losses.  This is often closely followed by the emergence of custom policy riders (for additional premium) to provide add-on insurance protection for blockchain-related losses.  Companies implementing blockchain technologies may want to discuss blockchain-related losses with their insurance carriers.

Intellectual property issues. As with any new technology, there has already been a flood of patent applications by companies “staking their claim” in the brave new frontier of blockchain and distributed ledger technology. While the core technology is open source, companies have created proprietary advancements in which they may assert patent or other intellectual property rights.  Dozens of companies have already obtained blockchain patents.  Technology and financial companies have undoubtedly already filed large numbers of blockchain patent applications that are working their way through the Patent and Trademark Office.  As is often the case with new technologies, there will likely be a flurry of patent infringement lawsuits as new patent holders seek to enforce their exclusive rights to their inventions.  Adopters of blockchain using custom applications or non-standard implementations should be especially sensitive as to whether their application or implementation could potentially infringe filed or issued blockchain patents.  Consulting external patent counsel knowledgeable in blockchain technology will become more and more important for these types of adopters.

Confidentiality issues. Information placed into a node of a public blockchain – even if that node is within a company’s own servers – is no different from posting code to a public GitHub repository. The result is that the information enters the public domain. Even with a private or permissioned blockchain, information encoded into the blockchain becomes visible to all participants with access rights.  A company’s use of a blockchain or distributed ledger to store confidential information, such as information subject to an NDA or the company’s own trade secrets, creates a risk of a breach of confidentiality obligations or loss of trade secret protection.  Companies should consider how to prevent confidential and other sensitive company information from being stored in blockchains in a manner that could result in a breach of confidentiality. Additionally, agreements routinely require the return or destruction of the discloser’s confidential information and other provided data and/or materials upon termination or expiration. An exception for data encoded onto a blockchain must be considered.

Discovery and Subpoenas.  Information encoded into a public blockchain may be considered in the public domain.  When litigation arises, will companies be able to push back on a discovery request encompassing data in a blockchain by stating that it is publicly available?  If a person can find the identity of other nodes in a blockchain network, we may see an increase in subpoenas directed to a node for blockchain data within the copy of the blockchain or distributed ledger hosted at that node (possibly based on favorable jurisdiction as noted above). Since every node maintains its own copy of a distributed ledger, and no one node owns or controls the data, this may affect the ability of a company to keep information out of third-party hands, as it may not have the ability to quash a subpoena directed at an independent node.

Application of existing legal structures to blockchain, smart contracts, and distributed ledgers. As is often the case, one of the challenges for lawyers and others is determining how existing laws and regulations will likely be interpreted to fit new technologies such as blockchain and distributed ledger technology; what new laws and regulations may be coming and how permissive or restrictive they may be; and how enforcement and penalties in connection with the new technologies under both new and existing laws will play out. “Smart contracts” that rely on computer algorithms to establish the formation and performance of contracts may challenge the nature and application of traditional legal principles of contract law such as contract formation and termination, and the traditional focus of laws on the acts of persons (not automated technologies), making it difficult for courts to stretch traditional contract law principles to the new technology.

Emerging laws.  It is axiomatic that law lags technology. The companies that immediately benefit from a new disruptive business method such as blockchain are those which seek to innovate applications of the method to monetize it, obtain a first mover advantage, and ideally seize significant market share for as long as possible. Industry groups and trade associations form to seek to promote it, and legislators take notice (especially given the meteoric rise of bitcoin prices during 2017). Legislators often jump to regulate something they don’t fully understand and whose potential is not fully realized, which can impede development and proliferation of the new technology.  A handful of states (including Arizona, Nevada, Tennessee, Delaware, Illinois, Vermont, and Wyoming) have already adopted blockchain-specific legislation, and this number will likely grow substantially in the next couple of years. Fortunately, the legislation enacted to date appears to support, rather than inhibit, blockchain technology. Other states have introduced or enacted legislation to study blockchain technology.

Disruptive technologies such as blockchain and distributed ledger technology bring both benefits and potential risks. If the benefits outweigh the risks on the whole, the public interest is not served when the legal, regulatory and privacy pendulum swings too far in response. The spread of blockchain and other distributed ledger technologies and applications will depend on the creation of a legal, regulatory, and privacy landscape that fosters innovation in the space.

Eric Lambert is the Commercial Counsel for the Transportation and Logistics division of Trimble Inc., an integrated technology and software provider focused on transforming how work is done across multiple professions throughout the world’s largest industries. He is counsel for the Trimble Transportation Mobility (including PeopleNet, Innovative Software Engineering, and Trimble Oil and Gas Services) and Trimble Transportation Enterprise (including TMW and 10-4 Systems) business units, leading providers of software and SaaS fleet mobility, communications, and data management solutions for transportation and logistics companies. He is a corporate generalist and proactive problem-solver who specializes in transactional agreements, technology/software/cloud, privacy, marketing and practical risk management. Eric is also a life-long techie, Internet junkie and avid reader of science fiction, and dabbles in a little voice-over work. Any opinions in this post are his own. This post does not constitute, nor should it be construed as, legal advice.

The What, Why and How of SLAs, aka Service Level Agreements (part 1)

Every company uses technology vendors, such as Software-as-a-Service providers, to provide critical components of their business operations. One pervasive issue in technology vendor agreements is the vendor’s commitment to the levels of service the customer will receive.  A representation to use commercially reasonable efforts to correct product defects or nonconformity with product documentation may not be sufficient for a customer relying on a technology vendor’s service for a mission-critical portion of its business. In this situation, the vendor may offer (and/or a customer may require) a contractual commitment as to the vendor’s levels of service and performance, typically called a “Service Level Agreement” or “SLA.” An SLA ensures there is a meeting of the minds between a vendor and its customer on the minimum service levels to be provided by that vendor.

At a high level, an SLA does three things:

  1. Describes the types of minimum commitments the vendor will make with respect to levels of service provided by the vendor;
  2. Describes the metrics by which the service level commitments will be measured; and
  3. Describes the rights and remedies available to the customer if the vendor fails to meet its commitments.

In many cases, an SLA is presented as an exhibit or appendix to the vendor agreement (and not as a separate agreement). In others, an SLA may be presented as a separate document available on a vendor’s website.  Think of the former as a customer-level SLA, which is stated directly in (and quite often negotiated on a customer-by-customer basis as part of) the service agreement with that customer, and the latter as a service-level SLA, which the vendor wants to apply equally to every user of its service.

In this two-part post, I’ll explain the contents of, reasons for, and important tips and tricks around technology SLAs.  Part 1 will cover uptime and issue resolution SLAs.  Part 2 will cover other types of technology SLA commitments, SLA remedies, and other things to watch for.

Common types of commitments in SLAs

The most common types of commitments found in technology SLAs are the uptime commitment and the issue resolution commitment.

Uptime SLA Commitment

An uptime commitment is generally provided in connection with online services, databases, and other systems or platforms (a “Service”). A technology vendor will commit to a minimum percentage of Service availability during specified measurement periods.  This percentage is typically made up of nines – e.g., 99% (“two nines”), 99.9% (“three nines”), 99.99% (“four nines”), 99.999% (“five nines”), etc.  Some SLAs will use “.5” instead of “.9”, for example, 99.5% or 99.95%.  Uptime is typically calculated as follows:

Uptime % = (total minutes in the measurement period - minutes of Downtime in that period) / total minutes in the measurement period × 100
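Expressed in code, the calculation might look like this (a simple sketch; the example assumes a 30-day month of 43,200 minutes):

```python
def uptime_percent(total_minutes, downtime_minutes):
    """Uptime % = (total minutes - Downtime minutes) / total minutes x 100."""
    return (total_minutes - downtime_minutes) / total_minutes * 100

# A 30-day month has 30 * 24 * 60 = 43,200 minutes.
print(round(uptime_percent(43_200, 432), 2))    # 99.0
print(round(uptime_percent(43_200, 43.2), 2))   # 99.9
```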

Definitions are key. The right definitions can make all the difference in the effectiveness of an uptime SLA commitment. Vendors may gravitate towards a narrower definition of “Downtime” (also called “Unavailability” in some SLAs) to ensure they are able to meet their uptime commitment, e.g., by excluding a slowdown that makes the Service hard (but not impossible) to use. Customers should look carefully at this definition to ensure it covers any situation in which they cannot receive substantially all of the value of the Service. For example, consider the difference between Unavailability/Downtime as a period of time during which the Service fails to respond or resolve, versus a period of time during which a material (or non-material) function of the service is unavailable. The SLA should define when the period of Unavailability/Downtime starts and ends, e.g., starting when the vendor first learns of the issue, and ending when the Service is substantially restored or a workaround is in place; customers should look at this carefully to ensure it can be objectively measured.

Mind the measurement period. Some vendors prefer a longer (e.g., quarterly) measurement period, as a longer measurement period reduces the chance a downtime event will cause a vendor to miss its uptime commitment. Customers generally want the period to be shorter, e.g., monthly.

Consider whether the uptime percentage makes sense in real numbers. Take the time to actually calculate how much downtime is allowed under the SLA – you may be surprised. For a month with 30 days:

  • 99% uptime = 432 minutes (7 hours, 12 minutes) of downtime that month
  • 99.5% uptime = 216 minutes (3 hours, 36 minutes) of downtime that month
  • 99.9% uptime = 43.2 minutes of downtime that month
  • 99.99% uptime = 4.32 minutes of downtime that month
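These figures follow directly from the uptime formula; a quick sketch (illustrative only, with my own function name) that derives the allowed downtime for any commitment:

```python
def allowed_downtime_minutes(uptime_pct: float, days_in_month: int = 30) -> float:
    """Maximum downtime that still satisfies the commitment for the month."""
    total_minutes = days_in_month * 24 * 60  # 43,200 for a 30-day month
    return total_minutes * (1 - uptime_pct / 100)

for pct in (99.0, 99.5, 99.9, 99.99):
    print(pct, round(allowed_downtime_minutes(pct), 2))
```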

One critical question customers should ask is whether a Service is mission-critical to its business.  If it’s not, a lower minimum uptime percentage may be acceptable for that service.

Some vendors may offer a lower uptime commitment outside of business hours, e.g., 99.9% from 6am to 10pm weekdays, and 99% all other times. Again, as long as this works for a customer’s business (e.g., the customer is not as concerned with downtime off-hours), this may be fine, but it can make it harder to calculate.

Ensure the Unavailability/Downtime exclusions are appropriate. Uptime SLAs generally exclude certain events from downtime even though the Service may not be available as a result of those events. These typically include unavailability due to a force majeure event or an event beyond the vendor’s reasonable control; unavailability due to the equipment, software, network or infrastructure of the customer or their end users; and scheduled maintenance.  Vendors will often seek to exclude a de minimis period of Unavailability/Downtime (e.g., less than 5/10/15 minutes), which is often tied to the internal monitoring tool used by the vendor to watch for Service unavailability/downtime. If a vendor wouldn’t know whether a 4-minute outage between service pings even occurred, it would argue that the outage should not count towards the uptime commitment.
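To illustrate the effect of a de minimis exclusion, here is a small sketch (the 5-minute threshold and the outage figures are hypothetical):

```python
def counted_downtime(outages_minutes, de_minimis=5.0):
    """Sum only outages at or above the de minimis threshold; shorter blips
    (which the vendor's monitoring may never even detect) are excluded."""
    return sum(m for m in outages_minutes if m >= de_minimis)

# Hypothetical month: outages of 4, 12, and 30 minutes.
print(counted_downtime([4, 12, 30]))  # → 42 (the 4-minute blip is excluded)
```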

Customers should make sure there are appropriate limits to these exclusions (e.g., force majeure events are excluded provided the vendor has taken commercially reasonable steps to mitigate the effects of such events consistent with industry best practices; scheduled maintenance is excluded provided a reasonable amount of advance written notice is provided).  Customers should watch out for overbroad SLAs that try to exclude maintenance generally (including emergency maintenance).  Customers may also want to ensure uptime SLAs include a commitment to take reasonable industry-standard precautions to minimize the risk of downtime (e.g., use of no less than industry-standard anti-virus and anti-malware software, firewalls, and backup power generation facilities; use of redundant infrastructure providers; etc.).

Don’t overlook SLA achievement reporting. One important thing customers should look for in an SLA is how the vendor reports on SLA achievement metrics, which can be critical to know when a remedy for an SLA failure may be available. Vendors may place the burden on the customer to provide notice of a suspected uptime SLA failure within a specified amount of time following the end of the measurement period, in which case the vendor will review uptime for that period and verify whether the failure occurred. However, without proactive metrics reporting, a customer may have only a suspicion of an SLA failure, not actual facts. Customers using a mission-critical system may want to consider asking for proactive reporting of SLA achievement within a certain amount of time following each calendar month.

Issue Resolution SLA Commitment

Of equal importance to an uptime commitment is ensuring that a Service issue (downtime or otherwise) will be resolved as quickly as possible.  Many technology SLAs include a service level commitment for resolution of Service issues, including the levels/classifications of issues that may occur, a commitment on acknowledging the issue, and a commitment on resolving the issue.  The intent of both parties should be to agree on a commitment that gives customers assurance that the vendor is exerting reasonable and appropriate efforts to resolve Service issues.

Severity Levels. Issue resolution SLAs typically include three to five “severity levels” of issues.  Consider the following impact classifications:

  • Critical: The Service is Unavailable
  • High: An issue causing one or more critical functions to be Unavailable or disrupting the Service, or an issue which is materially impacting performance or availability
  • Medium: An issue causing some impact to the Service, but not materially impacting performance or availability
  • Low: An issue causing minimal impact to the Service
  • Enhancement: The Service is not designed to perform a desired function

Issue resolution SLAs typically use some combination of these to group issues into “severity levels.”  Some group critical and high impact issues into Severity Level 1; some do not include a severity level for enhancements, instead allowing them to be covered by a separate change order procedure (including it in the SLA may be the vendor’s way of referencing a change order procedure for enhancements). Vendors may include language giving them the right to reclassify an issue into a lower severity level with less stringent timeframes. Customers should consider whether they should have the ability to object to (and block) a reclassification if they disagree that the issue should be reclassified.
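As a way of visualizing how such a grouping might work, here is a hypothetical severity matrix in Python (the levels, groupings, and timeframes are invented for illustration; real SLAs define their own):

```python
# Hypothetical severity matrix; real SLAs define their own levels and timeframes.
SEVERITY_LEVELS = {
    1: {"impacts": {"Critical", "High"}, "acknowledge_within_minutes": 30},
    2: {"impacts": {"Medium"}, "acknowledge_within_minutes": 4 * 60},
    3: {"impacts": {"Low"}, "acknowledge_within_minutes": 24 * 60},
}

def severity_for(impact: str) -> int:
    """Map an impact classification to its contracted severity level."""
    for level, spec in SEVERITY_LEVELS.items():
        if impact in spec["impacts"]:
            return level
    # e.g., Enhancements might fall outside the SLA, into a change order procedure
    raise ValueError(f"Unclassified impact: {impact}")

print(severity_for("High"))  # → 1
```

A customer-friendly SLA would also constrain when (and how) the vendor may move an issue from one level to another.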

Acknowledgment Commitment. Issue resolution SLAs typically include a commitment to acknowledge the issue. As with the uptime SLA, the definition of the acknowledgment timeframe is important (when it starts and when it ends). A vendor will typically define this as the period from the time it is first notified of or becomes aware of the issue to the time the initial communication acknowledging the issue is provided to the customer.  Customers should look at the method of communication (e.g., a post to the vendor’s support page, a tweet through its support Twitter account, an email, a phone call from the customer’s account representative, etc.) and determine whether a mass communication method versus a personal communication method is important.

For critical and high impact issues, vendors (especially those operating multi-tenant environments) will often not offer a specific acknowledgment commitment, instead offering something like “as soon as possible depending on the circumstances.”  The argument for this is that for a critical or high impact issue, a vendor wants all available internal resources triaging and working the problem, not reaching out to customers to tell them there is a problem. In many cases, this may be sufficient for a customer provided there is some general acknowledgment provided to a support page, support Twitter account, etc. to alert customers that there is an issue. In others, a customer may want to push for their account representative, or a vendor representative not involved in triaging the problem such as an account executive, to acknowledge the issue within a fixed amount of time, putting the burden on the vendor to ensure it has appropriate internal communication processes in place.

Resolution Commitment. Issue resolution SLAs also typically include a time commitment to resolve the issue. One important thing to focus on here is what “resolve” means.  Vendors may define it as the implementation of a permanent fix or a workaround that temporarily resolves the problem pending the permanent fix; in some cases, vendors may also define it as the commencement of a project to implement a fix.  Customers should ensure that the vendor promptly implements a permanent fix if a workaround is put in place, and that failure to do so is a failure under the SLA. Many vendors are reluctant to commit to a firm resolution timeframe, as the time required to resolve an issue or implement a workaround depends on the issue itself, and are often unwilling to negotiate the resolution commitment.  Customers should ensure the resolution commitment is reasonable and that the vendor is doing everything it can to correct issues.  For example, for critical and high impact issues, consider an issue resolution commitment of “as soon as possible using continuous diligent efforts” – as long as the vendor is working diligently and continuously to fix the issue, they’re in compliance with the SLA. For lower impact issues, consider a commitment to implement a fix or workaround in the ordinary course of business.

In part 2, I’ll cover other types of technology SLA commitments, SLA remedies, and other things to watch for.

Eric Lambert has spent most of his legal career working in-house as a proactive problem-solver and business partner. He specializes in transactional agreements, technology/software/e-commerce, privacy, marketing and practical risk management. Any opinions in this post are his own. This post does not constitute, nor should it be construed as, legal advice. He is a technophile and Internet evangelist/enthusiast. In his spare time Eric dabbles in voice-over work and implementing and integrating connected home technologies.

The Augmented World — Legal and Privacy Perspectives on Augmented Reality (AR)

You’ve likely heard that Augmented Reality (AR) is the next technology that will transform our lives. You may not realize that AR has been here for years. You’ve seen it on NFL broadcasts when the first down line and down/yardage appear on the screen under players’ feet. You’ve seen it in the Haunted Mansion ride in Disneyland when ghosts seem to appear in the mirror riding with you in your cart. You’ve seen it in cars and fighter jets when speed and other data is superimposed onto the windshield through a heads-up display. You’re seeing it in the explosion of Pokémon Go around the world. AR will affect all sectors, much as the World Wide Web did in the mid-1990s. Any new technology such as AR brings with it questions on how it fits under the umbrella of existing legal and privacy laws, where it pushes the boundaries and requires adjustments to the size and shape of the legal and regulatory umbrella, and when a new technology leads to a fundamental shift in certain areas of law. This article will define augmented reality and the augmented world, and analyze its impact on the legal and privacy landscape.

What is “augmented reality” and the “augmented world?”

One of the hallmarks of an emerging technology is that it is not easily defined. Similar to the “Internet of Things,” AR means different things to different people, can exist as a group of related technologies instead of a single technology, and is still developing. However, there are certain common elements among existing AR technologies from which a basic definition can be distilled.

I would define “augmented reality” as “a process, technology, or device that presents a user with real-world information, commonly but not limited to audiovisual imagery, augmented with additional contextual data elements layered on top of the real-world information, by (1) collecting real-world audiovisual imagery, properties, and other data; (2) processing the real-world data via remote servers to identify elements, such as real-world objects, to augment with supplemental contextual data; and (3) presenting in real time supplemental contextual data overlaid on the real-world data.” The real world as augmented through various AR systems and platforms can be referred to as the “augmented world.” AR and the augmented world differ from “virtual reality” (VR) systems and platforms, such as the Oculus Rift and Google Cardboard, in that VR replaces the user’s view of the real world with a wholly digitally-created virtual world, while AR augments the user’s view of the real world with additional digital data.

“Passive” AR (what I call “first-generation AR”) is a fixed system — you receive augmented information but do not do so interactively, such as going through the Haunted Mansion ride or watching your television set. The next generation of AR is “active,” meaning that AR is delivered in a changing environment, and the augmented world is viewed through a device you carry or wear. Google Glass and the forthcoming Microsoft HoloLens are examples of “active AR” systems with dedicated hardware; when worn, they augment the world with digital data superimposed on the real-time view of the world. However, AR has also found ways to use existing hardware — your smartphone. HP’s Aurasma platform is an early example of an active AR system that uses your smartphone’s camera and screen to superimpose digital content on the real world. What AR needed to go fully mainstream was a killer app that made it appeal to the masses, and it now has one — Pokémon Go. Within days of its launch in early July, TechCrunch reported that Pokémon Go had an average daily user base of over 20 million users. Some declared it the biggest “stealth health” app of all time, as it was getting users out and walking.

Active AR has the capacity to change how people interact with the world, and with each other. It is an immersive and engaging user experience. It has the capacity to change the worlds of shopping, education and training, law enforcement, maintenance, healthcare, gaming, and others. Consider an AR system that shows reviews, product data, and comparative prices while looking at a shelf display; identifies an object or person approaching you and makes it glow, flash, or otherwise stand out to give you more time to avoid a collision; gives you information on an artist, or the ability to hear or see commentary, while looking at a painting or sculpture; identifies to a police officer in real time whether a weapon brandished by a suspect is real or fake; or shows you in real time how to repair a household item (or how to administer emergency aid) through images placed on that item or on a stricken individual. For some, the augmented world will be life-altering, such as a headset as assistive technology which reads road signs aloud to a blind person or announces that a vehicle is coming (and how far away it is) when the user looks in the vehicle’s direction. For others, the ability to collect, process and augment real-world data in real time could be viewed as a further invasion of privacy, or worse, technology that could be used for illegal or immoral purposes.

As with any new technology, there will be challenges from a legal and digital perspective. A well-known example of this is the Internet when the World Wide Web became mainstream in the mid-1990s. In some cases, existing laws were interpreted to apply to the online world, such as the application of libel and slander to online statements, the application of intellectual property laws to file sharing over peer-to-peer networks, and the application of contract law to online terms of use. In others, new laws such as the Digital Millennium Copyright Act were enacted to address shortcomings of the existing legal and regulatory landscape with respect to the online world. In some instances, the new technology led to a fundamental shift in a particular area of law, such as how privacy works in an online world and how to address online identity theft and breaches of personal information. AR’s collection of data, and presentation of augmented data in real time, creates similar challenges that will need to be addressed. Here are some of the legal and privacy challenges raised by AR.

  • Rethinking a “reasonable expectation of privacy.” A core privacy principle under US law is that persons have a reasonable expectation of privacy, i.e., a person can be held liable for unreasonably intruding on another’s interest in keeping his/her personal affairs private. However, what is a “reasonable expectation of privacy” in a GoPro world? CCTV/surveillance cameras, wearable cameras, and smart devices already collect more information about people than ever before. AR technology will continue this trend. As more and more information is collected, what keeping “personal affairs private” looks like will continue to evolve. If you know someone is wearing an AR device, and still do or say something you intend to keep private, do you still have a reasonable expectation of privacy?

  • Existing Privacy Principles. Principles of notice, choice, and “privacy by design” apply to AR systems. Providers of AR systems must apply the same privacy principles to AR as they do to the collection of information through any other method. Users should be given notice of what information will be collected through the AR system, how long it will be kept, and how it will be used. Providers should collect only information needed for the business purpose, store and dispose of it securely, and keep it only as long as needed.

AR systems add an additional level of complexity — they are collecting information not just about the user, but also third parties. Unlike a cellphone camera, where the act of collecting information from third parties is initiated by the user, an AR system may collect information about third parties as part of its fundamental design. Privacy options for third parties should be an important consideration in, and element of, any AR system. For example, an AR system provider could ensure users have the ability to toggle the blocking of third party personal data from being collected or augmented, so personal information is only augmented when the user wants it to be. AR system providers may also consider an indicator on the outside of the device, such as an LED, to let third parties know that the AR system is actively collecting information.

Additionally, AR may create interesting issues from a free speech and recording of communications perspective. Some, but not all, court rulings have held that the freedom of speech guaranteed by the First Amendment extends to making recordings of matters of public interest. An AR system that is always collecting data will push the boundaries of this doctrine. Even if something is not in the public interest, many states require the consent of both parties to record a conversation between them. An AR system which persistently collects data, including conversations, may run afoul of these laws.

  • Children’s Privacy. It is worth a special note that AR creates an especially difficult challenge for children’s privacy, especially children under 13. The Children’s Online Privacy Protection Act (“COPPA”) requires operators of online services, including mobile apps, to obtain verifiable parental consent before collecting any personal information from children under 13. “Personal information” includes photos, videos, and audio of a child’s image or voice. As AR systems collect and process data in real time, the passive collection of a child’s image or voice (versus collection of children’s personal information provided to a company through an interface such as a web browser) is problematic under COPPA. AR operators will need to determine how to ensure they are not collecting personal information from children under 13. I expect the FTC will amend the COPPA FAQ to clarify its position on the intersection of AR and children’s privacy.
  • Intellectual Property. Aside from the inevitable patent wars that will occur over the early inventors of AR technologies, and patent holders who believe their patent claims cover certain aspects of AR technologies, AR will create some potentially interesting issues under intellectual property law. For example, an AR system that records (and stores) everything it sees will invariably capture some things that are protected by copyright or other IP laws. Will “fair use” be expanded in the augmented world, e.g., where an album cover is displayed to a user when a song from that album is heard? Further, adding content to a copyrighted work in the augmented world may constitute a prohibited derivative work. From a trademark perspective, augmenting a common-law or registered trademark with additional data, or using a competitor’s name or logo to trigger an ad about your product overlaid on the competitor’s name or logo, could create issues under existing trademark law.
  • Discrimination.  AR systems make it easy to supplement real-world information by providing additional detail on a person, place or thing in real time. This supplemental data could intentionally or inadvertently be used to make real-time discriminatory decisions, e.g., using facial or name recognition to provide supplemental data about a person’s arrest history, status in a protected class, or other restricted information which is used in a hiring or rental decision. An AR system that may be used in a situation where data must be excluded from the decision-making process must include the ability to automatically exclude groups of data from the user’s augmented world.

The world of online digital marketing and advertising will expand to include digital marketing and advertising in the augmented world. Imagine a world where anything — and I mean anything — can be turned into a billboard or advertisement in real time. Contextual ads in the augmented world can be superimposed anytime a user sees a keyword. For example, if you see a house, imagine if an ad for a brand of paint appears because the paint manufacturer has bought contextual augmented ads to appear in an AR system whenever the user sees a house through the augmented world.

Existing laws will need to be applied to digital marketing and advertising in the augmented world. For example, when a marketing disclaimer appears in the online world, the user’s attention is on the ad. Will the disclaimer have the same effect in an augmented environment, or will it need to be presented in a way that calls attention to it? Could this have the unintended consequence of shifting the user’s attention away from something they are doing, such as walking, thereby increasing the risk of harm? There are also some interesting theoretical advertising applications of AR in a negative context. For example, “negative advertising” could be used to blur product or brand names and/or to make others more prominent in the augmented world.

  • The Right of Publicity.  The right of publicity — a person’s right to control the commercial use of his or her name, image, and likeness — is also likely to be challenged by digital marketing in the augmented world. Instead of actively using a person’s likeness to promote a product or service, a product or service could appear as augmented data next to a person’s name or likeness, improperly (and perhaps inadvertently) implying an endorsement or association. State laws governing the right of publicity will be reinterpreted when applied to the augmented world.
  • Negligence and Torts. AR has the capacity to further exacerbate the problem of “distracted everything,” i.e., paying more attention to your AR device than your surroundings, as some users of Pokémon Go have discovered. Since AR augments the real world in real time, the additional information may distract a user, and erroneous augmented data could lead a user to harm him/herself or others. Many have heard the stories of a person dutifully following their GPS navigation system into a lake. Imagine an AR system identifying a mushroom as safe to eat when in fact it is highly poisonous. Just as distracted driving and distracted texting can be used as evidence of negligence, a distracted AR user can find him/herself facing a negligence claim for causing third-party harm. Similarly, many tort claims that can arise through actions in the real world or online world, such as libel and slander, can occur in the augmented world. Additionally, if an AR system augments the real world in a way that makes someone think they are in danger, inflicts emotional distress, or causes something to become dangerous, the AR user, or system provider, could be legally responsible.
  • Contract liability. We will undoubtedly see providers of AR systems and platforms sued for damages suffered by their users. AR providers have and will shift liability to the user through contract terms. For example, Niantic, the company behind Pokémon Go, states in their Terms of Use that you must “be aware of your surroundings and play safely. You agree that your use of the App and play of the game is at your own risk, and it is your responsibility to maintain such health, liability, hazard, personal injury, medical, life, and other insurance policies as you deem reasonably necessary for any injuries that you may incur while using the Services.” AR providers’ success at shifting liability will likely fall primarily to tried-and-tested principles such as whether an enforceable contract exists.

None of the above challenges is likely to prove insurmountable or to slow the significant growth of AR. What will be interesting to watch is how lawmakers choose to respond to AR, and how early hiccups are seized on by politicians and reported in the press. Consider automobile autopilot technology. The recent crash of a Tesla in Autopilot mode is providing bad press for Tesla, and fodder for those who believe the technology is dangerous and must be curtailed. Every new technology brings both benefits and potential risks. If the benefits outweigh the risks on the whole, the public interest is not served when the legal, regulatory and privacy pendulum swings too far in response. Creating a legal, regulatory and privacy landscape that fosters the growth of AR, while appropriately addressing the risks AR creates and exacerbates, is critical.

The Rewards and Risks of Open-Source Software

Open-source software (or “OSS”) is computer software distributed under a license whose source code is available for modification or enhancement by anyone.  This is different from free (or public domain) software, which is not distributed under a license.  Free and open-source software are alternatives to “closed source,” or proprietary, software.

Companies use OSS for a variety of reasons.  In some cases, it’s used as part of a project deliverable, such as a DLL or a JavaScript library. In others, it’s used as a tool as part of the development process or production environment, such as a compiler, development environment, web server software, database software, etc.

The Rewards of OSS.  There are significant benefits to using open-source software in your business.  Here are some of the most significant:

  • Enhanced Security. Anyone can modify and enhance OSS, resulting in a larger developer base than proprietary software. This means that security holes are often found, and patched, more quickly than in proprietary software.
  • Lower Cost. There is no license fee for open-source software.  (That does not mean it’s totally free – OSS is subject to license requirements.)
  • Dev Cycle Streamlining. Using OSS in a project cuts down development time by allowing developers to avoid “reinventing the wheel” on needed code if an OSS version of that code is available.
  • Perpetual Use. As long as you abide by the terms of the open-source software license, you can generally use it forever.  There are no annual renewal fees or license renegotiations for mission-critical software.
  • Adaptability/Customizability. Users of closed source software must find the software package that most closely aligns with the business’ needs, and adapt to it.  There’s no need to settle with OSS – since it can be customized and adapted, you can start with the existing code and modify it to fit your company’s exact needs.
  • Better Quality. Since there is a larger developer base, new and enhanced features and functionality are often rolled out, and usability bugs fixed, at a more rapid rate than in proprietary software.
  • Support Community. Many common closed source software packages require the purchase of a maintenance subscription along with a license.  Well-used OSS has a robust developer community that can help with questions. There are also companies that have sprung up around common OSS packages to provide support solutions.

Know Your OSS Licenses. The author of software code owns the copyright to that code.  If the author releases software into the public domain, he/she waives his/her copyrights in that code, making it free for anyone to use.  However, if someone creates a derivative work of public domain code, the new portions of code are protected by copyright, and are not in the public domain.  In other words, by adding his/her own modifications, someone can take public domain software and make it proprietary.

That’s the primary difference between free software and OSS.  In most cases, when an author makes his/her software code open-source, that author is allowing use of his/her copyrighted code under an open-source software license, but is not relinquishing his/her copyright.  Under the OSS license, the author grants others a right to use the author’s copyrighted code to modify, copy and redistribute it, but only if they follow the terms of the open-source software license.  There are hundreds (or more) open-source licenses out there.  However, there are relatively few that are considered generally accepted with a strong developer community.  The Open Source Initiative (OSI) categorizes the most common OSS licenses on its website.  The most common are the GNU General Public License (GPL), the GNU Lesser General Public License (LGPL), the “New” BSD License, the “Simplified” BSD License, the MIT License, and the Apache License v2.  However, not all OSS licenses are the same.  There are many websites that can help you analyze the differences between OSS licenses, including tl;dr Legal and Wikipedia’s Comparison of Open-Source Software Licenses.

Many OSS licenses are “permissive” licenses, meaning that a work governed by that license (e.g., a BSD License) may be modified and redistributed under a different license as long as you comply with the requirements of the permissive license (e.g., attribution). Other OSS licenses are “copyleft” licenses.  A copyleft license is one under which a work may only be used, modified or distributed if the same license rights apply to anything derived from it.  The copyleft license will “infect” modifications and derivations of the source (some think of it as a “viral” license).  It’s a play on words as copyright and copyleft are converse terms: copyright gives exclusive rights to a work to one person, and copyleft gives non-exclusive rights to a work to everyone.  There are two types of copyleft licenses:

  • “Strong” copyleft licenses (e.g., the GNU GPL) state that if you modify code governed by a copyleft license, you must distribute the software as a whole under that copyleft license, or not distribute it at all.
  • “Weak” copyleft licenses (e.g., the GNU Lesser GPL) state that if you modify code governed by a copyleft license, portions of the software containing modifications (e.g., a software module or library) must be distributed under that copyleft license, but other portions may be distributed under a different license type.

The Risks of OSS.  Due to its benefits and rewards, most companies use open-source software, whether the management and Legal teams know it or not. Quite often, developers rely on OSS to deliver software development projects on time and within budget. The bigger question is whether developers are using OSS in a way that exposes the company to risk.  Unless your company has a well-defined OSS policy that has been well-communicated to the developers at your company, you’re “flying blind” when it comes to OSS usage. Here are some of the risks and considerations for companies using OSS:

  1. OSS makes more sense for “utility layer” software needs than for “competitive/proprietary layer” software needs. Think of the software used in business as two layers. The first is software at the “utility layer” – software packages that go to the general operation of the business and its IT infrastructure, and do not give the business a competitive advantage based on the code itself. Examples are web server software, database software, and standard APIs.  Above that is the software at the “competitive/proprietary layer” – software that gives your company a competitive advantage over your competition, or provides significant offensive or defensive IP protection. Examples are custom functionality on your website and specialized software applications. OSS makes a lot of sense at the utility layer – you don’t need something better than everyone else, just something that works and works well. Introducing OSS at the competitive/proprietary layer can be problematic as you may want to ensure the entire solution is proprietary.
  2. You can’t get IP warranties or indemnification for OSS. When you negotiate a software license agreement for proprietary closed-source code, in most cases the software licensor will provide warranties and/or indemnification against claims of IP infringement. With OSS, there is no IP warranty or indemnity. If someone introduced proprietary code into the OSS earlier in its life, you bear the risk of infringement if you use it.
  3. Some OSS license types can snuff out IP rights to your own developed code (and even expose it). The type of OSS license governing OSS used in your business, and how you use OSS software, can directly affect your IP rights to your own developed code. If you use OSS governed by a strong copyleft license to enhance your own codebase, your entire codebase could potentially become governed by a copyleft license.  This means that a savvy competitor or customer that suspected or learned of OSS in your code could send you a letter demanding a copy of your source code under the copyleft license, or just decompile it and modify it, putting you on the defensive as to why your software license should override the copyleft open source license.
  4. If you don’t follow the license terms, you can be sued. Open source software is licensed. That means there are license terms you must follow.  If you don’t, you may face litigation from competitors or others.  There has been a recent upswing in litigation for breach of the terms of open source licenses, and that trend is expected to continue.  For example, VMware was sued in March 2015 alleging that it violated the GNU GPL v2 license by not releasing the source code for VMware software that used OSS subject to the copyleft license.

Implement a Company-Appropriate OSS Policy.  To mitigate the risks associated with OSS, all companies should implement an open-source software policy governing the when, why and how of using open-source software in the company’s codebase.  Here are some important considerations:

  • Ensure there is alignment on the goal of the OSS policy at the outset. Different stakeholders may have different views on the goal of an OSS policy.  To Legal, it may be to protect the company’s intellectual property; to IT, it may be to leverage OSS to reduce costs; to developers, it could be to ensure they are free to keep using the OSS they need to meet goals and deadlines.  One thing stakeholders cannot do is go in with the mindset that OSS is bad for business or that they can keep it out of their code.  OSS in business is a reality whether you ignore it or accept it.  The policy’s goal should be to ensure OSS is being used effectively to advance the company’s business objectives while protecting its IP and living within its risk profile.
  • An OSS policy must balance the practical needs of developers with risk management. OSS is the domain of the developer, not the Legal department.  While the risks are something lawyers consider, a policy written and imposed by non-developers on your developer corps will likely face an uphill battle, or worse, be viewed as “out of sync with the goals of the business” and simply ignored.  The attorneys’ role in creating an OSS policy is to provide guidance on the risks OSS poses to the company as a whole, to share “best practices” for OSS policies, and to draft the actual policy from the outline in plain English (remember, developers, not other attorneys, are the audience).  IT management’s role is to provide guidance on the outside contours of the policy.  Developers need to be directly involved in developing the policy itself as they are the ones using OSS in their daily work.  Developers, Legal and IT should develop the company’s OSS strategy, and its OSS policy, as equal stakeholders.
    • Ensure senior management buys into the policy before it is finalized; it’s important that management understand how OSS is used in the business.
    • Ensure the policy covers key topics, e.g., sourcing OSS; selecting OSS code for use at the utility layer and the competitive/proprietary layer; the OSS approval process; support and maintenance requirements; redistribution; tracking OSS usage; and audits/training.
    • Ensure the policy covers independent contractor developers as well as employees.
  • OSS code review and approval must be a streamlined process. If the review and approval process is complicated, developers will be more likely to just skip it.  Make approval easy.  Provide a “pre-approved list” of OSS – certain combinations of license types, utility-level software categories, and/or specific code packages that only need notification of usage for tracking purposes.
    • Have a simple process for vetting other usage requests, asking the critical questions (e.g., What is the name and version number of the software package for which use is requested? What license type applies? Where was the code sourced from? Will the code be modified? What is the support plan?  Will the code be distributed or used internally?  What is the expected usage lifetime of the code? Are there closed source alternatives? Etc.) so that the legal and business risks can be measured and balanced against the benefits of usage.
    • Determine who will do the first review and escalated review (IT, Legal).
    • Turn requests quickly as delays can impact development timeframes.
  • Keep a database of all used OSS, including where it is used and what license type applies. Knowing what OSS you’re using is critical to avoid introducing code that has a bad reputation or is governed by an OSS license your company is not comfortable with (e.g., a strong copyleft license). IT should maintain a database of OSS used by the company, including the license type for each package.  This database is also helpful when responding to security questionnaires and is often needed in M&A due diligence.
  • Other Considerations. Consider conducting quarterly or semi-annual reviews of OSS usage, e.g., questionnaires to developers.  Consider having developers acknowledge the OSS policy at hire, and on an annual basis.  Consider conducting OSS training if your company’s learning management system (LMS) has an available course module on OSS.  And most importantly, review the OSS policy no less than once a year with all stakeholders to ensure it evolves as the world of OSS, and the company’s own needs, change over time.
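As a concrete illustration of the tracking-database idea above, here is a minimal sketch of how a Python shop might bootstrap an OSS inventory from the metadata of installed packages, using only the standard library’s `importlib.metadata` module. Declared license metadata can be missing or inaccurate, so treat this as a starting point for the database, not the database itself.

```python
# A minimal sketch of an OSS inventory for a Python environment: list each
# installed distribution and the license declared in its packaging metadata.
# Declared metadata is a starting point only -- verify before relying on it.
from importlib import metadata

def oss_inventory() -> list[dict]:
    rows = []
    for dist in metadata.distributions():
        rows.append({
            "name": dist.metadata.get("Name", "unknown"),
            "version": dist.version,
            "license": dist.metadata.get("License", "not declared"),
        })
    return sorted(rows, key=lambda r: str(r["name"]).lower())

# Print a sample of what would feed the company's OSS database:
for row in oss_inventory()[:5]:
    print(row)
```

Dedicated scanning tools go much further (transitive dependencies, license text matching, vulnerability data), but even a simple script like this beats having no inventory at all.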

The Fourth Age of the Internet – the Internet of Things

We are now in what I call the “Fourth Age” of the Internet.  The First Age was the original interconnected network (or “Internet”) of computers using the TCP/IP protocol, with “killer apps” such as e-mail, telnet, FTP, and Gopher mostly used by the US government and educational organizations. The Second Age began with the creation of the HTTP protocol in 1990 and the original static World Wide Web (Web 1.0). The birth of the consumer internet, the advent of e-commerce, and 90’s dot-com boom (and bust in the early 2000’s) occurred during the Second Age. The Third Age began in the 2000’s with the rise of user-generated content, dynamic web pages, and web-based applications (Web 2.0). The Third Age has seen the advent of cloud computing, mobile and embedded commerce, complex e-marketing, viral online content, real-time Internet communication, and Internet and Web access through smartphones and tablets. The Fourth Age is the explosion of Internet-connected devices, and the corresponding explosion of data generated by these devices – the “Internet of Things” through which the Internet further moves from something we use actively to something our devices use actively, and we use passively. The Internet of Things has the potential to dramatically alter how we live and work.

As we move deeper into the Fourth Age, there are three things which need to be considered and addressed by businesses, consumers and others invested in the consumer Internet of Things:

  • The terms consumers associate with the Internet of Things, e.g., “smart devices,” should be defined before “smart device” and “Internet of Things device” become synonymous in the minds of consumers.  As more companies, retailers, manufacturers, and others jump on the “connected world” bandwagon, more and more devices are being labeled as “smart devices.”  We have smart TVs, smart toasters, smart fitness trackers, smart watches, smart luggage tags, and more (computers, smartphones and tablets belong in a separate category). But what does “smart” mean?  To me, a “smart device” is one that can not only collect and process data and take general actions based on the data (e.g., sound an alarm), but also can be configured to take user-configured actions (e.g., send an alert to a specified email address) and/or can share information with another device (e.g., a monitoring unit which connects wirelessly to a base station). But does a “smart device” automatically mean one connected to the Internet of Things?  I would argue that it does not.

Throughout its Ages, the Internet has connected different types of devices using a common protocol, e.g., TCP/IP for computers and servers, HTTP for web-enabled devices. A smart device must do something similar to be connected to the Internet of Things. However, there is no single standard communications protocol or method for IoT devices. If a smart device uses one of the emerging IoT communications protocols such as Zigbee or Z-Wave (“IoT Protocols”), or has an open API that allows other devices and device ecosystems such as SmartThings, Wink or IFTTT to connect to it (“IoT APIs”), it’s an IoT-connected smart device, or “IoT device.” If a device doesn’t use IoT Protocols or support IoT APIs, it may be a smart device, but it’s not an IoT device. For example, a water leak monitor that sounds a loud alarm if it detects water is just a device.  A water leak monitor that sends an alert to a smartphone app via a central hub, but cannot connect to other devices or device ecosystems, is a smart device.  Only if that monitor uses an IoT Protocol or supports IoT APIs to interconnect with other devices or device ecosystems is it an IoT device.
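The device / smart device / IoT device taxonomy above can be expressed as a small decision function. This is just a restatement of the definitions in code; the parameter names are my own.

```python
# Encodes the taxonomy described above: a device is an "IoT device" only if
# it uses an IoT Protocol or supports IoT APIs; otherwise it may still be a
# "smart device" if it collects data and can be user-configured or can share
# information with another device. Parameter names are illustrative.
def classify_device(collects_data: bool, user_configurable_or_shares: bool,
                    uses_iot_protocol: bool, supports_iot_api: bool) -> str:
    if uses_iot_protocol or supports_iot_api:
        return "IoT device"
    if collects_data and user_configurable_or_shares:
        return "smart device"
    return "device"

# The water-leak monitor examples from the text:
print(classify_device(True, False, False, False))  # device (local alarm only)
print(classify_device(True, True, False, False))   # smart device (app alerts)
print(classify_device(True, True, True, False))    # IoT device (e.g., Z-Wave)
```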

“Organic” began as a term describing natural methods of farming.  However, over time it became overused and nearly synonymous with “healthy.”  Players in the consumer IoT space should be careful not to let key IoT terminology suffer the same fate. Defining what makes a smart device part of the Internet of Things will be essential as smart devices continue to proliferate.

  • Smart devices and IoT devices exacerbate network and device security issues. Consumers embracing the Internet of Things and connected homes may not realize that adding smart devices and IoT devices to a home network can create new security issues and headaches. For example, a wearable device with a Bluetooth security vulnerability could be infected with malware while you’re using it, and infect your home network once you return and sync it with your home computer or device.  While there are proposals for a common set of security and privacy controls for IoT devices such as the IoT Trust Framework, nothing has been adopted by the industry as of yet.

Think of your home network, and your connected devices, like landscaping.  You can install a little or a lot, all at once or over time.  Often, you have a professional do it to ensure it is done right. Once it’s installed, you can’t just forget about it — you have to care for it, through watering, trimming, etc. Occasionally, you may need to apply treatments to avoid diseases. If you don’t care for your landscaping, it will get overgrown; weeds, invasive plants (some poisonous) and diseases may find their way in; and you’ll ultimately have a bigger, harder, more expensive mess to clean up later on.

You need to tend your home network the same way, except that if you don’t, the consequences can be much worse than overgrown shrubbery. Many consumers are less comfortable tinkering with computers than they are tinkering with landscaping.  Router and smart device manufacturers periodically update the embedded software (or “firmware”) that runs those devices to fix bugs and to address security vulnerabilities. Software and app developers similarly release periodic updates. Consumers need to monitor for firmware and software updates regularly, and apply them promptly once available.  If a device manufacturer goes out of business or stops supporting a device, consider replacing it as it will no longer receive security updates. Routers need to be properly configured, with usernames and strong passwords set, encryption enabled, network names (SSID) configured, etc.  Consumers with a connected home setup should consider a router supporting a modern, high-throughput Wi-Fi standard such as 802.11ac or 802.11n.

The third party managed IT services industry has existed since the Second Age. As connected homes proliferate resulting in complex connected home infrastructure, there is an opportunity for “managed home IT” to become a viable business model.  I expect companies currently offering consumer-focused computer repair and home networking services will look hard at adding connected home management services (installation, monitoring, penetration testing, etc.) as a new subscription-based service.

  • Smart device companies need to think of what they can/can’t, and should/shouldn’t, do with data generated from their devices.  IoT devices and smart devices, and connected home technologies and gateways, generate a lot of data.  Smart/IoT device manufacturers and connected home providers need to think about how to store, process and dispose of this data.  Prior to the Internet of Things, behavioral data was gathered through the websites you viewed, the searches you ran, the links you clicked – “online behavioral data.”  The IoT is a game-changer. Now, what users do in the real world with their connected devices can translate to a new class of behavioral data – “device behavioral data.” Smart/IoT device manufacturers, and connected home providers, will need to understand what legal boundaries govern their use of device behavioral data, and how existing laws (e.g., COPPA) apply to the collection and use of data through new technologies. Additionally, companies must look at what industry best practices, industry guidelines and rules, consumer expectations and sentiment, and other non-legal contours shape what companies should and should not do with the data, even if the use is legal.  Companies must consider how long to keep data, and how to ensure it’s purged out of their systems once the retention period ends.

IoT and smart device companies, and connected home service and technology providers, should build privacy and data management compliance into the design of their devices and their systems by adopting a “security by design” and “privacy by design” mindset. Consumers expect that personal data about them will be kept secure and not misused. Companies must ensure their own privacy policies clearly say what they do with device behavioral data, and must not do anything outside the boundaries of those policies (“say what you do, do what you say”). Consider contextual disclosures to make sure the consumer clearly understands what you do with device behavioral data.  Each new Age of the Internet has seen the FTC, state Attorneys General, and other consumer regulatory bodies look at how companies are using consumer data, and make examples of those they believe are misusing it. The Fourth Age will be no different. Companies seeking to monetize device behavioral data must make sure that they have a focus on data compliance.

Put Electronic Signatures to Work for You

Companies and in-house law departments are increasingly adopting new technology-driven processes to create efficiencies in their day-to-day operations.  One such process is the use of electronic signatures, or “e-signatures.”  E-signatures provide many benefits to companies if implemented correctly, but there are some important caveats to keep in mind.  Understanding what they are and how to use (and not use) them is critical.

What is an electronic signature? The federal Electronic Signatures in Global and National Commerce (E-SIGN) Act defines an electronic signature as “an electronic sound, symbol or process which is attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record.”  In other words, an electronic signature is an electronic identifier of a person who places it on a document or record and intentionally consents to, accepts, or approves that document or record in a way that the identifier can be attributed to that person. An easy way to remember this is as an electronic identifier that’s affixed, accepted and attributable.  The good news is that E-SIGN’s definition is technology-agnostic, meaning it will apply to new developments in e-signature technology.

Examples of e-signatures include a person’s signature captured on a tablet on a contract followed by pressing a “Purchase” button; pressing a button (e.g., “1”) on your phone on a recorded line to accept a new 2-year cable subscription; checking a box to indicate that you have read and accept a software EULA; or a Google Wallet or Apple Pay transaction automatically done by computers (“electronic agents”) which you initiated and a merchant accepted electronically.
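To make the “affixed, accepted and attributable” framing concrete, here is a hypothetical sketch of how an e-signature system might record a signing event: the document is hashed (affixed), the signer’s intent is captured (accepted), and a keyed HMAC tied to a per-signer secret links the record to the signer (attributable). All field names and the key-management approach are illustrative assumptions, not a description of any real provider’s system.

```python
# Hypothetical e-signature audit record. The document hash shows what was
# signed ("affixed"), the intent flag captures acceptance ("accepted"), and
# an HMAC keyed with a per-signer secret makes the record attributable to
# the signer ("attributable"). Field names and key management are made up.
import hashlib
import hmac
import json

def sign_record(document: bytes, signer_id: str, signer_secret: bytes,
                timestamp: str) -> dict:
    doc_hash = hashlib.sha256(document).hexdigest()
    payload = json.dumps(
        {"doc_sha256": doc_hash, "signer": signer_id,
         "timestamp": timestamp, "intent": "agreed"},
        sort_keys=True)
    tag = hmac.new(signer_secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "attribution_tag": tag}

def verify_record(record: dict, signer_secret: bytes) -> bool:
    expected = hmac.new(signer_secret, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["attribution_tag"])

rec = sign_record(b"Contract text", "scott.signer@example.com",
                  b"per-signer secret", "2016-12-01T12:00:00Z")
print(verify_record(rec, b"per-signer secret"))  # True
print(verify_record(rec, b"wrong secret"))       # False
```

The point of the sketch is simply that attribution requires more than a typed name on a page: something only the signer controls has to tie the identifier to the person.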

Is it the same as a digital signature? No, although many people use the terms interchangeably.  A digital signature is a more secure form of electronic signature that uses encryption or a biometric identifier to ensure the signature is authentic and can be linked back to the signer.  Tampering can be detected thanks to the encryption or biometric identifier. (Examples include using a private encryption key to sign a document, or using a thumbprint to embed a digital code in a document.) Digital signatures are commonly found in financial transactions and where being able to detect a forged signature is critical.

Are electronic signatures legal?  Yes.  In 2000, Congress enacted the E-SIGN Act, which states that electronic signatures on contracts and records related to commercial transactions are just as effective as a physical (or “wet”) signature. However, if a law or regulation requires a written contract or record, an electronic signature isn’t sufficient if the contract or record can’t be retained and accurately reproduced by all parties. Nearly all states have enacted their own e-signature law based on the Uniform Electronic Transactions Act (UETA). (MD and VA have enacted a different model law called the Uniform Computer Information Transactions Act (UCITA) that covers computer information.) There are specialized digital signature laws applicable to some industries, such as the federal e-signature regulation specifically related to the FDA. Electronic signatures are generally valid in other countries.

It’s important to note that there are some types of contracts and records that cannot be electronically signed, such as wills, trusts, and marriage certificates/divorce decrees.

Can e-signed documents be notarized?  Yes, but it’s still fairly uncommon. E-SIGN permits electronic notarization.  However, most e-signature providers are still adding functionality to support electronic notarization of an e-signature. You’ll need to find a notary authorized to do e-notarizations (in Minnesota, for example, becoming an e-notary requires an additional authorization on top of your standard notary license). You still have to electronically sign an agreement in the presence of an e-notary (except in Virginia which permits remote notarization, e.g., via video conference), which basically defeats the purpose.  As e-signatures continue to gain traction, e-notarization will likely start to catch up.

If I want to use electronic signatures with my contracts, is there anything I should add to them?  Consider adding a disclaimer such as this to your contract templates: “The Parties agree that electronic signatures are intended to bind each Party with the same force and effect as an original handwritten signature, and a copy containing an electronic signature is considered an original.” UETA requires that the parties have agreed to conduct business electronically. Although it can be inferred from the conduct of the parties, including an affirmative statement can be helpful (and demonstrates to your clients and vendors that you are embracing 21st century contracting methods).

Are there e-signature risks I should watch out for?  The biggest risk is that an e-signature you were relying on turns out to be unenforceable. Just because E-SIGN says that an e-signature has the same legal effect as a physical signature doesn’t mean that it’s automatically enforceable. Parties seeking to avoid liability under a contract may attack the validity of the contract in the first place by claiming it was never validly signed.  The identifier on a contract (e.g., “/s/ Scott Signer”) isn’t enough by itself to establish a valid electronic signature — you have to be able to attribute that identifier to the signer to prove that he or she was the one who placed it there.  This gets even more complicated when trying to use e-signatures on a small device, such as a smartphone.

Think of e-signatures as falling into one of two buckets based on whether the contract or record being electronically signed is considered “low priority” (the enforceability is not likely to be challenged, such as on a low-value, one-time transaction), or “high priority” (enforceability of the agreement is very important given the strategic or monetary value of the transaction).  For low priority contracts and records unlikely to be challenged, being able to conclusively attribute an e-signature to a person may be less critical, so an identifier on a contract or record (“/s/ Scott Signer”) without a strong authentication mechanism may be “good enough.”  For high priority contracts and records, being able to conclusively establish affixation, acceptance and attribution is critical, so using a strong e-signature process (such as an e-signature provider) that validates the identity of each signatory, and keeps copies of the signed agreement available to each signatory, can help ensure enforceability.

The reverse is also true — be careful that you don’t unintentionally create an electronic signature (e.g., with an email signature).  You don’t want someone trying to argue that your email saying “yes, that sounds good” to a business offer, where your email had your signature as General Counsel or Chief Operating Officer, constituted a binding agreement.  (I use a disclaimer in my long-form work email signature that emails cannot be used as an electronic signature.)


I would strongly encourage all companies interested in using electronic signatures on contracts to consider an electronic signature provider such as EchoSign or DocuSign.  E-signature providers have well-developed systems that make it easy for companies to execute contracts, forms, and other records electronically through a legally defensible process, can support “batch sending” of documents for signature via a mail merge-like process, and can be configured to automatically send fully executed copies to all parties (as well as to your Legal department or contract manager).

Don’t get Hooked by Phishing or Spear Phishing

Cyber attacks such as the Anthem breach, the Home Depot breach, and the Target breach are becoming almost commonplace.  Major cyber attacks compromising information about millions of people often start not with a bang, but a whisper – a “phishing” or “spear phishing” email through which an attacker tries to acquire login credentials that can be used to launch a sophisticated and crippling attack. Over 90% of cyber attacks take the form of, or start with, a spear phishing attack, and broader phishing attacks are also very common. These attacks happen in the office and at home, can come at any time, and can target any person or employee.

What is “Phishing”? In a “phishing” attack, an attacker uses an email sent to a broad group of recipients (and not targeted to a specific group) to impersonate a company or business in an effort to get you to reveal personal information or login IDs/passwords, or to install malware or exploit a security hole on your computer.  It generally uses an official-looking email and website to gather information, and often contains the logo(s) of the company it is impersonating.

What is “Spear Phishing”? In a “spear phishing” attack, an attacker uses an email tailored for a specific group of recipients (e.g., a group of employees at a specific business), often impersonating an individual such as someone from your own company or business, in an effort to get you to reveal personal information or login IDs/passwords, to steal money or data, or to install malware or exploit a security hole on your computer.

How do I spot a phishing or spear phishing email? Look for one or more of these key indicators that an email in your inbox is actually a phishing or spear phishing attack.

  • The email has spelling or grammatical errors. A phishing or spear phishing email often contains spelling or grammatical errors, and does not appear to be written by a business professional.
  • You do not recognize the sender’s email address. If you get an email asking you to click on a link or open an attachment, look carefully at the email address of the sender.  Be especially alert for email addresses that are similar to, but not the same as, your company’s email domain (e.g., a lookalike domain with a subtle misspelling or an extra character).
  • The email contains links that don’t go where they say they do. Before you click on a link in an email you don’t recognize, “hover” your mouse cursor over the link. A pop-up will appear showing you where the link will go.  If they don’t match, it’s probably a phishing or spear phishing attempt.  In this example, this innocuous-looking link actually goes to a malicious website:

[Image: sample email link whose visible text does not match its actual destination]

  • The email asks you to open an attachment you don’t recognize. Many spear phishing emails ask you to open an attachment or click on a link.  If an email you don’t recognize asks you to open an attachment you weren’t expecting or that doesn’t look familiar, or to click on a link you don’t recognize, don’t click on it or open it, and check with your IT or Security department if you want to know for sure.
  • The email seems to be a security-related email, or asks you to take immediate action. Watch out for emails that state that your account will be suspended; that ask you to reset, validate or verify your password, account information or personal information; or that otherwise ask you to take immediate action to prevent something from happening.
  • The email relates to a current news event. Many phishing emails use a current news event, such as a natural disaster or security breach, to get you to provide information, click a link or open an attachment.
  • The email contains information from your social media accounts or other public information. Spear phishing attackers will often look at your public social media accounts (e.g., your Facebook feed, LinkedIn profile, tweets, etc.) and other public sources (e.g., Google searches) and use information about you or your friends to make a spear phishing email seem authentic.  If an email contains personal information about you other than your name and email address, take a close look to ensure it’s not a spear phishing attempt.
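The “hover over the link” advice boils down to comparing a link’s visible text with its actual destination. As a rough illustration, the Python sketch below flags anchors whose display text looks like a URL but points at a different host; real mail filters are far more sophisticated than this heuristic.

```python
# A rough check for the "link text doesn't match the link target" pattern:
# if the visible text of an anchor looks like a URL, compare its host with
# the host in the href attribute. Heuristic only.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None       # href of the anchor currently open, if any
        self.text = ""         # visible text accumulated for that anchor
        self.mismatches = []   # (shown_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            shown = self.text.strip()
            if self.href and shown.startswith("http"):
                if urlparse(shown).netloc != urlparse(self.href).netloc:
                    self.mismatches.append((shown, self.href))
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">http://www.mybank.com</a>')
print(auditor.mismatches)
# [('http://www.mybank.com', 'http://evil.example.net/login')]
```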

If you think an email you received is a phishing or spear phishing attempt, (1) do NOT click or open any links or attachments in the email, (2) if you are at work, immediately contact your Security or IT department to report it, especially if you clicked on an attachment or link or otherwise took action before you realized this (failing to report it will be much worse, so don’t be embarrassed); and (3) delete the email immediately.

Document protection in Word not so secure

Microsoft Word has this nifty feature called “Protect Document.” Basically, you can put in a password which prevents others who access the document from accepting or rejecting changes (but still allows them to make edits which show in redline). You can even set protection to allow only fillable fields to be edited, or to prevent someone from making any edits to a document entirely. (This protection is different from the password protection on opening a document.)

Many attorneys (and others) will lock a document, such as a nondisclosure agreement or a draft, with “track changes” locked on. The idea is that by locking it, the drafter doesn’t have to go through the trouble of generating their own redline of the changes sent back by the other side, a comparison that is otherwise a good idea to ensure that no changes were “inadvertently” made but not marked in redline.

Microsoft’s dirty little secret, all the way back to the late 90’s when they released Word 97, is that the security mechanism for Word’s document protection is, well, bad. Really bad. In Word 2003 or earlier, you can get around it in 15 seconds by using Microsoft Script Editor to edit the script for the document and remove the password entry, or, even simpler, by saving a Word document in RTF (Rich Text Format), then closing it, reopening the RTF version, and saving it back into Word document format. Once you open the new Word version and click “Unprotect Document,” the password is gone and the document automatically unlocks. (You lose little if any formatting by converting it into RTF and back again.) You can do the same thing in Word 2007 by saving it from Word 97-2003 (.doc) format into Word 2007 (.docx) format, and then back again to Word 97-2003 format.
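For the curious, the protection setting itself is easy to see: a modern .docx file is just a zip archive, and the “Protect Document” flag lives in word/settings.xml as a w:documentProtection element. The Python sketch below simply checks for that element’s presence, using a hand-built stand-in archive for demonstration; it does not examine the weak password hash discussed above.

```python
# A .docx file is a zip archive; the "Protect Document" setting lives in
# word/settings.xml as a w:documentProtection element. This sketch checks
# whether that element is present in a document's settings.
import io
import zipfile

def has_document_protection(docx_bytes: bytes) -> bool:
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        settings = zf.read("word/settings.xml").decode("utf-8")
    return "w:documentProtection" in settings

# Demo with a minimal, hand-built stand-in for a protected document:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/settings.xml",
                '<w:settings><w:documentProtection w:edit="trackedChanges" '
                'w:enforcement="1"/></w:settings>')
print(has_document_protection(buf.getvalue()))  # True
```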

I use the Protect Document functionality to lock “track changes” on in almost every doc I send out. However, I make sure to check that it comes back not only locked, but locked with the password I sent it out with. If the document no longer unlocks with your password, it’s a sure bet that someone unlocked and then re-locked it.

So, if you rely on document protection in your Word documents, be warned…just because you always show your redlines when you prepare a revised version of an agreement, doesn’t mean everyone else will.