The Coming Age of Artificial Intelligence: What Lawyers Should Be Thinking About


by José P. Sierra

The Profession

During the spring and fall legal conference seasons, emails addressing “data breaches,” “improving cyber defenses,” and “what you (or your general counsel/board) need to know about cybersecurity/insurance” hit our inboxes on an almost weekly basis.  Although it took some time, everybody now wants a slice of the hot “cyber” pie, and law firms have been quick to jump on the cybersecurity bandwagon and form cyber practices.  What hasn’t gotten the same rapt attention of conference organizers, tech vendors, and the legal community is the coming age of artificial intelligence, or “AI.”  There are at least two reasons for this.  First, although large-scale deployment of self-driving cars is just over the horizon, most of the bigger, life-changing AI products are still years away.  Second, most laypeople (including lawyers) do not understand what AI is or appreciate the enormous impact that AI technology will have on the economy and society.  As a result, those in the “vendor” community (which includes lawyers) have yet to determine how their clients and their clients’ industries will be affected, and how they themselves can profit from the AI revolution.

AI and What It Will Mean for Everyone

AI may be defined as a machine or supercomputer that can simulate human intelligence by acquiring and adding new content to its memory, learning from and correcting its prior mistakes, and even enhancing its own architecture, so that it can continue to add content and learn.  A few years ago, AI development and its celebrated successes were limited to machines out-playing humans at games (e.g., IBM’s Deep Blue defeating world chess champion Garry Kasparov, and IBM’s Watson winning on the quiz show Jeopardy!).  And while most of us are now familiar with “intelligent” assistants like “Siri,” “Alexa,” and other “smart” devices, the average person has yet to grasp the full import of AI’s capabilities and potential (though the advent of autonomous cars has given us some glimpse of things to come).

Already, AI can do many things that people can do (and in some cases do them better).  In addition to driving cars, AI can detect and stop credit card payment fraud before it happens, trade stocks, file insurance claims, discover new uses for existing drugs, and detect specific types of cancer.  Then there is the work that most people think can be done only by humans, but which AI can do today, including: (1) predicting the outcome of human rights trials in the European Court of Human Rights (with 79% accuracy); (2) doing legal work – numerous law firms have “hired” ROSS, a legal research tool built on IBM’s Watson, to handle a variety of legal tasks, including bankruptcy work, M&A due diligence, contract review, etc.; and, more disturbing than possibly replacing lawyers, (3) engaging in artistic and creative activities, like oil painting, poetry, music composition, and screenplay writing.  In short, almost no realm of human endeavor – manual, intellectual, or artistic – will be unaffected by AI.

What AI Will Mean for Lawyers

Some legal futurists think that AI simply will mean fewer jobs for lawyers, as “law-bots” begin to take over basic tasks.  Other analysts focus on the productivity and cost-savings potential that AI technology will provide.  Two other considerations of the impending AI revolution merit discussion:  revenue opportunities and the role lawyers can and should play in shaping AI’s future.

How AI May Shape the Legal Economy                 

Some of the most profitable practice areas in an AI-driven economy are likely to be:

  • Patent Prosecution and Litigation. This one should be obvious and already has taken off.  Fortunes will be made or broken based on which companies can secure and defend the IP for the best AI technologies.
  • M&A. Promising AI start-ups with good IP will become targets for acquisition by tech-giants and other large corporations that want to dominate the 21st century economy.
  • Antitrust. Imagine that Uber, once it has gone driverless, decides to buy Greyhound and then merges with Maersk or DHL shipping, which then merges with United Airlines.  How markets are (re)defined in an AI-driven economy should keep the antitrust bar very busy.
  • Labor and Employment. AI technology has the potential to disrupt and replace human labor on a large-scale.  To take just one example, in an AI-created driverless world, millions of car, taxi, bus, and truck drivers will find themselves out of work.  What rights will American workers have when AI claims their jobs?  How will unions and professional organizations protect their members against possible long-term unemployment?  Labor and employment lawyers will be at the forefront of labor re-alignment issues.
  • Tax. If AI reduces the human labor pool, as expected, and there is a corresponding loss in tax revenue, the tax code will most likely need to be revised, which will mean new strategies for the tax bar.
  • Cyber-law/compliance. The importance of protecting IP, proprietary, and confidential information, and the legal exposure of not doing so, will be even greater in the higher-stakes world of AI.
  • Criminal Defense. Will AI help law enforcement solve crimes?  Will AI be used to commit crimes?  If so, both prosecutors and the defense bar will be busy prosecuting and representing far more than the typical criminal defendant.

How Lawyers May Help Shape AI

Is there a role for the legal profession in the coming AI age other than helping our clients adapt to a “brave new world”?  In my view, lawyers should play a necessary and leading role.  For if AI has the potential to affect every industry and occupation and to permanently eliminate jobs along the way, society’s leaders cannot afford to leave the decisions about which AI technologies will be developed in which industries (and which ones won’t) to sheer market forces.  Private industry and investors are currently making these decisions based on one overarching criterion — profit — which means everything is on the table.  That approach propelled the industrial and digital revolutions of the last two centuries, but the jobs those revolutions eliminated were eventually replaced by higher-skilled jobs.  For example, teamsters who drove horse-powered wagons were replaced by modern teamsters, i.e., truck drivers.  That won’t be the case following an AI revolution.  The ultimate question of the coming AI century, therefore, is this: what areas of human endeavor do we, as a society, want to keep in human hands, even if those endeavors can be accomplished faster, cheaper, and better by AI machines?  As the profession responsible for protecting society’s interests through law and policy, lawyers cannot afford to take a back seat to the free-for-all development of AI, but instead must lead and help shape the AI century to come.

José P. Sierra is a partner at Holland & Knight. He focuses his practice in the areas of white collar criminal defense, healthcare fraud and abuse, pharmaceutical and healthcare compliance, and business litigation.

Making Sense of the Internet of Things

by Peter M. Lefkowitz

The Profession

We have seen the marketing. According to a recent report by a top consulting firm, the Internet of Things will have an annual economic impact of between $4 trillion and $11 trillion by 2025.  Another firm has announced that there will be 50 billion internet-connected devices globally by 2020.  And companies already have rebranded in grand fashion, declaring the arrival of “Smart Homes,” “Smart Cities,” the “Smart Planet,” the “Industrial Internet” (the contribution of the author’s company), and even the “Internet of Everything.”  We also have seen the reality of Fitbits that record our activity and suggest changes to our exercise and sleep patterns, cars that accept remote software updates, and airplane engines that communicate maintenance issues from the tarmac.  For all of this potential, and even greater claimed potential, our shared late-night admission is that none of us has a well-defined picture of what, precisely, the Internet of Things is or does.

This combination of wide promise and shared confusion is not a trivial matter.  Companies are setting long-term strategy based upon Jetsons-like glimmers of the future; consumer expectations and fears are being set in an environment of rapidly evolving offerings; and — most critically for attorneys providing advice to clients considering investments in this area — legislators and regulators are being asked to set legal and enforcement frameworks without a clear picture of the future product landscape or of whether products still in their infancy will create the anticipated harms.  In order to advise properly in this area, and to avoid regulatory frameworks getting far ahead of actual product development, it is important that lawyers appreciate the scope of Internet of Things technology and the policy implications of internet-connected goods and the data they create and use.

So what is the Internet of Things?  Simply put, the Internet of Things, or IoT, is a set of devices that connect to and send or receive data via the internet, but not necessarily the devices people most often think of as being connected to the internet.  In the consumer world, IoT includes smart meters that measure home energy use, refrigerators that can report back on maintenance needs or whether the owner needs more eggs, and monitors that can record blood sugar results and communicate via Bluetooth to a connected insulin pump.  It also increasingly includes cars that sense other cars in close proximity and record and report on driver speed, location and music listening choices.  And in the industrial space, offerings include an array of sensors and networks that measure and manage the safety and efficiency of oil fields or the direction, speed and service life of wind turbines and airplane engines;  X-ray and CT machines with remote dose monitoring; and badge-based radio-frequency identification systems that analyze whether medical providers are washing their hands in the clinical setting and the resulting impact on infection rates.  This definition generally does not include computers, tablets and other computing devices, although — with smartphone apps advancing to the point of measuring movement and heart rate and reading bar codes to compare prices at local retailers — one could argue that the iPhone and Android phone are the Swiss Army Knives of personal internet-based data collection and use.  In turn, IoT devices generate large sets of sensor-based data, or Big Data, which can be aggregated and analyzed to generate observations concerning the world around us and to improve products and services in healthcare, energy, transportation and consumer industries.

These developments have not been lost on government.  The White House has commissioned two major studies on the potential of Big Data.  The Federal Trade Commission held a full-day workshop to discuss IoT in the home, in transportation and in healthcare, and FTC staff subsequently issued a comprehensive report discussing the benefits and risks of IoT.  Branches of the European Commission are encouraging companies to establish European research and development footholds for internet-based devices.  The European Commission noted the development of internet-based devices and the prospect of a Digital Single Market as inspirations for the anticipated replacement of the European Data Protection Directive.  And European Data Protection Commissioners have boldly asserted their authority, declaring that in light of the risk presented by sensor-based devices, “big data derived from the internet of things . . . should be regarded and treated as personal data” under European data privacy law.  Unfortunately, the Commissioners did not distinguish industrial uses such as wind turbines and oil wells from consumer goods that actively collect personal information.

The FTC staff report noted above summarizes many of the practical and policy challenges presented by emerging IoT technologies, as well as the views of advocates for industry and consumers.  Security is, for many, the most compelling issue.  Internet-connected devices must collect data accurately; those data sets need to be communicated securely to data centers; and devices and back-end computing systems need to be protected against hackers, both to protect the data collected from devices and to protect the networks and devices themselves against hijacking.  Recent stories of rogue engineers using laptops to break into parked cars and to control car brakes remotely, and the dystopian nightmare of a hacked pacemaker on the TV drama Homeland, have not helped mitigate these concerns.  This risk is compounded by the prospect of “big data warehouses” that can store and analyze zettabytes of data in support of technological breakthroughs.

Separately, there is the question of notice and consent for the collection and use of IoT data.  As the FTC staff report notes, it is significantly easier to provide notice about a company’s data practices on a computer screen than on a piece of medical equipment or in a friend’s car that already is collecting and reporting a wide array of data.  This problem is compounded in industrial settings, for example, where passenger weight is analyzed to optimize airplane engine function, or where data sets from and surrounding an MRI machine are communicated to the hospital network to read the scan and to the device manufacturer to facilitate maintenance and product improvement.

Other questions abound.  Will data from an internet-connected device be used for unanticipated purposes, such as devising large consumer medical or credit reports, without the consumer having the ability to know what is being done or how to correct or delete data?  Will providers use data to discriminate improperly, or will better use of data create a more level playing field, facilitating new services at lower prices for a wider swath of consumers?  And are some issues already addressed by current regulatory frameworks like HIPAA or the Fair Credit Reporting Act, related standards like the Payment Card Industry security rules, or extensive regulatory frameworks governing security and data use for government contractors, transportation providers and energy providers?

In turn, certain baselines have emerged.  First, “security by design” and “privacy by design,” the practices of building security and privacy protections into the development lifecycle of goods and networks, are essential.  These requirements become even more compelling in light of the recent decision of the Third Circuit in FTC v. Wyndham Worldwide Corporation, holding, among other things, that the FTC has authority to bring claims alleging “unfairness” based on a company’s purported failure to properly secure networks and data.  Second, companies collecting data from IoT devices must carefully consider how much data they need, whether that data can be de-identified to minimize privacy risk, whether the data will be aggregated with other data, and whether consumer choice is needed before making specific uses of the resulting data set.  And in light of privacy and national security laws around the world — including recent data localization and national security laws in Russia and China — companies will need to evaluate where data is transferred globally and where to locate the associated databases and possibly even global computing, service and engineering staff.

Much of the promise and peril of the Internet of Things and Big Data are in the future.  Google and Dexcom, a maker of blood sugar monitoring devices, recently announced an initiative to make a dime-sized, cloud-based disposable monitor that would communicate the real-time glucose values of diabetes patients directly to parents and medical providers.  No date has been announced, although recent advances in remote monitoring suggest hope.  And the journal Internet of Things Finland recently published an article announcing the proof-of-concept for a “wearable sensor vest with integrated wireless charging that . . . provides information about the location and well-being of children, based on received signal strength indication (RSSI), global positioning system (GPS), accelerometer and temperature sensors.”

Thus far, rule-making has focused on security standards for connected devices and related computing networks.  The FDA has issued detailed security guidance for connected devices and systems, and the Department of Defense has issued security standards for contractors that include an expansive definition of government data subject to coverage under the U.S. Department of Commerce’s NIST 800-171 standard for protecting sensitive federal information.  However, there has not been a push in the U.S. for comprehensive legislation governing internet-connected goods and services.   As the FTC staff report explained: “[t]his industry is in its relatively early stages.  Staff does not believe that the privacy and security risks, though real, need to be addressed through IoT-specific legislation at this time.  Staff agrees with those commentators who stated that there is great potential for innovation in this area, and that legislation aimed specifically at IoT at this stage would be premature.”

The marketplace for internet-connected goods and services surely will continue to expand, and the product and service landscape will advance rapidly.  Whether we will see more than $10 trillion of annual economic impact has yet to be determined.  In this fast-moving environment, companies considering investment in the Internet of Things and Big Data, and the attorneys who advise them, would be well served to monitor the evolving regulatory and legislative landscape.

Peter Lefkowitz is Chief Counsel for Privacy & Data Protection, and Chief Privacy Officer, at General Electric. Mr. Lefkowitz previously served on the Boston Bar Journal’s Board of Editors.