Thanks to Bots, Transparency Emerges as Lawmakers’ Choice for Regulating Algorithmic Harm

Digital conversational agents, like Amazon’s Alexa and Apple’s Siri, and communications agents, like those found on customer service website pages, seem to be everywhere.  The remarkable increase in the use of these and other artificial intelligence-powered “bots” in everyday customer-facing devices like smartphones, websites, desktop speakers, and toys, has been exceeded only by bots in the background that account for over half of the traffic visiting some websites.  Recently reported harms caused by certain bots have caught the attention of state and federal lawmakers.  This post briefly describes those bots and their uses and suggests reasons why new legislative efforts aimed at reducing harms caused by bad bots have so far been limited to arguably one of the least onerous tools in the lawmaker’s toolbox: transparency.

Bots Explained

Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment.  Social media bots, for example, may use machine learning algorithms to classify and “understand” incoming content, which they then post and amplify via a social media account.  Companies like Netflix use bots on social media platforms like Facebook and Twitter to automatically communicate information about their products and services.
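
To make the percept-decide-act loop concrete, here is a minimal sketch in Python.  The percept type, classification rule, and “act” step are hypothetical stand-ins chosen for illustration, not any particular platform’s API.

```python
# Minimal sketch of the percept -> decide -> act loop described above.
# All names here are illustrative; a real bot would call a platform API.

from dataclasses import dataclass

@dataclass
class Percept:
    text: str  # e.g., an incoming social media post


def classify(percept: Percept) -> str:
    """Toy 'understanding' step: label the incoming content."""
    return "product_question" if "?" in percept.text else "general_comment"


def decide(label: str) -> str:
    """Map the classification to an action for the bot to take."""
    return "reply_with_faq_link" if label == "product_question" else "amplify"


def act(action: str, percept: Percept) -> None:
    """Stand-in for posting or replying via a platform API."""
    print(f"Action: {action} -> {percept.text!r}")


for incoming in [Percept("Does the new plan include streaming?"),
                 Percept("Loving the new season!")]:
    act(decide(classify(incoming)), incoming)
```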

While not all bots use machine learning and other artificial intelligence (AI) technologies, many do, such as digital conversational agents, web crawlers, and website content scrapers, the latter being programmed to “understand” content on websites using semantic natural language processing and image classifiers.  Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI technologies.
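
As a rough illustration of the scraping step, the sketch below fetches a page, extracts its text, and applies a toy keyword rule in place of the trained NLP models and image classifiers a production scraper would use.  The URL and keywords are placeholders.

```python
# Minimal sketch of a content-scraping bot: fetch a page, extract its text,
# and apply a (toy) classification step.  Real scrapers replace the keyword
# check below with trained NLP models and image classifiers.

import requests
from bs4 import BeautifulSoup


def scrape_and_classify(url: str) -> str:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ").lower()
    # Toy "understanding": flag pages that mention advertising-related terms.
    keywords = ("advertise", "sponsored")
    return "ad_related" if any(k in text for k in keywords) else "other"


print(scrape_and_classify("https://example.com"))  # placeholder URL
```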

One attribute many bots have in common is that their functionality resides in a black box.  As a result, it can be challenging (if not impossible) for an observer to explain why a bot made a particular decision or took a specific action.  While intuition can be used to infer what happens, secrets inside a black box often remain secret.

Depending on their uses and characteristics, bots are often categorized by type, such as “chatbot,” which generally describes an AI technology that engages with users by replicating natural language conversations, and “helper bot,” which is sometimes used when referring to a bot that performs useful or beneficial tasks.  The term “messenger bot” may refer to a bot that communicates information, while “cyborg” is sometimes used when referring to a person who uses bot technology.

Regardless of their name, complexity, or use of AI, one characteristic common to most bots is their use as agents to accomplish tasks for or on behalf of a real person.  The anonymity this agency affords the person behind the bot makes bots attractive tools for malicious purposes.

Lawmakers React to Bad Bots

While the spread of beneficial bots has been impressive, bots with questionable purposes have also proliferated, such as those behind the disinformation campaigns used during the 2016 presidential election.  Disinformation bots, which operate social media accounts on behalf of a real person or organization, can post content to public-facing accounts.  Used extensively in marketing, these bots can receive content, either automatically or from a principal behind the scenes, related to such things as brands, campaigns, politicians, and trending topics.  When organizations create multiple accounts and use bots across those accounts to amplify each account’s content, the content can appear viral and attract attention, which may be problematic if the content is false, misleading, or biased.

The success of social media bots in spreading disinformation is evident in the degree to which they have proliferated.  Twitter recently produced data showing thousands of bot-run Twitter accounts (“Twitter bots”) were created before and during the 2016 US presidential campaign by foreign actors to amplify and spread disinformation about the campaign, candidates, and related hot-button campaign issues.  Users who received content from one of these bots would have had no apparent reason to know that it came from a foreign actor.

Thus, it’s easy to understand why lawmakers and stakeholders would want to target social media bots and those who use them.  A recent Pew Research Center poll found that most Americans know about social media bots, and that those who have heard of them overwhelmingly (80%) believe such bots are used for malicious purposes.  With technologies for detecting fake content at its source, or the bias of a news source, reaching only about 65-70 percent accuracy, politicians have plenty of cover to go after bots and their owners.

Why Use Transparency to Address Bot Harms?

The range of options for regulating disinformation bots to prevent or reduce harm could include any number of traditional legislative approaches.  These include imposing on individuals and organizations specific criminal and civil liability standards related to the performance and uses of their technologies; establishing requirements for regular recordkeeping and reporting to authorities (which could lead to public summaries); setting thresholds for knowledge, awareness, or intent (or imposing strict liability) applied to regulated activities; providing private rights of action to sue for harms caused by a regulated person’s actions, inactions, or omissions; imposing monetary remedies and incarceration for violations; and other often-seen command-and-control style governance approaches.  Transparency, another tool lawmakers could deploy, would instead require certain regulated persons and entities to provide information, publicly or privately, to an organization’s users or customers through mechanisms such as notice, disclosure, and/or disclaimer.

Transparency is a long-used principle of democratic institutions that try to balance open and accountable government action and the notion of free enterprise with the public’s right to be informed.  Examples of transparency may be found in the form of information labels on consumer products and services under consumer laws, disclosure of product endorsement interests under FTC rules, notices and disclosures in financial and real estate transactions under various related laws, employee benefits disclosures under labor and tax laws, public review disclosures in connection with laws related to government decision-making, property ownership public records disclosures under various tax and land ownership/use laws, various healthcare disclosures under state and federal health care laws, and laws covering many other areas of public life.  Of particular relevance to the disinformation problem noted above, and to why transparency seems well-suited to social media bots, are current federal campaign finance laws that require those behind political ads to reveal themselves.  See 52 USC §30120 (Federal Campaign Finance Law; publication and distribution of statements and solicitations; disclaimer requirements).

A recent example of a transparency rule affecting certain bot use cases is California’s bot law (SB-1001; signed by Gov. Brown on September 28, 2018).  The law, which goes into effect in July 2019, will, with certain exceptions, make it unlawful for any person (including corporations and government agencies) to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about the bot’s artificial identity, for the purpose of knowingly deceiving that person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.  A person using a bot will not be liable, however, if the person discloses, using clear, conspicuous, and reasonably designed notice, to the persons with whom the bot communicates or interacts that it is a bot.  Similar federal legislation may follow, especially if legislation proposed this summer by Sen. Dianne Feinstein (D-CA) and legislative proposals by Sen. Warner and others gain traction in Congress.
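
For operators wondering what compliance might look like in practice, the sketch below shows one possible way a commercial chatbot could surface its artificial identity before any other messaging.  The wording and function names are hypothetical illustrations only, not the statute’s required language and not legal advice; SB-1001 requires only that the disclosure be clear, conspicuous, and reasonably designed to inform.

```python
# Hypothetical sketch of an SB-1001-style disclosure in a commercial chatbot.
# The disclosure text and structure are illustrative, not statutory language.

BOT_DISCLOSURE = "Notice: You are chatting with an automated bot, not a human agent."


def start_conversation(send) -> None:
    """Send the bot disclosure before any other bot-generated message."""
    send(BOT_DISCLOSURE)
    send("Hi! How can I help you with your order today?")


# In production, 'send' would post messages to the chat channel; here it prints.
start_conversation(print)
```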

So why would lawmakers choose transparency to regulate malicious bot technology use cases rather than an approach that is arguably more onerous?  One possibility is that transparency is seen as minimally controversial, and therefore less likely to cause push-back from special interests that might respond negatively to lawmakers who advocate tougher measures.  Or perhaps lawmakers are choosing a minimalist approach simply to demonstrate that they are taking action (versus the optics associated with doing nothing).  Maybe transparency is a shot across the bow warning industry leaders to police themselves and those who use their platforms by finding technological solutions to prevent the harms caused by bots, or else face a harsher regulatory spotlight.  Whatever the reason(s), even something as relatively easy to implement as transparency is not immune from controversy.

Transparency Concerns

The arguments against the use of transparency applied to bots include loss of privacy, unfairness, unnecessary disclosure, and constitutional concerns, among others.

Imposing transparency requirements can potentially infringe upon First Amendment protections if drafted with one-size-fits-all applicability.  Even before California’s bots measure was signed into law, for example, critics warned of the potential chilling effect on protected speech if anonymity is lifted in the case of social media bots.

Moreover, transparency may be seen as unfairly elevating the principles of openness and accountability over notions of secrecy and privacy.  Owners of agent bots, for example, would prefer not to give up anonymity when doing so could expose them to attacks by those with opposing viewpoints and cause more harm than the law prevents.

Both concerns could be addressed by imposing transparency in a narrow set of use cases and, as in California’s bot law, using “intent to mislead” and “knowingly deceiving” thresholds for tailoring the law to specific instances of certain bad behaviors.

Others might argue that transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive actions.  Just ask someone who’s tried to read a financial transaction disclosure or a complex Federal Register rule-making analysis whether the transparency, openness and accountability actually made a substantive impact on their follow-up actions.  Similarly, it’s questionable whether a recipient of bot-generated content would investigate the ownership and propriety of every new posting before deciding whether to accept the content’s veracity, or whether a person engaging with an AI chatbot would forgo further engagement if he or she were informed of the artificial nature of the engagement.

Conclusion

The likelihood that federal transparency laws will be enacted to address the malicious use of social media bots seems low given the current political situation in the US.  And with California’s bots disclosure requirement not becoming effective until mid-2019, only time will tell whether it will succeed as a legislative tool in addressing existing bot harms or whether the delay will simply give malicious actors time to find alternative technologies to achieve their goals.

Even so, transparency appears to be a leading governance approach, at least in the area of algorithmic harm, and could become a go-to approach to governing harms caused by other AI and non-AI algorithmic technologies due to its relative simplicity and ability to be narrowly tailored.  Transparency might be a suitable approach to regulating certain actions by those who publish face images using generative adversarial networks (GANs), those who create and distribute so-called “deep fake” videos, and those who provide humanistic digital communications agents, all of which involve highly-realistic content and engagements in which a user could easily be fooled into believing the content/engagement involves a person and not an artificial intelligence.

The AI Summit New York City: Takeaways For the Legal Profession

This week, business, technology, and academic thought leaders in Artificial Intelligence are gathered at The AI Summit in New York City, one of the premier international conferences for AI professionals. Below, I consider two of the three takeaways from Summit Day 1, published yesterday by AI Business, from the perspective of lawyers looking for opportunities in the burgeoning AI market.

“1. The tech landscape is changing fast – with big implications for businesses”

If a year from now your law practice has not fielded at least one query from a client about AI technologies, you are probably going out of your way to avoid the subject. It is almost universally accepted that AI technologies in one form or another will impact nearly every industry. Based on recently published salary data, the industries most active in AI are tech (think Facebook, Amazon, Alphabet, Microsoft, Netflix, and many others), financial services (banks and financial technology companies, or “fintech”), and computer infrastructure (Amazon, Nvidia, Intel, IBM, and many others, in areas such as chips for growing computational speed and throughput, and cloud computing for big data storage needs).

Of course, other industries are also seeing plenty of AI development. The automotive industry, for example, has already begun adopting machine learning, computer vision, and other AI technologies for autonomous vehicles. The robotics and chatbot industries have seen great strides lately, both in terms of humanoid robot development and consumer-machine interaction products such as stationary and mobile digital assistants (e.g., personal robotic assistants, as well as utility devices like autonomous vacuums). And of course the software-as-a-service industry, which leverages information from a company’s own data, such as human resources data, process data, healthcare data, etc., seems to offer new software solutions to improve efficiencies every day.

All of this will translate into consumer adoption of specific AI technologies, which is reported to already be at 10% and growing. The fast pace of technology development and adoption may open up new business opportunities for lawyers, especially those who invest time in learning about AI technologies. After all, as in any area of law, understanding the challenges facing clients is essential for developing appropriate legal strategies, as well as for targeting business development resources.

“2. AI is a disruptive force today, not tomorrow – and business must adapt”

“Adapt or be left behind” is a cautionary refrain, but one with plenty of evidence demonstrating that it holds true in many situations.

Lawyers, and law firms as institutions, are generally slow to change, often because things that disrupt the status quo are viewed through a cautionary lens. This is not surprising, given that a lawyer’s work often involves thoughtful spotting of potential risks and finding ways to address them. A fast-changing business landscape racing to keep up with the latest in AI technologies may be seen as inherently risky, especially in the absence of targeted laws and regulations providing guidance, as is the case today in the AI industry. Even so, exploring how to adapt one’s law practice to a world filled with AI technologies should be near the top of every lawyer’s list of things to consider for 2018.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so many of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that the AI system collects, processes, stores, and uses when deciding what actions to take, such as responding to user queries.

Privacy laws evolved from a period in late nineteenth-century America when journalists were unrestrained in publishing sensational pieces for newspapers or magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express the view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of the person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote, “In one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” Uniform standards did not immediately follow, however. Some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties, and federal appeals courts weighed in when conflicts between state laws arose. This slow progression from initial recognition of a personal privacy right in 1890, to today’s modern statutes and expansive development of common law, won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized as arising from interactions with, or the use of, AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask it what kind of training data was used and whether that data was biased, whether there was an errant line of Python code to blame, or whether the AI gave the appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box. But even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations under the General Data Protection Regulation are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
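
To make the “tell me why” question concrete, here is a minimal sketch, using synthetic data and an intentionally interpretable linear model rather than any bank’s actual system, of one way a per-decision explanation can be produced: by inspecting how much each input feature pushed the model toward approval or denial. With black-box models, the equivalent inspection is far harder, which is the practical obstacle described above.

```python
# Minimal sketch of a per-decision explanation for a linear credit model.
# The data, feature names, and applicant values are synthetic and illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "credit_history_years", "debt_ratio"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Synthetic rule: approval helped by income and history, hurt by debt.
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 0.2, 1.4]])       # one synthetic applicant
contributions = model.coef_[0] * applicant[0]  # per-feature push toward approval
for name, c in zip(features, contributions):
    print(f"{name:>22}: {c:+.2f}")
print("decision:", "approved" if model.predict(applicant)[0] else "denied")
```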

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Inaugural Post – AI Tech and the Law

Welcome. I am excited to present the first of what I hope will be many useful and timely posts covering issues arising at the crossroads of artificial intelligence technology and the law. My goal with this blog is to provide insightful discussion concerning the legal issues expected to affect individuals and businesses as they develop and interact with AI products and services. I also hope to engage with AI thought leaders in the legal industry as new AI technology-specific issues emerge. Join me by sharing your thoughts about AI and the law. If you’d like to see a particular issue discussed on these pages, I invite you to send me an email.

Much has already been written about the promises of AI and its ever-increasing role in daily life. AI technologies are unquestionably making their presence known in many impactful ways. Three billion smartphones are in use worldwide, and many of them use one form of AI or another. Voice assistants driven by AI are appearing on kitchen countertops everywhere. Online search engines, powered by AI, deliver your search results. Selecting like/love/dislike/thumbs-down on your music streaming or news aggregating apps empowers AI algorithms to make recommendations for you.

Today’s tremendous AI industry expansion, driven by big data and enhanced computational power, will continue at an unprecedented rate in the future. We are seeing investors fund AI-focused startups across the globe. As Mark Cuban predicted earlier this year, the world’s first trillionaire will be an AI entrepreneur.

Not everyone, however, shares the same positive outlook concerning AI. Elon Musk, Bill Gates, Stephen Hawking, and others have raised concerns. Many foresee problems arising as AI becomes ubiquitous, especially if businesses are left to develop AI systems without guidance. The media have written about employees displaced by autonomous systems; bias, social justice, and civil rights concerns in big data; AI consumer product liability; privacy and data security; superintelligent systems; and other issues. Some have even predicted dire consequences from unchecked AI.

But with all the talk about AI–both positive and negative–businesses are operating in a vacuum of laws, regulations, and court opinions dealing directly with AI. Indeed, with only a few exceptions, most businesses today have little in the way of legal guidance about acceptable practices when it comes to developing and deploying their AI systems. While some advocate for a common law approach to dealing with AI problems on a case-by-case basis, others would like to see a more structured regulatory framework.

I look forward to considering these and other issues in the months to come.

Brian Higgins