At the Intersection of AI, Face Swapping, Deep Fakes, Right of Publicity, and Litigation

Websites like GitHub, Reddit, and others offer developers and hobbyists dozens of repositories containing artificial intelligence deep learning models, instructions for their use, and forums for learning how to “face swap,” a technique that automatically replaces the face of a person in a video with that of a different person. Older versions of face swapping, applied primarily to still images, have been around for years in the form of entertaining apps that offered results of unremarkable quality (think cut and paste at the low end, and Photoshop editing at the high end). With the latest AI models, however, including deep neural networks, a video featuring a face-swapped actor (a so-called “deep fake” video) may appear so seamless and uncanny as to fool even the closest inspection, and the quality is apparently getting better.
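The low-end “cut and paste” approach mentioned above can be sketched in a few lines. This is a hypothetical illustration using NumPy arrays as stand-ins for video frames; real deep-fake pipelines instead generate the replacement face with trained encoder/decoder networks so that it matches the target’s pose, lighting, and expression:

```python
import numpy as np

def naive_face_swap(frame, donor_face, box):
    """Paste `donor_face` into `frame` at the (y, x, h, w) bounding box.

    This is the crude cut-and-paste approach; deep-fake models instead
    synthesize the replacement face frame by frame with a neural network.
    """
    y, x, h, w = box
    out = frame.copy()  # leave the original frame untouched
    out[y:y + h, x:x + w] = donor_face[:h, :w]
    return out

# Synthetic 8x8 grayscale "frame" and a 3x3 donor "face"
frame = np.zeros((8, 8), dtype=np.uint8)
donor = np.full((3, 3), 255, dtype=np.uint8)
swapped = naive_face_swap(frame, donor, (2, 2, 3, 3))
```

The gap between this region copy and a learned model that blends the donor face into every frame is precisely why modern deep fakes can survive close inspection while the old apps could not.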

With only subtle clues to suggest that an actor in one of these videos is fake, the developers behind them have become targets of criticism, though much of that criticism has also been leveled at the AI tech industry generally for creating new AI tools with few restrictions on potential uses beyond their original intent. These concerns have now reached the halls of New York’s state legislature.

New York lawmakers are responding to the deep fake controversy, albeit in a narrow way, by proposing to make it illegal to use “digital replicas” of individuals without permission, a move that would indirectly regulate AI deep learning models. New York Assembly Bill No. A08155 (introduced in 2017, amended Jun. 5, 2018) aims to modernize New York’s right of publicity law (N.Y. Civ. Rights Law §§ 50 and 51), one of the nation’s oldest publicity rights laws and one that currently provides no post-mortem publicity rights. Even so, the bill may do little to curb the broader proliferation of face-swapped and deep fake videos. In fact, only a relatively small slice of primarily famous New York actors, artists, athletes, and their heirs and estates would benefit from the proposed law’s digital replicas provision.

If enacted, New York’s right of publicity law would be amended to address computer-generated or electronic reproductions of a living or deceased individual’s likeness or voice that “realistically depict” the likeness or voice of the individual portrayed (“realistic” is undefined). Use of a digital replica without the individual’s consent would violate the law if the use is in a scripted audiovisual or audio work (e.g., a movie or sound recording), or in a live performance of a dramatic work, and is intended to and does create the clear impression that the individual represented by the digital replica is performing the activity for which he or she is known, in the role of a fictional character.

It would also be a violation of the law to use a digital replica of a person in a performance of a musical work that is intended to and creates the clear impression that the individual represented by the digital replica is performing the activity for which he or she is known, in such musical work.

Moreover, it would be a violation to use a digital replica of a person in an audiovisual work that is intended to and creates the clear impression that an athlete represented by the digital replica is engaging in an athletic activity for which he or she is known.

Consistent with First Amendment principles, the bill would exclude from a person’s right to control their persona cases of parody, satire, commentary, and criticism; political, public interest, or newsworthy situations, including documentaries, regardless of the degree of fictionalization in the work; and de minimis or incidental uses.

In the case of deep fake digital replicas, the bill would make it a violation to use a digital replica of an individual, without that individual’s consent, in an audiovisual pornographic work in a manner that is intended to and creates the impression that the individual represented by the digital replica is performing.

Similar to the safe harbor provisions in other statutes, the New York law would provide limited immunity to any medium used for advertising (including, but not limited to, newspapers, magazines, radio and television networks and stations, cable television systems, billboards, and transit advertising) that makes unauthorized use of an individual’s persona for the purpose of advertising or trade, unless it is established that the medium’s owner or an employee had knowledge of the unauthorized use, through presence or inclusion, of the individual’s persona in the advertisement or publication.

Moreover, the law would provide a private right of action allowing an injured party to sue for an injunction and to seek damages. Statutory damages of $750 would be available, or compensatory damages, which could be significantly higher. The finder of fact (judge or jury) could also award substantial “exemplary damages” to deter others from violating the law.

So far, AI tech developers have largely avoided direct legislative or regulatory action targeting their AI technologies, in part because some have taken steps to self-regulate, which may be necessary to avoid the confines of command-and-control-style state or federal regulatory schemes that would impose standards, restrictions, and requirements, along with a right to sue for damages and attorneys’ fees. Tech companies’ efforts at self-regulating, however, have been limited to expressing carefully crafted AI policies for themselves and their employees, as well as taking a public stance on issues of bias, ethics, and civil rights impacts from AI machine learning. Despite those efforts, more laws like New York’s may be introduced at the state level if AI technologies are used in ways that have questionable utility or social benefits.

For more about the intersection of right of publicity laws and regulating AI technology, please see an earlier post on this website.

Artificial Intelligence Won’t Achieve Legal Inventorship Status Anytime Soon

Imagine a deposition in which an inventor is questioned about her conception and reduction to practice of an invention directed to a chemical product worth billions of dollars to her company. Testimony reveals how artificial intelligence software, assessing huge amounts of data, identified the patented compound and the compound’s new uses in helping combat disease. The inventor states that she simply performed tests confirming the compound’s qualities and its utility, which the software had already determined. The attorney taking the deposition moves to invalidate the patent on the basis that the patent does not identify the true inventor. The true inventor, the attorney argues, was the company’s AI software.

Seem farfetched? Maybe not in today’s AI world. AI tools can spot cancer and other problems in diagnostic images, as well as identify patient-specific treatments. AI software can identify workable drug combinations for effectively combating pests. AI can predict biological events emerging in hotspots on the other side of the world, even before they’re reported by local media and officials. And lawyers are becoming more aware of AI through use of machine learning tools to predict the relevance of case law, answer queries about how a judge might respond to a particular set of facts, and assess the strength of contracts, among other tools. So while the above deposition scenario is hypothetical, it seems far from unrealistic.

One thing is for sure, however: an AI program will not be named as an inventor or joint inventor on a patent any time soon. At least not until Congress amends US patent laws to broaden the definition of “inventor” and the Supreme Court clarifies what “conception” of an invention means in a world filled with artificially intelligent technologies.

That’s because US patent laws are intended to protect the natural intellectual output of humans, not the artificial intelligence of algorithms. Indeed, Congress left little wiggle room when it defined “inventor” to mean an “individual,” or in the case of a joint invention, the “individuals” collectively who invent or discover the subject matter of an invention. And the Supreme Court has endorsed a human-centric notion of inventorship. This has led courts overseeing patent disputes to repeatedly remind us that “conception” is the touchstone of inventorship, where conception is defined as the “formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”

But consider this. What if “in the mind of” were struck from the definition of “conception” and inventorship? Under that revised definition, an AI system might indeed be viewed as conceiving an invention.

By way of example, let’s say the same AI software and the researcher from the above deposition scenario were participants behind the partition in a classic Turing Test. Would an interrogator be able to distinguish the AI inventor from the natural intelligence inventor if the test for conception of the chemical compound invention is reduced to examining whether the chemical compound idea was “definite” (not vague), “permanent” (fixed), “complete,” “operative” (it works as conceived), and has a practical application (real world utility)? If you were the interrogator in this Turing Test, would you choose the AI software or the researcher who did the follow-up confirmatory testing?

Those who follow patent law may see the irony of legally recognizing AI software as an “inventor” if it “conceives” an invention, when the very same software would likely face an uphill battle being patented by its developers because of the apparent “abstract” nature of many software algorithms.

In any case, for now the question of whether inventorship and inventions should be assessed based on their natural or artificial origin may merely be an academic one. But that may need to change when artificial intelligence development produces artificial general intelligence (AGI) that is capable of performing the same intellectual tasks that a human can.

Marketing “Artificial Intelligence” Needs Careful Planning to Avoid Trademark Troubles

As the market for all things artificial intelligence continues heating up, companies are looking for ways to align their products, services, and entire brands with “artificial intelligence” designations and phrases common in the surging AI industry, including variants such as “AI,” “deep,” “neural,” and others. Reminiscent of the early 2000s, when companies rushed to market with “i-” or “e-” prefixes or appended “.com” to their names, today’s artificial intelligence startups are finding traction with artificial intelligence-related terms and corresponding “.AI” domains. The proliferation of AI marketing, however, may lead to brand and domain disputes. But a carefully planned intellectual property strategy may help avoid potential risks down the road, as the recent case Stella.AI, Inc. v. Stellar A.I., Inc., filed in the U.S. District Court for the Northern District of California, demonstrates.

Artificial Intelligence startups face plenty of challenges getting their businesses up and going. The last things they want to worry about is unexpected trademark litigation involving their “AI” brand and domain names. Fortunately, some practical steps taken early may help reduce the risk of such problems.

According to court filings, New York City-based Stella.AI, Inc., provider of a jobs matching website, claims that its “stella.AI” website domain has been in use since March 2016, and its STELLA trademark since February 2016 (its U.S. federal trademark application was reportedly published for opposition by the US Patent and Trademark Office in April 2016). Palo Alto-based talent and employment agency Stellar A.I., formerly JobGenie, obtained its domain and sought trademark registration for STELLAR.AI in January 2017, a move that, Stella.AI claims, was prompted after JobGenie learned of Stella.AI, Inc.’s domain. Stella.AI’s complaint alleges unfair competition and false designation of origin due to a confusingly similar mark and domain name. It sought monetary damages and the transfer of the domain.

In its answer to the complaint, Stellar A.I. says that it created, used, and marketed its services under the STELLAR.AI mark in good faith without prior knowledge of Stella.AI, Inc.’s mark, and in any case, any infringement of the STELLA mark was unintentional.


As a start, marketers should consider thoroughly searching for conflicting federal, state, and common law uses of a planned company, product, or service name, and they should also consider evaluating corresponding domains as part of an early branding strategy. Trademark searches often reveal other, potentially confusingly-similar, uses of a trademark. Plenty of search firms offer search services, and they will return a list of trademarks that might present problems. If you want to conduct your own search, a good place to start might be the US Patent and Trademark Office’s TESS database, which can be searched to identify federal trademark registrations and pending trademark applications. Evaluating the search results should be done with the assistance of the company’s intellectual property attorney.

It is also good practice to look beyond obtaining a single top-level domain for a company and its brands. For example, if a “.ai” domain name is in play for the company’s brand, also consider registering the same name under “.com” and other top-level domains to prevent someone else from getting their hands on your name. Moreover, consider obtaining domains embodying possible shortcuts and misspellings that prospective customers might type (e.g., versions with two adjacent letters transposed).
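Enumerating the transposed-letter variants mentioned above is straightforward to automate when building a list of defensive domain registrations. A minimal sketch (the brand name “stella” is used purely for illustration):

```python
def transposition_variants(name: str) -> list[str]:
    """Return variants of `name` with each pair of adjacent letters swapped."""
    variants = []
    for i in range(len(name) - 1):
        if name[i] != name[i + 1]:  # skip swaps that reproduce the same string
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            variants.append(swapped)
    return variants

# Candidate misspellings a customer might type for a hypothetical brand
print(transposition_variants("stella"))  # ['tsella', 'setlla', 'stlela', 'stelal']
```

The same idea extends to other common typo patterns (dropped letters, doubled letters, adjacent-key substitutions) before checking which of the resulting domains are available.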

Marketers would also be wise to exercise caution when using competitors’ marks on their company website, although making legitimate comparisons between competing products remains fair use even when the competing products are identified by their trademarks. In such situations, comparisons should clearly state that the marketer’s product is not affiliated with the competitor’s product, and website links to the competitor’s products should be avoided.

While startups often focus limited resources on protecting their technology by filing patent applications (or by implementing a comprehensive trade secret policy), a startup’s intellectual property strategy should also consider trademark issues to avoid having to re-brand down the road, as Stellar A.I. did (it now operates under the new name “Stellares” and a corresponding new domain).