The latest trend in generative AI is the rise of “AI Assistants” or “Chatbots.” These tools, increasingly trained on internal company data, interact directly with consumers and aim to improve customer service by generating responses on behalf of the organization. They also bring new legal challenges, especially regarding data privacy. Because “biometric” data or information is often broadly defined, compliance with privacy regulations becomes critical when chatbots and AI agents use voiceprints, cameras (capturing or processing hand or facial geometry) or other sensors to provide a personalized service. Companies should implement safeguards to meet legal requirements, avoid penalties, and secure insurance coverage to mitigate the financial risks associated with data privacy laws and regulations.
California Legislature Passes Generative AI Training Data Transparency Bill (UPDATED)
The California legislature passed Assembly Bill 2013 (AB 2013) on August 27, 2024, a measure aimed at enhancing transparency in AI training and development. If Governor Gavin Newsom signs the bill into law, developers of generative AI systems or services made available to Californians would be required to disclose significant information about the data used to train those systems or services. This, in turn, may create novel compliance burdens for AI providers as well as unique challenges for customers in interpreting the information.
Strange Bedfellows on the Campaign Trail: The Tension between Music, Copyright and Political Licensing Agreements
Moments before former President Donald Trump took to the stage at a Montana rally this August, Celine Dion’s 1997 hit, “My Heart Will Go On,” blasted over the speakers while a clip appeared onscreen. It took less than 24 hours for the five-time Grammy winner’s team and Sony Music Entertainment Canada to issue a statement on social media saying that “in no way is this use authorized, and Celine Dion does not endorse this or any similar use.” Amid a heated political season, it’s not unusual for candidates to clash with the artists whose music they play on the campaign trail. In Trump’s case, though, the use of a video compounded the legal complications. While political licensing for music typically requires approval from the recording artist, video playback requires approval from both the artist and the composer. In theory, that would make it less risky for campaigns to stick with audio-only soundbites of their favorite crowd-pumping tunes. Still, questions remain about general music licensing dos and don’ts in politics, even as performing rights organizations work to clarify things. For now, the intersection of artists’ rights and political campaigns remains a murky legal crossroads, at best.
Discovery Dilemma: An Update on the Legal Battle Between The New York Times and OpenAI
OpenAI’s defense of the lawsuit brought by The New York Times (“The Times”) has sparked controversy over its discovery demand for access to reporter notes and other behind-the-scenes materials associated with millions of articles that appeared in The Times.
Colleagues Jennifer Altman, Shani Rivaux and Macarena Fink provide a briefing on OpenAI’s discovery request in their recently published client alert, “Discovery Dilemma: An Update on the Legal Battle Between The New York Times and OpenAI.”
In the Supreme Court’s NetChoice Rulings, the Court Leaves the Door Open for Future Social Media Content Moderation Regulations
Are social media companies more like newspapers or phone companies? This oft-debated question in social media legal circles, while seemingly trivial on the surface, represents a momentous debate over whether—and how much—social media companies should be allowed to moderate user-generated content on their platforms. If social media companies are more like newspapers, they have the right to censor, tailor or remove content as they see fit, similar to how an editor at a publication has the right to choose which stories make the headlines. On the other hand, if social media companies are more like phone companies, then the government has more freedom to limit the companies’ editorial powers, ensuring that they serve merely as a conduit for their users to express themselves freely.
California’s Shift to Kaplan for Bar Exam Questions Sparks Copyright Debate
Facing potential insolvency by 2026, the State Bar of California is exploring various cost-saving measures, including remote administration and the use of small vendor-owned test centers for its exams. As part of this process, the Bar issued a Request for Information in January 2024 to find a vendor capable of developing multiple-choice questions equivalent to those on the current Multistate Bar Examination (MBE), and Kaplan North America, LLC (Kaplan) was proposed as a potential new vendor for the February 2025 exam. However, the Bar’s Board of Trustees pulled the proposal from discussion at a May 16, 2024, meeting following a warning letter from the National Conference of Bar Examiners (NCBE) to Kaplan. In the letter, the NCBE set forth its position that the MBE materials licensed to Kaplan, which include hundreds of actual, retired MBE questions, should not be used to create new multiple-choice questions for any jurisdiction.
In Murthy v. Missouri, SCOTUS Focus on Plaintiff Standing Sidesteps Underlying, Larger First Amendment Questions
A recent U.S. Supreme Court decision may have substantial effects on social media censorship. Based on their content-moderation policies, social media platforms have taken actions to suppress certain categories of speech, such as speech deemed false or misleading. These efforts intensified in 2020 with the outbreak of COVID-19 and the election season. During that time, federal officials regularly spoke with social media platforms about the misinformation circulating on their respective platforms.
The Contest for Collegiate NIL Rights: How the Protect the Ball Act May Insulate the NCAA
The National Collegiate Athletic Association (NCAA) has historically been afforded a wide berth to implement and enforce its rules under the auspices of protecting the “revered tradition of amateurism” in college athletics. For decades, it relied on this principle as a means to enforce its prohibition on college athletes receiving compensation when faced with legal challenges and public calls for reform.
Legal Riffs: Music Industry Alleges AI Is Out of Tune
In late June, Universal Music Group (UMG) Records, Sony Music Entertainment, and other major record labels filed two complaints against two generative artificial intelligence (“gen AI”) music startups, Suno, Inc. (Suno) and Uncharted Labs, Inc. (Udio). The concurrently filed complaints allege that the gen AI technology produced by Suno and Udio directly infringes on copyrights owned by these record labels.
Colleagues Shani Rivaux, Macarena Fink and Catherine Perez provide a briefing on these complaints in their recently published client alert, “Legal Riffs: Music Industry Alleges AI Is Out of Tune.”
What You Need to Know If You’re Using AI-Generated Voices for Your Company
Global music superstar Taylor Swift began her music career in Nashville, so we thought it fitting that on July 1, with the end of the Eras Tour in sight, the Ensuring Likeness Voice and Image Security (ELVIS) Act went into effect in Tennessee. This marks the latest front in the effort to navigate the interplay between the capabilities of generative AI and the Right of Publicity for music and voice artists alike.