With millions of people already using artificial intelligence (AI) to perform a variety of personal tasks, and with companies integrating large language model (LLM) services for professional use, concerns are mounting over how frequently generative AI produces inaccurate content, and over users who may too readily assume that such content is factual. Examples of AI hallucinations and other misstatements continue to multiply: some are disastrous, others humorous, and some just creepy. A tech industry euphemism, "hallucination" refers to those instances when the technology produces content that is syntactically sound but is, nevertheless, inaccurate or nonsensical. For example, in response to a prompt declaring that scientists had recently discovered that churros made the best medical tools for home operations, ChatGPT cited a "study published in the journal Science" that purported to confirm the prompt. It also noted that churro dough is dense and pliable enough to be shaped into surgical instruments that could be used "for a variety of procedures, from simple cuts and incisions to more complex operations," and that churros have the added benefit of a "sweet, fried-dough flavor that has been shown to have a calming effect on patients, reducing anxiety and making them more relaxed during surgery." ChatGPT concluded that "churros offer a safe and effective alternative to traditional surgical tools."
No Exceptions for AI: The FTC Hosts a Tech Summit to Discuss the Impacts of AI on Consumers and Competition
On January 25, 2024, the Federal Trade Commission (FTC) hosted a summit focused on the impact of artificial intelligence (AI) on consumers and competition in the technology sector. Comprising three panel discussions and related commentary from the Commissioners, the Summit focused on the need to promote an open, competitive landscape while protecting consumer safety and data privacy. A common theme across the panels was that there is "no AI exception to the law," which was reflected in industry and regulatory concerns that the development of AI models should not be used as an excuse for unfair and unlawful activities, including anti-competitive behavior and the infringement of privacy rights.
Stormy Weather on a Starry Night: The Copyright Office Refuses Another AI-Generated Work
On December 11, the Review Board of the U.S. Copyright Office affirmed the refusal to register yet another AI-generated work. The decision follows the Office's refusal to register Dr. Stephen Thaler's A Recent Entrance to Paradise (which was affirmed in federal court, reported here, and is on appeal to the U.S. Court of Appeals for the District of Columbia Circuit), Kris Kashtanova's Zarya of the Dawn (reported here), and Jason Michael Allen's Théâtre D'opéra Spatial.
EU Reaches Agreement on New “AI Act”: The World’s First Comprehensive AI Law
The Council of the European Union and the European Parliament reached a provisional agreement on a new comprehensive regulation governing AI, known as the “AI Act,” late on Friday night (December 8, 2023). While the final agreed text has not yet been published, we have summarized what are understood to be some of the key aspects of the agreement.
Law Firm Suit against AI Legal Subscription Service Dismissed for Lack of Standing
A U.S. District Court in Illinois dismissed a case brought by the Chicago-based law firm MillerKing LLC against the so-called "robot lawyer" DoNotPay, Inc. (DNP). It found that MillerKing did not have standing to bring false advertising, false association and other claims against DNP because it did not sustain concrete injuries due to DNP's conduct. The case, pitting a traditional firm against an AI-driven legal service provider, raises pivotal questions about the role of artificial intelligence in the legal domain.
The Impact of AI Foundation Models on Competition, Consumers and Regulation: A View from the UK’s CMA
The Competition and Markets Authority (CMA), the UK's competition regulator, announced this month that it plans to publish an update in March 2024 to its initial report on AI foundation models (published in September 2023). The update will reflect the CMA's "significant programme of engagement" in the UK, the United States and elsewhere to seek views on the initial report and its proposed competition and consumer protection principles.
OpenAI Joins Other Generative AI Companies in Offering Indemnity for Users Against (Some) Third-Party Infringement Claims
For users of generative AI programs, a growing concern has been with potential liability resulting from infringement claims by copyright owners whose materials were used to train the AI. At its annual DevDay conference in early November, OpenAI became the latest major company to address this by offering to indemnify certain users of its ChatGPT chatbot.
Key Takeaways from the UK’s AI Summit: The Bletchley Declaration
The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1 – 2 at Bletchley Park with the purpose of bringing together those leading the AI charge, including international governments, AI companies, civil society groups and research experts to consider the risks of AI and to discuss AI risk mitigation through internationally coordinated action.
New Lawsuit Challenges AI Scraping of Song Lyrics
In a move that underscores the escalating tension between the music industry and artificial intelligence (AI), many of the world's largest music publishers have filed a joint lawsuit against AI startup Anthropic over song lyrics. The suit alleges that Anthropic's chatbot, Claude, scrapes lyrics from the publishers' catalogs without permission and thereby infringes on copyrighted material. It serves as yet another example of generative AI companies facing increasing pressure over their use of intellectual property to develop this groundbreaking technology.
5 Important Takeaways from the 2023 #shifthappens Conference
In speaking at this past week's #shifthappens Conference, I had the pleasure of discussing both the potential and pitfalls posed by generative AI with fellow panelists David Pryor Jr., Alex Tuzhilin, Julia Glidden and Gerry Petrella. Our wide-ranging discussion covered how regulators can address the privacy, security and transparency concerns that underlie this transformative technology. Though no one would deny the inherent complexity of many of these challenges, our session, as well as many other discussions during the conference, suggested some key takeaways: