Counterfactuals, the buzzy term being touted in the latest AI news, carry some impressive promises for the tech world. But the general idea is nothing new. Counterfactual thinking describes the human drive to conjure up alternative outcomes based on different choices that might be made along the way: essentially, it is a way of looking at cause and effect. Counterfactual philosophy dates back to Aristotle, and along the way it has been debated by historians and even taken up by Winston Churchill in his 1931 essay, "If Lee Had Not Won the Battle of Gettysburg." In more recent years, scientists have found that it is possible to translate such counterfactual theories into complex math equations and plug them into AI models. These programs aim to use causal reasoning to pore over mountains of data and form predictions (and explain their logic) on things like drug performance, disease assessment, financial simulations and more.
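To make the idea concrete, one standard way of formalizing counterfactuals is Judea Pearl's structural causal model framework, which answers "what would have happened if...?" in three steps: abduction (infer the hidden noise consistent with what was observed), action (intervene on the variable of interest), and prediction (recompute the outcome). The toy model and numbers below are purely illustrative assumptions, not drawn from any system described in the post:

```python
# A minimal structural causal model (SCM) sketch of the three-step
# counterfactual recipe: abduction, action, prediction.
# The linear mechanism Y = 2*X + U_y is a hypothetical example.

def outcome(x, u_y):
    """Structural equation: Y = 2*X + U_y (assumed mechanism)."""
    return 2 * x + u_y

# Observed ("factual") world: X = 1, Y = 3.
x_obs, y_obs = 1, 3

# 1. Abduction: infer the latent noise consistent with the observation.
u_y = y_obs - 2 * x_obs   # U_y = 1

# 2. Action: intervene, forcing X to a different value, do(X = 0).
x_cf = 0

# 3. Prediction: recompute the outcome under the intervention,
#    keeping the inferred noise fixed.
y_cf = outcome(x_cf, u_y)

print(y_cf)  # counterfactual outcome: 1
```

The key point the sketch illustrates is that a counterfactual query reuses the background conditions inferred from the actual world rather than averaging over all possible worlds, which is what lets such models explain their predictions in cause-and-effect terms.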
Greenlights, Passing Grades and Seals of Approval: Keeping Up with the Downsides of Technology
New and emerging technologies have always carried a host of potential risks to accompany their oft-blinding potential. Just as dependably, those risks have often been ignored, glossed over or simply missed as public enthusiasm waxes and companies race to bring a product to market first. Automobiles promised to get people (and products) from one place to another at life-changing speeds, but they also posed a danger to life and limb while imposing a new burden on existing infrastructure. Even as technology leaps have transitioned from appliances and aircraft to computers, connectivity and large language models (LLMs), new and untested technologies continue to outpace the government's and the public's ability to moderate them. But while one can debate what constitutes an acceptable gap between the practical and the ideal when it comes to regulating, mandating and evaluating new technology, societies tend to generate their own methods of informing the public and attempting to rein in the more harmful aspects of the latest thing.
China Tackles Generative AI
As the emergence of generative AI brings new market opportunities to China, leading China-based tech giants have released, or plan to release, their own self-developed generative AI services. On April 11, 2023, China's main cybersecurity and data privacy regulator, the Cyberspace Administration of China (CAC), issued its draft Administrative Measures on Generative Artificial Intelligence Services for public comment. (The public comment period ends on May 10, 2023.)
In “China Issues Proposed Regulations on Generative AI,” colleagues Jenny (Jia) Sheng, Chunbin Xu and Wenjun Cai break down the proposed rules, which apply to all generative AI services open to users in mainland China and are focused on cybersecurity and data privacy risks.
A “Far-Reaching Decision” for the Copyrightability of Computer Programs
On April 6, 2023, the U.S. Court of Appeals for the Federal Circuit affirmed Judge Gilstrap’s ruling in SAS Institute, Inc. v. World Programming Limited, which effectively denied copyright protection to SAS Institute’s data analysis software. The decision is likely to have lasting implications for developers that seek to protect software through copyright law.
AI as Prior Art: New Hurdles and Horizons in Patent Disputes
Artificial intelligence is rapidly evolving, and large language models (LLMs) like ChatGPT are one of the more exciting examples. Their generative capabilities have implications for our patent system, some of which are underappreciated and nonintuitive.
Under U.S. patent law, an inventor may not obtain a patent if the claimed invention would have been obvious to an artisan of ordinary skill, in view of the prior art. (See 35 U.S.C. § 103.)
News of Note for the Internet-Minded (4/13/23) – Quantum Health Care, Rare Earth Finds and Gen AI for the Crabs
In today’s News of Note, generative AI continues to draw criticism and even a ban, but that doesn’t stop developers from pushing forward with everything from music prediction and mind-reading to talking with crabs. Plus, we look at quantum computing in health care, a new report on the impact of deep-sea rare earths mining, and so much more.
Risks, Reliability and Regulated Industries: A Series on AI Systems in Commercial Contracting
Over on Pillsbury’s SourcingSpeak blog, colleagues Elizabeth Zimmer, Sandro Serra and Mia Rendar provide an in-depth exploration of the many concerns and considerations in play for organizations seeking to integrate AI systems into their own operations.
Old School Meets New School: Critical Minerals Used in Quantum Computing
It is not every day that the rough-and-tumble “giga” world of mining and mineral refining interacts with the rarefied and metaphysical “nano” realm of quantum physics. The lawyers at Pillsbury and other law firms engaged in each endeavor rarely darken each other’s doors. But the streams are indeed converging today, as rare earths and related critical materials have been found to be uniquely suited for developments in quantum computing.
A New Dawn for Copyright in AI-Generated Works?
On February 21, 2023, the Copyright Office eclipsed its prior decisions in the area of AI authorship when it partially cancelled Kristina Kashtanova’s registration for a comic book titled Zarya of the Dawn. In doing so, the Office found that the AI program Kashtanova used—Midjourney—was primarily responsible for the visual output that the Office chose to exclude from Kashtanova’s registration. (Midjourney is an AI program that creates images from textual descriptions, much like OpenAI’s DALL-E.) The decision not only highlights tension between the human authorship requirements of copyright law and the means of expression that authors can use, but it also raises the question: Can AI-generated works ever be protected under U.S. copyright law?
U.S. Department of the Treasury Confronts the Risks to the Financial Sector Associated with Cloud Computing
On February 8, 2023, the U.S. Department of the Treasury released a report citing its “findings on the current state of cloud adoption in the sector, including potential benefits and challenges associated with increased adoption.” Treasury acknowledged that cloud adoption is an “important component” of a financial institution’s overall technology and business strategy, but it also warned the industry about the harm a technical breakdown or cyberattack could have on the public, given financial institutions’ reliance on a few large cloud service providers. Treasury also noted that “[t]his report does not impose any new requirements or standards applicable to regulated financial institutions and is not intended to endorse or discourage the use of any specific provider or cloud services more generally.”