Content for this Newsletter:
1. AI and IP Lawsuits
2. California AI Bill
3. EU AI Act
Free Webinar: From Development to Deployment – Patenting and Commercializing Artificial Intelligence Inventions
Gain a competitive edge in the rapidly evolving AI landscape. Whether you’re a business leader seeking to maximize ROI or a legal professional navigating complex regulations, this webinar provides essential insights. Join me and Michael Dilworth of Dilworth IP to discover practical strategies for safeguarding your AI assets and mitigating risks.
I will guide you through protecting your AI invention beyond patents, maximizing its value through effective licensing, and addressing AI governance and data privacy challenges to ensure compliance and prevent potential government investigations, fines, and class action lawsuits. Michael will address the challenges of patenting AI technologies, review recent USPTO guidelines, examine real-world case studies, and weigh the financial implications of patenting versus not patenting.
Don’t miss this opportunity to stay ahead of the curve! Reserve a spot today: https://lnkd.in/eZtxedcN
Now, let’s move on to the latest developments in AI law.
1. AI and IP Lawsuits
a. Anthropic Lawsuits
Anthropic, an AI company, is being sued for using copyrighted books without permission to train its chatbot, Claude. Three authors allege that their books were used to teach Claude how to respond to people. Anthropic has not yet commented on the lawsuit. This is the second copyright suit against Anthropic; the first involved using song lyrics without permission. The new case is part of a broader trend of AI companies being sued for training their systems on copyrighted material.
b. Nvidia Lawsuit
Nvidia is accused of scraping millions of YouTube videos without permission to train its AI model. A class action lawsuit claims this violates YouTube’s terms of service and constitutes unfair competition. Nvidia disagrees, saying it is acting within the law and that creating new, transformative works from existing material is fair use.
c. Legal Victory Against AI Companies
Visual artists have won an important victory against major AI art companies, including Stability AI, Midjourney, DeviantArt, and Runway AI. A federal judge in California ruled that the artists may proceed with their lawsuit alleging that these companies’ AI image generation systems violate copyright and trademark law. Although it remains unsettled whether using copyrighted images to train AI models is legal, the decision is a major blow to the AI industry.
2. California AI Bill
California legislators are considering a bill (SB 1047) that would regulate AI development and deployment. Despite support from some tech leaders, major tech companies like Google, Meta, and OpenAI oppose the bill.
Here are the key points of the bill:
Safety Testing: Requires advanced AI models to undergo safety tests.
Kill Switch: Developers must have a way to shut down AI models if they malfunction.
Government Oversight: Gives the attorney general the authority to sue developers who don’t comply.
Third-Party Audits: Developers must hire independent auditors to check their safety practices.
The bill aims to protect the public from potential AI risks. However, tech companies argue that it might hinder innovation and push developers away from California. They’re particularly worried about its impact on open-source AI models, which many believe are crucial for quickly developing safer AI applications. Companies like Meta are concerned that the bill could make them responsible for monitoring open-source models. The bill’s sponsor, Senator Scott Wiener, has made changes to address these concerns.
Despite the opposition, the bill has already passed the state Senate and is now waiting for a vote by the Assembly. If it passes, it will go to Governor Gavin Newsom for final approval.
3. EU AI Act’s Scope in Plain English
The European Artificial Intelligence Act (the “AI Act”) went into effect on August 1, 2024. Whether you are a multinational company, a startup, or somewhere in between, you should first conduct a legal analysis to determine if the AI Act applies to you before committing any resources to it. Most AI systems, such as spam filters and AI-enabled video games, are not subject to the AI Act’s requirements.
Compared to the AI Act, U.S. AI laws are more complex, encompassing both federal and state laws, case law and statutes, and existing laws as well as AI-specific regulations.
For a detailed overview of the EU AI Act’s scope in plain English, please see below. For information on U.S. AI laws, join our webinar as mentioned at the beginning of the Newsletter.
a. Who is affected by the EU AI Act?
The AI Act sets rules for anyone involved with AI systems placed on or used in the EU market. Here’s who it affects:
- Providers: Those who offer AI systems or general-purpose AI models for sale in the EU.
- Deployers: Those who use AI systems and are based in the EU.
- International Providers and Deployers: Those outside the EU whose AI systems affect the EU market.
In other words, AI systems developed and used entirely outside the EU without impacting EU citizens are not subject to the AI Act.
The AI Act also does not apply to AI systems used solely for scientific research and development, or to open-source AI systems, unless those systems are banned or considered high-risk.
b. What kind of AI is Banned?
The AI Act bans certain AI practices that are considered harmful, abusive, or against EU values. This includes using AI to secretly influence or deceive people in ways that significantly change their behavior. Some exceptions allow law enforcement to use real-time remote biometric identification in public spaces.
c. What kind of AI is considered High-Risk AI?
The AI Act uses a risk-based approach to set rules for AI systems, categorizing them by how much risk they pose. “High-risk AI systems” are divided into two groups:
- AI systems that are used as a safety component in a product (or those subject to EU health and safety laws).
- AI systems used in eight specific areas, such as education, employment, access to key public and private services, law enforcement, migration, and the justice system.
However, AI systems in these eight areas might not be considered high-risk if they are only used for:
- Carrying out narrow, specific tasks
- Improving the outcomes of human work that has already been done
- Identifying patterns or changes in decision-making without replacing or affecting human judgment
- Conducting basic tasks before a full risk assessment
That said, AI systems in these areas are always considered high-risk if they are used for profiling individuals.
d. General-Purpose AI (GPAI) Models
A GPAI model is defined as an AI model that is trained on large amounts of data using self-supervision, can perform a variety of different tasks well, and can be used in different systems or applications. This definition does not include AI models used only for research, development, or prototyping before being marketed.
All GPAI models are subject to obligations under the AI Act, and GPAI models considered to have systemic risk face additional requirements. A GPAI model is classified as having systemic risk if it has high-impact capabilities, which are assessed using technical tools and benchmarks, or if the European Commission designates it as such. A model is presumed to have high-impact capabilities if the cumulative compute used to train it exceeds 10^25 floating point operations (FLOPs).
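To make the 10^25 FLOP threshold concrete, below is a minimal back-of-the-envelope sketch in Python. It assumes the commonly cited rule of thumb that training a model consumes roughly 6 FLOPs per parameter per training token; the model figures and function names are hypothetical, and any real classification under the AI Act would turn on a model’s actual training compute.

```python
# Minimal sketch of the AI Act's 10^25 FLOP presumption.
# Assumption: the widely cited ~6 FLOPs per parameter per training token
# rule of thumb for estimating training compute. All figures are hypothetical.

SYSTEMIC_RISK_FLOPS = 1e25  # the AI Act's presumption threshold

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * num_training_tokens

def presumed_systemic_risk(num_parameters: float, num_training_tokens: float) -> bool:
    """True if estimated compute exceeds the 10^25 FLOP presumption."""
    return estimated_training_flops(num_parameters, num_training_tokens) > SYSTEMIC_RISK_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                # ~6.30e24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```

On this rough estimate, even a 70-billion-parameter model trained on 15 trillion tokens would fall just under the presumption threshold, which illustrates that the rule targets only the very largest frontier models.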
e. Deep Fakes
Deep fakes are defined as “AI-created or altered images, audio, or videos that look like real people, objects, places, entities, or events and might trick someone into thinking they are real.”
Under the EU AI Act, those who use AI systems to produce deep fakes must clearly disclose that the content has been artificially generated or manipulated. This rule does not apply if the deep fake is used lawfully to detect, prevent, investigate, or prosecute a crime. If the content is part of an evidently artistic work, the only requirement is to disclose the existence of the generated or altered content in a way that does not interfere with how the work is displayed or enjoyed.
Please check out my article comparing the EU AI Act and the Colorado AI Act, published by Bloomberg Law. https://lklawfirm.net/publications/