Welcome to the inaugural edition of my AI law newsletter! This newsletter aims to keep you informed about the latest developments in AI law across various legal domains, including but not limited to securities law, intellectual property, data privacy, and employment law. You can expect coverage of important topics such as lawsuits and new legislation. I may also provide key takeaways and actionable legal strategies from time to time.
Please note that some weeks I may publish articles, while other weeks I will send out the newsletter. I welcome feedback from you on topics you’d like to see covered or any suggestions you may have.
Index
1. SEC Actions: Cracking Down on AI Washing
2. Copyright Lawsuits: AI Training
3. Data Privacy & AI: EU AI Act & FTC
4. Employment Law & AI: Navigating AI Bias
1. SEC Actions: Cracking Down on AI Washing
In my article “How to Avoid Regulatory Risk for ‘AI Washing’,” published by Legal Dive (https://www.legaldive.com/news/avoid-regulatory-risk-ai-washing-greenwashing-artificial-intelligence-FTC-SEC-scrutiny/704507/), I predicted that public statements from publicly traded companies promoting their AI products or services would face stricter scrutiny from the SEC.
That prediction materialized on March 18, when the SEC charged two investment advisers, Delphia (Toronto) and Global Predictions (San Francisco), with making false and misleading statements about their use of artificial intelligence.
• Delphia: From 2019 to 2023, Delphia made false claims in official filings, press releases, and on its website about using AI and client data for smarter investing. The SEC found these claims false because Delphia’s AI capabilities were not what it advertised. The firm also violated the SEC’s marketing rules by making misleading statements.
• Global Predictions: In 2023, Global Predictions falsely claimed on its website and social media to be the “first regulated AI financial advisor” offering “expert AI-driven forecasts.” The firm also misled investors about its tax services and included unfair terms in its contracts, likewise in violation of the marketing rules.
Without admitting wrongdoing, both firms agreed to stop these practices and pay civil penalties: $225,000 for Delphia and $175,000 for Global Predictions.
2. Copyright Lawsuits: AI Training
a. Google
France’s competition watchdog announced on March 20 that it had fined Google, a subsidiary of Alphabet Inc., $271.7 million for breaching European Union intellectual property rules in its dealings with media publishers. The regulator asserted that Google’s AI-powered chatbot, formerly named Bard and since rebranded as Gemini, was trained on content from publishers and news agencies without notifying them.
Google chose not to contest the facts during settlement negotiations and proposed a series of corrective measures to address the shortcomings identified by the regulator.
b. Nvidia
On March 10, three authors, Brian Keene, Abdi Nazemian, and Stewart O’Nan, filed a lawsuit against Nvidia, alleging that the company used their copyrighted books without permission to train its NeMo AI platform. The authors claim their works were included in a dataset of approximately 196,640 books used to train NeMo to simulate ordinary written language. Nvidia took the dataset down in October following reports of copyright infringement. The authors argue that Nvidia’s conduct constitutes copyright infringement and seek unspecified damages on behalf of individuals in the United States whose copyrighted works contributed to training NeMo’s large language models over the past three years. The works at issue include Keene’s “Ghost Walk,” Nazemian’s “Like a Love Story,” and O’Nan’s “Last Night at the Lobster.”
In the CLE program, I discussed the legal landscape surrounding copyright lawsuits involving AI LLMs, provided key takeaways, and explored a hypothetical case scenario. Here’s the 4-minute clip: https://www.lklawfirm.net/publications-speaking-engagements
3. Data Privacy and AI: EU AI Act & FTC
a. EU AI Act
On March 13, 2024, the European Parliament passed the Artificial Intelligence Act, the world’s first comprehensive AI legislation. The EU AI Act applies to both providers and developers of AI systems that are marketed or used within the EU, including free-to-use AI technology, regardless of whether those providers and developers are located inside or outside the EU.
Noncompliance with the Act can result in substantial penalties, ranging from €7.5 million or 1.5 percent of global revenue to €35 million or 7 percent of global revenue, depending on the severity of the violation and the size of the company.
While the Act does not specifically address AI systems processing personal information of EU citizens, it stipulates that existing EU laws regarding the protection of personal data, privacy, and confidentiality apply to the collection and utilization of such information for AI technologies. Therefore, the AI Act does not supersede the GDPR (Regulation 2016/679) or the ePrivacy Directive 2002/58/EC (refer to Article 2(7)).
The AI Act adopts a risk-based framework that classifies AI systems into four tiers, based on factors such as the sensitivity of the data involved and the specific use case or application. Examples of prohibited AI systems include certain biometric systems, such as emotion-recognition systems in the workplace and real-time biometric categorization of individuals.
b. FTC
In the US, the Federal Trade Commission (FTC) is taking a strong stance on biometric data privacy as well. On May 18, 2023, the FTC issued a cautionary statement highlighting the growing utilization of consumers’ biometric data and associated technologies, particularly those driven by AI, which poses notable concerns regarding consumer privacy, data security, and the risk of bias and discrimination. Over the past few years, the FTC has initiated enforcement measures against companies like Everalbum, a photo app maker, and Facebook, alleging misrepresentation regarding their utilization of facial recognition technology.
In the CLE program, I analyzed three recent FTC enforcement cases, providing key takeaways along with actionable advice. Here’s the 5-minute clip. https://www.lklawfirm.net/publications-speaking-engagements
4. Employment Law & AI: Navigating AI Bias
On February 21, 2024, Workday faced revived allegations that its artificial intelligence tools discriminate against job applicants at several prominent companies that contract with it. The case could be one of the earliest to address the emerging legal challenges posed by employers’ growing reliance on AI-driven hiring software.
Derek Mobley, who claims to have been rejected for more than 100 jobs he applied for through Workday’s platform, filed an amended complaint in San Francisco federal court after U.S. District Judge Rita Lin dismissed his initial lawsuit the previous month.
I’ve discussed the issue of AI Employment Bias in my article: AI Legal Strategies Series No. 6: Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies https://www.lklawfirm.net/blog/ai-employment-bias-eeoc-class-actions-legal-compliance-guidelines-and-strategies
You can explore my speaking engagements and published articles at: https://www.lklawfirm.net/publications-speaking-engagements.