Content for this Newsletter:
1. FTC’s Action Against AI Lawyer
2. AI and Fake Image Lawsuit
3. AI in Healthcare
4. AI and Copyright Lawsuits
5. In-House Counsel Q&A Initiative
a. My Bloomberg Law Publication
My article “AI Needs Regulatory Guardrails in the US to Ensure Safe Use” has been published by Bloomberg Law. To read the full article and learn more about compliance strategies, please visit and subscribe to my blog at https://lklawfirm.net/ai-needs-regulatory-guardrails-in-the-us-to-ensure-safe-use/
b. Upcoming Webinar: The AI Revolution: Are You Prepared to Govern It?
Join me and Zhaoying (Dorothy) Du, Global Head of Legal at Motorola, for an exclusive webinar. Discover actionable strategies for AI governance, cross-border compliance, and ethical AI practices. Don’t miss this must-attend session for legal, compliance, and business leaders!
📅 December 12, 2024 | ⏰ 12:00 PM EST | ⏳ 60 Minutes. Registration link: https://docs.google.com/forms/d/e/1FAIpQLScl-O7GST2WCcDLz5-OeCGXzdBaNKhxT-EpQOfiPtbin3VVOA/viewform
Now, let’s move on to the latest developments in AI law. The world of AI law is evolving rapidly, and even if your company isn’t located in the states enacting these new laws, the changes may still impact your business.
1. FTC’s Action Against AI Lawyer
The Federal Trade Commission (FTC) is taking action against companies that have used artificial intelligence (AI) to deceive or harm consumers. In a new law enforcement sweep called Operation AI Comply, the FTC is targeting companies that have used AI to create fake reviews, promote misleading “AI Lawyer” services, and make false claims about using AI to help consumers make money online.
The FTC’s Chair, Lina M. Khan, has stated that using AI to trick, mislead, or defraud people is illegal. The agency is emphasizing that there is no exemption from existing laws for companies using AI. By cracking down on unfair or deceptive practices, the FTC aims to protect consumers and ensure a fair playing field for honest businesses.
2. AI and Fake Image Lawsuit
Elon Musk, Tesla, and Warner Bros. Discovery have been sued for allegedly using an AI-generated image mimicking a still from the film “Blade Runner 2049” to promote Tesla’s robotaxi concept. The lawsuit, filed by the film’s producer, Alcon Entertainment, claims that Musk and the other defendants requested permission to use the image but were denied.
The lawsuit alleges that Musk then used the AI-generated imitation of the “Blade Runner 2049” still during a presentation for Tesla’s Cybercab. Alcon argues that this constitutes copyright infringement and false endorsement.
3. AI in Healthcare
California’s new law, AB 3030, is setting the stage for state-level regulation of generative AI in medicine. Starting January 1, 2025, healthcare providers in California will be subject to new rules governing the use of generative AI (GenAI) in patient care. The law mandates transparency and accountability when AI is used to generate patient communications. Here are the key requirements of AB 3030:
i. Disclosure: Healthcare providers must clearly indicate when AI has generated patient communications, whether written, audio, or visual.
ii. Human Contact: Patients must have a clear way to contact a human healthcare professional with questions or concerns about AI-generated information. (A minimal sketch of how a messaging system might implement both requirements appears at the end of this section.)
🚫 Exemptions to the Rule:
Human-Reviewed AI: Communications that have been reviewed and approved by a licensed or certified healthcare provider.
Administrative Tasks: Routine messages like appointment reminders or billing notices.
⚖️ Enforcement and Penalties:
Physicians: Violations of AB 3030 could lead to disciplinary action by the Medical Board of California or the Osteopathic Medical Board of California.
Healthcare Facilities and Clinics: Non-compliance may result in enforcement actions under California’s Health and Safety Code.
By implementing these regulations, California aims to ensure that AI is used responsibly in healthcare, protecting patient safety and promoting transparency.
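To make the two requirements concrete, below is a minimal, hypothetical sketch of how a patient-messaging system might attach the required notices. The function name and notice wording are invented for illustration; actual disclosure language should be drafted to satisfy the statute.

```python
# Hypothetical AB 3030 helper; all names and message text are illustrative.

AI_DISCLOSURE = (
    "This message was generated by artificial intelligence and has not "
    "been reviewed by a licensed healthcare provider."
)
HUMAN_CONTACT = (
    "To reach a member of your care team directly, call the clinic or "
    "reply 'HUMAN' to this message."
)

def prepare_patient_message(body: str, reviewed_by_clinician: bool) -> str:
    """Attach the AB 3030 notices unless a licensed provider reviewed the text."""
    if reviewed_by_clinician:
        # Communications reviewed and approved by a licensed provider are exempt.
        return body
    return f"{AI_DISCLOSURE}\n\n{body}\n\n{HUMAN_CONTACT}"

print(prepare_patient_message("Your lab results are ready to view.", reviewed_by_clinician=False))
```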
4. AI and Copyright Lawsuits
a. Perplexity AI Lawsuit
A lawsuit filed by Dow Jones and the New York Post against Perplexity AI highlights the growing tension between publishers and tech companies over the use of copyrighted content for AI development. The publishers allege that Perplexity, an AI startup competing with Google, has been illegally copying their content to train its AI models and generate summaries.
While some publishers are exploring licensing agreements with AI companies, many AI developers maintain that they break no laws by accessing publicly available content for free. Notably, in May 2024, News Corp, the parent company of both Dow Jones and the New York Post, announced a multi-year partnership with OpenAI to explore ways to leverage AI technology while protecting intellectual property rights.
b. OpenAI Lawsuit
A federal judge in New York has dismissed a lawsuit against OpenAI, the company behind ChatGPT. The lawsuit, filed by news outlets Raw Story and AlterNet, claimed that OpenAI used their articles without permission to train its AI models.
⚖️ The judge ruled that the news outlets had not shown a concrete injury sufficient to give them standing to sue. However, they were given a chance to file an amended complaint.
🔍 OpenAI asserts that it uses publicly available data to train its models, a practice it argues is protected by the fair use doctrine. This case is part of a broader trend of copyright lawsuits against AI companies, with The New York Times’s suit against OpenAI being another notable example.
The legal landscape surrounding AI and copyright is still evolving, and this case will likely have important implications as AI companies continue to face similar legal challenges.
5. In-House Counsel Q&A Initiative
Many in-house legal counsel have raised concerns about the challenges of complying with data privacy laws in the AI context, particularly as several new state privacy laws are set to take effect next year. For this month’s In-House Counsel Q&A initiative, I’m sharing insights I provided during an interview with the ABA E-Privacy Committee:
To address these challenges effectively, I recommend a comprehensive, structured approach.
1. Conducting a thorough AI inventory is essential. In-house counsel should work closely with IT and business units to identify all AI systems in use or under development. From there, it’s important to categorize these systems based on their data processing activities and any associated privacy risks. (A minimal illustrative sketch of such an inventory record appears after this list.)
2. Developing an AI governance framework is key. I recommend forming a cross-functional AI governance committee that includes legal, IT, data science, and business stakeholders. This committee should establish clear policies and procedures for AI development, deployment, and monitoring, with legal review and sign-off before any new AI systems are deployed.
3. Implementing privacy impact assessments (PIAs) is equally important. These assessments should be tailored specifically for AI systems and required for all new AI initiatives and major updates. PIAs should cover data collection, processing, retention, and any potential privacy risks associated with these activities.
4. Be aware of AI’s impact on sector-specific obligations and ensure compliance with regulations such as HIPAA for health information, GLBA for financial data, or COPPA for children’s data.
5. Enhance data governance. This means working with IT to establish robust data classification and handling procedures, applying data minimization principles to AI training and operational data, and creating clear data retention and deletion policies for AI-processed data.
6. In-house counsel should review and update privacy policies and notices to ensure transparency about AI usage, particularly in customer-facing documents. At the same time, developing internal privacy guidelines for employees interacting with AI systems is crucial.
7. Strengthening consent mechanisms is another area of focus. I advise implementing granular consent options for AI data processing and ensuring the language used is clear and specific about how AI is involved. (The second sketch after this list illustrates one way to record such granular consent.)
8. In vendor relationships, it’s important to create AI-specific contractual clauses. These should address key issues like data ownership, model training, and liability for AI decisions.
9. Develop an AI incident response plan. This plan should outline procedures for addressing AI-related privacy breaches or system malfunctions, ensuring compliance with breach notification requirements across jurisdictions.
10. Ongoing monitoring and auditing are also essential. AI systems should be audited regularly for privacy compliance, and in-house counsel should stay up to date on regulatory developments to adjust practices as needed.
Finally, I recommend providing training programs. Developing specific training on AI privacy risks for developers, data scientists, and business users is vital. Regular briefings for senior management on emerging AI privacy issues help ensure the organization stays aligned.
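To make the inventory and categorization ideas in steps 1 and 3 concrete, here is a minimal, hypothetical sketch of how an inventory record might be structured so that high-risk systems lacking a PIA can be flagged for legal review. All field names and values are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class PrivacyRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI inventory (illustrative fields only)."""
    name: str
    owner: str                     # business unit responsible for the system
    purpose: str
    data_categories: list[str]     # e.g., ["customer PII", "health data"]
    trains_on_personal_data: bool
    pia_completed: bool            # a privacy impact assessment is on file (step 3)
    risk: PrivacyRisk = PrivacyRisk.LOW

def flag_for_legal_review(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Surface systems that touch sensitive data but lack a completed PIA."""
    sensitive = {"health data", "financial data", "children's data"}
    return [
        s for s in inventory
        if (sensitive & set(s.data_categories) or s.trains_on_personal_data)
        and not s.pia_completed
    ]

inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="Customer Service",
        purpose="Answer routine customer questions",
        data_categories=["customer PII"],
        trains_on_personal_data=True,
        pia_completed=False,
        risk=PrivacyRisk.MEDIUM,
    ),
]
for system in flag_for_legal_review(inventory):
    print(f"Needs review: {system.name} ({system.risk.value} risk)")
```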
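And for step 7, here is a second hypothetical sketch showing one way to record granular, purpose-level consent so that AI model training and other processing purposes are opted into separately; again, all names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsentRecord:
    """Granular, purpose-level consent for AI data processing (illustrative only)."""
    user_id: str
    purposes: dict[str, bool]  # e.g., separate opt-ins for personalization vs. training
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        # Default-deny: a purpose the user was never asked about is not consented to.
        return self.purposes.get(purpose, False)

consent = AIConsentRecord(
    user_id="u-123",
    purposes={"ai_personalization": True, "ai_model_training": False},
)
assert consent.allows("ai_personalization")
assert not consent.allows("ai_model_training")
```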
In-house legal counsel are invited to submit one question to lkempe@lklawfirm.net. Your name and company will remain confidential, and if you still prefer, I will not publish the answer at all.
Join the conversation and explore pressing legal challenges around AI, technology, intellectual property, and data privacy.
To learn more about our firm’s services and client testimonials, please click on the tabs above.