AI NEWSLETTER EDITION 2

Index:

1. AI and Employment Law: New Development

2. AI and Securities Law: SEC Warning on AI-Related Security Risk Disclosure

3. AI and Data Privacy:

a. Target Lawsuit

b. Proposed American Privacy Rights Act’s Impact on AI

4. AI and State AG: MA Attorney General Advisory

 

1. AI and Employment Law: New Development

Is your company using Workday or another AI tool for hiring? The EEOC is back in the spotlight on AI hiring tools. On April 11, the EEOC urged a federal judge not to dismiss Derek Mobley's proposed lawsuit against Workday, the human resources tech company, alleging discrimination based on race and disability. The EEOC's AI Guidance made clear that employers are liable for any disparate impact caused by AI tools designed or administered by third-party AI vendors.

How can your company ensure that its AI tools are built and used in a non-discriminatory manner in the employment context? For details about this case and a deep dive into legal strategies for mitigating AI bias when using AI tools in employment, check out my article published by the American Bar Association's peer-reviewed publication, Business Law Today, titled "Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies": https://www.lklawfirm.net/blog/ai-employment-bias-eeoc-class-actions-legal-compliance-guidelines-and-strategies

 

2. AI and Securities Law: SEC Warning on AI-Related Security Risk Disclosure

Could your public company's C-level executives be personally liable for AI-related security risk disclosure failures? At the 2024 Program on Corporate Compliance and Enforcement Spring Conference, SEC Enforcement Director Gurbir S. Grewal cautioned individual actors about disclosure failures concerning security risks posed by AI. In addition to addressing "AI-washing" (see my published blog article on this topic: https://www.lklawfirm.net/blog/avoid-regulatory-risk-ai-washing-greenwashing-artificial-intelligence-ftc-sec-scrutiny), Grewal discussed the SEC's approach to individual liability for AI-related disclosure failures, emphasizing that, under SEC standards, liability turns on what an individual knew and what actions they took.

Grewal's remarks underscore individual actors' obligation to ensure accurate AI-related disclosures. The SEC not only monitors individual conduct but also encourages good-faith self-reporting of disclosure failures, signaling potential enforcement action against non-compliance. It is therefore critical for a public company's in-house legal counsel to help C-level executives understand their disclosure obligations and ensure accurate representation of AI's role and risks in their businesses, shielding them from personal liability.

 

3. AI and Data Privacy

a. Target Lawsuit

Is your company using AI technology to collect biometric data from employees or consumers? Companies are increasingly using AI to collect biometric data (such as fingerprints or facial scans) from both employees (for security and access control) and consumers (for identification and recommendations).

A recent lawsuit filed in Illinois has thrust biometric data privacy into the spotlight, alleging that Target violated Illinois’s Biometric Information Privacy Act (BIPA) by surreptitiously collecting customers' biometric data. This case highlights the growing importance of understanding biometric data privacy and its legal implications for your company. BIPA has generated a significant volume of litigation since its enactment in 2008, and the adoption of biometric privacy laws is a growing trend across the country.

 

I'll be diving deeper into biometric data privacy laws and recent lawsuits in an upcoming CLE program on May 23. Here's the registration link: https://www.nbi-sems.com/ProductDetails/99056ER

Nationwide CLE credits are available. Use Promo Code FSPN50A at checkout to get $50 off! I also have two complimentary tickets; contact me directly if you'd like one.

Hope to see you there! 

 

b. Proposed American Privacy Rights Act’s Impact on AI

If passed, the American Privacy Rights Act (APRA) would have an impact on AI. The law might limit the amount of data AI developers can use to train their models because of its data minimization principle. In addition, the APRA mandates that companies using "covered algorithms" (which encompass AI) to make decisions, or to aid human decision-making, using covered data must adhere to various obligations. These include evaluating the algorithm's design to mitigate potential harms, conducting impact assessments, and offering notice and an opt-out opportunity when a covered algorithm influences "consequential decisions" (such as those pertaining to an individual's access to housing, employment, education, healthcare, insurance, credit, or public accommodation).

Several states, including California, have already passed similar laws. Additionally, the data minimization principle presents a unique challenge for AI. I'll be discussing both topics in the May 23 CLE program. Here's the registration link: https://www.nbi-sems.com/ProductDetails/99056ER (use Promo Code FSPN50A at checkout to get $50 off).

4. AI and State AG: MA Attorney General Advisory

On April 16, 2024, the Massachusetts Attorney General's Office ("AG") published an Attorney General Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence. The Advisory clarifies how existing consumer protection, data security, and antidiscrimination laws apply to AI systems.

a. Consumer Protection:

The Advisory identifies several deceptive practices related to AI systems as violations of the Massachusetts consumer protection law. These include:

  • False advertising of AI capabilities.

  • Misrepresentation of AI performance, safety, or condition.

  • Deepfakes, voice cloning, or chatbots used for deception.

  • Supplying defective, unusable, or unfit-for-purpose AI systems.

b. Data Security:

The Advisory confirms that the state data security law applies to AI systems. This necessitates safeguarding protected personal information used by AI and complying with data breach notification requirements.

c. Antidiscrimination:

The AG emphasizes that AI systems, including algorithmic decision-making tools, cannot perpetuate discrimination. This includes using discriminatory inputs or generating discriminatory results based on protected characteristics, even unintentionally.

The Advisory cites the Equal Credit Opportunity Act as an example, requiring creditors using AI to provide specific, non-discriminatory reasons for loan application denials. I discuss this issue in my blog article titled “AI Risk Mitigation and Legal Strategies Series No. 1: Financial Services Industry”: https://www.lklawfirm.net/blog/financial-services-aml-glba-fcra-ecoa-regulatory-compliance-artificial-intelligence

 