This article was published by Bloomberg Law. https://news.bloomberglaw.com/business-and-practice/more-state-data-laws-signal-companies-to-act-on-ai-and-privacy
A data privacy storm is brewing, fueled by rising government enforcement and artificial intelligence’s growing impact. As 2025 ushers in new data privacy laws in eight more states, businesses nationwide that meet certain thresholds will be affected, regardless of where they are located.
Faced with an increasingly complex regulatory climate, companies need to reexamine their data privacy frameworks to mitigate the risk of class actions and government enforcement.
State attorneys general have escalated enforcement of data privacy laws. Attorneys general in California, New Hampshire, Texas, and Virginia have established dedicated privacy units to enforce those laws, signaling heightened scrutiny for companies.
California Attorney General Rob Bonta has settled several enforcement actions alleging California Consumer Privacy Act violations, including actions against Sephora, DoorDash, and Glow, and has launched an investigative sweep of mobile apps and streaming services.
Texas Attorney General Ken Paxton announced aggressive enforcement of Texas privacy laws this year, including significant actions such as a $1.4 billion settlement with Meta. Attorneys general also collaborate through multistate coalitions under the National Association of Attorneys General to investigate data breaches and privacy incidents.
The Federal Trade Commission has expanded its privacy enforcement, focusing on health and location data. Recent actions include cases against health technology companies such as BetterHelp and Cerebral for unauthorized health data sharing.
In 2024, the FTC amended the Health Breach Notification Rule to cover health and wellness apps. It also targeted Mobilewalla, Gravy Analytics and its subsidiary Venntel, InMarket, and X-Mode Social and its successor Outlogic for selling location data, underscoring its position that location data constitutes sensitive data.
Adding to the complexity is AI’s impact on the data privacy issue. The FTC introduced “algorithmic disgorgement” as an enforcement tool, mandating companies delete AI models trained on data collected in violation of data privacy laws.
Some state laws have addressed AI-related privacy concerns by granting consumers the right to opt out of automated decision-making, including profiling, and requiring data privacy assessments for activities posing a “heightened risk of harm.”
The patchwork of state data privacy laws is growing more complex. New Hampshire, Delaware, Iowa, Nebraska, New Jersey, Tennessee, Minnesota, and Maryland will implement their own data privacy laws this year.
These laws largely align with existing state privacy laws in their obligations and consumer rights. For example, most require companies to honor universal opt-out mechanisms for data processing, a point underscored by the Sephora case, in which Sephora failed to process opt-out requests sent through the Global Privacy Control, a technical specification that lets internet users communicate their privacy preferences to the businesses they interact with.
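For illustration, here is a minimal sketch of what honoring the GPC signal might look like on a web backend. The `Sec-GPC: 1` request header and the `navigator.globalPrivacyControl` property come from the published GPC specification; the Express-style wiring, the `X-User-Id` header, and the `recordOptOut` helper are hypothetical.

```typescript
// Minimal sketch: treating the Global Privacy Control signal as a
// universal opt-out of sale/sharing. The Sec-GPC header is defined by
// the GPC specification; everything else here is hypothetical wiring.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical helper: persist the opt-out so downstream systems
// stop selling or sharing this user's personal data.
function recordOptOut(userId: string): void {
  console.log(`Opt-out of sale/sharing recorded for user ${userId}`);
}

app.use((req: Request, res: Response, next: NextFunction) => {
  // Browsers with GPC enabled send "Sec-GPC: 1" on every request;
  // client-side scripts can also check navigator.globalPrivacyControl.
  if (req.header("Sec-GPC") === "1") {
    const userId = req.header("X-User-Id"); // hypothetical identifier
    if (userId) recordOptOut(userId);
    res.locals.gpcOptOut = true; // downstream handlers skip data sales
  }
  next();
});

app.listen(3000);
```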
However, some of these laws have unique requirements. The Delaware Personal Data Privacy Act doesn’t exempt nonprofits, and it exempts only data covered under the Health Insurance Portability and Accountability Act rather than HIPAA-covered organizations as a whole. A telemedicine provider, for example, may be subject to the Delaware law for the non-protected health information it collects from Delaware residents, such as website analytics data or marketing information.
The Minnesota Consumer Data Privacy Act exempts small businesses but still requires them to obtain opt-in consent before selling sensitive personal data. Like the Delaware law, it exempts only HIPAA-covered data rather than HIPAA-covered organizations, though it does exempt certain financial institutions at the entity level. It requires businesses to maintain a data inventory, a best practice not mandated under most other laws. The Minnesota law also grants consumers the right to contest profiling outcomes based on their data and mandates clear hyperlinks labeled “your opt-out rights” or “your privacy rights.”
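Because the data-inventory requirement is unusual, here is a hedged sketch of what a single inventory entry might capture; the schema and field names are entirely hypothetical, not drawn from the statute.

```typescript
// Hypothetical sketch of one entry in a data inventory of the kind
// the Minnesota law contemplates. Field names are illustrative only.
interface DataInventoryEntry {
  dataCategory: string;       // e.g., "email address", "precise geolocation"
  sensitive: boolean;         // sensitive data triggers opt-in before sale
  source: string;             // where the data was collected from
  processingPurposes: string[];
  sharedWith: string[];       // processors and third-party recipients
  retentionPeriod: string;    // e.g., "24 months after last activity"
  usedForProfiling: boolean;  // consumers may contest profiling outcomes
}

// Example entry for a marketing email list.
const entry: DataInventoryEntry = {
  dataCategory: "email address",
  sensitive: false,
  source: "newsletter signup form",
  processingPurposes: ["marketing", "analytics"],
  sharedWith: ["email service provider"],
  retentionPeriod: "24 months after last activity",
  usedForProfiling: false,
};
```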
The Maryland Online Data Privacy Act prohibits selling sensitive personal data and limits the collection, processing, or sharing of personal data to what is strictly necessary for the requested services. It also bans selling minors’ data, or processing it for targeted advertising, if the controller knows or should have known that the consumer is under 18.
Starting in October, the Maryland law will also require privacy impact assessments for high-risk algorithms, potentially resulting in extensive assessment obligations. This isn’t an exhaustive list of the unique requirements under the new data privacy laws.
Compliance Strategies
In light of these aggressive government actions, AI’s impact, and the data privacy laws soon taking effect, companies should consider the following key actions and determine whether to adopt a uniform nationwide approach or a state-by-state approach.
Under the nationwide approach, a company adopts data privacy practices that satisfy both the common legal requirements and the unique legal requirements of every applicable state law. This approach may help future-proof against new state data privacy laws. However, it may cost revenue by imposing unnecessary restrictions in states with more permissive laws, and it could lead smaller companies, or those with limited data processing activities, to spend resources on unnecessary compliance.
Under the state-by-state approach, a company’s compliance mechanisms vary based on the laws of each state in which it offers products or services. This approach could be cost-effective for companies subject to only a limited number of state laws. However, it requires constant updates as new privacy laws are enacted.
- Regardless of the approach it adopts, a company needs to reexamine every aspect of its data practices, including data collection, processing, sharing, storage, and deletion, in light of the new requirements under the privacy laws, the areas targeted by the FTC, and AI-related processing, especially where sensitive data and minors’ data are concerned.
- It’s essential to implement privacy impact assessments where required by law, including for high-risk AI systems and processing activities that pose a heightened risk of harm to consumers.
- Consumer-facing privacy policies and notices should be reviewed and revised as necessary to ensure compliance with new privacy laws. At the same time, developing internal privacy guidelines for employees interacting with AI systems is crucial.
- Strengthening consent mechanisms is another key focus for complying with new privacy laws. Implement granular consent options for AI data processing, and ensure the language used is clear and specific about how AI is involved; a sketch of one way to model such consent follows this list.
- Financial services and health-care providers should conduct a thorough legal analysis of whether state data privacy laws apply to them, even though they are subject to the Gramm-Leach-Bliley Act and HIPAA.
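As one illustration of the granular-consent point above, here is a hedged sketch of how a consent record could separate choices by processing purpose. The field names and purposes are hypothetical, not statutory terms.

```typescript
// Hypothetical sketch of a granular consent record: separate, opt-in
// choices per processing purpose instead of one blanket "I agree" box.
interface ConsentRecord {
  userId: string;
  timestamp: Date;
  purposes: {
    analytics: boolean;
    targetedAdvertising: boolean;
    saleOfPersonalData: boolean;
    aiModelTraining: boolean;        // AI involvement disclosed explicitly
    automatedDecisionMaking: boolean;
  };
  sensitiveDataOptIn: boolean; // affirmative opt-in, as Minnesota requires
  noticeVersion: string;       // ties the choice to the disclosure shown
}

// Example: a user who allows analytics but declines all AI processing.
const consent: ConsentRecord = {
  userId: "u-123",
  timestamp: new Date(),
  purposes: {
    analytics: true,
    targetedAdvertising: false,
    saleOfPersonalData: false,
    aiModelTraining: false,
    automatedDecisionMaking: false,
  },
  sensitiveDataOptIn: false,
  noticeVersion: "2025-01",
};
```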
The complexity of navigating patchwork regulations, stricter government enforcement, and AI-driven privacy challenges necessitates consulting with a highly skilled, not just experienced, privacy attorney.
By building robust data privacy compliance programs, businesses can avoid costly enforcement actions, gain customer trust, and strengthen their brand in an increasingly privacy-conscious world.