Privacy and AI in the Hot Seat: What 2024’s Enforcement Trends Reveal about Compliance Priorities

Key Lessons from Data Crackdowns
Enforcement actions in 2024 exposed weaknesses in artificial intelligence (AI)-driven business practices, data retention and security, third-party data transfers, and consumer-facing notices. From AI claims misleading consumers to unauthorized transfers of personal data, federal, state, and international regulators are sending a strong message: companies must take a proactive approach to privacy and AI compliance or risk enforcement action. This client alert breaks down notable enforcement actions of 2024 and the lessons businesses should take away to close compliance gaps before regulators act.
AI under Fire: When Innovation Becomes an Issue
AI’s rapid expansion drew intense regulatory scrutiny, and enforcement agencies are making it clear that innovation is not an excuse for deception. The Federal Trade Commission’s (FTC) Operation AI Comply put AI-driven business practices under the microscope, targeting companies that use artificial intelligence as a marketing and selling point without delivering on their promises.
For example, DoNotPay, a company that positioned itself as the “world’s first robot lawyer,” promised consumers that its chatbot could operate like a human lawyer, generating demand letters or initiating cases while applying the relevant laws to consumers’ legal and factual situations and drawing on legal expertise and knowledge to avoid complications. DoNotPay could not live up to or substantiate these claims: its service could not replace the expertise of a human lawyer. Similarly, Ascend and Ecommerce Empire Builders promised consumers passive, income-generating businesses built on AI-powered online storefronts, but little to no profit materialized for many customers, and the FTC found such claims to be unsubstantiated or false. Meanwhile, Rytr offered consumers an AI-driven review generator that polluted the marketplace with fake consumer testimonials containing material details unrelated to the user’s input. The resulting testimonials were false and likely to deceive potential consumers who rely on reviews to make purchasing decisions.
At the state level, the Texas attorney general (AG) reached a settlement with Pieces Technologies (“Pieces”), an AI healthcare company accused of misleading hospitals about the accuracy of its AI-powered product, which summarized, charted, and drafted clinical notes for medical staff. The company claimed its product delivered near-perfect accuracy, even stating its products had a “severe hallucination rate” (or error rate) of “<1 per 100,000.” The AG’s investigation found that such claims were likely inaccurate. As part of the settlement, Pieces agreed to clearly disclose its product’s accuracy and the extent to which users should or should not rely on its products. Notably, the AG warned, “AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible.”
AI Innovation and Regulation: Finding the Balance
While enforcement actions highlight the risk of improper deployment of AI, policymakers continue to champion AI innovation while balancing consumer protection. The Trump administration has been active in shaping AI governance, with a recent executive order revoking older AI policies and directives, with the purported aim of keeping the US at the forefront of global AI leadership. That aim should not be misinterpreted as a license to ignore sound AI governance and risk management practices. In fact, in 2019 the Trump administration, through Executive Order 13859, laid the groundwork for what would become the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, which has become one of the de facto standards for companies seeking to mitigate AI-related risks. Companies should also keep in mind that states have been actively developing AI legislation, with Colorado leading the way.
For companies leveraging AI, the takeaway is clear: while AI is the future, it should have guardrails. As AI enforcement continues, companies must balance innovation with consumer protection.
BRG’s Privacy and Data Compliance team can help you build business-friendly AI governance strategies that keep you on the cutting edge without creating unnecessary risks and inviting regulatory scrutiny.
Data Retention Reckoning: Poor Data Retention and Disposal Practices Create Compliance Risks
Storing sensitive data without securing it and failing to enforce a data retention policy are liabilities. The FTC’s enforcement action against Blackbaud, Inc. highlights the dangers of both poor security practices and excessive data storage. Blackbaud, a provider of data services and fundraising software, failed to encrypt sensitive consumer data, including Social Security numbers and bank account details, leaving it vulnerable to attack. In early 2020, a hacker exploited weaknesses in the company’s systems, accessing and exfiltrating unencrypted consumer data in a breach that went undetected for months. In the FTC’s view, much of that data should not have been there in the first place. Although the FTC raised other concerns in Blackbaud (including security and notice issues), a key takeaway from the case is that the FTC determined, for the first time, that excessive retention of consumer data can itself violate Section 5 of the FTC Act (even if the notice and security issues had not existed).
Data minimization is a core privacy principle: companies should collect and retain only the personal data they need for a specific purpose and securely dispose of it when it is no longer necessary. When organizations hold onto data indefinitely, they increase the risk of exposure and invite unnecessary regulatory scrutiny. Strong data retention and disposal practices are a key privacy safeguard.
Improper Transfers: When Third-Party Trackers Expose More Than Intended
Regulators are making it clear: improper transfers of personal data to third parties are a major compliance risk. Recent enforcement actions in Sweden against Apoteket AB, Apohem AB, and Avanza Bank highlight the dangers of misconfigured tracking tools like the Meta Pixel, which resulted in the unlawful transfer of customer data. These actions serve as a reminder that businesses integrating third-party technologies must ensure they control, limit, and monitor the data being shared.
The Cost of Misconfigured Pixels: Unintended Data Transfers to Meta
- Apoteket AB and Apohem AB, two of Sweden’s largest pharmacy chains, were fined for unintentionally transmitting customer names, emails, addresses, product purchases, and sensitive health-related purchase data (e.g., health treatments, self-tests) to Meta. The transfer stemmed from their activation of the Meta Pixel’s Automatic Advanced Matching (AAM) function, which expanded data processing beyond the intended purpose and allowed the improper transfer of personal data.
- Avanza Bank, a major financial institution, was fined for a similar Meta Pixel configuration that led to the unauthorized transfer of highly sensitive financial data, including account numbers, securities holdings, loan amounts, and social security numbers, to Meta over a two-year period.
The Swedish Privacy Protection Authority concluded that these companies failed to implement sufficient technical and organizational measures to secure sensitive customer data. The companies did not have processes in place to detect or prevent such transfers, nor did they perform the necessary risk assessments before activating the AAM function.
Conducting proper privacy assessments is a sign of a mature privacy program. It demonstrates to regulators an organization’s commitment to data processing oversight and risk management.
Privacy Notices under the Microscope: “Crystal Clear” Is the Standard
Privacy notices must be clear, detailed, and accessible. The Dutch Data Protection Authority (DPA) fined Netflix €4.75 million for failing to inform customers about how their data was used between 2018 and 2020. The investigation found that Netflix’s privacy statement lacked key details about why it collects data and what it does with it.
The Dutch DPA stressed that a company of Netflix’s size should have provided “crystal clear” information about its data practices.
Regulators expect businesses to be upfront and explicit about their data practices. A privacy notice is not just a formality; it’s a legal and consumer trust obligation.
DoorDash and Hidden Data Sale: When Sharing Becomes a Compliance Risk
DoorDash was also found to have insufficient notice practices. The California AG’s office determined that DoorDash’s participation in a marketing cooperative, through which it shared customer names, addresses, and transaction details in exchange for marketing services, constituted a sale of personal data. DoorDash failed to properly disclose this sale in its privacy notice or provide users with the required opt-out mechanism.
This action underscores a critical compliance lesson: “sharing” personal data for marketing purposes can qualify as a sale. Companies must clearly disclose these practices and offer consumers an opt-out.
As these enforcement actions make clear, keeping up with changing laws and the enforcement landscape in privacy and AI is complex. BRG’s Privacy and Data Compliance team believes that the best way to prepare is through flexible, scalable, and proactive governance. Our team can help your company navigate complex privacy laws and implement best-in-class compliance strategies before issues arise. Let us help you get ahead of them.

