The New Compliance Challenge Hiding in Your Optimization Stack
If you opened Uber Eats in New York recently, you might have noticed something new: a disclaimer stating "This price was set by an algorithm using your personal data." It's easy to dismiss it as yet another privacy notice cluttering up the interface. It's not.
This disclosure represents the first shot across the bow in what's shaping up to be the next major regulatory wave hitting digital businesses. And if you're running optimization and personalization programs using tools like Adobe Target, Optimizely, or any platform that adjusts what users see based on their data, you need to pay attention. Because while this law targets "algorithmic pricing," the line between optimization, personalization, and pricing is a lot blurrier than most organizations realize.
What Just Happened in New York
On November 10, 2025, New York's Algorithmic Pricing Disclosure Act took effect. The law is straightforward. If your business uses an algorithm that incorporates personal data to set prices, you must display a clear disclosure stating "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."
The disclosure must appear alongside the price itself, not buried in terms of service or privacy policies. It must be "easily visible and understandable" to the average consumer. And the penalties for noncompliance are $1,000 per violation with no cap, enforceable by the New York Attorney General without needing to prove any consumer harm occurred.
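To make the mechanics concrete, here is a minimal sketch of what "alongside the price" could look like in a storefront codebase. It assumes your pricing service already tells the front end whether personal data influenced the quote; the interface and field names are hypothetical, not taken from any particular platform or from the statute itself.

```typescript
// A minimal sketch, not production code. Assumes the pricing service returns
// a flag indicating whether personal data influenced this quote; the interface
// and field names below are hypothetical.

interface PriceQuote {
  amountCents: number;
  currency: string;                 // e.g. "USD"
  personalizedByAlgorithm: boolean; // set upstream, wherever pricing actually happens
}

const NY_DISCLOSURE =
  "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA";

// Render the disclosure directly next to the price when required,
// rather than linking out to a terms-of-service or privacy page.
function renderPrice(quote: PriceQuote): string {
  const price = `${quote.currency} ${(quote.amountCents / 100).toFixed(2)}`;
  return quote.personalizedByAlgorithm ? `${price} (${NY_DISCLOSURE})` : price;
}
```

The hard part isn't rendering the string. It's making sure that flag is set accurately wherever an algorithm actually consumes personal data upstream, which is a data governance question, not a front-end one.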
This isn't just a theoretical law waiting for enforcement. Attorney General Letitia James issued a consumer alert days before the law took effect, encouraging New Yorkers to report violations through an official complaint form. She's made enforcement a stated priority, and companies are taking notice. Uber, DoorDash, and Uber Eats have already implemented the disclosures.
The National Retail Federation tried to stop the law, arguing it violated the First Amendment by compelling misleading speech. In October 2025, Judge Jed S. Rakoff rejected that challenge, calling the disclosure "plainly factual" and "uncontroversial," and noting that a business's preference not to make a disclosure doesn't render the requirement unconstitutional.
The law survived. The enforcement is real. And the implications extend far beyond food delivery apps.
Why This Isn't Just a New York Problem
If you're thinking "we don't operate in New York" or "this is really about Uber's surge pricing," you're missing the pattern that anyone who lived through GDPR and the subsequent privacy law avalanche will recognize immediately.
Let’s check in on what California is doing. In October 2025, Governor Newsom signed AB 325, which treats algorithmic pricing as an antitrust issue. Effective January 1, 2026, California prohibits using "common pricing algorithms" that share competitor data and creates liability for "coercing" others to adopt algorithm-recommended prices. The law also lowers the pleading standard for antitrust claims, making it easier to sue. Penalties run up to $6 million for corporate violations.
New York then passed a second law (AB 1417B) specifically targeting rental housing, banning the use of algorithms to set rental rates. That takes effect December 15, 2025. New Jersey, Pennsylvania, and Ohio have pending legislation. Cities including Philadelphia, Seattle, Minneapolis, and Jersey City have passed their own bans on algorithmic rental pricing.
At the federal level, the FTC launched a "surveillance pricing" investigation in July 2024, issuing orders to eight companies including Mastercard, Accenture, PROS, and McKinsey. While the new FTC leadership has deprioritized continuing that study, the Justice Department has been aggressive. In August 2024, the DOJ sued RealPage, a property management software company, for allegedly enabling algorithmic price-fixing in rental housing. In November 2025, they reached a settlement requiring RealPage to stop using real-time competitor data. The DOJ also sued six major landlords who used RealPage's software.
Sound familiar? This is the same trajectory we saw with privacy regulation. GDPR passed in 2016. Most US companies assumed it was "a Europe problem." Then California passed CCPA in 2018. As of 2025, nineteen states have comprehensive privacy laws, each with slightly different requirements, definitions, and enforcement mechanisms. Organizations that moved early had time to build proper compliance infrastructure. Those who waited are now playing expensive catch-up across a patchwork of conflicting state requirements.
The algorithmic pricing regulatory wave is following the exact same pattern, just moving faster. And notably, there's no federal preemption on the horizon that would simplify this landscape.
The Accountability Gap: Who's Responsible When the Algorithm Sets the Price?
Here's where this gets uncomfortable for a lot of organizations: you're responsible for what your vendor's algorithm does, even if you don't fully understand how it works.
The vendor pitch sounds great:
"Our AI maximizes revenue through personalized experiences."
"Machine learning delivers the right offer to the right customer."
"Automated optimization drives conversion lift."
These platforms are marketed on the promise that you don't need to understand the technical details because the algorithm handles it.
The RealPage case illustrates what happens when that logic meets regulatory scrutiny. Landlords using RealPage's software argued they were just following recommendations from a sophisticated pricing tool. The DOJ's position was clear: you don't get to outsource accountability to your software vendor. If the algorithm is coordinating prices using competitor data in ways that violate antitrust law, the landlords using it are liable.
One landlord told RealPage they started increasing rents within a week of adopting the software and had raised them more than 25% within eleven months. The software was explicitly marketed as helping landlords "drive every possible opportunity to increase price" and "avoid the race to the bottom in down markets."
Now translate this to optimization and personalization platforms. Adobe Target offers "Automated Personalization" powered by Adobe Sensei AI that "delivers fully individualized experiences at scale by dynamically adjusting content to each user without requiring predefined segments." It can integrate "dynamic external data sources (e.g., inventory updates, weather conditions, pricing data) to personalize in real-time."
Optimizely markets algorithmic features that automatically allocate traffic to winning variations using multi-armed bandit algorithms. Dynamic Yield (owned by Mastercard) was explicitly named in the FTC's surveillance pricing investigation and uses machine learning to "continuously optimize experiences across every user interaction."
These are powerful tools. But the question every optimization team needs to answer is this: can you document exactly what personal data your platform collects, how it uses that data in decision-making, and whether those decisions affect pricing, either directly or indirectly?
If your answer involves phrases like "the vendor handles that," "it's in the platform," or "we'd need to ask our account rep," you have a documentation problem that's about to become a compliance problem.
The Gray Zone: Where Optimization Meets Pricing
The New York law defines "personalized algorithmic pricing" as "dynamic pricing derived from or set by an algorithm that uses consumer data." Consumer data means "any data that identifies or could reasonably be linked, directly or indirectly, with a specific consumer or device."
That's broad. Deliberately so.
Some scenarios are clear violations. If you're charging different customers different base prices for identical products based on their browsing history, demographics, or purchase patterns, that triggers disclosure requirements. If your algorithm looks at someone's ZIP code and income level to determine what price to show them, you're in scope.
Some scenarios are clearly safe. If you're running a straightforward A/B test showing half your traffic Price A and half Price B with random assignment and no personalization, that's testing market response, not personalized algorithmic pricing. If your prices change based purely on supply and demand dynamics (airline seats getting more expensive as departure approaches) without incorporating individual user data, you're doing dynamic pricing but not personalized algorithmic pricing.
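To make the contrast concrete, here is a hedged sketch of both ends of the spectrum. The function names, fields, and multipliers are invented for illustration, not drawn from any real platform or from the statute.

```typescript
// Illustrative only; the names, fields, and multipliers are made up.

// Randomized A/B price test: assignment is a coin flip, not anything that
// identifies the user, so the displayed price is not derived from consumer
// data. (In practice the assignment would be persisted so a visitor sees a
// consistent price across the session.)
function abTestPrice(priceA: number, priceB: number): number {
  return Math.random() < 0.5 ? priceA : priceB;
}

// Personalized algorithmic pricing: the price depends on attributes that can
// be linked to a specific consumer or device, which is what the New York law
// says triggers the disclosure.
interface ShopperSignals {
  zipCode: string;
  loyaltyTier: "none" | "silver" | "gold";
  browsingIntensity: number; // hypothetical 0-1 engagement score
}

function personalizedPrice(base: number, signals: ShopperSignals): number {
  let price = base;
  if (signals.loyaltyTier === "gold") price *= 0.95;  // targeted discount
  if (signals.browsingIntensity > 0.8) price *= 1.05; // inferred willingness to pay
  return price;
}
```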
But there's a large gray area in between that encompasses much of what optimization and personalization programs actually do:
Personalized promotions: Your platform shows different discount offers to different users based on their profile or behavior. You're not changing the "base price," but you're algorithmically determining who sees what promotional price. Does that trigger disclosure?
Content personalization affecting price visibility: Your recommendation engine determines which products to show different users. User A sees premium products because their browsing history suggests higher willingness to pay. User B sees budget options. The prices themselves are consistent, but the personalization affects what prices users encounter. Where's the line?
Location-based pricing granularity: Rideshare pricing based on distance and duration is explicitly exempt from the New York law. But what about e-commerce that adjusts prices based on ZIP code analysis? Or restaurants that charge different delivery fees based on neighborhood? At what level of geographic granularity does location data become "personal data" triggering disclosure?
Loyalty program pricing: Your members get special prices. The platform uses their purchase history and engagement patterns to determine what offers to present. Is that using personal data for algorithmic pricing, or is it a subscription-based relationship that's exempt?
The honest answer is that much of this is untested. The law is three weeks old. We don't have case law, regulatory guidance, or established precedent. What we have is broad statutory language and an Attorney General who has said enforcement is a priority.
The practical reality for optimization teams is this: if you can't clearly explain what your platform does and why it's not in scope, you need to assume you might be in scope.
What This Means for Your Optimization Program
The immediate question isn't "should we stop optimizing" or "are A/B tests illegal now." The question is: do you actually know what your tools are doing?
1 - Start with an audit:
Can you document what personal data your optimization platforms collect? Not what they're allowed to collect per the contract, but what they actually collect.
Can you trace how that data flows through your systems and whether it influences pricing decisions at any point?
Can you explain, in plain language, how your personalization algorithms make decisions? If a user asks "why am I seeing this price," could you answer?
Do you have the technical documentation and audit trails to demonstrate, if challenged, that your algorithms aren't using personal data for pricing? (A rough sketch of one such check follows this list.)
If you use third-party tools, and most organizations do, can you clearly delineate where your vendor's responsibility ends and yours begins? What does your contract actually say about compliance obligations?
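If your pricing or personalization layer can emit a decision log listing the input fields each decision consumed, even a crude script can surface the decisions that deserve scrutiny. The sketch below assumes such a log exists; the surfaces, field names, and the PERSONAL_DATA_FIELDS list are hypothetical placeholders, not a legal standard.

```typescript
// A rough sketch, assuming your pricing or personalization layer can emit a
// decision log of the input fields each decision consumed. The surfaces and
// field names below are hypothetical placeholders.

interface DecisionLogEntry {
  decisionId: string;
  surface: "price" | "promotion" | "content";
  inputFields: string[]; // e.g. ["zip_code", "inventory_level", "time_of_day"]
}

// Placeholder list of fields that could reasonably be linked to a specific
// consumer or device; the real list is a legal question, not a config file.
const PERSONAL_DATA_FIELDS = new Set([
  "zip_code",
  "device_id",
  "purchase_history",
  "browsing_history",
  "loyalty_tier",
]);

// Surface every price- or promotion-affecting decision that consumed a
// personal-data field. These are the decisions you need to be able to
// explain, and possibly disclose.
function decisionsNeedingReview(log: DecisionLogEntry[]): DecisionLogEntry[] {
  return log.filter(
    (entry) =>
      entry.surface !== "content" &&
      entry.inputFields.some((field) => PERSONAL_DATA_FIELDS.has(field))
  );
}
```

The output isn't a compliance determination. It's a worklist for the conversation you need to have with legal and with your vendor.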
2 - Review your vendor relationships:
This is a good time to revisit what your optimization vendors are actually providing and how much visibility you have into their algorithms. When you signed up for Adobe Target's "Automated Personalization" or Optimizely's AI-powered features, did anyone explain exactly how the algorithms work? Can you get that documentation now?
The vendor selection criteria that made sense three years ago, like fastest time to value, easiest implementation, and best out-of-the-box features, may need updating. Explainability, auditability, and compliance support are becoming table stakes.
3 - Understand the strategic implications:
The New York law requires disclosure, not prohibition. You can still do personalized algorithmic pricing; you just have to be transparent about it.
For some businesses, that transparency might actually be a competitive advantage. "Our AI finds you the best price based on your preferences" could be a feature, not a liability. But it requires framing the value exchange clearly: here's what data we use, and here's how you benefit.
For others, the disclosure might trigger consumer backlash. Uber criticized the New York law as "poorly drafted and ambiguous" even as it implemented compliance, suggesting the company views the disclosure as potentially damaging rather than neutral.
4 - Don't wait for a crisis to figure this out:
Organizations that took privacy compliance seriously early, when GDPR was still just a proposal, had time to build proper data governance, update systems thoughtfully, and train teams on new requirements. Those who waited until enforcement actions started are now scrambling through retroactive compliance at much higher cost and risk.
The same dynamic applies here. Right now, you have time to audit your tools, update documentation, revise vendor contracts, and make deliberate choices about how you want to approach optimization in this new environment. Six months from now, when you're responding to an Attorney General inquiry, your options narrow considerably.
The Bigger Picture: AI Accountability Comes Home
Step back from the specific compliance requirements, and what you're seeing is a broader reckoning with algorithmic decision-making across sectors.
The "black box" excuse, we can't explain how the AI works, it's too complex, is being rejected by regulators. The DOJ didn't accept it from RealPage. The New York court didn't buy it from the National Retail Federation. European regulators building the AI Act aren't buying it from anyone.
For business leaders, this creates a fundamental shift in how you should think about AI and automation tools:
Third-party AI tools are your legal responsibility. When that algorithm makes a decision on your behalf, be it setting a price, approving a loan, or determining who sees what offer, you own the outcome. The vendor might provide the platform, but you're accountable for how it's used.
"We didn't know how it worked" isn't a defense. In fact, it might make things worse. If you deployed an algorithmic system that materially affects customers without understanding what it does, that's a governance failure, not an excuse.
Vendor selection criteria must evolve. The ability to explain what an AI system does, provide audit trails, and support compliance requirements is now a must-have. If your vendor can't provide clear documentation of how their algorithms work and what data they use, that's a red flag.
Budget for compliance infrastructure. Organizations collectively spent an estimated $55 billion on CCPA compliance. That's not just California companies; that's every company that had to build systems to handle California's requirements. Now multiply that by nineteen state privacy laws with different requirements. The cost of managing patchwork compliance is substantial and ongoing.
The irony is hard to miss. Tools marketed specifically for optimization and efficiency are creating new categories of compliance work that require significant resources to manage. The ROI calculation on your optimization stack needs to factor in not just conversion lift, but compliance cost.
Where This Goes Next
The New York disclosure law is narrow in scope: it applies only to New York, only to pricing, and only when algorithms use personal data. But its implications are potentially very broad.
California's antitrust approach opens another enforcement avenue entirely. The DOJ's aggressive pursuit of RealPage and major landlords signals that federal regulators view algorithmic coordination as a serious antitrust violation. The FTC investigation, even if deprioritized under new leadership, created a public record of surveillance pricing practices that will inform future policy.
Other states are watching. Pennsylvania, New Jersey, and Ohio have pending legislation. Cities are acting faster than states on rental pricing. More proposals will come.
The question for any organization running optimization or personalization programs isn't whether this regulatory trend will affect you. It's when, and how much it will cost you to comply.
If you're running Adobe Target campaigns that adjust offers based on user profiles, using Optimizely's algorithmic features to optimize conversion, or deploying any platform that personalizes experiences using behavioral data, you need to be asking harder questions about what those platforms are actually doing, how they use personal data, and whether their decisions touch pricing in any way.
Because regulators are starting to ask those questions. And "the vendor handles that" is not going to be an acceptable answer.
The organizations that will navigate this transition most successfully are the ones that stop treating their optimization tools as black boxes and start treating them as systems they're responsible for understanding and controlling. That means better documentation, clearer vendor relationships, actual knowledge of how the algorithms work, and honest assessment of whether the optimizations you're running are the experiences you want to be transparent about delivering.
The New York law requires eleven words of disclosure. But the real requirement is much more fundamental: know what your tools do, own the decisions they make on your behalf, and be prepared to explain it when asked.
If you can't do that today, you have work to do. And the organizations that do that work now, deliberately and thoroughly, will be far better positioned than those who wait for enforcement to force the issue.
If you need help auditing your optimization stack for compliance risks or evaluating whether your personalization programs trigger disclosure requirements, let's talk. This is exactly the kind of strategic technical assessment where having an outside perspective with no vendor stake in the answer can cut through a lot of uncertainty quickly.