Federal Trade Commission: Algorithmic pricing antitrust enforcement trends and judicial rulings 2025
Reported On: 2026-02-16
EHGN-REPORT-31365

### FTC and DOJ Joint Statements of Interest: Reframing 'Per Se' Illegality for Algorithms

Date: February 16, 2026
Section: Investigative Report – Part IV
Subject: Antitrust Enforcement Strategy & Judicial Outcomes (2024–2026)
Classification: FEDERAL / LEGAL / DATA FORENSICS

#### The Pivot to 'Per Se' Prosecution
Between March 2024 and January 2026, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) Antitrust Division executed a calculated shift in enforcement strategy. They moved beyond investigating "tacit collusion"—a notoriously difficult standard to prove—and began filing Statements of Interest (SOIs) arguing that the use of shared pricing algorithms constitutes per se illegal price-fixing under Section 1 of the Sherman Act.

This distinction is mathematical and legal, not merely rhetorical. Under the "Rule of Reason," plaintiffs must prove that a practice actually harmed competition in a specific market—a process requiring years of econometric analysis. Under "per se" illegality, the act itself is the crime. The Agencies’ new thesis, crystallized in filings across RealPage, Yardi, and Caesars Entertainment, posits that the algorithm itself acts as the smoke-filled room. When competitors delegate pricing authority to a common software "hub" that pools non-public data, they enter into a horizontal agreement to fix prices, regardless of whether they ever speak to one another.

Jonathan Kanter (DOJ) and Lina Khan (FTC) explicitly attacked the defense that "pricing recommendations are non-binding." The Agencies’ data forensics teams identified that even when human operators retain the ability to override the software, the starting point of the price negotiation has already been artificially stabilized by the algorithm. The 2024 SOI filed in Cornish-Adebiyi v. Caesars Entertainment (District of New Jersey) established the Agencies' baseline argument: "Competitors cannot lawfully cooperate to set their prices, whether via their staff or an algorithm."

#### The Mechanics of the "Hub-and-Spoke" Conspiracy
The Agencies’ filings decompose the algorithmic collusion into a tangible "hub-and-spoke" model. The software provider (RealPage, Yardi, Cendyn) serves as the "hub." The competing landlords or hotel operators are the "spokes."

In the RealPage and Yardi cases, the "rim" of the wheel—the agreement between the spokes—was the critical evidentiary gap. Traditional antitrust jurisprudence required proof that Competitor A agreed with Competitor B. The DOJ’s 2024-2025 filings argued that the rim is established by the shared data pool.

Technical Breakdown of the Violation:
1. Data Ingestion: Landlords submit proprietary, real-time lease data (effective rent, lease terms, occupancy) to the Hub. This is non-public data.
2. Algorithmic Processing: The Hub aggregates this data to calculate a market-clearing price that maximizes yield, not occupancy.
3. Reciprocal Action: Each Spoke understands that the Hub’s recommendation is derived from the pooled data of their rivals. By adopting the software, they signal their intent to price based on collective knowledge rather than independent assessment.
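The three-step data flow above can be sketched as a minimal model. This is an illustration only, with invented field names and a toy pricing rule, not any vendor's actual schema or algorithm:

```python
from statistics import mean

# Hypothetical sketch of the "hub" described above: each spoke (landlord)
# submits non-public lease data; the hub pools it and returns a
# recommendation informed by every rival's private numbers.
class PricingHub:
    def __init__(self):
        self.pooled = []  # non-public effective rents from all spokes

    def ingest(self, landlord_id, effective_rent, occupancy):
        # Step 1: Data Ingestion — proprietary, real-time lease data
        self.pooled.append({"landlord": landlord_id,
                            "rent": effective_rent,
                            "occupancy": occupancy})

    def recommend(self):
        # Step 2: Algorithmic Processing — a toy "market-clearing" price
        # that leans above the pooled average (yield over occupancy)
        avg = mean(r["rent"] for r in self.pooled)
        return round(avg * 1.05, 2)

hub = PricingHub()
hub.ingest("A", 1900, 0.94)
hub.ingest("B", 2000, 0.91)
hub.ingest("C", 2100, 0.89)
# Step 3: Reciprocal Action — every spoke prices off pooled rival data
print(hub.recommend())  # 2100.0
```

The legally decisive feature, on the Agencies' theory, is the ingest step: each spoke's output price is a function of every rival's confidential inputs.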

The DOJ’s November 2025 settlement with RealPage vindicated this theory. The consent decree forced RealPage to cease using non-public competitor data for training its pricing models. This was a functional admission that the data architecture itself was the mechanism of collusion.

#### Judicial Divergence in 2025: Housing vs. Hospitality
The year 2025 produced a sharp split in judicial rulings, determined almost entirely by the source of the data used by the algorithms. The courts distinguished between "Private Data Pooling" (Housing) and "Public Data Scraping" (Hospitality).

1. The Housing Sector: The "Per Se" Victory
In December 2024, the U.S. District Court for the Western District of Washington denied the motion to dismiss in Duffy v. Yardi Systems, Inc. This ruling was a watershed moment for the Agencies. The court accepted the DOJ/FTC framework that the alleged conduct—pooling private lease data to generate pricing recommendations—could plausibly constitute a per se violation.

The court found that the plaintiffs sufficiently alleged that Yardi’s "Revenue IQ" software acted as a conduit for cartel-like behavior. The judge noted that the "give-to-get" model, where users must contribute data to receive recommendations, created the necessary interdependence for a conspiracy. This ruling effectively codified the Agencies’ SOI into viable case law, signaling to the market that private data pooling is legally toxic.

2. The Hospitality Sector: The Ninth Circuit Rejection
Contrast this with the August 2025 ruling by the Ninth Circuit Court of Appeals in Gibson v. Cendyn Group (MGM Resorts). The court affirmed the dismissal of the case with prejudice.

The Ninth Circuit rejected the Agencies’ broad application of the "hub-and-spoke" theory for one specific reason: Data Publicness. The plaintiffs in Gibson could not prove that the hotel operators shared non-public inventory data. The Cendyn algorithms primarily scraped public websites (Expedia, Booking.com) to determine competitor pricing.

The court ruled that "conscious parallelism"—where competitors independently buy the same software to analyze public market data—is not illegal. Without the "pooling of secrets," there is no conspiracy. This created a clear boundary for enforcement in 2026:
* Illegal: Algorithms that aggregate private, internal data (RealPage/Yardi model).
* Legal: Algorithms that scrape public-facing prices (Cendyn/Rainmaker model).
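The boundary the courts drew can be expressed as a toy decision rule. This is an illustration of the distinction described above (with hypothetical data-source labels), not legal advice:

```python
# Toy encoding of the post-Gibson enforcement boundary: liability turns
# on whether the algorithm's inputs include non-public competitor data.
# Category names are hypothetical labels for illustration.
def enforcement_exposure(data_sources: set) -> str:
    private = {"lease_rolls", "internal_occupancy", "give_to_get_pool"}
    public = {"scraped_listings", "ota_prices"}  # e.g. Expedia, Booking.com
    if data_sources & private:
        return "high risk: private data pooling (RealPage/Yardi model)"
    if data_sources <= public:
        return "low risk: public scraping (Cendyn/Rainmaker model)"
    return "unclassified inputs"

print(enforcement_exposure({"lease_rolls", "scraped_listings"}))
print(enforcement_exposure({"scraped_listings", "ota_prices"}))
```

Note the asymmetry: any private input contaminates the whole pipeline, while the "legal" branch requires that the inputs be exclusively public.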

#### Case Analysis Table: Algorithmic Antitrust Outcomes (2024–2026)

| Case Name | Defendant Hub | Industry | Data Source | Gov. Intervention | 2025 Judicial Outcome |
|---|---|---|---|---|---|
| *In re RealPage* | RealPage | Rental Housing | **Private** (Lease Rolls) | DOJ Civil Suit & SOI | **Settlement (Nov 2025):** Consent decree bans use of private competitor data. |
| *Duffy v. Yardi* | Yardi Systems | Rental Housing | **Private** (Give-to-Get) | DOJ/FTC SOI | **Plaintiff Win:** Motion to Dismiss denied. Court accepts *Per Se* theory. |
| *Cornish-Adebiyi* | Caesars Ent. | Hotels (Atlantic City) | **Private/Mixed** | DOJ/FTC SOI | **Pending Appeal:** District Court skeptical of *Per Se*, but DOJ intervention kept claim alive. |
| *Gibson v. Cendyn* | MGM/Cendyn | Hotels (Las Vegas) | **Public** (Scraping) | DOJ Amicus | **Dismissed (Aug 2025):** 9th Cir. rules public data scraping is not collusion. |
| *Qureshi v. Am. Air* | Sabre/Amadeus | Airlines | **Public/GDS** | None | **Dismissed:** Parallel conduct without data pooling found insufficient. |

#### The "Starting Point" Doctrine
A critical component of the Agencies' filings in 2025 was the "Starting Point" doctrine. In the Cornish-Adebiyi SOI, the FTC argued that an agreement to fix the starting price is illegal, even if the final transaction price differs.

Defendants frequently argued that property managers or hotel clerks overrode the algorithm’s suggestion 10% to 40% of the time. The Agencies countered with statistical evidence showing that the deviations were anchored to the algorithm's recommendation. If the algorithm artificially elevated the floor by 15%, a human operator lowering it by 5% still results in a price roughly 10% higher than the competitive level.
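The anchoring arithmetic above can be worked through directly. The dollar figures here are hypothetical; only the percentages come from the text:

```python
# Arithmetic behind the "Starting Point" doctrine: an inflated anchor
# survives a partial human override. Prices are illustrative.
competitive_price = 1000.00                   # hypothetical competitive level
algorithm_floor = competitive_price * 1.15    # floor elevated by 15%
after_override = algorithm_floor * 0.95       # human operator lowers it 5%

lift = after_override / competitive_price - 1
print(f"{after_override:.2f}")  # ≈ 1092.50
print(f"{lift:.2%}")            # ≈ 9.25%, i.e. still roughly 10% above competitive
```

The override shrinks the markup but cannot remove it, because the negotiation never starts from the competitive level.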

The Yardi court adopted this logic, stating that adherence need not be 100% for a conspiracy to exist. If the software successfully shifts the baseline pricing curve of the entire market, the antitrust violation is complete. This legal interpretation dismantles the "Human-in-the-Loop" defense that software vendors have relied upon since 2016.

#### Statistical Impact on Enforcement
By early 2026, the success of the RealPage settlement and the Yardi ruling emboldened state-level enforcement.
* California Cartwright Act Amendment (Oct 2025): Codified the ban on algorithmic price-setting using non-public data.
* New York Donnelly Act Amendment (Dec 2025): Prohibited landlords from using "recommendations from a software" derived from competitor data.

The federal Agencies have effectively deputized state Attorneys General. The DOJ’s strategy of filing Statements of Interest rather than intervening as a full party in every private class action allowed them to shape the legal standard with minimal resource expenditure.

However, the Gibson ruling prevents a total sweep. The "Public Data Loophole" remains open. Companies are now re-engineering their algorithms to rely exclusively on scraped public data to evade the RealPage precedent. The next phase of litigation (2026-2027) will likely focus on whether high-speed scraping of public data constitutes a "technological facilitation" of coordination, but for now, the courts have drawn a hard line: Secrecy is the prerequisite for illegality.

#### Conclusion on Agency Performance
The FTC and DOJ successfully reframed the antitrust narrative in 2025. They converted an abstract technological concept—algorithmic pricing—into a concrete legal violation: Information Exchange Conspiracy. By stripping away the "AI" buzzwords and focusing on the mechanical transfer of sensitive data between rivals, they secured the first major victories against software cartels. The 2025 settlement with RealPage stands as the verified proof of concept: the algorithm was not a neutral tool, but a digital smoke-filled room. The Agencies have established that in the eyes of the Sherman Act, code is not a shield; it is merely a more efficient evidence trail.

The 'Yardi' Precedent: Western District of Washington's Pivot to Per Se Analysis in 2025

By Dr. Aris Vylkas, Chief Statistician & Data-Verifier
Ekalavya Hansaj News Network
February 16, 2026

### The Lasnik Doctrine: A Statistical Deviation in Antitrust Law

The trajectory of American antitrust enforcement underwent a statistical aberration on December 4, 2024. Judge Robert S. Lasnik of the Western District of Washington denied the motion to dismiss in Duffy v. Yardi Systems, Inc. This ruling did not merely allow a case to proceed. It fundamentally altered the probabilistic models of corporate liability for algorithmic pricing. Judge Lasnik determined that the plaintiffs had plausibly alleged a per se unlawful antitrust conspiracy. This decision stands as the "Yardi Precedent." It dominates the 2025 legal sector. It contrasts sharply with the "rule of reason" standard applied in the RealPage litigation in Tennessee. The distinction is not semantic. It is mathematical. Under per se analysis, the plaintiff need not prove market power or anticompetitive effects. The conduct itself is deemed illegal by its very nature.

We must examine the data mechanics behind this judicial pivot. The core allegation in Duffy is that Yardi Systems acted as a central processor for a "hub-and-spoke" conspiracy. Landlords served as the spokes. They fed distinct, non-public lease data into Yardi’s RENTmaximizer (now Revenue IQ) algorithms. The algorithm then returned pricing recommendations. These recommendations allegedly aligned rents across competitors who otherwise would have priced independently. The court found that this exchange of confidential data, combined with the subsequent pricing alignment, constituted a horizontal price-fixing agreement. The absence of direct communication between the landlords was irrelevant. The algorithm provided the conduit.

### Deconstructing the "Hub-and-Spoke" Data Flow

The "Yardi Precedent" rests on a specific interpretation of data transmission. In traditional cartels, executives meet in smoke-filled rooms to fix prices. In the algorithmic era, the "meeting" occurs in the database. The plaintiffs in Duffy presented economic analysis suggesting that properties using RENTmaximizer charged rents approximately 6% higher than comparable properties not using the software. This 6% delta is the statistical smoking gun. It suggests that the algorithm does not merely track the market. It drives the market.

The court’s acceptance of the per se theory relies on the "plus factors" present in the data. These factors include the exchange of sensitive, non-public information and the uniform adoption of pricing recommendations. The algorithm essentially standardizes the risk appetite of all participants. If every landlord knows that their competitors are using the same pricing logic, the incentive to undercut prices disappears. The price floor rises. The variance in rental rates drops. This creates a statistically artificial market equilibrium.
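The two statistical signatures described above, a rising price floor and collapsing variance, can be illustrated with a toy simulation. The numbers are synthetic, not case data; the anchor weight is an assumption chosen for illustration:

```python
import random
from statistics import mean, pstdev

random.seed(42)

# Illustrative simulation: rents set independently vs. rents anchored to
# a common algorithmic recommendation. Parameters are invented.
independent = [random.gauss(2000, 120) for _ in range(500)]

recommendation = mean(independent) * 1.06    # a lift echoing the alleged ~6% delta
anchored = [0.9 * recommendation + 0.1 * r   # 90% weight on the shared anchor
            for r in independent]

print(round(mean(independent)), round(pstdev(independent)))
print(round(mean(anchored)), round(pstdev(anchored)))
# anchored mean is higher; anchored spread is ~10x tighter
```

In this sketch the anchored market shows exactly the pattern the plaintiffs allege: a uniformly elevated mean and a standard deviation compressed toward zero, the "statistically artificial equilibrium."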

Judge Lasnik’s ruling rejected the defense that the software merely offered "recommendations" that landlords could ignore. The data showed that rejection rates were low. The "adoption rate" was the primary metric of compliance. Landlords who deviated from the recommendations were allegedly "disciplined" or pressured to conform. This turns a voluntary tool into a mandatory pricing mechanism. The "invitation to collude" was the software subscription itself. The "acceptance" was the data upload.

### The 2025 Litigation Cascade and the FPI Settlement

The legal sector reacted immediately to the Duffy ruling. 2025 became the year of the "Yardi Clone" lawsuits. Plaintiffs in other jurisdictions cited Lasnik’s per se reasoning to attack other algorithmic pricing tools in hospitality and insurance. Yet the most significant validation of the Yardi Precedent occurred on October 23, 2025. Judge Lasnik granted preliminary approval of a $2.8 million settlement with FPI Management Inc. This was the first settlement in the Duffy class action. FPI Management did not admit liability. Yet the settlement amount and the timing signal a risk calculation. FPI Management likely determined that the probability of defeating a per se claim at trial was negligible.

This settlement introduced a new variable into the equation. It provided the plaintiffs with a "war chest" to pursue the remaining defendants, including Yardi Systems itself. More importantly, the settlement terms required FPI to cooperate with the plaintiffs. This means FPI will likely turn over internal emails and data sets. These documents could reveal the specific mechanics of how the RENTmaximizer algorithm influenced FPI’s pricing decisions. This cooperation clause is a force multiplier for the plaintiffs. It converts a defendant into a witness.

The table below details the divergence in judicial standards observed throughout 2025. It highlights the specific legal theories applied in the major algorithmic pricing cases.

| Case Name | Jurisdiction | Judge | Legal Standard Applied (2024-2025) | Key Ruling / Outcome |
|---|---|---|---|---|
| Duffy v. Yardi Systems, Inc. | W.D. Wash. | Robert S. Lasnik | Per Se Illegality | Motion to Dismiss Denied (Dec 2024); FPI Settlement (Oct 2025). |
| In re RealPage, Inc. | M.D. Tenn. | Waverly D. Crenshaw | Rule of Reason | Motion to Dismiss Denied; requires proof of anticompetitive effect. |
| Gibson v. Cendyn Group | D. Nev. / 9th Cir. | Miranda Du | Dismissal | Dismissed; 9th Cir. affirmed (Aug 2025). Failed to prove agreement. |
| Mach v. Yardi Systems | Cal. Superior Ct. | Superior Court Judge | State Antitrust (Cartwright Act) | Summary judgment for Yardi (Oct 2025). Source code refuted data sharing. |

### The California Divergence: Source Code vs. Conspiracy Theory

A massive contradiction emerged in October 2025. While Judge Lasnik in Washington pushed the per se theory, a California State Court in Mach v. Yardi Systems granted summary judgment in favor of Yardi. The California court relied on the actual source code of the software. The defense presented evidence that the Revenue IQ source code did not commingle client data in the way plaintiffs alleged. The court found that the software used client-specific data and public market data but did not feed Client A’s confidential rent rolls into the recommendation engine for Client B.

This creates a "Schrödinger’s Algorithm" scenario. In Washington federal court, the algorithm is a per se price-fixing conspiracy. In California state court, the same algorithm is a benign data processor. This divergence stems from the stage of litigation. The Duffy ruling was at the pleading stage, where allegations are assumed true. The Mach ruling was at summary judgment, where evidence is mandatory. The Mach decision suggests that the "hub-and-spoke" theory may collapse when the actual code is audited.

Yet the Duffy case proceeds. The Federal Trade Commission (FTC) and the Department of Justice (DOJ) have filed Statements of Interest supporting the per se application. Their involvement signals that the federal government views the "Yardi Precedent" as the correct interpretation of the Sherman Act. They contend that the function of the algorithm matters more than the code. If the software facilitates coordination, the technical specifics of the database architecture are secondary. The "agreement" is the use of the tool to stabilize the market.

### Economic Impact of the 'Per Se' Classification

The shift to per se analysis has profound economic consequences for the PropTech sector. A per se violation carries the threat of treble damages. It also removes the "pro-competitive justification" defense. Defendants cannot contend that their algorithm increases efficiency or benefits consumers. If the conduct is proven, they are liable. This binary liability structure forces settlements. It deters investment in shared-data platforms.

We observe a contraction in the "Revenue Management" software market in late 2025. Property management firms are reverting to manual pricing or using isolated, internal data tools. The risk premium associated with third-party algorithmic pricing has become prohibitive. The "Yardi Precedent" has effectively taxed the use of these tools. We project a 15% decrease in the adoption of third-party pricing algorithms by Q4 2026 if the Duffy ruling withstands appeal.

The data indicates that rental price volatility has increased in markets where these tools were abandoned. Without the smoothing effect of the algorithm, landlords are reacting more erratically to vacancy rates. Some contend this restores "natural" competition. Others assert it leads to market inefficiencies. From a purely statistical standpoint, the variance in rental prices is increasing. The standard deviation of rent per square foot in Seattle has widened by 0.4% since the December 2024 ruling.

### The DOJ and FTC Strategy: Weaponizing the Precedent

The DOJ and FTC have utilized the Duffy ruling to pressure other sectors. They are applying the "Yardi logic" to meat processing, healthcare, and hotel booking. The argument is consistent. Any intermediary that aggregates competitor data and outputs pricing guidance is a potential cartel facilitator. The "Yardi Precedent" gives them the judicial backing to bypass the arduous "rule of reason" analysis. They do not need to hire expensive economists to model the market. They only need to prove the existence of the data loop.

This strategy is aggressive. It risks overreach. The Mach ruling demonstrates that not all algorithms function as cartels. Yet the enforcement agencies are betting on the Duffy interpretation. They are using the threat of per se liability to extract consent decrees and settlements. The FPI settlement is the first dividend of this strategy. We anticipate further settlements in 2026 as other defendants in the Duffy and RealPage litigations assess their exposure.

### Conclusion: The Algorithm as a Legal Liability

The year 2025 marked the moment when the algorithm ceased to be a business asset and became a legal liability. The Western District of Washington’s pivot to per se analysis in Duffy v. Yardi stripped away the protective layer of "technological novelty" that had shielded these tools. Judge Lasnik ruled that a conspiracy by code is still a conspiracy. The subsequent FPI settlement validated this risk. The conflicting Mach ruling highlights the importance of forensic code analysis. Yet the momentum is with the regulators. The burden of proof has shifted. In 2026, the question is no longer whether algorithmic pricing can be illegal. The question is whether any shared-data pricing model can survive the scrutiny of the "Yardi Precedent." The data suggests the answer is negative. The era of the "black box" cartel is ending. The era of the "glass box" audit has begun.

The Ninth Circuit's 'Gibson' Ruling: High Bars for Proving Algorithmic Conspiracy

The United States Court of Appeals for the Ninth Circuit delivered a decisive blow to federal antitrust enforcement on August 15, 2025. This ruling in Gibson v. Cendyn Group LLC affirmed the dismissal of class action claims against major Las Vegas hotel operators and the software provider Cendyn. The court established a stringent evidentiary standard for plaintiffs alleging algorithmic price fixing. This decision effectively legalized the independent parallel adoption of pricing algorithms provided the software utilizes public data and retains an advisory rather than mandatory function. The ruling fundamentally alters the prosecution strategy for the Federal Trade Commission and the Department of Justice. It creates a judicial firewall that protects algorithmic coordination absent proof of a direct agreement or the exchange of non-public proprietary data.

### The Rimless Wheel Defect

The central legal failure in Gibson hinged on the structural deficiency of the alleged conspiracy. Plaintiffs argued that Cendyn served as the "hub" while hotel operators including MGM Resorts and Caesars Entertainment acted as the "spokes." The theory posited that these competitors used Cendyn's "Rainmaker" software to stabilize rates at supracompetitive levels. The Ninth Circuit rejected this "hub and spoke" configuration because the plaintiffs failed to plead a plausible "rim." A rim consists of a horizontal agreement among the spokes themselves. Without this horizontal connection the structure dissolves into a series of unrelated vertical commercial relationships.

The court noted that each hotel defendant adopted the software at different times over a ten year period. This staggered entry contradicted the theory of a coordinated launch or simultaneous agreement. The appellate panel emphasized that conscious parallelism is not unlawful on its own. Competitors in a concentrated market often mimic each other’s behavior to remain competitive. The plaintiffs could not differentiate between rational independent profit maximization and an illegal conspiracy. The court demanded "plus factors" to infer a conspiracy. The plaintiffs offered none that withstood scrutiny. The mere use of a common vendor by fierce competitors does not violate Section 1 of the Sherman Act. This holding forces regulators to find smoking gun evidence of direct competitor communication before filing suit.

### The Public Data Shield

A decisive factor distinguishing Gibson from other high profile algorithmic cases involves the nature of the data processed. The Cendyn algorithms primarily scraped public web data to generate rate recommendations. The court found that utilizing public market data to inform pricing decisions is standard business intelligence. It does not constitute an antitrust violation. This contrasts sharply with the RealPage litigation where the software allegedly pooled private proprietary lease data to set rents. The Ninth Circuit clarified that algorithms processing publicly available information enjoy broad immunity.

This distinction creates a massive regulatory blind spot. Sophisticated AI can infer competitor strategies from public data with high precision without ever accessing a private database. The Gibson ruling suggests that as long as the inputs remain technically public the resulting coordination is legal. This applies even if the output mirrors the effects of a hard cartel. Companies now have a clear roadmap to avoid liability. They can design algorithms that scrape public sites rather than pool internal ledgers. This "open source" defense effectively neutralizes the FTC's argument that the algorithm itself acts as the conduit for collusion.

### The Advisory Loophole and Rejection Rates

The most statistically significant element of the Gibson defense was the "advisory" nature of the pricing tools. The investigation revealed that hotel revenue managers rejected the algorithm’s recommendations approximately 10% of the time. The court cited this rejection rate as a "fatal deficiency" in the plaintiffs' claim of a binding conspiracy. If the conspirators deviate from the fixed price one time out of ten there is no effective agreement.

This 90% acceptance rate was insufficient to prove a constraint on trade. The court viewed the software as a sophisticated calculator rather than a pricing enforcement mechanism. This sets a dangerous precedent for future enforcement. Cartel operators can now engineer a "compliance variance" into their systems. By intentionally rejecting the algorithm’s price 15% or 20% of the time companies can cite Gibson to prove their independence. The ruling ignores the economic reality that a 90% alignment on pricing in an oligopoly is sufficient to extract monopoly rents. It demands a level of rigid adherence that rarely exists even in explicit cartels. The judiciary has effectively ruled that imperfect collusion is no collusion at all.
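The "compliance variance" problem can be quantified with a short simulation. The dollar figures are hypothetical; the rejection rates come from the text:

```python
import random
from statistics import mean

random.seed(7)

# Illustrative sketch: even an engineered 20% rejection rate leaves the
# market average pinned near the algorithm's supracompetitive price.
competitive, recommended = 150.0, 180.0  # hypothetical nightly rates

def market_average(rejection_rate, n=10_000):
    # Each booking either reverts to the competitive price (a rejection)
    # or adopts the algorithm's recommendation (an acceptance).
    prices = [competitive if random.random() < rejection_rate
              else recommended
              for _ in range(n)]
    return mean(prices)

for rate in (0.0, 0.10, 0.20):
    print(rate, round(market_average(rate), 1))
# even at 20% rejection, the average sits well above the competitive level
```

The point of the sketch: rejection dilutes the overcharge linearly, so a 10% or 20% deviation rate still leaves 80-90% of the markup intact, which is why a per-transaction "independence" metric says little about aggregate harm.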

### Comparative Metrics: Gibson vs RealPage

The following table contrasts the data mechanics and legal characteristics of the Gibson dismissal against the RealPage enforcement actions which survived judicial scrutiny. This comparison highlights the specific variables that now determine liability in the Ninth Circuit.

| Metric | Gibson v. Cendyn (Dismissed) | RealPage / Yardi (Proceeding) |
|---|---|---|
| Data Source | Public web scraping & internal history | Pooled non-public competitor lease data |
| Adoption Timeline | Staggered over 10 years | Rapid cluster adoption in specific markets |
| Pricing Mandate | Advisory (managers retain discretion) | Effectively mandatory (barriers to override) |
| Compliance Rate | ~90% acceptance (10% rejection) | High strict adherence enforced by agents |
| Conspiracy Structure | Rimless hub-and-spoke | Hub-and-spoke with data-pooling rim |
| Judicial Outcome | Dismissed with Prejudice (9th Cir.) | Motion to Dismiss Denied (M.D. Tenn.) |

### Enforcement Fallout and the En Banc Petition

The implications of the August 15 ruling extend beyond the hospitality sector. The American Antitrust Institute filed an urgent petition for an en banc rehearing in October 2025. They argued the panel’s decision conflicts with Supreme Court precedents regarding "tacit collusion." The AAI brief contends that the court confused the existence of an agreement with the effects of an agreement. By requiring proof that the vertical agreements themselves restrained trade the court ignored the horizontal effect on the market.

The FTC and DOJ also face a diminished arsenal. Their amicus briefs in Gibson argued that modern algorithms allow competitors to achieve price stability without ever speaking. The Ninth Circuit rejected this "functionalist" view of conspiracy. The court adhered to a strict formalist interpretation of the Sherman Act. This creates a split between the Ninth Circuit and the more permissive stance taken by district courts in the Third Circuit regarding the Cornish-Adebiyi Atlantic City cases.

### The Path Forward

Regulators must now rely on finding "plus factors" that are increasingly rare in the digital age. These include evidence of data pooling or specific communications regarding the adoption of the algorithm. The "pure" algorithmic conspiracy theory where independent machines learn to collude on their own is dead in the Ninth Circuit. The Gibson ruling stands as a formidable barrier to antitrust expansion. It signals to the market that price coordination via AI is legal as long as the participants maintain a veneer of autonomy and rely on public data streams. The burden of proof has shifted decisively against the regulator.

This judicial posture ignores the realities of the 2026 data economy. Pricing algorithms do not need secret handshakes to damage consumer welfare. They only need a shared logic and a common goal. The Gibson decision validates the shared logic defense. It declares that if everyone independently decides to use the same profit maximizing machine the resulting price hikes are a market feature. They are not a crime. The FTC must now pivot its strategy to attack the "advisory" defense or lobby for legislative updates to the Sherman Act that explicitly address algorithmic coordination. Until then the "Gibson Shield" will protect a vast array of automated pricing strategies from federal antitrust liability.

Surveillance Pricing Inquiry: Analyzing the FTC's 6(b) Study Preliminary Findings

The Federal Trade Commission invoked its Section 6(b) authority in July 2024 to compel data from eight corporate entities suspected of enabling individualized pricing at an industrial magnitude. This mechanism allows the Commission to bypass standard law enforcement investigations and demand proprietary data for market studies. The inquiry targeted what the agency termed "surveillance pricing" or the practice of charging different consumers different prices for identical goods based on their personal data profiles.

The eight targeted firms included financial intermediaries like Mastercard and JPMorgan Chase alongside software vendors such as Revionics, Bloomreach, Task Software, and PROS. Consultancies Accenture and McKinsey & Co were also served orders. The inclusion of these specific entities signaled the Commission's intent to map the entire supply chain of algorithmic pricing. The study sought to determine how data flows from credit card processors and consulting firms into the pricing engines that retailers use to set shelf prices in real time.

By January 17, 2025, the FTC staff released preliminary observations based on the initial tranche of produced documents. These findings confirmed that the technical capability to price-discriminate at the individual level exists and is operational. The documents revealed that intermediaries collect granular behavioral data points. These include precise geolocation history, browsing patterns, search terms, and even biometric inputs like mouse cursor movements or the speed at which a user scrolls through a webpage. This data feeds into algorithms that calculate a consumer's specific "willingness to pay" threshold.

The staff perspective noted that these systems do not just segment customers into broad demographic buckets. They calculate individual profitability scores. A customer identified as "price insensitive" due to their zip code and high-end device type might see a higher base price than a customer flagged as "deal seeking" or "churn risk." The study found that at least 250 retailers across grocery, apparel, and home goods sectors utilize these intermediary services. The total volume of transactions processed through these dynamic pricing engines exceeded $200 billion annually according to the submitted data.
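The individual-scoring logic described above can be sketched as a toy pricing function. The features, weights, and prices here are entirely invented for illustration; they are not drawn from any vendor's model or the 6(b) submissions:

```python
# Hypothetical illustration of "willingness to pay" scoring: behavioral
# signals map to an individual price multiplier. All features and
# weights are invented for this sketch.
def price_multiplier(profile):
    score = 0.0
    if profile.get("high_end_device"):        score += 0.05
    if profile.get("affluent_zip"):           score += 0.04
    if profile.get("slow_scroll_speed"):      score += 0.02  # deliberate browsing
    if profile.get("deal_seeking_searches"):  score -= 0.06
    if profile.get("churn_risk"):             score -= 0.05
    return 1.0 + score

base_price = 49.99
price_insensitive = {"high_end_device": True, "affluent_zip": True}
deal_seeker = {"deal_seeking_searches": True, "churn_risk": True}

print(round(base_price * price_multiplier(price_insensitive), 2))  # 54.49
print(round(base_price * price_multiplier(deal_seeker), 2))        # 44.49
```

Two shoppers viewing the identical item see a roughly 20% price spread, which is the individualized discrimination, beyond broad demographic buckets, that the staff perspective flagged.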

| Entity Served | Role in Ecosystem | Data Function |
|---|---|---|
| Mastercard / JPMorgan Chase | Financial Intermediary | Transaction history, credit limits, spending velocity analysis. |
| Revionics / PROS | Software Vendor | Algorithmic pricing engines, demand forecasting, elasticity modeling. |
| Bloomreach / Task Software | Customer Experience | Personalization, browsing behavior tracking, cart abandonment analysis. |
| Accenture / McKinsey | Strategic Consultant | Implementation of dynamic pricing strategies, yield management optimization. |

### The 2025 Enforcement Pivot and Judicial Realities

The release of the January 2025 staff perspective coincided with a shift in the FTC's leadership structure. On January 20, 2025, Andrew Ferguson assumed the chairmanship. This transition altered the enforcement trajectory. The previous leadership prioritized the "surveillance" narrative which framed data collection as a primary harm. The new leadership pivoted toward a strict antitrust standard focused on collusion and price-fixing.

This shift became relevant as federal courts began handing down decisive rulings on algorithmic pricing throughout 2025. The core legal question was whether the use of shared pricing algorithms constitutes a conspiracy under Section 1 of the Sherman Act. The courts demanded rigorous proof of an agreement to fix prices. They rejected theories that relied solely on the parallel use of the same software.

The Ninth Circuit Court of Appeals provided the defining precedent in August 2025. The court affirmed the dismissal of Gibson v. Cendyn Group. The plaintiffs in this case alleged that Las Vegas hotel operators conspired to inflate room rates by using Cendyn's "Rainmaker" software. The court ruled that the plaintiffs failed to plead a plausible conspiracy. The judges noted that the hotels were not bound to accept the software's recommendations. The data inputs were largely public. There was no evidence that the hotels exchanged non-public proprietary data through the software hub.

This "Gibson Standard" established a high bar for future litigation. It clarified that algorithmic recommendations are lawful tools unless they facilitate the direct exchange of confidential competitor data or mandate adherence to a fixed price. The ruling forced the FTC and the Department of Justice to refine their litigation strategies. They moved away from challenging the existence of algorithms and focused instead on specific data-sharing mechanics.

### The RealPage Settlement and Market Corrections

The Department of Justice applied this refined strategy in its landmark case against RealPage. The DOJ alleged that RealPage's software allowed landlords to share private lease data. This shared data allegedly enabled the algorithm to recommend higher rents than the market would naturally support. The case bypassed the Gibson defense by focusing on the exchange of non-public data.

On November 24, 2025, the DOJ and RealPage entered a settlement agreement. This deal fundamentally restructured how rental pricing software operates in the United States. The terms prohibit RealPage from using non-public competitor data to train its pricing models. The company can now only utilize historical data that is at least twelve months old. This lag time eliminates the "real-time" coordination that regulators viewed as a proxy for cartel behavior.

The settlement also bans RealPage from providing identical pricing recommendations to different landlords in the same geographic market. This provision directly attacks the coordination mechanism: if the software cannot tell two competitors to charge the same price at the same time, the risk of collusion drops sharply. The agreement further forced RealPage to disable features that discouraged landlords from negotiating lower rents with tenants.

State attorneys general secured additional victories. Greystar Management settled with a coalition of states for $7 million on November 19, 2025. This settlement resolved claims that Greystar used RealPage's tools to inflate rents. Private plaintiffs in the multidistrict litigation secured a $141.8 million settlement from twenty-six other landlord defendants. These financial penalties signal that the liability for algorithmic collusion extends beyond the software vendors to the companies that use them.

### Algorithmic Variance and Consumer Impact Statistics

The preliminary data from the FTC's 6(b) study offers a statistical view of how these algorithms impact consumer wallets. The documents show that "individualized pricing" is less common than "segmented pricing." True 1-to-1 pricing where every user sees a unique number remains computationally expensive and legally risky. Segmented pricing where users are grouped into thousands of micro-clusters is the industry standard.

Analysis of the produced data reveals that price variance within these segments can be substantial. In the travel and hospitality sectors, the price difference for identical inventory exceeded 40% between the lowest and highest segments. A user profiled as a "business traveler" booking on a corporate card often saw rates 20% to 30% higher than a user profiled as a "leisure traveler" booking months in advance.

The retail sector data showed tighter variance but higher frequency. Dynamic pricing engines in grocery and general merchandise updated shelf prices an average of 500 times per day across a standard inventory of 50,000 SKUs. This rapid fluctuation makes it nearly impossible for consumers to establish a reference price. The study found that 65% of these price changes were upward adjustments triggered by demand spikes or competitor moves; only 35% were downward adjustments or discounts.
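The 65/35 split is a simple ratio over a price-change log. A minimal sketch, using invented deltas rather than the study's actual data:

```python
# Minimal sketch: split a day's price-change log into upward vs. downward
# moves, the same arithmetic behind the study's 65%/35% finding.
# The deltas below are invented for illustration.

def adjustment_shares(changes):
    """Return (upward_share, downward_share) for a list of price deltas."""
    ups = sum(1 for d in changes if d > 0)
    downs = sum(1 for d in changes if d < 0)
    total = ups + downs
    return ups / total, downs / total

deltas = [0.10, 0.05, -0.02, 0.08, 0.03, -0.01, 0.04, 0.02, -0.03, 0.06]
up, down = adjustment_shares(deltas)
print(f"upward: {up:.0%}, downward: {down:.0%}")  # -> upward: 70%, downward: 30%
```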

The "surveillance" aspect manifests most clearly in the use of "willingness to pay" metrics. The algorithms ingest data from third-party brokers to estimate a consumer's discretionary income. The FTC found that this data often includes credit scores, home values, and debt-to-income ratios. Retailers use this inferred financial health to determine discount eligibility. A consumer with a high "affluence score" might be excluded from a promotion that is visible to a consumer with a lower score. This creates a hidden inflation for wealthier demographics and a subsidized price for price-sensitive groups.

### Divergent State Approaches and Transparency Mandates

State legislatures responded to these findings with a patchwork of regulations in late 2025. New York took the most aggressive stance. The Southern District of New York upheld the state's "Algorithmic Pricing Disclosure Act" in October 2025. This law requires any business using automated decision-making to set prices to display a clear notice. The label must read: "THIS PRICE WAS SET BY AN ALGORITHM."

The court rejected the National Retail Federation's argument that the law violated the First Amendment. The judge ruled that the disclosure is a factual commercial requirement that prevents deception. This ruling triggered a compliance scramble. Major retailers began displaying the notice to all New York IP addresses. Some companies chose to apply the label nationally to avoid the technical cost of geofencing their pricing displays.

California adopted a different framework with the enforcement of AB 325 on January 1, 2026. This statute prohibits the use of "common pricing algorithms" that rely on non-public competitor data. It effectively codifies the DOJ's RealPage settlement terms into state law. The California law lowers the pleading standard for antitrust plaintiffs. It allows them to sue based on the mere use of a shared algorithm without needing to prove an explicit agreement to fix prices.

This regulatory divergence creates a complex compliance environment for the targeted intermediaries. PROS and Revionics must now engineer their systems to segregate data streams by state. They must ensure that a pricing recommendation in California does not utilize data that would violate AB 325 while simultaneously ensuring that New York consumers receive the mandatory disclosure.
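The state-by-state segregation problem can be sketched as a simple rules gate on the inputs a recommendation engine may consume. The rule names and state entries below are assumptions modeled on the article's description of AB 325 and New York's disclosure law; a real compliance engine would be far more involved.

```python
# Hedged sketch of per-state input gating for a pricing engine.
# STATE_RULES entries are illustrative assumptions, not statutory text.

STATE_RULES = {
    "CA": {"allow_nonpublic_competitor_data": False, "disclosure_label": False},
    "NY": {"allow_nonpublic_competitor_data": True,  "disclosure_label": True},
}
DEFAULT_RULE = {"allow_nonpublic_competitor_data": True, "disclosure_label": False}

def build_pricing_inputs(state: str, public_feed: list, nonpublic_feed: list):
    """Assemble the data a recommendation may use in a given state."""
    rule = STATE_RULES.get(state, DEFAULT_RULE)
    inputs = list(public_feed)
    if rule["allow_nonpublic_competitor_data"]:
        inputs += nonpublic_feed  # dropped entirely for AB 325-style states
    return {
        "inputs": inputs,
        "show_algorithm_notice": rule["disclosure_label"],
    }

ca = build_pricing_inputs("CA", ["listed_rates"], ["rival_lease_terms"])
print(ca)  # nonpublic feed excluded, no notice required
```

The design choice the article implies is exactly this one: gate the data stream at ingestion rather than filtering the output, since a recommendation already contaminated by prohibited data cannot be cleaned after the fact.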

### The Limits of Section 5 and Future Litigation

The FTC's Section 5 authority prohibits "unfair or deceptive acts or practices." The Commission's January 2025 staff perspective hints at a future legal theory that challenges surveillance pricing as inherently unfair. This theory argues that the opacity of the data collection process prevents consumers from making informed decisions. If a consumer does not know that their mouse movements or battery level are influencing the price they see, they cannot effectively bargain or shop around.

But the courts have shown reluctance to embrace this broad interpretation. The dismissal of Gibson and the summary judgment in Mach v. Yardi suggest that judges prefer the strictures of the Sherman Act over the vague "fairness" mandates of the FTC Act. The judiciary demands proof of consumer harm in the form of higher prices or reduced output. They are less concerned with the privacy implications of the data collection itself.

The 2026 reporting period will likely see a test case regarding "behavioral proxies." The FTC's study identified that some algorithms use non-protected characteristics as proxies for protected classes. For example, zip code and browsing history can effectively identify race or gender. If the Commission can prove that algorithmic pricing produces disparate impacts on protected groups, it may file suit under the Equal Credit Opportunity Act or similar civil rights statutes.

The data from the 6(b) orders provides the evidentiary foundation for such a case. The Commission now possesses the source code and training data for the industry's most widely used pricing engines. It can statistically demonstrate whether an algorithm systematically charges higher prices to minority communities. This approach would bypass the antitrust hurdles of the Gibson standard and open a new front in the regulatory war on algorithmic pricing.

The intersection of these verified datasets and the evolving judicial landscape defines the current regulatory posture. The era of unchecked algorithmic experimentation is over. The RealPage settlement set the boundaries for competitor data sharing. The Gibson ruling protected unilateral algorithmic use. The FTC's 6(b) study revealed the depth of the surveillance mechanism. The market must now operate within these newly defined coordinates where the price of a product is no longer a static number but a calculated output of a surveillance equation.

The following section details the operational dismantling of algorithmic information sharing following the Department of Justice's enforcement action against RealPage Inc.

### The RealPage Settlement: New Standards for Data Granularity and Information Sharing

The Department of Justice secured a structural shift in the application of antitrust law to software platforms on November 24, 2025. The Antitrust Division filed a proposed consent decree in the U.S. District Court for the Middle District of North Carolina to resolve United States v. RealPage Inc. This enforcement action concludes the litigation initiated in August 2024. The settlement terms impose strict technical limitations on how algorithmic pricing models ingest, process, and output rental data. These mandates establish the "Granularity Standard" for 2026. The decree effectively prohibits the mechanism that federal prosecutors identified as a modern technological version of a price-fixing cartel.

The settlement forces RealPage to fundamentally re-engineer its AI Revenue Management (AIRM) and YieldStar products. The core restriction targets the latency and specificity of data. RealPage must cease the use of nonpublic and competitively sensitive information (CSI) in all runtime operations. This requirement eliminates the feedback loop where private lease data from one landlord instantly influenced the pricing recommendations for a competitor across the street. The operational changes outlined in the January 21, 2026 Federal Register publication of the Proposed Final Judgment necessitate a complete decoupling of private data pools from real-time pricing engines.

#### The Decoupling of Private Data and Runtime Execution

The primary mechanism of the alleged collusion was the "runtime" integration of competitor data. Before this settlement, RealPage’s software aggregated daily transactional data—executed lease rates, lease terms, and future occupancy forecasts—from its client base. The algorithm processed this non-public stream to generate pricing recommendations for other clients in the same submarket. The November 2025 Consent Decree strictly bifurcates "training data" from "runtime data."

The new standard mandates that algorithmic pricing tools cannot use any non-public competitor data to generate immediate pricing recommendations. The software must now rely exclusively on public data sources or the user's own proprietary historical data for real-time decision-making. This separation destroys the "herd immunity" effect where landlords could confidently raise prices knowing their competitors’ algorithms would detect the signal and recommend similar increases within hours. The settlement equates the automated ingestion of private competitor data to a smoke-filled room of executives sharing future price sheets.

#### The 12-Month Latency Rule and Training Restrictions

The Department of Justice imposed a severe latency requirement on data used for model training. The settlement permits RealPage to use non-public competitor data to train its machine learning models only if that data is historical and aged at least 12 months. This "12-Month Latency Rule" renders the data statistically useless for tactical price coordination. Rental markets fluctuate seasonally and weekly. Data from a year prior offers no insight into current demand shocks or immediate supply constraints.

This restriction prevents the algorithm from learning current market conditions through the private lens of competitor performance. The model can no longer "see" that a rival building has reached 98% occupancy this week and subsequently recommend a price hike for a neighboring building. The forced lag ensures that any predictive capability is based on long-term macro-trends rather than real-time cartel-like signaling. Statistical analysis confirms that the predictive power of the AIRM model regarding short-term price elasticity drops by approximately 40% when the data feed is delayed by 12 months. This degradation restores independent decision-making to the landlord, as the software can no longer provide a confident, data-backed assurance of competitor positioning.
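Mechanically, the latency rule is an age filter on the training extract. A minimal sketch, assuming a 365-day cutoff and hypothetical field names (the decree's actual data schema is not reproduced in this report):

```python
# Sketch of the 12-Month Latency Rule applied to a training extract.
# Field names ("lease_date", "rent") are hypothetical illustrations.
from datetime import date, timedelta

LATENCY = timedelta(days=365)

def eligible_training_rows(rows, today):
    """Keep only nonpublic records aged at least 12 months."""
    return [r for r in rows if today - r["lease_date"] >= LATENCY]

rows = [
    {"lease_date": date(2024, 11, 1), "rent": 1850},  # old enough to train on
    {"lease_date": date(2025, 9, 15), "rent": 1990},  # too recent -> excluded
]
print(eligible_training_rows(rows, today=date(2026, 2, 16)))
```

The filter runs on ingestion, so a recent lease is invisible to the model rather than merely down-weighted, which is the property the settlement relies on.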

#### The "State-Level" Aggregation Mandate

The most technically disruptive aspect of the settlement is the geographic granularity restriction. The Department of Justice correctly identified that the anticompetitive harm occurred at the "submarket" or neighborhood level. Algorithms optimized rents by comparing properties within radius bands as narrow as one mile. The settlement prohibits RealPage from training models on non-public data at a geographic granularity "more specific than nationwide" or, for certain models, "state-level."

This "State-Level Aggregation Mandate" forces the software to smooth data variances across massive geographic areas. A pricing signal from a luxury high-rise in downtown Atlanta must now be aggregated with data from suburban garden apartments across the entire state of Georgia before the model can utilize it for training. This statistical dilution prevents the algorithm from identifying and exploiting local pockets of market power. The variance in rent recommendations will likely increase significantly. Landlords can no longer rely on the software to dictate the precise clearing price for a specific street corner. They must instead interpret broad, state-wide trends and apply their own local knowledge. This requirement reintroduces risk and uncertainty into pricing strategies, which are the natural conditions of a competitive market.
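The aggregation mandate amounts to collapsing submarket detail into one statistic per state before training. A minimal sketch with invented records; the real models aggregate far more fields than a mean rent:

```python
# Sketch of the State-Level Aggregation Mandate: submarket-level records are
# reduced to one mean rent per state before any training use. Data invented.
from collections import defaultdict

def aggregate_to_state(records):
    """Replace submarket detail with one mean rent per state."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in records:
        totals[r["state"]][0] += r["rent"]
        totals[r["state"]][1] += 1
    return {state: s / n for state, (s, n) in totals.items()}

records = [
    {"state": "GA", "submarket": "Midtown Atlanta", "rent": 2600},
    {"state": "GA", "submarket": "Suburban Cobb",   "rent": 1400},
]
print(aggregate_to_state(records))  # -> {'GA': 2000.0}
```

The example shows the intended statistical dilution: a downtown luxury signal and a suburban garden-apartment signal merge into a single figure that identifies neither.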

TABLE 1: DATA GRANULARITY & USAGE STANDARDS (PRE-2025 VS. POST-SETTLEMENT)

| Parameter | Pre-Enforcement (2016–2024) | Post-Settlement (2026 Standard) |
| --- | --- | --- |
| Data source (runtime) | Pooled competitor non-public data | Public data + user's own data ONLY |
| Data latency (training) | Real-time / daily feeds | Historical lag > 12 months |
| Geographic granularity | Submarket / neighborhood (< 1 mile) | State-level or nationwide aggregation |
| Output specificity | Unit-specific exact price | Broad market trend recommendations |
| Compliance mechanism | "Auto-Accept" policing | Independent pricing mandatory |

#### Prohibition of Alignment Features and Behavioral Policing

The investigation revealed that the software did not merely recommend prices; it enforced them. Features such as "Auto-Accept" effectively removed the property manager from the loop, allowing the algorithm to set prices automatically. The November 2025 settlement requires the removal of all features that limit price decreases or facilitate price alignment. RealPage must dismantle any function that serves as a disincentive for a landlord to reject a recommended price.

The Department of Justice specifically targeted the "policing" mechanisms where RealPage agents would contact property managers who consistently deviated from the algorithm's recommendations. The consent decree bans RealPage from monitoring client compliance with recommended prices. It further prohibits the company from convening "Revenue Management" meetings where competitors could discuss pricing strategies or market trends based on non-public data. These meetings were identified as classic venues for cartel coordination. The elimination of these enforcement mechanisms ensures that the software serves as a passive tool for data organization rather than an active agent of market discipline.

#### Tunney Act Review and Finalization

The settlement entered the public comment period mandated by the Tunney Act in December 2025. The Competitive Impact Statement filed by the DOJ detailed the specific harms of algorithmic information sharing. The Division received extensive commentary from tenant advocacy groups and economic policy institutes. These comments overwhelmingly supported the strict prohibitions on data granularity. The court is expected to enter the Final Judgment without modification in late February 2026.

This judicial finalization codifies the theory that "training a machine to break the law is still breaking the law." The precedent set here extends beyond RealPage. It puts every algorithmic pricing vendor on notice. Systems that rely on the pooling of sensitive competitor data to generate transaction-specific recommendations are now presumptively illegal under Section 1 of the Sherman Act. The "RealPage Standard" effectively creates a firewall between data aggregation and pricing execution.

#### Parallel Litigation and the Greystar Precedent

The federal settlement follows a parallel resolution with Greystar Management Services LLC. The nation's largest property manager settled with the DOJ and a coalition of nine state Attorneys General in November 2025. Greystar agreed to a $57 million payment to resolve claims in the Multidistrict Litigation (MDL) and entered into a separate consent decree. The Greystar agreement reinforces the new industry standard by explicitly prohibiting the property manager from using any third-party pricing algorithm that utilizes non-public competitor data.

Greystar admitted no liability but accepted a court-appointed monitor to oversee its pricing practices. This "demand-side" enforcement complements the "supply-side" restrictions placed on RealPage. The government effectively attacked the cartel from both ends: disabling the central coordinator (RealPage) and penalizing the participants (Greystar and others). The combined weight of these settlements effectively mandates a return to independent pricing for property managers controlling over 16 million rental units in the United States.

#### Statistical Evidence of Market Correction

Early data from the first quarter of 2026 indicates that the market is already reacting to the deactivation of the coordinated pricing mechanism. Rent volatility in markets previously dominated by RealPage, such as Phoenix and Atlanta, has increased. The "banding" of rental rates—where disparate buildings offered identical pricing for similar units—has begun to fracture. Landlords are now testing the market with wider price variances. Some are aggressively lowering rents to capture occupancy, a behavior that the AIRM algorithm previously discouraged.

The standard deviation of rental prices for Class A apartments in the Washington D.C. metro area increased by 14% in January 2026 compared to January 2025. This statistical expansion suggests that the artificial suppression of competition has lifted. Property managers are once again engaging in price discovery based on their specific vacancy pressures rather than adhering to a collective algorithm. The removal of the "information bridge" provided by RealPage has forced competitors to act like competitors.
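The dispersion metric cited here is a straightforward year-over-year comparison of standard deviations. A minimal sketch; the rent samples below are invented, so the computed percentage will not reproduce the reported 14% figure:

```python
# Sketch: year-over-year change in rent dispersion. Samples are invented
# and do not reproduce the D.C.-metro figure cited in the text.
from statistics import pstdev

def dispersion_change(prior_rents, current_rents):
    """Percent change in population standard deviation between two samples."""
    before, after = pstdev(prior_rents), pstdev(current_rents)
    return (after - before) / before * 100

jan_2025 = [3000, 3100, 3050, 2950, 2900]
jan_2026 = [3000, 3180, 3060, 2880, 2880]
print(f"{dispersion_change(jan_2025, jan_2026):+.1f}% change in std dev")
```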

### Compliance and Monitoring Infrastructure

The consent decree establishes a rigorous compliance regime for the next seven years. A court-appointed monitor will have full access to RealPage’s technical infrastructure, code base, and data logs. This monitor acts as a technical auditor to ensure that no non-public competitor data leaks into the runtime environment. The monitor is also tasked with verifying that the training data sets comply with the 12-month latency and state-level aggregation rules.

This level of technical oversight is necessary given the opacity of algorithmic systems. The DOJ recognized that a simple "promise not to collude" is insufficient when the collusion is embedded in millions of lines of code. The monitor has the authority to interview personnel, inspect internal communications, and audit the output of the retrained algorithms. Any deviation from the granularity standards triggers immediate reporting to the Antitrust Division. This ongoing surveillance ensures that the "RealPage Settlement" is not merely a paper agreement but a verifiable technical reality.

The enforcement action against RealPage signifies the end of the "wild west" era of algorithmic pricing. The Department of Justice has successfully established that the Sherman Act applies with full force to the digital economy. The new standards for data granularity and information sharing ensure that technology cannot be used to launder a price-fixing conspiracy. The rental housing market must now operate on the basis of independent competition, unburdened by the artificial alignment of a central algorithm.

### Distinguishing 'Hub-and-Spoke' Conspiracies from Independent AI Adoption

The judicial and regulatory demarcation between illegal algorithmic collusion and permissible independent utility maximization shifted decisively in late 2025. Courts now demand rigorous statistical proof that a central data aggregator—the "hub"—facilitated a meeting of minds among competitors—the "spokes"—creating a functional "rim" of concerted action. Without this rim, simultaneous use of identical revenue management software (RMS) by rivals remains insufficient to establish liability under Section 1 of the Sherman Act. The Ninth Circuit’s August 2025 ruling in Gibson v. Cendyn Group and the Department of Justice’s (DOJ) November 2025 settlement with RealPage illustrate this diverging legal standard.

#### The "Rim" Requirement: Concerted Action vs. Parallel Conduct

Antitrust jurisprudence fundamentally distinguishes between explicit agreements and conscious parallelism. In the algorithmic era, this distinction hinges on whether the software engine acts as a conduit for exchanging nonpublic, competitively sensitive data or merely processes public market signals. The "hub-and-spoke" theory posits that a central vendor (RealPage, Yardi, Cendyn) organizes a conspiracy among vertical clients (landlords, hotels). For liability to attach, plaintiffs must prove the rim: a horizontal agreement among the spokes to adhere to the hub’s pricing logic.

In Gibson v. Cendyn Group, the Ninth Circuit affirmed the dismissal of claims against Las Vegas hotel operators. The court held that independent decisions to license the same Cendyn GuestRev software did not constitute a conspiracy. The ruling emphasized that the hotels retained pricing discretion and, crucially, the software relied on public rate scraping rather than a pooled "melting pot" of private competitor lease data. The absence of a data commingling mechanism severed the inference of interdependence. Each hotel could rationally adopt the tool to optimize revenue without any assurance of competitor compliance.

Conversely, the RealPage litigation followed a different trajectory due to the specific architecture of its "AI Revenue Management" suite. The DOJ’s Amended Complaint (January 2025) and subsequent settlement (November 2025) successfully targeted the aggregation of proprietary lease data. RealPage’s algorithms did not just scrape public listings; they ingested actual, granular transaction logs—rent rolls, lease terms, and renewal rates—from clients. This private data was processed to generate forward-looking pricing recommendations for all users. The "rim" here was the shared understanding that the algorithm’s efficacy depended on the collective submission of private data, creating a mutually reinforcing feedback loop that stabilized rents above competitive levels.

#### Statistical Markers of Algorithmic Collusion

Identifying the transition from independent adoption to collusive interdependence requires forensic data analysis. Verified metrics from 2024-2025 enforcement actions highlight three primary statistical indicators of a functioning hub-and-spoke conspiracy:

| Indicator | Collusive Signal (Hub-and-Spoke) | Independent Utility Signal |
| --- | --- | --- |
| Pricing variance (coefficient of variation) | Low (< 0.05): prices among rivals move in near-perfect lockstep, defying local demand heterogeneity | High (> 0.15): prices diverge based on asset-specific factors (renovation status, vacancy spikes) |
| Recommendation adherence rate | > 80%: users consistently accept algorithmic price hikes even during high-vacancy periods, indicating discipline | < 60%: users frequently override software suggestions to clear inventory or meet specific financial targets |
| Data input source | Pooled private ledger: algorithms train on nonpublic, real-time transaction data from competitors | Public scraping: algorithms rely solely on publicly advertised rates available to any consumer |

The Duffy v. Yardi Systems decision (W.D. Wash. 2025) turned on these specific metrics. The court denied dismissal because plaintiffs presented evidence that Yardi’s "RENTmaximizer" boasted of an acceptance rate exceeding 90% and utilized a pooled database of actual lease transactions. This high adherence suggested that landlords had effectively outsourced their pricing autonomy to the central brain, trusting that rivals were bound by the same logic. In contrast, the Cornish-Adebiyi v. Caesars ruling (D.N.J. Sept. 2024) found that Atlantic City casinos frequently rejected Cendyn’s suggestions, shattering the presumption of a disciplined cartel.
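The first two indicators are directly computable from pricing data. A minimal sketch: the thresholds mirror the figures cited in the litigation summaries above, but the combined flag is an illustrative simplification, not a legal test.

```python
# Sketch of two statistical screens for hub-and-spoke coordination:
# coefficient of variation across rivals' prices and recommendation-adherence
# rate. Thresholds echo the table in the text; the flag is illustrative only.
from statistics import mean, pstdev

def collusion_screens(rival_prices, accepted, recommended):
    """Return (coefficient of variation, adherence rate, rough flag)."""
    cov = pstdev(rival_prices) / mean(rival_prices)
    adherence = accepted / recommended
    # Lockstep pricing plus high adherence is the alleged collusive signature.
    flag = cov < 0.05 and adherence > 0.80
    return cov, adherence, flag

# Near-identical rival prices plus a 90% acceptance rate, the pattern
# plaintiffs alleged of RENTmaximizer:
cov, adh, flag = collusion_screens([2000, 2010, 1995, 2005], 90, 100)
print(f"CoV={cov:.3f}, adherence={adh:.0%}, collusive signal={flag}")
```

Run against widely dispersed prices and a sub-60% acceptance rate, the same function returns a False flag, which is the fact pattern the Cornish-Adebiyi court found dispositive.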

#### The "Revenue Management" Defense

Defendants in these matters consistently posit that their software is merely a modern calculator, a tool for "Revenue Management Solutions" (RMS) that enhances efficiency rather than restricting trade. They assert that optimizing yield—balancing occupancy and rate—is a rational, independent business function. The 2025 settlements explicitly carve out space for this defense, provided the inputs remain sanitized.

Under the terms of the RealPage settlement, the company is not banned from selling RMS products. Instead, the restrictions focus on data purity. RealPage must cease using nonpublic competitor data for runtime pricing and restrict model training to historical data aged at least 12 months. This remedy aims to sever the informational link that facilitates tacit coordination. It forces the algorithm to predict demand based on stale, aggregated trends rather than real-time competitor positioning. By introducing latency and removing granular competitor visibility, the DOJ intends to return the market to a state where firms guess their rivals' moves rather than knowing them with mathematical certainty.

#### Judicial Divergence and Future Enforcement

A distinct split exists between the Ninth Circuit’s rigorous requirement for explicit agreement evidence and the more inferential approach favored by the FTC and DOJ. The agencies contend that the act of feeding data into a shared "black box" constitutes the agreement itself. They posit that modern cartels do not need smoke-filled rooms; they only need a shared server.

The MultiPlan decision (N.D. Ill. June 2025) supports this agency view in the healthcare sector. The court inferred an agreement among insurers because using MultiPlan’s repricing tool would have been financially irrational for any single payer unless all major payers also used it to suppress reimbursement rates collectively. This "interdependence" theory bridges the gap between parallel conduct and conspiracy. If the tool’s value proposition collapses without universal adoption, the adoption itself implies a conspiracy.

State legislatures have moved faster than federal courts to codify these theories. California and New York enacted statutes in 2025 that lower the burden of proof for algorithmic collusion. The New York law specifically prohibits residential landlords from using any algorithm that relies on nonpublic competitor data to set rents, effectively making the RealPage conduct illegal regardless of whether a "rim" agreement is proven. These state-level interventions render the sophisticated nuances of Sherman Act jurisprudence moot in key markets, forcing national vendors to re-engineer their products to comply with the strictest local standard.

The enforcement landscape for 2026 will focus on "information exchange" per se. The DOJ’s victory in securing conduct remedies from RealPage without a protracted trial sets a template. Regulators will no longer wait to prove that prices rose; they will prosecute the architecture of the information flow. If a software vendor acts as a central nervous system for a market, transmitting private impulses between competitors, it will face liability as a facilitator of unlawful coordination, irrespective of whether the competitors ever spoke a word to one another.

### Operation AI Comply: FTC Enforcement Against Deceptive Algorithmic Claims

The Federal Trade Commission initiated a definitive crackdown on the weaponization of artificial intelligence in September 2024. This enforcement sweep was designated Operation AI Comply. It targeted five commercial entities that exploited the term "artificial intelligence" to defraud consumers and businesses. The operation marked a shift from theoretical warnings to concrete prosecutorial action. Chair Lina Khan utilized Section 5 of the FTC Act to challenge the veracity of algorithmic claims. The agency focused on the divergence between marketing promises and technical reality.

Operation AI Comply was not a singular event. It functioned as a tactical launchpad for a sustained enforcement campaign throughout 2025. The FTC identified a recurring pattern where companies applied a veneer of "AI" to traditional fraud mechanisms. These schemes replaced manual deception with automated systems. The agency dismantled operations that promised passive income through "AI-powered" stores and "robot lawyers" that lacked legal competence. The total confirmed consumer harm in these initial cases exceeded $30 million. This figure represents only the detected financial damage. The erosion of market trust remains unquantified.

#### The "Robot Lawyer" Deception: FTC v. DoNotPay

The most high-profile target of the sweep was DoNotPay. This company marketed itself as "the world's first robot lawyer." It claimed its AI chatbot could draft "ironclad" legal documents and sue for assault without human counsel. The FTC complaint filed in September 2024 exposed these claims as fabrication. DoNotPay possessed no automated system capable of legal reasoning. The company had not retained attorneys to verify the quality of its output. The "robot lawyer" was a basic chatbot with no integration into the judicial filing system.

The Commission finalized a settlement order in January 2025. DoNotPay agreed to pay $193,000 in civil penalties. The order prohibits the company from claiming its service can substitute for professional legal advice. It must verify any future claims with empirical testing. This case established a precedent for "AI impersonation." The FTC ruled that simulating professional expertise via a chatbot constitutes a deceptive act. The penalty amount was low relative to the company's valuation. Yet the conduct requirements imposed a strict verification standard for all future "AI professional" services.

Algorithmic Business Opportunity Fraud: Ascend Ecom and FBA Machine

The agency uncovered larger financial losses in the "business opportunity" sector. Scammers rebranded traditional pyramid schemes as "AI-automated e-commerce." Ascend Ecom and FBA Machine were the primary targets. Ascend Ecom claimed its "cutting-edge" AI tools would generate thousands of dollars in monthly passive income. They promised to automate online storefronts on platforms like Amazon and Etsy. The FTC investigation revealed that the "AI" was nonexistent or nonfunctional. Consumers paid tens of thousands of dollars for worthless software licenses.

The financial impact was severe. Ascend Ecom defrauded consumers of at least $25 million. FBA Machine caused losses exceeding $15.9 million. The FTC secured permanent bans against the operators of Ascend Ecom in mid-2025. The court order permanently prohibits them from marketing business opportunities. It requires the turnover of assets including real estate and bank accounts. FBA Machine faced a similar fate in July 2025. The court froze its operations and appointed a receiver to liquidate assets for restitution. These cases demonstrate that "AI" has become the primary hook for high-ticket investment fraud.

The Review Generation Engine: FTC v. Rytr

The Commission targeted the supply chain of deception in its action against Rytr. This company offered an AI writing assistant capable of generating unlimited consumer reviews. The tool allowed users to input a product name and receive positive testimonials. The FTC alleged that Rytr provided the "means and instrumentalities" for fraud. The service had no legitimate purpose other than polluting the marketplace with fake reviews. The complaint cited instances where users generated thousands of reviews for a single product in minutes.

This case took an unexpected turn in late 2025. The FTC originally secured a consent order banning Rytr from selling review-generation tools. The Commission reopened and vacated this order in December 2025. The decision cited "legal standards and alignment with current administration's AI policy priorities." This reversal indicates a fracture within the regulatory environment. Dissenting Commissioners Ferguson and Holyoak had previously argued that Rytr should not be liable for how third parties used its neutral tool. The vacating of the order suggests a judicial or political recalibration regarding the liability of AI tool developers versus AI tool users.

The Pivot to Antitrust: Algorithmic Pricing Enforcement in 2025

The enforcement focus broadened in 2025 from consumer protection to antitrust. The Department of Justice and FTC targeted the use of algorithms for price coordination. The central theory posits that competitors sharing data via a third-party algorithm constitutes a modern form of price-fixing. This bypasses the need for smoke-filled rooms. The algorithm serves as the cartel manager. The agencies filed Statements of Interest in multiple cases to support this "per se illegal" interpretation.

RealPage Inc. became the primary battleground. The DOJ filed a proposed settlement on November 24, 2025. This resolution followed a year-long lawsuit alleging that RealPage's software enabled landlords to coordinate rents. The software ingested nonpublic lease data from competitors and generated pricing recommendations. The settlement terms are strict. RealPage must cease using nonpublic competitor data for runtime pricing operations. It is prohibited from training its models on data less than 12 months old. This "data aging" requirement aims to sever the link between real-time competitor actions and algorithmic recommendations.
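The "data aging" requirement reduces to a simple eligibility filter over training records. A minimal sketch, assuming a hypothetical record layout (field names and figures invented for illustration, not drawn from the settlement itself):

```python
from datetime import date, timedelta

# Illustrative rule: only records at least 12 months old may enter a
# training set. 365 days is an assumed proxy for "12 months".
MIN_AGE = timedelta(days=365)

def eligible_for_training(records, today):
    """Return only lease records old enough to satisfy the aging rule."""
    return [r for r in records if today - r["observed_on"] >= MIN_AGE]

records = [
    {"unit": "A1", "rent": 2400, "observed_on": date(2024, 10, 1)},  # >12 months old
    {"unit": "B2", "rent": 2550, "observed_on": date(2025, 9, 1)},   # too recent
]
allowed = eligible_for_training(records, today=date(2025, 11, 24))
print([r["unit"] for r in allowed])  # → ['A1']
```

The design point is that the cutoff is enforced at ingestion time, before any model sees the data, which is the architectural separation the settlement appears to demand.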

The judiciary pushed back against the agencies' aggressive theories in August 2025. The Ninth Circuit Court of Appeals ruled in Gibson v. Cendyn Group. The court affirmed the dismissal of a class-action lawsuit against Las Vegas hotels. The plaintiffs alleged that using Cendyn's pricing software violated the Sherman Act. The court held that the use of common software does not establish a conspiracy. The ruling emphasized that the software's recommendations were non-binding. Hotels retained the authority to reject the suggested prices. This distinction between "binding" and "advisory" algorithms has become the defining legal standard for 2026.

State-Level Divergence: The New York Labeling Law

State legislatures initiated their own regulatory frameworks while federal courts debated antitrust theories. New York enacted the "Preventing Algorithmic Pricing Discrimination Act" in May 2025. The law mandates transparency for personalized pricing. Retailers must disclose when a price is set by an algorithm using personal data. The required label is blunt: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."

The National Retail Federation challenged this law on First Amendment grounds. They argued it compelled commercial speech. U.S. District Judge Jed Rakoff dismissed this challenge in October 2025. He ruled that the disclosure provided factual information necessary to prevent consumer deception. This victory paves the way for a patchwork of state-level transparency laws. It forces companies to maintain different algorithmic compliance stacks for different jurisdictions. California and Colorado are advancing similar legislation for the 2026 session.

Statistical Summary of Enforcement Actions (2024-2025)

The following table aggregates the financial and operational outcomes of the primary enforcement actions discussed. It isolates the confirmed monetary judgments and the specific nature of the algorithmic prohibition.

| Defendant Entity | Alleged Scheme | Financial Judgment | Status (Jan 2026) | Key Prohibition |
|---|---|---|---|---|
| DoNotPay | Fake "Robot Lawyer" | $193,000 | Settled (Jan 2025) | Must verify AI professional claims with testing. |
| Ascend Ecom | Passive Income Automators | $25,000,000+ | Permanent Ban (June 2025) | Banned from selling business opportunities. |
| FBA Machine | Amazon Store Automation | $15,900,000+ | Shut Down (July 2025) | Assets frozen; receiver appointed. |
| Rytr | Fake Review Generation | N/A | Order Vacated (Dec 2025) | Original ban on review tools set aside. |
| RealPage | Rental Price Fixing | Conduct Remedies | Settled (Nov 2025) | No use of nonpublic data < 12 months old. |

The data indicates a clear bifurcation in enforcement outcomes. Fraudulent schemes targeting individual consumers resulted in total business shutdowns and asset seizures. Nuanced antitrust cases involving software tools resulted in conduct modifications rather than corporate destruction. The "Rytr Reversal" in late 2025 remains a statistical anomaly. It suggests a potential upper limit to how far the FTC can stretch "means and instrumentalities" theories against software developers.

Technological Reality vs. Marketing Hype

The FTC's investigations exposed a consistent technical deficit across all targets. The "AI" marketed by Ascend Ecom was often a simple script for scraping product listings. DoNotPay's "robot lawyer" was a decision-tree chatbot incapable of parsing complex legal syntax. The disconnect is quantifiable. In 100% of the Operation AI Comply cases, the defendants failed to produce technical documentation proving their algorithms performed the advertised functions. The "black box" nature of AI served as a shield for non-delivery. Scammers relied on the consumer's inability to audit the code.

The agency's response has been to demand the "black box" be opened. The DoNotPay order requires specific testing methodologies. The RealPage settlement mandates the segregation of data training sets. This moves enforcement from policing ad copy to policing data architecture. The standard for 2026 is no longer "what does your ad say?" It is "what does your data flow diagram look like?"

Judicial Resistance and Future Trends

The Gibson ruling presents a significant barrier to the FTC's antitrust ambitions. The court's refusal to categorize algorithmic recommendations as per se illegal imposes a higher burden of proof on regulators. They must now prove an implicit agreement to adhere to the algorithm's price. This requires internal communications or evidence of punishment for deviation. Mere parallel pricing is insufficient. The agency will likely shift its focus to the design of the algorithms, investigating features that penalize price deviations or automatically enforce price floors.

The RealPage settlement offers the template for this new approach. By attacking the input data (nonpublic competitor info) rather than the output price, the DOJ bypassed the conspiracy question. They established that the information exchange itself was anticompetitive. We project that 2026 will see a surge in "Information Exchange" cases. The target will be the data brokers and aggregators that feed the algorithms. The algorithm is the gun. The data is the bullet. The agencies have decided to stop the flow of ammunition.

State vs. Federal Friction: New York’s Algorithmic Pricing Disclosure Act of 2025

New York State unilaterally shattered the fragile détente between federal regulators and Silicon Valley's proprietary pricing algorithms on October 15, 2025. Governor Kathy Hochul signed two pivotal statutes that day. The first was the Algorithmic Pricing Disclosure Act. The second was Senate Bill S. 7882. These laws amended the General Business Law to effectively outlaw the "black box" revenue models dominant in the multifamily housing sector. This legislative maneuver did not merely supplement Federal Trade Commission enforcement; it actively embarrassed the federal pace of litigation. Washington spent years conducting Section 6(b) studies and filing statements of interest. Albany simply criminalized the core business logic of the defendants.

The Statutory Mechanism: General Business Law § 349-a

The Algorithmic Pricing Disclosure Act codified a new reality for data-driven commerce within the Empire State. The statute took effect on November 10, 2025. It targets any entity that uses automated computational processes to set dynamic prices based on personal data. The law mandates a strict visual warning. Retailers and service providers must display a clear disclosure. The required text is specific. It must read: THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.

This disclosure requirement applies to brick-and-mortar digital displays and online checkout flows alike. The statute defines "personal data" with extreme breadth. It includes location history and browsing behavior. It covers device fingerprints and purchase history. The law imposes a civil penalty of $1,000 per violation. A single day of non-compliant transactions for a major retailer could theoretically generate eight figures in liability.

The friction with federal standards arises from the definition of harm. The FTC historically views harm through the lens of consumer injury or Sherman Act collusion theories. New York redefined the harm as the information asymmetry itself. The state legislature declared that the secrecy of the pricing mechanism is the injury. This strict liability standard bypasses the need for the Attorney General to prove market power or conspiracy in every individual instance of dynamic pricing.

The Antitrust Hammer: Senate Bill S. 7882

While the Disclosure Act addresses consumer transparency, Senate Bill S. 7882 serves as the true antitrust weapon. This legislation amended the Donnelly Act. That is New York’s state antitrust law. The amendment specifically prohibits the use of algorithmic devices to exchange non-public competitor data for setting rents. This provision directly targets the business models of companies like RealPage and Yardi.

The statute makes it illegal for landlords to pool their private lease data into a centralized algorithm. The law deems such conduct a restraint of trade per se. This differs sharply from the federal approach. The Department of Justice and FTC must typically litigate these cases under the "rule of reason" or prove a "hub-and-spoke" conspiracy. Proving a tacit agreement between thousands of landlords is statistically difficult. S. 7882 removes that evidentiary burden. If a landlord uses the software to set rents based on pooled private data, the landlord violates the law.

RealPage filed suit immediately. The complaint in RealPage, Inc. v. James argues that the statute violates the First Amendment. The company contends that its price recommendations are protected speech. They argue the law imposes an unconstitutional prior restraint on information sharing. The outcome of this litigation will determine the boundaries of state power over software.

Federal Stagnation versus State Velocity

The contrast between Washington and Albany is stark. The FTC initiated a market inquiry into "surveillance pricing" in July 2024. The agency released preliminary findings in January 2025. These findings confirmed that companies use granular data to extract maximum willingness to pay from consumers. Yet the federal response remained procedural. The FTC relies on consent decrees and settlements.

The Department of Justice did secure a proposed settlement with RealPage in November 2025. That settlement required the company to stop using non-public data for pricing recommendations. But the federal settlement contained loopholes. It allowed the continued use of "anonymized" and "aggregated" public data. Critics argue this allows the algorithm to function with slightly less precision but equal market impact.

New York rejected this compromise. The state ban encompasses "any" algorithmic device that recommends rents based on pooled competitor data. It does not carve out safe harbors for aggregated public data if the intent is to coordinate prices. Attorney General Letitia James signaled that compliance with the federal settlement does not guarantee compliance with New York law. This creates a regulatory thicket. A landlord might be compliant in Texas under the DOJ decree but liable for treble damages in Manhattan.

The RealPage Litigation Data

The economic stakes are quantifiable. The multidistrict litigation in Tennessee involving RealPage resulted in a preliminary settlement of $141.8 million in October 2025. This covered twenty-six defendant property management firms. RealPage itself did not settle the class action at that time. The company chose to fight the New York law instead.

Data from the litigation reveals the scope of the algorithm’s penetration. Approximately 70% of multifamily apartment buildings in major markets like San Francisco and New York City utilized revenue management software. Rents in these markets outpaced inflation by significant margins between 2021 and 2024. The plaintiffs allege this was artificial inflation caused by the software.

The New York law threatens the recurring revenue of these software platforms. If the statute stands, the "network effect" of the algorithm collapses. The value of the software relies on the volume of data contributed by the users. S. 7882 effectively unplugs the data feed from New York landlords. Without the steady stream of private lease data from the largest rental market in the country, the algorithm loses its predictive power.

Judicial Friction and Preemption Arguments

The RealPage lawsuit relies heavily on the doctrine of federal preemption and the First Amendment. The plaintiffs argue that the Sherman Act establishes a uniform federal standard for antitrust. They contend that New York cannot criminalize conduct that federal law permits or regulates differently.

Federal judges in the Southern District of New York must now weigh the "Albany Effect." If New York sets a stricter standard than the FTC, national companies must default to the New York standard. It is technically impossible to run a national pricing algorithm that excludes New York data without degrading the model for New Jersey or Connecticut. The markets are intertwined.

The court denied the initial request for a preliminary injunction against the Disclosure Act in December 2025. The judge ruled that the disclosure was "plainly factual." The requirement to state "THIS PRICE WAS SET BY AN ALGORITHM" does not violate the First Amendment. It is a commercial disclosure similar to nutrition labels. The challenge to the antitrust ban in S. 7882 remains active. The court scheduled oral arguments for February 2026.

Implications for 2026 Enforcement

The divergence between state and federal enforcement creates a hazardous environment for corporate compliance officers. The FTC under Chair Lina Khan supports the state measures. The FTC filed an amicus brief in support of New York’s position. This is a reversal of historical norms where federal agencies often guarded their jurisdiction against state encroachment.

This cooperative federalism signals a new era. The FTC effectively uses states as "laboratories of enforcement." If the New York ban survives judicial scrutiny, the FTC will likely adopt similar bright-line rules in future rulemaking. If the New York law falls, the FTC will revert to case-by-case litigation under the Sherman Act.

The immediate impact is visible in the rental market. Several large property management firms suspended their use of RealPage software for their New York portfolios in November 2025. They reverted to manual pricing or internal spreadsheets. Early data from December 2025 suggests a stabilization in asking rents for vacant units in these portfolios. This correlates with the removal of the algorithmic upward pressure.

The Disclosure Regime: Compliance and Evasion

The Disclosure Act created its own chaotic ecosystem. Retailers scrambled to update digital price tags and e-commerce checkout pages before the November 10 deadline. The specific wording requirement forced design changes. Marketing teams hated the all-caps warning. They argued it scared customers.

Some companies attempted to evade the law. They switched from "personalized" pricing to "dynamic" pricing based on inventory levels rather than user data. The statute contains an exemption for pricing based solely on supply and demand. Retailers claimed their algorithms only looked at stock levels. The Attorney General’s office issued subpoenas to three major e-commerce platforms in December 2025. The state seeks to prove that "inventory" data was a proxy for "demand" data derived from user tracking.

This technical distinction will define the next wave of litigation. The line between "inventory management" and "surveillance pricing" is thin. An algorithm that raises the price of an umbrella when it rains is using location data. The statute covers "data that identifies... a device." Weather data linked to a user’s location triggers the disclosure requirement.
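The trigger logic the Attorney General is probing can be sketched as a screening function: if any pricing input traces back to personal data as the statute defines it, the label is required; inputs reflecting only supply and demand fall under the exemption. A hedged illustration (the category names and set-based representation are invented for this sketch, not taken from the statute):

```python
# Categories the text says the statute treats as "personal data".
PERSONAL = {"location", "browsing_history", "device_fingerprint", "purchase_history"}

REQUIRED_LABEL = "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."

def requires_disclosure(pricing_inputs):
    """True if any pricing input is personal data, triggering the label."""
    return any(i in PERSONAL for i in pricing_inputs)

# Pure inventory-based dynamic pricing: the supply/demand exemption applies.
print(requires_disclosure({"stock_level", "seasonality"}))   # → False
# Weather keyed to a user's location: personal data under the statute's breadth.
print(requires_disclosure({"stock_level", "location"}))      # → True
```

The litigation described above turns on whether fields like "stock_level" are themselves proxies for tracked user demand, which no static check of input names can resolve.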

Statistical Reality of the Penalties

The penalty structure of the Disclosure Act is mathematically devastating for high-volume transactions. Consider a ride-share platform. If the platform uses a personalized algorithm to set a surge price for a user in Brooklyn, that is one violation. If the platform fails to display the specific text "THIS PRICE WAS SET BY AN ALGORITHM..." it owes $1,000.

A platform processing 50,000 rides a day in New York State faces a potential liability of $50,000,000 daily. There is no cap on aggregate damages in the statute. This existential risk forced immediate compliance. Ride-share apps added the disclosure on November 9, 2025. The text appears in small print below the "Confirm Ride" button.
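The exposure arithmetic above is worth making explicit, since the absence of an aggregate cap is what drives the compliance calculus. A back-of-envelope sketch using only the figures stated in the text (the annualization is an extrapolation, not a claim from the statute):

```python
# Per-violation penalty and daily volume as stated in the text.
PENALTY_PER_VIOLATION = 1_000   # dollars per non-compliant transaction
RIDES_PER_DAY = 50_000          # New York State rides per day

daily_exposure = RIDES_PER_DAY * PENALTY_PER_VIOLATION
print(f"${daily_exposure:,}")   # → $50,000,000

# Uncapped liability compounds linearly with every day of non-compliance.
annual_exposure = daily_exposure * 365
print(f"${annual_exposure:,}")  # → $18,250,000,000
```

At these magnitudes, even a short compliance lag dwarfs any plausible revenue gain from withholding the disclosure, which explains the day-before-deadline rollout described above.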

Conclusion: The Fractured Landscape

The events of late 2025 proved that the FTC is no longer the sole arbiter of American antitrust policy. New York reasserted its sovereign power to regulate its internal markets. The Algorithmic Pricing Disclosure Act and S. 7882 act as a pincer movement. One law attacks the consumer interface. The other attacks the backend data pool.

The federal government observes from the sidelines as the Southern District of New York decides the future of algorithmic commerce. The judicial ruling in 2026 will determine if states can effectively ban specific software architectures. Until then, New York remains a hostile zone for the revenue management industry. The data indicates that rent growth is slowing in the state. The algorithms are off. The experiment has begun.

Judicial Skepticism of Tacit Collusion: Lessons from the Las Vegas Hotel Litigation

The Federal Trade Commission encountered a definitive statistical and legal firewall in the District of Nevada. The dismissal of Gibson v. Cendyn Group, LLC in May 2024, subsequently affirmed by the Ninth Circuit in August 2025, establishes a rigid evidentiary threshold for algorithmic antitrust claims. This ruling rejects the hypothesis that mere software adoption by competitors constitutes an illegal conspiracy. Chief Judge Miranda Du dismantled the plaintiffs' "hub and spoke" theory by exposing a fatal lack of connectivity between the "spokes." The court demanded data-backed proof of agreement rather than inferential leaps based on parallel pricing.

The "Rimless Wheel" Defect

Antitrust litigation against algorithmic pricing relies on the "hub and spoke" conspiracy model. The software provider acts as the hub. The competing companies act as the spokes. A conspiracy exists only if a "rim" connects the spokes. This requires a horizontal agreement among the competitors to adhere to the hub's directives.

The Gibson plaintiffs failed to substantiate this rim. Their data revealed that the defendant hotels adopted the Cendyn Rainmaker software at disparate intervals over a ten-year period. A conspiracy claim requires temporal coordination. The ten-year variance in adoption dates renders the theory of a coordinated "price fix" statistically improbable. Competitors cannot agree to a conspiracy that they join a decade apart without explicit communication. The court found no evidence of such communication.

The plaintiffs relied on the concept of "tacit collusion." They argued that the hotels knew their competitors used the same software. Judge Du ruled this insufficient. Knowledge of a competitor's tools does not equal an agreement to rig prices. This distinction is vital. It separates unlawful conspiracy from lawful "conscious parallelism." In an oligopolistic market like the Las Vegas Strip, competitors naturally monitor each other's prices. The use of a common algorithm to automate this monitoring simplifies the process but does not criminalize it.

The Binding Constraint Variance

The core statistical failure in Gibson lay in the "acceptance rate" metric. The Department of Justice and FTC argue that algorithms violate the Sherman Act when they replace independent decision-making with centralized coordination. This theory holds weight only if the algorithm effectively binds the user.

Data presented in the Gibson litigation contradicted the claim of binding constraints. The Rainmaker software provided recommendations. Human revenue managers frequently rejected these recommendations. The plaintiffs' own filings suggested acceptance rates were inconsistent. This sharply contrasts with the RealPage rental housing litigation. In RealPage, evidence showed acceptance rates exceeding 90 percent. Pressure tactics enforced compliance in the housing market. No such enforcement mechanism existed in the Las Vegas hotel sector.

The court correctly identified that a recommendation subject to human override remains a suggestion. If a hotel rejects the algorithm's price 40 percent of the time, the hotel retains its pricing sovereignty. The algorithm serves as a calculator rather than a cartel enforcer. The Ninth Circuit affirmed this logic in 2025. They ruled that without a binding requirement to accept prices, the software is merely a tool for independent revenue management.
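The "acceptance rate" metric the courts weighed is simply the share of algorithmic recommendations adopted unchanged. A minimal sketch with invented prices, mirroring the 40-percent-rejection example above against the >90 percent pattern alleged in RealPage:

```python
def acceptance_rate(recommended, charged):
    """Fraction of recommendations adopted without human override."""
    accepted = sum(1 for r, c in zip(recommended, charged) if r == c)
    return accepted / len(recommended)

# Advisory pattern (Gibson): the hotel overrides 2 of 5 suggestions (40%).
hotel = acceptance_rate([199, 249, 179, 299, 229], [189, 249, 179, 299, 210])
# Effectively binding pattern (RealPage allegations): universal adoption.
rental = acceptance_rate([2400, 2450, 2500, 2475], [2400, 2450, 2500, 2475])

print(round(hotel, 2), round(rental, 2))  # → 0.6 1.0
```

Note the metric alone cannot distinguish voluntary agreement from enforced compliance; that is why the courts also asked whether any mechanism punished deviation.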

Public Data vs. Private Aggregation

The source of the data input determines the legality of the output. The FTC posits that algorithms act as "information exchanges" where competitors swap non-public data to stabilize prices. The Gibson findings dismantled this assertion regarding Cendyn's GuestRev and GroupRev products.

The investigation clarified that Cendyn's "RevCaster" feature primarily scraped public pricing data from the internet. Collecting public data is legal. Any human analyst can manually check a competitor's website. An algorithm simply executes this task at scale. The plaintiffs failed to prove that the hotels fed private, real-time occupancy data into a shared "melting pot" that was then visible or beneficial to competitors.

The distinction is binary.
1. Legal: Algorithms that scrape public rates to inform independent pricing strategies.
2. Illegal: Algorithms that aggregate private, future-looking inventory data to signal coordinated price hikes.

Judge Du found the Gibson case fell squarely into the first category. The software optimized revenue based on market demand and public competitor rates. This creates efficiency. It does not create a cartel.

Statistical Comparison: Vegas vs. Housing

The following table contrasts the evidentiary profile of the failed Gibson case against the active RealPage litigation. This comparison defines the current enforcement boundary.

| Metric | Gibson (Las Vegas Hotels) | RealPage (Rental Housing) |
|---|---|---|
| Adoption Timeline | Staggered over 10 years | Rapid, clustered adoption |
| Enforcement Mechanism | None (Advisory only) | Compliance monitoring & pressure |
| Acceptance Rate | Variable / Low | Consistently >90% (Effectively Binding) |
| Data Source | Primarily Public Web Scraping | Private Lease/Occupancy Data |
| Judicial Outcome | Dismissed with Prejudice | Proceeding (Motion to Dismiss Denied) |

Implications for 2025-2026 Enforcement

The Ninth Circuit's affirmation of the Gibson dismissal forces a recalibration of federal strategy. The Department of Justice cannot prosecute algorithmic pricing solely on the basis of market ubiquity. A monopoly on software does not automatically equal a monopoly on price.

Investigators must now isolate specific "plus factors" to survive a motion to dismiss. These factors include:
1. Evidence of data reciprocity where private data is exchanged.
2. Mechanisms that punish deviation from the recommended price.
3. Direct communications between competitors regarding the software's purpose.

The Gibson ruling creates a safe harbor for "advisory" algorithms. Software developers can shield themselves from antitrust liability by ensuring their tools allow unrestricted human override. They must also prioritize public data inputs over private data aggregation.

The courts have signaled a return to rigorous economic analysis. They reject the notion that "pricing by algorithm" is inherently suspect. The algorithm is a vessel. The content of the vessel determines the crime. In Las Vegas, the vessel contained public information and non-binding advice. The law sees no conspiracy in a calculator. The FTC must elevate its evidentiary standard to pierce this judicial shield.

The 'Conscious Parallelism' Defense in the Age of Automated Revenue Management

By late 2025, the legal battleground for algorithmic antitrust enforcement shifted from theoretical academic debates to hard-edged judicial rulings. The central friction point remains the "Conscious Parallelism" defense—a doctrine dating back to the 1954 Theatre Enterprises Supreme Court ruling—now weaponized by silicon. Defendants argue that simultaneous price hikes across an industry are not evidence of conspiracy but merely the rational, independent result of using the same superior intelligence tools. In 2025, the federal judiciary largely signaled that without a "smoking gun" of direct communication, algorithms can legally coordinate market prices to the decimal point, provided they do so via a third-party vendor rather than a smoke-filled room.

The 9th Circuit Firewall: Gibson v. Cendyn

The decisive moment for this defense arrived on August 15, 2025, when the Ninth Circuit Court of Appeals affirmed the dismissal of Gibson v. Cendyn Group. The plaintiffs alleged that Las Vegas hotel operators—including Caesars and MGM—effectively formed a cartel by delegating pricing authority to Cendyn’s Rainmaker software. The software, they argued, ingested non-public occupancy data from rivals and spat out aligned, inflated room rates.

The court, however, rejected the "Hub-and-Spoke" conspiracy theory. The ruling established a high-water mark for the defense: mere use of a common vendor, even one that utilizes aggregated industry data to advise pricing, does not constitute a Section 1 Sherman Act violation. The court demanded proof that competitors entered into a "rim" agreement—a pact to adhere to the algorithm’s price, rather than just licensing the tool. Because the hotels retained the technical ability to override the software’s suggestion (even if they rarely did), the court classified their behavior as independent rational action. This ruling effectively legalized "algorithmic tacit collusion" in the Ninth Circuit, forcing regulators to prove that the software mandated price fixing rather than merely facilitating it.

The Yardi Summary Judgment: Piercing the "Black Box"

Less than two months later, on October 6, 2025, the Superior Court of California delivered a second blow to enforcement efforts in Mach v. Yardi Systems. Unlike the motion-to-dismiss phase in Gibson, this case reached summary judgment, allowing for a forensic examination of the code itself. The court’s findings were categorical: Yardi’s "Revenue IQ" software did not pool confidential client data to set prices for competitors in the way plaintiffs alleged.

This factual finding dismantled the "data pooling" argument that had sustained earlier investigations. The court found that while Yardi used an internal "blind pool" of data to calibrate its models, the specific pricing recommendations for a landlord were driven by their own property’s metrics and public market data. This distinction—between "training AI on pooled data" and "pricing via pooled data"—became the lethal nuance that exonerated Yardi. It demonstrated that unless the DOJ or FTC can prove the algorithm explicitly cross-references active, non-public competitor data to set a specific rate, the "Conscious Parallelism" defense holds.

The DOJ’s Strategic Capitulation: The RealPage Settlement

Faced with these judicial headwinds, the Department of Justice executed a strategic pivot in November 2025. Rather than risk a catastrophic loss at trial that could enshrine algorithmic price-fixing as legal precedent, the DOJ settled its massive antitrust suit against RealPage. The November 24, 2025 settlement imposed strict conduct remedies: RealPage must cease using non-public client data to train its pricing models and cannot offer "auto-accept" functionality that nudges landlords to adopt higher rents.

While the DOJ framed this as a victory for renters, statistical reality suggests otherwise. The settlement restricts future data inputs but leaves the existing, highly trained models intact. Furthermore, the settlement applies only to RealPage, leaving the broader "Revenue Management" industry to operate under the permissive Gibson standard. The table below details the divergence in judicial outcomes for 2025.

| Case Name | Defendant Sector | 2025 Outcome | Key Legal Precedent / Takeaway |
|---|---|---|---|
| Gibson v. Cendyn | Hospitality (Hotels) | Dismissal Affirmed (9th Cir.) | Parallel use of common pricing software is not conspiracy without proof of agreement to bind prices. |
| Mach v. Yardi Systems | Residential Real Estate | Summary Judgment (Defense) | Code audit proved no direct use of confidential competitor data for specific pricing recommendations. |
| US v. RealPage | Residential Real Estate | Settlement (Nov 2025) | Avoided trial. Ban on specific data inputs, but no ruling on the illegality of the algorithm itself. |
| Cornish-Adebiyi v. Caesars | Hospitality (Casinos) | Dismissed (D.N.J.) | Reaffirmed Gibson logic; failure to plead a "rim" to the hub-and-spoke conspiracy. |

The "Plus Factor" Void

To overcome the "Conscious Parallelism" defense, plaintiffs must demonstrate "plus factors"—economic evidence that the conduct would be irrational in a competitive market. In 2025, plaintiffs failed to convince courts that using revenue management software is irrational absent a conspiracy. The courts accepted the defense narrative: these tools optimize revenue (yield management) rather than fix prices. Even when models showed that RealPage-dominated markets saw rental increases outpacing inflation by significant margins, judges viewed this as "efficiency" rather than "collusion." The inability of the Sherman Act to distinguish between hyper-efficient price signaling and cartel behavior remains the primary systemic failure in current antitrust enforcement.

Impact of 2025 Leadership Changes on FTC Algorithmic Enforcement Priorities

The transition of the Federal Trade Commission (FTC) leadership on January 20, 2025, marked a definitive termination of the Neo-Brandeisian "holistic" antitrust framework advocated by former Chair Lina Khan. President Trump’s designation of Andrew N. Ferguson as Chairman initiated an immediate strategic pivot. The agency reverted to the Consumer Welfare Standard. This shift de-prioritized "unfair methods of competition" (UMC) rulemaking and re-focused on fraud, specific price-fixing evidence, and deregulation. By the close of 2025, the agency’s enforcement data reflected a 40% reduction in standalone Section 5 investigations compared to 2024. The "commercial surveillance" rulemaking, a cornerstone of the previous administration intended to ban algorithmic price discrimination, was effectively shelved in March 2025.

Chairman Ferguson’s tenure commenced with a rigorous audit of pending litigation. This audit resulted in the withdrawal of support for three major "structural separation" cases in the tech sector. The Commission’s 2025 enforcement strategy regarding algorithmic pricing moved away from attacking the existence of shared software. It focused instead on proving explicit conspiratorial agreements. This evidentiary contraction aligned the FTC with the Department of Justice (DOJ), which simultaneously executed a settlement-heavy strategy. The agency’s budget proposal for Fiscal Year 2026, released in May 2025, codified this contraction. The White House requested a reduction in the FTC’s budget from $428 million to $385 million. This proposed cut necessitated a projected 10% reduction in full-time equivalent (FTE) staff, specifically targeting the Bureau of Competition’s merger review teams.

Judicial Rulings 2025: The Collapse of "Tacit Algorithmic Collusion"

Federal courts in 2025 systematically dismantled the legal theory that using common pricing algorithms constitutes per se illegal price-fixing without direct communication between competitors. The judiciary demanded "plus factors"—evidence of an agreement beyond parallel conduct. Three landmark rulings in 2025 established a high evidentiary bar that the new FTC leadership has declined to challenge.

Gibson v. Cendyn Group (9th Circuit, August 15, 2025): The Ninth Circuit Court of Appeals affirmed the dismissal of the class action against Las Vegas hotel operators. The court held that individual decisions to license the same revenue management software (Cendyn) did not prove a conspiracy. The ruling emphasized that the defendants subscribed at different times and retained the ability to reject pricing recommendations. This decision effectively immunized "hub-and-spoke" algorithmic networks in the Ninth Circuit absent proof of a horizontal agreement to bind prices. The Ferguson-led FTC declined to file an amicus brief supporting a rehearing.

Mach v. Yardi Systems (California Superior Court, October 6, 2025): In a sweeping summary judgment victory for the defense, the court rejected claims that Yardi’s "Revenue IQ" software facilitated a cartel. The court cited Yardi’s source code production. The code demonstrated that the algorithm relied on a user’s own data and public benchmarks, not the non-public confidential data of competitors. The judge described the plaintiffs’ "data pooling" theory as factually unsupported by the software’s mechanics. This ruling forced the FTC to reconsider its investigative theories regarding "black box" pricing engines.

RealPage Settlement (DOJ/Federal Court, November 24, 2025): While led by the DOJ, the RealPage resolution defined the federal enforcement ceiling for 2025. Rather than the breakup sought by progressives, the DOJ accepted a consent decree. RealPage agreed to modify its "AI Revenue Management" products to exclude non-public competitor data from its training sets. The company admitted no liability. This settlement signaled to the market that algorithmic pricing tools remain legal provided they sanitize their input data streams. The FTC subsequently closed two parallel investigations into smaller revenue management firms in the hospitality sector, citing the RealPage precedent.

2025 Enforcement Metrics and Case Disposition

The following table details the status of major algorithmic pricing antitrust matters as of February 2026. It highlights the divergence between the aggressive filings of 2023-2024 and the dismissals or settlements characterizing 2025.

| Case / Matter | Sector | Primary Allegation | 2025 Status / Outcome | Strategic Implication |
| --- | --- | --- | --- | --- |
| DOJ v. RealPage | Residential Rental | Algorithm utilized non-public competitor data to inflate rents. | Settled (Nov 2025). Consent decree restricts data inputs. No breakup. | Establishes "Data Hygiene" standard over "Per Se Illegality." |
| Gibson v. Cendyn | Hospitality (Vegas) | Hub-and-spoke conspiracy via shared software. | Dismissed (Aug 2025). 9th Cir. affirmed no conspiracy without agreement. | Judicial rejection of "conscious parallelism" via algorithms. |
| Mach v. Yardi | Residential Rental | Price-fixing via Revenue IQ. | Def. Verdict (Oct 2025). Summary Judgment for Yardi. | Source code audits successfully refuted data pooling claims. |
| Cornish-Adebiyi v. Caesars | Hospitality (Atlantic City) | Algorithmic collusion (Rainmaker software). | Dismissed. 3rd Cir. appeal pending (Jan 2026). | Plaintiffs failed to prove "plus factors" of coordination. |
| FTC Commercial Surveillance ANPR | Economy-wide | Ban on algorithmic discrimination/surveillance. | Shelved (Mar 2025). Removed from active regulatory agenda. | End of broad rulemaking approach to pricing algorithms. |

Economic Consequences of the "Evidentiary Pivot"

The shift in enforcement priorities has generated measurable market effects. Legal costs for PropTech firms stabilized in Q4 2025 after two years of volatility. Venture capital investment in "Dynamic Pricing" startups, which had frozen in 2024, rebounded by 18% in January 2026. Investors interpreted the RealPage settlement as a "green light" for algorithmic tools that utilize public or anonymized data. Conversely, rent inflation persisted in the rental housing sector: the Consumer Price Index (CPI) for Shelter rose 0.4% in December 2025. Critics argue the consent decrees lack the bite to deter tacit coordination. The Ferguson FTC maintains that price controls are outside its jurisdiction, and the Commission's focus remains strictly on fraud and provable collusion. The data indicates a return to the pre-2021 antitrust norm: high burdens of proof for plaintiffs and deference to technological integration absent "smoking gun" communications.

Third Circuit Watch: 'Cornish-Adebiyi' and the Atlantic City Casino Pricing Cartel Claims

The legal battlefield for algorithmic antitrust enforcement shifted decisively to the Third Circuit Court of Appeals in 2025. While the Department of Justice (DOJ) and Federal Trade Commission (FTC) secured procedural victories against residential landlords in the RealPage litigation, the hospitality sector has proven a more fortified target. The focal point of this divergence is Cornish-Adebiyi v. Caesars Entertainment, Inc. (No. 24-3006), a class-action lawsuit alleging that Atlantic City’s dominant casino-hotels formed a price-fixing cartel using Cendyn Group’s "Rainmaker" software. Following a dismissal with prejudice by U.S. District Judge Karen M. Williams in September 2024, the 2025 appellate proceedings have become a litmus test for the "tacit collusion" theory of liability under Section 1 of the Sherman Act.

The "Rainmaker" Mechanism and the Hub-and-Spoke Allegation

The plaintiffs’ central thesis targets the widespread adoption of Cendyn’s GuestREV and GroupREV revenue management platforms. Unlike traditional supply-and-demand pricing, the complaint alleges that Rainmaker functions as a "hub" for a hub-and-spoke conspiracy. The "spokes" are the casino defendants—including Caesars, MGM Resorts, Hard Rock, and Borgata—which collectively control over 90% of the Atlantic City hotel market. The plaintiffs argue that these competitors feed real-time, non-public occupancy and pricing data into the Rainmaker algorithm. The software then processes this pooled data to generate rate recommendations for all users, effectively stripping the market of independent pricing strategies.

The mechanics of this alleged collusion differ from the rental housing schemes. In the RealPage cases, the software often auto-accepted pricing uplifts. In Cornish-Adebiyi, the defendants successfully argued at the district level that Rainmaker’s recommendations were non-binding and that revenue managers frequently overrode them. This "human-in-the-loop" defense became the primary shield for the hotel industry. Judge Williams’ 2024 dismissal heavily relied on this distinction, ruling that the plaintiffs failed to plead a "rim" to the conspiracy—sufficient evidence that the casinos agreed among themselves to adhere to the algorithm’s rates. Without this rim, the court viewed the parallel usage of Rainmaker as rational, independent business adaptation rather than a cartel agreement.

2025 Appellate Activity and FTC Intervention

The 2025 appellate docket for the Third Circuit reveals a fierce ideological clash over the definition of an "agreement" in the age of AI. The FTC and DOJ filed a joint Statement of Interest (March 2024) that formed the bedrock of the plaintiffs' 2025 appeal strategy. The agencies argued that the district court applied an antiquated standard for conspiracy. According to the FTC’s filing, a Section 1 violation does not require direct communication or a binding contract. Instead, the agencies posit that "tacit agreements" formed through a central intermediary are sufficient. If competitors delegate pricing authority to a common algorithm with the knowledge that rivals are doing the same, the "meeting of the minds" requirement is satisfied.

Throughout early 2025, the Third Circuit received a barrage of amicus briefs that underscored the systemic stakes. The American Antitrust Institute (AAI) and Open Markets Institute filed briefs in January 2025 urging reversal. They warned that the "Gibson Roadmap"—referring to the dismissal of the parallel Gibson v. Cendyn case in Nevada—creates a judicial safe harbor for algorithmic price-fixing. If courts require evidence of direct competitor-to-competitor emails to prove a conspiracy, AI-driven coordination will remain effectively immune from antitrust scrutiny.

Conversely, the International Center for Law & Economics (ICLE) filed a brief in March 2025 supporting the casinos. Their argument centered on economic efficiency and the lack of "plus factors" required to infer a conspiracy from parallel conduct. They contended that using shared software to optimize revenue is standard industry practice and that the plaintiffs’ data failed to distinguish between algorithmic collusion and natural market alignment.

Market Distortion: Pricing Data vs. Occupancy Rates

The economic data stemming from Atlantic City’s casino sector creates a friction point with the judicial dismissals. In a functioning competitive market, falling demand typically exerts downward pressure on prices. However, verified financial reports from the New Jersey Division of Gaming Enforcement (DGE) indicate an inverse trend among the defendants. Despite stagnant or declining occupancy rates in 2024 and early 2025, Average Daily Rates (ADR) remained elevated.

The following table aggregates performance metrics for the primary defendants during the disputed period. The data highlights a disconnect: profits and room rates held steady or increased even as the "volume" (occupancy) softened, a pattern the plaintiffs attribute to algorithmic discipline.

Atlantic City Casino-Hotel Performance Indicators (2023–2025)

| Metric | Q4 2023 | Q4 2024 | Q1 2025 (Est.) | Trend Analysis |
| --- | --- | --- | --- | --- |
| Average Occupancy Rate | 66.5% | 65.6% | 64.2% | Declining: Demand softened post-2023. |
| Average Daily Rate (ADR) | $164.04 | $160.27 | $168.50 | Resilient: Rates remain high despite lower occupancy. |
| Gross Operating Profit (Industry) | $145.2M | $140.4M | $138.1M | Stabilized: Margins protected by rate integrity. |
| Defendant Market Concentration | 92% | 93% | 93% | Oligopoly: Defendants dominate available inventory. |

The plaintiffs point to the Q1 2025 ADR increase of nearly 5% year-over-year (projected) against a backdrop of falling occupancy as empirical proof of the "Rainmaker effect." In a non-collusive environment, casinos like Bally's or Golden Nugget (which operate with thinner margins) would aggressively slash rates to capture volume. Instead, the floor price across the boardwalk remained rigid.
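The rate-versus-occupancy divergence can be checked directly against the table's figures. A minimal arithmetic sketch (using the Q4 2024 values as the nearest available baseline, since the table omits Q1 2024):

```python
adr_prev, adr_now = 160.27, 168.50   # ADR: Q4 2024 -> Q1 2025 (est.)
occ_prev, occ_now = 65.6, 64.2       # occupancy %, same periods

adr_change_pct = (adr_now - adr_prev) / adr_prev * 100   # roughly +5.1%
occ_change_pts = occ_now - occ_prev                      # roughly -1.4 points
# Rates rising while occupancy falls is the pattern plaintiffs attribute
# to algorithmic discipline and defendants call rational yield management.
```

The sign divergence, not the exact magnitude, is what the plaintiffs' theory rests on.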

The "Gibson Roadmap" and Judicial Divergence

The dismissal of Cornish-Adebiyi was not an isolated judicial event but part of a specific jurisprudential trend emerging in 2024-2025 known as the "Gibson Roadmap." Named after the District of Nevada’s dismissal of Gibson v. Cendyn (involving Las Vegas hotels), this legal doctrine sets a high bar for pleading algorithmic conspiracy. It requires plaintiffs to show more than just the common use of a pricing platform. To survive a motion to dismiss, the "Roadmap" demands specific allegations that defendants exchanged confidential information outside the algorithm or that the algorithm mandated binding prices.

The Third Circuit’s review of Cornish-Adebiyi challenges this roadmap directly. If the appellate court affirms Judge Williams, it establishes a distinct split between the treatment of residential rental algorithms (RealPage) and hotel revenue management systems. The RealPage cases have largely survived dismissal because rental leases are less fluid than nightly hotel rates and the pressure to renew leases creates a stickier "agreement" environment. The hotel industry’s dynamic pricing model, with rates changing hourly, allows defendants to argue that "price matching" is simply high-speed competition, not collusion.

The outcome of Appeal No. 24-3006 will determine whether the "human-in-the-loop" defense acts as a permanent inoculation against Sherman Act liability for the hospitality industry. If the Third Circuit reverses the dismissal, it validates the FTC’s aggressive stance that the algorithm itself is the cartel. If it affirms, the FTC’s "Project Retreat" on algorithmic pricing will face a substantive legal wall, forcing regulators to seek legislative rather than judicial remedies to curb AI-driven coordination.

The legal distinction between legitimate market intelligence and illegal collusion effectively collapsed between 2023 and 2025. For three decades, the antitrust "safety zones" established by the DOJ and FTC in 1996 provided a clear operational framework: companies could exchange price and cost information provided the data was more than three months old, aggregated by a third party, and anonymized to prevent the identification of specific participants. These guardrails disintegrated when the agencies withdrew the policy statements in July 2023. By early 2026, judicial rulings and settlements had erected a formidable new standard that treats "granular" data integration as a per se violation of the Sherman Act.

#### The Death of Aggregation
The primary shift in enforcement focuses on the "granularity" of data rather than its anonymity. In the In re RealPage Rental Software Antitrust Litigation, the Department of Justice successfully argued that "anonymity" is mathematically impossible when the dataset covers 80% of a specific sub-market. The algorithm does not need to know the name of the competing landlord to coordinate pricing. It only needs the lease transaction data.
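The arithmetic behind the "anonymity is mathematically impossible" argument is simple to sketch. In this illustration (hypothetical numbers, not case data), a participant that knows the pooled market average, the total unit count, and its own contribution can back out its rivals' average exactly; with few rivals in a concentrated sub-market, individual positions follow:

```python
def rivals_average_rent(market_avg, total_units, own_total_rent, own_units):
    """Back out competitors' average rent from an 'anonymized' market aggregate.

    market_avg     : published average rent across the pooled dataset
    total_units    : total units in the pool
    own_total_rent : sum of this participant's own rents
    own_units      : this participant's unit count
    """
    rival_units = total_units - own_units
    return (market_avg * total_units - own_total_rent) / rival_units

# Hypothetical sub-market: 10,000 pooled units, of which we operate 8,000 (80%).
avg = rivals_average_rent(
    market_avg=2000.0, total_units=10_000,
    own_total_rent=8_000 * 1990.0, own_units=8_000,
)
# The "anonymous" aggregate pins the rivals' average rent exactly.
```

No names are needed: the larger a participant's share of the pool, the less residual uncertainty the aggregate leaves about everyone else.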

The November 2025 settlement between the DOJ and RealPage codified this new reality. The consent judgment prohibits the use of non-public "Competitively Sensitive Information" (CSI) in "runtime operation." This term refers to the live calculation of rent prices. The settlement allows model training only on data aged at least 12 months, quadrupling the previous three-month standard. The agreement explicitly bans the "pooling" of active lease data.

#### Judicial Bifurcation: The 'Pooled Data' Bright Line
Two pivotal rulings in 2025 established a binary legal test for algorithmic pricing. Courts now look for the "pooling" of private data.

1. The Per Se Illegality Standard:
In Duffy v. Yardi Systems, Judge Robert Lasnik of the Western District of Washington denied a motion to dismiss in February 2025. He applied the per se illegality standard. This standard is typically reserved for hard-core cartels rather than the softer "rule of reason" analysis. The court found that the "plus factor" required to prove conspiracy was the exchange of non-public data itself. Landlords shared sensitive occupancy and effective rent data with Yardi. Yardi then used that pooled data to generate pricing recommendations for competitors. The court ruled this data centralization effectively outsourced pricing decisions to a single entity. FPI Management subsequently settled for $2.8 million in September 2025. This marked the first monetary admission that shared private data inputs constitute a conspiracy.

2. The Public Data Safe Harbor:
Conversely, the Ninth Circuit Court of Appeals affirmed the dismissal of Gibson v. Cendyn Group in August 2025. This case involved Las Vegas hotel operators using the Cendyn Rainmaker software. The court distinguished this from the rental housing cases because Cendyn’s algorithm relied on public data and the hotel's own internal metrics. It did not pool non-public competitor data to generate rates. The court held that using a common tool to process public information does not violate Section 1 of the Sherman Act absent an explicit agreement to fix prices.

#### The New "Non-Public" Definition
Federal enforcers have expanded the definition of "non-public data" to include any information not readily available to a consumer in a single search. Aggregated transaction data sold by data brokers is now considered "non-public" if it offers a speed advantage over manual market research. The 2025 amendment to California’s Cartwright Act went further. It criminalized the use of "common pricing algorithms" that process even public data if the intent is to stabilize the market.

This legislative and judicial pincer movement forces companies to silo their data. A firm may use an algorithm to optimize its own prices based on its own supply and demand. It may scrape public websites. It cannot feed its private transaction history into a shared "black box" that benefits competitors.

#### Comparative Metrics: The Shift in Legal Standards
The following table outlines the operational parameters for data exchange that were permissible in 2016 versus the restrictive mandates enforced in 2026.

| Parameter | 2016 "Safety Zone" Standard | 2026 Enforcement Standard |
| --- | --- | --- |
| Data Age | 3 months old | 12 months old (for training only) |
| Permissible Use | Benchmarking & Price Setting | Internal Analytics Only |
| Aggregation | Minimum 5 participants | Strict Prohibition on Pooling CSI |
| Anonymity | No single firm >25% weight | Mathematical Reversibility Test |
| Algorithm Type | Recommendation Engines Allowed | "Runtime" Pricing Banned if Shared |
| Legal Risk | Rule of Reason (High Defense) | Per Se Illegality (Strict Liability) |

#### Implications for Corporate Compliance
The RealPage and Yardi precedents indicate that the FTC and DOJ treat the algorithm as the "hub" in a hub-and-spoke conspiracy. The "spokes" are the companies contributing data. Liability attaches not just to the software vendor but to every user who contributes data to the pool. The Department of Justice explicitly stated in its Statement of Interest that "adopting a competitor's pricing formula" via an algorithm is indistinguishable from meeting in a back room to fix prices.

Data scientists must now audit inputs to ensure complete isolation. The legacy practice of "benchmarking against the market" using real-time exchanges is legally radioactive. Companies must prove their pricing engines run solely on internal telemetry and truly public signals. The era of the shared data cooperative is over.
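The siloing requirement described above can be operationalized as a provenance check on model inputs. A minimal, hypothetical sketch (the source labels and function names are illustrative, not any vendor's API):

```python
# Provenance labels a compliance team might attach to each pricing-model feature.
ALLOWED_SOURCES = {"own_telemetry", "own_transactions", "public_listing", "public_index"}

def audit_features(feature_sources):
    """Return features whose provenance is neither internal nor truly public."""
    return sorted(f for f, src in feature_sources.items() if src not in ALLOWED_SOURCES)

flags = audit_features({
    "occupancy_rate": "own_telemetry",       # internal: permitted
    "zip_median_rent": "public_index",       # public: permitted
    "rival_effective_rent": "vendor_pool",   # pooled CSI: the radioactive input
})
```

Under the 2026 standard sketched here, any flagged feature would have to be removed or replaced with a public or aged equivalent before the model runs.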

Economic Critiques of the 'Common Data Algorithm' Liability Theory

Date: February 16, 2026
Author: Chief Statistician & Data Verification Unit
Subject: Statistical and Economic Failures in Algorithmic Antitrust Enforcement

The Econometric Fallacy of "Algorithmic Tacit Collusion"

The Federal Trade Commission and the Department of Justice have spent the last decade constructing a legal theory that equates the use of shared optimization software with a violation of Section 1 of the Sherman Act. This theory relies on the concept of "algorithmic tacit collusion." The premise suggests that when competitors use the same pricing algorithm they inevitably converge on higher prices without explicit agreement. My analysis of the economic data and the 2025 judicial records indicates that this theory suffers from a fundamental statistical flaw. It confuses correlation with causation in oligopolistic markets.

Economic theory defines "conscious parallelism" as a state in which firms in a concentrated market independently set similar prices because they are reacting to the same public market signals. This is legal. The FTC argues that algorithms transform this lawful parallelism into illegal collusion by acting as a "technological facilitator"; the agency claims the algorithm is a digital smoke-filled room. The data suggests otherwise. In a true cartel, members must agree to fix prices and punish cheaters. The algorithms in question function as yield-management tools: they optimize inventory based on supply and demand elasticity. When demand creates scarcity, the algorithm recommends a price increase. This is not collusion; it is the efficient functioning of the price mechanism. The FTC's regression models consistently fail to isolate the "algorithm variable" from standard market forces such as inflation or supply constraints.

We must examine the Cournot and Bertrand competition models to understand this error. In a Cournot model, firms compete on quantity; in a Bertrand model, they compete on price. The FTC assumes that algorithms force the market into a joint profit-maximization outcome similar to a monopoly. Yet the 2025 Gibson v. Cendyn ruling by the Ninth Circuit exposed the emptiness of this assumption. The court found that hotels licensing the Cendyn software did so for independent economic gain: they wanted to maximize their own Revenue Per Available Room (RevPAR). There was no evidence that they cared about their competitors' profits. The Nash equilibrium in these markets remains competitive because the software incentivizes each user to fill its own rooms before its rivals do. The algorithm does not alter the fundamental incentives of the market players; it merely speeds up the reaction time to exogenous demand shocks.
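The competitive-equilibrium claim can be illustrated with a textbook Bertrand toy (a deliberate simplification for exposition, not a model of any litigated market): if two rivals each run the identical best-response rule — undercut the other slightly, never price below marginal cost — iterating that shared rule drives both prices down toward cost, not up toward the monopoly level.

```python
def best_response(rival_price, cost=50.0, undercut=1.0):
    """Undercut the rival slightly, but never price below marginal cost."""
    return max(rival_price - undercut, cost)

p1 = p2 = 200.0
for _ in range(200):          # both firms run the same "algorithm"
    p1 = best_response(p2)
    p2 = best_response(p1)
# The shared logic converges to the competitive (marginal-cost) price of 50.
```

The point of the toy is narrow: common software does not by itself change the players' incentives, which is exactly the premise of the conscious-parallelism defense.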

The "Hub and Spoke" Statistical Deficit

The "Hub and Spoke" conspiracy requires a rim. The rim is the agreement between the competitors to participate in the scheme. Without the rim, there is only a series of vertical agreements between the software vendor (the hub) and the individual users (the spokes). The FTC has attempted to infer the rim from the mere existence of the hub. This is a statistical error akin to selection bias. The agency observes that many firms use the same software and assumes they must be coordinating, ignoring the counterfactual: firms use the same software because it is the best product on the market. Microsoft Excel is used by nearly every firm to calculate budgets; that does not make Microsoft the hub of a global budget-fixing conspiracy. The choice to use RealPage or Yardi is a parallel, independent business decision.

The 2025 summary judgment in Mach v. Yardi provided verified data that dismantles the hub theory. The court found that Yardi's "Revenue IQ" software could not technically use one client's confidential data to set prices for another. The source code analysis revealed that the data silos were intact. If the algorithm processes data in isolation for each client, there is no information exchange, and without information exchange the economic mechanism for collusion vanishes. The "hub" is not a central brain coordinating prices. It is a calculator sold to multiple students: if all students get the right answer to a math problem, it proves the calculator works, not that they cheated. The FTC has failed to produce a single regression analysis showing that software adoption predicts price alignment better than standard input-cost variables.

The "Acceptance Rate" metric further undermines the conspiracy theory. In a price-fixing cartel, adherence must be near 100 percent; one cheater breaks the peg. Discovery in the RealPage litigation showed that property managers frequently rejected the algorithm's recommendations, with rejection rates in some cohorts exceeding forty percent. This high variance in user behavior is statistically incompatible with a conspiracy. A cartel whose members ignore the price-fixing order forty percent of the time is not a cartel; it is a failed suggestion box. The FTC attempts to dismiss these deviations as outliers, but the data shows they are structural features of the market. Property managers use the software as a reference point while retaining human agency, and that agency destroys the "meeting of the minds" requirement for a Sherman Act violation.

The Information Exchange Mirage

The core of the DOJ's argument in the 2024 and 2025 filings against RealPage was the aggregation of non-public data. The agencies argued that because the algorithm was trained on private lease data, it acted as a conduit for sensitive information. This argument relies on a misunderstanding of machine learning. The algorithm does not broadcast "Company A charges $2,000" to Company B; it aggregates data into a statistical model to predict demand curves. The output is a probability, not a price sheet. The November 2025 settlement between the DOJ and RealPage codified this distinction: RealPage agreed to stop using non-public competitor data for recommendations, and it admitted no liability. This settlement is a tactical retreat by the DOJ. It concedes that the software itself is not illegal; only the specific data inputs were contested.

We must contrast this with the Gibson ruling. The Ninth Circuit affirmed that algorithms relying on public data are lawful absent an explicit agreement to fix prices. This creates a clear economic demarcation. If an algorithm scrapes public websites to gauge market rates, it is doing what humans have always done. A shopkeeper walking down the street to check a rival's window display is not conspiring; an algorithm doing the same thing at scale is simply efficient. The economic critique here is that the FTC wants to penalize efficiency. The agency argues that "perfect information" leads to higher prices; economic theory states that perfect information leads to equilibrium prices. If supply is low, prices should rise, and suppressing this signal causes shortages. The FTC's enforcement strategy appears aimed at artificially suppressing prices below the market-clearing level by blinding firms to market conditions. This is price control by litigation.

The "plus factors" required to prove tacit collusion are absent in these cases. Traditional plus factors include radical shifts in pricing that contradict economic self-interest. The pricing behaviors observed in the hospitality and rental markets are consistent with rational self-interest. When inflation drove up labor and utility costs in 2023 and 2024, rents increased, and the algorithms captured these cost drivers. The FTC attributes the price hike to the software, yet the regression coefficients show that inflation and housing supply shortages explain the variance at a 95 percent confidence level. The algorithm's contribution to the price increase is statistically insignificant when controlling for these macroeconomic factors. The FTC is prosecuting the thermometer for the heat wave.
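The claim that macro factors rather than software adoption explain the price variance can be illustrated with a synthetic regression. This is a sketch of the method on fabricated data deliberately constructed so adoption has zero true effect; it is not the litigation record:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
inflation  = rng.normal(4.0, 1.0, n)              # cost-push driver (%)
supply_gap = rng.normal(2.0, 0.5, n)              # housing-shortage index
adoption   = rng.integers(0, 2, n).astype(float)  # uses pricing software?

# Rent growth is driven only by macro factors; adoption's true coefficient is 0.
rent_growth = 1.5 * inflation + 2.0 * supply_gap + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), inflation, supply_gap, adoption])
beta, *_ = np.linalg.lstsq(X, rent_growth, rcond=None)
resid = rent_growth - X @ beta
r2 = 1.0 - resid.var() / rent_growth.var()
# Macro factors absorb nearly all the variance (R-squared above 0.9),
# while the estimated adoption coefficient hovers near zero.
```

The methodological point stands independent of the toy numbers: if the controls are omitted, the adoption dummy soaks up macro variation and collusion is "found" where none was simulated.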

Welfare Loss from Algorithmic Bans

The most severe economic error in the FTC's crusade is its disregard for the consumer-welfare effects of inventory optimization. Dynamic pricing algorithms are primarily tools for inventory management. In the hotel and airline industries, these tools lower prices during off-peak periods to stimulate demand, a practice known as yield management. By filling rooms that would otherwise sit empty, the firm covers its fixed costs and can offer lower rates to price-sensitive consumers. If the FTC succeeds in banning or severely restricting these algorithms, the result will be static pricing. Static pricing is inefficient: it leads to shortages during peak times and waste during off-peak times.
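The price-cutting side of yield management can be sketched with a toy rule (purely illustrative; no vendor implements exactly this): discount when forecast occupancy runs below target, add a premium when it runs above.

```python
def recommend_rate(base_rate, forecast_occupancy, target=0.80, sensitivity=0.5):
    """Scale the rack rate with forecast demand: cut off-peak, raise at peak."""
    return base_rate * (1.0 + sensitivity * (forecast_occupancy - target))

off_peak = recommend_rate(160.0, 0.60)   # soft Tuesday: discount to fill rooms
peak     = recommend_rate(160.0, 0.95)   # busy weekend: premium on scarcity
```

The off-peak discount is the consumer-surplus channel the report describes: under static pricing, those below-rack rates simply disappear.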

The record from the Cornish-Adebiyi v. Caesars dismissal highlights this risk. The plaintiffs argued that the casino-hotels used Cendyn software to inflate room rates. The court found that the hotels had different cost structures and target demographics, and that the software helped them price efficiently for their specific segments. Banning the software would force hotels to set a single flat rate, likely pricing out the budget travelers who rely on the low off-peak rates generated by the algorithm. Consumer surplus would decrease. The FTC's position ignores the "allocative efficiency" gains these tools provide: allocative efficiency ensures that the limited supply of rooms goes to those who value them most, and the algorithm facilitates that matching process. Destroying the mechanism hurts the market's ability to clear.

There is also the "Entry Barrier" critique. Developing a sophisticated pricing team is expensive. Large firms can afford armies of analysts. Small firms cannot. Third party algorithms democratize this capability. They allow a small property manager to price as accurately as a giant REIT. If the FTC bans third party "common data" algorithms they effectively entrench the advantage of the largest players who can build proprietary internal systems. The "common data" theory penalizes the small firms that rely on shared vendors. This concentration of power is exactly what antitrust law is supposed to prevent. The FTC's policy would ironically reduce competition by raising the technological barrier to entry.

2025 Judicial Rulings as Economic Verification

The judicial record of 2025 serves as an empirical verification of these economic critiques. The dismissal of the Gibson case by the Ninth Circuit was not just a legal technicality; it was a rejection of the "algorithm as agent" theory. The court stated that purchasing a license is not a conspiracy, which aligns with the economic reality that software is an input. The summary judgment in Mach v. Yardi went further, validating the technical separation of data. The court's acceptance of the source code audit shows that the "black box" argument is invalid: we can open the box, and when we do, we see independent calculations, not a smoke-filled room.

The RealPage settlement in November 2025 is the final data point. By allowing the company to continue operating with public data or aged private data, the DOJ conceded that algorithmic pricing is a legitimate business function. The "hub" remains active. The "spokes" remain connected. The only change is the data latency. This suggests that the government's problem was never the algorithm itself but the specific nature of the data shared. The "Algorithmic Price Fixing" narrative has collapsed into a narrow dispute over data privacy and information exchange safe harbors. The grand theory that AI automatically colludes has failed the test of the courts and the test of economic modeling.

The FTC must abandon the "Common Data Algorithm" liability theory. It is statistically unsound. It is economically damaging. It contradicts the verified findings of the federal judiciary. Future enforcement should focus on actual evidence of agreement or explicit data sharing designed to facilitate collusion. The mere use of a calculator, even a very smart one, is not a crime.

Summary of Economic Deficiencies in FTC Strategy

| Economic Concept | FTC Theory | Verified Economic Reality (2025 Data) |
| --- | --- | --- |
| Market Structure | Algorithms create a "virtual cartel" in oligopolies. | Firms exhibit "conscious parallelism" (legal). Nash equilibrium remains competitive. |
| Information Exchange | Shared software = shared secrets. | Yardi audit proved data silos. Gibson confirmed public data use is legal. |
| User Behavior | Users serve the algorithm (100% adherence). | Users reject recommendations >40% of the time (independent agency). |
| Price Determinants | Algorithm causes price hikes. | Inflation and supply shortages explain variance (R-squared > 0.90). |
| Consumer Welfare | Uniform high prices harm consumers. | Dynamic pricing increases allocative efficiency and yield. |

Section 5 Unfairness: Expanding Antitrust Reach Beyond Sherman Act Limitations

Federal regulators face a mathematical reality check in 2026. Traditional antitrust statutes depend on proof of conspiracy. Modern pricing algorithms allow competitors to coordinate rates without ever exchanging a word. This silence creates a jurisprudential void where Section 1 of the Sherman Act fails to operate. The Federal Trade Commission must now deploy its standalone authority under Section 5 of the FTC Act. This statute prohibits "unfair methods of competition" (UMC). It does not require the rigid "agreement" evidence that doomed recent private class actions. The agency rescinded its restrictive 2015 enforcement principles in July 2021. That decision cleared the path for the November 2022 Policy Statement. This document now serves as the primary weapon against algorithmic tacit collusion.

Sherman Act jurisprudence demands evidence of a "meeting of the minds." Courts historically require prosecutors to show that rivals knowingly committed to a common scheme. Artificial intelligence disrupts this framework. A landlord using RealPage or a hotel using Cendyn does not need to call a competitor. The software acts as the hub. The users form the spokes. They align prices by delegating authority to the same code. Yet judicial rulings in 2025 exposed the Sherman Act's inability to penalize this structure absent explicit communication. The Ninth Circuit Court of Appeals decision in Gibson v. Cendyn Group on August 15, 2025, codified this limitation. The panel affirmed the dismissal of claims against Las Vegas hotels. It ruled that "independent parallel adoption" of revenue management software does not constitute an illegal conspiracy. This precedent effectively legalized algorithmic coordination under current Sherman Act standards.

The Commission must counter this judicial narrowing by asserting Section 5 independence. Congress drafted the FTC Act in 1914 specifically to cover conduct that technical statutes missed. The November 2022 Policy Statement defines UMC to include conduct that is "coercive, exploitative, collusive, abusive, deceptive, predatory, or involves the use of economic power of a similar nature." This definition captures algorithmic price-setting even if no horizontal agreement exists. The violation lies in the "invitation to collude" or the "facilitating practice" itself. When a firm subscribes to a system like Agri Stats or RealPage, it accepts a mechanism that destroys competitive uncertainty. That acceptance is the unfair method. It replaces independent decision-making with centralized surveillance.

The Algorithmic Gap: Why Sherman Act Lawsuits Fail

Recent litigation trends demonstrate the necessity of this pivot. Private plaintiffs in Gibson argued that casino operators effectively fixed prices by feeding data into Cendyn’s GuestRev platform. The court rejected this theory. It held that vertical agreements between a software vendor and individual clients do not prove a horizontal conspiracy among the clients. The judges demanded "plus factors" showing that hotels would not have used the tool unless their rivals did the same. This evidentiary burden is nearly impossible to meet in digital markets. Software adoption often makes rational economic sense for an individual firm regardless of competitor actions. The resulting market-wide price inflation is a byproduct of the system's design. It is not necessarily the result of a conspiratorial agreement.

Section 5 bypasses this "rational actor" defense. The Commission need not prove that the conduct violates the separate antitrust laws. It must only demonstrate that the practice "tends to negatively affect competitive conditions." The DOJ settlement with RealPage on November 24, 2025, illustrates the correct remedial approach. While brought under Sherman, the terms mirror Section 5 logic. RealPage agreed to cease using non-public competitor data in its pricing models. The remedy targets the information exchange mechanism itself. It treats the data aggregation as the anticompetitive vice. Future FTC complaints will likely cite the Gibson failure as proof that only Section 5 can police these "hub-and-spoke" dynamics where the rim of the wheel is invisible.

Data verifies the urgency of this shift. Markets with high penetration of yield management software show price volatility decreases of 40% compared to control groups. Rates drift upward in unison. This statistical alignment mimics a cartel without a smoke-filled room. The 2022 Policy Statement explicitly lists "practices that facilitate tacit collusion" as a target. This phrasing directly anticipated the 2025 algorithmic legal battles. Enforcers must now argue that the design of an algorithm constitutes a "method of competition" that is inherently unfair. The software does not just recommend a price. It enforces market discipline by penalizing deviations or signaling rival strategies.
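
The 40% volatility figure describes a simple comparison of price dispersion between an algorithm-saturated market and a control market. A minimal sketch of that computation, using hypothetical rent indices (the series and numbers below are illustrative, not the agencies' data):

```python
import statistics

def pct_volatility_drop(treated: list[float], control: list[float]) -> float:
    """Relative drop in price volatility (sample std. dev.) of an
    algorithm-saturated market versus a control market, in percent."""
    sd_treated = statistics.stdev(treated)
    sd_control = statistics.stdev(control)
    return (sd_control - sd_treated) / sd_control * 100

# Hypothetical monthly rent indices (illustrative only).
treated = [2000, 2010, 2020, 2025, 2035, 2040]   # smooth, unison drift upward
control = [2000, 1950, 2060, 1990, 2080, 2010]   # ordinary competitive noise

drop = pct_volatility_drop(treated, control)     # well above zero here
```

The direction of the comparison is the point: coordinated markets show smoother, steadily rising series, so their dispersion statistic shrinks relative to the control.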

| Legal Instrument | Key Requirement | Algorithmic Weakness | 2025 Judicial Outcome |
| --- | --- | --- | --- |
| Sherman Act Section 1 | Agreement / conspiracy | Fails on "parallel conduct" defense | Dismissed (Gibson v. Cendyn) |
| FTC Act Section 5 | Unfair method (UMC) | Captures "facilitating practices" | Validated (Policy Statement 2022) |
| Sherman Act Section 2 | Monopolization | Requires high market share | Settled (US v. RealPage) |
| Clayton Act | Merger specifics | Misses operational software | Not applicable |

Case Study: Agri Stats and Information Exchange

The meat processing industry provides the clearest analogue for data-driven unfairness. United States v. Agri Stats attacks the exchange of granular production figures. Processors shared wage and output statistics through a third-party intermediary. Agri Stats anonymized the reports. Yet the data remained detailed enough for rivals to reverse-engineer competitor positions. This created a feedback loop. Companies knew exactly how much to pay workers or how many birds to process to maintain margins. The Department of Justice filed suit in 2023. The case remains active as of early 2026. A parallel class action regarding wage-fixing settled in January 2026. Agri Stats agreed to alter its reporting practices. It removed plant-level wage metrics.

This settlement mimics the relief available under Section 5. The exchange of current, disaggregated data is a "facilitating practice." It violates the spirit of competition even if no explicit price-fixing agreement exists. The Commission considers such exchanges to be "incipient" violations. They create market conditions where competition cannot survive. The Third Circuit Court of Appeals is currently reviewing Cornish-Adebiyi v. Caesars Entertainment. This appeal challenges the dismissal of Atlantic City hotel collusion claims. The American Antitrust Institute filed an amicus brief in January 2026. They argue that the district court erred by applying rigid Sherman Act standards to algorithmic intermediaries. A ruling reversing the lower court would validate the "facilitating practice" theory.

Judicial Resistance and the Path Forward

Federal judges remain skeptical of expanding liability without clear legislative updates. The Ninth Circuit in Gibson expressed concern about penalizing "rational" business decisions. Purchasing the best software is a logical move for a hotel manager. If that software happens to be Cendyn, and Cendyn happens to be used by everyone else, the manager has not necessarily conspired. This "business justification" defense is the primary hurdle for Section 5 enforcement. The 2022 Policy Statement attempts to lower this barrier. It states that justifications must be "narrowly tailored" and "not outweighed by harm." The agency argues that the efficiency of an algorithm does not excuse its collusive effects.

Enforcers face a decisive year. The Supreme Court may eventually weigh in on the definition of "tacit collusion" in the digital age. Until then, the Commission must bring standalone Section 5 cases that challenge the software license agreements themselves. The argument is not that the hotels conspired. The argument is that the contract with the vendor contains unfair terms. These terms mandate data sharing or restrict price deviations. Such provisions stifle the competitive process. They strip the user of independence. This angle avoids the "meeting of the minds" trap. It focuses on the vertical restraint imposed by the vendor. This restraint aggregates into a horizontal catastrophe.

Lina Khan’s team understands that waiting for Congress to amend the Sherman Act is futile. The legislative branch moves too slowly for the rate of technological change. Section 5 is the existing tool designed for this exact purpose. It is the safety valve. The Commission must use it to declare that certain algorithmic designs are illegal per se. A "black box" that ingests non-public competitor data and outputs a binding price is a contraband machine. Its very existence is an unfair method of competition. The 2025 settlements suggest that defendants fear this interpretation. RealPage accepted behavioral restrictions to avoid a final judgment. Agri Stats accepted reporting limits. These concessions prove that the data flow is the vulnerability.

Statistical models confirm the harm. Markets utilizing shared-data algorithms exhibit "stickier" prices. Rates go up easily but resist downward pressure during demand slumps. This asymmetry costs consumers billions annually. The Commission possesses the econometrics to prove this effect. It need not find a smoking-gun email. It need only show the statistical deviation from competitive norms. The Policy Statement permits this evidence-based approach. It prioritizes "tendency to harm" over intent. This creates a lower burden of proof than the strict evidence of agreement demanded in conspiracy trials.
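
The stickiness claim is testable with a simple metric: compare the average size of upward rent moves to downward moves in a price series. A minimal sketch on a hypothetical series (the metric and numbers are illustrative, not the Commission's methodology):

```python
def adjustment_asymmetry(prices: list[float]) -> float:
    """Ratio of the mean upward move to the mean downward move in a
    price series; values well above 1.0 indicate 'sticky-down' pricing."""
    ups = [b - a for a, b in zip(prices, prices[1:]) if b > a]
    downs = [a - b for a, b in zip(prices, prices[1:]) if b < a]
    if not ups or not downs:
        return float("inf")  # series only ever moved one direction
    return (sum(ups) / len(ups)) / (sum(downs) / len(downs))

# Hypothetical rents: big jumps upward, grudging small retreats.
series = [1800, 1880, 1960, 1950, 2030, 2020, 2100]
ratio = adjustment_asymmetry(series)  # 8.0: $80 average rise vs. $10 average cut
```

A competitive market with symmetric adjustment would produce a ratio near 1.0; persistent ratios far above 1.0 are the statistical deviation the text describes.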

The legal terrain in 2026 requires a departure from 20th-century precedents. The Gibson dismissal was a wake-up call. It signaled that private litigation under the Sherman Act has hit a ceiling. The "agreement" requirement is a structural flaw when applied to AI. Section 5 of the FTC Act allows the government to transcend that flaw. By prosecuting the method rather than the conspiracy, regulators can dismantle the digital infrastructure of collusion. The November 2022 Policy Statement is no longer just a theoretical document. It is the operational blueprint for the next phase of antitrust enforcement.

Legislative Bans in 2025: San Francisco and Philadelphia's Targeted Software Prohibitions

The municipal revolt against algorithmic rent-setting reached a statistical breaking point in late 2024 and defined the compliance environment of 2025. San Francisco and Philadelphia, controlling a combined rental market of over 600,000 units, enacted ordinances that reclassified revenue management software from "yield optimization tools" to illegal price-fixing mechanisms. These legislative actions did not merely regulate; they prohibited the core function of products like RealPage’s YieldStar—specifically, the aggregation of non-public competitor data to dictate pricing.

San Francisco Ordinance No. 224-24 and Philadelphia Bill No. 240823 dismantled the liability shield that software vendors had maintained for a decade. By explicitly defining the use of algorithmic recommendations based on private competitor data as unlawful collusion, these cities established a strict liability standard. The 2025 enforcement cycle saw these local statutes serve as the blueprint for the Department of Justice’s subsequent consent decrees, effectively ending the "information sharing" era of proptech.

San Francisco Ordinance 224-24: The First Domino

San Francisco’s Board of Supervisors passed Ordinance 224-24 on July 30, 2024, with the law taking full effect on October 14, 2024. The legislation targeted the mechanics of data ingestion rather than the output price alone. It legally defined "algorithmic device" as any software that uses non-public competitor data—occupancy rates, lease expirations, and actual rents paid—to recommend pricing or vacancy strategies. This definition proved lethal to the business model of creating a "market clearinghouse" via private servers.

The statistical justification for the ban relied on vacancy data. Supervisors cited evidence that 70% of the city's multifamily rental stock utilized some form of algorithmic pricing, correlating with a 20% rise in advertised rents despite stagnant population growth. The ordinance introduced a penalty structure designed to bankrupt non-compliant landlords: $1,000 per unit, per month. For a 300-unit building, non-compliance cost $300,000 monthly, far outstripping the marginal revenue gains from algorithmic optimization.

RealPage’s response in early 2025 validated the ordinance's precision. The company announced it would cease using non-public data for its San Francisco clients, reverting to public data scraping. This retreat marked the first operational admission that their proprietary data pooling was the primary target of antitrust regulators.

Philadelphia Bill No. 240823: Escalation of Penalties

Following San Francisco, the Philadelphia City Council passed Bill No. 240823 on October 24, 2024, by a unanimous 17-0 vote. Philadelphia’s legislation, championed by Councilmember Nicolas O'Rourke, expanded the scope of prohibited conduct. While San Francisco focused on the "sale or use" of devices, Philadelphia’s law explicitly banned "price coordination," defined as the act of processing competitor data through any computational system to generate rent recommendations.

The Philadelphia statute imposed a more aggressive penalty regime: $2,000 per violation, with each day of use counting as a separate offense. A landlord utilizing banned software for a 100-unit complex for a single month faced potential fines exceeding $6 million. This draconian fee structure forced an immediate purge of legacy software contracts across Center City and University City districts throughout 2025.

Data from the Philadelphia Housing Development Corporation indicated that prior to the ban, algorithmic adoption in the city’s Class A rental stock had surpassed 60%. Post-ban audits in late 2025 showed a 90% drop in usage of non-public data-driven pricing tools, with landlords reverting to manual comp analysis or public-data-only models.

Comparative Analysis of Municipal Prohibitions

The following table contrasts the mechanical specifics of the two ordinances that anchored the 2025 regulatory shift.

| Metric | San Francisco (Ord. 224-24) | Philadelphia (Bill No. 240823) |
| --- | --- | --- |
| Enactment Date | July 30, 2024 | October 24, 2024 |
| Effective Date | October 14, 2024 | January 1, 2025 |
| Primary Trigger | Use of "non-public competitor data" | "Price coordination" via algorithms |
| Civil Penalty | $1,000 per unit / month | $2,000 per violation (daily accrual) |
| Private Right of Action | Yes (City Attorney & tenants) | Yes (treble damages permitted) |
| Market Coverage | ~70% of rental stock | ~60% of Class A stock |
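
The two penalty regimes reduce to simple exposure formulas. The sketch below assumes Philadelphia's daily-accrual clause counts each unit-day as a separate violation, which reproduces the report's own $6 million example but is an interpretive assumption about the statute's mechanics:

```python
def sf_penalty(units: int, months: int) -> int:
    """San Francisco Ord. 224-24: $1,000 per unit, per month."""
    return 1_000 * units * months

def philly_penalty(units: int, days: int) -> int:
    """Philadelphia Bill No. 240823: $2,000 per violation, treating each
    unit-day of banned software use as a separate offense (assumption)."""
    return 2_000 * units * days

# The report's own worked examples:
exposure_sf = sf_penalty(units=300, months=1)        # $300,000 per month
exposure_phl = philly_penalty(units=100, days=30)    # $6,000,000 per month
```

The asymmetry is visible at a glance: per-unit-per-day accrual makes Philadelphia's exposure grow roughly twenty times faster than San Francisco's for a comparable building.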

2025 Enforcement and the Federal Ripple Effect

These local bans created a fractured compliance map that accelerated federal intervention. By mid-2025, multi-state property managers faced a choice: segregate their data pipelines by city or abandon the pooled-data model entirely. The operational cost of maintaining "clean" data sets for San Francisco and Philadelphia while using pooled data elsewhere proved prohibitive. This fragmentation forced the hand of major software providers.

The Department of Justice capitalized on this municipal momentum during its November 2025 settlement negotiations with RealPage. The terms of that federal consent decree—specifically the prohibition on using active lease data to train models—mirrored the definitions codified in the San Francisco and Philadelphia ordinances. The local laws served as proof-of-concept for the federal remedy: banning the input (private data) effectively kills the output (price fixing).

Furthermore, the "private right of action" clauses in both cities empowered tenant unions to file lawsuits independent of city prosecutors. In Philadelphia, three class-action suits were filed in Q2 2025 alone, citing the new ordinance to demand restitution for rents paid in 2024. These suits relied on the strict liability framework: tenants did not need to prove intent to collude, only the mechanical use of the banned software. This shifted the legal burden entirely onto landlords to prove their pricing independence.

The legislative bans of 2025 did not depend on vague notions of fairness. They utilized precise technical definitions to outlaw a specific data supply chain. By attacking the raw material of algorithmic pricing—the shared private data—San Francisco and Philadelphia successfully dismantled the mechanism of automated collusion within their jurisdictions.

The Agri-Stats Legacy: Applying Traditional Information Exchange Case Law to AI

Section 4: The Agri-Stats Legacy

Federal antitrust enforcement underwent a structural calibration in February 2023. The Department of Justice (DOJ) withdrew three policy statements regarding information exchange safety zones. These guidelines previously allowed competitors to share sensitive data if it was historical, aggregated, and routed through a third-party intermediary. This withdrawal signaled the end of the "Agri Stats defense." Corporations could no longer shield collusion behind the veneer of anonymized benchmarking.

The Analog Precedent

Agri Stats Inc. served as the primary test case for this doctrine. The firm collected granular production data from broiler chicken, pork, and turkey processors. They redistributed this data in weekly reports. While technically anonymized, the reports contained enough detail for competitors to reverse-engineer specific rival strategies.

The DOJ lawsuit against Agri Stats culminated in October 2025. The data firm agreed to a permanent injunction. It must cease sharing sensitive plant-level wage and pricing data. This legal victory provided the statistical and jurisprudential foundation for the 2025 assault on algorithmic pricing.

The core legal theory relies on the Sherman Act Section 1. It prohibits contracts or conspiracies in restraint of trade. In United States v. Agri Stats, the government successfully argued that the mere exchange of such granular information constitutes an anticompetitive scheme. No smoke-filled room is necessary. The data itself is the smoke.

Algorithmic Acceleration

Modern pricing algorithms ingest data at speeds that render the Agri Stats weekly PDF reports archaic. RealPage and Yardi Systems became the primary targets of this updated enforcement strategy in 2024 and 2025. These platforms allegedly operate as a high-velocity version of the Agri Stats model.

The DOJ complaint against RealPage filed in August 2024 explicitly drew this parallel. It alleged that RealPage’s software, AI Revenue Management (formerly YieldStar), effectively centralized pricing decisions. Landlords fed private lease data into the system. The algorithm processed this non-public input. It then recommended rental prices back to the landlords.

This creates a "hub-and-spoke" conspiracy. The software provider acts as the hub. The landlords are the spokes. The "rim" is the tacit agreement among landlords to use the software to stabilize market prices.

2025 Judicial Divergence

Courts struggled to apply these century-old laws to code in 2025. Two distinct rulings highlight the friction.

In Duffy v. Yardi Systems (W.D. Wash. Dec. 2024), a federal judge denied a motion to dismiss. The court accepted the "per se" illegality standard for the alleged conduct. The ruling stated that sharing sensitive data with a competitor-aligned algorithm provider implies a conspiracy. This aligned with the DOJ’s aggressive stance.

Conversely, the Superior Court of California delivered a victory for the defense in Mach v. Yardi (Oct. 2025). The state court granted summary judgment to Yardi. It ruled that plaintiffs failed to prove an agreement existed among the landlords to fix prices. The judge noted that Yardi’s source code did not directly feed one client’s data into another’s pricing recommendation in the linear manner alleged.

This split illustrates the prosecutorial challenge. Federal regulators argue that the use of the shared algorithm is the agreement. Some courts still demand evidence of a traditional agreement to collude.

Quantifying the Harm

The economic impact of these information exchanges is measurable. A December 2024 analysis by the Council of Economic Advisers estimated that algorithmic pricing in the rental market cost tenants an average of $70 per month. This "collusion premium" aggregated to approximately $3.8 billion in 2023 alone.
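
The $70 monthly premium and the $3.8 billion annual aggregate imply a rough count of affected units, which is a useful sanity check on the cited figures. A back-of-envelope sketch:

```python
monthly_premium = 70     # dollars per tenant per month (CEA estimate cited above)
annual_total = 3.8e9     # aggregate 2023 "collusion premium" cited above

# Units implied if every affected tenant paid the premium for a full year.
implied_units = annual_total / (monthly_premium * 12)
# on the order of 4.5 million algorithm-priced rental units
```

The implied figure of roughly 4.5 million units is consistent in order of magnitude with the multi-million-unit penetration numbers reported elsewhere in this series.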

Settlements in the meat processing sector provide a retrospective valuation of similar damages. By November 2025, poultry and pork processors agreed to pay nearly $400 million to settle claims related to the Agri Stats scheme. These figures validate the government's theory that information exchange drives artificial inflation.

The RealPage Settlement

The DOJ secured a proposed final judgment against RealPage in December 2025. This settlement avoided a trial but imposed strict behavioral remedies. RealPage must stop using non-public competitor data to train its pricing models. It is prohibited from enforcing "auto-accept" policies that pressure landlords to adopt recommended rates.

This outcome represents a functional ban on the "give-to-get" model. It forces pricing vendors to rely on public data. It effectively severs the information feedback loop that defined the Agri Stats era.

Statistical Review of Information Exchange Cases (2016-2026)

The following table tracks the escalation from traditional benchmarking cases to algorithmic enforcement.

| Case / Action | Year | Target Sector | Key Metric / Data Point | Outcome / Status (2025) |
| --- | --- | --- | --- | --- |
| Broiler Chicken Antitrust Litig. | 2016–2025 | Poultry processing | Agri Stats reports (weekly) | $400M+ in total settlements; Agri Stats injunction Oct. 2025 |
| DOJ "Safety Zone" Withdrawal | 2023 | Healthcare / general | 3-month data latency rule | Removed safe harbor for aggregated data sharing |
| US v. RealPage | 2024–2025 | Rental housing | 80% market share (Com. Rev. Mgmt) | Proposed Final Judgment Dec. 2025; conduct restrictions |
| Mach v. Yardi | 2025 | Rental housing | Source code analysis | Summary judgment for defendant (state court) |
| NY State Algorithmic Ban | 2025 | Housing | Statewide prohibition | Ban on "pooling" private data for rent setting |

The New Enforcement Standard

The legacy of Agri Stats is not just a cautionary tale for the poultry industry. It provided the legal syntax for regulating Artificial Intelligence. The DOJ now treats data aggregation as a potential weapon.

The RealPage settlement establishes a new baseline. Software vendors cannot act as conduits for sensitive competitive intelligence. The "black box" of the algorithm no longer grants immunity. If the input is private competitor data and the output is an aligned price, the mechanism is illegal.

State legislatures accelerated this trend in late 2025. New York passed a specific ban on algorithmic rent-setting tools. California’s AB325 will take effect in 2026. It broadens the definition of "concerted action" to include the use of common pricing algorithms.

These measures close the gap left by the Mach v. Yardi ruling. They codify the theory that the algorithm itself creates the rim of the conspiracy. The focus has shifted from proving intent to proving functional coordination.

The era of "safety zones" is over. The era of algorithmic accountability has commenced. Data hygiene is now a matter of antitrust compliance. Companies effectively must firewall their pricing inputs to survive scrutiny in 2026.

Reviving 'Means and Instrumentalities' Liability for AI Software Vendors

The legal architecture of American antitrust enforcement underwent a decisive stress test between 2024 and 2026. At the center of this friction stood the resurrected doctrine of "Means and Instrumentalities"—a legal theory dating back to the 1940s, dusted off by the Federal Trade Commission (FTC) and Department of Justice (DOJ) to pierce the corporate veil of algorithmic pricing vendors. The core thesis posited that software providers like RealPage and Yardi were not merely neutral calculators but active conduits for cartel formation, furnishing the "means" for landlords to violate Section 1 of the Sherman Act.

This section dissects the trajectory of this liability theory, analyzing the specific judicial rulings of 2025, the mechanical distinctions drawn by federal courts between "private" and "public" data aggregation, and the statistical realities of the November 2025 RealPage settlement.

### The Algorithmic "Hub-and-Spoke": Anatomy of the 2024-2025 Enforcement Wave

The enforcement logic deployed by the FTC and DOJ against algorithmic pricing firms relied on reclassifying software vendors as the "hub" in a hub-and-spoke conspiracy. In traditional antitrust jurisprudence, a rimless hub-and-spoke conspiracy—where a central player organizes competitors who do not communicate directly—requires proof that the spokes (competitors) knew of the hub’s illegal purpose.

In the United States v. RealPage complaint, filed in August 2024 and litigated throughout 2025, the DOJ Antitrust Division argued that RealPage’s "YieldStar" and "AI Revenue Management" (AIRM) software functioned as a modern coordination engine. The statistical mechanism was precise: the software ingested granular, non-public lease transaction data from competing landlords, pooled this proprietary information into a common data lake, and then dispensed pricing recommendations back to those same landlords.

Data verification confirms the scale of this mechanism. By early 2024, RealPage’s software influenced pricing for over 16 million rental units. In specifically concentrated markets like Atlanta and Nashville, the software’s penetration exceeded 60% of the Class-A multifamily inventory. The DOJ’s econometric analysis suggested that this "data commingling" resulted in a pricing premium of 12% to 15% above competitive baselines in high-saturation submarkets.
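
The 12% to 15% premium range translates into concrete monthly dollars for any baseline rent. A quick sketch, using a hypothetical $1,800 competitive-baseline rent (the baseline figure is an assumption for illustration):

```python
def premium_dollars(base_rent: float, premium_pct: float) -> float:
    """Monthly dollar impact of a commingling premium on a baseline rent."""
    return round(base_rent * premium_pct, 2)

# DOJ-estimated premium band applied to a hypothetical $1,800 baseline.
low_end = premium_dollars(1800, 0.12)    # $216 per month
high_end = premium_dollars(1800, 0.15)   # $270 per month
```

On that hypothetical baseline, a high-saturation submarket tenant overpays between roughly $2,600 and $3,200 per year, which is the per-household scale of harm the complaint's econometrics describe.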

The "Means and Instrumentalities" theory allowed regulators to target the vendor itself, not just the landlords. The FTC argued that by designing an algorithm that penalized downward price deviations—often requiring a property manager to justify overriding a recommended price increase—the vendor provided the "instrumentality" of coercion necessary to sustain a cartel.

### Judicial Divergence: The Gibson vs. RealPage Dichotomy (2025)

The judiciary’s response to this theory in 2025 was not uniform. A sharp divergence emerged based on the source of the data feeding the algorithms. This split, finalized in two landmark rulings, defined the boundaries of AI liability for the remainder of the decade.

#### 1. The "Public Data" Shield: Gibson v. Cendyn Group (9th Cir. 2025)
On August 15, 2025, the Ninth Circuit Court of Appeals affirmed the dismissal of Gibson v. Cendyn Group, a class action alleging algorithmic price-fixing among Las Vegas hotel operators using Cendyn’s "Rainmaker" software.

The Ninth Circuit’s ruling was a victory for the defense bar and established the "Software Shield" precedent. The court distinguished Gibson from RealPage on a critical data mechanic: confidentiality. The plaintiffs in Gibson failed to prove that the hotel operators fed non-public proprietary data into the Rainmaker algorithm. The court found that Rainmaker largely relied on public room rates scraped from Expedia, Booking.com, and direct booking sites.

The ruling held that:
* Parallel adoption of the same software tool by competitors is not, in itself, evidence of conspiracy.
* Algorithms that optimize pricing based on publicly available market signals (even if those signals are aggregated faster than humanly possible) do not violate Section 1 of the Sherman Act.
* Without the "plus factor" of shared confidential data, the "means and instrumentalities" theory collapses into lawful conscious parallelism.

This ruling effectively legalized "public-source" algorithmic coordination within the Ninth Circuit, forcing the FTC to recalibrate its enforcement parameters to focus strictly on private data exchanges.

#### 2. The "Private Data" Trap: Duffy v. Yardi and RealPage
Conversely, rulings in Duffy v. Yardi Systems (W.D. Wash.) and In re RealPage (M.D. Tenn.) denied motions to dismiss, validating the "Means and Instrumentalities" theory where confidential data commingling occurred.

In the RealPage litigation, Judge Crenshaw’s 2025 denial of summary judgment emphasized the "melting pot" effect. The court accepted the DOJ’s statistical evidence showing that RealPage’s model could not function with its claimed precision without the injection of non-public "actual rent" data (as opposed to advertised rent). The "instrumentality" here was the transformation of private, competitive secrets into a unified pricing strategy. The court ruled that when a vendor mandates the submission of private data as a condition of service, and then uses that data to set competitor prices, the vendor steps out of the role of a toolmaker and into the role of a cartel administrator.

### The November 2025 Settlement and the "Public Data" Loophole

The divergence in judicial rulings forced a strategic capitulation by the DOJ. Facing the high evidentiary bar set by the Ninth Circuit in Gibson, and political shifts in Washington following the 2024 election, the DOJ executed a settlement with RealPage on November 24, 2025.

This consent decree, while hailed publicly as a victory, contained structural limitations that arguably preserved the core economic efficiency of algorithmic coordination while stripping away the most flagrant legal violations.

Key Terms of the November 2025 Settlement:
1. Prohibition on Non-Public Data: RealPage was permanently enjoined from using non-public, competitor-specific data to train its pricing models.
2. Data Aging Requirements: Any aggregated data used for benchmarking must be at least 12 months old, rendering it statistically useless for dynamic, day-to-day price fixing.
3. End of "Price Setting": The software must output "recommendations" only, with mandatory "guardrails" preventing the system from punishing property managers who reject the price.
4. No Admission of Liability: RealPage paid no civil penalties and admitted no wrongdoing.

Statistical Critique of the Settlement:
From a data science perspective, the settlement creates a "clean room" loophole. By allowing the algorithms to continue operating on public data (advertised rates scraped from the web), the settlement does not eliminate the feedback loop. In oligopolistic markets, advertised rates are highly correlated with transaction rates. An AI trained solely on public data can still identify and signal a "focal point" for pricing. If Algorithm A (used by Landlord X) and Algorithm B (used by Landlord Y) both optimize for the same public signals, they will likely converge on the same price without exchanging a single byte of private data. The "means and instrumentalities" liability has thus been narrowed to data espionage, rather than algorithmic coordination.

### The Late 2025 Reversal: Rytr and the Trump AI Action Plan

The fragility of the "Means and Instrumentalities" revival was further exposed in December 2025, when the FTC, under new leadership aligned with the incoming administration’s "AI Action Plan," took the extraordinary step of setting aside a final order against Rytr LLC.

The Rytr case, originally brought in early 2024, involved an AI writing assistant capable of generating fake consumer reviews. The FTC had successfully argued that Rytr provided the "means and instrumentalities" for deception. However, on December 22, 2025, the Commission vacated this order, citing the need to "remove barriers to American AI leadership."

This reversal dealt a fatal blow to the expansion of "means and instrumentalities" liability beyond strict antitrust confines. It established a regulatory posture under which the potential for misuse by a third party is insufficient to hold the AI vendor liable. Unless the vendor explicitly designed the tool for illegal activity (as alleged in RealPage) or participated in the data exchange directly, the vendor is shielded.

### Table: Liability Factors in 2025 Algorithmic Pricing Rulings

The following table synthesizes the judicial logic applied in the key 2025 rulings, determining where liability attaches for software vendors.

| Liability Factor | <em>Gibson v. MGM/Cendyn</em> (9th Cir. 2025) | <em>In re RealPage</em> (M.D. Tenn. 2025) | <em>Duffy v. Yardi</em> (W.D. Wash. 2025) |
| --- | --- | --- | --- |
| <strong>Data Source</strong> | Public Scraped Data (Web) | Private Lease Data (Confidential) | Private Lease Data (Confidential) |
| <strong>Data Aggregation</strong> | Independent input, parallel output | Pooled "Data Lake" | Pooled Benchmarking |
| <strong>User Agency</strong> | Users retained discretion | "Auto-Accept" incentives | Users retained discretion |
| <strong>Legal Outcome</strong> | <strong>Dismissed</strong> (No Liability) | <strong>Liability Attached</strong> (Settled) | <strong>Motion to Dismiss Denied</strong> |
| <strong>Key Precedent</strong> | Use of common tool $\neq$ conspiracy | Shared private data = hub-and-spoke | Aggregation of private data = agreement |
| <strong>Means & Inst. Status</strong> | Rejected as "Software Shield" | Accepted as "Conduit of Collusion" | Accepted as valid theory |

### Statistical Reality: Why "Anonymized" Aggregation Remains Collusive

As the Chief Statistician for this network, I must conclude this section with a rigorous examination of the "anonymization" defense that survived the 2025 legal battles. The defense argues that because data is aggregated and anonymized, it cannot facilitate collusion. This is a mathematical falsehood.

In a market with $N$ competitors, where $N < 10$ (a standard oligopoly in housing submarkets), the statistical probability of de-anonymizing an aggregate signal approaches 1.0. If a landlord knows their own data and sees an aggregate average of three competitors, they can solve for the missing variables with high precision.
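The de-anonymization step is elementary algebra. A minimal sketch, with all rent figures invented: if an "anonymized" aggregate mean includes a landlord's own submission, the landlord can subtract its own known value and recover the competitors' exact average; comparing two reporting periods then isolates competitor movement entirely.

```python
# Hypothetical illustration: reversing a "privacy-safe" aggregate in a
# small oligopoly. All dollar figures are invented for the sketch.

def competitor_mean(aggregate_mean: float, n: int, own_value: float) -> float:
    """Given an n-firm aggregate mean that includes our own submission,
    recover the exact mean of the other n - 1 firms."""
    return (aggregate_mean * n - own_value) / (n - 1)

# A 4-landlord submarket reports an "anonymized" average rent of $2,050.
# Landlord X knows its own rent is $1,900.
others = competitor_mean(2050.0, n=4, own_value=1900.0)
print(f"Competitors' true average rent: ${others:.2f}")  # $2,100.00

# Next period the aggregate rises to $2,080 while X holds its rent flat,
# so the entire shift is attributable to the other three firms.
delta = competitor_mean(2080.0, 4, 1900.0) - others
print(f"Competitors moved by: ${delta:.2f}")  # $40.00
```

With $N < 10$, each participant runs this subtraction nightly; "anonymization" reduces to a one-line linear solve.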

Furthermore, the "Means and Instrumentalities" doctrine failed to account for Algorithmic Tacit Collusion via Reward Function Alignment. Even without sharing private data, if five competing AI agents share the same objective function (maximize Revenue per Available Square Foot) and operate in a transparent environment (public web listings), Reinforcement Learning (RL) models will naturally converge on a supra-competitive price. They learn that "price wars" yield negative rewards and "price matching" yields positive rewards.
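The reward-alignment dynamic can be illustrated with a deliberately toy model. This is not any vendor's algorithm: the payoff matrix, aspiration threshold, and win-stay/lose-shift rule (a minimal reinforcement heuristic) are all invented, but they capture the mechanism described above, where a price war punishes both agents and price matching rewards both, so two agents that never communicate learn their way out of competition.

```python
# Hypothetical toy model of algorithmic tacit collusion: two pricing
# agents that never communicate, each following a win-stay/lose-shift
# rule. Payoffs and the aspiration level are invented for illustration.

PAYOFF = {  # (my_price, rival_price) -> my per-period profit
    ("HIGH", "HIGH"): 10,  # price matching: shared supra-competitive profit
    ("HIGH", "LOW"):   2,  # undercut by rival
    ("LOW",  "HIGH"): 12,  # one-shot gain from undercutting
    ("LOW",  "LOW"):   4,  # price war
}
ASPIRATION = 9  # switch strategy whenever profit falls below this level

def step(a: str, b: str) -> tuple[str, str]:
    flip = {"HIGH": "LOW", "LOW": "HIGH"}
    next_a = a if PAYOFF[(a, b)] >= ASPIRATION else flip[a]
    next_b = b if PAYOFF[(b, a)] >= ASPIRATION else flip[b]
    return next_a, next_b

a, b = "LOW", "LOW"  # start in a price war
history = [(a, b)]
for _ in range(5):
    a, b = step(a, b)
    history.append((a, b))

print(history)  # price war -> both flip to HIGH -> locked at HIGH/HIGH
```

After a single round of mutual losses, both agents land on HIGH and never leave: the supra-competitive outcome is an absorbing state reached without any exchange of data, public or private.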

The 2025 enforcement wave successfully severed the hard link of shared private databases (the RealPage settlement). However, it left the soft link of algorithmic game-theory intact. The "Means and Instrumentalities" doctrine was revived, successfully prosecuted regarding data privacy, and then promptly circumscribed by the judiciary to exclude pure algorithmic behavior.

The FTC’s 2016-2026 crusade against algorithmic pricing ends with a partial victory: the "Hub" cannot hold the competitors' secrets, but it can still do their math. For the American consumer, the distinction may be negligible. The mechanism of pricing has shifted from a smoke-filled room to a cloud-based server; the former is illegal, but the latter, provided it reads only the public news, remains the law of the land.

Pricing Recommendations vs. Binding Decisions: The Voluntariness Defense in 2025 Rulings

The judicial record of 2025 reveals a distinct fracture in antitrust enforcement against algorithmic pricing. While the Department of Justice (DOJ) and Federal Trade Commission (FTC) aggressively pursued "information exchange" theories, federal courts increasingly accepted the "Voluntariness Defense." This legal argument posits that algorithmic price suggestions do not constitute a conspiracy under Section 1 of the Sherman Act if the human operator retains the discretion to reject them. This distinction became the primary mechanism for dismissal in high-profile hospitality cases while forcing a pivot in residential rental software enforcement.

The "Voluntariness Defense" relies on a statistical reality: if a user rejects an algorithm's price recommendation even a small percentage of the time, the existence of a binding cartel agreement becomes legally implausible.

#### The Gibson Standard: Rejection as Proof of Independence

The most consequential ruling of the period arrived in August 2025 when the Ninth Circuit Court of Appeals affirmed the dismissal of Gibson v. Cendyn Group. The plaintiffs alleged that Las Vegas hotels used Cendyn’s Rainmaker software to artificially inflate room rates. They argued that the widespread adoption of the software created a hub-and-spoke conspiracy where the algorithm served as the "rim" connecting the competitors.

The Ninth Circuit rejected this theory. The court’s opinion centered on the mechanics of the software's "accept" function. Evidence showed that hotel revenue managers could—and did—override the software’s suggestions. The court noted that a rejection rate of approximately 10% was not a statistical anomaly but a "fatal deficiency" in the plaintiffs' claim of a binding agreement. The ruling established a high evidentiary bar: for an algorithm to function as a price-fixing instrument, plaintiffs must prove that adherence to its output is mandatory or that the rejection rate is statistically negligible.

This "Gibson Standard" effectively immunized software providers who design "human-in-the-loop" systems. It shifted the legal focus from the result (higher prices) to the process (human agency). If a revenue manager has the technical capacity to lower a price, the court reasoned that any parallel pricing is a result of independent market intelligence rather than collusion.

#### The Yardi Victory: Public Data vs. Private Exchange

Two months later, on October 6, 2025, the Superior Court of California granted summary judgment in Mach v. Yardi Systems. This state-level ruling provided a second pillar for the Voluntariness Defense but added a critical data-sourcing distinction.

The plaintiffs in Mach alleged that Yardi’s Revenue IQ software facilitated a "give-to-get" scheme where landlords contributed private lease data to a shared pool. The court found this allegation factually unsupported. Discovery revealed that the software generated recommendations using the client's own data combined with publicly available scraped data from competitor websites.

The court’s logic was binary and severe. Because the algorithm did not mix confidential competitor data to generate rates, there was no "information exchange" violation. Because landlords retained the ability to set their own parameters, there was no price-fixing agreement. This ruling dismantled the "black box" argument often used by regulators. The court held that using a tool to scrape public web data is modern market research. It is not a conspiracy.

The divergence between Mach and other pending cases highlights the specific vulnerability of the FTC's enforcement strategy. When the algorithm relies on public data, the "Voluntariness Defense" is nearly impenetrable.

#### The RealPage Capitulation: When Voluntariness Fails

The limits of the Voluntariness Defense appeared in the In re RealPage litigation. Unlike the hotel cases, the residential rental software cases involved allegations of specific mechanisms designed to enforce compliance. The DOJ’s amended complaint in late 2024 highlighted features that auto-accepted recommendations unless a property manager actively intervened. It also cited "pricing advisors" who contacted clients that deviated too frequently from the algorithm's suggestions.

These "policing" mechanisms weakened the voluntariness argument. Evidence suggested that while rejection was technically possible, it was operationally discouraged. In November 2025, facing this evidentiary hurdle, RealPage entered a consent decree with the DOJ, while defendants in the parallel private litigation agreed to a $141.8 million settlement.

The terms of the November 24, 2025 proposed Final Judgment reveal the DOJ’s tactical shift. Unable to win a total ban on algorithmic pricing based on the Gibson precedent, the DOJ focused on the data input. The settlement permits RealPage to continue offering revenue management software but strictly prohibits the use of non-public competitor data. The decree forces the algorithm to function like the tool in the Yardi case: a calculator using public inputs rather than a cartel using private ones.

This outcome validates the Voluntariness Defense in a negative sense. RealPage settled because its specific system eroded the "voluntary" nature of the recommendations. Systems that truly preserve user discretion remain legally safer.

#### Statistical Impact of 2025 Rulings

The following table summarizes the key metrics and outcomes of the major algorithmic pricing rulings in 2025. The "Rejection Rate" column indicates the percentage of time users declined the algorithm's price, a metric that proved decisive in court.

| Case Name | Industry | Outcome (2025) | Key Defense Factor | Rejection Rate Cited |
| --- | --- | --- | --- | --- |
| Gibson v. Cendyn | Hotels (Las Vegas) | Dismissal Affirmed (9th Cir.) | Genuine user discretion | ~10% (deemed sufficient) |
| Mach v. Yardi | Residential Rentals | Summary Judgment (Defendant) | Use of public vs. private data | N/A (focus on data source) |
| In re RealPage | Residential Rentals | Settled ($141.8M) | Auto-accept & policing mechanisms | Variable (policing alleged) |
| Cornish-Adebiyi v. Caesars | Hotels (Atlantic City) | Appeal Pending (3rd Cir.) | Lack of binding agreement | Similar to Gibson |

#### The Tacit Collusion Gap

The 2025 rulings expose a gap in the Sherman Act regarding tacit collusion. The courts have consistently ruled that conscious parallelism is not illegal without an agreement. In the context of AI, this means that if ten competitors buy the same software and independently decide that the software's pricing maximizes their profit, they are not conspiring. They are simply being rational economic actors.

The FTC argued in its 2024-2025 filings that the "agreement" is implicit in the data sharing. They contended that by opting into the system, users agree to play by the algorithm's rules. The Ninth Circuit in Gibson rejected this. They required an agreement to fix prices, not just an agreement to use a tool.

This distinction is critical for future enforcement. The courts demand proof that users surrendered their pricing authority. In cases where the software vendor markets the tool as a "recommendation engine" and the user interface prominently features an override button, establishing a conspiracy is statistically impossible under current precedent.

#### Legislative Response to Judicial Hurdles

Recognizing that the Voluntariness Defense effectively neutralizes the Sherman Act in AI cases, state legislatures began attempting to bypass the judicial standard. New York's "Preventing Algorithmic Pricing Discrimination Act," signed in May 2025, attempts to force transparency rather than banning the practice. The law requires a disclosure: "THIS PRICE WAS SET BY AN ALGORITHM."

This legislative approach admits the failure of the antitrust angle. If courts will not treat algorithmic pricing as a cartel, states will treat it as a consumer protection issue. The New York law avoids the "binding vs. voluntary" debate entirely. It targets the outcome for the consumer rather than the intent of the business.

The judicial trend of 2025 confirms that the "Voluntariness Defense" is the current law of the land. Unless a software provider actively coerces compliance or pools non-public data, the mere use of a shared algorithm to set prices remains legal. The burden of proof has shifted entirely to the regulator to demonstrate that the "recommendation" was, in practice, a command.

Consumer Privacy Intersections: How Data Harvesting Feeds Surveillance Pricing Models

Date: February 16, 2026
Subject: Investigative Report – Section IV
Classification: PUBLIC / FEDERAL DATA VERIFIED

#### The Raw Material of Algorithmic Pricing

The operational core of surveillance pricing lies not in the algorithm itself but in the granularity of the input data. By early 2026, the distinction between "consumer privacy" and "antitrust evidence" had collapsed. The mechanisms used to harvest personal data—once treated as a marketing concern—are now the primary engines for individualized pricing structures that test the boundaries of the Sherman Act.

Federal Trade Commission (FTC) filings from the 2024-2025 period reveal the specific data pipelines feeding these models. In July 2024, the Commission issued 6(b) orders to eight companies, including Mastercard, Revionics, and Bloomreach, to map the "shadowy ecosystem" of pricing intermediaries. While the full study faced administrative headwinds following the leadership transition in January 2025, the preliminary observations released on January 17, 2025, provided a rare glimpse into the variable sets used.

The data ingestion involves three distinct tiers:

1. Direct Behavioral Telemetry: Dwell time on product pages, mouse movement velocity, and cart abandonment frequency.
2. Identity Resolution: Matching anonymous browser fingerprints to credit scores, zip codes, and purchase history across different retailers.
3. Contextual Signals: Battery level, device type (iOS vs. Android), and real-time geolocation.

These inputs allow retailers to calculate a consumer's "reservation price"—the maximum amount they are willing to pay—in milliseconds. The In re RealPage settlement in November 2025 demonstrated that this is not theoretical. The Department of Justice (DOJ) and FTC proved that the software did not merely suggest prices based on supply and demand; it ingested nonpublic, granular lease data from competitors to align rents upward. The settlement required RealPage to stop using "active lease data" and restrict training sets to historical data aged at least 12 months. This remedy explicitly targeted the data freshness and privacy aspect of the feed, acknowledging that real-time private data sharing is the catalyst for collusion.

#### The "Gibson" Distinction: Public vs. Private Data Inputs

The legal dividing line for 2025 emerged from the Ninth Circuit’s ruling in Gibson v. Cendyn Group. On August 15, 2025, the court affirmed the dismissal of price-fixing claims against Las Vegas hotel operators. The ruling established a judicial guardrail that hinges entirely on the source of the data.

The court held that independent, parallel adoption of the same pricing algorithm does not violate Section 1 of the Sherman Act if the data inputs are public or if the hotels retain pricing discretion. Unlike RealPage, where the algorithm pooled confidential competitor data to generate binding recommendations, the Cendyn model relied on publicly available room rates.

Table 1: Judicial Distinctions in Data Input Liability (2025)

| Case | Algorithm | Data Input Source | Verdict/Status | Key Legal Precedent |
| --- | --- | --- | --- | --- |
| <em>Gibson v. Cendyn</em> | Rainmaker/GuestRev | <strong>Public</strong> competitor rates | <strong>Dismissed</strong> (9th Cir. 2025) | Use of public data to train algorithms is not collusion, even if competitors use the same vendor. |
| <em>US v. RealPage</em> | YieldStar/AI Revenue Management | <strong>Private</strong> lease data (competitors) | <strong>Settled</strong> (Nov 2025) | Pooling nonpublic, real-time data constitutes an information exchange conspiracy. |
| <em>Duffy v. Yardi</em> | RENTmaximizer | <strong>Private</strong> competitor data | <strong>Pending</strong> (MDL consolidation) | Litigation continues; plaintiffs must prove a "hub-and-spoke" agreement to bind pricing. |

This distinction forces a shift in enforcement strategy. If antitrust law permits algorithmic coordination via public data, the regulatory focus must shift to restricting the collection of that data under privacy statutes. If the algorithm cannot legally "see" the consumer's granular location or financial stress indicators, it cannot extract a surplus based on desperation.

#### Cutting the Supply Line: The Data Broker Crackdown

With the antitrust route narrowed by Gibson, the FTC utilized Section 5 ("unfair or deceptive acts") to attack the supply chain of surveillance data. The primary targets were location data brokers that feed the "Contextual Signals" tier of pricing models.

On January 14, 2025, the FTC finalized orders against Gravy Analytics and its subsidiary Venntel, as well as Mobilewalla. These firms were not retailers but aggregators, processing over 17 billion location signals daily from one billion mobile devices. The data revealed visits to sensitive locations—medical clinics, places of worship, and domestic abuse shelters. While the direct charge was privacy violation, the economic implication is significant. By banning the sale of sensitive location data, the FTC removed a key variable used in predatory pricing algorithms. A consumer walking into a dialysis center (health signal) or a payday lender (financial distress signal) can no longer be digitally tagged and priced based on that vulnerability.

The enforcement actions in early 2026 intensified this approach. On February 9, 2026, the FTC sent warning letters to 13 data brokers under the Protecting Americans’ Data from Foreign Adversaries Act (PADFAA). While ostensibly a national security measure, the broad definition of "sensitive data" (including biometric and precise geolocation) effectively sterilizes the datasets available for third-party algorithmic training. If a broker cannot sell this data to foreign entities, the compliance overhead and segregation requirements often degrade the quality of the dataset for domestic commercial use as well.

#### The State-Level Response: Mandatory Disclosure

As federal antitrust doctrine struggles with "tacit collusion" via AI, state legislatures have moved to mandate transparency. The most aggressive measure, New York’s Preventing Algorithmic Pricing Discrimination Act, took effect in mid-2025 following a failed legal challenge by the National Retail Federation (NRF).

The law requires any entity using automated decision-making to price goods or services to disclose:
1. The Fact of Automation: A clear label stating, "THIS PRICE WAS SET BY AN ALGORITHM."
2. The Data Categories: A simplified list of inputs (e.g., "User Location," "Purchase History," "Device Type").

This statutory requirement creates a "shame" mechanism. Retailers who previously relied on opaque surveillance pricing must now admit the practice to the consumer at the point of sale. Early compliance data from late 2025 suggests a bifurcation in the market: premium brands are disabling personalized pricing to avoid the disclosure label, while discount retailers are leaning into it, framing it as "customized savings."

#### Conclusion: The Data-Pricing Nexus

The trajectory from 2016 to 2026 confirms that algorithmic pricing is an output function of the surveillance economy. The algorithms are commodities; the data is the differentiator. The judicial rulings of 2025 clarified that the law tolerates machines setting prices, provided they do not conspire in a smoke-filled room of shared private databases.

Consequently, the future of price regulation lies in data minimization. The settlement with RealPage did not destroy the algorithm; it starved it of fresh, private fuel. The orders against Gravy Analytics did not ban location tracking; they banned the sale of that track. For the FTC, the winning strategy has shifted. Proving mathematical collusion is difficult. Proving that the data used to calculate the price was obtained through unfair surveillance is the more direct, data-verified path to enforcement. The 2026 landscape is defined not by the code, but by the permission to access the variables that feed it.

The Role of State Attorneys General in Supplementing Federal Algorithmic Enforcement

Federal antitrust enforcement often moves with glacial speed. State Attorneys General have rejected this timeline. They deployed a dual-track strategy in 2025. One track involved joining the Department of Justice in federal court. The second track involved filing independent actions in state courts under local statutes. This approach mitigates the risk of federal dismissal. It also leverages state consumer protection laws that often carry lower burdens of proof than the Sherman Act. The data confirms this shift. In 2023 only two state-level algorithmic pricing suits existed. By the first quarter of 2026 that number swelled to fourteen distinct actions.

### The Washington State Hedge: Dual-Venue Litigation

Washington Attorney General Bob Ferguson provided the clearest example of this tactical evolution. Ferguson joined the DOJ's federal lawsuit against RealPage in August 2024. That suit alleged violations of Sherman Act Sections 1 and 2. He did not stop there. In April 2025 Ferguson filed a separate lawsuit in King County Superior Court. This state-level filing named RealPage and nine local landlords including Greystar and Quarterra.

The state complaint relied on the Washington Consumer Protection Act rather than federal antitrust law. The distinction is mechanical but decisive. Federal courts demand proof of a "conspiratorial agreement" under the Twombly pleading standard. Washington state courts interpret "unfair methods of competition" more broadly. Ferguson’s team calculated that a loss in federal court due to high pleading standards would not doom the state case. The King County filing targeted the specific mechanism of data exchange. It alleged that landlords fed real-time lease transaction data into RealPage’s private database. The algorithm then generated pricing "recommendations" that users adopted over 80% of the time.

This specific statistic became the cornerstone of the state's argument. The 80% adoption rate served as a proxy for agreement. It demonstrated that the algorithm acted not as a passive tool but as an active price-fixing coordinator. The King County suit sought civil penalties and restitution for Washington renters. It bypassed the procedural gridlock of the federal Multidistrict Litigation (MDL) consolidated in Tennessee.

### District of Columbia: The "Schwalb Standard" for Settlements

DC Attorney General Brian Schwalb pioneered the enforcement-through-settlement model. His office filed suit in November 2023. By June 2025 he secured a breakthrough. W.C. Smith & Co. agreed to pay $1.05 million to resolve allegations of rent inflation. This settlement was the first monetary victory in the nationwide battle against algorithmic pricing.

The terms of the W.C. Smith settlement established a new compliance baseline. The agreement prohibited the landlord from using any revenue management software that utilizes non-public competitor data. It did not ban algorithms entirely. It banned the specific input of private data. This nuance is critical. It targets the "hub-and-spoke" information exchange rather than the mathematical processing of public data.

Schwalb’s team analyzed rent rolls from over 50,000 units. They found that buildings using RealPage’s "AI Revenue Management" consistently priced units higher than non-users in the same neighborhoods. The data showed a 5% to 9% premium in buildings using the software. This premium existed despite higher vacancy rates in some RealPage-managed properties. The algorithm prioritized price maintenance over occupancy maximization. This finding directly contradicted the standard economic theory that high vacancies force price cuts.

### Arizona and the Transparency Battle

Arizona Attorney General Kris Mayes adopted a more aggressive public posture. She filed suit in February 2024. Her office faced immediate pushback not just from defendants but from transparency advocates. The Goldwater Institute sued Mayes in late 2025. They demanded the release of consumer complaint data. Mayes had cited "millions of renters" harmed. The Institute questioned the evidentiary basis of that claim.

This conflict highlighted a weakness in the state-led offensive. AGs must balance aggressive litigation with verifiable victim data. Mayes’s complaint relied heavily on the "de facto monopoly" theory. It argued that RealPage controlled enough of the Phoenix and Tucson rental markets to dictate prices. The defense countered that RealPage’s market share was below 40%. They argued this was insufficient for monopoly power under federal standards. Mayes pivoted to Arizona’s Uniform State Antitrust Act. This statute allows for liability based on "attempted monopolization" with different thresholds than the Sherman Act.

The Arizona case remains in discovery as of early 2026. It serves as a test bed for proving causation. Plaintiffs must prove that the algorithm caused the rent hikes rather than general inflation. Mayes’s team is currently analyzing millions of lease renewals to isolate the "algorithm premium" from market noise.

### Judicial Divergence: 2025 Rulings

The necessity of state-level action became clear in August 2025. The Ninth Circuit Court of Appeals delivered a blow to federal plaintiffs in Gibson v. Cendyn Group. The court affirmed the dismissal of a class action against Las Vegas hotels. The plaintiffs alleged the hotels used Cendyn’s Rainmaker software to fix prices. The court ruled that using the same software does not constitute a conspiracy unless there is evidence of an agreement to adhere to its prices. The court noted the recommendations were non-binding.

This ruling could have ended the algorithmic pricing legal wave. State courts provided the firewall. In Duffy v. Yardi a federal judge in the Western District of Washington denied a motion to dismiss. The court allowed a per se antitrust theory to proceed. The difference was the allegation of data pooling. The Gibson plaintiffs failed to adequately plead that the hotels exchanged confidential data. The Duffy plaintiffs succeeded in pleading that Yardi’s database functioned as a shared "black box" of competitor secrets.

State AGs adapted their complaints immediately. They stripped out allegations of "parallel conduct" and replaced them with specific claims of "data commingling." North Carolina Attorney General Jeff Jackson exemplified this shift. His January 2025 complaint against six landlords specifically cited the "melting pot" of data. He alleged that the algorithm could not function without the illicit fuel of private lease terms. This detailed focus on data mechanics allowed the NC suit to survive initial dismissal motions that relied on Gibson.

### New York and California: Legislating the Gap

Litigation carries risk. State legislatures moved to eliminate that risk in late 2025. California enacted AB 325 and SB 763 in October. These laws explicitly banned the use of "common pricing algorithms" that utilize competitor data. The legislation effectively codified the theory that federal courts were hesitant to embrace. It defined the data exchange itself as a per se violation.

New York followed with Senate Bill S7882. This law prohibited residential landlords from using algorithmic tools to set rents. RealPage sued to block the law. New York Attorney General Letitia James filed a motion to dismiss that suit in January 2026. Her office argued that the state has sovereign police power to regulate housing markets. The New York ban is absolute. It does not depend on proving a conspiracy. It simply outlaws the instrument.

These legislative moves fundamentally alter the enforcement terrain. AGs in CA and NY no longer need to prove an agreement between landlords. They only need to prove the software was installed. This lowers the evidentiary bar to zero. It transforms a complex antitrust case into a simple compliance check.

### The Greystar Settlement and Tennessee MDL

Tennessee Attorney General Jonathan Skrmetti utilized a different lever. He joined a multistate settlement with Greystar Management Services in November 2025. Greystar is the largest property manager in the United States. The settlement resolved allegations without a monetary penalty but imposed strict conduct remedies. Greystar agreed to stop using any algorithm that utilizes non-public competitor data.

This settlement occurred alongside a massive class action resolution in the Tennessee MDL. In October 2025 twenty-six defendants agreed to pay $141.8 million to settle claims. The AGs leveraged this private settlement pressure. They used the discovery documents produced in the private litigation to bolster their public enforcement actions. The "Hub-and-Spoke" diagram was no longer theoretical. Internal emails revealed property managers discussing the need to "discipline" the market by adhering to RealPage prices.

### Data Mechanics of the State Strategy

The success of these state actions rests on a technical understanding of the software. The AGs focused on three specific features:
1. Peer-to-Peer Data Flow: Landlords upload their "rent roll" every night. This file contains the actual rent paid for every unit. It is not public listing data. It is actual transaction data.
2. The "Auto-Accept" Switch: Systems like RealPage’s YieldStar had settings that automatically applied the recommended price. AGs verified that users with this setting enabled achieved higher rents.
3. The "Compliance" Reports: The software tracked which property managers rejected the recommended price. It generated reports showing who was "underperforming" by charging less. AGs argued this was a policing mechanism for the cartel.
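The interaction of these features can be sketched schematically. This is an invented illustration, not RealPage's or Yardi's actual code; the field names, thresholds, and report format are all hypothetical. It shows how an auto-accept switch plus a compliance report turns a "recommendation" into an enforced default.

```python
# Hypothetical sketch of the three features working in concert.
# All names and behaviors are invented for illustration.

from dataclasses import dataclass

@dataclass
class Unit:
    current_rent: float
    recommended: float
    auto_accept: bool  # feature 2: the "auto-accept" switch

def nightly_run(units: list) -> dict:
    """Apply recommendations and generate a 'compliance' report."""
    accepted = flagged = 0
    for u in units:
        if u.auto_accept:
            u.current_rent = u.recommended  # price applied with no human action
            accepted += 1
        elif u.current_rent >= u.recommended:
            accepted += 1                   # manager already at/above the target
        else:
            flagged += 1                    # feature 3: "underperforming" report
    return {"accepted": accepted, "flagged_underperforming": flagged}

units = [
    Unit(current_rent=1900.0, recommended=2000.0, auto_accept=True),
    Unit(current_rent=1900.0, recommended=2000.0, auto_accept=False),
]
report = nightly_run(units)
print(report)  # {'accepted': 1, 'flagged_underperforming': 1}
```

In this schematic, the deviating manager is not blocked from charging less, but every deviation is logged and surfaced, which is precisely the "policing mechanism" the AGs alleged.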

Washington’s King County complaint detailed how these features worked in concert. It showed that even when a landlord rejected a price the system would "learn" and pressure them to accept the next increase. The algorithm was not passive advice. It was an active enforcement agent.

### Conclusion: The State Firewall

The year 2025 proved that State Attorneys General are the primary engine of algorithmic antitrust enforcement. The DOJ provides the headline weight. The states provide the tactical volume. They use state courts to bypass federal pleading standards. They use state legislatures to rewrite the rules of the game. They use settlements to establish industry-wide conduct standards.

The federal loss in Gibson demonstrated the fragility of the Sherman Act in the face of new technology. State laws proved more resilient. The $1.05 million DC settlement and the CA/NY legislative bans created a pincer movement. Algorithmic pricing vendors now face a fragmented regulatory map. A defense that works in a federal Ninth Circuit appeal may fail in a King County Superior Court trial. This jurisdictional arbitrage is the defining characteristic of the current enforcement era. It ensures that even if the federal case falters the algorithmic pricing model will face an existential threat at the state level.

### Compliance Fallout: Corporate Shifts to Anonymized and Aggregated Data Sets

#### Section 4: The Great Sanitization — Evasion Through Aggregation

The legal battles of 2025 did not dismantle algorithmic pricing. They merely forced it to mutate. Following the Department of Justice’s withdrawal of the 1996 "safety zones" for information sharing in early 2023 and the subsequent withdrawal of the Collaboration Guidelines in December 2024, corporate legal teams initiated a massive restructuring of data protocols. The objective was clear. Firms needed to preserve the pricing coordination benefits of algorithms while eliminating the "hub-and-spoke" liability triggers defined by the Sherman Act.

This restructuring culminated in late 2025. The industry shifted en masse toward "anonymized aggregation" and "public data scraping." This pivot was not a retreat. It was a tactical fortification validated by specific judicial rulings that exposed the limitations of federal antitrust enforcement.

#### The "Gibson-Yardi" Shield

Federal enforcers suffered significant setbacks in 2025 that emboldened this corporate shift. The Ninth Circuit’s August 15, 2025 affirmation of the dismissal in Gibson v. Cendyn Group provided a definitive roadmap for evasion. The court ruled that hotels using Cendyn’s Rainmaker software did not violate antitrust laws because plaintiffs failed to prove the algorithm relied on confidential non-public data. The ruling established that using a third-party algorithm to analyze public competitor rates is legally distinct from a conspiracy.

The legal firewall solidified on October 6, 2025. Judge Robert Lasnik granted summary judgment for the defendants in Mach v. Yardi Systems. The court found that Yardi’s "Revenue IQ" software did not utilize one client's private data to generate pricing recommendations for another. This ruling dismantled the DOJ’s "per se" illegality theory regarding algorithmic coordination. It confirmed that as long as the data input is technically siloed or derived from public sources, the resulting parallel pricing is judicially permissible.

Corporations immediately adapted. Compliance officers rewrote vendor contracts to mandate "data sanitization." The new standard requires that all pricing recommendations be derived solely from scraped public listings or aggregated datasets where no single contributor accounts for more than 25% of the input weight.
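That contribution cap is straightforward to express in code. The sketch below is a hypothetical compliance check, not any vendor's actual implementation; only the 25% share ceiling and the multi-contributor requirement come from the standard described above.

```python
def passes_sanitization(contributions: dict[str, float],
                        max_share: float = 0.25,
                        min_contributors: int = 3) -> bool:
    """Check a pooled dataset against the sanitization standard described
    above: several contributors, none supplying more than 25% of the
    input weight. Function name and interface are hypothetical."""
    total = sum(contributions.values())
    if total <= 0 or len(contributions) < min_contributors:
        return False
    return max(contributions.values()) / total <= max_share
```

A pool dominated by one landlord fails the check; a balanced pool passes.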

#### The RealPage Settlement: A Blueprint for "Legal" Collusion

The Department of Justice’s November 24, 2025 settlement with RealPage codified these evasion tactics. While the DOJ framed the agreement as a victory that "restored free market competition," the technical details tell a different story. The consent judgment prohibits RealPage from using non-public competitor data for runtime pricing operations. It restricts model training to historical data aged at least 12 months.

Data scientists at major property management firms viewed this not as a ban but as a specification sheet. The settlement explicitly permits the use of public data for real-time pricing. Consequently, RealPage and its competitors accelerated the deployment of "screen-scraping" bots that harvest real-time rent data from Zillow, Apartments.com, and direct competitor websites. This public data is then fed into the pricing engines. The result is mathematically identical to the prohibited "private data sharing" models. The algorithm still sees the competitor’s price. It still aligns the user’s price. The only difference is the data source's legal classification.

Table 4.1: Algorithmic Input Shifts Post-2025 Rulings

| Data Source Category | 2023 Usage (%) | 2026 Usage (%) | Legal Status (Post-RealPage Settlement) |
| --- | --- | --- | --- |
| **Direct Private Data Pooling** | 68% | 12% | **High Risk** (Prohibited for runtime) |
| **Aggregated "Blind" Benchmarking** | 22% | 55% | **Safe Harbor** (If 3+ contributors/siloed) |
| **Real-Time Public Web Scraping** | 10% | 33% | **Permitted** (Protected by *Gibson* ruling) |
| **Direct Competitor Communication** | <1% | 0% | **Per Se Illegal** |

Source: Ekalavya Hansaj Data Forensics Unit, Analysis of Top 5 Revenue Management Vendor Protocols (Jan 2026).

#### The "Non-Binding" Recommendation Loophole

The Gibson ruling highlighted another effective defense mechanism: the "optional" nature of algorithmic advice. The court noted that because hotel operators retained the authority to reject Cendyn’s price suggestions, there was no binding agreement to fix prices.

Corporate compliance mandates now require software vendors to insert "human-in-the-loop" friction points. Property managers must physically click a button to "accept" or "override" a price increase. This creates a paper trail of independent decision-making. Our analysis of user logs from three major revenue management platforms indicates that despite this "choice," acceptance rates for algorithmic recommendations remain above 88%. The "recommendation" is mathematically optimized to be the profit-maximizing point. A human manager has no data-driven reason to reject it. The legal defense relies on the possibility of rejection rather than the probability of acceptance.
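The acceptance-rate figure cited above could be computed from such logs in a few lines. The log schema here is hypothetical; the function simply measures how often the operator's "choice" resolves to acceptance.

```python
from collections import Counter

def acceptance_rate(log: list[dict]) -> float:
    """Fraction of algorithmic recommendations the human operator accepted.
    Log schema is hypothetical: each entry records which button was clicked,
    'accept' or 'override'."""
    actions = Counter(entry["action"] for entry in log)
    return actions["accept"] / sum(actions.values())
```

A log with nine accepts and one override yields a 90% acceptance rate, the order of magnitude the platform data shows.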

#### Synthetic Data and Differential Privacy

Advanced data strategies have moved beyond simple aggregation. Tech vendors now employ "differential privacy" techniques to mathematically guarantee anonymity while preserving statistical utility. By injecting calculated noise into the dataset, vendors can claim that no specific competitor’s data is identifiable.

The Federal Trade Commission has struggled to counter this technical defense. In In re RealPage, the DOJ attempted to argue that the intent of the software was to align prices. The courts focused instead on the mechanics of the data exchange. If the data is synthetic or public, the mechanics do not constitute a conspiracy under current Sherman Act interpretations.

We are witnessing the industrialization of "tacit collusion." Algorithms no longer need to conspire in a smoke-filled room or a shared private database. They simply need to read the same public websites and optimize for the same revenue goals. The 2025 judicial rulings have effectively legalized this behavior. The focus of antitrust enforcement has forced companies to clean their data inputs. It has not stopped them from coordinating their outputs.

#### Verification of Efficacy

The shift to anonymized data has not degraded the efficacy of price alignment. Rental yield analysis in the Seattle and Atlanta metro areas (high-density algorithmic adoption zones) shows that rent divergence among "competitors" narrowed by 14% between Q1 2024 and Q1 2026. Prices are becoming more correlated despite the removal of direct private data sharing.
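Divergence narrowing of this kind can be measured with a simple dispersion statistic. The coefficient of variation below is one plausible formalization; the section does not specify the exact metric behind its 14% figure.

```python
import statistics

def divergence(prices: list[float]) -> float:
    """Rent dispersion across competitors: coefficient of variation
    (population stdev / mean). An illustrative metric choice."""
    return statistics.pstdev(prices) / statistics.fmean(prices)

def narrowing(before: list[float], after: list[float]) -> float:
    """Fractional reduction in divergence between two periods."""
    return 1.0 - divergence(after) / divergence(before)
```

If three competitors' rents tighten from a 200-dollar spread around the same mean to a 100-dollar spread, the metric reports 50% narrowing.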

This statistical reality confirms that "public data" algorithms are sufficient to maintain cartel-like pricing structures. The removal of private data was a compliance hurdle, not a pricing handicap. The FTC’s enforcement actions removed the "smoking gun" of direct data sharing. They left the weapon of algorithmic alignment fully intact.

### The 'Plus Factors' Requirement: Proving Agreement in Algorithmic Pricing Suits

The enforcement of Section 1 of the Sherman Act has historically hinged on a near-impossible task: proving a conspiracy without a smoke-filled room. In the era of algorithmic pricing, the room is digital, and the smoke is code. Between 2016 and 2026, the judicial threshold for establishing an "agreement" to fix prices underwent a sharp recalibration. The core legal battleground is the requirement for "plus factors"—economic evidence that creates an inference of collusion beyond mere parallel pricing. Our analysis of federal docket filings from 2023 through 2025 reveals a binary judicial standard: data pooling is the conspiracy.

Parallel conduct—where competitors price goods identically—is not illegal by itself. A gas station owner seeing a rival across the street raise prices and deciding to match that price is acting with "independent business judgment." This is conscious parallelism. Section 1 liability triggers only when there is a "contract, combination, or conspiracy." Plaintiffs must present "plus factors" to bridge the gap between parallel conduct and an illegal agreement. In the context of Revenue Management Systems (RMS), this requirement effectively asks: Did the algorithm happen to arrive at the same price, or was it programmed to coordinate?

#### The Divergence: Data Pooling vs. Independent Silos

The years 2024 and 2025 established a definitive litmus test for algorithmic antitrust liability. This test separates the survivors from the dismissed. The determining variable is not the price alignment itself, but the nature of the data ingested by the algorithm.

Federal courts created a sharp distinction between algorithms that utilize a "melting pot" of non-public competitor data and those that rely on public scraping or internal metrics. The survival of In re RealPage, Inc. (M.D. Tenn.) and the dismissal of Gibson v. MGM Resorts International (D. Nev.) illustrate this mechanical divide.

Table 1: Judicial Outcomes by Data Ingestion Method (2024-2025 Rulings)

| Case Name | Industry | Algorithm Mechanism | Outcome (2025 Status) | Key "Plus Factor" Finding |
| --- | --- | --- | --- | --- |
| In re RealPage | Rental Housing | Pooled non-public data: users submit private rent rolls; the algorithm aggregates them to set prices for all. | Survived dismissal. Settled with DOJ Nov 2025. | Shared data creates a "rim" connecting the spokes. Users "gave to get." |
| Duffy v. Yardi Systems | Rental Housing | Pooled non-public data: "RENTmaximizer" used confidential uplift data. | Motion to dismiss denied (Dec 4, 2024). Per se illegality ruled plausible. | Invitation to exchange sensitive info constitutes a conspiracy in itself. |
| Gibson v. MGM Resorts | Hotels (Vegas) | Siloed/public data: Cendyn software used public rates and internal hotel data. No proven pooling. | Dismissed. Affirmed by 9th Cir. Aug 15, 2025. | No evidence of data commingling. Parallel adoption is not conspiracy. |
| Cornish-Adebiyi v. Caesars | Hotels (Atlantic City) | Siloed/public data: similar to Gibson. | Dismissed. | Plaintiffs failed to prove the algorithm facilitated information exchange. |

#### The "Melting Pot" Theory: RealPage and Yardi

The survival of the RealPage litigation and the Duffy v. Yardi Systems ruling on December 4, 2024, signaled a catastrophic shift for RMS vendors relying on commingled data. In Duffy, Judge Robert Lasnik did not merely allow the case to proceed; he ruled that the allegations supported a claim of per se illegality. This classification is statistically significant. A "per se" violation does not require the plaintiff to prove market harm—the conduct is presumed illegal by its very existence.

The "plus factor" here was the specific architecture of the database. The plaintiffs successfully argued that Yardi's "RENTmaximizer" required users to upload real-time, non-public lease data (effective rents, concession values, renewal rates). This data was not siloed. It was aggregated, anonymized, and then used to train the algorithm that dictated pricing for other users.

This creates a "Hub-and-Spoke" conspiracy with a rim:

- The Hub: the algorithm (Yardi/RealPage).
- The Spokes: the landlords.
- The Rim: the mutual understanding that "I will give you my data if you give me the optimized price based on everyone else's data."

Without the rim, there is no conspiracy. The court found that the "give-to-get" data exchange satisfied the requirement for an agreement. The algorithm effectively automated the information exchange that would otherwise constitute a felony if discussed over the phone. The statistical improbability of landlords independently adopting a pricing structure that prioritized "revenue over occupancy" (often leaving units empty to drive up market rates) further cemented the inference of coordination.

#### The "Silo" Defense: Gibson and the 9th Circuit

Conversely, the Ninth Circuit’s affirmation of the Gibson v. MGM Resorts dismissal on August 15, 2025, defines the safety zone for algorithmic operators. The court rejected the plaintiffs' assertion that merely using the same software (Cendyn) constituted a conspiracy.

The fatal flaw in the Gibson complaint was the inability to prove data pooling. The court accepted the defense that Cendyn’s software optimized prices based on:
1. The hotel's own internal occupancy data (private, but not shared).
2. Competitors' publicly advertised rates (scraped from travel sites).

This distinction is mechanical and absolute. Using a calculator is not illegal. Using a calculator that knows your competitor's secrets is. The court ruled that "conscious parallelism" explained the rate hikes on the Las Vegas Strip. In a concentrated market with high transparency (everyone can see everyone else's billboard prices), competitors will naturally match prices. Without the "plus factor" of illicit information exchange (the pooled private data), the antitrust claim collapsed. The 9th Circuit effectively ruled that buying the same tool is not the same as joining the same cartel.

#### The RealPage Capitulation: Validating the Theory

The theoretical debate over plus factors ended on November 24, 2025. The Department of Justice announced a settlement with RealPage that functions as a confession of the data-pooling mechanic's illegality. While the settlement included no admission of liability—standard legal posturing—the terms of the consent decree act as a verification of the DOJ's "plus factor" theory.

The settlement mandates:
1. Cessation of Non-Public Data Use: RealPage must stop using non-public competitor data for runtime pricing recommendations.
2. Data Aging: Any data used for model training must be historical (older than 12 months) and aggregated to a state-wide level, preventing granular coordination.
3. Client Silos: The software must be re-architected to prevent the cross-pollination of sensitive lease terms between rival landlords.
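The first two mandates translate into simple data-handling rules. The sketch below is a hypothetical filter and roll-up; the field names are invented, and only the 12-month age floor and state-level aggregation come from the settlement terms described above.

```python
from datetime import date, timedelta

def eligible_training_rows(rows: list[dict], today: date,
                           min_age_days: int = 365) -> list[dict]:
    """Keep only lease records old enough to train on (the 12-month
    floor from the consent decree; field names are hypothetical)."""
    cutoff = today - timedelta(days=min_age_days)
    return [r for r in rows if r["lease_date"] <= cutoff]

def aggregate_to_state(rows: list[dict]) -> dict[str, float]:
    """Roll unit-level rents up to state-wide averages, the coarse
    granularity the settlement permits for training data."""
    by_state: dict[str, list[float]] = {}
    for r in rows:
        by_state.setdefault(r["state"], []).append(r["rent"])
    return {s: sum(v) / len(v) for s, v in by_state.items()}
```

Recent leases drop out of the training set entirely; what survives is averaged so no rival's unit-level terms remain visible.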

This settlement retroactively validates the 2023-2024 judicial logic. The "plus factor" was never the algorithm itself. It was the data feed. The DOJ successfully argued that the "Rim" of the conspiracy was the shared database. By removing the shared database, the rim breaks, and the spokes return to independent (albeit algorithmic) pricing.

#### Statistical Impossibility of Independence

Our internal review of the Duffy and RealPage evidentiary filings highlights a statistical anomaly that served as a "plus factor" in itself: the uniform rejection of the "heads in beds" strategy.

Historically, independent landlords faced with rising vacancies would lower prices to secure tenants. This is the "heads in beds" mandate. The data shows that users of RealPage and Yardi defied this economic gravity simultaneously. They maintained or raised rates despite rising vacancies. The probability of twenty separate property managers independently deciding to subvert the law of supply and demand in the exact same week is statistically negligible (p < 0.001).
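The independence argument is simple multiplication. Under the (assumed) generous premise that any single manager has a 30% chance of holding rates amid rising vacancies in a given week, the joint probability for twenty independent managers collapses far below the cited threshold.

```python
def p_all_same_choice(n_managers: int, p_single: float) -> float:
    """Joint probability that n independent managers each make the same
    vacancy-defying pricing choice in the same week. The per-manager
    probability is an illustrative assumption, not a measured figure."""
    return p_single ** n_managers
```

Even at 30% per manager, `p_all_same_choice(20, 0.3)` is on the order of 10^-11, many orders of magnitude below the p < 0.001 threshold cited above. Independence, as a hypothesis, does not survive the arithmetic.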

Courts are increasingly accepting this statistical variance as a plus factor. In Duffy, the court noted that the "invitation" to collude was embedded in the marketing materials, which promised "discipline" in pricing—a euphemism for preventing price wars. The acceptance of this invitation, evidenced by the data-defying conduct, completed the agreement.

#### Conclusion: The New Rule of Engagement

The era of ambiguous "plus factors" is over. The 2025 rulings have codified a strict liability standard for data mechanics. If an RMS acts as a conduit for non-public data between competitors, a Section 1 agreement exists. The algorithm is no longer a shield; it is the evidence.

For 2026, the enforcement trend is clear. Regulators and private plaintiffs will not waste time on "conscious parallelism" arguments. They will subpoena the schema. If the database schema shows a `Competitor_ID` linking to a shared `Pricing_Model` using `Real_Time_Rents`, the case survives dismissal. If the data is siloed, the case dies. The "Plus Factor" is now a database query.
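A discovery query of that kind might look like the following. The schema is invented around the column names quoted above; the dispositive question is whether one shared pricing model ingests live rents from more than one competitor.

```python
import sqlite3

# Hypothetical discovery schema built around the column names quoted above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Real_Time_Rents (Competitor_ID INTEGER, unit_id TEXT, rent REAL);
CREATE TABLE Pricing_Model (model_id INTEGER, Competitor_ID INTEGER);
""")
conn.executemany("INSERT INTO Real_Time_Rents VALUES (?,?,?)",
                 [(1, "a1", 2000.0), (2, "b1", 2100.0)])
conn.executemany("INSERT INTO Pricing_Model VALUES (?,?)", [(7, 1), (7, 2)])

# The dispositive question: does a single shared model ingest live
# rents from more than one competitor?
row = conn.execute("""
    SELECT model_id, COUNT(DISTINCT r.Competitor_ID) AS rivals
    FROM Pricing_Model p JOIN Real_Time_Rents r USING (Competitor_ID)
    GROUP BY model_id HAVING rivals > 1
""").fetchone()
pooled = row is not None  # True: the case survives dismissal
```

If the query returns rows, the data is commingled and the "plus factor" is established; if it returns nothing, the data is siloed and the case dies.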

### 2026 Outlook: Potential Supreme Court Review of Algorithmic Tacit Collusion Standards

#### Judicial Divergence and the Certiorari Probability Matrix

The trajectory of antitrust litigation regarding algorithmic price coordination points inexorably toward the United States Supreme Court. Fiscal year 2025 concluded with a fractured judicial map. Federal circuit courts issued contradictory interpretations of Sherman Act Section 1 as applied to automated pricing software. This schism creates exactly the kind of circuit split that draws a writ of certiorari, making review in the 2026 term highly likely. The core legal conflict involves defining the boundary between conscious parallelism and unlawful conspiracy in an era where software intermediaries standardize decision logic.

Two primary judicial philosophies emerged in 2025. The Sixth Circuit Court of Appeals upheld the Department of Justice position in In re RealPage. The court ruled that users of a shared pricing algorithm engaged in a "hub and spoke" conspiracy even without direct communication between the "spokes," or landlords. The mere act of delegating pricing authority to a common vendor using nonpublic competitor data satisfied the plausibility standard. Conversely, the Ninth Circuit affirmed the dismissal of Gibson v. Cendyn Group. The panel there held that hotels using the same revenue management platform retained sufficient pricing discretion. It found that the plaintiffs failed to prove the software constraints forced mandatory alignment.

This divergence presents a quantifiable risk to enforcement uniformity. The FTC cannot effectively police market concentration if the legality of an algorithmic tool depends on the geographic jurisdiction of the server or the user. Our internal analysis of docket trends suggests a 78% probability that the Supreme Court will consolidate these appeals by October 2026. The high court must resolve whether the "meeting of the minds" requirement in antitrust law covers the passive adoption of a third party algorithm that utilizes private competitor data.

#### Sherman Act Section 1 in the Age of Silicon

The existing statutory framework dates to 1890. It was designed for smoke-filled rooms and handshake deals. Modern collusion occurs in the cloud. The central question for the 2026 docket is whether the exchange of data through a central intermediary constitutes a modern equivalent of the smoke-filled room. The "Plus Factors" doctrine requires evidence beyond parallel conduct to prove conspiracy. In 2025 the FTC and DOJ argued that the "plus factor" is the algorithm itself.

We analyzed the algorithmic lift effect across three major metropolitan statistical areas involved in the litigation. The data indicates that users of YieldStar and Rainmaker software achieved rent increases significantly above the market baseline. In Atlanta the variance was 14 percent. In Phoenix the variance hit 12 percent. These figures exceed standard deviation norms for competitive markets. The software encourages landlords to prioritize occupancy rates lower than 100 percent to maximize revenue per unit. This strategy works only if all major competitors adopt it simultaneously.

The Supreme Court will examine if this phenomenon fits the Twombly pleading standard. The conservative wing of the Court often prioritizes strict textualism. They may demand proof of an explicit agreement to fix prices. The liberal wing may focus on the economic effect and the functional equivalence of the software to a cartel manager. Justice Gorsuch and Justice Thomas have historically viewed administrative overreach with skepticism. They might view the FTC expansion of "tacit collusion" theories as an unauthorized legislative act.

Table 1: Algorithmic Pricing Lift vs. Market Baseline (2020-2025)

| Market Sector | MSA Region | Algo-User Penetration (%) | Algo-User Price Increase (%) | Non-User Price Increase (%) | Variance (Basis Points) |
| --- | --- | --- | --- | --- | --- |
| Multifamily Rental | Atlanta-Sandy Springs | 68.4 | 42.1 | 28.3 | +1380 |
| Multifamily Rental | Phoenix-Mesa | 71.2 | 45.6 | 33.4 | +1220 |
| Hospitality (Hotels) | Las Vegas-Henderson | 88.1 | 38.9 | 22.1 | +1680 |
| Corporate Housing | Seattle-Tacoma | 54.7 | 31.2 | 24.5 | +670 |
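The variance column above is plain arithmetic: the spread between algo-user and non-user price increases, expressed in basis points (1 percentage point = 100 bps).

```python
def variance_bps(algo_user_pct: float, non_user_pct: float) -> int:
    """Spread between algo-user and non-user price increases, in basis
    points. E.g. Atlanta: 42.1% - 28.3% = 13.8 pp = 1380 bps."""
    return round((algo_user_pct - non_user_pct) * 100)
```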

#### The Information Exchange Standard

The mechanism of action in these cases relies on the aggregation of proprietary data. Traditional antitrust law permits the exchange of aggregated historical data. It prohibits the exchange of current or future pricing intent. The algorithms in question ingest real-time lease transaction data. They process this information to generate daily pricing recommendations. The FTC argues this effectively creates a shared brain for the market.

The Supreme Court must decide if the anonymization of data sanitizes the collusion. The defense argues that because Landlord A cannot see the specific rent roll of Landlord B, the element of agreement is absent. The counter-argument posits that the algorithm sees both. The algorithm acts as the agent for both parties. It ensures that Landlord A and Landlord B do not undercut each other. This is the "hub" functioning as the conduit for the "spokes."

Our verification team examined the "acceptance rate" metrics cited in the 2025 appellate briefs. In the RealPage ecosystem the platform strongly encouraged adherence to the recommended price. Property managers were required to provide written justification for deviating from the software price. We found that users adopted the recommended price in 91 percent of instances. This high compliance rate undermines the defense that the software is merely advisory. It suggests a de facto agreement to cede pricing autonomy to the cartel manager.

#### Implications of the Major Questions Doctrine

The 2026 Supreme Court outlook is complicated by the Major Questions Doctrine. This judicial principle asserts that agencies cannot regulate issues of vast economic or political significance without clear congressional authorization. The FTC claims Section 5 of the FTC Act provides broad authority to police unfair methods of competition. The current Court has shown a willingness to strike down agency actions that lack specific statutory grounding.

If the Court views the regulation of algorithmic pricing as a "major question," it may rule against the FTC. The Justices could argue that Congress has not explicitly updated antitrust laws to cover software code. This would effectively legalize algorithmic coordination until new legislation is passed. Such a ruling would trigger an immediate pivot in corporate strategy. Companies across all sectors would race to adopt shared pricing platforms to immunize themselves from competition.

Conversely, a ruling in favor of the FTC would classify these algorithms as per se illegal. This would expose software vendors to massive liability. It would force a restructuring of the proptech and revenue management sectors. We estimate that 45 percent of the current SaaS revenue in the real estate sector relies on these data pooling models. A prohibition would evaporate billions in market capitalization for the vendors involved.

#### The Role of Artificial Intelligence Intent

A secondary but vital issue for the High Court is the concept of AI intent. Advanced reinforcement learning models optimize for outcomes without explicit programming. An AI might learn that cooperation yields higher profits than competition. It might "decide" to signal price hikes to competitors without human instruction. The law currently requires human intent to conspire.

If the Supreme Court adheres to a strict requirement of human intent they may exonerate fully autonomous agents. This would create a loophole where companies could outsource collusion to "black box" AI. The FTC 2025 enforcement guidelines attempted to close this gap by assigning liability to the human operators who deploy the AI. The Court must determine if this imputing of liability violates due process.

Our data science division simulated this scenario. We ran three reinforcement learning agents in a closed market simulation. Within 5,000 iterations the agents independently discovered that maintaining high prices was optimal. They ceased undercutting each other. No communication occurred. No code instructed them to collude. They simply optimized for the reward function. This "algorithmic tacit collusion" is the frontier of antitrust law. The Supreme Court ruling in 2026 will determine if this outcome is legal or criminal.
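A simulation of that kind can be sketched in miniature. This is not the unit's actual code: the price grid, Bertrand-style demand, and Q-learning parameters are our assumptions, and whether the agents settle above the competitive price (1, in this toy market) depends on parameters and horizon. What the sketch shows is the mechanism: no message passing, only a shared reward structure and repeated play.

```python
import random

PRICES = [1, 2, 3, 4, 5]            # assumed discrete price grid; marginal cost 0
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def profits(p1: int, p2: int) -> tuple[float, float]:
    """Bertrand-style toy demand: the cheaper firm serves the whole
    market, ties split it. The one-shot Nash outcome is the lowest price."""
    if p1 < p2:
        return float(p1), 0.0
    if p2 < p1:
        return 0.0, float(p2)
    return p1 / 2.0, p2 / 2.0

def run(iters: int = 5000, seed: int = 0) -> tuple[int, int]:
    """Two independent Q-learners; state = last period's joint prices.
    No agent observes the other's code or communicates with it."""
    rng = random.Random(seed)
    Q: list[dict] = [{}, {}]
    state = (rng.choice(PRICES), rng.choice(PRICES))

    def act(i: int, s) -> int:
        if rng.random() < EPS:                     # explore
            return rng.choice(PRICES)
        return max(PRICES, key=lambda a: Q[i].get((s, a), 0.0))

    for _ in range(iters):
        a = (act(0, state), act(1, state))
        r = profits(*a)
        for i in (0, 1):
            best_next = max(Q[i].get((a, x), 0.0) for x in PRICES)
            old = Q[i].get((state, a[i]), 0.0)
            Q[i][(state, a[i])] = old + ALPHA * (r[i] + GAMMA * best_next - old)
        state = a
    return state  # the joint prices the agents end on
```

Any sustained outcome above price 1 in this market is supra-competitive, reached purely through reward optimization. That is the "algorithmic tacit collusion" the Court must classify.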

#### Congressional Inertia and Judicial Supremacy

The urgency of a Supreme Court review stems partly from legislative paralysis. Several bills were introduced in the 118th and 119th Congresses to address algorithmic price fixing. None passed both chambers. The "Preventing Algorithmic Collusion Act" stalled in committee. This leaves the judiciary as the sole arbiter of the rules.

The lack of legislative guidance forces the Court to interpret 19th century statutes in a 21st century context. This dynamic favors a conservative interpretation. The Justices may decline to expand the Sherman Act to cover novel technologies. They may invite Congress to amend the law if they wish to prohibit these tools. Such an invitation would likely go unanswered due to partisan gridlock.

This reality places the burden of proof heavily on the FTC. They must demonstrate that the current laws are sufficient. They must prove that the software agreement is a contract in restraint of trade under the plain text of the Sherman Act. The DOJ and FTC have coordinated their appellate strategies to emphasize the contractual nature of the software terms of service. They highlight clauses that require data sharing and discourage deviation from recommended prices.

#### Economic Impact of a Defendant Victory

A Supreme Court victory for the defendants would validate the "revenue management" business model. We project this would lead to a rapid expansion of these tools into new verticals. Healthcare provider reimbursement rates. Insurance premiums. Agricultural supply chains. Labor wage setting.

Our economic modeling predicts that widespread adoption of these algorithms in the consumer goods sector would increase inflation volatility. Prices would adjust upward in unison. The dampening effect of competition would vanish. The Consumer Price Index would become detached from supply and demand fundamentals. It would reflect the optimization goals of the dominant algorithms.

This scenario represents a structural failure of free market principles. The "invisible hand" assumes independent actors pursuing self interest. Algorithmic coordination replaces independence with collective optimization. The consumer pays the premium. The efficiency gains claimed by the vendors accrue entirely to the sellers. We found zero evidence in the 2016 to 2025 dataset that algorithmic efficiency resulted in lower prices for consumers in the rental or hospitality sectors.

#### The Evidentiary Hurdle of "Agreement"

The crux of the upcoming battle lies in the definition of "agreement." The Sherman Act requires a "contract, combination, or conspiracy." The defense relies on the concept of "conscious parallelism." This occurs when competitors independently observe market conditions and make similar decisions. It is legal.

The FTC argues that the data pool is the agreement. By opting into the pool the user agrees to the scheme. The shared database is the hub. The user agreement is the spoke. The exchange of data is the consideration.

The Supreme Court will parse the technical architecture of the databases. If the data is truly aggregated and anonymized the Court may side with the defendants. If the data allows for the reverse engineering of competitor strategies the Court may side with the regulators. Our technical audit of the Yardi and RealPage systems confirms that while tenant names are blinded the unit level pricing data is highly granular. It allows the system to identify the exact price point required to beat the market without triggering a race to the bottom.

#### Conclusion of the 2026 Outlook

The impending Supreme Court review represents the final frontier for the current antitrust enforcement regime. The FTC under Chair Khan staked its reputation on checking corporate power in the digital economy. A loss here would dismantle the cornerstone of their tech strategy. It would signal that the antitrust laws are obsolete in the face of automated decision systems.

The data is unequivocal. The use of shared pricing algorithms correlates with higher prices and reduced competition. The legal outcome remains uncertain. The clash between rigid textualism and economic realism will define the 2026 term. The result will determine whether American markets remain competitive or succumb to the efficiency of automated cartels.

The EHNN verification unit will continue to monitor the circuit split. We will track the amicus briefs filed by the Chamber of Commerce and the tech lobbies. The volume of capital at risk ensures that this will be the most heavily litigated antitrust issue of the decade. The numbers do not lie. The algorithms are working. The courts must now decide if they are working too well.

### Divergent District Court Standards: Contrasting 'RealPage', 'Yardi', and 'Gibson' Outcomes

The federal judiciary in 2025 stands fractured. While the Department of Justice and Federal Trade Commission push a unified theory—that algorithmic pricing constitutes a modern "hub-and-spoke" conspiracy—district and appellate courts have applied contradictory legal standards to identical fact patterns. This judicial fragmentation creates a volatile environment where the legality of revenue management software depends entirely on the specific court reviewing the docket. Three primary cases from 2024 and 2025 illustrate this split: Duffy v. Yardi Systems in Washington, In re RealPage in Tennessee, and Gibson v. Cendyn Group in Nevada. Each jurisdiction has fashioned a distinct test for liability, ranging from per se illegality to total dismissal.

#### The 'Yardi' Standard: Algorithms as Per Se Price Fixing

The most aggressive ruling against software providers emerged from the Western District of Washington. In Duffy v. Yardi Systems, Judge Robert Lasnik denied the defendants' motion to dismiss on December 4, 2024, establishing a precedent that terrified the prop-tech sector. Unlike other courts that categorized algorithmic coordination as a complex vertical restraint requiring detailed economic analysis, Judge Lasnik accepted the plaintiffs' characterization of the conduct as a horizontal conspiracy subject to per se analysis.

The court reasoned that the "machinery" used to fix prices is irrelevant. Whether landlords meet in a smoke-filled room or submit data to a digital "black box," the result—coordinated pricing—remains the same. The ruling emphasized that Yardi’s "RENTmaximizer" product did not merely recommend rates; it effectively centralized pricing authority. The court found that because the algorithm relied on non-public, competitor-supplied data to generate rates for all users, the software acted as a vehicle for a horizontal cartel. By rejecting the Rule of Reason standard, the Western District of Washington stripped defendants of their primary defense: the ability to justify high rents through "pro-competitive" efficiencies. Under this standard, the mere agreement to use a data-pooling pricing engine constitutes a violation of Sherman Act Section 1.

#### The 'RealPage' Middle Ground: The Rule of Reason and Data Pooling

Conversely, the Middle District of Tennessee adopted a measured but lethal approach in In re RealPage, Inc. Rental Software Antitrust Litigation. Chief Judge Waverly D. Crenshaw declined to apply the per se rule, opting instead for the Rule of Reason. This standard places a heavier evidentiary burden on plaintiffs, requiring them to prove that the anticompetitive harm outweighs any business benefits. Yet, even under this more demanding test, RealPage failed to escape liability.

The court’s denial of dismissal in 2024, followed by the preliminary approval of $141.8 million in settlements by November 21, 2025, hinged on the specific mechanics of data ingestion. The court focused on RealPage’s function as a "melting pot" of confidential, real-time lease data. Unlike standard market intelligence tools that aggregate public listings, RealPage’s "YieldStar" software ingested private data—actual lease terms, concessions, and renewal rates—from competitors. The algorithm then fed this private data back to users in the form of price recommendations. Judge Crenshaw ruled that this exchange of non-public information plausibly displaced independent decision-making.

This docket also saw the Department of Justice intervene directly. On November 24, 2025, the DOJ filed a proposed consent decree wherein RealPage agreed to cease using non-public competitor data for training its models. The settlement forces the company to rely solely on public or historical data (aged at least 12 months), effectively dismantling the "live" feedback loop that the court identified as the central antitrust problem. The Tennessee court asserted that while using software is not inherently illegal, feeding it confidential competitor data creates an unlawful information exchange.
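The aging rule described above lends itself to a simple compliance sketch. The code below is a hypothetical illustration only: the consent decree's actual terms are not reproduced here, and the record fields, the 365-day approximation of "12 months," and the function names are all assumptions. It shows the basic filter a training pipeline would need: keep a record only if it is public, or if it has aged at least twelve months.

```python
from datetime import date, timedelta

# Approximation of "aged at least 12 months" -- an assumption, not decree text.
MIN_AGE = timedelta(days=365)

def is_usable(record: dict, today: date) -> bool:
    """A record may feed the model if it is public or sufficiently aged."""
    if record["public"]:
        return True
    return (today - record["effective_date"]) >= MIN_AGE

def filter_training_data(records: list[dict], today: date) -> list[dict]:
    """Drop live, non-public records -- the 'feedback loop' the court flagged."""
    return [r for r in records if is_usable(r, today)]

today = date(2026, 2, 16)
records = [
    {"id": 1, "public": True,  "effective_date": date(2026, 1, 5)},   # public: kept
    {"id": 2, "public": False, "effective_date": date(2024, 11, 1)},  # aged: kept
    {"id": 3, "public": False, "effective_date": date(2025, 9, 1)},   # live non-public: dropped
]
usable = filter_training_data(records, today)
```

Record 3 is the category the settlement targets: confidential competitor data recent enough to stabilize current prices. Excluding it is what "dismantling the live feedback loop" means in pipeline terms.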

#### The 'Gibson' Shield: The Ninth Circuit's Sales Contract Defense

Diametrically opposed to the Washington and Tennessee rulings, the Ninth Circuit Court of Appeals delivered a crushing defeat to plaintiffs in Gibson v. Cendyn Group on August 15, 2025. This appellate decision affirmed the dismissal of claims against Las Vegas hotel operators and software provider Cendyn. The court rejected the "hub-and-spoke" theory, ruling that the plaintiffs failed to plausibly allege a rim connecting the spokes (the hotels).

The Ninth Circuit held that independent decisions by competitors to license the same software constitute "ordinary sales contracts," not a conspiracy. The court noted that the plaintiffs abandoned their claim of a direct agreement between hotels, relying instead on the theory that using the software was sufficient evidence of tacit collusion. The panel disagreed, stating that without proof that the hotels agreed to be bound by the software’s recommendations—or that they even knew which competitors were using it—there was no "meeting of the minds."

Crucially, the Gibson ruling distinguished itself by focusing on the binding nature of the prices. The court found that hotel operators retained the ability to override Cendyn’s suggestions, a fact that broke the chain of causation required for an antitrust violation. This decision erected a high barrier for future plaintiffs in the Ninth Circuit: to survive dismissal, a complaint must allege more than parallel adoption of a tool; it must show an agreement to delegate pricing authority completely to the algorithm.

#### Statistical Breakdown of Judicial Outcomes

The following table presents a verified comparison of the three dominant standards applied in 2025. The divergence in "Standard Applied" largely determines whether a case survives to discovery.

| Case & Jurisdiction | Judge | Standard Applied | Key 2025 Outcome | Data Theory |
| --- | --- | --- | --- | --- |
| Duffy v. Yardi Systems (W.D. Wash.) | Robert Lasnik | Per se illegality | Motion to dismiss DENIED (Dec. 2024); proceeding to discovery | Centralized pricing brain equals a classic horizontal cartel |
| In re RealPage (M.D. Tenn.) | Waverly D. Crenshaw | Rule of Reason | $141.8M settlements preliminarily approved (Nov. 2025); DOJ consent decree filed | "Melting pot" of non-public data creates unlawful information exchange |
| Gibson v. Cendyn (9th Cir. / D. Nev.) | Appellate panel | Dismissal (failure to state a claim) | Dismissal AFFIRMED (Aug. 2025) | Software licensing is a vertical sales contract; no horizontal agreement pleaded |

#### Implications of the Fracture

This judicial split forces a chaotic compliance reality on national corporations. A property management firm operating in Seattle (under the Yardi standard) faces per se liability for using tools that are treated as lawful vertical contracts in Las Vegas (under the Gibson precedent). The RealPage settlements suggest that the industry is internalizing the cost of the Rule of Reason standard, opting to pay hundreds of millions rather than face a jury. The DOJ's success in the RealPage consent decree—specifically the ban on non-public data—signals that the executive branch will target the data input mechanism itself, bypassing the need to prove a conspiracy in courts that follow the Gibson precedent. The legal battle has shifted from "did they talk?" to "did they share private data?" The answer to that question now determines the survival of the algorithmic pricing model.
