Overview
Artificial intelligence (AI) is transforming market dynamics, and in doing so, it creates new avenues for distorting competition. This article explores how a firm's unilateral use of AI tools can amount to unlawful unilateral conduct under both European Union and United States law, drawing lessons from recent enforcement actions and emerging regulatory frameworks. New uses of AI tools may raise antitrust concerns, including when they are used for self-preferencing, predatory pricing, and price discrimination.
Under EU antitrust law, unilateral conduct involving the use of AI may constitute an abuse of dominance under Article 102 of the Treaty on the Functioning of the European Union (TFEU) or a violation of the Digital Markets Act (DMA). Such cases remain relatively rare and have so far focused mostly on self-preferencing. Unilateral practices by certain digital platform providers are subject to specific rules, particularly under the DMA's targeted obligations for gatekeepers.
In the US, enforcement has so far largely targeted algorithmic pricing collusion, where AI tools facilitate coordination among competitors. The next frontier, however, likely involves a firm's unilateral deployment of algorithms that can distort competition on their own. As companies use AI pricing tools to maximize profit, they may find those tools automatically doing things a person would not: inching prices upward to test whether the market follows, producing supracompetitive prices, or setting customer-specific prices that may be predatory or discriminatory. In addition to Section 2 of the Sherman Act, US agencies may also rely on Section 5 of the FTC Act and the Robinson-Patman Act (RPA) as flexible mechanisms to address such concerns.
1. Self-Preferencing
In the digital economy, access to data has become a crucial competitive asset and often a necessary condition for market entry. When dominant platforms restrict access to essential datasets, they hinder rivals' ability to compete or grow, thereby entrenching their own market power. A related concern is self-preferencing, where a dominant firm favors its own or affiliated products and services over those of competitors, undermining competition based on the merits. While the impact of self-preferencing on consumer welfare can vary and is not always anticompetitive, the main risk lies in leveraging dominance in one market to exclude rivals in related or complementary markets. This risk is especially significant in the context of search, recommendation, and allocation algorithms.
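To make the mechanism concrete, the sketch below shows how a single bias term in a ranking function can let an affiliated service outrank a more relevant rival. It is a minimal illustration, not any platform's actual system; the listing names, relevance scores, and boost value are all invented.

```python
# Minimal sketch of self-preferencing in a ranking algorithm.
# All names, scores, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float   # merit-based relevance score in [0, 1]
    own_brand: bool    # True if the listing belongs to the platform

SELF_PREFERENCE_BOOST = 0.15  # hypothetical bias added to affiliated listings

def rank(listings: list[Listing]) -> list[Listing]:
    """Order listings by relevance plus a bias toward the platform's own offers."""
    def score(listing: Listing) -> float:
        return listing.relevance + (SELF_PREFERENCE_BOOST if listing.own_brand else 0.0)
    return sorted(listings, key=score, reverse=True)

results = rank([
    Listing("RivalService", relevance=0.82, own_brand=False),
    Listing("PlatformService", relevance=0.74, own_brand=True),
])
print([listing.name for listing in results])
# -> ['PlatformService', 'RivalService']: the affiliated service (0.74 + 0.15 = 0.89)
#    outranks the rival that wins on the merits (0.82).
```

Even a modest boost of this kind, applied systematically across millions of queries, can produce the loss of visibility and traffic at issue in the cases discussed below.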
In the EU, unilateral conduct is addressed both under Article 102 TFEU and more explicitly under the DMA. Under Article 6(5) DMA, designated gatekeepers are prohibited from giving preferential treatment in ranking, indexing, or crawling to their own products or services over those of third parties. By doing so, the DMA proactively targets practices that threaten fair competition in digital markets. The European Commission can use either of these tools.
This issue was demonstrated in the Google Android case, where the European Commission found that Google had limited competitors' access to valuable user data related to search queries, reinforcing its dominant position in violation of Article 102 TFEU. In 2018, the Commission imposed a €4.34 billion fine on Google – the largest antitrust fine ever at the time – for imposing illegal restrictions on Android device manufacturers and mobile network operators to cement its dominant search engine position. While data concentration can lead to improved services and some consumer benefits, it can also result in high switching costs and lock-in effects, especially when users depend on a single platform for a range of services. This undermines user choice and weakens effective competition. As part of the remedy, Google was required to cease the restrictive practices and allow device manufacturers greater freedom to install competing search engines and apps. Google also introduced a choice screen enabling consumers in the European Economic Area to select their preferred search engine and browser on Android devices. The underlying measures were imposed through a formal infringement decision under Article 7 of Regulation 1/2003 and were not the result of voluntary commitments. The Commission continues to monitor compliance with these obligations. In September 2022, the General Court of the European Union largely upheld the Commission's decision, dismissing the core of Google's appeal and confirming the findings of abuse of dominance, while modestly reducing the fine to €4.125 billion. A related concern arises from the use of AI-driven ranking algorithms on digital platforms, particularly search engines, where platforms can engage in self-preferencing – giving preferential treatment to their own services or those of selected partners.
This practice was also central to the Google Search (Shopping) case. In its judgment of November 10, 2021, the EU General Court upheld the Commission's finding that Google had abused its dominant position under Article 102 TFEU. In that case, Google gave its own Comparison-Shopping Service (CSS) more prominent placement and display in search results. In addition, it demoted rival CSSs through algorithmic adjustments. As a result, competing services suffered a significant loss of visibility and traffic, which effectively excluded them from the market. This advantage was not based on the merit or efficiency of Google's service, but rather on the discriminatory tactics it employed. For this conduct, the European Commission imposed a fine of €2.42 billion on Google in 2017 – the first antitrust penalty the Commission issued for self-preferencing behavior. The fine was upheld by the General Court in 2021 and later confirmed by the Court of Justice of the European Union (CJEU) in 2024, reinforcing the principle that dominant platforms must not favor their own services to the detriment of competition. Alongside the fine, Google was ordered to end the discriminatory treatment by treating rival comparison-shopping services equally in how search results are displayed and ranked. These remedies were imposed under a formal infringement procedure pursuant to Article 7 of Regulation 1/2003, and not through voluntary commitments. The Commission gave Google 90 days to bring its practices into compliance, with the threat of further penalties for noncompliance.
Another example is the Amazon Marketplace case, in which the European Commission, by its decision of December 20, 2022, found that Amazon used its algorithm to favor its own retail business and third-party sellers who relied on its logistics and delivery services. By doing so, Amazon engaged in customer foreclosure, raising rivals' costs and limiting their ability to compete. As Amazon operates both as a marketplace platform and a retailer, this dual role enabled it to distort competition. The "Buy Box," which features a single seller's offer and is critical for driving sales, was central to this conduct. The Commission found that Amazon's algorithm unduly favored its own retail business, as well as third parties using its logistics services, over other third-party sellers when determining which offer would appear in the "Buy Box." This self-preferencing abused Amazon's dominance by prioritizing its own offers and preferred sellers, thereby reducing the visibility and competitiveness of independent third-party retailers. Since the "Buy Box" heavily influences consumer purchasing decisions, exclusion or demotion from it significantly limits rivals' access to customers, raises their distribution costs, distorts competition, and restricts consumer choice. In doing so, Amazon leveraged its dominance in marketplace services to unfairly exclude competitors and entrench its position in the retail market.
Unlike in other cases, the Commission did not impose a fine on Amazon. Instead, under Article 9 of Regulation 1/2003, it accepted voluntary commitments proposed by Amazon, which were made legally binding through the Commission's decision. These commitments require Amazon to end its discriminatory treatment in the allocation of the "Buy Box," refrain from using non-public data of third-party sellers for its own retail operations, and ensure equal access to visibility and sales opportunities for independent sellers, regardless of whether they use Amazon's logistics services. The Commission will closely monitor compliance with these commitments and may impose fines or other sanctions if Amazon fails to adhere to them.
The US is also grappling with self-preferencing. These concerns were front and center in the US Department of Justice's recent victory against Google, in which the court imposed a comprehensive set of remedies aimed at restoring competition in the search and search advertising markets. The case addressed how Google allegedly leveraged its dominance – through exclusive distribution agreements, control over default settings, and self-preferencing within AI-driven services like Search, Chrome, and the Gemini assistant – to entrench its position and disadvantage competitors. The remedies, announced in September 2025, require Google to end a range of exclusionary contracts, share certain data assets with rivals, and provide broader access to its search and advertising syndication services. Significantly, the court's order extends to Google's generative AI products, reflecting the growing recognition that algorithmic design and data access practices in AI can replicate or amplify competitive harms.
2. Algorithmic Price Discrimination
Pricing algorithms enable personalized pricing by setting different prices for different consumer groups based on personal characteristics or behavioral data. By analyzing large volumes of consumer information, companies can estimate individual willingness to pay, segment their customer base, and adjust prices accordingly.
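A minimal Python sketch, using invented willingness-to-pay (WTP) estimates and cost figures, illustrates the segmentation logic described above; any real system would derive its estimates from far richer behavioral data.

```python
# Minimal sketch of personalized pricing keyed to estimated willingness to pay.
# The WTP estimates, unit cost, and uniform benchmark price are all invented.
UNIT_COST = 40.0
UNIFORM_PRICE = 60.0  # the single price the firm would otherwise charge

estimated_wtp = {"customer_a": 95.0, "customer_b": 62.0, "customer_c": 55.0}

def personalized_price(wtp: float) -> float:
    """Charge just under the customer's estimated WTP, never below cost."""
    return max(UNIT_COST, round(wtp * 0.99, 2))

for customer, wtp in estimated_wtp.items():
    price = personalized_price(wtp)
    extra = max(0.0, price - UNIFORM_PRICE)  # surplus shifted from this customer
    print(f"{customer}: price={price:.2f}, surplus captured vs. uniform={extra:.2f}")
```

Note that the least price-sensitive customer pays well above the uniform benchmark, while customer_c, who would not have bought at 60.0 at all, is served just below their estimated WTP: exactly the redistribution discussed next.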
While personalized pricing may improve pricing efficiency by tailoring offers to individual consumers, its use by dominant firms can raise concerns under Article 102 TFEU. Charging each customer the maximum they are willing to pay allows the firm to capture the entire consumer surplus – the extra value consumers would otherwise retain. For example, a dominant firm might offer lower prices to more price-sensitive users and higher prices to those less sensitive to price. Although this could result in a redistribution of benefits among consumers, the net effect is that more value is transferred from consumers to the firm, especially when there is little transparency or ability to compare prices. If a firm can implement such pricing strategies without fear of losing marginal consumers, it may indicate that it is operating independently of competitive constraints – a key indicator of market power under EU competition law. This raises the possibility of exploitative abuse, particularly where pricing lacks objective justification or transparency. While Article 102 has traditionally been enforced against exclusionary conduct, the ability of AI tools to segment consumers and extract maximum willingness to pay could prompt renewed attention to exploitative pricing in digital markets.
In such situations, the European Commission is not, in principle, required to demonstrate actual harm to the market position of the disadvantaged party. However, in its judgment of April 19, 2018 in MEO, the CJEU clarified that invoking Article 102 TFEU to address exploitative abuses – such as unfair pricing – still requires a high standard of proof. The Court emphasized that not every difference in treatment between trading partners amounts to an abuse; rather, it must be shown that the conduct produces or is capable of producing a competitive disadvantage.
In the context of personalized pricing, this means that competition authorities would need to establish that the practice is not only systematic and targeted, but also unfair, lacking any objective justification, and capable of causing concrete harm to consumers or to the competitive structure of the market. This presents significant evidentiary challenges, especially when the pricing algorithms are opaque or when harm is diffuse and individualized. The MEO ruling thus underscores the difficulty of pursuing exploitative abuses in dynamic, data-driven markets, even where AI-based pricing strategies may raise fairness concerns.
Pricing algorithms may also raise concerns under US antitrust laws. A leading example is the FTC's challenge to Amazon's internal pricing algorithm – Project Nessie. In FTC v. Amazon, the FTC alleged that Amazon deployed an algorithm designed to raise prices both on and off its platform: Project Nessie predicted when competitors would match Amazon's price hikes and then automatically increased Amazon's prices, with the goal of manipulating other online stores into raising their own prices as well. The FTC alleges that this conduct violates, among other provisions, Section 5 of the FTC Act as an unfair method of competition.
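The complaint describes Project Nessie's strategy only at a high level. Purely as a hypothetical reconstruction of that kind of probe-and-follow logic (this is not Amazon's code, and every name, threshold, and number below is invented), the mechanism might be sketched as follows:

```python
# Hypothetical sketch of a price-probing strategy: raise price when rivals are
# predicted to follow, retreat when they undercut. Entirely invented, for
# illustration only; not a reconstruction of any actual system.
def next_price(my_price: float,
               rival_prices: list[float],
               follow_probability: float,
               step: float = 0.05) -> float:
    """Return the next price to post, given a model's prediction that rivals follow."""
    if follow_probability > 0.8:
        return round(my_price * (1 + step), 2)  # lead the market upward
    if min(rival_prices) < my_price:
        return min(rival_prices)                # rivals undercut: fall back and match
    return my_price                             # otherwise hold

print(next_price(20.00, [20.00, 20.50], follow_probability=0.9))  # -> 21.0
```

The allegation is that logic of this general shape lets a firm ratchet prices upward market-wide without any agreement among competitors.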
In September 2024, the federal court denied Amazon’s motion to dismiss the FTC's core federal antitrust claims, including the Section 5 claim related to Project Nessie. The court held that to state a claim under Section 5, the FTC must allege "evidence of anticompetitive intent or purpose." The court found that the FTC met this threshold, citing allegations that Amazon implemented Project Nessie after it "realized that it could increase its prices while reducing the risk of shoppers finding a lower price off Amazon if Amazon focused its price increases on products sold by competitors that were matching Amazon’s prices."
The case – now in discovery and set for trial in October 2026 – may define how US law treats algorithmic conduct that manipulates market outcomes without collusion. As AI pricing tools become more common, regulators may increasingly look to Section 5 of the FTC Act to challenge strategies that traditional antitrust tools might not reach.
3. Predatory Pricing in the Age of Instant Feedback
Companies can use algorithms and AI to implement targeted predatory pricing by quickly analyzing market data and predicting competitor reactions. This enables them to identify marginal customers – those likely to switch providers – and offer below-cost prices to retain or attract them, while maintaining profitability on inframarginal customers who are less likely to switch. AI reduces the cost and increases the precision of such strategies, making predatory pricing more sustainable and potentially more harmful to competition. When practiced by a firm in a dominant position, this conduct may violate EU and US antitrust laws.
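A minimal sketch, assuming a churn model already exists, shows how little machinery such targeting requires; the threshold, prices, and cost below are invented for illustration.

```python
# Minimal sketch of selective below-cost pricing aimed at likely defectors.
# The churn threshold, list price, retention price, and cost are hypothetical.
UNIT_COST = 10.0
LIST_PRICE = 12.0       # profitable price kept for loyal (inframarginal) customers
RETENTION_PRICE = 9.0   # below-cost offer reserved for customers likely to switch

def quoted_price(churn_probability: float) -> float:
    """Quote below cost only where a rival is predicted to win the customer."""
    return RETENTION_PRICE if churn_probability > 0.7 else LIST_PRICE

for risk in (0.9, 0.3):
    print(f"churn risk {risk:.0%}: quote {quoted_price(risk):.2f} (unit cost {UNIT_COST:.2f})")
```

Because the below-cost offers reach only the contested customers, the firm's average price can remain comfortably above cost, which is precisely what makes the strategy hard to detect in aggregate data.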
In the EU, predatory pricing may amount to an abuse of dominance under Article 102 TFEU, particularly where it results in the exclusion of equally efficient competitors. The use of artificial intelligence further complicates this analysis, as algorithmic pricing tools enable dominant firms to undercut rivals in real time, adapting dynamically to market conditions. This raises the risk of more sophisticated and less detectable forms of predatory pricing, making the application of traditional legal tests – such as the AKZO framework – increasingly difficult. Under AKZO v. Commission, prices below average variable cost (AVC) are presumed abusive, while prices below average total cost (ATC) but above AVC are abusive only where they form part of a plan to eliminate a competitor. This test may be ill-suited for digital markets, where marginal costs are often negligible and algorithmic pricing strategies can evolve rapidly.
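The two limbs of the AKZO test can be stated in a few lines, and doing so makes the difficulty visible: with near-zero variable costs, the presumption in the first limb rarely bites. The figures below are invented.

```python
# Minimal sketch of the two-limb AKZO cost test (invented numbers).
def akzo_assessment(price: float, avc: float, atc: float,
                    exclusionary_plan: bool) -> str:
    """Classify a dominant firm's price under the AKZO framework."""
    if price < avc:
        return "presumed abusive (price below average variable cost)"
    if price < atc and exclusionary_plan:
        return "abusive (below average total cost as part of an exclusionary plan)"
    return "not abusive under the AKZO test"

print(akzo_assessment(price=8.00, avc=9.00, atc=14.00, exclusionary_plan=False))
# -> presumed abusive (price below average variable cost)
print(akzo_assessment(price=0.50, avc=0.01, atc=5.00, exclusionary_plan=False))
# -> not abusive under the AKZO test: with negligible AVC, almost any positive
#    price clears the first limb, leaving only the harder-to-prove second limb.
```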
Despite these concerns, there is currently no case law in the EU that directly addresses predatory pricing conducted through AI-driven tools. While the European Commission's Draft Guidelines on Exclusionary Practices address unilateral conduct such as predatory pricing, they make no specific reference to AI or algorithmic pricing mechanisms. At the same time, other EU legislative instruments contribute to a broader regulatory response. The Platform-to-Business (P2B) Regulation provides safeguards against unfair commercial practices in online intermediation services, while the Omnibus Directive – through Article 6(1)(ea) of the Consumer Rights Directive – introduces transparency obligations regarding personalized pricing based on automated decision-making. Together, these developments reflect a shift toward a more integrated and cross-cutting EU regulatory framework for digital markets, where the boundaries between competition law, consumer protection, and AI regulation are increasingly intertwined. As AI becomes more embedded in commercial strategies, there is growing recognition of the need to adapt legal tools to address novel forms of market abuse.
In the US, there may be a resurgence of predatory pricing claims as AI becomes more advanced. Justice Powell once remarked in Matsushita Electric Industrial Co. v. Zenith Radio Corp., 475 U.S. 574 (1986), that predatory pricing schemes are "rarely tried, and even more rarely successful." Under traditional antitrust doctrine, predatory pricing is defined as the practice of selling goods or services below cost in order to drive competitors out of the market, with the intention of later raising prices to recoup losses. A claim requires a firm to set prices below an appropriate measure of its own cost and to have a dangerous probability of recouping its losses once competition in the market is eliminated.
Historically, courts have generally been skeptical of predatory pricing claims, viewing them as economically irrational or hard to prove. But the introduction of autonomous AI pricing algorithms may change that calculus. An algorithm tasked with maximizing market share might, without explicit human direction, develop and implement a predatory pricing strategy simply because it is the most efficient path to dominance. Modern algorithms can identify which customers are most likely to defect to a rival and offer selective below-cost prices to neutralize that threat while maintaining profitable prices elsewhere. This kind of precision predation revives strategies once deemed too costly to execute manually. Much as Standard Oil once relied on localized undercutting, AI systems can micro-target discounts, now at digital speed and scale.
This kind of precise predation could escape detection under current legal standards that focus on average pricing, not micro-targeted tactics. But from a competition policy standpoint, the implications are serious: AI doesn't just make traditional strategies more efficient, it may revive and legitimize tactics once deemed too costly or risky to pursue. Even if courts remain skeptical under Section 2, the FTC could again reach for Section 5, framing algorithmic predation as an "unfair method of competition."
In the US, a predatory-pricing claim under Section 2 of the Sherman Act generally requires market power, because a "dangerous probability of recoupment of the losses through higher prices later" means that a firm has or will gain sufficient market power to raise prices in the future. But even absent market power, AI-enabled pricing discrimination may raise issues under the RPA, which targets price discrimination among competing buyers.
Under Section 2(a), the RPA prohibits sellers from charging different prices to different purchasers of commodities of like grade and quality. If an AI pricing tool is used to set retailer-specific prices for goods in an effort to maximize profit, retailers paying higher prices may have standing under the RPA.
A further layer of potential exposure lies with the developers of AI tools themselves. If an AI vendor licenses or sells an identical pricing product to multiple firms but charges discriminatory prices or imposes differential access terms that advantage certain competitors, regulators could test whether those transactions fall within the RPA's ambit. The threshold question is definitional – is an algorithm a "good" under the RPA? Courts have long limited the RPA to tangible commodities, but as AI systems become increasingly licensed or embedded into physical infrastructure, that boundary could blur.
Together, these developments suggest that the RPA – long considered a relic of mid-century retail battles – may find renewed relevance in the algorithmic age. As enforcement agencies revisit statutes designed to police discriminatory pricing, AI developers and users alike may soon face scrutiny not only for how their algorithms set prices, but also for how those algorithms themselves are priced.