Through Big Data and the increasing digitalisation of commerce, algorithmic pricing (AP) has become a staple of markets globally. While this prevalence has produced a multitude of procompetitive market outcomes – for example, increased supply-side and demand-side efficiencies – there exists palpable academic and administrative concern that AP may facilitate the emergence of collusion in digital markets. The following will therefore seek to establish what competitive concerns AP poses and whether, in light of those concerns, reform of the competition toolkit is necessary to ensure effective enforcement.
Undoubtedly, the least complex of such algorithm-fuelled collusion scenarios is that identified in Virtual Competition as ‘the messenger’ – namely, where AP is used to implement pre-existing collusive agreements. In such instances, algorithms stand as merely a “technological extension of human will”. It could thus be surmised that they raise the same concerns as ‘traditional’ anticompetitive agreements, depending largely on the cartelists’ intentions. However, a more novel concern raised by such instances is the ability of pricing algorithms to foster stable collusion: by accelerating the detection and punishment of deviations from the collusive price, such software strongly disincentivises deviation, rendering cartels inherently more stable and hence harder to detect. Furthermore, behavioural science indicates that indirect harms are perceived as less problematic than direct harms. Where algorithmic intermediaries automatically inflict punishment, the resulting sense of detachment from wrongdoing may therefore prove detrimental to competition by further encouraging collusion, as the algorithm distances individuals from the illegal activity.
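The stabilising mechanism can be conveyed in stylised numerical form. The Python sketch below (all figures are hypothetical, drawn from no cited source) compares a cartelist’s payoff from deviating under slow, human-speed monitoring against near-instant algorithmic monitoring: the shorter the detection lag, the smaller the gain from deviating and the stronger the incentive to remain in the cartel.

```python
# Hypothetical per-period profits -- illustrative numbers only.
COLLUSIVE_PROFIT = 10.0    # per period while the cartel holds
DEVIATION_PROFIT = 15.0    # per period while a deviation goes undetected
PUNISHMENT_PROFIT = 2.0    # per period once rivals retaliate (price war)

def deviation_payoff(periods, detection_lag):
    """Total profit from deviating at t=0 when rivals detect the deviation
    after `detection_lag` periods and punish for all remaining periods."""
    undetected = min(detection_lag, periods)
    return (undetected * DEVIATION_PROFIT
            + (periods - undetected) * PUNISHMENT_PROFIT)

periods = 100
human_speed = deviation_payoff(periods, detection_lag=10)  # e.g. periodic audits
algo_speed = deviation_payoff(periods, detection_lag=1)    # real-time scraping
stay_in_cartel = COLLUSIVE_PROFIT * periods

# Faster detection shrinks the payoff from deviating, widening the gap
# between deviation and continued collusion -- deviation loses its appeal.
```

On these illustrative numbers, algorithmic monitoring cuts the deviation payoff well below both the human-monitored case and the payoff from simply remaining in the cartel, which is precisely the disincentive effect described above.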
In enforcement terms, however, the picture appears less problematic. While AP may increase detection difficulties, the illegality lies in the agreement among humans; so long as a “concurrence of wills” or concerted practice can be established, the current framework appears sufficient to sanction such collusive conduct once detected. The same rationale, however, does not strictly apply to algorithm-fuelled collusive vertical agreements. For example, while the current framework may restrict explicit RPM agreements even where an algorithm is used to monitor compliance, enforcement concerns arise from the unique amplifying effect of algorithm usage at the horizontal level on RPM. As Phillips highlighted, pricing algorithms may allow price increases to spread more broadly across the market to parties not engaged in RPM, owing to the price-matching mechanisms of AP software. Yet, due to the tacit nature of these latter price increases, they may fall beyond the reach of regulators, highlighting a gap in the current framework.
Another issue identified within the algorithmic collusion taxonomy of Virtual Competition concerns the increasing prevalence of “off-the-shelf” pricing algorithms. On the surface, widening access to AP appears procompetitive, reducing barriers to entry for smaller merchants who lack the funds to develop bespoke algorithms. The reality, however, poses stark competitive concerns: where multiple competitors utilise the same readymade algorithm, Franco-German competition authorities argue that such conduct fuels automated price coordination, as algorithmic similarities cause each firm to respond in similar ways to market events. The CMA, however, effectively downplays such concerns. Fundamentally, even where competitors utilise identical algorithms, each firm must resist the urge to modify its algorithm or implement discount strategies that promote short-term profits rather than deriving benefits from the longer-term incentives of price coordination. Overall, it is the author’s view that while parallel use of identical algorithms would naturally result in some price coordination, the extent of such concerns is partially undermined given that pricing authority remains with the individual firms.
Perhaps a more pressing concern therefore exists where firms delegate pricing decisions to a third party who takes those decisions via an algorithm – for example, dynamic ridesharing platforms such as Uber. Uber’s algorithm solely determines, for thousands of competing drivers, a journey’s base price and the extent of any surge price, leaving drivers unable to compete on price through discounts or reduced rates. Such arrangements therefore dampen competition through large-scale price coordination. Admittedly, Uber counters such claims, arguing that its price-fixing arrangement with drivers is justified by the countervailing efficiencies the platform provides to consumers via reduced costs. Superficially, it would therefore seem perverse to promote legal intervention where Uber has supposedly enhanced consumer welfare. However, the author would urge enforcement authorities to consider the longer-term implications of permitting such price-fixing arrangements. Allowing them risks market foreclosure for traditional taxi services when Uber, given its comparatively reduced costs, inevitably becomes more popular, potentially handing Uber “greater market power to set the price”. In effect, with reduced competition, incentives to increase price could rise given Uber’s hypothetical monopolistic position.
Despite the competitive concerns raised, the CMA notes that the current toolkit will generally suffice to deal with such concerns via competition law analysis of hub-and-spoke agreements, upon satisfaction of certain criteria. Eturas highlighted that firms must be aware of the use of the algorithm by competitors and of the potential for anticompetitive market outcomes. For instance, liability may arise where firms are deemed to have implicitly assented to an anticompetitive agreement if the algorithm is promoted as a means to avoid “price wars”, as this could reduce strategic uncertainty regarding the future conduct of competitors using the algorithm. Admittedly, where there is neither direct nor indirect information exchange between the competitors, restricting such conduct via Article 101 may prove difficult given the unilateral nature of each competitor’s choice of algorithm. However, if anticompetitive intent in that choice can be identified, the current toolkit could restrict such behaviour as a facilitating practice. Thus, although some evidentiary difficulties exist, the current toolkit does appear broadly sufficient to deal with the concerns highlighted. Nevertheless, given that all that is required to generate competitive harm is the adoption of identical algorithms, the increasing delegation of pricing to AP vendors such as Uber and the spread of “off-the-shelf” software make such instances particularly concerning.
Equally, algorithm-fuelled collusion may occur absent any communication between competitors, arising instead as the product of human design. By increasing transparency, ‘adaptive’ algorithms may raise concerns by stabilising prices at supracompetitive levels. Through their implementation of price-matching guarantees, discounting is disincentivised: there is no longer any competitive gain in doing so, as any deviation is rapidly ‘punished’ by competing algorithms. Conversely, this discovered interdependence may foster price increases, since, when faced with a rise in competitor pricing, firms are incentivised to follow that rise so as not to forgo the extra profits from the increase. Theoretically, it is therefore clear how such ‘adaptive’ algorithms may foster collusion by autonomously stabilising prices at a supracompetitive equilibrium.
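The price-matching dynamic can be made concrete with a minimal numerical sketch (hypothetical demand and prices, two firms). When a rival’s algorithm matches any discount instantly, the discounter wins no extra market share and merely earns a lower margin on the same volume, so unilateral discounting becomes pointless.

```python
# Hypothetical market parameters -- illustrative only.
LIST_PRICE = 100.0   # the prevailing (supracompetitive) market price
DEMAND = 1000        # total units demanded

def shares(p_a, p_b):
    """The cheaper firm takes the whole market; a tie splits demand evenly."""
    if p_a < p_b:
        return 1.0, 0.0
    if p_b < p_a:
        return 0.0, 1.0
    return 0.5, 0.5

def profit_a_with_matching_rival(p_a):
    """Firm A sets p_a; Firm B's algorithm instantly matches any discount."""
    p_b = min(LIST_PRICE, p_a)          # B never undercuts, only matches
    share_a, _ = shares(p_a, p_b)
    return p_a * DEMAND * share_a

no_discount = profit_a_with_matching_rival(100.0)   # tie at 100 -> half market
with_discount = profit_a_with_matching_rival(90.0)  # matched -> still half market

# The discount wins A no extra share, only a lower price on the same volume,
# so unilateral discounting is strictly worse for A.
```

Under these assumptions the discounting firm’s profit is strictly lower than if it had held the list price, which is the disincentive against deviation described above; the converse incentive to follow a rival’s price rise operates symmetrically.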
Admittedly, doubts exist as to whether such competitive concerns will materialise in practice. Roos and Bryan suggest that firms are unlikely to establish a mutual understanding over a collusive strategy absent explicit communication. However, while these arguments may be a fair reflection of markets generally, “one does not need an agreement to bring about this kind of follow-the-leader effect in a concentrated industry”. Such views are evidenced by German legislative amendments which mandated publication of petrol price changes in an oligopolistic fuel market and in turn fostered price increases, as the increased market transparency enabled firms to diligently monitor competitor conduct. In effect, while algorithmic collusion may not be sustainable in every market, under the correct market conditions AP could facilitate coordination.
The usage of advanced self-learning algorithms also raises competitive concerns, yet coordination here is not the fruit of intentional human design but is rather achieved through a process of experimentation. Concerns exist that self-learning algorithms may, in their mission to optimise profits, find that the best strategy is coordination. Though it has been suggested that coordination could be achieved through a process of algorithmic signalling, such claims have been criticised as falling below the threshold of accuracy and reliability necessary to be persuasive, given a lack of substantiating empirical evidence.
However, a growing body of experimental research indicates otherwise. Through experimentation with Q-learning algorithms, scholars identified tacitly collusive outcomes in 63% of the pricing games executed. Through a process of deviation and subsequent punishment by a competing algorithm, both algorithms were led to a higher, supracompetitive price. Admittedly, there are limitations to these experimental findings. Specifically, Franco-German authorities downplay some of the findings given that they rest on strong assumptions regarding the economic environment – for example, no risk of new entry and stable demand. Since the joint effect of such conditions remains unexplored, it may be fair to hold that uncertainty persists as to the ability of self-learning algorithms to produce supracompetitive prices. Overall, it is the author’s view that while current studies do warrant some concern, given the arguable lack of robust evidence demonstrating such concerns in practice, this is perhaps a longer-term concern for competition enforcers.
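The flavour of such experiments can be conveyed in stylised form. The sketch below is a simplification, not the cited study’s actual design, and all parameters are hypothetical: two tabular Q-learning agents repeatedly set prices on a discrete grid, each conditioning its choice on the rival’s last price and learning from the resulting profits. The cited research found that agents of this general kind can settle on supracompetitive prices; this toy version makes no claim about its own equilibrium and serves only to show the learning loop.

```python
import random

# Stylised Q-learning pricing duopoly; parameters are hypothetical and the
# setup is far simpler than the experiments described in the text.
PRICES = [1, 2, 3, 4, 5]              # discrete price grid (1 = competitive floor)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def profit(own, rival, demand=10):
    """Bertrand-style demand: the cheaper firm serves the market; ties split."""
    if own < rival:
        return own * demand
    if own > rival:
        return 0.0
    return own * demand / 2

def train(episodes=20000, seed=0):
    rng = random.Random(seed)
    # Q[i][s][a]: agent i's estimated value of charging PRICES[a] when the
    # rival's previous price was PRICES[s].
    q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]
    state = [0, 0]
    for _ in range(episodes):
        actions = []
        for i in range(2):
            if rng.random() < EPSILON:            # explore a random price
                actions.append(rng.randrange(len(PRICES)))
            else:                                 # exploit current estimate
                row = q[i][state[i]]
                actions.append(row.index(max(row)))
        rewards = [profit(PRICES[actions[0]], PRICES[actions[1]]),
                   profit(PRICES[actions[1]], PRICES[actions[0]])]
        for i in range(2):
            next_state = actions[1 - i]           # observe rival's new price
            target = rewards[i] + GAMMA * max(q[i][next_state])
            q[i][state[i]][actions[i]] += ALPHA * (target - q[i][state[i]][actions[i]])
            state[i] = next_state
    return q

q = train()
# Greedy price each agent would charge after seeing the rival at PRICES[0]:
greedy = [PRICES[q[i][0].index(max(q[i][0]))] for i in range(2)]
```

The “deviation and subsequent punishment” dynamic described in the text corresponds to exploration steps (a random lower price) being met by the rival’s learned response, which feeds back into both agents’ value estimates over many repetitions.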
Nevertheless, despite the admitted potential for collusion, mere ‘conscious parallelism’ falls outside the scope of Article 101, which necessitates a meeting of minds to restrain competition. Thus, while industry-wide use of pricing algorithms may generate the same economic outcomes as explicit collusion – for instance, supracompetitive prices – absent an agreement it may be beyond the reach of regulators. Commentators have therefore argued in favour of steps to restrict the excessive market transparency which AP has produced. However, the author would urge against such an approach given the procompetitive benefits transparency may have for consumers in comparing market offers. Instead, the solution may lie in a simple ‘compliance by design’ duty.
Finally, the growth of Big Data and AP has increased the scope for firms to implement personalised pricing (PP). Though PP raises notable privacy issues – given that it involves the use of AP to offer differential pricing based on how price-sensitive an undertaking deems individual consumers to be – there also exists prolific debate regarding its competitive impact. For example, the CMA notes that PP could positively affect competition by reducing barriers to entry for newcomers in markets with high switching costs through the offering of targeted discounts. Equally, however, in monopolised markets PP may cause market foreclosure when abused by firms with market power who target rivals’ customers with such discounts. In sum, the author would agree with the balanced view of the OECD – namely, that the competitive impact of PP is ambiguous and depends on a variety of case-specific factors, such as the intensity of competition in the market.
Overall, it is clear that while AP has produced many procompetitive market outcomes, the current concerns are not unwarranted. By increasing transparency and allowing for rapid retaliation against deviations from the preferred market price, pricing algorithms have not only increased the stability of pre-existing collusive agreements but may also push markets susceptible to tacit collusion into interdependence. While the former concern may be dealt with via Article 101, the latter, due to its tacit nature, will fall beyond the scope of the current framework despite its potential to generate the same competitive harms – namely, supracompetitive prices and reduced competition. However, while current experimental evidence does warrant concern, this is arguably a longer-term issue for enforcers given the limitations of that evidence. In any event, should these concerns materialise, the author would warn against regulating to restrict market transparency, given its benefits for consumers, and would instead advocate a “compliance by design” duty. The author would therefore broadly agree with the CMA’s conclusion that the most immediate risk to competition is the parallel use of identical algorithms by competing firms, since nothing more than their simple adoption is required to pose competitive concerns.
This essay was written within the University of Strathclyde Law School for the module “Competition and the Digital Economy” coordinated by Dr Oles Andriychuk.