
Algorithmic Collusion in Flight Pricing in India

Madhavi Singh

Are the robots colluding, and can their masters be made liable?


In May 2018, the Chairperson of the Competition Commission of India (‘CCI’) informed PTI that the CCI was looking into the possibility of collusion by airlines in the pricing of air tickets. These concerns surfaced following sudden surges in the prices of Chandigarh-Delhi tickets immediately after the Jat agitation. The CCI is not alone: the European Commission (‘EC’) and the Department of Justice (‘DoJ’) in the US are also examining allegations of price fixing by airlines, though the grounds for such allegations vary across jurisdictions. In the US, the airlines were alleged to have regulated traffic across routes with the objective of fixing prices (driving them higher by limiting capacity), whereas in the EU the allegations are more traditional and relate to restricting airlines and travel agents from shifting to competitors.

How can algorithms be used to collude?

There are many different ways in which algorithms may be used for collusion. Ariel Ezrachi and Maurice Stucke (Artificial Intelligence & Collusion: When Computers Inhibit Competition, 2017) discuss four such ways, and their categorisation and nomenclature have been borrowed in this article for the purpose of analysing airline price-fixing in India.

  1. First is the “Messenger,” where human beings agree to collude and merely use an algorithm to implement the agreement. An example of this is the Topkins case, where sellers of posters on an online marketplace agreed to coordinate prices by adopting specific pricing algorithms.
  2. The second type of algorithmic collusion is “Hub and Spoke,” where market players adopt the same algorithm. Such collusion is facilitated by the developer of the algorithm, who, by entering into vertical agreements with various competitors, ensures price fixing.
  3. The third category is “Predictable Agent,” where competitors do not use the same algorithm but independently and unilaterally design their own. These algorithms are designed to react in certain ways to market changes and deliver predictable outcomes. Therefore, even though the competitors do not use the same algorithm, tacit collusion is effected by programming the algorithms to react in the same manner to the same stimuli (see the sketch that follows this list).
  4. The fourth type of algorithmic collusion, “Digital Eye,” involves machine-learning algorithms. Here the makers once again develop their algorithms unilaterally, but this time without programming them to react in any particular way to market stimuli. However, since these algorithms use artificial intelligence, they may, by virtue of self-learning, start colluding on their own.
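
To make the “Predictable Agent” mechanism concrete, here is a minimal, hypothetical Python sketch; none of the rules, names or figures are drawn from any actual airline system. Each firm independently codes a deterministic rule reacting to the same market signal (the rival’s fare), and the fares align without any communication between the firms:

```python
# Hypothetical "Predictable Agent" sketch: two airlines independently
# arrive at the same deterministic rule ("never undercut the rival"),
# so fares align at the higher level without any communication.
# All figures are invented for illustration.

FLOOR = 3000.0   # assumed cost floor below which neither airline prices
CAP = 12000.0    # assumed demand ceiling above which tickets do not sell

def match_highest_fare(own_fare: float, rival_fare: float) -> float:
    """Deterministic rule each airline coded on its own:
    set the fare to the highest fare observed, within bounds."""
    return min(CAP, max(FLOOR, own_fare, rival_fare))

# Starting from different fares, the two rules converge immediately
# and then move in lockstep whenever either side raises its fare.
a, b = 4500.0, 6200.0
for day in range(3):
    a, b = match_highest_fare(a, b), match_highest_fare(b, a)
    print(f"day {day}: A = {a}, B = {b}")   # both settle at 6200.0
```

The collusive outcome here flows entirely from the predictability of the independently written rules, which is why, as discussed below, such programming can itself be characterised as an “action in concert.”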

The flight price-fixing cases in both the US and the EU can at best fall under the “Messenger” category, where algorithms are not parties to the collusion but merely tools for implementing it. The issues posed by such cases are not very different from conventional collusion cases, since the algorithms are merely being used to reflect the collusive intent of the players, a scenario very similar to standard colluders manually fixing the prices of commodities.

Unlike the US and EU cases, the initial information about the flight pricing case being investigated by the CCI seems to indicate that it would fall under the third or the fourth category, since the information refers to the problems posed by “algorithms which have been designed with the logic” to collude or “self-learning algorithms.” The rest of this article therefore analyses the potential treatment of only the third and fourth categories under the Indian competition regime. These categories pose unique challenges because of the apparent non-intervention of competitors in the actual functioning of the algorithms.

Understanding Section 3(3) of the Competition Act

In India, the provision which prohibits collusion is Sec 3(3) of the Competition Act. Sec 3(3) is not limited to horizontal agreements but extends also to practices carried on and decisions taken in a collusive manner, making the section quite broad.

Sec 3(3) can be broken down into three components:

(i) if there is any “agreement entered into” or “practice carried on” or “decision taken by”; (ii) persons or association of persons or enterprises or association of enterprises; (iii) which directly or indirectly determines purchase or sale prices; then it shall be presumed to have an appreciable adverse effect on competition.

Applying Sec. 3(3) to the present issue

It is important to note that an appreciable adverse effect in such cases need not be proved; it is automatically presumed. The third leg of the test involves proving price parallelism; since that is a factual determination, it is not relevant for this post. The first two requirements under Sec 3(3) are analysed in turn.

The First Requirement

First, the range of conduct which might fall under a “horizontal restraint” is quite expansive. Unlike in most other jurisdictions, not just explicit agreements but even practices or decisions can fall within the ambit of horizontal restraints. “Agreement” itself has been defined to include any arrangement, understanding or action in concert. “Practice” has similarly been defined broadly to include any practice relating to the carrying on of any trade by a person or enterprise.

As far as the third kind of algorithmic collusion (“Predictable Agent”) is concerned, it is clear that the act of programming algorithms to react in a certain way to market stimuli (even though done independently and unilaterally) would amount both to an “action in concert” and to a “practice.” Similarly, for the fourth kind of collusion (“Digital Eye”), the act of self-learning algorithms pricing fares at a certain level in response to the prices fixed by the algorithms of other competitors would amount to an “action in concert.”

Therefore, irrespective of the nature of the interaction between the algorithms, it is likely to satisfy the first criterion of the test because of the broad ambit of conduct covered under horizontal restraint. This is unlike the US and the EU, where an agreement to collude needs to be established.

The Second Requirement

Second, the agreement, practice or decision must be between persons or enterprises or associations thereof. It is doubtful whether the existing categories of “individual,” “artificial juridical person” or the other categories under the definition of “person” could be read to include algorithms.

However, the definition of “person” is inclusive. Therefore, if a purposive interpretation of the Act were adopted, the protection of consumer welfare and competitive structures should lead the CCI to utilise the non-exhaustive nature of the definition of “person” to include algorithms within it. In any case, for the “Predictable Agent” category at least, since the algorithms do not have a logic of their own but merely reflect the logic of the programmer, the competitors themselves would be the “persons” who, through indirect means, are acting in concert.

The Question of Liability

Other than the two issues discussed above, the biggest problem in the algorithmic collusion case before the CCI would be the determination of liability. Sec 48 of the Act discusses the personal liability of employees (such as directors and managers) where a company is found to have violated provisions of the Act. But the Act does not provide guidance for the reverse situation, i.e., when the anti-competitive acts of an employee can be attributed to the company. This is likely to be the toughest question, and possibly the deciding factor, in this case.

As has been explained above, in the “Predictable Agent” scenario the competitors themselves would be liable, since they ensured that their algorithms were programmed to respond to similar market conditions in the same way and therefore collude to show the same prices. (Admittedly, this debate is not as simple as I have made it sound. Often, the market stimuli to which algorithms respond are the standard market factors on which pricing decisions hinge even in non-collusive markets; however, market efficiency justifications are beyond the scope of this article.)

The problem arises for the “Digital Eye” category, where algorithms using AI automatically learn that collusion would result in profit maximisation. Here the competitors did not expressly programme their software to act in a specific way in response to market changes but only programmed it to “maximise profits.” The algorithms, through self-learning, reach the conclusion that collusion is the best way to maximise profits. While the competitors here had no express or implied intention to collude, it is possible to argue that they had knowledge that self-learning algorithms with profit-maximisation objectives would ultimately resort to collusion. Whether this knowledge is sufficient to hold the competitors liable is what the CCI will have to decide. This will undoubtedly be a difficult job: not only does it pose the challenge of ascertaining the extent of the competitor’s involvement, but it also raises questions of the pro-competitive and efficiency justifications that can be advanced in favour of such algorithms.
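
To visualise the “Digital Eye” scenario, the following is a toy reinforcement-learning sketch in Python (entirely hypothetical; the two-fare game, payoff numbers and learning parameters are all invented). Each agent receives nothing but a profit signal and a standard Q-learning update; whether the pair ends up resting at the higher, parallel fare is an emergent property of self-learning, not an instruction from the programmer:

```python
import random

# Toy "Digital Eye" sketch: two Q-learning agents are told only to
# maximise profit in a repeated two-fare game. Nothing below
# instructs them to coordinate; any sustained high pricing emerges
# from self-learning. The payoff matrix is invented for illustration.

LOW, HIGH = 0, 1                      # the two possible fares
PROFIT = {                            # (my fare, rival fare) -> my profit
    (LOW, LOW): 1.0,                  # competitive outcome
    (LOW, HIGH): 2.0,                 # undercutting captures the market
    (HIGH, LOW): 0.0,                 # being undercut: no sales
    (HIGH, HIGH): 1.5,                # parallel high fares beat competition
}

class Agent:
    def __init__(self):
        # Q-values indexed by (last joint outcome, my next action)
        self.q = {(s, a): 0.0 for s in PROFIT for a in (LOW, HIGH)}

    def act(self, state, eps):
        if random.random() < eps:                      # explore
            return random.choice((LOW, HIGH))
        return max((LOW, HIGH), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state, alpha=0.1, gamma=0.9):
        best_next = max(self.q[(next_state, a)] for a in (LOW, HIGH))
        self.q[(state, action)] += alpha * (reward + gamma * best_next
                                            - self.q[(state, action)])

a, b = Agent(), Agent()
state = (LOW, LOW)                    # (A's last fare, B's last fare)
for t in range(50_000):
    eps = max(0.01, 1.0 - t / 25_000)                  # decaying exploration
    pa = a.act(state, eps)
    pb = b.act((state[1], state[0]), eps)              # B sees a mirrored state
    a.learn(state, pa, PROFIT[(pa, pb)], (pa, pb))
    b.learn((state[1], state[0]), pb, PROFIT[(pb, pa)], (pb, pa))
    state = (pa, pb)

names = {LOW: "LOW", HIGH: "HIGH"}
print("final fares:", names[state[0]], names[state[1]])
```

Even in this toy, the only objective the programmer supplies is profit maximisation, and whether the agents actually settle at the high fare varies from run to run, which is precisely what makes attributing the resulting conduct to the competitor so difficult.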

Ultimately, both the “Predictable Agent” and the “Digital Eye” have taken over a role previously performed by employees of the competitors (namely, fixing prices to reflect market forces). The difference in consequence arises because the algorithms do this job much better than the humans before them, and because they lack the ethical conscience that restrained their human predecessors from colluding in spite of the promise of profits.


Madhavi is a 5th-year BA LLB (Hons.) student at NLSIU, Bangalore.


Image Source: OECD


