Already in the “analogue” world, price signalling has been identified as a form of communication in theory 75 and even observed in practice with regard to petrol stations. 76 It is therefore plausible that the use of algorithms may make this form of communication much more effective. Thus, if the individual deployment of algorithms able to “communicate” by signalling and programmed to “cooperate” rather than “defect” leads to a price increase, it may arguably be classified as an agreement rather than “classical” tacit collusion (a stylised illustration of such a strategy is given in the first sketch below).

If there is an anticompetitive agreement, the second question is who is liable for it. The answer is, in our opinion, the same as in the previous scenario: if the undertakings were aware that the algorithm they use is able to “coordinate” its prices with others, the undertaking itself should be liable for the cartel, with possible liability also of the designer of the algorithm. If, on the other hand, the undertaking could not have been aware of it, it cannot be found liable; the undertakings will, however, arguably be obliged to make sure that the algorithms they use comply with competition law. 77

2.4 Artificial Intelligence

The final scenario is, as of today, only a hypothetical one. It addresses the possibility that deep learning algorithms develop a form of communication with one another, without being instructed or “taught” to do so, and, using it, “agree” on price increases. Though it may seem improbable nowadays, this scenario is being seriously discussed, both in academia 78 and in practice. 79 Such a development would constitute an anticompetitive agreement; the question, however, arises who should bear responsibility for it in the case of black-box deep learning algorithms, which the undertakings themselves (or, more precisely, their managers) cannot understand. This will be discussed in detail in the chapter below.

Under this scenario, it is also possible that the algorithms would engage in “pure” tacit collusion, i.e. they would realize that, in the long run, it is more advantageous for them to “cooperate” than to “defect” (the second sketch below illustrates how such behaviour might emerge from learning alone). As professors Ezrachi and Stucke argue:

[…] algorithm developers are not necessarily motivated to achieve tacit collusion; nor could they predict when, how long, and how likely it is that the industry-wide use of algorithms would yield tacit collusion. Nor is there any intent or attempt by the developers and user of the algorithm to facilitate conscious parallelism. The firm “merely” relies on AI. 80
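To make the game-theoretic vocabulary of “cooperating” and “defecting” more concrete, the following minimal Python sketch simulates two pricing algorithms playing trigger strategies of the kind analysed in Friedman’s supergame model (see footnote 75). It is purely illustrative: the price levels, payoff numbers, and strategy are hypothetical and do not describe any real pricing software.

```python
# Illustrative sketch only: two sellers repeatedly choose a HIGH or LOW
# price. Each follows a "grim trigger" strategy: price HIGH as long as
# the rival did so in every past period, and revert to LOW forever once
# the rival "defects". All payoff numbers are hypothetical.

HIGH, LOW = "HIGH", "LOW"

# Stylised per-period profits: (my_price, rival_price) -> my profit.
PROFIT = {
    (HIGH, HIGH): 10,  # both "cooperate" at the supra-competitive price
    (HIGH, LOW): 2,    # undercut by the rival
    (LOW, HIGH): 14,   # one-off gain from undercutting
    (LOW, LOW): 5,     # competitive outcome
}

def grim_trigger(history):
    """Cooperate until the rival has ever priced LOW, then punish forever."""
    if any(rival == LOW for _, rival in history):
        return LOW
    return HIGH

def simulate(periods=20):
    history_a, history_b = [], []  # each entry: (own_price, rival_price)
    profit_a = profit_b = 0
    for _ in range(periods):
        price_a = grim_trigger(history_a)
        price_b = grim_trigger(history_b)
        profit_a += PROFIT[(price_a, price_b)]
        profit_b += PROFIT[(price_b, price_a)]
        history_a.append((price_a, price_b))
        history_b.append((price_b, price_a))
    return profit_a, profit_b

if __name__ == "__main__":
    # Both algorithms sustain the HIGH price in every period, earning 10
    # per period each -- the outcome the text describes as "cooperation".
    print(simulate())  # -> (200, 200) over 20 periods
```

Because a single period of undercutting (14 instead of 10) is followed by permanent punishment (5 per period), neither algorithm ever “defects”; this is the long-run logic that makes such programmed “cooperation” sustainable.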
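The “pure” tacit collusion scenario quoted above can likewise be sketched in deliberately simplified code. In the snippet below (entirely hypothetical: the demand function, price grid, and learning parameters are invented for illustration), two independent Q-learning pricing agents are rewarded only for their own profits and are never instructed to coordinate; whether, and under what conditions, such agents in fact converge to supra-competitive prices is precisely the empirical question the literature discusses.

```python
# Illustrative sketch only: two independent Q-learning agents repeatedly
# set one of a few price levels and learn from their own profits alone.
# Nobody instructs them to coordinate. All parameters are hypothetical.

import random

PRICES = [1, 2, 3]          # stylised price levels, 3 = "collusive"
ALPHA, GAMMA, EPISODES = 0.1, 0.9, 50_000

def profit(own, rival):
    """Toy demand: the cheaper seller wins the (unit) market; ties split it."""
    if own < rival:
        return own * 1.0
    if own == rival:
        return own * 0.5
    return 0.0              # undercut: no sales

# Each agent's state is the rival's last price; Q[state][action].
def new_q():
    return {s: {a: 0.0 for a in PRICES} for s in PRICES}

def choose(q, state, eps):
    if random.random() < eps:
        return random.choice(PRICES)    # explore
    return max(q[state], key=q[state].get)  # exploit

q1, q2 = new_q(), new_q()
p1, p2 = random.choice(PRICES), random.choice(PRICES)
for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / (EPISODES / 2))   # decaying exploration
    a1, a2 = choose(q1, p2, eps), choose(q2, p1, eps)
    r1, r2 = profit(a1, a2), profit(a2, a1)
    # Standard Q-learning update; the next state is the rival's new price.
    q1[p2][a1] += ALPHA * (r1 + GAMMA * max(q1[a2].values()) - q1[p2][a1])
    q2[p1][a2] += ALPHA * (r2 + GAMMA * max(q2[a1].values()) - q2[p1][a2])
    p1, p2 = a1, a2

# Whether the learned prices settle above the competitive level is
# exactly the open question on which the algorithmic-collusion debate turns.
print("last prices:", p1, p2)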
75 See e.g. FRIEDMAN, J. W. A Non-cooperative Equilibrium for Supergames. The Review of Economic Studies, 1971 (1), p. 1.

76 BYRNE, D. P., DE ROOS, N. Learning to Coordinate: A Study in Retail Gasoline (23 July 2018), available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2570637 (1 July 2019).

77 This was expressed by the Commissioner for Competition Margrethe Vestager as a requirement of “competition by design” in her speech at the Bundeskartellamt 18th Conference on Competition (Berlin, 16 March 2017), available at: https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/bundeskartellamt-18th-conference-competition-berlin-16-march-2017_en (1 July 2020).

78 SCHWALBE (op. cit. sub 66), p. 594: “considering the rapid progress in research on AI, it cannot be ruled out that algorithms may learn to communicate and thereby increase the likelihood of algorithmic collusion”.

79 OECD. Algorithms and Collusion – Note from the European Union (14 June 2017), available at: https://one.oecd.org/document/DAF/COMP/WD(2017)12/en/pdf (1 July 2020), para. 28.

80 EZRACHI, STUCKE (op. cit. sub 27), p. 1795.