
Similarly, antitrust liability cannot in general rest with the algorithm's designers (just as the producer of a gun cannot be held liable for a murder committed with it); if, however, they were aware that they were facilitating the functioning of a cartel, they may be found liable together with the cartelists. We are therefore left with the question of whether competition law, as it stands today, is capable of attributing liability to the undertakings themselves. The enforcers of competition law, in particular the European Commission, argue in this direction. As Commissioner Vestager puts it:

[…] companies can't escape responsibility for collusion by hiding behind a computer program. […] And businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So they had better know how that system works.92

This very strict requirement, known as "compliance by design",93 is arguably based on an analogy with undertakings' liability for the conduct of their employees. As summarised in the Franco-German Study:

if one were to apply this standard to cases involving algorithmic behaviour, an undertaking could be held liable simply for introducing and using an algorithm if that algorithm is authorized to take decisions regarding certain market behaviour, e.g. pricing. Distinguishing between different degrees of autonomy, i.e. between descriptive and black-box algorithms, would not be necessary within this concept: As even a significant degree of autonomy enjoyed by an employee does not preclude attributing his or her actions to the undertaking, an algorithmic behaviour would similarly be attributed even if the undertaking was not aware of its anticompetitive implications.94

This would amount to de facto absolute liability for the use of algorithms, allowing liability to be escaped only under exceptional, atypical circumstances.95 Though practical, such a strict approach might discourage undertakings from deploying algorithms at all, which would in effect harm the economy, as pricing algorithms are generally believed to bring substantial efficiencies. Some legal scholars therefore suggest a less stringent approach, imputing liability to an undertaking for the behaviour of its algorithms only where a reasonable standard of care and foreseeability has been breached.96 To pass this benchmark, they require a close review of the relevant algorithm, in particular with a view to its programming, available safeguards, its reward structure, and the scope of its activities.97 This line of argument is based on an analogy with undertakings' accountability for the acts of an independent third party, as discussed above.

92 The speech of Commissioner Margrethe Vestager in Berlin, 2016 (op. cit. sub 77).
93 Ibid.
94 Franco-German Study, p. 58.
95 Ibid., p. 59.
96 JANKA, S. F., UHSLER, S. B. Antitrust 4.0 – the rise of Artificial Intelligence and emerging challenges to antitrust law. European Competition Law Review, 2018 (3), p. 121.
97 EZRACHI, STUCKE (op. cit. sub 27), p. 1801.

