
in comparison to uniform pricing. From an economic viewpoint, there is therefore no rationale for banning personalised pricing per se (just as there is no rationale for banning price discrimination). 87 It is difficult to assess whether personalised pricing is indeed anticompetitive because, as of today, there are no documented cases of it. 88 It is possible that most businesses are still reluctant to engage in personalised pricing, for fear of losing their reputation or of triggering a negative reaction from consumers. Research nevertheless suggests that personalised pricing is already taking place, at least to some extent; 89 it is thus plausible that firms are already personalising prices, but choose to do so in a non-transparent way, for the reasons previously stated.

In any event, should a dominant undertaking deploy a pricing algorithm leading to personalised pricing that would amount to an abuse of dominance, the following scenarios would need to be assessed. In the first one, the dominant undertaking acquires an algorithm designed to achieve such effects; clearly, the liability for the anticompetitive conduct rests with the undertaking concerned. In the second scenario, a dominant undertaking deploys such an algorithm without being aware of its capabilities. If the undertaking could not have reasonably foreseen them, it arguably cannot be liable for the algorithm’s effects; in any event, as discussed with regard to anticompetitive agreements, the extent of “due diligence” required with respect to algorithms may significantly increase in the future. In the third scenario, a black-box algorithm “learns” personalised pricing by itself, without being instructed to do so. This most complex scenario will be considered in the chapter below.

IV. Antitrust Liability for Artificial Intelligence

As we have observed above, the deployment of pricing algorithms mostly does not pose any “new” concerns as far as the attribution of liability for anticompetitive conduct is concerned. The only – but very significant – problem is connected with black-box deep learning algorithms, which “learn” the anticompetitive conduct themselves, without being instructed to do so. In principle, liability may be attributed to the undertaking using the algorithm, to its programmer, or even to the algorithm itself. Concerning the algorithm itself, such a solution does not seem plausible (or desirable) today; it nevertheless needs to be mentioned that it has been seriously discussed. 90 Currently, the debate is arguably centred on human liability for algorithms’ conduct. 91
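To illustrate the third scenario, consider the following minimal sketch of a pricing algorithm that is rewarded solely on revenue and is never instructed to discriminate. All names, segments, and parameters are purely illustrative assumptions, not drawn from any documented case, and a simple tabular learner stands in for a black-box deep learning system:

```python
import random

# Purely illustrative: two consumer segments with different (hidden)
# willingness to pay (WTP). The learner never receives an instruction
# to discriminate; it only observes a segment label and the revenue
# generated by the price it charged.
SEGMENTS = {"high_wtp": 10.0, "low_wtp": 4.0}
PRICES = [3.0, 5.0, 8.0, 9.0]

# Estimated average revenue per (segment, price) pair -- a minimal
# tabular stand-in for the value function of a deep learning pricer.
q = {(s, p): 0.0 for s in SEGMENTS for p in PRICES}
n = {(s, p): 0 for s in SEGMENTS for p in PRICES}

def revenue(segment: str, price: float) -> float:
    """The consumer buys only if the price does not exceed their WTP."""
    return price if price <= SEGMENTS[segment] else 0.0

random.seed(42)
for _ in range(5000):
    segment = random.choice(list(SEGMENTS))
    # Epsilon-greedy: mostly charge the best-known price, sometimes explore.
    if random.random() < 0.1:
        price = random.choice(PRICES)
    else:
        price = max(PRICES, key=lambda p: q[(segment, p)])
    r = revenue(segment, price)
    n[(segment, price)] += 1
    # Incremental average update of the estimated revenue.
    q[(segment, price)] += (r - q[(segment, price)]) / n[(segment, price)]

for segment in SEGMENTS:
    best = max(PRICES, key=lambda p: q[(segment, p)])
    print(f"learned price for {segment}: {best}")
# Prints 9.0 for the high-WTP segment and 3.0 for the low-WTP segment:
# the algorithm "discovers" personalised pricing on its own.
```

Nothing in the objective refers to personalised pricing; the differentiated prices emerge solely from revenue maximisation over observed consumer characteristics, which is precisely why attributing the resulting conduct to a human instruction is so difficult.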
87 BOURREAU, M., DE STREEL, A., GRAEF, I. Big Data and Competition Policy: Market Power, Personalised Pricing and Advertising (16 February 2016). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2920301 (1 June 2020).
88 OECD Report on Personalised Pricing, p. 14.
89 For a collection of examples, see e.g. OECD Report on Personalised Pricing, p. 14 et seq.
90 See e.g. MEHRA (op. cit. sub 17). See also European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), which reads: “in the scenario where a robot can take autonomous decisions, the traditional rules will not suffice to give rise to legal liability for damage caused by a robot, since they would not make it possible to identify the party responsible for providing compensation and to require that party to make good the damage it has caused (…) ultimately, the autonomy of robots raises the question of their nature in the light of the existing legal categories or whether a new category should be created, with its own specific features and implications” (emphasis added).
91 See in particular the requirement of “human agency and oversight”, expressed in the Communication from the Commission, Building Trust in Human-Centric Artificial Intelligence, COM(2019) 168 final.
