To prevent unfairness and bias caused by AI tools, developing nations should be cognizant of the risk of bias. The governments of these nations should incorporate intersectional justice to cater to more diverse, inclusive, and anti-colonial standpoints. This requires a commitment to pursuing a justice-oriented design of AI algorithms and AI-based support systems.494 Developing nations should adopt technical and legal frameworks that minimize or prevent unfairness and bias. These laws should promote equity in the development process of AI tools applied to healthcare. To this end, AI tools whose design rationale incorporates the principles of serendipity (diversifiability) and equity (intersectionality, reflexivity, and power balance) should be encouraged and adopted for creating healthcare AI tools.495 The laws should also encourage developers of AI tools to adopt measures that limit bias and unfairness, such as data pre-processing techniques, algorithmic modifications, or human oversight of AI decisions, in order to create a fair society and reduce societal asymmetries and racial and gender stereotypes.496

Liability for harm

A significant legal challenge posed by the application of AI to promote the right to health in developing nations is the difficulty of detecting harm caused by algorithmic activity and identifying its cause, owing to the black-box nature of AI, which results in liability gaps. Liability gaps make it difficult to identify to whom responsibility or liability should be ascribed when algorithmic activity causes damage to a patient accessing healthcare treatment, and thus make it challenging to prevent such harm from happening again.497

Ascribing responsibility for harm caused by the application of AI solutions in healthcare settings is particularly difficult, primarily because of the myriad of actors involved in administering healthcare to the individual and in developing and applying AI systems. Consider, for example, the question of who bears liability for harm caused to a patient by an AI solution applied to their healthcare needs. Does the responsibility lie with the healthcare practitioner, for instance, for not questioning the results of the AI tool that caused the harm, even if the black-box nature of the AI system prevented them from evaluating the quality of the diagnosis received from the AI tool against other sources of information, including their own knowledge of the patient? Or is the responsibility ascribed to the hospital or care facility, owing to its obligation to implement a policy allowing healthcare practitioners to overrule algorithmic advice? Or does the responsibility lie with the commissioners or retailers of the system or device that contains the algorithm, since it may be argued that they bear some responsibility for checking the accuracy of the AI tool's decisions? Or does the responsibility for harm extend to the

494 Baumgartner R et al, 'Fair and Equitable AI in Biomedical Research and Healthcare: Social Science Perspectives' (2023) 144 Artificial Intelligence in Medicine 102658.
495 Van Leeuwen C et al, 'Blind Spots in AI' (2021) 23 ACM SIGKDD Explorations Newsletter 42.
496 Ibid., (Fn 36).
497 Racine E, Boehlen W and Sample M, 'Healthcare Uses of Artificial Intelligence: Challenges and Opportunities for Growth' (2019) 32 Healthcare Management Forum 272.