Potential conflict of interest: Nothing to report.
TO THE EDITOR:
I read with interest the extensive review by Ahn et al.(1) As a bioethicist, I wish to raise some concerns about the perspective offered by this review, which overlooks important issues concerning ethical aspects and the patient–physician relationship.
In the field of hepatology, as in most fields of medicine, machine learning health care applications (ML‐HCAs) are transitioning from an alluring future possibility to a current reality with profound implications for clinical practice. Almost certainly, ML‐HCAs will substantially affect processes, quality, cost, and access to health care in hepatology and will ultimately modify the patient–physician relationship. Hence, the health care context raises specific and perhaps unique issues and concerns.(2‐4)
A variety of concerns about the use of ML‐HCAs, not specific to hepatology, have been identified. They range from biases arising in the training data set, to the privacy of personal data in business arrangements, to ownership of the data used to train ML‐HCAs, to accountability for an ML‐HCA's failings. Addressing these issues will require an appropriate ethical assessment in addition to a strict regulatory approach.
Bioethicists will need to identify ethical issues arising within and across the entire pipeline of activities that comprise the development, implementation, and ongoing evaluation of any ML‐HCA in hepatology. Moreover, a systematic approach to surveying the landscape of ML‐HCA conception, development, calibration, implementation, evaluation, and oversight needs to be developed and validated. Without such a conceptual map, the identification of ethical concerns arising from this emerging, complex, cross‐disciplinary technology, which potentially affects many aspects of hepatology, will remain reactive, ad hoc, and fragmented.
To exploit the full potential of ML‐HCAs in the clinic, hepatologists will need to maintain a careful balance between the systematic, inflexible evidence of an algorithm and the subtle, nuanced reality of human relationships, so that they are never forced to choose between the two. Let the unwary beware!
1. Ahn JC, Connell A, Simonetto DA, Hughes C, Shah VH. The application of artificial intelligence for the diagnosis and treatment of liver diseases. Hepatology 2021;73:2546-2563.
2. Obermeyer Z, Emanuel EJ. Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med 2016;375:1216-1219.
3. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med 2018;169:866-872.
4. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf 2019;28:231-237.