I was thrilled to have the opportunity to speak at the Uehiro Oxford Institute (https://www.uehiro.ox.ac.uk/) about some of the work I've been doing on the implications of AI and genomics for medical practice today. I highlighted the ways in which AI is already better than humans at tasks and competencies that have been important parts of doctors' special expertise. AI can keep track of data in real time and use that data to draw inferences and make predictions. AI can listen to conversations between doctors and patients and then generate progress notes for electronic health records. AI is better at diagnosing difficult cases than most doctors.
I then asked whether there are features of doctor-patient relationships that are uniquely human and that AI will find difficult to emulate or improve upon. Two such features have received a lot of attention: empathy and trust. Both are highly valued and poorly understood. Furthermore, there are reasons to think that they are not beyond the capacities of current or anticipated AI models. In head-to-head comparisons, patients often rate AI interfaces as more empathic than human doctors.
Trust is more complicated. As Annette Baier noted, "Promises are a most ingenious social invention, and trust in those who have given us promises is a complex and sophisticated moral achievement." With AI, the object of trustworthiness is hidden. We interact with an interface that is owned by somebody, somewhere, but we don't understand, or don't want to think about, the reasons why that hidden entity is engaged with us.
Doctors, too, can have hidden agendas. But the doctor is a moral agent who either reveals or conceals those agendas. If we offer trust, we know to whom it is offered. The same cannot be said of trust in a commercial entity that pretends to be our friend and confidant.