4 Dental AI Quandaries to Discuss

We’ve been developing CoTreat AI for the past four years, and throughout this journey, we’ve consistently engaged in discussions about the opportunities and risks ahead. These often heated discussions have significantly shaped our product's direction. Whether we've made the right choices will become clear over time. While we are already very confident in the value CoTreat AI adds to dentists and dental practices, both in terms of increasing clinical accuracy and reducing workload, the ethical implications of AI are evolving.

I believe the time is now right, with the genie sufficiently out of the bottle, to openly share some insights and internal concerns about dental AI. We seek candid feedback from practitioners and practice owners, as AI will impact everyone in the field in the next decade. CoTreat AI is committed to taking a clear stance on each of the following profound ethical quandaries, avoiding any fence-sitting.

We have identified the following four high-impact ethical issues to be discussed as a priority:
  1. The risk of over-diagnosis leading to poorer patient outcomes
  2. The impact of AI analysis on practitioner mental health
  3. Automation of administrative tasks resulting in staff redundancies
  4. AI's shift from 'clinical decision support' to 'decision maker,' leading to a loss of practitioner autonomy and blurred medico-legal and professional boundaries

I will delve into each of these four topics in the coming weeks to garner feedback and, hopefully, start a conversation. We are keen to hear from a diverse group of practitioners and practice owners with different experiences and viewpoints. There are no right or wrong answers; everything is in flux and rapidly changing, and we're all just trying to make sense of things. Today, let's begin with the patient and the risk of over-diagnosis.

Ethical Quandary Number 1 - Risk of Over-diagnosis Resulting in Poorer Patient Outcomes

To illustrate this point, I'll reference the words of two great thinkers. First, from the recently deceased Charlie Munger:

My doctor constantly writes, PSA test, prostate specific antigen, and I just cross it out. And he says, ‘What the hell are you doing? Why are you doing this?’ And I say, ‘Well I don’t want to give you an opportunity to do something dumb. If I’ve got an unfixable cancer that’s growing fast in my prostate, I’d like to find out 3 months in the future, not right now. And if I got one that’s growing slowly, I don’t want to encourage a doctor to do something dumb and intervene with it. So, I just cross it out.’ Most people are not crossing out their doctor’s prescriptions, but I think I know better. I don’t know better about the complex treatments and so forth. But I know it’s unwise for me to have a PSA test. So, I just cross it out. I’m always doing that kind of thing. And I recommend it to you when you get my age. Just go cross out that PSA test.

Second, from Nassim Taleb’s concept of naïve interventionism in his book Antifragile: Things That Gain from Disorder. Taleb criticises the tendency of modern doctors to avoid the 'do nothing' approach, even when it might be the most appropriate choice. The term iatrogenic, after all, means harm caused by the healer. Taleb argues that well-meaning interventions often lead to negative outcomes, and that this tendency is ubiquitous in society, hardly restricted to medicine.

CoTreat AI’s Concern

AI already identifies and documents pathologies at an unprecedented level. Constantly surfacing these observations floods practitioners with information, creating what Charlie Munger calls “the opportunity to do something dumb” and inviting what Taleb labels naïve intervention.

CoTreat AI's Position on Mitigating These Concerns:
  1. Avoid disclosing minor observations without a strong evidence base for intervention. For instance, even if CoTreat AI detects minor issues like incipient carious lesions or Miller Class I gingival recession, we’ll omit these from standard reports in the future. While somewhat paternalistic, this approach aims to minimise over-diagnosis risks. (Note: We are still refining our reporting criteria, awaiting broader input on the matter.)
  2. Implement longitudinal monitoring: Log initial findings and present them to practitioners only if there is clear worsening over time. (This feature is not yet built into CoTreat AI.)

Please reach out or comment if you have insights or concerns about the risk of over-diagnosis. Every contribution is highly valued by our entire team and will inform our product and research direction en route to creating a win-win for practices and patients.
