Is AI making your GI doc stupid? Probably not.
Artificial intelligence is transforming everything, including the universally dreaded colonoscopy, the frontline tool for preventing colorectal cancer. Randomized controlled trials (RCTs) have repeatedly shown that real-time computer-aided detection (CADe) helps doctors spot more pre-cancerous polyps without slowing procedures, no small matter for patients given the discomfort involved. Yet a new study of polyp detection out of Poland, published in The Lancet Gastroenterology & Hepatology and covered in the Financial Times, raises an unsettling possibility: Will AI dumb doctors down?
The Polish investigators didn’t ask the usual “does AI improve detection?” question. Instead, they examined what happens after doctors become accustomed to AI and then perform procedures without it. Importantly, this wasn’t an experiment in which the same physicians alternated between AI-on and AI-off procedures during the same period. Rather, the study compared two distinct timeframes, before and after AI was introduced, and found that in the post-AI period the lesion detection rate for procedures performed without AI fell from 28.4 percent to 22.4 percent. The authors interpret this decline as evidence that routine AI use may have shifted more of the diagnostic burden to the AI, reducing physician vigilance when AI was not present.
That is a different research lens from the one used in most of the landmark RCTs on CADe colonoscopy. Those trials randomized patients to AI-on or AI-off groups to control for unrelated factors that might skew results, and AI consistently increased detection of smaller or more subtle lesions, sometimes by double-digit percentages depending on the study and lesion type.
The distinction between these methodologies matters. The Polish study’s before-and-after design is better suited to spotting long-term behavioral changes among physicians, but it is also more vulnerable to differences in patient mix, bowel preparation, staffing, or seasonal trends, all of which make it difficult to establish causation. Without concurrent randomized controls, it’s hard to know how much of the drop in post-AI, physician-only detection was due to over-reliance on AI versus other factors.
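To make the confounding worry concrete, here is a minimal simulation sketch. Every number in it is invented for illustration (none comes from the Polish study): physician sensitivity is held fixed, but the share of high-yield patients in the non-AI arm shifts between the two periods, and the observed detection rate drops anyway.

```python
import random

random.seed(0)

# All numbers here are invented for illustration; none come from the study.
# "High-yield" patients harbor detectable lesions more often than
# "low-yield" ones (e.g., screening-naive vs. recent-surveillance patients).
P_LESION = {"high_yield": 0.45, "low_yield": 0.15}
P_DETECT_GIVEN_LESION = 0.80  # physician sensitivity, held constant throughout

def detection_rate(n_patients, frac_high_yield):
    """Share of procedures in which at least one lesion is found."""
    detected = 0
    for _ in range(n_patients):
        group = "high_yield" if random.random() < frac_high_yield else "low_yield"
        has_lesion = random.random() < P_LESION[group]
        if has_lesion and random.random() < P_DETECT_GIVEN_LESION:
            detected += 1
    return detected / n_patients

# Hypothetical case-mix shift: suppose that after AI arrived, higher-risk
# patients were preferentially scheduled into AI-assisted rooms, leaving
# the non-AI arm with fewer high-yield cases (50% before vs. 30% after).
before = detection_rate(20_000, frac_high_yield=0.50)
after = detection_rate(20_000, frac_high_yield=0.30)
print(f"pre-AI detection rate (no AI):       {before:.1%}")
print(f"post-AI detection rate (non-AI arm): {after:.1%}")
# Detection falls by roughly five percentage points even though the
# simulated physicians never changed: case mix alone can mimic skill erosion.
```

A randomized design breaks this kind of confounding by construction, which is why the RCT results and the before-and-after results can both be “right” while answering different questions.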
If both types of studies are right in their contexts, the picture is more complicated than “AI makes your doctor better” or “AI makes your doctor worse.” Randomized AI-on versus AI-off trials show that CADe helps detect more pre-cancerous lesions while it is in use; only designs that compare the same physicians’ unassisted performance before and after AI adoption can assess whether routine AI use erodes their diagnostic skills.
How much should we care about that? Colorectal cancer prevention hinges on finding and removing lesions before they turn malignant. If AI reliably boosts detection, that’s a real public health gain. Skill erosion would matter most in under-resourced settings where AI is uneconomical or otherwise unavailable; in developed countries, where access to AI tools is broadest, it is unlikely to become a widespread problem.
This conundrum is playing out in many fields where AI is reducing the salience of human skills (e.g., research, writing, financial analysis, fraud detection) and in many cases reducing their economic value. For gastroenterologists, like the rest of us, the future will involve more and better AI systems that execute an increasing share of cognitive work. The real question for all of us is: Should education and training shift toward oversight and management of AI systems rather than focusing on traditional manual skills that are subject to AI obsolescence?
So, is AI making your doctor stupid? Probably not. But over time, it may change how they allocate their attention and effort—which is exactly what should happen if we want to maximize limited human capital, improve efficiency, and increase positive outcomes. The challenge, and opportunity, is to harness AI’s power while keeping human expertise and judgment high enough to know when the machine misses something or gets it wrong.
