Modern medicine can image a perforation in your intestine with millimeter precision. But getting someone to tell you it’s there? That takes hours. Sometimes until morning.

Consider this hypothetical but all-too-realistic situation: At 2:17 a.m., a 68-year-old woman with severe abdominal pain arrives in the emergency department (ED). The CT scan shows diverticulitis with possible perforation—a tear in the intestinal wall that could lead to life-threatening infection. The scan completes. The radiologist, covering three hospitals from home, won’t read it until 4 a.m. The ED physician waits. The surgical resident waits. The patient’s fever climbs. During waits like these, localized infections spread systemically. Obstructed blood flow progresses to tissue death. The time-sensitive window for intervention narrows. Then closes.

This isn’t a technology problem. It’s an availability problem. Artificial intelligence is approaching the capability to solve it—providing immediate answers to questions about medical images. Is there an air pocket? Does this require urgent surgery? The vision is simple: medical intelligence as infrastructure—a regulated software application that any clinician can query to receive radiologist-comparable interpretations in seconds.

Now imagine that same patient. At 2:18 a.m., the ED physician receives a preliminary AI-derived interpretation: “acute diverticulitis with a small, contained perforation. Infection is localized with minimal surrounding inflammation. No large fluid collection or spreading infection. Recommended treatment: IV antibiotics and surgical consultation.” Treatment starts immediately. The surgical resident calls the attending physician with data, not guesses. By 3 a.m., the treatment plan is set. At 7 a.m., the radiologist reviews the study during morning rounds, confirms the findings, and signs the final report. The patient is stable, already four hours into appropriate treatment.

This transformation is starting to happen.

Over the past decade, we authors have worked in this field. Pranav has advanced the foundational techniques in radiology AI that have made these capabilities practical—developing methods that can now detect findings at radiologist-comparable performance levels. Samir has spent three decades helping to implement technology transformations globally. Over the past several years, he’s traveled to hospitals nationwide and talked to radiologists, emergency physicians, surgeons, and specialists about their real-world needs. What we’ve learned: The demand for immediate imaging interpretation is both urgent and universal, but the path forward requires thinking beyond productivity tools toward foundational capability.

The Food and Drug Administration has authorized nearly 1,000 AI tools for radiology, validating the technology’s maturity to detect lung nodules, brain hemorrhages, bone fractures, and more. Yet most of these AI tools target similar conditions. What clinicians actually need is broader coverage. When a patient arrives with acute abdominal pain at 2 a.m., the ED physician needs a system that can identify acute diverticulitis or acute pancreatitis or small bowel obstruction—or dozens of other possible conditions.

Medical imaging is the natural starting point for this kind of comprehensive medical intelligence. The input and output are clearly defined—pixels in, interpretation out. Most diagnostic journeys flow through imaging: A patient presents with symptoms; imaging reveals pathology; treatment follows. With hundreds of millions of exams annually in the U.S. and interpretation often delaying immediate lifesaving decisions, the volume is massive and the urgency is real. Build an intelligence layer for medical imaging—a regulated API that turns images into answers—and you create a model for how medical intelligence could work across all of health care.

There is already a wealth of radiology AI focused on making existing workflows more efficient—helping to convert spoken observations into formatted text or drafting summary conclusions from detailed observations. These tools have value, but at 2 a.m., when immediate expertise isn’t available, they don’t solve the fundamental problem. They’re optimizing the wrong bottleneck.

Medical intelligence that can answer any question about medical images solves an immediate need for doctors. An ED physician needs to know if there’s acute pathology requiring urgent intervention. A surgeon needs to understand the degree of bowel obstruction. Same underlying intelligence, different questions, all answered in seconds. The path forward isn’t narrow detection tools or documentation assistants. It’s an intelligence layer that serves whoever needs it, however they need it.

The first decade of medical AI proved the technology works. This decade is about building it into the intelligence layer health care depends on.

Consider speed—the most immediate opportunity. A patient develops sudden abdominal pain at 3 a.m. The ED physician needs to know: surgical emergency? In seconds comes the answer: “Free air visible below the diaphragm—indicates perforation of the stomach or intestine. Immediate surgical evaluation required.” The surgical team mobilizes while the patient is still in the scanner. An elderly patient arrives with back pain: “Abdominal aortic aneurysm with signs on the CT scan that it’s becoming unstable—call vascular surgery immediately.” A severely blocked kidney showing signs of infection at 3 a.m. means urology is called urgently, not at morning rounds. Even 30 to 60 minutes of acceleration can meaningfully change outcomes for time-sensitive conditions.

But speed alone doesn’t capture what becomes possible. Sophisticated triage becomes feasible: A patient presents at 2 a.m. with abdominal pain and fever. The surgical resident needs to know: operating room now, or medical management? Immediate interpretation: “acute diverticulitis with a small, contained perforation. Air pocket is localized. Mild surrounding inflammation. No large fluid collection.” The resident starts antibiotics and monitoring, not surgery. Even this level of clinical detail—identifying what’s present and characterizing severity—enables the right action at 2 a.m.

Or systematic screening: Kidney cysts appear incidentally on imaging all the time. If AI can accurately characterize them—distinguishing benign simple cysts from complex cysts that require follow-up—physicians could check their entire patient population: Who has cysts needing surveillance? The system flags patients who require monitoring. Such proactive population management determines who needs attention before problems become urgent.

Or quantitative tracking: Oncologists following tumor response need measurements across serial scans—is the cancer shrinking, stable, or growing? Automated measurements, compared against prior studies with progression or treatment response flagged, foster consistent quantitative assessment of change over time. That enables data-driven treatment decisions that would otherwise require manually measuring dozens of lesions across multiple points in time.

Most fundamentally, this solves systematic gaps in expertise availability. The access bottleneck manifests differently across settings—academic centers face significant workload pressures during peak hours; community hospitals depend on single specialists covering multiple facilities—but overnight and weekend coverage creates systematic gaps everywhere. Even hospitals with robust daytime staffing often have minimal overnight presence. ED physicians and surgeons wait for interpretations. The wait can stretch for hours. During that time, clinical conditions progress.

Building AI that generates preliminary interpretations—what we call semi-autonomy—creates continuous coverage while maintaining expert oversight. The AI provides immediate answers that clinicians can act on. The radiologist reviews and confirms, focusing attention on complex cases requiring nuanced judgment, quality assurance, and direct consultation when clinical questions demand human dialogue. This remains a very challenging technical problem. Systems that can handle the full breadth of imaging interpretation at radiologist-comparable performance don’t exist in practice today. But the trajectory is clear, and the organizations investing now are building what will soon become standard capability.

The regulatory and legal path matters here. A system that generates complete radiology reports at radiologist-comparable performance and signs them autonomously—true full autonomy—requires thoughtful regulatory frameworks. Building standards for autonomous report generation takes time, appropriately so. The path forward doesn’t require solving all these challenges simultaneously. Semi-autonomy—where AI generates preliminary interpretations with radiologist review—addresses immediate access needs while working within current regulatory standards. Across all these use cases, radiologists always review and confirm.

Many forward-thinking radiologists recognize this as the evolution their field needs. Facing unprecedented demand—hundreds of millions of exams annually, growing faster than the workforce—they understand: The bottleneck isn’t radiologist capability. It’s availability. This technology doesn’t replace radiologist expertise; it extends it across time and space. The radiologist covering three hospitals overnight can focus on the most complex cases while preliminary AI interpretations enable immediate clinical action on straightforward findings. The radiologist managing high volumes during peak hours can ensure that critical findings trigger immediate action even when the formal read comes hours later.

Different clinicians want different answers. Radiologists recognize the need for new capabilities to match unprecedented volume. ED physicians want immediate answers about acute pathology. Surgeons want triage guidance. Specialists want quantitative data. Build this capability, and different users access it in ways that serve their workflows.

These insights led us to build what we call the intelligence layer for medical imaging. We’re developing systems that make radiologist-level interpretation accessible on demand, starting with comprehensive body CT interpretation and expanding across imaging types and anatomies. Early experience shows the model works: Clinicians get timely information; radiologists focus on complex cases and oversight; patients benefit from faster care.

When this intelligence layer becomes standard, it stops being a radiology department tool. It becomes foundational capability for entire health care organizations. Emergency departments make care plan decisions based on immediate answers to imaging questions. Surgical teams prioritize cases overnight with real-time interpretation. Critical findings trigger immediate action regardless of time or day. The result isn’t merely faster care. It’s fundamentally different care delivery—where the constraint is no longer waiting for expertise to become available.

For specialty medicine, this enables precision at scale. Cardiologists identify therapy candidates across their entire patient population. Neurologists get details about blocked blood vessels in the brain in minutes rather than hours. Organizations with this capability can promise 24/7 subspecialty-level interpretation, manage complex cases that would otherwise require transfer to more specialized facilities, and deliver data-driven precision medicine at scale.

Just as electronic health records transitioned from optional innovation to required capability, immediate access to medical imaging intelligence will become expected rather than exceptional. The question for health care leaders isn’t, “Should we invest in this?” It’s, “How quickly can we build this capability before it becomes the competitive baseline?”

The answer requires thinking about AI not as a productivity tool but as infrastructure. Reliable, always available, accessible to whoever needs it, supporting everything built upon it.


Within a decade, the waiting will seem strange. We’ll wonder why we ever accepted that expertise, once created, couldn’t be everywhere at once. That a scan could complete in minutes but interpretation took hours. That a perforation visible on a screen could remain unknown to the team caring for the patient.

Medical intelligence is becoming infrastructure—not because it’s revolutionary, but because once you build it, the alternative becomes unthinkable. The organizations building this capability now aren’t just improving health care delivery. They’re establishing what it means to deliver care at all. That future—where any question about medical images receives an immediate answer, where patients everywhere have access to expert interpretation when clinical decisions happen—isn’t distant. We’re building it.

Pranav Rajpurkar, Ph.D., is an associate professor at Harvard Medical School and earned his Ph.D. from Stanford University. His research on medical AI has resulted in more than 150 publications, and he’s been recognized by Forbes 30 Under 30, MIT Technology Review’s Innovators Under 35, and Nature.

Samir Rajpurkar, MS, is CEO of a2z Radiology AI Inc., and brings three decades of experience in award-winning, enterprise-scale global technology transformations, having led teams of hundreds across more than 20 countries.

Together, they co-founded the company a2z Radiology AI in 2024.

The Takeaway

AI analysis of medical images can speed care to patients and free physicians to spend more time on complicated cases.
