The healthcare AI landscape shifted decisively toward autonomous action at HLTH 2025 in Las Vegas this week. While ambient documentation and predictive analytics dominated previous conferences, this year’s exhibition floor buzzed with a fundamentally different proposition: AI systems that don’t just listen, analyze, or recommend—they act.

From the Venetian Expo halls, a clear pattern emerged across exhibitors large and small. Healthcare’s next generation of AI won’t wait for clinician prompts or administrative approvals. These agentic systems autonomously generate evidence before questions are asked, orchestrate multi-step workflows without human intervention, and make care delivery decisions within defined clinical guardrails.
The technology represents more than an incremental improvement to existing AI categories. Where yesterday’s tools automated individual tasks, today’s agents coordinate entire workflows. Where previous systems required explicit queries, these platforms anticipate needs. The implications ripple through every corner of healthcare operations, from diagnostic imaging to revenue cycle management and from patient engagement to clinical decision support.
Proactive Intelligence: AI That Acts Before Clinicians Ask
Atropos Health, in collaboration with Microsoft and Stanford Health Care, exemplified this shift toward anticipatory AI. The company is piloting what it calls a real-world evidence agent—an AI system that generates patient-specific clinical insights before physicians realize they need them.
Rather than waiting for a clinician to formulate a question about treatment options or outcomes data, the Atropos platform continuously monitors clinical contexts and proactively surfaces evidence-based insights drawn from real-world data. The system analyzes individual patient characteristics against vast datasets to identify relevant treatment patterns, outcomes, and potential complications—all without a human query triggering the process.
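To make the proactive pattern concrete, here is a minimal sketch of an evidence agent loop; the event feed, evidence service, and clinician inbox are hypothetical stand-ins, not the Atropos architecture.

```python
# Minimal sketch of a proactive evidence loop. The helpers for the EHR event
# feed, evidence service, and clinician inbox are hypothetical; this does not
# reflect Atropos Health's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientContext:
    patient_id: str
    new_diagnosis: Optional[str]
    recent_labs: dict

def evidence_is_relevant(context: PatientContext) -> bool:
    # Trigger only when something clinically meaningful has changed.
    return context.new_diagnosis is not None or bool(context.recent_labs)

def run_evidence_agent(event_feed, evidence_service, clinician_inbox):
    """Watch EHR events and surface evidence before a clinician asks for it."""
    for context in event_feed:  # e.g. an admission, discharge, or new lab result
        if not evidence_is_relevant(context):
            continue
        # Generate a patient-specific, real-world-evidence summary unprompted.
        summary = evidence_service.summarize(
            diagnosis=context.new_diagnosis,
            labs=context.recent_labs,
        )
        # Deliver proactively; no clinician query triggered this step.
        clinician_inbox.post(patient_id=context.patient_id, summary=summary)
```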
The Stanford collaboration positions the technology within actual clinical workflows, testing whether autonomous evidence generation can reduce the cognitive burden on physicians while maintaining clinical rigor. Early results suggest the approach could fundamentally alter how clinicians access and apply real-world evidence at the point of care.
A Clinical Reasoning Copilot for Physicians
OneLine Health showcased what the company describes as enterprise-grade clinical reasoning AI designed to function as an end-to-end physician copilot. The platform represents a comprehensive approach to autonomous clinical support that extends from pre-visit preparation through documentation and decision support.
The system autonomously conducts pre-visit history-taking through AI-driven patient questionnaires that dynamically adapt based on the patient’s care journey and physician specialty. Patients can upload medical records, lab results, and imaging studies, which the platform processes using optical character recognition and natural language understanding to extract relevant clinical information.
Before the clinical encounter begins, OneLine generates a complete 360-degree patient summary that synthesizes the patient’s history, current complaints, lab results, imaging findings, and prior clinical notes. The AI-generated summary includes ICD-10 diagnosis codes, CPT procedure codes, and guideline-aligned clinical decision support—delivered as a structured, EMR-ready note.
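As an illustration of what such a structured, EMR-ready output might look like, here is a minimal sketch; the field names and rendering are assumptions, not OneLine Health’s schema.

```python
# Illustrative shape of a structured, EMR-ready pre-visit summary with coding
# suggestions. Field names and rendering are assumptions, not OneLine's schema.

from dataclasses import dataclass, field

@dataclass
class CodeSuggestion:
    system: str       # "ICD-10" or "CPT"
    code: str
    description: str

@dataclass
class PreVisitSummary:
    patient_id: str
    chief_complaint: str
    history_summary: str
    lab_highlights: list = field(default_factory=list)
    imaging_findings: list = field(default_factory=list)
    suggested_codes: list = field(default_factory=list)
    decision_support_notes: list = field(default_factory=list)

    def to_emr_note(self) -> str:
        """Render the summary as plain text a clinician can review and sign."""
        codes = ", ".join(f"{c.system} {c.code}" for c in self.suggested_codes)
        return (
            f"Chief complaint: {self.chief_complaint}\n"
            f"History: {self.history_summary}\n"
            f"Labs: {'; '.join(self.lab_highlights)}\n"
            f"Imaging: {'; '.join(self.imaging_findings)}\n"
            f"Suggested codes: {codes}\n"
        )
```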
The platform has demonstrated measurable operational impact: cutting consultation note generation time from 15 minutes to seconds, freeing up 1-2 hours of physician time daily, and achieving 80-90% adoption across diverse patient demographics including non-English speaking populations. OneLine integrates with major EHR systems including Epic, Cerner, and Meditech through secure APIs.
The technology addresses physician burnout by eliminating redundant questioning during encounters—since the AI has already gathered comprehensive patient information beforehand—and automating the tedious documentation work that consumes physicians’ cognitive energy. By handling the mechanical aspects of clinical reasoning and documentation, the platform allows physicians to focus on nuanced clinical judgment and patient interaction.
Imaging’s Agentic Transformation
GE HealthCare unveiled what the company describes as the industry’s first agentic AI diagnostic imaging assistant, developed through its AI Innovation Lab. Unlike traditional computer-aided detection tools that flag potential abnormalities, this system orchestrates the entire imaging workflow.
The agentic assistant integrates directly into diagnostic devices, processing scans, enabling natural language interaction with radiologists, and generating interactive reports autonomously. The technology moves beyond image analysis to coordinate multi-step diagnostic pathways.
GE HealthCare is refining the underlying MRI foundation model through collaborations with Mass General Brigham and the University of Wisconsin-Madison. Trained on over 200,000 MRI images, the model is being validated across prostate cancer use cases at Mass General Brigham and operational challenges like image quality control at Wisconsin-Madison.
The research reflects a broader industry recognition: agentic AI in diagnostics requires not just algorithmic sophistication but integration into clinical decision-making workflows and validation across diverse operational contexts.
Multi-Agent Orchestration for Patient Navigation
League introduced League Agent Teams, a multi-agent orchestration system built on the company’s Health Story data foundation. The platform represents a departure from single-function chatbots toward coordinated AI agents that work in concert.
The system transforms fragmented engagement and clinical data into what League describes as a “dynamic, contextual, plain-language narrative.” Multiple specialized agents collaborate to provide 24/7 personalized navigation for benefits, care solutions, and care gap closure.
Rather than routing patients to different systems for different needs, League’s orchestration layer enables a single conversation to span appointment scheduling, benefits verification, care recommendations, and follow-up coordination. The agents share context, hand off tasks, and maintain continuity across the member journey.
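A stripped-down sketch of how such orchestration can work is shown below, with a shared context dictionary and keyword routing standing in for real intent classification; the agent names and logic are illustrative, not League’s design.

```python
# Minimal sketch of multi-agent orchestration with shared context and handoffs.
# Agents and routing are illustrative toys, not League's implementation.

class SchedulingAgent:
    def handle(self, message: str, context: dict) -> str:
        context["last_topic"] = "scheduling"
        return "Found three open cardiology slots next week."

class BenefitsAgent:
    def handle(self, message: str, context: dict) -> str:
        context["last_topic"] = "benefits"
        return "Your plan covers this visit with a $30 copay."

class Orchestrator:
    """Route each turn to a specialist agent while preserving shared context."""

    def __init__(self):
        self.context = {}  # shared across agents and across turns
        self.agents = {
            "schedule": SchedulingAgent(),
            "benefits": BenefitsAgent(),
        }

    def route(self, message: str) -> str:
        # A production system would use intent classification; keyword matching
        # keeps the sketch self-contained.
        key = "benefits" if "cover" in message.lower() else "schedule"
        return self.agents[key].handle(message, self.context)

orchestrator = Orchestrator()
print(orchestrator.route("Is a cardiology visit covered?"))
print(orchestrator.route("Great, schedule me for next week."))
```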
The technology addresses a persistent challenge in patient engagement: the proliferation of disconnected digital tools that fragment rather than streamline the healthcare experience. By orchestrating multiple agents under a unified interface, League aims to deliver the seamless experience consumers expect from digital services in other sectors.
Clinical Reasoning at Scale
Pangaea Data launched a platform designed to emulate clinical reasoning across large patient populations. The system analyzes both structured and unstructured data to identify patients who should be flagged for screening or treatment but slip through traditional rule-based systems.
The platform’s AI mimics how skilled clinicians think through diagnostic possibilities, weighing symptoms, risk factors, and medical histories to surface patients who merit clinical attention. Early deployments show the technology identifying six times more undiagnosed cancer cachexia patients than conventional screening methods.
The technology tackles a fundamental healthcare challenge: delivering high-touch clinical judgment at scale. While individual physicians excel at nuanced reasoning for patients in front of them, health systems struggle to apply that same clinical acuity across thousands or millions of members. Pangaea’s agentic approach aims to close that gap.
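A toy sketch of the underlying idea, combining a hard structured rule with soft signals mined from free-text notes, appears below; the thresholds, fields, and keywords are illustrative assumptions, not Pangaea Data’s method.

```python
# Illustrative sketch of flagging patients that rule-based screening misses by
# combining structured criteria with narrative signals. All thresholds and
# keywords are assumptions, not Pangaea Data's method.

def structured_flag(record: dict) -> bool:
    # A classic hard rule: documented weight loss above a fixed threshold.
    return record.get("weight_loss_pct_6mo", 0) >= 5

def narrative_flag(notes: list) -> bool:
    # Clinician-style reasoning also weighs soft signals buried in notes.
    signals = ("appetite loss", "muscle wasting", "fatigue", "cachexia")
    return sum(any(s in note.lower() for s in signals) for note in notes) >= 2

def should_review(record: dict, notes: list) -> bool:
    """Surface patients who meet either the hard rule or the narrative pattern."""
    return structured_flag(record) or narrative_flag(notes)

record = {"weight_loss_pct_6mo": 3}
notes = ["Reports appetite loss for two months.", "Notable fatigue and muscle wasting."]
print(should_review(record, notes))  # True, even though the weight-loss rule alone misses this patient
```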
Evidence-Based Medical Intelligence
OpenEvidence announced a $200 million fundraise at a $6 billion valuation—doubling its valuation in just three months—to build what the company positions as “ChatGPT for Medicine.” The platform emphasizes trustworthy, evidence-based AI that draws from peer-reviewed medical literature and clinical guidelines.
The company’s rapid valuation growth reflects investor conviction that generative AI in medicine must be grounded in validated evidence rather than probabilistic language models alone. OpenEvidence’s approach combines large language models with structured medical knowledge bases to provide clinicians with decision support that traces back to specific studies and guidelines.
The company now finds itself in direct competition with ambient documentation leaders like Abridge, which recently introduced similar evidence-based decision support capabilities. The convergence highlights how distinct AI categories—documentation versus decision support—are blending as companies pursue comprehensive clinical AI platforms.
Foundational Infrastructure for Agentic AI
InterSystems and Google Cloud announced a partnership to integrate InterSystems HealthShare with Google Cloud’s ecosystem. The collaboration delivers what the companies describe as a scalable, FHIR-ready data foundation specifically designed for generative and agentic AI applications.
The partnership addresses a fundamental challenge: agentic AI systems require access to comprehensive, interoperable patient data to make autonomous decisions. Without robust data infrastructure that can surface the right information at the right time, even sophisticated AI agents cannot function effectively.
The integration aims to solve healthcare’s persistent data fragmentation problem by ensuring clinical information is both interoperable and responsibly managed. The companies position the collaboration as essential infrastructure for the next wave of AI innovation—systems that act autonomously across care settings and workflows.
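As a rough illustration of what a FHIR-ready foundation enables, the sketch below pulls a patient record and recent labs over the standard FHIR REST API; the base URL and token handling are placeholders, and this is not the InterSystems and Google Cloud integration itself.

```python
# Sketch of how an agent might pull interoperable patient data over a standard
# FHIR REST API. The Patient read and Observation search shown are part of the
# FHIR specification; the server URL and auth are placeholders.

import requests

FHIR_BASE = "https://example-fhir-server/fhir"  # placeholder endpoint

def fetch_patient_snapshot(patient_id: str, token: str) -> dict:
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",
    }

    # Demographics: standard FHIR Patient read.
    patient = requests.get(f"{FHIR_BASE}/Patient/{patient_id}", headers=headers).json()

    # Recent laboratory results: standard FHIR Observation search parameters.
    labs = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_sort": "-date", "_count": 20},
        headers=headers,
    ).json()

    return {"patient": patient, "recent_labs": labs.get("entry", [])}
```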
Voice-Powered Patient Engagement
SoundHound AI showcased its Amelia AI Agent platform, demonstrating how agentic voice AI is transforming patient access. The company’s exhibit featured interactive demonstrations of AI agents handling complex, multi-intent healthcare conversations.
In one demonstration, the Amelia platform seamlessly managed a patient reporting an injury, rescheduling an appointment, and requesting a prescription refill—all within a single conversation. The system recognized multiple intents, checked facility hours, booked new appointment times, and integrated securely across backend systems without human handoff.
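A minimal sketch of multi-intent handling in a single turn follows; the keyword-based detector and canned responses are toy stand-ins for SoundHound’s production pipeline.

```python
# Minimal sketch of handling a multi-intent utterance in one pass. Intent labels,
# keyword matching, and responses are illustrative, not SoundHound's Amelia pipeline.

INTENT_KEYWORDS = {
    "report_injury": ["injured", "hurt", "injury"],
    "reschedule_appointment": ["reschedule", "move my appointment"],
    "refill_prescription": ["refill", "prescription"],
}

def detect_intents(utterance: str) -> list:
    """Return every intent present in a single utterance, in order."""
    text = utterance.lower()
    return [intent for intent, keywords in INTENT_KEYWORDS.items()
            if any(kw in text for kw in keywords)]

def handle_turn(utterance: str) -> list:
    responses = []
    for intent in detect_intents(utterance):
        if intent == "report_injury":
            responses.append("I've noted the injury for your care team.")
        elif intent == "reschedule_appointment":
            responses.append("The next available slot is Tuesday at 10:00.")
        elif intent == "refill_prescription":
            responses.append("Your refill request has been sent to the pharmacy.")
    return responses

print(handle_turn("I hurt my knee, need to reschedule, and I need a refill."))
```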
SoundHound’s Amelia platform has been deployed at major health systems including Allina Health, where an AI agent named “Alli” answers calls, authenticates patients through electronic medical records, and handles routine scheduling and administrative tasks. Early results show 5-10 second reductions in average call duration and successful offloading of routine requests from human call center staff.
The technology reflects a broader industry trend toward conversational AI that can handle full end-to-end workflows rather than single transactions. SoundHound’s voice recognition capabilities—designed to understand multiple accents, tempos, and background noise—enable more natural patient interactions compared to traditional menu-driven phone systems.
Ambient Intelligence Expands to Revenue Cycle
Suki expanded its ambient clinical intelligence platform beyond documentation to automatically generate medical coding. The enhancement allows the AI to listen to clinical conversations and produce specific CPT, E/M, ICD-10, and HCC codes without clinician intervention.
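To illustrate the idea, here is a toy sketch that maps transcript phrases to candidate codes for clinician review; real ambient coding relies on trained models rather than keyword lookups, and the mappings below are illustrative only.

```python
# Illustrative sketch of turning an ambient transcript into coding suggestions
# for clinician review. The keyword-to-code map is a toy stand-in; Suki's
# pipeline uses trained models, not lookups.

CODE_HINTS = {
    "type 2 diabetes": ("ICD-10", "E11.9"),
    "hypertension": ("ICD-10", "I10"),
    "established patient office visit": ("CPT", "99213"),
}

def suggest_codes(transcript: str) -> list:
    """Return candidate codes; a clinician still approves them before billing."""
    text = transcript.lower()
    return [
        {"system": system, "code": code, "evidence": phrase}
        for phrase, (system, code) in CODE_HINTS.items()
        if phrase in text
    ]

transcript = "Established patient office visit. Assessment: type 2 diabetes and hypertension, stable."
for suggestion in suggest_codes(transcript):
    print(suggestion)
```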
The expanded capabilities deliver measurable financial impact: a 48% reduction in amended encounters and an estimated net gain of $379 per clinician per month. By extending AI listening from documentation into revenue cycle management, Suki demonstrates how agentic systems can optimize multiple operational domains simultaneously.
The company also announced a nursing consortium to develop AI assistants specifically designed for nurse workflows. The initiative recognizes that physicians aren’t the only clinicians burdened by administrative tasks—nurses face similar documentation challenges with forms, flowsheets, and patient assessments.
Documentation Leaders Enter Decision Support
Abridge, which raised $300 million in Series E funding earlier this year, announced new capabilities that surface real-time clinical insights from UpToDate directly into physician workflows. The move positions the company—known primarily for ambient documentation—in direct competition with dedicated decision support platforms.
The announcement highlights how ambient listening creates a strategic platform for additional capabilities. By already capturing clinical conversations, Abridge can layer decision support, coding, and other intelligence on top of its documentation foundation.
The company now serves 150 of the largest health systems in the country across 55 specialties and 28 languages, processing over 50 million medical conversations annually. The scale provides Abridge with valuable training data and workflow insights as it expands beyond documentation into broader clinical AI services.
Agentic Evolution in Ambient AI
Nabla, which raised $70 million in Series C funding, is evolving from ambient documentation toward what the company calls an “agentic model” of clinical AI. The platform, now deployed across more than 130 healthcare organizations supporting over 85,000 clinicians, is moving beyond transcription to initiate actions within electronic health records.
The company’s vision encompasses unifying ambient listening, dictation, coding, and command capabilities into a single extensible platform. Rather than requiring clinicians to explicitly request specific actions, the agentic assistant would proactively execute tasks based on conversational context—staging orders, updating problem lists, and triggering follow-up workflows autonomously.
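A minimal sketch of staging EHR actions from a conversation summary while keeping clinician sign-off in the loop is shown below; the action types and review rule are assumptions, not Nabla’s implementation.

```python
# Sketch of an agent staging EHR actions from conversational context while
# keeping a human in the loop. Action names and the review rule are assumptions,
# not Nabla's implementation.

from dataclasses import dataclass

@dataclass
class StagedAction:
    kind: str          # e.g. "lab_order", "problem_list_update", "follow_up"
    detail: str
    requires_review: bool

def stage_actions(conversation_summary: dict) -> list:
    actions = []
    for lab in conversation_summary.get("labs_discussed", []):
        # Orders always wait for clinician sign-off before reaching the EHR.
        actions.append(StagedAction("lab_order", lab, requires_review=True))
    for problem in conversation_summary.get("new_problems", []):
        actions.append(StagedAction("problem_list_update", problem, requires_review=True))
    if conversation_summary.get("follow_up_weeks"):
        weeks = conversation_summary["follow_up_weeks"]
        actions.append(StagedAction("follow_up", f"Schedule in {weeks} weeks", requires_review=False))
    return actions

summary = {"labs_discussed": ["HbA1c"], "new_problems": ["Type 2 diabetes"], "follow_up_weeks": 12}
for action in stage_actions(summary):
    print(action)
```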
The Microsoft Clinical AI Ecosystem
Microsoft’s Nuance Dragon Copilot—formerly DAX Copilot—continues to demonstrate the staying power of established players in the ambient AI market. Used by more than 600,000 clinicians worldwide, the platform combines speech recognition, ambient listening, and generative AI to automate clinical documentation.
Northwestern Medicine reported physicians using Dragon Copilot see an average of 11.3 additional patients per month and spend 24% less time on notes. The technology achieved 112% ROI at Northwestern and is now being adapted for nurses with specialized workflow support launching in December 2025.
Microsoft’s broader healthcare strategy integrates Dragon Copilot with cloud infrastructure and other AI services, providing health systems with an extensible platform rather than standalone point solutions. The approach reflects recognition that agentic AI in healthcare requires not just sophisticated algorithms but ecosystem integration.
Strategic Implications: From Assistance to Autonomy
The agentic AI demonstrations at HLTH 2025 signal a fundamental evolution in healthcare artificial intelligence. Previous generations of clinical AI operated as sophisticated assistants—powerful tools that enhanced human decision-making but required explicit prompts and oversight.
The systems on display in Las Vegas operate with increasing autonomy within defined parameters. They anticipate needs rather than respond to queries. They execute multi-step workflows rather than complete discrete tasks. They coordinate across systems rather than operate in isolation.
This shift carries profound implications for healthcare delivery models. If AI can autonomously generate relevant evidence, coordinate patient navigation across benefits and care options, identify undiagnosed conditions at scale, and handle complete administrative workflows, the role of human staff fundamentally changes.
The opportunity lies not in replacing clinical judgment but in allowing scarce human expertise to focus where it matters most: complex cases requiring nuanced reasoning, emotionally difficult conversations requiring empathy, and oversight of autonomous systems to ensure they operate within appropriate clinical and ethical boundaries.
The Infrastructure Challenge
Yet realizing this vision requires solving challenges that transcend algorithmic sophistication. Agentic AI systems depend on comprehensive, interoperable data infrastructure—the focus of the InterSystems and Google Cloud partnership. Without access to complete patient information across care settings, even the most advanced agents cannot make informed decisions. This challenge extends beyond clinical data to encompass medication management workflows, supply chain operations, and administrative processes.
The systems also require robust integration with existing workflows and platforms. Standalone AI tools, no matter how capable, create friction if they don’t connect seamlessly with electronic health records, scheduling systems, billing platforms, and communication tools that staff already use.
Perhaps most critically, agentic AI raises fundamental questions about oversight, accountability, and the appropriate scope of autonomous decision-making in clinical settings. As systems move from recommending actions to taking actions, health systems must establish clear governance frameworks that define when AI can act independently and when human review is required.
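One way such a governance rule might look in code is sketched below, with illustrative risk tiers and confidence thresholds rather than any established standard.

```python
# Toy sketch of a governance check deciding whether an agent may act
# autonomously. Risk tiers and thresholds are illustrative policy choices,
# not an established standard.

LOW_RISK_ACTIONS = {"send_appointment_reminder", "answer_benefits_question"}
HIGH_RISK_ACTIONS = {"place_medication_order", "change_care_plan"}

def requires_human_review(action: str, model_confidence: float) -> bool:
    """Return True when policy demands a clinician approve before execution."""
    if action in HIGH_RISK_ACTIONS:
        return True                      # never fully autonomous
    if action in LOW_RISK_ACTIONS and model_confidence >= 0.90:
        return False                     # safe to execute directly
    return True                          # default to review for anything else

print(requires_human_review("send_appointment_reminder", 0.95))  # False
print(requires_human_review("place_medication_order", 0.99))     # True
```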
Patient-Centered Autonomy
For patients, agentic AI promises to reduce some of healthcare’s most persistent frustrations. Scheduling appointments, verifying insurance coverage, requesting prescription refills, and accessing test results are mundane administrative tasks that consume enormous amounts of time; in demonstrations of platforms like SoundHound’s Amelia or League’s Agent Teams, they become frictionless.
The technology could also address critical access challenges. AI agents that operate 24/7 in multiple languages help bridge gaps for patients with limited English proficiency, those working non-standard hours, or individuals in rural areas with limited staff availability.
Yet the same autonomy that streamlines access raises important questions about the patient experience. Will conversing with AI agents feel impersonal? Will patients trust autonomous systems to handle sensitive health matters? Will the efficiency gains come at the cost of human connection that many patients value?
The most thoughtful implementations on display at HLTH 2025 position agentic AI not as a replacement for human interaction but as a filter that handles routine matters efficiently so human staff can focus on complex needs requiring empathy, judgment, and personalized attention.
The Road Ahead
HLTH 2025 revealed an industry in the early stages of a significant transition. The technology demonstrations were impressive, yet most remain in pilot or early deployment phases. Proving value at scale across diverse health systems with varying data infrastructure, workflows, and patient populations represents the next challenge.
The companies exhibiting in Las Vegas are betting that healthcare is ready for AI that acts rather than merely assists. The coming months will test whether health systems embrace autonomous AI systems, whether regulatory frameworks can keep pace with technological capability, and whether patients accept AI agents as integral parts of their healthcare experience.
What’s clear from the HLTH 2025 exhibition floor: the question is no longer whether AI will transform healthcare operations but how quickly autonomous agents will become the standard interface between patients and health systems, and between clinical data and clinical decisions.
The agent will see you now. Whether healthcare is prepared to hand over that level of autonomy remains the industry’s most pressing strategic and ethical question.
– This original article was created with AI support.
Photo courtesy of HLTH Inc.