Leader Spotlight: Balancing innovation and trust in healthcare AI, with Anish Arora
Anish Arora is Vice President of Product at TigerConnect, bringing over 20 years of experience in healthcare technology. He specializes in AI-driven healthcare solutions, electronic health records (EHR), and interoperability. Anish has led product teams at top organizations like Cardinal Health and Ontada, focusing on improving clinical workflows and patient outcomes through intelligent automation and data integration.
In our conversation, he shares insights on designing AI that supports clinicians in high-pressure settings, balancing innovation with trust and usability, and what lies ahead for AI in healthcare.
Designing AI for clinicians in high-stress environments
At a high level, what’s your strategy for ensuring that AI products support rather than interrupt clinicians in high-stress environments?
Our strategy is rooted in a simple principle: the best AI is invisible. In healthcare, AI should quietly work in the background to remove friction, not add to it. Clinicians in high-stress environments don't have the cognitive bandwidth for new interfaces or unnecessary workflows, so we focus on AI that reduces noise, automates mundane tasks, and intelligently filters information. This lets clinicians stay focused on patient care rather than the tool itself. Success means reducing mental load and achieving outcomes efficiently, not simply increasing engagement metrics.
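To make that idea concrete, here is a minimal sketch of what urgency-based message filtering might look like. Every name here, from the Urgency levels to the ClinicalMessage fields, is a hypothetical illustration, not TigerConnect's actual implementation:

```python
from dataclasses import dataclass
from enum import IntEnum


class Urgency(IntEnum):
    ROUTINE = 0
    ELEVATED = 1
    CRITICAL = 2


@dataclass
class ClinicalMessage:
    sender: str
    text: str
    urgency: Urgency


def filter_inbox(messages: list[ClinicalMessage],
                 min_urgency: Urgency) -> list[ClinicalMessage]:
    """Surface only messages at or above the clinician's current
    urgency threshold; everything else waits for later review."""
    return [m for m in messages if m.urgency >= min_urgency]


# During a high-acuity period, raise the threshold so only critical
# messages interrupt the clinician (hypothetical sample data).
inbox = [
    ClinicalMessage("pharmacy", "Refill ready for room 12", Urgency.ROUTINE),
    ClinicalMessage("lab", "Critical potassium: 6.8 mEq/L", Urgency.CRITICAL),
]
print(filter_inbox(inbox, min_urgency=Urgency.CRITICAL))
```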
How do you approach designing human-in-the-loop systems that maintain clinician autonomy while unlocking AI-driven decision support? What does this look like in practice at TigerConnect?
Human-in-the-loop means that clinicians remain firmly in control, something that’s both sacred and essential in healthcare. Our approach is to empower the right individuals, whether they’re physicians, nurse managers, or clinical informaticists, by allowing them to customize how AI behaves in their specific environments. Every hospital and acute care setting operates differently. They have unique workflows, policies, and procedures. Instead of forcing clinicians into rigid AI parameters, we enable them to define their own rules and preferences.
Another key factor is how much the care team trusts the AI. Initially, they may want firm controls, checks, and balances in place. But over time, as trust in the system grows, they might choose to loosen some of those constraints, allowing the AI to handle certain tasks autonomously. That frees up clinicians to focus their attention on the most critical aspects of decision-making.
The key is to strike a balance: provide intelligent defaults and suggestions while preserving clinicians' final decision-making authority. AI should adapt to care teams, not the other way around.
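As an illustration of that trust gradient, here is a minimal sketch of per-unit autonomy settings where clinician sign-off is the conservative default. The names and task types are hypothetical, not drawn from TigerConnect's product:

```python
from dataclasses import dataclass, field


@dataclass
class UnitPolicy:
    """Per-unit configuration: the AI may act alone only on tasks the
    care team has explicitly opted in to; everything else requires
    clinician sign-off. Defaults are deliberately conservative."""
    autonomous_tasks: set[str] = field(default_factory=set)


def handle_suggestion(task: str, suggestion: str, policy: UnitPolicy,
                      clinician_approves) -> str:
    # Act autonomously only if this unit has granted autonomy for
    # this task type.
    if task in policy.autonomous_tasks:
        return f"auto-applied: {suggestion}"
    # Otherwise the clinician keeps final decision-making authority.
    if clinician_approves(suggestion):
        return f"applied after sign-off: {suggestion}"
    return "declined by clinician; no action taken"


# A new deployment starts with nothing automated...
policy = UnitPolicy()
print(handle_suggestion("route_message", "escalate to charge nurse",
                        policy, clinician_approves=lambda s: True))

# ...and later the care team opts in to autonomy for routine routing.
policy.autonomous_tasks.add("route_message")
print(handle_suggestion("route_message", "escalate to charge nurse",
                        policy, clinician_approves=lambda s: True))
```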
Incorporating clinician feedback and advisory input into AI development
What’s your approach to getting feedback from users, especially given how unique this audience is? How do you incorporate them into the product development process?
User feedback is absolutely critical to how we develop products. In our world, there’s really no other way. We don’t build something and then ask, “Does this work for you?” Instead, we collaborate with users from the very beginning, even before we write a single line of code.
We co-create concepts and early designs alongside clinicians, ensuring their input shapes the foundation of the product. We conduct extensive user research across various personas and care settings to deeply understand their workflows, needs, and environments. That insight becomes the basis for our user stories and guides how we build meaningful, usable solutions.
What role do cross-functional advisory councils, such as clinical customer advisory boards or data ethics boards, play in your AI product development cycles? And how do you ensure their feedback meaningfully shapes your roadmap?
Our best features, AI or not, haven’t come from a product backlog. They’ve come from listening to our customers, especially clinicians. Clinical advisory boards and data ethics councils keep us grounded in real-world needs. Their feedback doesn’t just inform our process; it actively shapes everything from product design to roadmap priorities.
As I mentioned earlier when we discussed user research, these advisory groups are core to how we build. We regularly interview practicing clinicians, nursing leaders, and informatics executives such as CNIOs and CMIOs to understand their daily challenges and to test ideas before we commit to building anything.
These conversations have a direct impact on our roadmap. If a nurse manager tells us they want more control over workflow automation but need appropriate guardrails, that becomes a design principle. When a physician tells us they want AI suggestions, but not decisions, we make sure our systems support that request rather than override their clinical judgment.
This kind of ongoing, real clinical feedback ensures we’re not building AI that looks impressive in a demo but fails in the real world.
Building transparency, trust, and control in AI workflows
Based on your experience, what are some of the most effective ways to give clinicians transparency and control in AI-powered workflows?
Transparency starts with visibility. Clinicians have every right to understand why AI behaves a certain way. That means building products with traceability in mind. For us, that includes audit trails that clearly show what triggered an alert or a recommendation. Different users, clinicians and non-clinicians alike, should be able to see the logic behind the AI’s decisions, adjust parameters, and understand the impact of any changes they make.
Control also means clinicians can override AI decisions at any time and customize the system to reflect their preferences and clinical judgment. At the end of the day, AI should function like a smart assistant, not a black box making decisions on their behalf.
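To illustrate the traceability described here, a minimal sketch of an audit entry that records what triggered an AI action, the parameters in effect at the time, and any clinician override. The fields and the troponin rule are hypothetical examples, not TigerConnect's actual schema:

```python
import datetime
from dataclasses import dataclass, asdict


@dataclass
class AuditEntry:
    """One traceable record per AI action: what fired, why, and
    whether a clinician overrode it."""
    timestamp: str
    action: str
    trigger: str      # the rule or input that caused the action
    parameters: dict  # the settings in effect at the time
    overridden_by: str | None = None


log: list[AuditEntry] = []

entry = AuditEntry(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    action="paged on-call cardiologist",
    trigger="rule: troponin > 0.4 ng/mL",
    parameters={"threshold_ng_ml": 0.4, "unit": "ED"},
)
log.append(entry)

# A clinician can override at any time, and the override itself is logged.
entry.overridden_by = "Dr. Rivera (already at bedside)"
print(asdict(entry))
```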
How do you evaluate third-party AI solutions and integrate them into your products without compromising your product’s trust, explainability, and compliance posture?
From a user's perspective, we treat AI the same way we'd evaluate any other member of the clinical team: if it's not explainable or accountable, it simply doesn't make the cut. When we assess third-party AI solutions, we hold them to high standards around explainability, auditability, and compliance. If we can't trace how an AI system makes decisions, it doesn't belong in our platform.
Before we even consider integration, we require comprehensive documentation, rigorous bias testing, and peer-reviewed validation studies. These are non-negotiables.
From a technical standpoint, we’ve built standardized integration frameworks that allow us to innovate without compromising on security or compliance. We also define clear boundaries around what data the AI can access, how decisions are logged, and how clinicians can override or adjust behaviors.
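As a rough sketch of such a boundary, here is a hypothetical adapter that allow-lists the fields a third-party model can see and logs every call for auditability. The class, field names, and stand-in model are illustrative assumptions, not an actual vendor integration:

```python
class ThirdPartyModelAdapter:
    """Wraps an external AI model behind a fixed boundary: it sees
    only an allow-listed subset of fields, and every call is logged
    so its decisions stay auditable."""

    ALLOWED_FIELDS = {"message_text", "unit", "urgency"}

    def __init__(self, model, audit_log: list):
        self._model = model
        self._audit_log = audit_log

    def predict(self, record: dict) -> str:
        # Enforce the data boundary: strip anything not allow-listed
        # (e.g., patient identifiers) before the vendor model sees it.
        scoped = {k: v for k, v in record.items()
                  if k in self.ALLOWED_FIELDS}
        decision = self._model(scoped)
        # Log the input scope and output so the call stays traceable.
        self._audit_log.append({"input_fields": sorted(scoped),
                                "decision": decision})
        return decision


# A stand-in vendor model, for demonstration only.
audit: list = []
adapter = ThirdPartyModelAdapter(model=lambda r: "route to charge nurse",
                                 audit_log=audit)
# The patient MRN below never crosses the boundary.
print(adapter.predict({"message_text": "bed request", "unit": "ICU",
                       "patient_mrn": "123456"}))
print(audit)
```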
At the end of the day, especially in healthcare, trust is everything. If clinicians can't trust the tools they're using, or can't understand how those tools work, then the technology becomes a liability, not an asset.
Evaluating AI readiness and measuring clinical impact
With the rapid speed of AI innovation, how do you evaluate whether an emerging AI capability is ready for clinical use? What specific steps do you take to determine if it belongs in your product roadmap?
That’s a great question and one that’s especially relevant for us. We’re not just building AI in-house; we’re also actively exploring partnerships and integrations with other vendors that are developing innovative capabilities. The aim is to bring in those advancements in a way that creates real value for our customers.
When we assess whether an emerging AI capability is ready for clinical use, we start with a rigorous evaluation focused on three key areas. First, we look at whether the technology works reliably. Then we ask if it will genuinely help clinicians in their daily workflows. And finally, we examine whether it can deliver measurable value to the organization.
We go deep into the underlying data science to understand how the models function and whether they’re accurate enough for healthcare settings, where the margin for error is incredibly small. Just as important, we test usability with real clinicians to see if the AI fits naturally into their existing workflows or if it creates more friction.
We also look for strong evidence that the AI can improve outcomes, whether that’s patient care, operational efficiency, or clinician satisfaction. In healthcare, we simply can’t afford to be early adopters of unproven technology. We need to see validation studies and understand how the AI performs across different clinical settings, workflows, and patient populations. Integration with existing systems also has to be safe and seamless.
And finally, we consider whether we have the right expertise in-house to implement, monitor, and maintain the AI solution effectively. At the end of the day, we only move forward with capabilities when we’re confident they’ll help clinicians work better, not just differently. The bar in healthcare is high because the stakes are high. We’re not here to build AI for the sake of it. We’re here to genuinely improve how care is delivered.
What frameworks or KPIs do you use to measure whether an AI-powered feature is truly delivering clinical or workflow value, not just being used, but actually improving outcomes or efficiency? Can you share some best practices?
We’re moving beyond traditional SaaS metrics and focusing more on measuring real clinical impact and outcomes. So instead of just counting user interactions or feature usage, we’re tracking workflow improvements, patient outcomes, and clinician satisfaction.
The ultimate goal is to connect AI performance to what truly matters: better patient care, reduced clinician burnout, and improved operational efficiency. If an AI feature isn’t moving the needle on those fronts, it doesn’t matter how technically impressive it is; it’s not considered successful.
Balancing clinician adaptability with a cohesive product vision
How do you balance allowing adaptability for different types of clinicians with maintaining a cohesive product vision?
What’s really important for us is to incorporate user feedback early before anything is built. We’ll present concepts to clinicians and say, “Here’s how we envision this working. Now tell us how it fits or doesn’t fit into your workflow. Does it support your decision-making, or is it adding more cognitive burden?”
This kind of early feedback and research helps us avoid missteps. It ensures that what we build fits naturally into how clinicians actually work, not how we assume they do.
More broadly, even though we're addressing many different AI use cases, our product vision is anchored in a few key principles: trust, accountability, transparency, and keeping humans in the loop. We aim to make AI as invisible and assistive as possible, supporting clinicians without ever getting in their way. These values don't just guide the product; they're also embedded in our design and user experience principles. That's how we stay adaptable while still maintaining a cohesive vision.
How do you validate AI product success when traditional metrics like fewer clicks or faster documentation don’t fully capture clinician satisfaction or patient care quality?
Healthcare is fundamentally different from typical B2C businesses, where metrics like number of clicks or login frequency directly tie to financial results. In healthcare, faster documentation or reduced clicks don’t tell the whole story about clinician satisfaction or the quality of care patients receive.
We’re evolving our approach to focus more on outcomes-based measurements that healthcare organizations truly care about. This means looking at metrics like emergency department throughput rates, patient boarding times, and response times to critical alerts. The real validation comes from department-specific outcomes that show whether AI-powered workflow automation helps patients move through the system faster and improves care coordination, not just counting system interactions.
For us, success means connecting our AI features to the quality and operational metrics hospital administrators and clinical leaders already track. When we can demonstrate that AI contributes directly to better patient satisfaction scores, reduced readmission rates, or improved efficiency, that’s when we know we’re delivering real value.
Ultimately, the real success is when clinicians tell us they wouldn’t want to work without AI support because it genuinely improves patient care and streamlines their workflows. That’s the North Star we’re constantly working toward.
On the future of AI in healthcare
What excites you the most about the future of AI in healthcare and where it can continue to deliver real value?
Healthcare is probably one of the most exciting industries to be in right now, especially when it comes to AI. It makes up about one-fifth of the US GDP, and studies show that more than half of that spending is wasted due to operational inefficiencies and suboptimal patient outcomes. The industry is also heavily staffed in areas where technology can make an outsized impact.
I see the low-hanging fruit of AI’s promise in healthcare as addressing that administrative waste. People often talk about AI doctors that can diagnose, read reports, and produce treatment plans, and while that’s progressing, the real opportunity lies in tackling the massive operational inefficiencies. That administrative overhead alone might account for roughly one-tenth of the US GDP. We’re talking about potentially two to three trillion dollars a year.
The great thing is that reducing this waste doesn’t have to come at the cost of patient care. On the contrary, by letting technology invisibly handle administrative and compliance tasks in the background, clinicians can spend more time focusing on patients. That invisible, supportive role of AI, freeing clinicians from burdensome paperwork so they can care better, is what I find most exciting about being in healthcare today.