Leader Spotlight: Adopting responsible AI in healthcare tech, with Archie Mayani
Archie Mayani is Chief Product Officer at Global Healthcare Exchange (GHX), where she leads the design and delivery of AI-powered solutions to modernize and optimize healthcare supply chain management. Before GHX, Mayani served as CPO at Change Healthcare, where she advanced value-based care through cloud-native Enterprise Imaging platforms and unified technologies to streamline clinical decision support and revenue cycle management. Earlier, as Head of Product and Content Operations at Amazon, she drove scalable product ecosystems, and as Vice President/GM at Optum, she led prevention and population health businesses. She’s also ranked among the Top 50 Women Leaders of San Francisco.
In our conversation, Archie shares how her teams at GHX are working to define ethical, responsible AI within the healthcare industry, including by forming an AI council. She talks about how the PM role is evolving to encompass more breadth than depth, especially in the age of new AI-related skill sets. Archie also discusses the importance of future-proofing product roadmaps against coming advances in technology.
The 3 E’s and an AI council
As CPO, how are you ushering your organization into the world of AI? What specific tools, frameworks, processes, or methodologies are you leveraging to educate your team and drive the necessary mindset shift?
AI isn’t new to us. We’ve been leveraging AI in some form for the last 15 years, and it’s like any other technology we use to solve customer problems. It’s not about chasing shiny use cases; it’s about making sure that we are building and prioritizing for our customers, even in how we talk about the work as a team. That thinking manifests in a framework I like to call the three E’s:
Explore — We love to run quick pilots and proofs of concept, even more so with AI. The path to production readiness for AI can be the long pole in the tent given data completeness, dataset accuracy, model sensitivity, and so on, so we run quick pilots and proofs of concept with customers embedded firmly in that loop
Embed — AI is in our core workflows by default, and our customers are embedded in each of our processes: how we look at the datasets, how we train the models, what the workflows themselves are, and whether they deliver the value our customers want. Most AI workflows start with humans, and we make sure the human can make the right decisions with the support they’re looking for
Elevate — We want our teams to think not just in terms of data leverage and features, but also in terms of building responsibly. Data governance, rights, and privacy all play into this, but we take it a step further. We have created a cross-functional AI council to ensure that everyone brings ethics and broader perspectives into how we build these products
The AI council is very interesting. Can you share more? Do you meet regularly, or is it mostly reserved for a new launch or consideration?
The AI Council is one of the most important structures we’ve put in place. Its core job is simple but essential: to ensure everything we build with AI drives real customer value and is grounded in responsible use. That means it’s not just about new launches; it’s about continuous oversight.
On the build side, the Council sets clear guardrails. No model goes live without passing bias audits, explainability checks, and real-world testing. No black boxes. If we can't explain how it works, we don’t ship it.
We also evaluate how we consume AI internally: Are we choosing the right technologies, the right partners, and the right patterns of adoption? Everything has to align with our values and our role in the ecosystem.
GHX sits at the center of the healthcare supply chain. We are the connective tissue between providers and suppliers, and our customers trust us with their data. That trust is earned through precision, transparency, and accountability. The Council exists to protect that trust.
Leveraging AI to bring users value
Healthcare is especially sensitive in terms of data and information. What does ‘responsible’ AI mean in this industry?
To me, responsible AI means knowing whether you are truly going to be helpful without causing unintentional harm. When a dating app hallucinates, it’s funny. In healthcare, that margin of error is razor-thin. Responsible AI means being useful and safe, and the model should support judgment, not override it.
At GHX, we never go straight to automation. We start with human-in-the-loop systems to build trust, measure accuracy, and generate real-world evidence. Only when a model meets our standards — technical, ethical, and clinical — do we even consider full automation.
This is not about moving fast and breaking things. It’s about moving precisely and earning trust at scale.
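The progression Mayani describes, from human-in-the-loop review to earned automation, can be pictured as a simple gate. Here is a minimal sketch of that idea; the class, thresholds, and API below are hypothetical illustrations, not GHX code:

```python
from dataclasses import dataclass

@dataclass
class HumanInTheLoopGate:
    """Track human review of model suggestions and gate automation on
    demonstrated accuracy. Thresholds are illustrative, not GHX policy."""
    min_reviews: int = 500        # real-world evidence required first
    min_agreement: float = 0.98   # human/model agreement threshold
    reviews: int = 0
    agreements: int = 0

    def record_review(self, model_output: str, human_decision: str) -> None:
        """Log one human review of a model suggestion."""
        self.reviews += 1
        if model_output == human_decision:
            self.agreements += 1

    @property
    def agreement_rate(self) -> float:
        return self.agreements / self.reviews if self.reviews else 0.0

    def may_automate(self) -> bool:
        """Consider full automation only after the model has earned it."""
        return (self.reviews >= self.min_reviews
                and self.agreement_rate >= self.min_agreement)
```

The point of the pattern is that automation is an output of measured real-world evidence, not a launch-day default.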
What’s your approach to aligning UX, product, and data science teams so that AI features are not just technically sound, but also usable, trusted, and effectively integrated into workflows?
AI only works when it's trusted, useful, and invisible. That requires tight integration between product, UX, engineering, and data science. We don’t treat these as separate functions. They operate as a single loop, iterating quickly, anchored in real user workflows.
Our product managers use PR/FAQs as a forcing function to define intent early. We’ve evolved that format to include AI behavior briefs covering what the model should do, how it should behave, and how it should feel in the user’s hands.
We also run user-in-the-loop sprints. Real users interact with early designs and model outputs, not just mockups. If the experience doesn't feel intuitive or the model doesn't earn trust, it goes back. And we always build a fallback with two-door decisions. If the model gets it wrong, the system still delivers value. That’s how you build confidence: by designing for failure and delivering utility anyway.
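That fallback idea is worth making concrete. A minimal sketch, assuming a hypothetical `model.predict` API and a rule-based stand-in for whatever deterministic logic a team already trusts; if the model errors out or is unsure, the user still gets an answer:

```python
def recommend(order, model, confidence_floor: float = 0.9):
    """Return a recommendation; fall back to deterministic logic when the
    model fails or is unsure, so the system still delivers value."""
    try:
        prediction = model.predict(order)  # hypothetical API
        if prediction.confidence >= confidence_floor:
            return prediction.recommendation  # trust the model
    except Exception:
        pass  # a model failure should never be a user-facing failure
    return rule_based_recommendation(order)  # fallback path

def rule_based_recommendation(order):
    """Deterministic stand-in, e.g. reorder from the last known-good supplier."""
    return order.last_known_good_supplier
```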
Have you ever gotten a particularly interesting or useful insight through that process?
One of the most valuable insights we’ve had came while building GHX Resiliency AI.
Healthcare supply chains are under real stress. Resiliency AI uses generative models to predict disruptions — backorders, outages, etc. — and gives users prescriptive recommendations. It works like any large language model, but trained on the realities of the healthcare ecosystem.
Initially, we focused on disruption confidence: how likely an event was. But when we showed it to customers, one insight changed everything: confidence alone wasn’t enough. A Band-Aid and a catheter aren’t the same. They wanted to understand clinical impact.
That small shift from confidence to clinical sensitivity made the tool exponentially more useful. It’s a reminder that real-world context beats theoretical performance. Listen closely to the users. The best ideas often come disguised as nuance.
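As a back-of-the-envelope illustration of that shift (the weights and item categories below are invented for the example, not GHX data), a priority score can combine disruption likelihood with clinical criticality rather than relying on confidence alone:

```python
# Illustrative only: clinical criticality weights are invented, not GHX data.
CLINICAL_WEIGHT = {
    "bandage": 1.0,          # easy to substitute, low clinical impact
    "catheter": 8.0,         # harder to substitute, direct patient impact
    "ventilator_part": 10.0,
}

def disruption_priority(item: str, disruption_confidence: float) -> float:
    """Rank disruptions by likelihood *and* clinical impact,
    not by confidence alone."""
    return disruption_confidence * CLINICAL_WEIGHT.get(item, 1.0)

# A 60%-likely catheter disruption outranks a 90%-likely bandage one:
assert disruption_priority("catheter", 0.6) > disruption_priority("bandage", 0.9)
```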
Are there other AI use cases that you think are particularly innovative in healthcare today?
AI in healthcare is no longer theoretical. It's operational. One of the most meaningful areas is diagnostic support. We’re facing real clinical workforce shortages with radiologists, cardiologists, and imaging specialists. AI can help by doing the first pass.
A great example is in mammography. Years ago, we saw early models that could identify potential cancer patterns with high accuracy, handling 98 percent of cases confidently and routing the remaining 2 percent for human review. That shift lets clinicians focus their time where judgment matters most.
The innovation here isn’t just the model. It’s how we design AI to extend human capacity, not replace it. That’s the pattern worth scaling.
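The triage pattern behind that mammography example reduces to confidence-based routing. A minimal sketch, with a hypothetical `model.score` API and an illustrative threshold that a real system would tune and validate clinically:

```python
def triage(case, model, auto_threshold: float = 0.98):
    """First pass by the model: clear high-confidence cases, route the
    uncertain remainder to a clinician for judgment."""
    confidence = model.score(case)  # assumed to return a value in [0, 1]
    if confidence >= auto_threshold:
        return {"route": "auto_report", "confidence": confidence}
    return {"route": "human_review", "confidence": confidence}
```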
Evolving to more breadth vs. depth
What methods do you employ to coach your team to scale how they leverage AI and build it in today's landscape?
At GHX, AI isn’t new. We were doing it long before it was called AI. What’s different now is the visibility. Classic AI ran quietly in the background; GenAI puts the experience in front of the user. That changes how teams think and how we coach. We’re shifting the mindset from AI as a tool to AI as leverage. Not everyone needs to know how a model is trained, but they do need to recognize where friction exists and ask, “Could a model do this better, faster, or smarter?”
For product teams, that kind of reasoning is essential. It’s not about knowing the algorithm. It’s about knowing the problem well enough to ask the right questions.
Scaling AI is not about adding more data science. It’s about scaling clarity on what to solve, and why it matters.
How do you see the role of product managers and technical functions changing with AI?
AI will reshape how we work, but not why we build. There’s a lot of noise about coding going away or engineers becoming obsolete. That misses the point. Yes, we’ll automate repetitive tasks, say, backlog grooming, test writing, and even parts of design. But the core question remains: Do you deeply understand your customer?
In healthcare, empathy is non-negotiable. You need the curiosity to ask better questions, and the judgment to solve real problems safely and meaningfully. That will never be replaced.
What changes is the surface area. Product managers may go broader, not necessarily deeper — but the craft, the thinking, and the obsession with impact always stay.
Future-proofing AI roadmaps
How do you see regulatory frameworks, such as the FDA's evolving stance on AI and ML, enabling or hindering AI innovation today?
Regulation is evolving, and that’s not a constraint; it’s a catalyst. The FDA’s move towards lifecycle oversight is a smart shift. It recognizes that AI isn’t static; it’s evolving. The teams are also evolving, adapting, and learning, and so is the technology. That’s a great thing.
I predict we’ll also see more flexible, continuous validation models over the next few years. That won’t be a bottleneck; it will standardize building responsibly at scale. Even the work some governments are doing, whether in the EU or in countries in Asia, to ensure responsible AI and build bias-free capabilities is a great place to start.
The models are only going to be as good as their creators, and it’s important to think through the ramifications downstream. Building some of these guidelines upstream is an important piece of the puzzle. I welcome the dialogue around continuous improvement, because we don’t know what we don’t know.
How are you empowering your team to adapt accordingly? And how are you future-proofing your roadmap with this in mind?
First, we have absolutely stopped thinking in terms of static features. Every new product idea has to pass the litmus test of whether it can continue to grow over time. Can it adapt? Can we build feedback into it? Whether it’s direct customer feedback loops or learning feedback, that shift is here to stay, and I think it makes us more durable to changes in the AI landscape.
Second is modularity. We design for modularity so that if we want to swap in a more advanced model next quarter, we can do that. AI is moving so fast that if we don’t build that flexibility into our designs, we could be at a precipice where we’d have to throw away all the great work. Keeping that modularity in mind, so things can be easily upgraded or swapped, is key to tying AI milestones to business outcomes (see the sketch below for the shape of that seam).
Lastly, AI will never be the hero of healthcare. The humans who save lives are the real heroes, as they should be. However, AI should be invisible scaffolding. It should be that quiet force that lets real humans do the heroic work of healing faster, smarter, and with less friction. Care should be affordable and accessible to all, and that’s what we are building at GHX.
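The modularity Mayani describes for the roadmap can be pictured as a stable seam the rest of the product codes against. A minimal sketch with invented vendor names; under this assumption, swapping models becomes a configuration change rather than a rewrite:

```python
from typing import Protocol

class DisruptionModel(Protocol):
    """The stable interface the rest of the product depends on."""
    def predict(self, order_history: list[dict]) -> dict: ...

class VendorAModel:
    def predict(self, order_history: list[dict]) -> dict:
        return {"disruption_risk": 0.12, "source": "vendor_a"}

class VendorBModel:  # next quarter's upgrade drops in behind the same seam
    def predict(self, order_history: list[dict]) -> dict:
        return {"disruption_risk": 0.09, "source": "vendor_b"}

def build_model(provider: str) -> DisruptionModel:
    """Choose the model via configuration, not code changes."""
    registry = {"vendor_a": VendorAModel, "vendor_b": VendorBModel}
    return registry[provider]()
```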