Leader Spotlight: Balancing trust and signals in complex marketplaces, with Khetiwe Richards
Khetiwe Richards is a B2B product leader who spent the first part of her career as a strategy consultant. After starting at Deloitte and earning an MBA from The Wharton School, she joined Bain & Company, where she refined the hypothesis-driven, first-principles thinking she brings to product. From there, she moved into strategy and product roles at Elavon, Analytics Quotient, and Rent before becoming Head of Product at Cartus, a corporate relocation services company.
In our conversation, Khetiwe talks about her approach to thoughtfully introducing AI into complex marketplaces and the importance of building trust across stakeholders as part of that process. She shares how to avoid solving for the loudest voice and, in turn, how to anchor decisions in the right metrics. Khetiwe also discusses the evolving role of AI, including when to use it and how context and governance will shape its impact on decision-making.
Knowing which customer signals to solve for
In B2B2C marketplaces, signals from clients, end users, and suppliers often conflict. How do you determine which problem is structurally important vs. just the loudest in the moment, and what signals do you trust most?
In B2B2C or marketplace businesses, the person or entity that’s paying, i.e., the client, is often the loudest. You’ll hear the client’s feedback, and that is really critical because your paying client controls the contract and drives revenue. But it’s important not to fall into the trap of treating every client request as a signal. You have to look at whether solving the problem for one side of the marketplace degrades another side’s experience.
At Cartus, one of the flagship products was a self-service benefits selection tool. The transferee could select which benefits they were interested in, and one of our clients asked us to build in quote functionality, which seemed simple and made sense on the surface. But as we thought more about it, we realized that generating a quote before the person has decided which benefits they want is premature. The customer hasn’t yet entered the information necessary for a proper quote, and the supplier needs that information to generate one.
If the user is in a self-service experience and hasn’t chosen that service yet, it’s too early to ask them for all of that information. So to balance that need, we created an estimate tool, because that’s really what the client was looking for. They wanted the employee to be informed about the benefits they were choosing and how much those might cost.
That’s the balance — you hear the signal, but you don’t over-rotate on doing exactly what the stakeholder is asking. You have to be thoughtful about how that impacts all stakeholders. The client didn’t love it at first because they were anchored on a more manual process, but once we explained the self-service nature, they understood.
Are there certain categories or instances where teams think they’re solving a user problem but are actually shifting friction to another party?
Absolutely. It’s important to understand who you’re solving for upfront. I’ve seen it a number of times where you think you’re solving the problem for the full stakeholder ecosystem, but you’re really only solving for one side.
For example, at Rent, the product was an advertising platform for people looking for apartments. Users go on the marketplace, look for an apartment, and submit a lead. If we’re only solving for our clients, the property management companies, then we’re solving for volume and quality of leads. They want all the information about when the user plans to move, what their family size is, if they have pets, etc.
But if we have all of those questions on the lead submission form, the renters won’t submit it, because it’s too much to fill out. So you really have to consider whether you’re solving for the core ecosystem or shifting friction from one group to another. If you strip down the questions, you’re shifting friction to the property, because now the property managers have to call the lead to get that information. On the opposite end, if you include everything, you’re shifting the friction to the renter, who has to fill out a long form just to learn more about the property.
There’s a real balance there. When you’re problem-solving, it’s not just about solving for one component — it’s about balancing friction and solving for the whole ecosystem.
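To make that friction tradeoff concrete, here is a rough sketch in code. The fields and scores are hypothetical, not Rent’s actual form; the point is only that requiring less shifts work to the property, and requiring more shifts it to the renter:

```python
# Hypothetical sketch, not Rent's actual form: every field carries a
# friction cost for the renter and an information value for the property.
FIELDS = {
    # name: (renter_friction, property_value)
    "name":           (1, 5),
    "email":          (1, 5),
    "move_in_date":   (2, 4),
    "household_size": (2, 3),
    "has_pets":       (1, 3),
    "income_range":   (4, 4),  # high friction: many renters abandon here
}

def form_cost(required: set) -> dict:
    """Show which side absorbs the friction for a given set of required fields."""
    renter = sum(f for name, (f, _) in FIELDS.items() if name in required)
    # Information you don't collect becomes follow-up calls for the property
    followup = sum(v for name, (_, v) in FIELDS.items() if name not in required)
    return {"renter_friction": renter, "property_followup": followup}

# form_cost({"name", "email"})  -> minimal renter friction, heavy follow-up
# form_cost(set(FIELDS))        -> the reverse: complete leads, abandoned forms
```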
Frameworks and signals across the full ecosystem
Have you developed any frameworks to predict when an optimization for one side will negatively impact another?
It’s funny — you can ask any of my team members, “What does Khet always say?” The number one thing they will tell you is, “What problem are you solving for?” That’s what I always ask my team. The two frameworks I use are simple: one, what problem are you solving for, and two, how are we changing the process as part of the solution? These aren’t formal frameworks, but they work because the problem needs to be holistic. You’re thinking about who the primary stakeholder is, but also how the problem impacts other stakeholders.
In answering how the process changes, you have to understand the current state, the future state, and the delta between them — what you’re actually changing and impacting. It’s straightforward, but that’s why I like it: what problem are we solving, and how is it going to change? If you think about more traditional frameworks, one is first principles — thinking about second-order effects when solving a first-order problem. What is the first-order thing, and what are the second-order effects it might have?
I come from a consulting background. Bain & Company has a hypothesis-first approach — before we build anything, what do we believe, and what assumptions have to be true to validate that? I use that mindset in product as well. What experience does the client need? What experience does the customer need? What experience do supplier partners need? That helps ensure the overall experience works.
What’s the primary metric you anchor on in a marketplace like this, and where have teams been misled by the wrong metrics?
This actually ties to the previous question about frameworks. One of the frameworks I love is the OKR framework, or objectives and key results. People talk about OKRs but sometimes don’t understand the spirit of them. For me, the essence is: what is the business objective we’re trying to solve for, and what are the key results that tell us we’re moving in the right direction?
There’s lifetime value and all kinds of key results a product person could use, but it’s important that a product team understands the mission and goals of the company. The team shouldn’t dream up objectives and key results that are separate from those of the organization — you need to focus on the core outcomes for the company.
For example, Cartus’ mission is to help people move with ease. If that’s the mission, then the primary goal should be: was it a successful relocation, and was the transferee happy? The client pays the bill, but if those two things are moving in the right direction, the client should be happy. Those are the core signals for whether we’re moving in the right direction, with other supporting metrics alongside them.
The same applies at Rent. The goal there is to help people find a home. If we’re spamming properties with hundreds of leads that consist of just a name and email, we’re not helping them close a lease. You could use the volume of leads as a metric, but that can lead you in the wrong direction. The client goal is to get leases. To do that, you have to be thoughtful about the balance of key health metrics — you need traffic and volume, but you also need quality. And you have to measure that quality to make sure it translates to what’s ultimately important, which is getting someone into a home.
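One way to picture that balance in practice is to discount raw lead volume by a quality score, so a flood of name-and-email-only leads doesn’t masquerade as progress. The fields and weights below are hypothetical and purely illustrative, not Rent’s actual metric:

```python
# Hypothetical fields and weights, purely illustrative of the idea above.
def lead_quality(lead: dict) -> float:
    """Score a lead 0-1 by how actionable it is for the property manager."""
    weights = {
        "phone": 0.3,          # reachable without email back-and-forth
        "move_in_date": 0.3,   # signals real intent and timing
        "household_size": 0.2,
        "has_pets": 0.2,
    }
    return sum(w for field, w in weights.items() if lead.get(field) is not None)

def quality_weighted_volume(leads: list) -> float:
    """Volume discounted by quality: a hundred name-and-email-only leads
    should not look like a hundred complete, high-intent ones."""
    return sum(lead_quality(lead) for lead in leads)
```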
Building trust when introducing AI
It’s impossible to have a product leader conversation today without talking about AI. In a domain like relocation, where decisions have significant financial and personal impact, how do you introduce AI into workflows where stakeholders need transparency and control?
It is a balancing act. In relocation, trust is really important; it is a benefit just like health, vision, or dental. You can’t get that wrong. If the policy says you can move your family and your pets, but then we tell the transferee, “Oh yes, we can move your horse,” when that wasn’t in the policy, the transferee has already planned around it. Now you have a financial dispute, a broken promise, and a client whose employee is irate mid-move. Trust in the whole service breaks down fast.
So it’s important that you’re using AI in a safe and trusted way to maintain that trust. I think about it in two aspects. One is: can AI be trusted to reliably solve the problem we’re trying to solve? That’s for the product team to determine. The second is: is the client or stakeholder ready to trust that AI solution? Those are the two sides of the coin.
First, you can evaluate through a cost-benefit lens. Is this the right solution? Can we afford to invest in it? That’s traditional product thinking applied with an AI lens. But with AI, you also have to be thoughtful about whether your end user will trust it.
A funny example: one of our large clients, one you’d expect to be very AI-forward, told their account rep that they didn’t want us to use any AI and asked us to stipulate that in our contracts. But when you peel that back, it’s not that they didn’t want AI — they just wanted to make sure Cartus was providing the service. So how you frame the use of AI matters.
Implementing AI does not automatically mean removing people. And if that is a major concern, as it is in relocation, then leading with “AI will answer transferee questions” lands very differently from “AI will allow us to meet your transferee’s unique needs, whether that’s answering a quick question or putting them directly in touch with their consultant.” Positioning matters as much as the technology.
Lastly, starting internally is often helpful. Automating known, repeatable processes is a good place to begin. We did that at Cartus through initiatives like reading invoices, auditing, and automating templated processes. By the time the client was ready, we were ready to move AI into external experiences.
Have you had any experiences where you’ve seen a team misapply AI or use it in a situation that’s not the right use for it?
One of my pet peeves is, “Let’s come up with an AI roadmap.” AI is a tool that can be used to solve problems, but it is not the roadmap on its own. You need to be thoughtful about what problems you’re trying to solve with AI. Everything you’ll hear from my team and me comes back to: what problem are we trying to solve?
I’ll give you an example where we applied AI, not necessarily in the wrong place, but maybe at the wrong time or without the right support. At Rent, we launched a natural language search bar on apartment.com. It was similar to how you can type into Amazon in plain English and get what you need. This was in the early 2020s, around COVID. Users weren’t adopting it, and we couldn’t figure out why. We thought it was a great experience.
As we discussed it, someone pointed out that the low adoption rate was probably about trust. If I’m shopping and I type, “I want fun pajamas for an eight-year-old boy,” I’m OK with getting results that are close. But if I’m looking for a home, I want it to be exactly what I’m asking for.
At the time, people weren’t experienced enough with AI and natural language search to trust that if they typed “patio,” the system might reasonably interpret that as a balcony or an outdoor space. So there wasn’t enough trust. People preferred to use filters because then they knew exactly what they were choosing. That’s an example where AI may not have been the right choice at that time. Or we could have blended approaches — using AI for the search, but traditional methods to show what was inferred, to give users more visibility and trust.
We eventually rolled it back, and the adoption data made that call clear. Users were telling us through their behavior that they trusted filters more than free-form search, and we listened. Many home search websites have natural language search now. The solution was right, the timing wasn’t.
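A minimal sketch of that blended approach, with a toy synonym map standing in for whatever model actually does the interpretation (all names here are hypothetical): parse the free-form query into the same structured filters the UI already exposes, then surface the inferred filters as editable chips before running the search:

```python
# Toy synonym map stands in for the real interpretation model.
SYNONYMS = {"patio": "outdoor_space", "balcony": "outdoor_space", "yard": "outdoor_space"}

def parse_query(query: str) -> dict:
    """Map a free-form query onto the same structured filters the UI exposes."""
    filters = {}
    for word in query.lower().split():
        if word in SYNONYMS:
            filters[SYNONYMS[word]] = True
        elif word.endswith("br") and word[:-2].isdigit():
            filters["bedrooms"] = int(word[:-2])
    return filters

def interpret(query: str) -> dict:
    """Return inferred filters plus human-readable chips, so the UI shows an
    editable interpretation instead of silently searching on a hidden one."""
    filters = parse_query(query)
    chips = [
        key.replace("_", " ") + ("" if value is True else f": {value}")
        for key, value in filters.items()
    ]
    return {"filters": filters, "chips": chips}

# interpret("2br with a patio")
# -> {'filters': {'bedrooms': 2, 'outdoor_space': True},
#     'chips': ['bedrooms: 2', 'outdoor space']}
```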
AI tools are evolving and changing so quickly. As an executive, how do you decide how in-the-weeds you want to get with AI tools, and do you intentionally carve out time to upskill in that area?
As a leader, I need to understand the tools, the processes, and the work my team is doing so I can support them effectively. So yes, I carve out time to dive into the tools, try them out myself, and build things. Honestly, it’s quite fun. I have an undergrad degree in computer science, and AI has allowed me to tap back into that part of myself that was writing code and building things.
I think it’s important for a product leader to stay close to the technology, not just for that reason, but because AI is changing how we structure our teams. There are product managers who implement AI features, and there are also AI-native solutions that require AI PMs. That differentiation is new, but a lot of people are talking about it. Are you a PM working on AI features, or an AI PM building an AI-native solution? The problems they’re solving are slightly different, and the skill sets are nuanced. As a leader, it’s important to understand that difference, so you can build your teams appropriately, deploy them against the right problems, and be effective.
How AI is reshaping decision-making and ownership
How do you see AI changing how decisions get made across the multi-stakeholder marketplace?
I think we’re on a really interesting frontier. It’s less about the LLMs themselves and more about context engineering: where does the information live, and what information is guiding the AI?
What I find interesting is that if AI is making decisions, then the available information becomes the most important source of truth. That becomes your most important asset. Whoever controls that context controls the outcome. So how are you, as an organization, making sure that the context the AI is using is up to date, correct, and reflective of your principles, governance, guardrails, and policies? As more decisions are pushed to technology, that becomes really important.
Even if you maintain a human in the loop where the final decision is made by a person, AI still plays a major role in the fact-finding and information leading up to it. That underlying knowledge set becomes critical.
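As a rough sketch of what treating context as a governed asset could look like, imagine every document that can reach the model carrying an owner, a version, and a freshness window. The types and thresholds here are hypothetical, not any specific product’s implementation:

```python
from dataclasses import dataclass
import datetime

@dataclass
class ContextDoc:
    name: str
    body: str
    version: str
    approved_by: str      # governance: someone owns this content
    updated: datetime.date

MAX_AGE_DAYS = 90  # hypothetical freshness guardrail

def build_context(docs: list, today: datetime.date) -> str:
    """Assemble only approved, current documents into the model's context."""
    usable = []
    for doc in docs:
        if not doc.approved_by:
            raise ValueError(f"{doc.name} has no approving owner")
        if (today - doc.updated).days > MAX_AGE_DAYS:
            raise ValueError(f"{doc.name} v{doc.version} is stale; re-approve it")
        usable.append(f"[{doc.name} v{doc.version}]\n{doc.body}")
    return "\n\n".join(usable)
```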
Even with a human in the loop, who owns the ‘context’ that AI relies on, and how do you feel that responsibility should be structured?
I don’t know that there’s a standard way to determine who owns the context right now because it’s so new, especially at the enterprise level. I don’t think there’s an agreed-upon practice yet. It raises the question of whether there will be a context engineering or context owner role that coordinates across teams. Or will it sit with individual stakeholders, so marketing owns marketing context, for example? It could be pushed out to the core business. I think that’s going to be really important.
Even with a human in the loop, where someone is reviewing the output, people may approve the result most of the time, feeling that they made the decision. But a meaningful percentage of the time, they may actually be approving a wrong answer, because there’s a bias toward assuming the AI is correct. So in this kind of world, context becomes even more critical. You have to make sure the inputs and guardrails are in place to prevent that type of risk.
What does LogRocket do?
LogRocket’s Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps. Understand where your users are struggling by trying it for free at LogRocket.com.