Leader Spotlight: Experimentation within an established core product experience, with Laure Marchand
Laure Marchand is Director of Product Management at OfferUp, a digital marketplace connecting local buyers and sellers. She began her career in sales optimization and marketing at Monte-Carlo Société des Bains de Mer before transitioning to Auto Escape, where she eventually led revenue management. Laure then moved to product management at CarRentals.com, working on the core product as well as search and analytics. Before her current role with OfferUp, she spent over two years as a senior product manager at Nordstrom.
In our conversation, Laure talks about how to run high-velocity experimentation while limiting risk to the core product experience, and why protecting that core must come before monetization. She explains how OfferUp distinguishes between features that belong to everyone and paid accelerants designed for its most active users and business customers. Laure also reflects on the hidden risks of “winning” experiments and how AI is reshaping PM work.
Monetization for different user groups
When you’re building a platform product, how do you distinguish core features from paid features?
I think one of the most important things is really knowing the core of your business and your business model, and being able to say, “Hey, this feature belongs to the core experience, we cannot really put it behind a paid gate. The people who have been using the app for a long time are going to have a degraded experience if we do that.”
That’s how we think about introducing subscription products. How do we protect the core of the experience for our users, making sure that people who have been successful this whole time as buyers or sellers continue to be successful? Then, what are the things that we can provide to them to make them even more successful? That’s what you would put behind a paid product.
User segmentation is another way to look at monetization. For example, at OfferUp we work with businesses and dealerships. They are great partners whose success we want to enable, but they are not the majority of our user base. Our core user base is made up of casual buyers and sellers like you and me. Businesses pay us, and we owe them dedicated features that do not apply to everyone on the platform.
Do you think about users as one group, in the sense that your goal would be to make everyone a paid user? Or do you think you always need to create an experience for that unpaid group?
I think about this a lot, because as a user myself, I'm generally anti-subscription. My thought is that you should always keep a part of your product that's unpaid, because a lot of people are like me and pay close attention to that, even more so in the current economy.
I've found that users who are willing to pay are usually your most active and loyal users, and they behave very differently from somebody who's just casually coming in every quarter or so. The paid features are geared toward this segment in particular, because they're using the app so much that they want even more. To me, it's different user segments with different needs, and your product needs to support them in different ways.
Testing intelligently
What observations came out of using tools like Statsig that shifted the way you were thinking about your product roadmap?
It was a bit of a journey. One of the big gains with moving to a platform like Statsig is analytics. It makes you much faster at understanding what an experiment is producing, analyzing the results, and moving on to the next phase.
We went from running just a couple of experiments to really ramping up that process. But we got to a point where the experience itself became disjointed for our users. We had tests changing certain elements on the page, and each of those changes separately had a positive impact, but taken together they made it more complicated for users to be successful on our app.
The second shift in how we look at experiment results happened more recently. Yes, a test could be a winner on those short-term KPIs, but you absolutely need to look at long-term retention and understand the impact of features, especially in combination. We came to that realization because our users were saying, “Your experience is so complicated nowadays.” If we had looked more at funnel analysis, how it changed the journey, and how it changed retention, we probably would have made different calls on some of those experiments.
On the topic of experimentation, what trends or challenges are you seeing among product managers and leaders in trying to run more effective experiments?
With all the tooling that we have right now, there’s a tendency to want to test every little thing. But it’s hard for product managers to come up with so many fully baked hypotheses and tests. If you don’t have a solid hypothesis and you’re so low-level as to test the shapes of on-screen buttons, it might not be worth it. What are you actually trying to drive with this?
At the same time, someone might say, “I’m just changing this copy. I’m not going to test it.” These tests can be the most impactful because changing copy might lead the user in a completely different direction. It’s an ongoing practice of: what are you really trying to learn? Try to isolate the test, too. You cannot test all of it at once because then your result’s going to be muted.
If I had one piece of advice, it’s to take the time to define what you want to test and what the goal is, clearly define your main KPIs, and make sure you have more long-term KPIs as guardrails. That’s what makes an experiment successful — not necessarily a winner test, but a test where you learn what your next steps should be from there.
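To make that advice concrete, here is a minimal sketch, in Python, of the kind of ship decision Laure describes: the primary KPI has to win, and long-term guardrails such as retention must not regress. The metric names, thresholds, and helper function are hypothetical illustrations, not OfferUp's or Statsig's actual tooling.

```python
from dataclasses import dataclass

# Hypothetical readout for one experiment arm vs. control.
# Metric names and thresholds are illustrative only.
@dataclass
class MetricResult:
    name: str
    relative_lift: float   # +0.03 means +3% vs. control
    p_value: float

def ship_decision(primary: MetricResult,
                  guardrails: list[MetricResult],
                  alpha: float = 0.05,
                  max_guardrail_drop: float = -0.01) -> str:
    """Ship only if the primary KPI wins and no long-term guardrail regresses."""
    primary_wins = primary.p_value < alpha and primary.relative_lift > 0
    regressions = [g.name for g in guardrails
                   if g.p_value < alpha and g.relative_lift < max_guardrail_drop]
    if not primary_wins:
        return "no ship: primary KPI did not improve"
    if regressions:
        return "no ship: guardrail regression in " + ", ".join(regressions)
    return "ship"

# Example: the short-term KPI wins, but 28-day retention regresses.
print(ship_decision(
    MetricResult("listing_conversion", +0.04, 0.01),
    [MetricResult("d28_retention", -0.03, 0.02),
     MetricResult("buyer_funnel_completion", +0.01, 0.40)],
))
```

In this toy readout, the conversion lift alone would look like a winner, but the retention guardrail flags it, which is exactly the kind of call Laure says she would make differently with long-term metrics in view.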
How do you encourage a culture of experimentation in your team and your company without testing everything all the time?
There are two things I always do. One is to set really clear goals. What’s the problem you’re trying to solve for the user, what’s your hypothesis, and why did you build this thing in the first place? Be aligned as a team on what you’re trying to solve.
Second, I’ve seen organizations where, as part of PM goals, you have to run however many experiments per quarter. This is not the right goal. The right one is a win percentage or ratio. It doesn’t matter how many you run. You might run only three, but two are really strong winners. That changes the business.
The role of AI in accelerating PM work
It’s impossible not to see how AI comes into play in product organizations, so where do you see AI accelerating PM work — and where does it overstep?
The base of our work as PMs takes a lot of time: user research, competitive research, digging through all your app reviews and customer care reports, etc. And with layoffs throughout the industry, this type of work has fallen on PMs more often. It's been a hard transition.
With AI, the research piece is a tremendous accelerator. Research that would have taken one or two weeks now takes 15 minutes. You can ask Gemini to look at competitors and what they're doing for this type of feature, scan app reviews, and summarize how people feel.
The other part is how the role evolves. The lines between UX research, designers, product managers, and engineering start to blur. You can take insights, form a hypothesis, and build a bare-bones prototype without working with your designer. That's accelerating, though it's not quite there yet, and there's a lot of rework to make it match your business outcome. Even if we have to rewrite things, it shaves a lot of time off the pre-work.
Where it’s not quite there yet is similar to experimentation. If you don’t define clearly what you’re trying to solve and your probable ideas or hypotheses on how to solve it, AI will not tell you that. If you don’t prompt it properly, you’ll get an answer that’s maybe not aligned with what you’re trying to accomplish.
With all that said, are you putting guardrails in place for internal AI use? And do users specifically want or ask for AI in the product?
Internally, we want everyone to be exposed. There's no process per se; it's more like go and experiment, but within company guardrails. We're using Gemini as our approved AI tool, and it doesn't use our data to train its models outside our organization. Everyone talks about the excitement around AI, but there's also fear. When I ask if people have tried new prototypes, most of the time the response is, “No, not really.”
So I keep pushing it a little. Every time we’re starting something new, the first question I’m asking is, “Did you use Deep Research to look at what competitors are doing? Where do we sit compared to our competitor for this particular feature or for this particular problem?”
On the user side, AI is not new. In trust and safety, it's always been the number one thing we work on. And on the backend, we've been using these techniques to augment listing data. If someone posts that they're selling a black chair, great, but there's not enough info for search to find it. So we extract and augment data to make our systems work properly.
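As a rough illustration of that listing-augmentation idea, here is a toy Python sketch that pulls structured attributes out of a sparse title so search can match on them. The vocabularies and field names are made up, and a production system would use ML or LLM extraction rather than keyword lookups; this is not OfferUp's actual pipeline.

```python
import re

# Toy attribute extractor: turn a sparse listing title like "black chair"
# into structured fields that a search index can match on.
# The vocabularies below are illustrative, not a real taxonomy.
COLORS = {"black", "white", "brown", "gray", "blue", "red"}
CATEGORIES = {"chair": "furniture", "sofa": "furniture",
              "iphone": "electronics", "bike": "sporting goods"}

def augment_listing(title: str) -> dict:
    tokens = re.findall(r"[a-z0-9]+", title.lower())
    return {
        "title": title,
        "color": next((t for t in tokens if t in COLORS), None),
        "category": next((CATEGORIES[t] for t in tokens if t in CATEGORIES), None),
    }

print(augment_listing("Black chair, barely used"))
# {'title': 'Black chair, barely used', 'color': 'black', 'category': 'furniture'}
```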
More recently, and maybe more critically, we've started to look into whether users want AI. To me, it's more about whether they need it and, if so, where they need it the most. For example, about a year ago, we built an AI-assisted posting experience: users can take a picture, and we auto-fill everything. We tested it, and one hypothesis was that it would drive retention through increased frequency of use, because people would post more when it's so easy. That didn't materialize, though. People posted quickly, but it didn't change their fundamental behavior; they still only came to our product to sell things when they needed to.
With that said, we did see a lift for those items: buyers were finding them more easily and buying them. But people didn't really accept the price recommendations we created, and even with AI-powered descriptions, they were going back in to change things. The trust wasn't there at the time. But AI is at a different state now, and users' states of mind are always changing as well.
In general, the main thing is not to ship a feature with AI just because it’s called AI. You need to think about your users and where they need it most.
When PMs transition from backend work to doing things that are more customer-facing, how do you get them to build that empathy for customers? Is that a difficult thing to coach people on?
I’ve always tried to think about the user. Even for backend changes, you need to think about who your core user is. What are the things that you could do, even if they’re not UI related, to help solve their pain points?
To me, the transition is not necessarily difficult, but the attention to detail is. When you work on big backend stuff, it’s very straightforward. The databases and APIs need to be a certain way, and we’ll serve this data by doing X. On the UI side, it’s more difficult because you have a lot of opinions. Plus, your opinions are not necessarily always right because your users are not you. In the experience itself, it’s important to try it and see how it feels before you move on with a feature. You also have to be OK with being proven wrong.
Empathy, data, and effectively coaching PMs
Did you find that this is similar to having to shift from quantitative to qualitative insights? How do you strike that balance after having worked with one extreme for so long?
You need to merge quantitative and qualitative feedback. One tendency for PMs on the UI front is to go with qualitative feedback because that’s what people see and complain about. When you read feedback that says, “This is not efficient for me — I hate it,” you think, “This is my product. I don’t want people to talk about it like this.” But you have to look at the data. How many people share this sentiment? Is it actually preventing people from converting — from buying something?
Sometimes, it’s a case where one user says notifications aren’t intuitive. But let’s see if there are more — and if we have data that tells us if that’s a true blocker for a lot of users.
Ultimately, needs and wants are different. People might say, “I want this feature because Facebook has it.” That’s not necessarily solving their actual problem. It’s really important to dig into what the actual problem is.
To wrap up, what guidance would you give to someone who is new to product management about navigating what the field looks like now?
The way I see the PM role evolving with AI in particular is that AI will do a lot of junior-level work, whether that's product, engineering, or design. The advice I'd give to younger PMs going into the field is to keep being curious and really dig into things. That's what's going to get them to seniority faster. Ultimately, that curiosity and ability to dive deeper will help them be successful in this new world. Critical thinking and strong business acumen, coupled with AI, will likely shape the product of the future.