Leader Spotlight: Designing experiments for modern buyer behavior, with Laura Laytham
Laura Laytham has 20+ years of experience leading end-to-end website rebuilds, platform migrations, and growth programs. She started her career in media at Primedia Group, working on gURL.com and Seventeen.com. Laura then joined Total Beauty Media as a founding product/tech lead before transitioning to Golf Channel as Director of Product & Technology for Golf Channel Digital. She served in digital strategy leadership at Akamai and as Head of Web (Web Strategy & Operations) at Sisense, an API-first analytics platform. Additionally, she continues to provide fractional CDO/Head-of-Web services to startups, non-profits, and media brands.
In our conversation, Laura shares how she approaches experimentation with a pragmatic, outcomes-driven mindset. She discusses how modern buyer behavior, shrinking attention spans, and low-commitment preferences are reshaping B2B journeys, and reflects on the role of leadership in building sustainable testing cultures.
Defining what’s worth testing
With extensive experience in different forms of testing and product performance, how do you decide when an issue is a good candidate for an A/B test?
The biggest thing to think about is whether we have a strong hypothesis: if we do this, then we think this will happen, and is there a clear way to define whether it did or didn’t happen? That’s what I try to stick to. If someone brings me a test, I ask: Are we clear on what we’re testing? Are we testing this because we want this outcome versus that outcome? And can we actually measure the result properly? If it’s too nebulous or gray, it doesn’t really make sense as a test. We need to rethink how and what we’re testing so we end up with actual data-driven analytics.
For example, I was working with someone who wanted to test changing an H1 on a page. His goal wasn’t engagement, but to see whether paid campaign dollars changed based on the different H1. We initially ran the test through VWO, but he wasn’t seeing any change.
Once I understood what he was really trying to achieve, I realized the issue was that the variant was being delivered through the A/B testing tool on the frontend. Search engines likely weren’t seeing it. So we flipped the test: the variant became the default header on the page, and the control became the alternate.
So we had six weeks of control data, and then six weeks of variant data. We still had engagement metrics in VWO, but now we could also see whether ad spend and pricing metrics changed on his side. Once I understood the goal better, we evolved the test to something we could actually measure with the tools available.
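As a rough illustration of that flip, the sketch below (a hypothetical Flask route with made-up copy, not the actual implementation) serves the new headline as the server-rendered default so crawlers and paid landing traffic see it in the initial HTML, and only shows the old control to an explicit holdback slice:

```python
# Minimal sketch of the "flipped" test: the new H1 is the server-rendered
# default; the old headline is shown only when a visitor is explicitly opted
# into the holdback. Framework, route, and copy are illustrative assumptions.
from flask import Flask, request, render_template_string

app = Flask(__name__)

NEW_H1 = "Analytics your whole team can act on"   # hypothetical variant copy
OLD_H1 = "The API-first analytics platform"        # hypothetical control copy

PAGE = "<h1>{{ headline }}</h1>"

@app.route("/")
def home():
    # Search engine crawlers never run the client-side testing tool, so they
    # only ever see whatever is in the initial HTML. Serving the variant as
    # the default means the test is visible to search and paid campaigns.
    in_holdback = request.args.get("variant") == "control"
    headline = OLD_H1 if in_holdback else NEW_H1
    return render_template_string(PAGE, headline=headline)
```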
When you run a test and you have multiple options for the user, how great a discrepancy in results do you need to see to consider it significant?
Tools like VWO can help by telling you when there’s enough participation and a clear enough winner to end a test. But probably 50 percent of the tests I ran last year never hit that threshold, either because the page didn’t have enough traffic or time, or because there wasn’t a clear winner overall. We can still look at the results, though, because they at least tell us whether we got somewhere.
If a page gets 2,000 visits and one version performs 5% better, that might not be statistically flagged as a winner, but in B2B, that can still matter, especially for lead gen. Any little bit can count.
If I see enough of a signal, even if the tool doesn’t formally call it, we might still choose to adopt or invest in it. But I wouldn’t leave it there. I’d then say, “OK, we either keep the winner or stick with the control. Now what’s the next thing we test?” If something didn’t move much, that tells us we need to rethink what lever to pull next.
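To put rough numbers on the 2,000-visit example above, here is a quick sketch of a two-proportion z-test with hypothetical conversion counts (not figures from Laura’s actual tests), showing why a modest lift at that traffic level rarely clears a formal significance threshold:

```python
# Rough two-proportion z-test illustrating why a small lift on ~2,000 visits
# rarely reaches formal significance. All numbers below are hypothetical.
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert the z-score to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 2,000 visits split evenly; the variant converts ~5% better in relative
# terms (4.0% vs. 4.2%), i.e. 40 vs. 42 conversions.
p = two_proportion_p_value(conv_a=40, n_a=1000, conv_b=42, n_b=1000)
print(f"p-value ~ {p:.2f}")  # roughly 0.82, nowhere near a typical 0.05 cutoff
```

In a lead gen context, that statistically inconclusive 5 percent lift can still be worth adopting, which is exactly the judgment call described above.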
How do you encourage continuous learning and retesting without creating an environment where you’re never settling on anything?
What works for us is having clear ownership. If I lead A/B testing, then I can decide what we test, how long we test, and when we stop.
If something is a clear winner and it doesn’t introduce risk or negative business impact, I can end the test and implement it immediately. We all agreed it was worth testing, and now we act on it.
In terms of iterating on tests and finding next steps, I prioritize a regular review of every A/B test we’ve run and what its outcome was, and then we review those outcomes as a team. This creates space for collaboration. Someone might say, “We tested that, but have we tried this?” I like some democracy in the process. I’ll usually take those ideas, refine them into proper tests, and slot them into a future plan.
You need collaboration, but you also need a leader who can take action. Otherwise, it becomes too collaborative, and you stop making progress.
Personalization and the user experience
You mentioned your work on predictive personalization. When you’re working on that kind of algorithm or generating that predictive personalization, how do you ensure that you have data that’s actually going to create quality personalized experiences?
At Golf Channel, it was trickier because we were doing it more ad hoc. We didn’t have a formal testing tool. Later, with Akamai, we used Adobe Target, which helped measure how different variants performed across audiences. At Sisense, we hadn’t fully implemented personalization yet, but tools like VWO support similar approaches. For example, if someone comes to the homepage and we’re featuring case studies, we might show a financial case study to someone in finance, a media case study to someone in media, and so on.
If you’re using something like ZoomInfo, you can see the inferred industry for a particular user. Then we can tailor the experience so it’s relevant to each person. Especially in media, I learned that FOMO is powerful. If you see your competitor is using a product and getting results, it can be very motivating.
With some of these tools, you can see how many people from each segment clicked and how they engaged. If one audience responds strongly to personalization and another doesn’t, that informs where you invest next.
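As a simple illustration of that approach, the sketch below (hypothetical slugs and mapping, not an actual VWO or ZoomInfo integration) picks which case study to feature on the homepage from a visitor’s inferred industry, with a generic fallback:

```python
# Industry-based personalization sketch: map the visitor's inferred industry
# (e.g., from a firmographic enrichment tool) to a relevant case study.
# Slugs and industry names are illustrative assumptions.
CASE_STUDIES = {
    "finance": "how-a-bank-cut-reporting-time",
    "media": "how-a-publisher-grew-ad-revenue",
    "healthcare": "how-a-hospital-network-unified-data",
}
DEFAULT_CASE_STUDY = "customer-stories-overview"

def pick_case_study(inferred_industry: str | None) -> str:
    """Return the case study slug to feature for this visitor."""
    if not inferred_industry:
        return DEFAULT_CASE_STUDY
    return CASE_STUDIES.get(inferred_industry.lower(), DEFAULT_CASE_STUDY)

print(pick_case_study("Finance"))  # how-a-bank-cut-reporting-time
print(pick_case_study(None))       # customer-stories-overview
```

Each segment’s engagement with its personalized variant can then be compared to the default, which is what tells you where further personalization investment is worthwhile.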
You’ve worked across different industries and different verticals in B2C and in B2B. Does the process change for how you design and measure the journey and the engagement in B2C vs. in B2B?
A lot of the tools are the same; what changes is how you use them. In B2C, especially media, you can introduce more variation and personalization. At Golf Channel, for example, users could favorite players, and there were dozens of potential variations.
In general, in B2B, you’re not going to have 50 variations. You’re usually focused on conversion, lead generation, and adoption. Maybe you have two or three paths you’re optimizing.
It also comes down to goals. B2C is more experience-focused. You want people to enjoy it and come back. B2B is more conversion-focused. If someone leaves without converting, they might not return. Apps also play differently. B2C benefits a lot from mobile apps. B2B marketing sites are still very web-centric. No one needs a marketing app for a B2B site.
With digital consumption habits varying so widely across users, how do you accommodate different preferences while still serving people the version of a product experience that you feel is optimal?
Page length and the depth of content on any page have to keep getting shorter. Long paragraphs might have been tolerated a few years ago, but now landing pages need to be succinct and clear. You need bullet points, scannable content, and easy-to-skim items with clear CTAs to the next action.
SEO wants more words, but users don’t. And now AEO (answer engine optimization) complicates it further by trying to predict what people want before they even get to your website. Last year, some tests I ran showed that users weren’t engaging with long-form content on the homepage. They just weren’t reading it. We ran a test that pulled all of it out, and engagement didn’t drop at all. That told us people just want an easy next action.
We’ve also tested CTAs. Sales prefers “schedule a demo,” but that’s not top-of-funnel behavior. New visitors don’t want commitment. They want low-energy actions like watch a video, take a tour, or learn more. Free trials are interesting, but even those require effort. People want information without energy or commitment.
Adapting to low-commitment behavior
When it comes to a B2B journey where you’re trying to get people to go through the funnel and make a purchasing decision, does the reluctance to engage accelerate that process or slow it down?
I think the first step has to be low-commitment. If someone watches a demo and thinks, “This might solve my problem,” they’re more willing to invest next. From there, it could be a free trial. Free trials are compelling because no one wants to talk to sales. But then you have to think about what happens after the trial. If someone invests time, uploads data, and sees value, PLG (product-led growth) becomes interesting. Maybe they just want to buy right away. Especially for SMBs, immediate gratification and satisfaction matters.
But I don’t think people are looking at 20 tools anymore. From my own experience, it’s more like three. You narrow quickly and move forward.
How do you decipher what users say they want from what they actually respond to in practice?
In my experience, B2B companies especially struggle with information architecture, and that translates directly into your site navigation. When I was at Akamai, we had a mega menu battle where we had to fight to reduce it heavily. We wanted users to be able to easily find what they were looking for, but we were offering them 50 choices at once, and people couldn’t navigate it. The challenge is offering options while trimming them down to a manageable user journey that doesn’t overwhelm people with choices.
When we redid the whole Sisense website last year, I was a big advocate for “less is more.” Too many options overwhelm users. Give them a path and a journey. Above the fold still absolutely matters. People do not scroll at all.
Do the different goals of the industries change the way that you’ve gone about testing and optimizing the experiences? Do you think about the tests differently in those two settings?
It comes back to content strategy in general. I don’t think just about the test, but about the strategy as a whole, and about the experience. B2C is experience-oriented, while B2B is conversion-focused. On a B2B site, nobody’s there to play a game on the homepage. And they definitely don’t want a video as their first experience.
On B2B sites, having a really clean presentation is important. That’s where branding is so pertinent. As a customer, you definitely notice if a site is well done in terms of branding, layout, colors, and more. Users shouldn’t have to think about the interface. It should just make sense.
AI is reshaping the search landscape
What impact is AI having on some of these processes, such as automating testing or predicting personalization?
AI is really changing execution. Tools now suggest tests or variations, and AI can help generate alternate copy and speed up ideation. That’s useful because it can surface ideas you might not have thought of. But there’s also the challenge that AEO means users may never reach your site. We have to give search engines enough to surface us, but not so much that users never click through.
Personally, I trust AI for some factual things, but not everything — I’ve seen it get basic math wrong. And now, SEO agencies are trying to figure out how to game AI responses for their clients to show up in the results. That can be really good for a business, but it’s not so great for us as consumers. The answer we’re getting is not always the best and correct one, but the one that gamed the algorithm.
Hopefully, we’ll all learn not to take these results as the sole truth. We’ll still need our critical thinking skills, and that will continue to be important for consumers to make the right decisions for themselves.
What does LogRocket do?
LogRocket’s Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps. Understand where your users are struggling by trying it for free at LogRocket.com.


