Leader Spotlight: Navigating AI product design and judgment, with Jason Bejot
Jason Bejot is Senior Manager, Experience Design, AI Assistant at Autodesk. He began his career in engineering as a full-stack developer and eventually transitioned into web design work for an agency. From there, Jason held design leadership roles at Amazon, where he worked on Alexa personalization and identity experiences, and at The Walt Disney Studios, where he led work spanning design systems, internal product incubation, and emerging technologies. Before his current role at Autodesk, he served as Director of Conversational AI Design & Personalization at Rocket Mortgage, where he established conversational AI design as a company practice and helped lead the transition from NLU to generative AI.
In our conversation, Jason shares how his engineering background shapes the way he evaluates AI-generated solutions and anticipates downstream UX impacts. He talks about how LLMs are reshaping experiences like search, research, coding, and design, and why discoverability and intentionality matter when building AI-powered products. Jason also discusses the role of human judgment in an era of AI-generated content.
The AI design perspective
Reflecting on your professional journey, how did your early start as a full-stack engineer and web developer shape the way you currently evaluate and work with AI-generated solutions?
One thing that really helped shape my perspective is that I’m able to see the scaffolding. I can see the outcomes that we’re trying to drive through design, and also the systems and architectures that are making that happen — all the things under the hood. While I might not be an expert in all those things, I still have a strong understanding of how they fit together and how they influence that eventual outcome.
This enables me to look at downstream consequences, especially those that may be second- or third-order — things that others might miss, especially through a UX lens. A change within the architecture might impact the experience down the road or affect a seemingly unrelated area of the experience. Having that grounding in engineering helps me see and predict those scenarios.
Within AI, especially, one of the defining factors compared to web or mobile is that the experience and the architecture are very closely coupled. A change within the architecture usually means a change within the experience and vice versa. That gives me a different perspective when designing experiences or leading teams. We don’t necessarily have to think within the constraints of what is possible. New designs can influence how the architecture needs to change. There’s a symbiotic relationship between the two.
Especially as design and product management teams evolve how they work day to day, technical foundations like working with Git repositories are becoming more and more visible — and necessary — for non-engineers. That shift is one I’m very familiar with, so I’m able to help shepherd other people to it.
How LLMs are reshaping digital experiences
You described a symbiotic relationship. Is there ever a sort of reverse, where LLMs can actually make an experience worse due to a lack of context or something else?
Yeah, this is a fascinating thing to think about. There are a lot of different lenses for how LLMs have improved experiences. Even more broadly, they’re influencing the technology landscape that we interact with every day. There’s a lot of AI going into infrastructure and architecture — how things get analyzed and how connections are made behind the scenes. Even if you’re interacting with something that doesn’t have AI in its interface, chances are there’s some AI connecting dots behind the scenes.
When we look at experiences that LLMs have changed, the first one that comes to mind is search. Search has completely changed over the past couple of years. Whether you’re using ChatGPT or Claude to ask questions and get answers, the experience of searching is fundamentally different now compared to using a traditional search engine.
The same thing is true for research, which is sort of the next order of search. Let’s say you have one thing you’re looking for, and you want to examine multiple sources and then make a decision. Now you can gather those sources together, synthesize them, summarize them, and find a through line. Yet, while search and research have fundamentally changed, it’s not necessarily just the end-user experience. The experience of coding has fundamentally shifted as well.
Engineers might not be writing code all day anymore. Instead, they’re prompting tools like Cursor or Claude. The same is happening in design. Designers who might have spent all day in Figma are now working more agentically in tools like Claude Code or Figma Make instead of focusing on pixel-perfect work. Where these things fall down often isn’t the product itself, but how the LLM is integrated. There might not be enough context or guardrails. Sometimes the system is simply hard to use because people don’t know what to do with it.
Discoverability, therefore, becomes really important. If you’re creating a product, you need to teach people how to use it. That’s one of the biggest downfalls of conversational systems. There’s also the shiny-object syndrome, where teams say, “We’re going to throw AI at this problem, and everything will be better.” Chances are it won’t be, because you’re focusing on the solution instead of the problem you’re trying to solve.
When LLM-powered experiences fail, it’s usually because AI is treated as a silver bullet rather than something intentional.
How do you think AI’s speed and efficiency affect the messy discovery phase of zero-to-one product development?
Zero-to-one is a fantastic space, and the mess is really important. I’ve seen situations where people jump to the first thing an AI produces. They’ll say, “Great, I have this idea,” put it through an LLM, and whatever comes out becomes the solution.
AI can collapse the time it takes to get from zero to one. But what’s missing is the divergence that needs to happen during that process. It’s less about how quickly you go from zero to one and more about how you use AI to accelerate divergence. Instead of settling for the first output, you might ask, “What are 10 other examples that are different? What are the bright points and failures of those examples?”
That helps you form judgment and move toward a stronger zero-to-one outcome. There’s a lot of value in that messy middle rather than jumping straight to polished output.
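The divergence loop described above can be sketched in code. This is an illustrative sketch only: `call_llm` is a hypothetical stand-in for whatever chat API a team actually uses, not a real library call.

```python
# Sketch of "diverge before you converge": instead of accepting the
# first LLM output, generate several deliberately different options
# and critique each one. `call_llm` is a hypothetical placeholder.

def call_llm(prompt: str) -> str:
    # Placeholder; in practice this would call a real model API.
    return f"[model response to: {prompt[:40]}...]"

def diverge(idea: str, n: int = 10) -> list[str]:
    """Ask for n alternatives that differ from the obvious solution."""
    return [
        call_llm(
            f"Propose alternative #{i + 1} to this idea, as different "
            f"as possible from the obvious solution: {idea}"
        )
        for i in range(n)
    ]

def critique(alternatives: list[str]) -> list[str]:
    """Surface the bright points and failure modes of each option."""
    return [
        call_llm(f"List strengths and failure modes of: {alt}")
        for alt in alternatives
    ]

options = diverge("onboarding flow for a budgeting app", n=3)
reviews = critique(options)
```

The point of the structure is that convergence happens after the critique step, with human judgment choosing among reviewed options rather than rubber-stamping the first generation.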
Designing intentional AI experiences
When teams are excited about agentic or conversational AI, what experience-based questions do you ask before approving the work?
There are a number of considerations, and the first is discoverability. When a system can do an unknown number of things, people don’t know what to do with it. That’s been a challenge with conversational systems for a long time, and it’s compounded with agentic experiences because they’re more powerful.
Teams need to make the capability discoverable. If you build something, it needs to be obvious that people can go use it — you’re not simply placing a button somewhere in an interface. Once people discover it, the next question is how to make it sticky. How do you make it valuable enough that people keep coming back? How do you make it memorable and easy to return to?
Another important consideration is precision. LLMs are very good at broad strokes — they can help you do a lot of work quickly — but they’re not very precise. When you need precision, the experience often slows down, and it feels like AI isn’t doing what you want it to do. So teams need to be intentional about where an LLM provides value.
You might use it for broad strokes, then provide an easy off-ramp into a precision mode where someone can fine-tune something manually. After that, they may jump back into the LLM again. Designing that back-and-forth is really important.
Is some of that lack of precision with an LLM due to insufficient context? For example, as an LLM works on a particular project, would the precision improve over time?
It depends on a number of factors, especially the underlying architecture. How the system handles context matters a lot. If you’re working on larger or ongoing projects, you can start experiencing context rot as context windows fill up. That reduces precision. It also depends on how the user provides context. If you provide too little, you won’t get the precision you need. If you provide too much, you might get precision in the wrong places. So there’s a balance, and it’s very dependent on the situation.
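The context-rot dynamic described above comes down to a simple mechanic: when a conversation outgrows the token budget, older turns get dropped, and whatever fell off is context the model can no longer use. Here is a minimal sketch of that trimming, with an illustrative word-count tokenizer standing in for a real one; none of this reflects any specific product's implementation.

```python
# Minimal sketch of context-window trimming, one common source of the
# "context rot" described above. All names are illustrative.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(text.split())

def trim_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt, then keep the newest turns that fit in
    the token budget. Older turns silently fall off, which is where
    precision quietly degrades on long-running projects."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    kept: list[dict] = []
    used = sum(count_tokens(m["content"]) for m in system)
    for m in reversed(turns):  # walk newest-first
        cost = count_tokens(m["content"])
        if used + cost > budget:
            break  # everything older than this is dropped
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a design assistant."},
    {"role": "user", "content": "Project brief: redesign the onboarding flow."},
    {"role": "assistant", "content": "Noted. I will keep the brief in mind."},
    {"role": "user", "content": "Now refine step three of the wizard."},
]
trimmed = trim_context(history, budget=20)
# With this budget, the original project brief no longer fits:
# the model is asked to "refine step three" of a brief it can't see.
```

In this toy run the project brief is exactly what gets dropped, which mirrors the failure mode Jason describes: the user's earliest, most load-bearing context disappears first.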
Do you have an example to share of a discoverable, memorable AI experience that stands out to you?
I’ve seen a lot, but one example that stuck with me was early ChatGPT. When I first started using it, one thing that really surprised me was the “regenerate” feature. I had done a lot of work in Alexa and chatbot systems, and the idea of regenerating the same prompt to get a different response blew my mind.
There was just a small recycle icon under the response. I clicked it, and it regenerated the answer. What was interesting was that it also maintained the previous responses, and I could tab through them. That simple interaction really highlighted the difference between deterministic systems and generative systems. It was discoverable, delightful, and powerful — all through a single small feature.
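The interaction Jason describes, regenerating while retaining earlier responses and tabbing between them, is essentially a small list-plus-cursor data structure. The sketch below is a hypothetical illustration of that pattern, not ChatGPT's actual implementation.

```python
# Sketch of the "regenerate" interaction: each prompt keeps every
# generated variant, and the user can tab back and forth between
# them. A deterministic system would have nothing to keep.

class ResponseVariants:
    def __init__(self) -> None:
        self.variants: list[str] = []
        self.index = -1  # which variant is currently shown

    def regenerate(self, new_response: str) -> str:
        """Append a fresh generation and show it."""
        self.variants.append(new_response)
        self.index = len(self.variants) - 1
        return new_response

    def previous(self) -> str:
        """Tab back to an earlier generation (stops at the oldest)."""
        self.index = max(self.index - 1, 0)
        return self.variants[self.index]

    def next(self) -> str:
        """Tab forward again (stops at the newest)."""
        self.index = min(self.index + 1, len(self.variants) - 1)
        return self.variants[self.index]

turn = ResponseVariants()
turn.regenerate("First answer")
turn.regenerate("Second answer")
shown = turn.previous()  # tab back to "First answer"
```

The design choice worth noting is that regeneration appends rather than replaces: nothing is thrown away, so comparing variants becomes part of the experience.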
Human judgment and managing expectations
Are there certain aspects of beautiful user experiences that you believe can only be learned and can’t be generated?
Sure — just ask any designer about beauty. In AI, especially, a lot of this comes down to taste and judgment. It’s authenticity. We’re living in an era of AI-generated content where the bar for creating something has gotten very low. You see a lot of polished output, but there’s often something hollow about it. Beautiful experiences are built under constraints, and they have to survive the complexity that goes into them, which is different from something that can be generated at scale.
Also, what has to be learned is the judgment of when to restrain yourself versus when to lean in. That’s what allows something to feel authentic and personal. Even if AI understands your preferences and produces things you like, you still have to apply human judgment and ask, “Is this authentically something I believe? Is this something I would put out myself?” That kind of judgment has to be learned.
How do you manage prioritization and stakeholder expectations in AI work without over-promising?
It’s largely dependent on the situation that you’re in and the people that you’re working with. What I’ve seen is that experience with AI-enabled systems is very uneven. Not everyone has the same knowledge about designing or building with AI. Because of that, you need to lead with a level of grace. Not all teams working with AI will move at the same velocity as they would with more established technologies like mobile apps.
You have to have honest conversations about complexity, timelines, and what still needs to be figured out. Once everyone understands that baseline, it becomes much easier to prioritize and move forward.
Early career and leaning into excitement
A lot of early-career PM work is now being automated. What advice do you have for those who are earlier in their product careers on how they may gain experience?
I don’t know how long this advice will last because things are moving so quickly, but the apprenticeship model of junior roles is fundamentally changing. Those roles were traditionally execution-heavy, and that execution work is compressing because we can go from zero to one much faster. So it becomes less about craft execution and more about judgment.
How are you framing problems? How are you navigating ambiguity? How are you creating clarity from that ambiguity? Those are the durable skills. You have to lean into building judgment. It’s like working out — you have to put in the reps and experience the friction of failure in order to grow.
One thing that helps is using peers and AI as thought partners. I do that myself. It helps you think through different scenarios. And when you’re choosing where to work, ask yourself: is it a problem you’re excited about? Is it a company you’re excited about?
That excitement will help you lean into the work and the challenges. You have to get comfortable with ambiguity and with not being perfect. Apprenticeship-level work is about learning and growing — even when the focus shifts away from execution.
What does LogRocket do?
LogRocket’s Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps. Understand where your users are struggling by trying it for free at LogRocket.com.