Leader Spotlight: Solving adaptive vs. technical problems, with Jen Wang
Jen Wang is CPO at Framework, a consumer electronics company focused on giving customers options to repair, upgrade, and customize their electronics as they see fit. She began her professional career as a special assistant at The World Bank before pursuing her PhD at Stanford University in behavioral science and decision-making, business and sustainability, and socioeconomic outcomes. After completing her doctoral program, Jen joined thredUP as a product manager and, after a five-year tenure, became VP of Product & Growth. In her current role at Framework, she leads major GTM functions, including the digital product organization, growth analytics, B2B, and customer operations.
In our conversation, Jen talks about how problems that seem purely technical are often actually “adaptive” and, therefore, require changes to people’s habits, values, and beliefs to solve. She also talks about how her PhD in human judgment, behavior, and decision-making influences her role as CPO, as well as how it shapes the way she balances time, resources, experimentation, and behavioral intervention design.
Understanding when a problem is adaptive vs. technical
You credit the book Leadership Without Easy Answers by Ronald Heifetz with influencing how you think about problem identification. In Heifetz's language, what is the difference between a technical issue and an adaptive issue?
There’s a whole book on this, so one could go really deep on it! However, at a surface level, I often describe technical problems as problems that can be solved if you get the right people into the room. From there, they can work away at it and eventually figure out a solution.
By contrast, Heifetz talks about adaptive problems as problems that require moving the hearts and minds of people to solve. In organizations, we run into a lot of problems where, on the surface, it looks like you’re trying to solve something technical — if we could just figure out the right solution, everything would be fixed. But in reality, moving people from where they are emotionally — habit-wise, belief-wise, and values-wise — ends up being the bigger barrier.
What do you require of your teams before they jump in to solve a problem?
When you see a problem, the symptom often feels urgent or painful. But almost always, when you step back from that initial reaction and start to ask questions, the context starts to expand. You often realize that what you thought was the problem was actually just a symptom of a much deeper or different issue.
I am often talking with my teams about problem identification and pushing to make sure that we really understand the nature of the problem we’re trying to solve. Most of the problems that land on my plate are adaptive in nature, for which there are often some common tells.
For example, if it feels like there should be an obvious solution, but that solution keeps failing, that suggests to me that it’s probably not a technical issue. A technical solution should work on paper, and when it doesn’t work in practice, it’s often because the problem is actually about people — their habits, how they work together and communicate, how they manage information and burnout, how they’re absorbing change, and more. Another sign is when people can’t seem to agree on what the problem is. That, to me, is a clear signal that the root issue is not actually technical.
One of the questions I often ask my team when they bring a problem to me is, "Have you talked to everybody involved?" Surprisingly, the answer is not always yes. I also ask, “Can you describe where you think they're coming from?” And, “Do you feel like you understand how we got to our current state?” If someone can describe a problem in a way that resonates with their stakeholders as well, it’s a good sign that they’ve likely been able to wrap their arms around the problem from multiple perspectives.
Lastly, emotional reactions are telling, especially if the reactions are disproportionate to what one might expect. If I sense a lot of resistance, anxiety, or defensiveness around a proposed solution, that is often a signal to me that there’s something else going on. It’s worth figuring out what’s at stake for these people. What am I missing that would make sense of this reaction? Almost always, it’s adaptive and not technical.
How do you help your team get better at identifying and solving adaptive problems?
Great question. Recently, this has been coming up most frequently in 1:1s. We’ll talk about it as a team, too, but private conversations are where people are more likely to voice the details of how they’re thinking about a problem. Then, as a mentor or a manager, you have the opportunity to ask questions and give detailed feedback. That’s when I have a lot of space to probe how much of the context they understand: how much of it is actually about the technical problem they’re bringing to me versus the quality of the relationship with their stakeholders or the historical context with other teams?
For scaling cultural change across a team more quickly, I’ve also done an old-fashioned book club. For example, one of the books we’ve read as a team is The Leader Lab by Tania Luna and LeeAnn Renninger, which I love. It’s a great resource on leadership and team execution principles, and the authors put a lot of thought into pedagogy and how to most effectively communicate these lessons, so it is great for a book club format.
Operating with implicit team cohesion
Can you share a time when your team brought you a problem that seemed to be technical, but it turned out to be an adaptive, or people-oriented, problem?
One of our go-to-market teams is focused on launching our hardware products online. Previously, we’d get to launch day, and the team would be burnt out because they had been crunching all the way to the end, building all of this functionality, and dealing with issues that would pop up at the last minute. At first glance, we wondered if we had an issue with how fast we were building or with our QA process.
Slowly, we realized that the issue was actually deeper and required a more fundamental shift in team mindset. Based on previous experiences where the team felt that they had to absorb any number of last-minute launch requirements that the business threw at them, the team had taken on the approach of building in maximum flexibility to accommodate every edge case they could think of. That meant that they were in a very defensive position, and one that implied a huge scope.
So, over time, we worked on a number of things, chief among them building trust with the team that product prioritization could be a two-way conversation between us and our stakeholders, rather than the team inevitably being the final recipient of any last-minute demands.
Are formal rules necessary to keep teams aligned and not bogged down by adaptive problems?
In my view, there are times when you need explicit, formal rules, and other times when a team can operate extremely quickly with informal, non-explicit rules. The difference is often a reflection of how aligned the implicit norms and expectations are within that team. When a team can move very fast without a lot of explicit rules, it’s often because there are also fewer adaptive issues — they’re operating from the same implicit playbook, and from the outside it looks like they’re “just clicking.” When teams get bigger, more has to be explicitly articulated because more people bring more differences.
One of the things that makes high-performing teams so fun and invigorating to be a part of is that they’ve often locked in their team culture and, as a result, almost always have fewer adaptive problems blocking their way. When they do have issues, high-performing teams tend to have really good conflict-resolution mechanisms, so issues are solved both effectively and quickly — these teams get to spend a higher percentage of their time solving the technical issues that they most enjoy.
Identifying problem vs. solution spaces
It's interesting how much you tie leadership into problem identification and problem-solving. As a leader, how do you build that muscle for problem identification, and how do you give feedback that turns solution-first PMs into problem-first PMs?
There are a couple of levels. One is focused on having a shared language and frameworks. In addition to the adaptive-versus-technical framing and the teachings from The Leader Lab, I really like the Double Diamond design process. The visual of the two diamonds reminds us to first expand and then narrow in on the right problem, and then do the same expansion and narrowing to figure out the right solution.
Internally, as an organization, we try to actively distinguish whether we’re in the problem space or the solution space at any given moment. To build this muscle, people have to first know that there’s a problem. Then, do they have the mindset, language, and frameworks to distinguish between the problem and solution space? A lot of how we do that is through modeling. I will model this with our own teams, and our PMs are really important vehicles to model that across our cross-functional organization. It’s results- and outcomes-based. If it weren’t working, it wouldn’t get picked up by other people either.
In addition, to identify the right problem, you have to have the right people in the room at the right time. Again, I like the Double Diamond visual because it’s a reminder that you don’t want to bring people in only after a solution is set. Often, as a stakeholder, if you had been brought in earlier, you might have helped the team identify the root cause faster. That’s why we spend a lot of time thinking about these frameworks, bringing in the right people at the right time, and working with others to have them be part of the problem definition.
What’s your process for running problem and solution reviews, and do you separate them?
We don't tend to separate them, but we'll often structure sessions so there’s a separate portion for the problem and solution. As a team, we tend to like retros and pre-mortems. For product planning or engineering scoping, we'll also separate the problem and solution portions of the meeting.
One benefit of starting in the problem space is that it allows you to figure out if there are multiple issues at hand. Often, a lot of teams realize that one problem is actually multiple problems, and then they have to go through the process of prioritizing which one they want to address first. From there, you can see which sequence you want to tackle issues in.
Once we get into the solution space, we often hold separate sessions to go deeper into specific solutions. Maintaining this distinction between problem and solution spaces, and using time to delineate when we’re talking about which, keeps the team open-minded in certain places and narrow in others.
Do you have any go-to, low-cost tests you can run quickly to confirm or kill your problem framing?
If it’s a customer-facing problem, then of course, the best method is to run things by your customers. That’s the ideal, but it isn’t always possible, so our team often discusses the trade-off between speed and quality of results when picking ways to test hypotheses with customers.
For internal problem framing, shopping an idea or question around is critical. You can immediately get a sense of buy-in. And if there isn’t buy-in on the framing of the problem, that’s a good indicator that you don’t have the right problem or haven’t figured out the adaptive issue. If you can’t get buy-in at the beginning, you’re definitely not going to get buy-in later on. That’s one reason to shop it around.
Also, as people, we’re pattern-making machines. Some of your partners are more customer-oriented, while others might be taking in other types of data regularly. If you trust that your cross-functional partners are sampling appropriately — that the data they're seeing is well sampled and representative — then you can often also rely on their intuition to give you some directional knowledge.
Trading off velocity with learning
How do you protect this type of problem-identification work in an organization that focuses so much on shipping results?
Organizations also have to go through learning cycles themselves. At this point, I have a lot of examples of times we tried to save time on the front end of something, and it ended up costing us more time because it took far longer to correct our mistakes once we realized that we hadn’t solved the right problem.
If you take a longer view on things and you have a leader who’s supportive of that, it’s not that you’re protecting time; rather, you think of it as an effective way to increase your chances of solving the right problem. I talk to my teams a lot about this trade-off between velocity and learning. My tolerance for the team making mistakes is a lot higher if we can learn very quickly and the stakes are low.
You did your PhD in human judgment, behavior, and decision making. As Framework’s CPO, how do you translate your academic work into concrete business decisions?
It’s interesting because in academia, it’s all about knowledge generation, as well as the accuracy and precision of that knowledge. In business, we also care about knowledge, but how we use knowledge is very different. In business decision-making, we tend to care more about directionally correct decisions that are made faster, and we’re often willing to trade precision for speed.
In my PhD, I spent a lot of time thinking about how to design effective behavioral interventions. One way to test your hypotheses is by using a highly controlled method, such as an A/B test or a randomized controlled trial. You're trying to isolate one variable to see what the impact of that variable is.
On the other end of the spectrum of behavioral intervention design, there’s what is sometimes described as a kitchen-sink approach. With a kitchen-sink approach, you pull from everything you know could work and design one intervention that bundles all of it together. It’s much harder to measure causality in this scenario, but I’ve found this approach to be useful in the business world because you tend to see bigger effects. We can also measure bigger effects faster, so it’s often much cheaper, for example, to run one experiment with 10 variables thrown in than to run 10 separate experiments, each with one variable.
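To make that cost difference concrete, here is a rough, hypothetical power-calculation sketch (not from the interview). The effect sizes are made-up Cohen’s d values — a small lift for a single tweak and a larger lift for a bundled intervention — and the calculation simply shows that bigger effects need far smaller samples to detect:

```python
# Hypothetical illustration: sample sizes needed to detect a small
# single-variable effect vs. a larger bundled ("kitchen-sink") effect.
# Effect sizes below are assumptions, not numbers from the interview.
from statsmodels.stats.power import tt_ind_solve_power

ALPHA, POWER = 0.05, 0.8

single_tweak_d = 0.05   # assumed small effect of one isolated change
bundled_d = 0.30        # assumed larger effect of ten changes combined

# Solve for the required sample size per arm in a two-sample t-test
n_single = tt_ind_solve_power(effect_size=single_tweak_d, alpha=ALPHA, power=POWER)
n_bundle = tt_ind_solve_power(effect_size=bundled_d, alpha=ALPHA, power=POWER)

# Ten one-variable A/B tests vs. one bundled test (two arms each)
print(f"10 separate experiments: ~{10 * 2 * n_single:,.0f} users total")
print(f"1 bundled experiment:    ~{2 * n_bundle:,.0f} users total")
```

The flip side, as Jen notes, is attribution: the bundled test can tell you the package worked, but not which of the 10 variables did the work.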
When I put on my academic and data science hat, I can be extremely precise and very detail-oriented around statistics, effect size, significance, and all of that. In the business world and with my product hat on, I try to focus more on the bigger picture: what are we actually trying to solve and move the needle on? Overall, when it comes to making decisions as a product leader, I’m often looking for data triangulation — if I see multiple sources pointing in the same direction, even if those data sources individually are not as precise, it helps me build more conviction.
Lastly, what role does leadership play in creating an environment that emphasizes solving adaptive problems and trading off velocity with learning?
Most of my job is solving adaptive problems. I often tell my team to bring me the things that I’m in a unique position to help them solve. Ideally, if you’re hiring people who are better than you in their respective areas, they should be able to solve the technical problems much better than you can. That puts you in a position where your job is to enable others to work really well and bring out the best in each other. You are, therefore, fostering an environment where they feel safe, can take risks, and bring their best ideas to the table.
As leaders, this is critical, and it comes out in all sorts of ways. For example, how do you react to new suggestions? When people bring you problems, what’s the first question you ask them? Further, it’s imperative that leaders build empathy with their teams, as well as help their teams build empathy with their leaders in return. At the end of the day, it’s about how we can work better with each other and use that to build better products for the world.