Leader Spotlight: Designing value-driven products and experiences, with Cris Concepcion
Cris Concepcion is Head of Product & Technology at the Obama Foundation. After grad school, Cris joined 170 Systems as a system administrator, then moved into consulting. After seven years, he left 170 Systems to build his own professional services team at American Well. From there, he became a software engineering manager at Safari Books Online (acquired by O'Reilly Media) before managing iOS teams at Wayfair. Before his current role at the Obama Foundation, Cris served in director roles on the Democratic National Committee (DNC) Tech Team and at Capital One.
In our conversation, Cris talks about his work at the Obama Foundation, merging a value-driven product with digital and in-person experiences. He shares how his team tests, measures, and refines outcomes, as well as how, as a small team, they stay focused on big-ticket priorities.
The importance of strategic prioritization
You spent most of your career as an engineer or engineering leader. When you moved from engineering to product, how did you have to shift your mindset in the process?
I had been involved in engineering and engineering management for many years, but I’ve always been very close to our product managers and had even stepped in as product owner on infrastructure and backend projects. Something I had to wrap my head around when I started leading product managers and took on the role of Head of Product and Technology was how we judged the success of our product. What does user adoption look like? How do we make the case for why we should prioritize one project over another? And how do we relay the value of what we’re building to our colleagues outside of technology?
That coincided with a time when we were trying to focus tightly on what we needed for opening the Obama Presidential Center. One of the product management tools I found really helpful was the MoSCoW framework. I’d seen it at other companies when we were talking about what we'd have in a release or a sprint, but now we were applying it to our multi-year planning of core experiences and general features. We’d ask, “What are the things that, if we don’t have them, will delay or prevent us from opening at all? What items need to be done to a high level of excellence for us to be successful, versus things that can be more flexible?”
Having that framework and optionality, and understanding the highest-priority items, was really important in helping us prioritize what to build for the opening.
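To make the framework concrete, here is a minimal sketch of how a MoSCoW-bucketed backlog might be modeled. The categories mirror the questions above; the sample items are hypothetical illustrations, not the Foundation's actual roadmap.

```python
from dataclasses import dataclass
from enum import Enum


class MoSCoW(Enum):
    # Ordered from most to least critical
    MUST = 1    # opening is delayed or blocked without it
    SHOULD = 2  # needs a high level of excellence to succeed
    COULD = 3   # flexible; ship if time allows
    WONT = 4    # explicitly out of scope for this milestone


@dataclass
class BacklogItem:
    name: str
    priority: MoSCoW


# Hypothetical sample items
backlog = [
    BacklogItem("Ticketing for opening day", MoSCoW.MUST),
    BacklogItem("Exhibit wayfinding content", MoSCoW.SHOULD),
    BacklogItem("Gift-shop wishlists", MoSCoW.COULD),
    BacklogItem("Single sign-on across platforms", MoSCoW.WONT),
]

# Review the backlog from most to least critical
for item in sorted(backlog, key=lambda i: i.priority.value):
    print(f"{item.priority.name:>6}: {item.name}")
```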
When you have different groups involved in rolling the product out, how do you manage the tension between teams and the varying expectations for what can get done?
A lot of it comes down to asking, “What are the consequences of not having these items at all? What are the consequences of building this feature to a partial level of success? What is the smallest thing we can ship so we can learn from our users?” Something we've run into a lot within the Foundation is that, while we have a pretty healthy online audience interested in seeing the museum open, we know that number will grow as we get closer to opening and really ramp up our marketing efforts. There's so much more we're going to learn from that. From seeing people interact with exhibits and our ticketing systems, we’re going to go from just engaging with audiences online to seeing what they really want from a truly omnichannel experience.
Even when I was leading the engineering team at the Foundation, I tried to be consistent in advising my team on the engineering principle “you aren't gonna need it” (YAGNI): do not build things unless we know there's a user need for them.
One of the biggest debates we had within our team and the larger Foundation was the idea of user profiles. Some of our internal product and engineering teams were very strong advocates of user profiles to link all the methods by which our audience would interact with us: ticketing, the shop, donations, and the newsletter. But when we looked at our data and reflected on past experiences, we knew that, while a user profile and single sign-on are handy, only a small percentage of users take the step of creating an account and managing their interactions within it.
Further, we hadn’t launched ticketing yet, so we knew we might need single sign-on later, but we didn’t need it then. We didn’t yet have the traffic to justify the time and effort that comes with it. To this day, we know that when we launch ticketing, we’ll start seeing some of this pain as users juggle their tickets, donations, and retail purchases on separate platforms, so we need to measure that pain to understand when the right time to invest in the feature is.
Testing, measuring, and inviting skepticism
Can you share how you balance quick, data-driven optimizations with deeper, qualitative insights that might challenge or complicate what the data appears to be telling you?
In the past year, we started A/B testing on our website for a lot of different core flows around donations and newsletter subscriptions. The team has been very focused on immediate, incremental lifts around things like increasing the size of the donate button or having a clear CTA above the fold of the first page. It’s interesting to see that, when you make obvious improvements that make the team feel good, you also start seeing some externalities.
For example, our head of design raised a good question about the size of the donation button. It improved click-through to our donation funnel, but have we seen the same or a higher share of people completing the donation? Or is it possible that, though click-through improved, the drop-off rate is now higher because people hit the button by mistake?
I’m so happy to see our team asking those second-level questions as we're starting to engage with the data. It's been a process for us to talk about things, test them out, measure them, and invite some skepticism to refine the outcomes we’re looking for. One interesting insight we found when we were looking at our retail shop was that changing the merchandise photos to be worn by a model made a huge difference. Showing it on someone’s body versus just showing the plain T-shirt, for example, increased a person’s likelihood to buy.
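That second-level question about the donate button comes down to comparing the whole funnel, not just the first click. Here is a minimal sketch with made-up event counts; any real numbers would come from an analytics pipeline, not hard-coded values.

```python
# Hypothetical event counts per variant
variants = {
    "control":    {"views": 10_000, "clicks": 400, "donations": 120},
    "big_button": {"views": 10_000, "clicks": 560, "donations": 126},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["views"]             # click-through into the funnel
    completion = v["donations"] / v["clicks"]  # share of clickers who finish
    overall = v["donations"] / v["views"]      # the metric that actually matters
    print(f"{name:>10}: CTR {ctr:.1%}, completion {completion:.1%}, overall {overall:.2%}")

# A bigger button can raise CTR while completion drops (mistaken clicks),
# so the comparison has to be on overall conversion, not click-through alone.
```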
All in all, this helps our colleagues see that it's not just about our engineers or product managers making small technical tweaks to the site, but that there are different ways that our colleagues in marketing or retail can improve our content simply by looking at how we take photos or write copy.
Do you ever have to enforce a certain level of restraint so you’re not testing too many things at once and, therefore, not able to see what’s driving the change?
Definitely, and this is a challenge I experienced when I worked at Wayfair. We had thousands of engineers and PMs, and hundreds of tests running at any one time. Attributing success was a challenge when it depended on whether users were in a particular bucket and what other tests were running inside that bucket.
At the Obama Foundation, we have the benefit of still having a relatively small team. As a result, we can run very focused tests and keep a lid on overall complexity. We don't have more than a handful of tests running in distinct spaces, so we have very clean data. With that said, I encourage us to think broadly about the hypotheses we want to test, but we always need a clear line of sight to understand the difference between the control and the experiment.
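One common way to keep tests in distinct spaces with clean attribution is deterministic, per-experiment hash bucketing, so that a user's assignment in one test is statistically independent of every other test. This is a sketch of the general technique, not the Foundation's actual setup.

```python
import hashlib


def assign(user_id: str, experiment: str, variants=("control", "experiment")) -> str:
    """Deterministically assign a user to a variant.

    Salting the hash with the experiment name makes assignments
    independent across experiments, so one test's buckets don't
    correlate with another's.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]


# The same user lands in independent buckets per experiment
print(assign("user-42", "donate-button-size"))
print(assign("user-42", "newsletter-cta-placement"))
```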
Staying clear about your organization’s role
Your work incorporates a mix of digital and in-person experiences. Do you think about that hybrid product differently than one that’s purely digital, like an app?
Profiling user behavior is definitely very different between online and in person. You can see what 2,000 people do on a website very quickly, but that might take an entire day for an in-person exhibit. While it takes longer to understand, and it’s a smaller audience, it’s still vital for us to understand how visitors interact with these exhibits because they’re the centerpiece of the experience.
As far as online versus in-person, I look forward to seeing how that data bears out. Can we understand our users as they're moving through physical space, especially as conditions change, like time of day? Can we see improved engagement when traffic is lighter, for example, or when we have school groups visiting? We don't always get the chance to see those sorts of controlled user cohort insights or user behavior on the web. I’m interested in diving into that as we open the museum and seeing what actual engagement looks like.
Do you see patterns of user behavior differing in a product that's geared more toward values, as opposed to products geared toward something more transactional?
Definitely. We just completed a comprehensive user survey for visitors to our website. The Obama Foundation is a 501(c)(3) focused on educating a new generation of change leaders. We don’t get involved in current politics, so we have to be clear with a lot of our audience about our role as an educational institution. Yet with the name on the door, we get a lot of political interest. Something we’ve found in our data and user surveys is that people visited us after the last election to find ways to get involved and make their voices heard. We can’t tell people what actions to take, but we try to lift up the work of others in our orbit and hope people can take inspiration from their stories. We can measure some of that success with metrics like time on site and how many other stories visitors engage with, but we’re also looking forward to seeing how that audience evolves as we start launching more educational programs and events at the Presidential Center.
A conversation between the user and the product
Especially for qualitative behavior and requests, how do you integrate those into how you define and measure product success?
When we look at our metrics and the success of the website, we run a Jobs-to-be-Done survey to understand why people are coming to our website. Do they want to find more information about the museum’s opening? Do they want to apply for a job or get involved in one of our leadership development programs? Are they interested in the work of one of the other organizations we've been supporting? We also gather qualitative feedback to ensure they find what they’re looking for. We marry that with measuring conversion on some of the specific workflows we have around donating, signing up for the newsletter, applying for an opportunity, and so on, so we can measure how effectively we are doing that job.
It’s all a combination of understanding what they're doing, measuring conversion, and adding other qualitative measurements, the same way we use NPS to understand customer satisfaction. We intend to translate that to the in-person experience once we open the museum. What exhibit resonated most with someone? Is the signage effective? Are we educating people on new programs they can get involved in?
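One way to marry the Jobs-to-be-Done survey with conversion data is to segment each workflow's completion rate by the job visitors said they came to do. Here is a small sketch with hypothetical jobs and counts.

```python
# Hypothetical survey-to-outcome join: for each stated job, how many
# respondents completed the workflow that serves that job?
jobs = {
    "learn about the museum opening": {"respondents": 500, "completed": 410},
    "apply for a leadership program": {"respondents": 120, "completed": 54},
    "donate": {"respondents": 200, "completed": 150},
}

for job, j in jobs.items():
    rate = j["completed"] / j["respondents"]
    print(f"{job}: {rate:.0%} of visitors finished the job they came to do")
```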
Do you think that, even though you try to design for certain behaviors, people can use a product incorrectly? Or do you think that the way that people choose to use it just becomes the new normal for what the product is for?
At a previous job, we had a platform for managing volunteers that a lot of different groups used for scheduling shifts and signups. We were always trying to improve that platform’s UI based on user feedback, but batch management of volunteers was something we didn’t offer at the time. Then we observed that some enterprise clients had figured out our backend API and built their own Google Sheets to do all of their batch management on their own.
While that was a little alarming, we sat down with them and said, “We see why you're doing this; it works for you. It doesn't actually violate any of our security, and it's a really good idea.” I’m a big fan of studying aberrant user behavior, because at the core of it is a job the user wants to do. And if that job is in line with creating a better user experience, we need to pay attention to it. I welcome that.
The ideal situation is when the conversation between the user and the product is a two-way street. At the end of the day, you have to develop a product that users would love to interact with. And if they're showing you that love by, in some ways, hacking and working around your UI, you shouldn’t try to shut that down. Rather, you should try to take advantage of it.
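For illustration, the spreadsheet-driven batch workflow those clients built might have looked something like the sketch below. The endpoint, fields, and token here are hypothetical stand-ins, not the real platform's API.

```python
import csv

import requests  # third-party: pip install requests

API = "https://volunteers.example.com/api/v1"  # hypothetical endpoint
TOKEN = "..."  # the client's API token

# Each CSV row exported from the client's sheet: volunteer_id, shift_id
with open("shift_assignments.csv", newline="") as f:
    for row in csv.DictReader(f):
        resp = requests.post(
            f"{API}/shifts/{row['shift_id']}/signups",
            json={"volunteer_id": row["volunteer_id"]},
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()  # surface failures row by row
```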
Do you have any advice or learnings to share from your experience dabbling in the different areas of product and engineering?
As an engineer who has gone into product, I believe you don't have to go all the way and become a product manager, but there's always a benefit to thinking like one. I've really enjoyed having a product mindset that lets me truly understand the business. It’s helped my teams over the years understand the solutions we have to create to meet the needs of the organization. Similarly, if our product managers are willing to get technical, there's a lot of insight they can gain into trade-off risks.
I always encourage folks on all sides of the product, design, and engineering triad to spend time understanding the world of their counterparts. Put on their hats for a bit and try it out. It's a very beneficial experience and something I tell everybody to dabble in if they have the opportunity.
I believe that creativity is not about the depth of your expertise in one domain; it's your ability to pick up insights from a few different domains and combine them. Figuring out those kinds of combinations can be really valuable to your career, and that’s stuck with me.