Jason Kikta is Chief Technology Officer at Automox, a patch and endpoint management software company. He spent more than two decades in the US Marine Corps, where his career included time at the Cyber National Mission Force. After retiring from the military, Jason joined the information technology and security organization at Automox. Alongside his role at Automox, he also teaches at both the Institute for Security and Technology (IST) and Johns Hopkins University.
In our conversation, Jason talks about his experience going from the US Marine Corps to leading the product organization at a software company. He shares how his time as a practitioner has informed his product management style, as well as how he maintains a strong focus on delivering value to customers above all else.
Going from a practitioner to a product leader
You have a unique background because you weren’t formally trained as a product manager. Can you share some specific ways you see your background as an advantage in your current role at Automox?
My experience has been primarily in IT and cybersecurity. Not coming from a product background can be challenging because there are nuances to analyzing the market reach of new features. I don't have the direct experience to make that calculation correctly myself, but other people in the company do, so I rely on them.
The advantage of coming from a non-product background is that I can predict how customers will react not just to the feature itself, but also to the specifics of how it's implemented and the ways it'll be used in the field. It's also been helpful in that I can interpret customer requests that may seem unusual on the surface. I'm able to give that perspective, which has given us the power to contextualize our decisions. That has been a little bit of a superpower for me: making decisions that lead to really happy customer outcomes.
At Automox, you came into the company as a vocal supporter of change for the product. What were some of the pain points that pushed you to speak up, and how have you worked to address them as a product leader?
In the past, the company took more of a feature-chasing approach. We'd go after whichever feature had the most requests, the loudest voices supporting it, or a perception that it would give us some kind of market advantage. Those are things to consider, but if you can't tie it all back to user experience and how customers will use the tool, then you will end up building a Frankenstein. Our industry is full of these stitched-together products, built through mergers and acquisitions or through a haphazard, market-chasing feature development process, and we want to avoid that.
Since there aren’t a lot of practitioners who transition to becoming product leaders, it’s rare to have someone thinking about building a coherent IT product. That’s an exciting aspect of this job — I get to build the product that I’ve always wished existed. That changes some of the sequencing and the nuances of delivering features, how we bundle them, and how we iterate toward full development.
Though this isn't always the optimal market path in the short term, it leads to much happier customers in the long term. They get a lot more value out of the system, and that's our goal.
A focus on delivering value
Do you see this type of ‘Frankenstein’ effect happening with companies trying to implement AI in any way they can?
Definitely. There’s new technology and AI everywhere, and I think that companies often fall into the trap of wanting to incorporate it without fully thinking through the value for its use case. AI is a great example of this because we’ve explicitly had customers ask us, “When are you adding more AI to the product?” I’d ask them what they’d want AI in, and they’d say, “I don’t know. My management wants us to use products that have more AI in them.”
We sat there in a bit of a loop because their management didn't really know why they wanted more AI; their perception was just that it would improve efficiency. There was no clear thought process about what it would actually do. That's where the AI strategy I developed differs from many of our peers': I sat down and wrote out all the use cases for AI automating tasks.
It turns out that large language models and generative AI don't fit many of these use cases. We actually want something closer to traditional machine learning. That field is rapidly advancing, but it's not where the majority of investment is today. Rather than jamming another chatbot into our product, we should look at how to structure our data so that a machine learning model can fully capitalize on it and deliver valuable insights.
What types of KPIs do you prioritize to make sure you’re meeting your goals to deliver value to customers?
I'm more predisposed to focus on customer happiness and satisfaction metrics than financial ones. I often say that I don't really care how it's priced and packaged; I just want it to work well, be bug-free, and be easy to implement and use. If you focus on that, the revenue will come. I specifically don't want to fall into the trap of prioritizing features that will make more money over those that will help our users.
Overall, pricing models have evolved: deeply impactful features are often included in the basic tier, while less impactful but still required features become add-ons. I like to focus on impact rather than making a little extra revenue.
As a cybersecurity practitioner, I've made it a big focus not to charge customers for security. Security features or requirements aren't extra costs to our customers. It's tempting to do, though; it's very common in our industry to charge for things like single sign-on and multifactor authentication. The same goes for longer data retention on audit logs, since there is a cost associated with that. The problem is that you're blocking the organizations that need that security the most from the tools that would put them in a better place.
So, rather than try to compete in that security add-on space, we bake it into every single tier. Yes, it costs us a little bit more to give 400 days of audit logs to every single user, but the result is they can have more confidence in our product because they know that they have all the security features they need — there’s not a single piece they’re missing that could be creating a blind spot in their security posture.
Do you feel like that was a hard mindset shift for the organization to take on, especially since a company’s end goal is nearly always revenue?
At Automox, it actually wasn’t a difficult shift, which is great. There were a lot of things the company was trying to do, but couldn’t quite articulate. Overall, this was a change that the company wanted to make, and I was brought in to help accelerate and reinforce that change.
I've been fortunate that I didn't come into a culture that was averse to this change. Instead of having to convince people, I just had to explain and illuminate how to frame these choices to fit this new philosophy. People were already making certain trade-offs without fully realizing it, and that's where those teaching moments came in.
Implementing cross-organizational reporting within Automox
Can you walk us through a product launch or feature release you’ve done at Automox where your practitioner mindset played a critical role in shaping the shipped product?
We had a project to implement cross-organizational reporting, which was previously impossible if you had many organizations. For example, if you were an MSP with many customers, or a large business with many divisions split into separate organizations under one account, you couldn't get across-the-top reporting; you could only get reporting on a per-organization basis.
That's the most direct use case for cross-organizational reporting, because you want to understand how a given business unit is doing. What is its IT posture? If you're an MSP, you need to be able to report that back to each customer. Without a lot of manual work, you wouldn't know whether your team is getting better, worse, or staying the same across your customers. There was no way to show how a team was performing across the customer base, and that's where the desire to build this reporting came from.
I wanted to take that initiative, along with another project to enhance customization and visualizations, and combine them. I didn't want people to have to go to multiple places to get the answers they needed. They should have well-designed, smart defaults that they can adapt to their needs, and then they can scope and de-scope those answers along the way.
There is no magic combination that will make all customers happy, so I felt very strongly about delaying the project, which ultimately became Automox Analytics, to do it properly. We wanted to give users the freedom to see many things simultaneously, create a catalog of 200 reports that could be combined into multiple dashboards, and scope these analytics to whatever level they desire on the fly. This was very different from how we thought about it in the past, but customers have been very happy with the result. We're continuing to build on it because it is so successful and popular.
Were there any specific hurdles that you encountered throughout this launch process?
One big hurdle was the concern that the tool's popularity could affect production execution, because analytics and production would be sharing a database. After discussion with our engineering counterparts, we agreed to create a whole new database to support this feature. While it is more expensive to run that way and took a little longer to build, the result is that customers can go wild! They can hammer the analytics database to their heart's desire, and it can never endanger production execution.
This is where being transparent made a big difference. We explained why it was taking longer to release: the safety engineering involved had to be perfect. We needed to make sure that customers could maximally leverage the new feature without having to be concerned about how the platform performed. They didn't want to make that trade-off or be limited in the depth or number of queries they could perform. Building the more expensive design and taking the extra time to do it right was absolutely worth it.
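To make that design decision concrete, here is a minimal, hypothetical sketch of the pattern Jason describes: routing heavy reporting queries to a physically separate analytics database so they can never contend with the production workload. The `QueryRouter` class, database paths, and table are illustrative assumptions, not Automox's actual implementation.

```python
# Hypothetical sketch of the "separate analytics database" pattern described
# above -- not Automox's actual implementation.
import sqlite3


class QueryRouter:
    """Sends transactional work to production and reporting work to analytics."""

    def __init__(self, production_path: str, analytics_path: str):
        # Two physically separate databases: one serves the product itself,
        # the other serves dashboards and ad hoc reporting.
        self.production = sqlite3.connect(production_path)
        self.analytics = sqlite3.connect(analytics_path)

    def execute_transactional(self, sql: str, params: tuple = ()) -> list:
        # Product-critical reads and writes stay on the production database.
        cursor = self.production.execute(sql, params)
        self.production.commit()
        return cursor.fetchall()

    def execute_report(self, sql: str, params: tuple = ()) -> list:
        # Expensive, user-driven analytics queries hit the analytics copy,
        # so hammering dashboards cannot slow down production execution.
        return self.analytics.execute(sql, params).fetchall()


# Usage: in this pattern the analytics database is populated by replication
# or a periodic ETL job (not shown), so reports may lag production slightly.
router = QueryRouter("production.db", "analytics.db")
router.execute_transactional(
    "CREATE TABLE IF NOT EXISTS devices (id INTEGER PRIMARY KEY, patched INTEGER)"
)
```

The trade-off is exactly the one described above: duplicating the data costs more to run, but the isolation means reporting load can never degrade the production system.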
Investing time and money to get the product right
How do you think your overall approach differed from how a traditional product manager might tackle each step of the launch process?
The traditional product approach is to deliver a minimum viable product and build from there. That is a good strategy in many cases, but where I brought value was in recognizing that this was a different use case. While we still iterated on success, we needed to scope the minimum viable product much larger than we originally thought. That made a big difference. It was worth the wait.
Do you build it all at once, or do you build a little and iterate upon it? This is what many teams grapple with, but we recognized when we started working on the first version of this product that it needed to be big. There were lots of features we had to include, and customers wouldn’t see its true value unless they were all there. Identifying where that frustration would come from if we didn’t execute it right was the main trick.
The product launch was not only successful with external stakeholders, but also generated a lot of internal excitement. To wrap up, could you speak to how the product was received overall?
Internally, it was received with a lot of fanfare. There was a lot of excitement because people understood how critical this project was to the company’s success, as well as how meaningful it was going to be for our customers. That got everyone so genuinely excited. It was a really fun launch day.
Even when we were demoing it internally, you could see people on our team light up with excitement the first time they saw it. It exceeded so many people's expectations. Being on customer calls once it launched was even better, because we got to talk it over, show them demos, and see them get excited to use it.
The only thing topping that excitement is the excitement about what’s coming next. We have a lot of things we plan to build over the next couple of months to add onto that feature and get it fully developed. It’s just been a really fun journey, and it’s probably my favorite feature to date.