Leader Spotlight: Building trust and agility in a high-stakes tech vertical, with Scott Johnson
Scott Johnson is VP of Product Management at Black Duck, where he leads product efforts to secure the application software lifecycle through advanced application security testing and AI-driven analysis. With extensive experience in cybersecurity, Scott previously served as GM of Micro Focus Fortify and held leadership positions at Ionic Security, HP Fortify, and IBM. His career is defined by building innovative, resilient products in fast-changing, high-risk environments.
In our conversation, Scott shares how he balances innovation, customer trust, and emerging threats in one of tech’s most demanding domains.
Navigating the fast-moving cybersecurity landscape
What’s it like managing product in an environment with constantly evolving threats and compliance requirements?
When I entered cybersecurity back in 2002 at Internet Security Systems, the pace of change in cyber attacks was already incredibly rapid. While having knowledge of the previous five years was helpful, the landscape evolved so quickly that you had to be constantly learning. That hasn’t changed; the speed, risk factors, and compliance demands remain intense.
For example, the recent Cyber Resilience Act in Europe will impact compliance globally. Threat vectors continue to evolve across endpoints, networks, applications, and data. The old saying from G.K. Chesterton, “For no chain is stronger than its weakest link,” applies perfectly here and makes it tough to stay ahead.
When building product roadmaps, the key is understanding where customers are now to meet their current needs, and also, to use a hockey analogy, anticipating where the puck will be. For instance, C and C++ have been staples for embedded systems, but Rust is emerging as a safer, more efficient alternative. We recently discussed this with some of our key customers, and their developers are eager to adopt Rust in their IoT devices.
The challenge is validating these trends with customers and adjusting roadmaps to stay ahead without overcommitting to unknowns. In cybersecurity, a rigid three-year roadmap is unrealistic; 18 months is about the best you can plan for. For example, three years ago, few were talking about AI prompt poisoning, and now it’s a critical concern. Success requires agility and teams that are ready to pivot quickly.
How do you stay abreast of potential vulnerabilities? Do you have dedicated teams or subject matter experts you rely on?
It’s a mix of four areas. First, we spend a lot of time with analysts like Gartner and Forrester. We just had sessions with both recently. They provide valuable insight into what’s on the horizon, like how traditional application security testing and cloud-native application protection are starting to blur together. Those trends and predictions help inform whether we build new capabilities or consider mergers and acquisitions.
Second, we get input directly from customers through user events and conversations. We ask what they’re seeing and what they are missing.
Third, we keep an eye on competitors and new startups, especially those that are getting VC funding. For example, some are currently experimenting with large language model scanning capabilities. Maybe that type of feature takes off, maybe it doesn’t, but it’s important to be aware of it and factor it into strategic planning.
Fourth, we do our own market research. For example, AI is changing the vulnerability landscape. Now, code can be machine-generated. We ask how that fits into development pipelines. Does it matter if code is human or AI-generated? Is there a risk factor involved?
Evaluating and incorporating new technologies
There's a strong push to involve engineering earlier in the development process. Do you think AI could enable some of that initial work to be handled within the product function — or is there a risk to pushing it too far?
AI is also changing how we work. A future product manager might be someone who prompts AI tools. Imagine being on a call with a customer, recording it, then using AI to generate a prioritized list of requirements. From there, AI can create mock-ups and even prototypes. This speeds up development dramatically. Instead of days spent gathering and organizing requirements, you have most of the work done quickly, and a senior developer can focus on building the final product. That’s pretty mind-blowing, but only if it’s grounded in human judgment.
So yes, I think there is a risk in handing early development work to AI. At least for now, it’s unclear if the proper context is fully captured early on. Context really matters, especially when it comes to risks or priorities, and if you shorten the process, sometimes there isn’t enough clarity around those factors. AI-generated work might miss important context, too, though that could improve as the technology gets smarter.
I see it like this: say I’m a carpenter working on a roof. At first, I’m using a hammer and nails; I can feel and control everything, but my productivity is limited. Then I get a nail gun, which increases my efficiency, but only after I’ve understood the job well. The nail gun is a tool that helps me do my work better.
That’s how I see AI. It’s a powerful tool, but one that needs to be guided with care and context.
As you assess new technologies, how do you determine when they’re ready to be incorporated into your roadmap without compromising user trust or product stability?
There’s a difference between building for new technologies and incorporating them into your own products. With AI, we’re doing both. We’re using it to help us write code and make our systems more resilient, with features like auto-fixing and predictive capabilities. That’s where AI really started, right? From machine learning and predictive analytics.
So maybe at first, the AI suggests fixes and you say, “Let me review those.” But over time, as you get more comfortable, you might allow it to take more action automatically. It evolves.
But how fast you adopt really depends on the industry. Think about Geoffrey Moore’s technology adoption lifecycle — some industries are early adopters, others take time. A healthcare company might say, “I don’t want AI predicting blood pressure,” because what if it gets it wrong? On the other hand, a company like Tesla might try it right away to fine-tune gaps in the body panels.
In cybersecurity, we have to offer flexibility. Some customers aren’t ready to use AI features, so we make sure they don’t have to turn them on. Maybe it’s a separate module that customers can activate when they’re ready.
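To make that opt-in model concrete, here is a minimal sketch of gating an AI feature behind a per-customer setting. The names and structure are hypothetical, not Black Duck's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class TenantSettings:
    # AI-assisted features ship disabled; each customer opts in when ready
    ai_autofix_enabled: bool = False

def suggest_fixes(findings: list, settings: TenantSettings) -> list:
    if not settings.ai_autofix_enabled:
        return []  # the module is off, so behave exactly as before
    # Placeholder for the AI-backed path; a real product would call a model here
    return [f"Proposed fix for {finding}" for finding in findings]

findings = ["CVE-2021-44228 in log4j-core 2.14.1"]
print(suggest_fixes(findings, TenantSettings()))                         # []
print(suggest_fixes(findings, TenantSettings(ai_autofix_enabled=True)))  # suggestion
```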
There’s no one-size-fits-all answer. You have to balance innovation with trust and readiness, and let your customers meet the technology as they’re ready for it.
On agile product planning and customer engagement
Do you have a set cadence for re-evaluating product plans?
We take a layered approach for that. We start with an annual strategy, but from there we break things down. There are quarterly planning reviews (QPRs), release plans, sprint reviews, and release reviews. That layered cadence gives us regular checkpoints to adjust course based on market shifts or customer needs.
For example, when President Biden issued the executive order on software supply chain security, the demand for supply chain-related features spiked. But the order didn’t specify exactly how to comply. It just said, “Track your software supply chain.” So we had to evolve iteratively, along with the industry.
Different standards like CycloneDX and SPDX came out of that. Some customers liked one format, and some liked the other. So, like Coke and Pepsi, we had to offer both. That’s just part of staying flexible in a space that changes quickly.
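For readers who haven't seen the two formats, here is a rough sketch of how the same component might be described in each. These are tiny illustrative fragments, not complete, spec-compliant documents:

```python
import json

component = {"name": "zlib", "version": "1.2.13"}

# CycloneDX is commonly serialized as JSON; this is a pared-down example
cyclonedx = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [{"type": "library", **component}],
}

# SPDX also supports a tag-value style; again, only a few fields shown
spdx_tag_value = (
    "SPDXVersion: SPDX-2.2\n"
    f"PackageName: {component['name']}\n"
    f"PackageVersion: {component['version']}"
)

print(json.dumps(cyclonedx, indent=2))
print(spdx_tag_value)
```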
Even non-tech orgs face real risks from software vulnerabilities. For example, Starbucks once had a security lapse where unencrypted passwords were left in their build system. While it didn’t lead to a breach, such exposures could have disrupted critical operations, like cash registers, and eroded customer trust.
Communicating and managing risk for diverse user groups
How do you help users manage risk in such a fast-moving threat landscape?
One of the hardest truths in cybersecurity is that you’ll never be able to catch everything. There are too many threat vectors, and things change too quickly. So the focus has to shift from just identifying problems to helping customers mitigate risk.
Telling someone “you have vulnerabilities” isn’t enough. That’s like going to the doctor and being told, “You’re sick,” with no diagnosis or treatment plan. Customers want clarity. What’s wrong? How severe is it? What can I do about it?
More CISOs are looking for that kind of contextual lens both for their own leadership teams and for the teams doing the work. A CISO needs to understand risk posture, investment ROI, and trends over time. Meanwhile, a developer wants to know: What’s the vulnerable component? Is there a fix? Has it been patched already?
The beauty is, it’s all the same data, but filtered and visualized differently depending on who’s using it. AI can help here. If we scan an app with something like Black Duck, the devs get actionable technical info, while the CISO gets a high-level view: red/yellow/green status, spend vs. value, and whether they need to invest more or less. That tailored context builds alignment across the org.
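Here is a minimal sketch of that "same data, different lens" idea. The finding fields are made up for illustration rather than taken from Black Duck's real schema:

```python
findings = [
    {"component": "log4j-core 2.14.1", "severity": "critical",
     "fix": "upgrade to 2.17.1"},
    {"component": "jackson-databind 2.9.8", "severity": "medium",
     "fix": "upgrade to 2.13.4"},
]

def developer_view(findings):
    # Developers get the actionable detail: what's vulnerable and how to fix it
    return [f"{f['component']} -> {f['fix']}" for f in findings]

def ciso_view(findings):
    # The CISO gets a rolled-up status derived from the very same records
    criticals = sum(1 for f in findings if f["severity"] == "critical")
    return "red" if criticals else ("yellow" if findings else "green")

print(developer_view(findings))  # per-component fixes
print(ciso_view(findings))       # "red"
```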
And then there’s the real-world impact. Look at the CrowdStrike outage. It wasn’t even a vulnerability, just a software bug. But it brought down airports and airlines because everyone relied on the same systems. There was no fallback, no diversity in the infrastructure. People missed funerals — not because of a war or disaster, but because of a software update. That’s the kind of risk we have to think about now.
You mentioned tailoring product context for both CISOs and developers. How do you test whether your positioning actually resonates with customers?
The best way is to test it. If our product management group goes too long without talking to users, we stop and ask, “Wait. Who are we doing this for?” We have to validate.
We do more tactical testing, too. For our supply chain features, we meet monthly with a small user group of four or five customers and walk them through designs or mockups. We ask: Does this make sense? How would your boss view it? You want that feedback loop early, not after you’ve fully built something. Otherwise, you're stuck tweaking something that’s already half-baked, and that’s wasted time.
We also test messaging internally, especially with sales engineers. They’re on the front lines with customer use cases. A good SE can tell you, “Yeah, if you frame it like this, it’ll land better.” They’re not always right, so you still need to validate. But it’s another lens.
One thing we watch for is over-indexing on one loud voice. If a customer is vocal and opinionated, it’s easy to overcorrect. Talking to a small but representative sample often gives us a better chance of getting it mostly right. It’s not perfect, but it’s iterative. Test, assess, adjust.
Measuring product impact and driving preventative security
Given that some of the features and functionality you're building into tools are hopefully never needed, how do you measure the effectiveness of those preventative measures?
You're right. In a perfect world, you'd hope those vulnerabilities are never found. But you still need to track whether the preventative features are working. One way is through policy adoption. If customers build those capabilities into their regular processes, like how they run scans or assess open source components, that's already a strong signal. It means the feature has been operationalized.
Then there’s validation from the field. Ideally, customers aren't finding new vulnerabilities. Even better, they're not identifying any false negatives — those “you should’ve caught this but didn’t” situations. That builds trust in the product’s preventative measures. One of our customers is a manufacturer of packaging systems for foods like milk and juice, and also baby formula. Their software controls the formula mix, so a cyberattack that alters those settings could have serious health consequences. It’s a stark reminder that cybersecurity is about protecting people’s safety, not just data.
On the metrics side, it depends on the domain. In application security, for instance, you can track how many scans were run, how many issues were found, how many were remediated, and whether any vulnerabilities were missed or actually exploitable. In network security, it might look like: “We analyzed a million packets, and only 0.001% were malicious and slipped through.” You use those insights to build KPIs.
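As a rough illustration of turning those raw counts into KPIs (the record fields here are assumptions, not any particular product's schema):

```python
# One record per scan; in practice this data would come from the scanner itself
scans = [
    {"issues_found": 42, "issues_remediated": 39, "false_negatives": 0},
    {"issues_found": 17, "issues_remediated": 17, "false_negatives": 1},
]

found = sum(s["issues_found"] for s in scans)
fixed = sum(s["issues_remediated"] for s in scans)
missed = sum(s["false_negatives"] for s in scans)

print(f"Scans run: {len(scans)}")
print(f"Remediation rate: {fixed / found:.1%}")  # issues fixed vs. found
print(f"Missed vulnerabilities: {missed}")       # the "should've caught it" count
```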
Another layer is measuring downstream quality. If a developer ships an app and the release is riddled with vulnerabilities, that's a sign that their secure coding practices need work. Some companies use training or reinforcement to help improve developer output over time, so those issues decrease.
At the end of the day, you’re combining hard metrics, customer process adoption, and post-release quality checks to paint a picture of how effective your preventative tooling really is.
Is there an example you could share where improving a specific metric led to an outsized business impact?
Sure. We had a group of customers, particularly on the open source side, who were really concerned about remote code execution, or RCE, vulnerabilities. These come up in open source components where, for one reason or another, code can be executed remotely. Sometimes that’s by design, maybe to enable updates, but it also opens the door for malicious use. If someone gets in, they could execute that code from their terminal, potentially install a rootkit at the kernel level, and you wouldn’t even know it. That’s a serious risk.
Now, our platform could already identify RCEs, but we weren’t surfacing them in a way customers could easily act on. It was just buried in the scan results. And a few of our larger customers flagged that. They said, “Hey, we know you’re detecting this stuff, but we can’t do anything with it.” At first, we didn’t think it was a top priority, but it clearly was for them, especially for companies with proprietary systems that go into other products down the supply chain.
So, we made changes. We updated how we presented that RCE information, flagged it more clearly, made it filterable, and let users mark whether they’d reviewed it or needed to swap the component out. Sometimes our platform even recommended alternatives, like, “Use this version of the transport layer security (TLS) library instead.”
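In spirit, the change amounted to something like the sketch below: pulling RCEs out of the general results and making them filterable and actionable. The fields and values are illustrative, not the product's actual data model:

```python
findings = [
    {"category": "rce", "component": "openssl 1.0.2", "reviewed": False,
     "suggested_alternative": "openssl 3.0.13"},
    {"category": "xss", "component": "jquery 1.8.3", "reviewed": False,
     "suggested_alternative": None},
]

def rce_worklist(findings):
    # Surface RCEs as a first-class, filterable list instead of burying them
    return [f for f in findings if f["category"] == "rce" and not f["reviewed"]]

for f in rce_worklist(findings):
    print(f"RCE in {f['component']}; consider {f['suggested_alternative']}")
    f["reviewed"] = True  # users can mark items reviewed or slated for swap-out
```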
That change made a big difference. Especially in high-stakes industries like automotive, where if an RCE sneaks into a component and ends up in a vehicle, it can lead to real-world consequences and a PR nightmare. Like, “Your autonomous car got hacked because of a hidden open source vulnerability.”
It’s a great example of how we listened to customers, learned from the feedback, and led with the fix. That’s kind of a personal mantra of mine. Listen, learn, lead. And if I had to break that down, I’d say it’s optimally 50% listening, 40% learning, and just 10% leading. Interestingly, that 10% is where the impact shows up — that’s the execution. But if you haven’t listened and learned first, then you’re just leading in the wrong direction.
The transformative role of AI in cybersecurity today and tomorrow
How are artificial intelligence and large language models influencing cybersecurity, and what do you see for the future?
AI is pervasive and is fundamentally changing everything we do. We need to accept it and leverage it as a tool. I’ll go back to a construction analogy. Growing up, my neighbor’s dad was a carpenter with a belt full of tools for different tasks. Similarly, professionals now have, or will soon have, an AI tool belt with solutions to help write emails faster, transcribe calls, summarize discussions, and set follow-up meetings. For example, from this call, you could feed the transcript to an AI agent, maybe even ChatGPT, and say, “Write a draft of this discussion.” The result won’t be perfect, but you will get a solid starting point.
In cybersecurity, there’s always risk. If you ask AI to write an application with zero SQL injection vulnerabilities, can it do that perfectly? Maybe not yet. And do you trust it? What if it says it did something, but it didn’t — it doesn’t know it’s wrong. That can create a false sense of security.
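To ground the SQL injection point, here is the classic contrast in miniature, using Python's built-in sqlite3 module. The first query splices user input into the SQL text; the second binds it as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the input becomes part of the statement, the WHERE clause is
# always true, and every row leaks
print(conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall())  # [('alice',)]

# Safe: the driver binds the value, so it can't change the query's structure
print(conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```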
With AI evolving so quickly, who should be accountable for how it’s used or misused in cybersecurity?
I’m blown away by how fast things are transforming. I was talking with a friend who runs an AI company I advise, called Quome. I told him, “You should put your release info out there publicly, like a press release.” He said, “Great idea, I’ll do it now.” Then he pulled up his release notes from the last six months, prompted an AI tool to write a press release summarizing key features with a quote from him, and in five minutes had an 80% complete draft. He just needed to tweak it a bit. That kind of speed? Wow. It’s changing everything.
In cybersecurity, AI will definitely transform everything, from detecting and preventing vulnerabilities to blocking attacks. But it also introduces new risks, like LLM poisoning. What’s that? Imagine you have an open large language model in your environment, and someone updates it so it behaves differently from the standard you set. Is that okay? It’s like a configuration shift. Maybe it’s fine, maybe not. But you’d want to know.
It’s like a new era of version control. Remember when we used to launch software with a “gold CD” image? Any deviation was a no-go. With shifting AI models, you need a baseline to compare against and detect changes. Maybe a null field turned into zero or one, which could introduce a vulnerability like a denial of service attack. That’s why container security is becoming so important.
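As a toy version of that baseline check, sketched in Python (the file path and recorded digest are made up):

```python
import hashlib
from pathlib import Path

# The "gold image" idea applied to model artifacts: record a digest at release
# time, then verify nothing has drifted from that baseline
def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

baseline = {
    "models/llm-weights.bin": "9f2d...e7a1",  # hypothetical digest from ship time
}

for name, expected in baseline.items():
    path = Path(name)
    if not path.exists() or digest(path) != expected:
        print(f"DRIFT: {name} no longer matches its baseline digest")
```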
I got into cybersecurity and product management because there’s never a dull moment. Software is everywhere. As Marc Andreessen said, “Software is eating the world.” Look at Target’s data breach a few years ago. A security failure cost them about a billion dollars in market cap, and the CEO got fired.
Today, everything runs on software, and you can’t do much without it. That’s why securing it as best you can is absolutely essential.
What does LogRocket do?
LogRocket's Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps. Understand where your users are struggling by trying it for free at LogRocket.com.