Leader Spotlight: Personalization at scale with AI, with Jal Tummala
Jaladheer (Jal) Tummala is Vice President of Product Management at HelloFresh. Previously, he held product and MarTech leadership roles at PVH Corp and Publicis.Sapient. He has deep experience in customer data platforms, marketing automation, loyalty solutions, mobile applications, and agile delivery.
In this conversation, Jal focuses on leveraging AI and machine learning to predict customer preferences and to drive AI-based decision-making for marketing personalization. He discusses explicit and implicit signals, patterns over time, model choices, reinforcement learning, and the balance between real-time personalization and scalability.
Customer expectations and personalization
How do customer expectations around personalization differ between industries, and how does that shape your strategy?
I’ve worked across a few different industries — retail, quick-service restaurants, a bit of banking, fashion retail, and direct-to-consumer subscriptions. The core underpinning is that most of these are direct-to-consumer businesses. One common theme is a very strong expectation from customers that businesses will leverage AI and machine learning to understand them and personalize experiences and services.
Over the last 10 years, that expectation has only grown. The more we, as individuals, interact with services like Netflix or Amazon, the more those companies raise the bar in terms of customer experience and personalization. That has become table stakes.
In retail, AI is a core part of the discovery journey. When you want to find a product, you expect AI to assist you based on past purchase history — knowing preferences, recognizing patterns, recommending products, and even noticing that you might order certain household products every four to six weeks and factoring that into what is shown to you.
In banking, it’s less about discovery or removing friction, and more about increasing trust. Customers absolutely expect banks to use AI for fraud detection and alerting. That’s a different dimension, where AI increases trust in the product and services.
For subscription businesses, the relationship is built on longevity. The longer a customer engages, the stronger the expectation for the brand to know them — recognizing when the customer might be unhappy, being able to act proactively to prevent churn, and providing recommendations based on long-term engagement. Subscription is about curating that lifelong journey. While there are some differences according to vertical, the expectation that AI will play a big role is common across all industries.
Signals, patterns, and prediction
What signals are most predictive of changes in preferences?
It’s not just about signals in the moment, but also about patterns or behaviors over time. For example, if you’re a fashion retailer and a customer who often buys office wear starts routinely browsing for casual wear, and that becomes a pattern, it suggests a change in their situation — maybe they’re now in a fully remote role and don’t need formal wear every day. Models need to be attuned to those changes.
There are explicit versus implicit signals. Explicit signals are what we ask and customers provide. Think of Netflix: after you finish watching a show, you’re asked whether it gets a thumbs down, a thumbs up, or two thumbs up. Food delivery services ask, “Rate the food that you ordered.” Subscription services may ask for your goals. These explicit signals become inputs for machine learning models.
Implicit signals come from how customers interact with the product and marketing — which products they browse, how they navigate, which communications they interact with. If you send emails or SMS, you notice what they click and what they ignore. If different communications emphasize different value propositions and a customer engages with some and not others, that’s a strong indication. If I talk to a customer about a certain benefit and they engage with that content, that matters; if I talk about convenience and they don’t engage, they’re likely less interested in that value proposition.
How do you validate or improve prediction accuracy over time?
Rich signals — rich data about customers — are a big factor. Then it’s techniques like feature engineering, where you take available signals and translate them into high-quality features for training. It’s also experimenting with different model architectures because certain types of models are better suited for certain use cases.
For dynamic use cases, it’s not just about the customer but also external factors — what’s right for me today might not be right tomorrow. You need data sets that aren’t just about the customer but also the macro. Seasonality, for example, especially in clothing, has a big impact.
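To make the feature-engineering step concrete, here is a minimal, hypothetical sketch: raw signals (purchase timestamps, a browsing-category shift, and the current date) are translated into model-ready features, with seasonality encoded cyclically so late December and early January land next to each other. The feature names are illustrative, not from the conversation.

```python
import math
from datetime import datetime

def build_features(order_dates: list[datetime], casual_views: int,
                   total_views: int, now: datetime) -> dict[str, float]:
    """Translate raw behavioral signals into model-ready features."""
    # Purchase cadence: median gap between consecutive orders, in days
    gaps = sorted((b - a).days for a, b in zip(order_dates, order_dates[1:]))
    cadence = float(gaps[len(gaps) // 2]) if gaps else 0.0

    # Category-shift signal: share of recent browsing that is casual wear
    casual_share = casual_views / total_views if total_views else 0.0

    # Seasonality: encode day-of-year on a circle so the model sees
    # Dec 31 and Jan 1 as neighbors rather than opposite extremes
    day = now.timetuple().tm_yday
    return {
        "order_cadence_days": cadence,
        "casual_browse_share": casual_share,
        "season_sin": math.sin(2 * math.pi * day / 365.25),
        "season_cos": math.cos(2 * math.pi * day / 365.25),
    }
```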
Reinforcement learning is helpful because models learn over time. You balance exploration and exploitation. Give the model flexibility to explore with, say, 20-30 percent of traffic — it’s not trying to optimize to a goal, but trying variations and seeing which work better for which customers. With the remaining 70-80 percent of traffic, the model exploits the learnings to drive business or customer-experience goals. Model accuracy isn’t static; it keeps improving as the model learns.
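The split Jal describes maps naturally onto an epsilon-greedy bandit. A minimal sketch, assuming two message variants and a 20 percent exploration rate (the variant names and conversion metric are illustrative):

```python
import random
from collections import defaultdict

EPSILON = 0.2  # route ~20% of traffic to exploration, the rest to exploitation
VARIANTS = ["value_prop_a", "value_prop_b"]

impressions = defaultdict(int)
conversions = defaultdict(int)

def choose_variant() -> str:
    # Explore: try a random variation to keep learning
    if random.random() < EPSILON or not any(impressions.values()):
        return random.choice(VARIANTS)
    # Exploit: pick the variant with the best observed conversion rate
    return max(VARIANTS,
               key=lambda v: conversions[v] / impressions[v] if impressions[v] else 0.0)

def record_outcome(variant: str, converted: bool) -> None:
    impressions[variant] += 1
    conversions[variant] += int(converted)
```

In production this would more likely be a contextual bandit or full RL setup with per-customer features, but the explore/exploit mechanics are the same idea.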
Scaling real-time personalization
How do you balance real-time personalization with speed and scale?
It’s always a balance. Customers expect personalized experiences, but they’re not willing to trade off speed and efficiency — especially in today’s attention economy. If a product doesn’t load in milliseconds, people don’t have the patience to sit and wait, no matter how great the experience might be.
Some use cases don’t need real-time computation. Back-end operations like forecasting can run in the background. They need to be performant — you don’t want a forecasting algorithm that runs for hours — but they don’t need to respond in microseconds. For direct customer-facing experiences, it has to be fast. Here, distinguish between computing the prediction in real time and serving the prediction in real time. Often, you compute offline and serve from a fast-access layer. In rare scenarios where you also compute in real time, that’s where optimizing for computing on the edge comes in.
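A sketch of that compute/serve separation, with a plain dict standing in for a low-latency key-value store such as Redis (the function names and fallback are illustrative, not a specific system):

```python
# Stand-in for a low-latency key-value store (e.g. Redis) in production
prediction_store: dict[str, list[str]] = {}
POPULAR_FALLBACK = ["bestseller_1", "bestseller_2"]

def run_model(customer_id: str) -> list[str]:
    # Placeholder for expensive model inference
    return [f"item_for_{customer_id}"]

def batch_score(customer_ids: list[str]) -> None:
    # Offline path: score everyone in a batch job (e.g. nightly),
    # outside the request path, where latency doesn't matter
    for cid in customer_ids:
        prediction_store[cid] = run_model(cid)

def get_recommendations(customer_id: str) -> list[str]:
    # Online path: serving is a key lookup, fast enough for a page render;
    # fall back to popular items for customers not yet scored
    return prediction_store.get(customer_id, POPULAR_FALLBACK)
```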
You also have to balance compute cost for the business. Architecture depends on the use case — response-time requirements, cost envelope — and whether you can separate compute from serve to meet both customer expectations and efficiency.
Transforming marketing with AI
Can you share an example of using AI to transform a legacy marketing process?
Offer personalization is a specific example. Marketers have been sending offers to customers for decades — from the era of list marketing onward. Traditionally, offers were managed by geography and segments: how do I give the right offers to the right segments? As companies became more data-driven, they tested which offers worked better for which customers. But segmentation still treats customers in large blocks, and the pace at which you can test is limited. Every time you want to introduce a new offer, you repeat the testing cycle.
With machine learning, you can try to understand price elasticity at the individual customer level. Instead of making decisions at the segment level, you could make more granular and personalized decisions. That drives much better returns on investment, and from a customer perspective you’re more likely to get relevant offers and engage with the brand in more exciting ways. Generative AI adds opportunity for transforming content across the marketing workflow.
How do you measure whether AI-driven tactics like campaign targeting or lookalike modeling have a measurable impact on acquisition cost or long-term value?
While there are differences between machine-learning product development and traditional software, there are also similarities in how you measure impact. Start with a solid testing plan: build a version of the machine-learning product, integrate it into a customer experience or business process, and then A/B test it.
Today, the baseline is rarely one-size-fits-all; most experiences are at least segmentation driven. The real comparison is between a segmentation-driven experience and a personalized, machine-learning-driven experience. For example, in offer allocation, compare allocating offers based on customer segmentation to allocating offers based on a machine-learning model. For messaging, compare sending different messages to different segments with a model that decides at the customer level which message to send.
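A sketch of how such a test might be wired up, using a deterministic hash-based split so each customer consistently sees one arm (SEGMENT_OFFERS and model_offer are hypothetical stand-ins):

```python
import hashlib

SEGMENT_OFFERS = {"new": "10_percent_off", "loyal": "free_shipping"}

def model_offer(customer: dict) -> str:
    # Placeholder for the ML model's per-customer offer decision
    return "personalized_offer"

def assign_arm(customer_id: str, experiment: str = "offer_alloc_v1") -> str:
    """Deterministic 50/50 split: a customer always lands in the same arm."""
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return "ml_model" if int(digest, 16) % 100 < 50 else "segment_rules"

def pick_offer(customer: dict) -> str:
    if assign_arm(customer["id"]) == "ml_model":
        return model_offer(customer)            # treatment: model-driven allocation
    return SEGMENT_OFFERS[customer["segment"]]  # control: one offer per segment
```

Conversion and revenue can then be compared per arm, and sliced by customer attributes to see where the model wins or loses.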
ML models don’t always win on the first go. Through experimentation, you learn a lot. When you slice the results, the model may have performed better on certain customers and not so well on others. That can indicate that the model is missing certain signals. Go back to feature engineering. Maybe the training data isn’t clean enough. Perhaps seasonality is a factor, and training data didn’t capture it. You may need to rethink data sets and retrain the model. The key is to be very methodical and data-driven in evaluating performance against the current state and then using those learnings to iterate.
Should AI replace existing processes, or augment them?
Absolutely, you can augment. I’ve seen situations where you augment the model with certain rule-based decision-making to drive optimal decisions. With generative AI, you’ll see more workflows that are not just autopilot but copilot, meaning the AI system augments what a human is doing rather than replacing them. Working together like that is much more powerful than working separately. Of course, there are situations where an autopilot system can deliver as well as a human, but in many creative endeavors, augmentation tends to deliver better performance. It really depends on the use case.
Consistency, architecture, and trust
How do you ensure consistency of AI-driven personalization across mobile, desktop, email, and physical touchpoints?
At the core, make sure there’s a central understanding of the customer. Customers interact with a brand across multiple touchpoints. If you only use in-store signals or only email signals and try to personalize in silos, it will lead to a disjointed experience. Unify those data points so you have a unified understanding of the customer — their profile and interactions. Drive machine-learning products off that central layer. Then ensure the predictions feed all layers of the experience — mobile, desktop, email, and even in-store. It comes down to an architectural problem: think layered system, with a central customer profile, ML products driven off that layer, and predictions feeding the touchpoints.
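In code, that layering might look like the following sketch: one channel-agnostic profile store that every touchpoint writes to, and one prediction function that every touchpoint reads from. The structures here are illustrative, not a specific platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """Central layer: one channel-agnostic view of the customer."""
    customer_id: str
    events: list[dict] = field(default_factory=list)

PROFILES: dict[str, CustomerProfile] = {}

def ingest(customer_id: str, channel: str, event: str) -> None:
    # Every touchpoint (mobile, desktop, email, in-store) writes here,
    # so no channel personalizes from a silo of its own signals
    profile = PROFILES.setdefault(customer_id, CustomerProfile(customer_id))
    profile.events.append({"channel": channel, "event": event})

def predict_next_best_action(customer_id: str) -> str:
    # Every touchpoint reads the same model output, so the customer
    # sees a consistent experience across channels
    profile = PROFILES.get(customer_id)
    if profile is None or not profile.events:
        return "welcome_journey"
    return f"recommendation_based_on_{len(profile.events)}_events"
```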
What are your thoughts on the “black box” nature of AI in customer-facing use cases and internal decision-making?
Customers expect machine learning to be baked into products, so it isn’t a surprise that a product uses ML. What’s important is being transparent. Netflix shows recommendations “because you watched this.” Social networks let you see “Why am I seeing this ad?” These lightweight features make things less creepy and, in some cases, give control — you can say: “No, don’t show me this again.”
In banking, when a transaction is blocked and a fraud alert is sent, some brands explain why they flagged it and give you an opportunity to immediately interact and confirm that it’s legitimate. If you’re in a foreign country and your transaction gets blocked, being able to respond quickly matters.
Internally, when ML models augment decision-making, there’s often a need to understand predictions better — “Why is this system making a certain recommendation?” The biggest lever to drive usage and results is having a reporting and insights layer on top. If the only interface is just “here are the predictions,” you’re asking business users to take a leap of faith. As humans, we’re naturally curious; we want to understand logically why a recommendation is made. An insights layer lets users dig deep, slice and dice data, and gain confidence. It’s also useful for product teams — users often point out discrepancies that help improve the model.
The future with generative AI
As generative AI enters the martech stack, what excites you most?
Content creation. Even with segmentation, marketers are limited by the amount of content they can produce and how fresh they can keep it. If you have 20 segments, you can produce 20 varieties of content and target them — that’s it. The velocity of content workflow is the constraint.
Generative AI moves us from mass messaging with segmentation to mass personalization at scale. Imagine producing individualized pieces of content for customers so you can truly achieve personalization at scale. With the right automation, experimentation, and systems, high-scale personalization is achievable with GenAI tools.