

147.6K
Downloads
179
Episodes
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be?
While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?
If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies.
I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.
Hashtag: #ExperiencingData.
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS
https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL:
https://designingforanalytics.com/bio/
Episodes

3 days ago
In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design in a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions.
Kate stressed the importance of organizations having clear purpose statements and values from the outset, proxy metrics she uses to gauge human-friendliness, and applying a “harms of action vs. harms of inaction” lens for ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.
Highlights/ Skip to:
- How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way when building out AI solutions (1:03)
- Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
- FOMO and the “Solution in Search of a Problem” problem in Data (5:18)
- Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
- Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
- How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
- Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)
Quotes from Today’s Episode
"I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big word - kind of jargon-y kinds of discussions - that are easy to think aren't important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do."
–Kate O’Neill
" I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, seems as if it sits much more comfortably on the earlier side of the Now-Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously."
–Kate O’Neill
"The reason I chose human-friendly [as a term] over human-centered partly because I wanted to be very honest about the goal and not fall back into, you know, jargony kinds of language that, you know, you and I and the folks listening probably all understand in a certain way, but the CEOs and the folks that I'm necessarily trying to get reading this book and make their decisions in a different way based on it."
–Kate O’Neill
“We love coming up with new names for different things. Like whether something is “cloud,” or whether it’s like, you know, “SaaS,” or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness [to it]. That’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days, with the work that I do, is really trying to help businesses—business leaders, mostly, but a lot of those are non-tech leaders, and I think that’s where this really sticks is that you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.”
–Kate O’Neill
"My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders. There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.”
–Brian T. O’Neill
Links
- KO Insights: https://www.koinsights.com/
- LinkedIn for Kate O’Neill: https://www.linkedin.com/in/kateoneill/
- Kate O’Neill Book: What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast

Wednesday Sep 03, 2025
In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions.
Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.
Highlights/ Skip to:
- Intro to PAXAFE (2:13)
- How PAXAFE brings tons of cold chain data together in one user experience (2:33)
- Innovation in cold chain analytics is up, but so is cold chain product loss (4:42)
- The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
- Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
- How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
- Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
- Pharma loses around $40 billion a year relying on ‘Bob’s intuition’ in the warehouse. How PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)
- Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
Quotes from Today’s Episode
"Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time."
–IIya
"As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions."
-IIya
"With our own design, our initial hypothesis and vision for what Pacaf could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design.."
-IIya
"We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything."
-IIya
"If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters, you need to focus entirely on understanding your customers and iterating your product around their needs..”
-IIya
"Don’t build anything on day one, probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems."
-IIya
Links
- PAXAFE: https://www.paxafe.com/
- LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
- LinkedIn for company: https://www.linkedin.com/company/paxafe/

Tuesday Aug 19, 2025
This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up.
In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers.
Highlights / Skip to:
- Introducing the MIRRR UX Framework (1:08)
- Designing for trust and user adoption, plus the perspectives you should include when designing systems (2:31)
- Monitor and interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
- Explaining “redirection” in the example context of use cases for claims adjusters working on insurance claims—so adjusters (users) can focus on important decisions (4:35)
- Rerun controls: letting humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
- Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
- Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users—not from analytics (18:28)
- Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)
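The episode itself is all prose, but as a loose, hypothetical illustration only—none of these class or method names come from the MIRRR framework—the three R’s can be sketched as checkpointed task execution, using the episode’s insurance-claims domain for flavor:

```python
import copy
from typing import Any, Dict, List


class ReversibleAgent:
    """Hypothetical sketch: Redirect, Rerun, and Rollback as agent controls."""

    def __init__(self) -> None:
        self.state: Dict[str, Any] = {}
        self._checkpoints: List[Dict[str, Any]] = []
        self.human_queue: List[str] = []  # tasks handed back to people

    def do_task(self, name: str, effect: Dict[str, Any]) -> None:
        # Snapshot state before acting so Rerun and Rollback stay possible.
        self._checkpoints.append(copy.deepcopy(self.state))
        self.state.update(effect)

    def redirect(self, name: str) -> None:
        # Redirect: route the task to a human instead of automating it.
        self.human_queue.append(name)

    def rerun(self, name: str, effect: Dict[str, Any]) -> None:
        # Rerun: restore the pre-task state, then redo the task.
        self.state = self._checkpoints.pop()
        self.do_task(name, effect)

    def rollback(self) -> None:
        # Rollback: discard the last task's effects entirely.
        self.state = self._checkpoints.pop()


agent = ReversibleAgent()
agent.do_task("estimate damage", {"estimate": 7100})
agent.rerun("estimate damage", {"estimate": 6200})   # redo with a better result
agent.do_task("draft payout", {"payout": 6200})
agent.rollback()                                     # undo the payout draft
agent.redirect("final approval")                     # leave this one to a human
```

The point of the sketch is simply that reruns and rollbacks only work if every automated step is reversible by design—the snapshot happens before the effect, never after.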
Quotes from Today’s Episode
"The value of AI isn’t just about technical capability; it’s based in large part on whether the end-users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."
"In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."
"Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."
"Reruns and rollbacks shouldn’t be seen as failures, they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."
"You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence."
"You’ll never learn the real reasons behind a team’s choices by only looking at analytics, you have to actually talk to them and watch them work."
"Labels matter, what you call a button or an action can shape how people interpret and trust what will happen when they click it."

Tuesday Aug 05, 2025
In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now LLM-driven AI agents, is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working.
In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords.
By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI.
Using use cases from insurance claims processing, in this episode, I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents often should operate and interact within human systems:
- Monitor – enabling appropriate transparency into AI agent behavior and performance
- Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
…and in a couple weeks, stay tuned for part 2 where I’ll wrap up this first version of my MIRRR framework.
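There’s no code in the episode, but as a loose, hypothetical sketch (the `AgentController` name and its methods are my own illustration, not part of the MIRRR framework), the Monitor and Interrupt control points might look like this:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class AgentTask:
    name: str
    status: str = "pending"  # pending -> running -> done, or "interrupted"


class AgentController:
    """Hypothetical control surface: humans watch (Monitor) and pause (Interrupt) the agent."""

    def __init__(self) -> None:
        self.tasks: List[AgentTask] = []
        self.paused = False
        self.activity_log: List[str] = []

    def interrupt(self) -> None:
        # Interrupt: a manual pause that always wins over automation.
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def run(self, task: AgentTask, work: Callable[[], str]) -> None:
        self.tasks.append(task)
        if self.paused:
            task.status = "interrupted"  # work is held until a human resumes
            return
        task.status = "running"
        result = work()
        task.status = "done"
        self.activity_log.append(f"{task.name}: {result}")

    def monitor(self) -> Dict[str, str]:
        # Monitor: an aggregate, human-readable view of agent activity.
        return {t.name: t.status for t in self.tasks}


ctrl = AgentController()
ctrl.run(AgentTask("triage claim"), lambda: "routed")
ctrl.interrupt()                                      # human hits pause
ctrl.run(AgentTask("approve payout"), lambda: "paid")  # held, not executed
```

The design choice worth noticing is that the pause check happens before any work runs—the human override is structural, not an afterthought bolted onto the agent.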
Highlights / Skip to:
- 00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
- 01:27 The importance of trust in AI systems and how it is linked to user adoption
- 03:06 Cultural shifts, AI hype, and growing AI skepticism
- 04:13 Human centered design practices for agentic AI
- 06:48 I discuss how understanding your users’ needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
- 11:32 Measuring success of agentic applications with UX outcomes
- 15:26 Introducing the first two of five MIRRR framework control points:
- 16:29 M is for Monitor; understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views
- 20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next
- 28:02 Conclusion and next steps

Tuesday Jul 22, 2025
In this episode of Experiencing Data, I chat with Irina Malkova, VP of AI Engineering and VP of Data and Analytics for Tech and Product at Salesforce. Irina shares how her teams are reinventing internal analytics, combining classic product data work with cutting-edge AI engineering. Her recent LinkedIn post, “AI adoption moves at the speed of user trust,” with its strong design-centered perspective, inspired today’s episode. (I even quoted her on it in a couple of recent product design conference talks I gave!) In today’s drop, Irina shares how they’re enabling analytical insights at Salesforce via a Slack-based AI agent, how they have changed their AI and engineering org structures (and why), the bad advice they got on organizing their data product teams, and more. This is a great episode for senior data product and AI executives managing complex orgs and technology environments who want to see how Salesforce is scaling AI for smarter, faster decisions.

Tuesday Jul 08, 2025
Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. The CEO of Pendo, an analytics SaaS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talked about the roles of product management at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. Todd also shares how he thinks about successful user adoption of his product around “time to value” and “stickiness” over vanity metrics like time spent.
Highlights/ Skip to:
- How Todd has addressed analytics apathy over the past decade at Pendo (1:17)
- Getting back to basics and not barraging people with more data and power (4:02)
- Pendo’s strategy for keeping the product experience simple without abandoning power users (6:44)
- Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51)
- What Pendo looks for when hiring product managers right now, and why (14:58)
- How Pendo evaluates AI product managers, specifically (19:14)
- How Todd Olson views AI product management compared to traditional software product management (21:56)
- Todd’s concerns about the probabilistic nature of AI-generated answers in the product UX (27:51)
- What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)
- Why being able to tell what answers are best will become more important as choice increases (40:05)
Quotes from Today’s Episode
- “Let’s go back to classic Geoffrey Moore Crossing the Chasm, you’re selling to early adopters. And what you’re doing is you’re relying on the early adopters’ skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn’t do anything because the market we were selling to was very, very savvy; they’re hungry people, they just like new things. They’re getting data, they’re feeling really, really smart, everything’s working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to these early majorities, which are, they’re not early adopters, they’re more technology laggards in some degree, and they don’t understand how to use data to inform their job. They’ve never used data to inform their job. There, we’ve had to do a lot more work.” Todd (2:04 - 2:58)
- “I think AI is amazing, and I don’t want to say AI is overhyped because AI in general is—yeah, it’s the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don’t. That’s not what I’m seeing. If you have a really curious product manager who’s going all in, I think you’re going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we’re seeing is a blend between design and product, that they’re always adjacent and connected; there’s more sort of overlappiness now.” Todd (22:41 - 23:28)
- “I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it’s a fast check-in, like, a five-minute check-in, a ten-minute check-in, that’s pretty darn sticky. That’s a good metric. So, I like stickiness as a metric because it’s measuring [things like], “Are you thinking about this product a lot?” And if you’re thinking about it a lot, and like, you can’t kind of live without it, you’re going to go to it a lot, even if it’s only a few minutes a day. Social media is like that. Thankfully I’m not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That’s a pretty good metric. It gets part of my process of any products that you’re checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that’s what’s ultimately going to win here.” Todd (39:57)
Links

Tuesday Jun 24, 2025
Today on the podcast, I interview AI researcher Tony Zhang about some of his recent findings about the effects that fully automated AI has on user decision-making. Tony shares lessons from his recent research study comparing typical recommendation AIs with a “forward-reasoning” approach that nudges users to contribute their own reasoning with process-oriented support that may lead to better outcomes. We’ll look at his two study examples where they provided an AI-enabled interface for pilots tasked with deciding mid-flight the next-best alternate airport to land at, and another scenario asking investors to rebalance an ETF portfolio. The takeaway, taken right from Tony’s research, is that “going forward, we suggest that process-oriented support can be an effective framework to inform the design of both 'traditional' AI-assisted decision-making tools but also GenAI-based tools for thought.”
Highlights/ Skip to:
- Tony Zhang’s background (0:46)
- Context for the study (4:12)
- Zhang’s metrics for measuring over-reliance on AI (5:06)
- Understanding the differences between the two design options that study participants were given (15:39)
- How AI-enabled hints appeared for pilots in each version of the UI (17:49)
- Using AI to help pilots make good decisions faster (20:15)
- We look at the ETF portfolio rebalancing use case in the study (27:46)
- Strategic and tactical findings that Tony took away from his study (30:47)
- The possibility of commercially viable recommendations based on Tony’s findings (35:40)
- Closing thoughts (39:04)
Quotes from Today’s Episode
- “I wanted to keep the difference between the [recommendation & forward reasoning versions] very minimal to isolate the effect of the recommendation coming in. So, if I showed you screenshots of those two versions, they would look very, very similar. The only difference that you would immediately see is that the recommendation version is showing numbers 1, 2, and 3 for the recommended airports. These [rankings] are not present in the forward-reasoning one [airports are default sorted nearest to furthest]. This actually is a pretty profound difference in terms of the interaction or the decision-making impact that the AI has. There is this normal flight mode and forward reasoning, so that pilots are already immersed in the system and thinking with the system during normal flight. It changes the process that they are going through while they are working with the AI.” Tony (18:50 - 19:42)
- “You would imagine that giving the recommendation makes your decision faster, but actually, the recommendations were not faster than the forward-reasoning one. In the forward-reasoning one, during normal flight, pilots could already prepare and have a good overview of their surroundings, giving them time to adjust to the new situation. Now, in normal flight, they don’t know what might be happening, and then suddenly, a passenger emergency happens. While for the recommendation version, the AI just comes into the situation once you have the emergency, and then you need to do this backward reasoning that we talked about initially.” Tony ( 21:12 - 21:58)
- “Imagine reviewing code written by other people. It’s always hard because you had no idea what was going on when it was written. That was the idea behind the forward reasoning. You need to look at how people are working and how you can insert AI in a way that it seamlessly fits and provides some benefit to you while keeping you in your usual thought process. So, the way that I see it is you need to identify where the key pain points actually are in your current decision-making process and try to address those instead of just trying to solve the task entirely for users.” Tony (25:40 - 26:19)
Links
- LinkedIn: https://www.linkedin.com/in/zelun-tony-zhang/
- Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making: https://arxiv.org/html/2504.03207v1

Tuesday Jun 10, 2025
171 - Who Can Succeed in a Data or AI Product Management Role?
Today, I’m responding to a listener's question about what it takes to succeed as a data or AI product manager, especially if you’re coming from roles like design/BI/data visualization, data science/engineering, or traditional software product management. This listener correctly observed that most of my content “seems more targeted at senior leadership” — and had asked if I could address this more IC-oriented topic on the show. I’ll break down why technical chops alone aren’t enough, and how user-centered thinking, business impact, and outcome-focused mindsets are key to real success — and where each of these prior roles brings strengths and/or weaknesses. I’ll also get into the evolving nature of PM roles in the age of AI, and what I think the super-powered AI product manager will look like.
Highlights/ Skip to:
- Who can transition into an AI and data product management role? What does it take? (5:29)
- Software product managers moving into AI product management (10:05)
- Designers moving into data/AI product management (13:32)
- Moving into the AI PM role from the engineering side (21:47)
- Why the challenge of user adoption and trust is often the blocker to the business value (29:56)
- Designing change management into AI/data products as a skill (31:26)
- The challenge of value creation vs. delivery work — and how incentives are aligned for ICs (35:17)
- Quantifying the financial value of data and AI product work (40:23)
Quotes from Today’s Episode
- “Who can transition into this type of role, and what is this role? I’m combining these two things. AI product management often seems closely tied to software companies that are primarily leveraging AI, or trying to, and therefore, they tend to utilize this AI product management role. I’m seeing less of that in internal data teams, where you tend to see data product management more, which, for me, feels like an umbrella term that may include traditional analytics work, data platforms, and often AI and machine learning. I’m going to frame this more in the AI space, primarily because I think AI tends to capture the end-to-end product more frequently than data product management does.” — Brian (2:55)
- “There are three disciplines I’m going to talk about moving into this role. Coming into AI and data PM from design and UX, coming into it from data engineering (or just broadly technical spaces), and then coming into it from software product management. I think software product management and moving into the AI product management - as long as you’re not someone that has two years of experience, and then 18 years of repeating the second year of experience over and over again - and you’ve had a robust product management background across some different types of products; you can show that the domain doesn’t necessarily stop you from producing value. I think you will have the easiest time moving into AI product management because you’ve shown that you can adapt across different industries.” - Brian (9:45)
- “Let’s talk about designers next. I’m going to include data visualization, user experience research, user experience design, product design, all those types of broad design, category roles. Moving into data and/or AI product management, first of all, you don’t see too many—I don’t hear about too many designers wanting to move into DPM roles, because oftentimes I don’t think there’s a lot of heavy UI and UX all the time in that space. Or at least the teams that are doing that work feel that’s somebody else’s job because they’re not doing end-to-end product thinking the way I talk about it, so therefore, a lot of times they don’t see the application, the user experience, the human adoption, the change management, they’re just not looking at the world that way, even though I think they should be.” - Brian (13:32)
- “Coming at this from the data and engineering side, this is the classic track for data product management. At least that is the way I tend to see it. I believe most companies prefer to develop this role in-house. My biggest concern is that you end up with job title changes, but not necessarily the benefits that are supposed to come with this. I do like learning by doing, but having a coach and someone senior who can coach your other PMs is important because there’s a lot of information that you won’t necessarily get in a class or a course. It’s going to come from experience doing the work.” - Brian (22:26)
- “This value piece is the most important thing, and I want to focus on that. This is something I frequently discuss in my training seminar: how do we attach financial value to the work we’re doing? This is both art and science, but it’s a language that anyone in a product management role needs to be comfortable with. If you’re finding it very hard to figure out how your data product contributes financial value because it’s based on this waterfalling of “We own the model, and it’s deployed on a platform.” The platform then powers these other things, which in turn power an application. How do we determine the value of our tool? These things are challenging, and if it’s challenging for you, guess how hard it will be for stakeholders downstream if you haven’t had the practice and the skills required to understand how to estimate value, both before we build something as well as after?” - Brian (31:51)
- “If you don’t want to spend your time getting to know how your business makes money or creates value, then [AI and data product management work] is not for you. It’s just not. I would stay doing what you’re doing already or find a different thing because a lot of your time is going to be spent “managing up” for half the time, and then managing the product stuff “down.” Then, sitting in this middle layer, trying to explain to the business what’s going to come out and what the impact is going to be, in language that they care about and understand. You can't be talking about models, model accuracy, data pipelines, and all that stuff. They’re not going to care about any of that.” - Brian (34:08)

Tuesday May 27, 2025
Today, I'm chatting with Shri Santhanam, the EVP of Software Platforms and Chief AI Officer of Experian North America. Over the course of this promoted episode, you’re going to hear us talk about what it takes to build useful consumer and B2B AI products. Shri explains how Experian structures their AI product teams, the company’s approach to prioritizing its initiatives, and what it takes to get their AI solutions out the door. We also get into the nuances of building trust with probabilistic AI tools and the absolutely critical role of UX in end-user adoption.
Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Experian’s links below:
Links

Tuesday May 13, 2025
Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences.
After spending significant time on the forefront of AI’s breakthroughs, Stuart believes many of the products we’re seeing today are the result of FOMO above all else. He shares a belief that I’ve emphasized time and time again on the podcast: product is about the problem, not the solution. This design philosophy has informed Stuart’s 20-plus-year career, and it is pivotal to understanding how to best use AI to build products that meet users’ needs.
Highlights/ Skip to
- Why Stuart was asked to speak to the House of Lords about AI (2:04)
- The LLM-powered products Stuart has been building recently (4:20)
- Finding product-market fit with AI products (7:44)
- Lessons Stuart has learned over the past two years working with LLM-powered products (10:54)
- Figuring out how to build user trust in your AI products (14:40)
- The differences between being a digital product manager vs. AI product manager (18:13)
- Who is best suited for an AI product management role (25:42)
- Why Stuart thinks user experience matters greatly with AI products (32:18)
- The formula needed to create a business-viable AI product (38:22)
- Stuart describes the skills and roles he thinks are essential in an AI product team and who he brings on first (50:53)
- Conversations that need to be had with academics and data scientists when building AI-powered products (54:04)
- Final thoughts from Stuart and where you can find more from him (58:07)
Quotes from Today’s Episode
- “I think that the core dream with GenAI is getting data out of IT hands and back to the business. Finding a way to overlay all this disparate, unstructured data and [translate it] to the human language is revolutionary. We’re finding industries that you would think were more conservative (i.e. medical, legal, etc.) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn’t expect large language models to be used for fact-checking… they’re actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security–although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There’s a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58)
- “[LLM-powered products] gave me the wow factor, and I think that’s part of what’s caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we’re probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We’re in this rush and [these products] are [based on] FOMO. We’re leaving behind what we understood about [building] products—as if [an LLM-powered product] is a special piece of technology. It’s not. It’s another piece of technology. [Designers] should look at this technology from the prism of the business and from the prism of the problem. We love to solutionize, but is the problem the problem? What’s the context of the problem? What’s the problem under the problem? Is this problem worth solving, and is GenAI a desirable way to solve it? We’re putting the cart before the horse.” - Stuart Winter-Tear (11:11)
- “[LLM-powered products] feel most amazing when you’re not a domain expert in whatever you’re using it for. I’ll give you an example: I’m terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. Although [LLM products] look most amazing in the hands of non-experts, it’s actually most powerful in the hands of experts who do understand the domain they’re using this technology. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01)
- “We’re so used to the digital paradigm. The deterministic nature of you put in X, you get out Y; it’s the same every time. Probabilistic changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you’re dealing with cutting-edge products, you’ve got these two timelines that you’re trying to put together, and the data research one is very unpredictable. It’s the nature of research. We don’t necessarily know when we’re going to get to where we want to be.” - Stuart Winter-Tear (22:25)
- “I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I’m against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get that right push back from the large language model to understand even the level of the person that they’re dealing with? These are fundamental UX problems that are going to push UX to the forefront… This is going to be on UX to educate the user, to be able to inject the user in at the right time to be able to make this stuff work. The UX folk who do figure this out are going to create the breakthrough and create the mass adoption.” - Stuart Winter-Tear (33:42)