

151.8K
Downloads
182
Episodes
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be?
While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?
If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies.
I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.
Hashtag: #ExperiencingData.
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS
https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL:
https://designingforanalytics.com/bio/
Episodes

5 days ago
On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of PS for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts.
In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment.
Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity.
If you’re leading a data team, you’re a forward-thinking CDO, or you’re interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!
Highlights/ Skip to:
- Lucas’s breakthrough came when he embraced a data product management mindset (02:43)
- How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (4:31)
- Finding product-market fit by solving for a common analytics pain point (8:11)
- Analytics product and dashboard adoption challenges: why dashboards die and thinking of analytics as changing the business gradually (22:25)
- What outcome-based pricing means for AI and analytics (32:08)
- The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03)
- Lucas’s closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)
Special Bonus for DPLC Community Members
Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.
Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below:
Quotes from Today’s Episode
“The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job, you’ve helped move the business forward.”
— Lucas
“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.”
— Lucas
“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.”
— Brian
“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.”
— Lucas
“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went.”
— Brian
Links
- Gravity: https://www.bygravity.com
- LinkedIn: https://www.linkedin.com/in/thelosen/
- Email Lucas and team: hello@bygravity.com

Tuesday Oct 14, 2025
180 - From Data Professional to Data Product Manager: Mindset Shifts To Make
In this episode, I’m exploring the mindset shift data professionals need to make when moving into analytics and AI data product management. From how to ask the right questions to designing for meaningful adoption, I share four key ways to think more like a product manager, and less like a deliverables machine, so your data products earn applause instead of a shoulder shrug.
Highlights/ Skip to:
- Why shift to analytics and AI data product management (00:34)
- From accuracy to impact and redefining success with AI and analytical data products (01:59)
- Key Idea 1: Moving from question asker (analyst) to problem seeker (product) (04:31)
- Key Idea 2: Designing change management into solutions; planning for adoption starts in the design phase (12:52)
- Key Idea 3: Creating tools so useful people can’t imagine working without them (26:23)
- Key Idea 4: Solving for unarticulated needs vs. active needs (34:24)
Quotes from Today’s Episode
“Too many analytics teams are rewarded for accuracy instead of impact. Analysts give answers, and product people ask questions. The shift from analytics to product thinking isn’t about tools or frameworks, it’s about curiosity. It’s moving from ‘here’s what the data says’ to ‘what problem are we actually trying to solve, and for whom?’ That’s where the real leverage is, in asking better questions, not just delivering faster answers.”
“We often mistake usage for success. Adoption only matters if it’s meaningful adoption. A dashboard getting opened a hundred times doesn’t mean it’s valuable... it might just mean people can’t find what they need. Real success is when your users say, ‘I can’t imagine doing my job without this.’ That’s the level of usefulness we should be designing for.”
“The most valuable insights aren’t always the ones people ask for. Solving active problems is good, it’s necessary. But the big unlock happens when you start surfacing and solving latent problems, the ones people don’t think to ask for. Those are the moments when users say, ‘Oh wow, that changes everything.’ That’s how data teams evolve from service providers to strategic partners.”
“Here’s a simple but powerful shift for data teams: know who your real customer is.
Most data teams think their customer is the stakeholder who requested the work…
But the real customer is the end user whose life or decision should get better because of it.
When you start designing for that person, not just the requester, everything changes: your priorities, your design, even what you choose to measure.”
Links
- Need 1:1 help to navigate these questions and align your data product work to your career? Explore my new Cross-Company Group Coaching at designingforanalytics.com/groupcoaching
- For peer support: the Data Product Leadership Community where peers are experimenting with these approaches. designingforanalytics.com/community

Tuesday Sep 30, 2025
179 - Foundational UX principles for data and AI product managers
Content coming soon.

Tuesday Sep 16, 2025
In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design in a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions.
Kate stressed the importance of organizations having clear purpose statements and values from the outset, proxy metrics she uses to gauge human-friendliness, and applying a “harms of action vs. harms of inaction” lens for ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.
Highlights/ Skip to:
- How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way when building out AI solutions (1:03)
- Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
- FOMO and the “Solution in Search of a Problem” problem in Data (5:18)
- Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
- Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
- How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
- Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)
Quotes from Today’s Episode
"I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big word - kind of jargon-y kinds of discussions - that are easy to think aren't important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do."
–Kate O’Neill
" I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, seems as if it sits much more comfortably on the earlier side of the Now-Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously."
–Kate O’Neill
"The reason I chose human-friendly [as a term] over human-centered [is] partly because I wanted to be very honest about the goal and not fall back into, you know, jargony kinds of language that, you know, you and I and the folks listening probably all understand in a certain way, but the CEOs and the folks that I'm necessarily trying to get reading this book and make their decisions in a different way based on it."
–Kate O’Neill
“We love coming up with new names for different things. Like whether something is “cloud,” or whether it’s like, you know, “SaaS,” or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness [to it]. That’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days, with the work that I do, is really trying to help businesses—business leaders, mostly, but a lot of those are non-tech leaders, and I think that’s where this really sticks is that you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.”
–Kate O’Neill
"My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders. There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.”
–Brian T. O’Neill
Links
- KO Insights: https://www.koinsights.com/
- LinkedIn for Kate O’Neill: https://www.linkedin.com/in/kateoneill/
- Kate O’Neill Book: What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast

Wednesday Sep 03, 2025
In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions.
Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.
Highlights/ Skip to:
- Intro to PAXAFE (2:13)
- How PAXAFE brings tons of cold chain data together in one user experience (2:33)
- Innovation in cold chain analytics is up, but so is cold chain product loss (4:42)
- The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
- Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
- How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
- Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
- Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
- Pharma loses around $40 billion a year relying on ‘Bob’s intuition’ in the warehouse. How PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)
Quotes from Today’s Episode
"Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time."
–Ilya
"As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions."
-Ilya
"With our own design, our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design."
-Ilya
"We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything."
-Ilya
"If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters, you need to focus entirely on understanding your customers and iterating your product around their needs."
-Ilya
"Don’t build anything on day one, probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems."
-Ilya
Links
- PAXAFE: https://www.paxafe.com/
- LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
- LinkedIn for company: https://www.linkedin.com/company/paxafe/

Tuesday Aug 19, 2025
This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up.
In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers.
Highlights / Skip to:
- Introducing the MIRRR UX Framework (1:08)
- Designing for trust and user adoption, plus the perspectives you should be including when designing systems (2:31)
- Monitor and interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
- Explaining “redirection” in the example context of use cases for claims adjusters working on insurance claims—so adjusters (users) can focus on important decisions (4:35)
- Rerun controls: let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
- Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
- Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users—not from analytics (18:28)
- Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)
Quotes from Today’s Episode
"The value of AI isn’t just about technical capability, it’s based in large part on whether the end-users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."
"In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."
"Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."
"Reruns and rollbacks shouldn’t be seen as failures, they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."
"You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence."
"You’ll never learn the real reasons behind a team’s choices by only looking at analytics, you have to actually talk to them and watch them work."
"Labels matter, what you call a button or an action can shape how people interpret and trust what will happen when they click it."

Tuesday Aug 05, 2025
In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics, ML, and now LLM-driven AI agents is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working.
In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords.
By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI.
Using use cases from insurance claims processing, in this episode, I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents often should operate and interact within human systems:
- Monitor – enabling appropriate transparency into AI agent behavior and performance
- Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
…and in a couple weeks, stay tuned for part 2 where I’ll wrap up this first version of my MIRRR framework.
Highlights / Skip to:
- 00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
- 01:27 The importance of trust in AI systems and how it is linked to user adoption
- 03:06 Cultural shifts, AI hype, and growing AI skepticism
- 04:13 Human centered design practices for agentic AI
- 06:48 I discuss how understanding your users’ needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
- 11:32 Measuring success of agentic applications with UX outcomes
- 15:26 Introducing the first two of five MIRRR framework control points:
- 16:29 M is for Monitor; understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views
- 20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next
- 28:02 Conclusion and next steps

Tuesday Jul 22, 2025
In this episode of Experiencing Data, I chat with Irina Malkova, VP of AI Engineering and VP of Data and Analytics for Tech and Product at Salesforce. Irina shares how her teams are reinventing internal analytics, combining classic product data work with cutting-edge AI engineering. Her recent LinkedIn post, “AI adoption moves at the speed of user trust,” with its strong design-centered perspective, inspired today’s episode. (I even quoted her on this in a couple of recent product design conference talks I gave!) In today’s drop, Irina shares how they’re enabling analytical insights at Salesforce via a Slack-based AI agent, how they have changed their AI and engineering org structures (and why), the bad advice they got on organizing their data product teams, and more. This is a great episode for senior data product and AI executives managing complex orgs and technology environments who want to see how Salesforce is scaling AI for smarter, faster decisions.

Tuesday Jul 08, 2025
Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. The CEO of Pendo, an analytics SaaS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talked about the roles of product management at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. Todd also shares how he thinks about successful user adoption of his product around “time to value” and “stickiness” over vanity metrics like time spent.
Highlights/ Skip to:
- How Todd has addressed analytics apathy over the past decade at Pendo (1:17)
- Getting back to basics and not barraging people with more data and power (4:02)
- Pendo’s strategy for keeping the product experience simple without abandoning power users (6:44)
- Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51)
- What Pendo looks for when hiring product managers right now, and why (14:58)
- How Pendo evaluates AI product managers, specifically (19:14)
- How Todd Olson views AI product management compared to traditional software product management (21:56)
- Todd’s concerns about the probabilistic nature of AI-generated answers in the product UX (27:51)
- What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)
- Why being able to tell what answers are best will become more important as choice increases (40:05)
Quotes from Today’s Episode
- “Let’s go back to classic Geoffrey Moore Crossing the Chasm, you’re selling to early adopters. And what you’re doing is you’re relying on the early adopters’ skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn’t do anything because the market we were selling to was very, very savvy; they’re hungry people, they just like new things. They’re getting data, they’re feeling really, really smart, everything’s working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to these early majorities, which are, they’re not early adopters, they’re more technology laggards in some degree, and they don’t understand how to use data to inform their job. They’ve never used data to inform their job. There, we’ve had to do a lot more work.” Todd (2:04 - 2:58)
- “I think AI is amazing, and I don’t want to say AI is overhyped because AI in general is—yeah, it’s the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don’t. That’s not what I’m seeing. If you have a really curious product manager who’s going all in, I think you’re going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we’re seeing is a blend between design and product, that they’re always adjacent and connected; there’s more sort of overlappiness now.” Todd (22:41 - 23:28)
- “I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it’s a fast check-in, like, a five-minute check-in, a ten-minute check-in, that’s pretty darn sticky. That’s a good metric. So, I like stickiness as a metric because it’s measuring [things like], “Are you thinking about this product a lot?” And if you’re thinking about it a lot, and like, you can’t kind of live without it, you’re going to go to it a lot, even if it’s only a few minutes a day. Social media is like that. Thankfully I’m not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That’s a pretty good metric. It gets part of my process of any products that you’re checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that’s what’s ultimately going to win here.” Todd (39:57)

Tuesday Jun 24, 2025
Today on the podcast, I interview AI researcher Tony Zhang about some of his recent findings about the effects that fully automated AI has on user decision-making. Tony shares lessons from his recent research study comparing typical recommendation AIs with a “forward-reasoning” approach that nudges users to contribute their own reasoning with process-oriented support that may lead to better outcomes. We’ll look at his two study examples where they provided an AI-enabled interface for pilots tasked with deciding mid-flight the next-best alternate airport to land at, and another scenario asking investors to rebalance an ETF portfolio. The takeaway, taken right from Tony’s research, is that “going forward, we suggest that process-oriented support can be an effective framework to inform the design of both 'traditional' AI-assisted decision-making tools but also GenAI-based tools for thought.”
Highlights/ Skip to:
- Tony Zhang’s background (0:46)
- Context for the study (4:12)
- Zhang’s metrics for measuring over-reliance on AI (5:06)
- Understanding the differences between the two design options that study participants were given (15:39)
- How AI-enabled hints appeared for pilots in each version of the UI (17:49)
- Using AI to help pilots make good decisions faster (20:15)
- We look at the ETF portfolio rebalancing use case in the study (27:46)
- Strategic and tactical findings that Tony took away from his study (30:47)
- The possibility of commercially viable recommendations based on Tony’s findings (35:40)
- Closing thoughts (39:04)
Quotes from Today’s Episode
- “I wanted to keep the difference between the [recommendation & forward reasoning versions] very minimal to isolate the effect of the recommendation coming in. So, if I showed you screenshots of those two versions, they would look very, very similar. The only difference that you would immediately see is that the recommendation version is showing numbers 1, 2, and 3 for the recommended airports. These [rankings] are not present in the forward-reasoning one [airports are default sorted nearest to furthest]. This actually is a pretty profound difference in terms of the interaction or the decision-making impact that the AI has. There is this normal flight mode and forward reasoning, so that pilots are already immersed in the system and thinking with the system during normal flight. It changes the process that they are going through while they are working with the AI.” Tony (18:50 - 19:42)
- “You would imagine that giving the recommendation makes your decision faster, but actually, the recommendations were not faster than the forward-reasoning one. In the forward-reasoning one, during normal flight, pilots could already prepare and have a good overview of their surroundings, giving them time to adjust to the new situation. Now, in normal flight, they don’t know what might be happening, and then suddenly, a passenger emergency happens. While for the recommendation version, the AI just comes into the situation once you have the emergency, and then you need to do this backward reasoning that we talked about initially.” Tony (21:12 - 21:58)
- “Imagine reviewing code written by other people. It’s always hard because you had no idea what was going on when it was written. That was the idea behind the forward reasoning. You need to look at how people are working and how you can insert AI in a way that it seamlessly fits and provides some benefit to you while keeping you in your usual thought process. So, the way that I see it is you need to identify where the key pain points actually are in your current decision-making process and try to address those instead of just trying to solve the task entirely for users.” Tony (25:40 - 26:19)
Links
- LinkedIn: https://www.linkedin.com/in/zelun-tony-zhang/
- Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making: https://arxiv.org/html/2504.03207v1