

144.3K Downloads · 176 Episodes
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes

7 days ago
In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now with LLM-driven AI agents, is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working.
In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may, in many situations, be essential to ensuring automated workers engender trust with their human overlords.
By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI.
In this episode, using use cases from insurance claims processing, I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents should often operate and interact within human systems:
- Monitor – enabling appropriate transparency into AI agent behavior and performance
- Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
…and in a couple weeks, stay tuned for part 2 where I’ll wrap up this first version of my MIRRR framework.
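To make these first two control points a little more concrete, here is a minimal sketch of what a Monitor/Interrupt control surface for an agent could look like in code. This is purely illustrative and not from the episode: the class, the method names, and the 0.80 confidence floor are all hypothetical, and a real implementation would depend on your agent runtime and UI stack.

```python
# Illustrative sketch only: a minimal "control surface" for an AI agent,
# covering the first two MIRRR control points (Monitor and Interrupt).
# AgentControlSurface, check_in, and the 0.80 floor are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentEvent:
    """One human-reviewable record of agent activity (supports Monitor)."""
    task_id: str
    action: str
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AgentControlSurface:
    """Wraps an agent's task loop with monitoring and interruption."""

    def __init__(self, confidence_floor: float = 0.80):
        self.events: list[AgentEvent] = []   # Monitor: the audit trail
        self.paused: bool = False            # Interrupt: pause flag
        self.confidence_floor = confidence_floor

    # --- Monitor ----------------------------------------------------
    def log(self, task_id: str, action: str, detail: str) -> None:
        self.events.append(AgentEvent(task_id, action, detail))

    def summary(self) -> dict[str, int]:
        """Aggregate view for a dashboard: event counts per action type."""
        counts: dict[str, int] = {}
        for event in self.events:
            counts[event.action] = counts.get(event.action, 0) + 1
        return counts

    # --- Interrupt --------------------------------------------------
    def pause(self) -> None:
        """Manual interrupt: a human presses the stop button in the UI."""
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def check_in(self, task_id: str, confidence: float) -> bool:
        """Called by the agent between tasks; returns True to proceed.

        Automated interrupt: low-confidence work pauses the agent so
        a human can review, instead of letting it plow ahead.
        """
        if confidence < self.confidence_floor:
            self.log(task_id, "auto_pause",
                     f"confidence {confidence:.2f} below floor")
            self.paused = True
        return not self.paused


# Hypothetical usage with the episode's insurance-claims scenario:
surface = AgentControlSurface()
for claim_id, confidence in [("CLM-001", 0.95), ("CLM-002", 0.55)]:
    if surface.check_in(claim_id, confidence):
        surface.log(claim_id, "processed", "claim auto-approved by agent")
    else:
        surface.log(claim_id, "handoff", "routed to a human adjuster")
print(surface.summary())
```

The design choice worth noting in this sketch: the agent calls check_in between tasks, so the manual pause button and the automated confidence threshold share a single stopping point, which keeps the human oversight story simple and visible in one place.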
Highlights / Skip to:
- 00:34 Introducing the MIRRR UX framework for designing trustworthy agentic AI applications
- 01:27 The importance of trust in AI systems and how it is linked to user adoption
- 03:06 Cultural shifts, AI hype, and growing AI skepticism
- 04:13 Human centered design practices for agentic AI
- 06:48 I discuss how understanding your users’ needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
- 11:32 Measuring success of agentic applications with UX outcomes
- 15:26 Introducing the first two of five MIRRR framework control points:
- 16:29 M is for Monitor; understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views
- 20:29 I is for Interrupt; when and why users may need to stop the agent—and what happens next
- 28:02 Conclusion and next steps