

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be?
While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be?
If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work you need to hear about, and people whose strategies I hope you can borrow.
I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.
Hashtag: #ExperiencingData.
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS
https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL:
https://designingforanalytics.com/bio/
Episodes

Tuesday May 18, 2021
I once saw a discussion on LinkedIn about a fraud detection model that had been built but never used. The model worked — and it was expensive — but it simply didn’t get used because the humans in the loop were not incentivized to use it.
It was on this very thread that I first met Salesforce Director of Product Management Pavan Tuvu, who chimed in about a similar experience he had gone through. When I heard about it, I asked him if he would share it with you, and he agreed. So, today on the Experiencing Data podcast, I’m excited to have Pavan on to talk about some lessons he learned while designing ad-spend software that utilized advanced analytics — and the role of the humans in the loop. We discussed:
- Pavan's role as Director of Product Management at Salesforce and how he works to make data easier to use for teams. (0:40)
- Pavan's work protecting large-dollar advertising accounts from bad actors by designing an ML system that predicts and caps ad spending. (6:10)
- 'Human override of the machine': How Pavan addressed concerns that the advertising security system would incorrectly police legitimate large-dollar ad spends. (12:22)
- How the advertising security model Pavan worked on learned from human feedback. (24:49)
- How leading with "why" when designing data products will lead to a better understanding of what customers need to solve. (29:05)

Tuesday May 04, 2021
Reed Sturtevant sees a lot of untapped potential in “tough tech.”
As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world.
It’s been about 15 years since I last caught up with Reed—who was CTO at a startup we worked at together—so I’m excited to welcome him to this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges.
In our chat, we covered:
- How Reed's venture capital firm, The Engine, is investing in technology-driven businesses focused on making a positive social impact. (0:28)
- The challenges that technical PhDs and postdocs face when transitioning from academia to entrepreneurship. (2:22)
- Focusing on a greater mission: The importance of self-examining whether an invention would be a good business. (5:16)
- How one technology business invested in by The Engine, The Routing Company, is leveraging AI and data to optimize public transportation and bridge service gaps. (9:05)
- Understanding and solving a problem: Using ‘design exercises’ to find successful market fits for existing technological solutions. (16:53)
- Solutions first, problems second: Why asking the right questions is key to mapping a technological solution back to a problem in the market. (19:31)
- Understanding and articulating a product’s value to potential buyers. (22:54)
- How the go-to-market strategies of software companies have changed over the last few decades. (26:16)
Resources and Links:
- The Engine: https://www.engine.xyz/
Quotes from Today’s Episode
There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business. - Reed (5:59)
For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems. - Brian (19:42)
You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams. - Brian (24:45)
I think there really is a maturity among even the early stage teams now, where they can have a shelf full of techniques that they can just pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp. - Reed, on startups (27:29)

Tuesday Apr 20, 2021
Debbie Reynolds is known as “The Data Diva” — and for good reason.
In addition to being founder, CEO, and chief data privacy officer of her own successful consulting firm, Debbie was named to the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She’s also written a few books, such as The GDPR Challenge: Privacy, Technology, and Compliance in an Age of Accelerating Change, as well as articles for other publications.
If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly.
In our chat, we covered:
- 'Humans are using your product': What it means to be a 'data steward' when building software. (0:27)
- 'Privacy by design': The importance for software creators to think about privacy throughout the entire product creation process. (4:32)
- The different laws (and lack thereof) regarding data privacy — and the importance of thinking about a product's potential harm during the design process. (6:58)
- The importance of having 'diversity at all levels' when building data products. (16:41)
- The role of transparency in data collection. (19:41)
- Fostering a positive and collaborative relationship between a product or service’s designers, product owners, and legal compliance experts. (24:55)
- The future of data monetization and how it relates to privacy. (29:18)
Quotes from Today’s Episode
When it comes to your product, humans are using it. Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28)
Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you can tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. ... It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20)
I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. … I think people in the EU — because privacy is a fundamental human right there — feel about it differently than we do here in the US, where our privacy rights don’t really kick in unless it’s a transaction. ... The parallel I draw is that people in Europe feel about privacy the way we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about the product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you an example of bias that someone had shown me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion-detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31)
Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. - Debbie (24:35)
In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also—some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design and digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge, because that’s what I’m here to do: stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01)
Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well, and it’s part of the reason I think design is important, because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do: keep that part in check, because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. - Brian (34:21)

Tuesday Apr 06, 2021
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).
Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
- Ben's career studying human-computer interaction and computer science. (0:30)
- 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
- 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
- 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
- A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
- Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters. (30:34)
- Ben's upcoming book on human-centered AI. (35:55)
Resources and Links:
- People-Centered Internet: https://peoplecentered.net/
- Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
- Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
- Partnership on AI: https://www.partnershiponai.org/
- AI incident database: https://www.partnershiponai.org/aiincidentdatabase/
- University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
- ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
- Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
- Ben on Twitter: https://twitter.com/benbendc
Quotes from Today’s Episode
The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data; it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job, and the tools of human-computer interaction are very effective in building, testing, and evaluating these better systems. - Ben (6:10)
Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and Shapley, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—like Amazon’s seven-step checkout process—and you know what’s happened in each step; you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well in really complicated situations: it walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

Tuesday Mar 23, 2021
Marty Cagan has had a storied career as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy.
This week, Marty joins me on Experiencing Data to talk about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. We also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Products. I also spoke with Marty about:
- The responsibilities of a data product team. (0:59)
- Whether an internally-facing software solution can be considered a 'product.' (5:02)
- Customer-facing vs. customer-enabling: Why Marty tries hard not to refer to internal employees as 'customers.' (7:50)
- The common personality characteristics and skill sets of effective product managers. (12:53)
- The importance of 'customer exposure time.' (17:56)
- The role of product managers in upholding ethical standards. (24:57)
- The value of a good designer on a product team. (28:07)
- Why Marty decided to write his latest book, Empowered, about leadership. (30:52)
Quotes from Today’s Episode
We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. It’s important that we keep that in mind. - Marty (9:13)
I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle; some people are a little bit heavier to one side or the other. There’s a lot of focus on empathy, and a focus on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47)
One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57)
Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45)
My interest has always been product teams because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generate great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody that really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things, when it’s done well, nobody even notices it. - Marty (28:54)
Links Referenced
- Silicon Valley Product Group: https://svpg.com/
- Empowered: https://svpg.com/empowered-ordinary-people-extraordinary-products/
- Inspired: https://svpg.com/inspired-how-to-create-products-customers-love/
- Twitter: https://twitter.com/cagan
- LinkedIn: https://www.linkedin.com/in/cagan/

Monday Mar 08, 2021
Journalism is one of the keystones of American democracy. For centuries, reporters and editors have held those in power accountable by seeking out the truth and reporting it.
However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights.
Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into:
- How Steve’s design and UX background influences his approach to building analytical tools and insights. (1:04)
- Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03)
- What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down. (10:52)
- How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40)
- 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what.’ (22:57)
Quotes from Today’s Episode
People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44)
The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32)
The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06)
I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about a model of career growth [...] that is really in sync with how I've been thinking about it. [...] You begin as an explorer of data — you're swimming in the data and finding insights from a data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50)
I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30)
There's a healthy skepticism in some parts of NPR around data and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use data-driven decisions. At NPR, we talk about data-informed decisions, because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34)
Resources and Links:
- Twitter: https://twitter.com/muldermedia

Tuesday Feb 23, 2021
With a 30+ year career in data warehousing, BI, and advanced analytics under his belt, Bill Schmarzo has become a leader in the field of big data and data science – not to mention a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland-Galway.
I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders I noticed leveraging design thinking on a regular basis in his work. In this long-overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics, and Digital Transformation.”
In total, we covered:
- Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04)
- How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09)
- How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software. (11:15)
- How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49)
- Bill’s take on how to properly prioritize use cases. (17:40)
- How to identify a data product’s problems ahead of time. (21:36)
- The trait that Bill sees in the best data scientists and design thinkers. (25:41)
- How Bill helps transition the practice of data science from focusing on analytic outputs to driving operational and business outcomes. (28:40)
- Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34)
- Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22)
Quotes from Today’s Episode
There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25)
Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15)
Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30)
I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55)
The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34)
From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03)

Tuesday Feb 09, 2021
On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.
Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, “how do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have, and it is a recipe for a very hard-to-use application.
The reality is that a successful application may require little to no human interaction at all. That may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.
So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.
In total, I covered:
- Why it’s important to consider that a monitoring application’s user experience may span multiple screens, interfaces, departments, or people. (2:32)
- The design considerations and benefits of building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09)
- Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03)
- How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03)
- The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49)
- How to keep the customer’s (user’s) business top of mind within the application UX. (23:19)
- One secret to making time-series charts in particular more powerful and valuable to users. (25:24)
- Some of the common features and use cases I see monitoring applications needing to support on out-of-the-box dashboards. (27:15)
Quotes from Today’s Episode
Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)
When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)
Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which the application exists. These applications learn and take into consideration new information as it comes in. – Brian (11:03)
The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)
With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer may not really care about the objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business-value perspective helps guide solid UX design choices. – Brian (23:18)

Tuesday Jan 26, 2021
Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and the need for control vary greatly across these personas. So what should you do?
To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges.
In total, I covered:
- Why UX research is imperative, and the types of research I think are important. (4:43)
- The importance for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28)
- The pros and cons of using the design tool called “personas” to help guide design decision making for multiple different user types. (19:44)
- The idea of ‘minimum valuable product’ and how you balance this with multiple user types. (24:26)
- The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution. (29:26)
- Declarative vs. exploratory analytics, and why the distinction matters. (32:48)
- My take on offering customization as a means to satisfy multiple customer types. (35:15)
- Expectations leaders should have, particularly if you do not have trained product designers or UX professionals on your team. (43:56)
Resources and Links
- My training seminar, Designing Human-Centered Data Products: http://designingforanalytics.com/theseminar
- Designing for Analytics Self-Assessment Guide: http://designingforanalytics.com/guide
- (Book) The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web by Steve Mulder https://www.amazon.com/User-Always-Right-Practical-Creating/dp/0321434536
- My C-E-D Design Framework for Integrating Advanced Analytics into Decision Support Software: https://designingforanalytics.com/resources/c-e-d-ux-framework-for-advanced-analytics/
- Homepage for all of my free resources on designing innovative machine learning and analytics solutions: https://designingforanalytics.com/resources

Tuesday Jan 12, 2021
There’s a lot at stake in the decisions that social workers have to make when they care for people — and Dr. Besa Bauta keeps this in mind when her teams are designing the data products that care providers use in the field.
As Chief Data Officer at MercyFirst, a New York-based social service nonprofit, Besa explains how her teams use design and design thinking to create useful decision support applications that lead to improved clinician-client interactions, health and well-being outcomes, and better decision making.
In addition to her work at MercyFirst, Besa currently serves as an adjunct assistant professor at New York University’s Silver School of Social Work where she teaches public health, social science theories and mental/behavioral health. On today’s episode, Besa and I talked about how MercyFirst’s focus on user experience improves its delivery of care and the challenges Besa and her team have encountered in driving adoption of new technology.
In total, we covered:
- How data digitization is improving the functionality of information technologies. (1:40)
- Why MercyFirst, a social service organization, partners with technology companies to create useful data products. (3:30)
- How MercyFirst decides which applications are worth developing. (7:06)
- Evaluating effectiveness: How MercyFirst’s focus on user experience improves the delivery of care. (10:45)
- “With anything new, there is always fear”: The challenges MercyFirst has with getting buy-in on new technology from both patients and staff. (15:07)
- Besa’s take on why it is important to engage the correct stakeholders early on in the design of an application — and why she engages the naysayers. (20:05)
- The challenges MercyFirst faces with getting its end-users to participate in providing feedback on an application’s design and UX. (24:10)
- Why Besa believes it is important to be thinking of human-centered design from the inception of a project. (27:50)
- Why it is imperative to involve key stakeholders in the design process of artificial intelligence and machine learning products. (31:20)
Quotes from Today’s Episode
We're not a technology company ... so, for us, it’s about finding the right partners that understand our use cases and who are also willing to work alongside us to actually develop something that our end-users — our physicians, for example — are able to use in their interaction with a patient. - Besa
No one wants to have a different type of application every other week, month, or year. We want to have a solution that grows with the organization. - Besa on the importance of creating a product that is sustainable over time
If we think about data as largely about providing decision support or decision intelligence, how do you measure that it's designed to do a good job? What's the KPI for choosing good KPIs? - Brian
Earlier on, engaging with the key stakeholders is really important. You're going to have important gatekeepers, who are going to say, ‘No, no, no,’ — the naysayers. I start with the naysayers first — the harder nuts to crack — and say, ‘How can this improve your process or your service?’ If I could win them over, the rest is cake. Well, almost. Not all the time. - Besa
Failure is how some orgs learn about just how much design matters. At some point, they realize that data science, engineering, and technical work doesn't count if no human will use that app, model, product, or dashboard when it rolls out. - Brian
Besa: It was a dud. [laugh].
Brian: Yeah, if it doesn’t get used, it doesn't matter.
What my team has done is create workgroups with our vendors and others to sort of shift developmental timelines [...] and change what needs to go into development and production first—and then ensure there's a tiered approach to meet [everyone’s] needs because we work as a collective. It’s not just one healthcare organization: there are many health and social service organizations in the same boat. - Besa
It's really important to think about the human in the middle of this entire process. Sometimes products get developed without really thinking, ‘Is this going to improve the way I do things? Is it going to improve anything?’ … The more personalized a product is, the better it is and the greater the adoption. - Besa