

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.
Hashtag: #ExperiencingData
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes

Tuesday Apr 06, 2021
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).
Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
- Ben's career studying human-computer interaction and computer science. (0:30)
- 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
- 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
- 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
- A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
- Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters. (30:34)
- Ben's upcoming book on human-centered AI. (35:55)
Resources and Links:
- People-Centered Internet: https://peoplecentered.net/
- Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
- Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
- Partnership on AI: https://www.partnershiponai.org/
- AI incident database: https://www.partnershiponai.org/aiincidentdatabase/
- University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
- ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
- Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
- Ben on Twitter: https://twitter.com/benbendc
Quotes from Today’s Episode
The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)
Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—this is like Amazon’s checkout process, a seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
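Ben’s distinction between post-hoc explanations and prevention-by-design is easier to see with a concrete example. Below is a minimal sketch of the post-hoc approach he references (Shapley-value explanations), using the open-source shap library with scikit-learn; the loan-decision framing, synthetic data, and feature meanings are illustrative assumptions, not anything from the episode.

```python
# Hypothetical post-hoc explanation sketch (not from the episode).
# Train a toy "loan approval" model, then ask, after the fact,
# "why was this particular applicant denied?"
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic applicant features: e.g., income, debt ratio, credit history.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.8 * X[:, 1] > 0).astype(int)  # toy approval rule

model = RandomForestClassifier(random_state=0).fit(X, y)

# Post hoc: the decision already happened; now we compute per-feature
# contributions (Shapley values) for one applicant's prediction.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])

# These raw numbers are the easy part. Ben's point: turning them into a UI
# that a denied applicant actually comprehends is the hard, often-missing work.
print(contributions)
```

The alternative Ben favors, prevention, would instead structure the workflow itself (TurboTax-style stepwise disclosure) so the user rarely needs to ask “what happened?” at all.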

Tuesday Mar 23, 2021
Marty Cagan has had a storied career working as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy.
This week, Marty joins me on Experiencing Data to talk more about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. Marty and I also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Products. I also spoke with Marty about:
- The responsibilities of a data product team. (0:59)
- Whether an internally-facing software solution can be considered a 'product.' (5:02)
- Customer-facing vs. customer-enabling: Why Marty tries hard not to confuse the terminology of internal employees as customers. (7:50)
- The common personality characteristics and skill sets of effective product managers. (12:53)
- The importance of 'customer exposure time.' (17:56)
- The role of product managers in upholding ethical standards. (24:57)
- The value of a good designer on a product team. (28:07)
- Why Marty decided to write his latest book, Empowered, about leadership. (30:52)
Quotes from Today’s Episode
We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. It’s important that we keep that in mind. - Marty (9:13)
I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle, some people are a little bit heavier to one side or the other. The focus there is there’s a lot of focus on empathy, and a focus on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47)
One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57)
Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45)
My interest has always been product teams because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generate great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody that really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things, when it’s done well, nobody even notices it. - Marty (28:54)
Links Referenced
- Silicon Valley Product Group: https://svpg.com/
- Empowered: https://svpg.com/empowered-ordinary-people-extraordinary-products/
- Inspired: https://svpg.com/inspired-how-to-create-products-customers-love/
- Twitter: https://twitter.com/cagan
- LinkedIn: https://www.linkedin.com/in/cagan/

Monday Mar 08, 2021
Journalism is one of the keystones of American democracy. For centuries, reporters and editors have kept those in power accountable by seeking out the truth and reporting it.
However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights.
Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into:
- How Steve’s design and UX background influences his approach to building analytical tools and insights (1:04)
- Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03)
- What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down (10:52)
- How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40)
- 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what’ (22:57)
Quotes from Today’s Episode
People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44)
The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32)
The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06)
I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about this model of career growth [...] that is really in sync with how I've been thinking about it. [...] you begin as an explorer of data — you're swimming in the data and finding insights from the data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50)
I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30)
There's a healthy skepticism in some parts of NPR around data and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use data-driven decisions. At NPR, we talk about data-informed decisions because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34)
Resources and Links:
- Twitter: https://twitter.com/muldermedia

Tuesday Feb 23, 2021
With a 30+ year career in data warehousing, BI, and advanced analytics under his belt, Bill Schmarzo has become a leader in the field of big data and data science – and, not to mention, a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland, Galway.
I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders I noticed who was leveraging design thinking on a regular basis in his work. In this long overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics and Digital Transformation.”
In total, we covered:
- Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04)
- How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09)
- How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software. (11:15)
- How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49)
- Bill’s take on how to properly prioritize use cases. (17:40)
- How to identify a data product’s problems ahead of time. (21:36)
- The trait that Bill sees in the best data scientists and design thinkers. (25:41)
- How Bill helps transition the practice of data science from being a focus on analytic outputs to operational and business outcomes. (28:40)
- Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34)
- Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22)
Quotes from Today’s Episode
There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25)
Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15)
Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30)
I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55)
The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34)
From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03)

Tuesday Feb 09, 2021
On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.
Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, “how do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have, and it is a recipe for a very hard-to-use application.
The reality is that a successful application may require little to no human interaction at all. That may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.
So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.
In total, I covered:
- Why it’s important to consider that a monitoring application’s user experience may span multiple screens, interfaces, departments, or people. (2:32)
- The design considerations and benefits when building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09)
- Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03)
- How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03)
- The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49)
- How to keep the customer’s (user’s) business top of mind within the application UX. (23:19)
- One secret to making time-series charts in particular more powerful and valuable to users. (25:24)
- Some of the common features and use cases I see monitoring applications needing to support in out-of-the-box dashboards. (27:15)
Quotes from Today’s Episode
Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)
When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)
Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which the application exists. These applications learn and take into consideration new information as it comes in. – Brian (11:03)
The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)
With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer may not really care about the monitored devices themselves. Ultimately, those devices are there to provide business value to your customer. Working backwards from the business-value perspective helps guide solid UX design choices. – Brian (23:18)
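Since this episode’s seasonality point (11:03) is concrete enough to sketch, here is a minimal, hypothetical example of a monitoring check that learns what “normal” looks like per hour of day and alerts only on deviations from that seasonal baseline. The synthetic sensor data, the daily period, and the 3-sigma threshold are all illustrative assumptions, not anything prescribed in the episode.

```python
# A minimal sketch, not from the episode: synthetic hourly telemetry with a
# daily cycle (low overnight, high midday), a per-hour "normal" baseline
# learned from history, and an alert rule keyed to that seasonal baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hours = pd.date_range("2021-01-01", periods=24 * 14, freq="h")  # two weeks
daily_cycle = 10 - 5 * np.cos(2 * np.pi * hours.hour / 24)      # periodicity
telemetry = pd.Series(daily_cycle + rng.normal(0, 1.0, len(hours)), index=hours)

# Learn what "normalcy" means for each hour of the day.
baseline = telemetry.groupby(telemetry.index.hour).agg(["mean", "std"])

def is_anomalous(ts: pd.Timestamp, value: float, k: float = 3.0) -> bool:
    """Alert only when a reading departs from the seasonal norm for its hour."""
    mu = baseline.loc[ts.hour, "mean"]
    sigma = baseline.loc[ts.hour, "std"]
    return abs(value - mu) > k * sigma

# A reading of 14 is routine mid-afternoon but a clear anomaly at 3 AM.
print(is_anomalous(pd.Timestamp("2021-01-15 03:00"), 14.0))  # True
print(is_anomalous(pd.Timestamp("2021-01-15 14:00"), 14.0))  # False
```

The design payoff is the one described at (20:00): a fixed threshold would either page users all night or miss daytime problems, while a baseline that understands periodicity lets the product stay silent until something is genuinely abnormal.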

Tuesday Jan 26, 2021
Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, for example, you may find that different users have different expectations. As a leader, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and a need for control vary greatly across these personas. So what should you do?
To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges.
In total, I covered:
- Why UX research is imperative and the types of research I think are important (4:43)
- The importance for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28)
- The pros and cons of using the design tool called “personas” to help guide design decision making for multiple different user types. (19:44)
- The idea of ‘Minimum valuable product’ and how you balance this with multiple user types (24:26)
- The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution (29:26)
- The difference between declarative and exploratory analytics and why it matters. (32:48)
- My take on offering customization as a means to satisfy multiple customer types. (35:15)
- Expectations leaders should have, particularly if you do not have trained product designers or UX professionals on your team. (43:56)
Resources and Links
- My training seminar, Designing Human-Centered Data Products: http://designingforanalytics.com/theseminar
- Designing for Analytics Self-Assessment Guide: http://designingforanalytics.com/guide
- (Book) The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web by Steve Mulder https://www.amazon.com/User-Always-Right-Practical-Creating/dp/0321434536
- My C-E-D Design Framework for Integrating Advanced Analytics into Decision Support Software: https://designingforanalytics.com/resources/c-e-d-ux-framework-for-advanced-analytics/
- Homepage for all of my free resources on designing innovative machine learning and analytics solutions: designingforanalytics.com/resources

Tuesday Jan 12, 2021
There’s a lot at stake in the decisions that social workers have to make when they care for people — and Dr. Besa Bauta keeps this in mind when her teams are designing the data products that care providers use in the field.
As Chief Data Officer at MercyFirst, a New York-based social service nonprofit, Besa explains how her teams use design and design thinking to create useful decision support applications that lead to improved clinician-client interactions, health and well-being outcomes, and better decision making.
In addition to her work at MercyFirst, Besa currently serves as an adjunct assistant professor at New York University’s Silver School of Social Work where she teaches public health, social science theories and mental/behavioral health. On today’s episode, Besa and I talked about how MercyFirst’s focus on user experience improves its delivery of care and the challenges Besa and her team have encountered in driving adoption of new technology.
In total, we covered:
- How data digitization is improving the functionality of information technologies. (1:40)
- Why MercyFirst, a social service organization, partners with technology companies to create useful data products. (3:30)
- How MercyFirst decides which applications are worth developing. (7:06)
- Evaluating effectiveness: How MercyFirst’s focus on user experience improves the delivery of care. (10:45)
- “With anything new, there is always fear”: The challenges MercyFirst has with getting buy-in on new technology from both patients and staff. (15:07)
- Besa’s take on why it is important to engage the correct stakeholders early on in the design of an application — and why she engages the naysayers. (20:05)
- The challenges MercyFirst faces with getting its end-users to participate in providing feedback on an application’s design and UX. (24:10)
- Why Besa believes it is important to be thinking of human-centered design from the inception of a project. (27:50)
- Why it is imperative to involve key stakeholders in the design process of artificial intelligence and machine learning products. (31:20)
Quotes from Today’s Episode
We're not a technology company ... so, for us, it’s about finding the right partners that understand our use cases and who are also willing to work alongside us to actually develop something that our end-users — our physicians, for example — are able to use in their interaction with a patient. - Besa
No one wants to have a different type of application every other week, month, or year. We want to have a solution that grows with the organization. - Besa on the importance of creating a product that is sustainable over time
If we think about data as largely about providing decision support or decision intelligence, how do you measure that it's designed to do a good job? What's the KPI for choosing good KPIs? - Brian
Earlier on, engaging with the key stakeholders is really important. You're going to have important gatekeepers, who are going to say, ‘No, no, no,’ — the naysayers. I start with the naysayers first — the harder nuts to crack — and say, ‘How can this improve your process or your service?’ If I could win them over, the rest is cake. Well, almost. Not all the time. - Besa
Failure is how some orgs learn about just how much design matters. At some point, they realize that data science, engineering, and technical work doesn't count if no human will use that app, model, product, or dashboard when it rolls out. -Brian
Besa: It was a dud. [laugh].
Brian: Yeah, if it doesn’t get used, it doesn't matter.
What my team has done is create workgroups with our vendors and others to sort of shift developmental timelines [...] and change what needs to go into development and production first—and then ensure there's a tiered approach to meet [everyone’s] needs because we work as a collective. It’s not just one healthcare organization: there are many health and social service organizations in the same boat. - Besa
It's really important to think about the human in the middle of this entire process. Sometimes products get developed without really thinking, ‘is this going to improve the way I do things? Is it going to improve anything?’ … The more personalized a product is, the better it is and the greater the adoption. - Besa

Tuesday Dec 29, 2020
It’s not just science fiction: as AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me – take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol, who also works as an adjunct professor at the university’s Human-Computer Interaction Institute, does research on ethical AI in her work with the US Department of Defense.
Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI. On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.
In total, we covered:
- “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17)
- Why Carol focuses on the ethical implications of the user experience research she is doing (4:20)
- Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24)
- How knowing a user’s information needs can drive immense value in AI products (9:14)
- Carol explains how teams can improve their AI product by considering ethics (11:45)
- “Thinking through the worst-case scenarios”: Why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11)
- The intersection between soldiers and artificial intelligence (19:34)
- Making AI flexible to human oddities and complexities (25:11)
- How exactly diverse teams help us design better AI solutions (29:00)
- Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01)
Quotes from Today’s Episode
“The craft of design, particularly for #analytics and #AI solutions, is figuring out who this customer is (your user) and exactly what amount of evidence they need, at what time they need it, and the format they need it in.” – Brian
“From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through – what is the worst thing that could happen with the system?” – Carol
“[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system
“I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol
“The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol
“Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them.” – Carol

Tuesday Dec 15, 2020
054 - Jared Spool on Designing Innovative ML/AI and Analytics User Experiences
Jared Spool is arguably the most well-known name in the field of design and user experience. For more than a decade, he has been a witty, powerful voice for why UX is critical to value creation within businesses. Formerly an engineer, Jared started working in UX in 1978, founded UIE (User Interface Engineering) in 1988, and has helped establish the field over the last 30 years. In addition, he advised the US Digital Service in the Executive Office of President Obama, and in 2016 he co-founded Center Centre, the user experience design school that’s creating a new generation of industry-ready UX designers.
Today, however, we turned to the topic of UX in the context of analytics, ML, and AI, and what teams, especially those without trained designers on staff, need to know about creating successful data products.
In our chat, we covered:
- Jared’s definition of “design”
- The definition of UX outcomes, and who should be responsible for defining and delivering them
- Understanding the “value chain” of user experience and the idea that “everyone” creating the solution is a designer and responsible for UX
- Brian’s take on the current state of data and AI-awareness within the field of UX —and whether Jared agrees with Brian’s perceptions
- Why teams should use visual aids to drive change and innovation, and two tools they can use to execute this
- The relationship between data literacy and design
- The type of math training Jared thinks is missing in education and why he thinks it should replace calculus in high school
- Examples of how UX design directly addresses privacy and ethical issues with intelligent devices
- Some example actions that leaders who are new to the UX profession can do immediately to start driving more value with data products
Quotes from Today’s Episode
“Center Centre is a school in Chattanooga for creating UX designers, and it's also the name of the professional development business that we've created around it that helps organizations create and exude excellence in terms of making UX design and product services…” - Jared
“The reality is this: on the other side of all that data, there are people. There's the direct people who are interacting with the data directly, interacting with the intelligence interacting with the various elements of what's going on, but at the same time, there's indirect folks. If someone is making decisions based on that intelligence, those decisions affect somebody else's life.” - Jared
“I think something that's missing frequently here is the inability to think beyond the immediate customer who requests a solution.” - Brian
“The fact that there are user experience teams anywhere is sort of a new and novel thing. A decade ago, that was very unlikely that you'd go into a business and there’d be a user experience team of any note that had any sort of influence across the business.” - Jared
“[At Netflix], we'd probably put the people who work in the basement on [server and network] performance at the opposite side of the chart from the people who work on the user interface or what we consider the user experience of Netflix […] Except at that one moment where someone's watching their favorite film, and that little spinny thing comes up, and the film pauses, and the experience is completely interrupted. And it's interrupted because the latency, and the throughput, and the resilience of the network are coming through to the user interface. And suddenly, that group of people in the basement are the most important UX designers at Netflix.” - Jared
“My feeling is, with the exception of perhaps the FANG companies, the idea of designers being required, or part of the equation when we're developing probabilistic solutions that use machine learning etc., well, it's not even part of the conversation with most user experience leaders that I talk to.” - Brian
Links
- Center Centre website

Tuesday Dec 01, 2020
053 - Creating (and Debugging) Successful Data Product Teams with Jesse Anderson
Tuesday Dec 01, 2020
In this episode of Experiencing Data, I speak with Jesse Anderson, who is Managing Director of the Big Data Institute and author of a new book titled Data Teams: A Unified Management Model for Successful Data-Focused Teams. Jesse opens up about why teams often run into trouble in their efforts to build data products, and what can be done to drive better outcomes.
In our chat, we covered:
- Jesse’s concept of debugging teams
- How Jesse defines a data product and how he distinguishes it from software products
- What users care about in useful data products
- Why your tech leads need to be involved with frontline customers, users, and business leaders
- Brian’s take on Jesse’s definition of a “data team” and the roles involved, especially around two particular disciplines
- The role that product owners tend to play in highly productive teams
- What conditions lead teams to building the wrong product
- How data teams are challenged to bring together parts of the company that never talk to each other – like business, analytics, and engineering teams
- The differences in how tech companies create software and data products, versus how non-digital natives often go about the process
Quotes from Today’s Episode
“I have a sneaking suspicion that leads and even individual contributors will want to read this book, but it’s more [to provide] suggestions for middle and upper management, and executive management.” – Jesse
“With data engineering, we can’t make v1 and v2 of data products. We actually have to make sure that our data products can be changed and evolve, otherwise we will be constantly shooting ourselves in the foot. And this is where the experience or the difference between a data engineer and software engineer comes into place.” – Jesse
“I think there’s high value in lots of interfacing between the tech leads and whoever the frontline customers are…” – Brian
“In my opinion-and this is what I talked about in some of the chapters-the business should be directly interacting with the data teams.” – Jesse
“[The reason] I advocate so strongly for having skilled product management in [a product design] group is because they need to be shielding teams that are doing implementation from the thrashing that may be going on upstairs.” – Brian
“One of the most difficult things of data teams is actually bringing together parts of the company that never talk to each other.” – Jesse
Links
- Big Data Institute
- Data Teams: A Unified Management Model for Successful Data-Focused Teams
- Follow Jesse on Twitter
- Connect with Jesse on LinkedIn