Experiencing Data with Brian T. O’Neill
064 - How AI Shapes the Products of Startups in MIT’s “Tough Tech” Venture Fund, The Engine feat. General Partner, Reed Sturtevant

May 4, 2021

Reed Sturtevant sees a lot of untapped potential in “tough tech.”

As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world.


It’s been about 15 years since I last caught up with Reed—who was CTO at a startup we worked at together—so I’m so excited to welcome him on this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges.


In our chat, we covered:

  • How Reed's venture capital firm, The Engine, is investing in technology-driven businesses focused on making positive social impacts. (0:28)
  • The challenges that technical PhDs and postdocs face when transitioning from academia to entrepreneurship. (2:22)
  • Focusing on a greater mission: The importance of self-examining whether an invention would be a good business. (5:16)
  • How one technology business invested in by The Engine, The Routing Company, is leveraging AI and data to optimize public transportation and bridge service gaps. (9:05)
  • Understanding and solving a problem: Using ‘design exercises’ to find successful market fits for existing technological solutions. (16:53)
  • Solutions first, problems second: Why asking the right questions is key to mapping a technological solution back to a problem in the market. (19:31)
  • Understanding and articulating a product’s value to potential buyers. (22:54)
  • How the go-to-market strategies of software companies have changed over the last few decades. (26:16)

Quotes from Today’s Episode

There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business. - Reed (5:59)


For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems. - Brian (19:42)


You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams. - Brian (24:45)


I think there really is a maturity among even the early stage teams now, where they can have a shelf full of techniques that they can just pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp. - Reed, on startups (27:29)

063 - Beyond Compliance: Designing Data Products With Data Privacy As a UX Benefit with The Data Diva (Debbie Reynolds)

April 20, 2021

Debbie Reynolds is known as “The Data Diva” — and for good reason. 


In addition to being founder, CEO and chief data privacy officer of her own successful consulting firm, Debbie has been named to the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She’s also written a few books, such as The GDPR Challenge: Privacy, Technology, and Compliance In An Age of Accelerating Change; as well as articles for other publications.


If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly.


In our chat, we covered:

  • 'Humans are using your product': What it means to be a 'data steward' when building software. (0:27)
  • 'Privacy by design': The importance for software creators to think about privacy throughout the entire product creation process. (4:32)
  • The different laws (and lack thereof) regarding data privacy — and the importance to think about a product's potential harm during the design process. (6:58)
  • The importance of having 'diversity at all levels' when building data products. (16:41)
  • The role of transparency in data collection. (19:41)
  • Fostering a positive and collaborative relationship between a product or service’s designers, product owners, and legal compliance experts. (24:55)
  • The future of data monetization and how it relates to privacy. (29:18)


Quotes from Today’s Episode

When it comes to your product, humans are using it. Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28)


Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you could tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. … It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20)


I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. … I think people in the EU — because privacy is a fundamental human right — feel about it differently than we do here in the US where our privacy rights don’t really kick in unless it’s a transaction. ...  The parallel I say is that people in Europe feel about privacy like we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about your product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you a biased example that someone had showed me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection or whatever. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — you really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31)


Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. - Debbie (24:35) 


In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also—some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design, digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge because that’s what I’m here to do is to stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01)


Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well and it’s part of the reason I think design is important because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do is to keep that part in check because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. - Brian (34:21)

062 - Why Ben Shneiderman is Writing a Book on the Importance of Designing Human-Centered AI

April 6, 2021

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). 

Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.


I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.


In our chat, we covered:

  • Ben's career studying human-computer interaction and computer science. (0:30)
  • 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
  • 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
  • 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
  • A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
  • Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters. (30:34)
  • Ben's upcoming book on human-centered AI. (35:55)


Quotes from Today’s Episode

The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)


The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)


Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)


There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)


Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even the DARPA’s XAI—Explainable AI—project, which has 11 projects within it—has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused and so they don’t have to request an explanation. We walk them along, let the user walk through the step—this is like Amazon checkout process, seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

061 - Applying a Product Mindset to Internal Data Products with Silicon Valley Product Group Partner Marty Cagan

March 23, 2021

Marty Cagan has had a storied career working as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy.


This week, Marty joins me on Experiencing Data to talk more about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. Marty and I also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Teams. I also spoke with Marty about: 


  • The responsibilities of a data product team. (0:59)
  • Whether an internally-facing software solution can be considered a 'product.' (5:02)
  • Customer-facing vs. customer-enabling: Why Marty tries hard not to refer to internal employees as customers. (7:50)
  • The common personality characteristics and skill sets of effective product managers. (12:53)
  • The importance of 'customer exposure time.' (17:56)
  • The role of product managers in upholding ethical standards. (24:57)
  • The value of a good designer on a product team. (28:07)
  • Why Marty decided to write his latest book, Empowered, about leadership. (30:52)

Quotes from Today’s Episode


We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. It’s important that we keep that in mind. - Marty (9:13)


I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle, some people are a little bit heavier to one side or the other. The focus there is there’s a lot of focus on empathy, and a focus on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47)


One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57)


Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45)


My interest has always been product teams because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generate great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody that really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things, when it’s done well, nobody even notices it. - Marty (28:54)


Links Referenced

LinkedIn: https://www.linkedin.com/in/cagan/

060 - How NPR Uses Data to Drive Editorial Decisions in the Newsroom with Sr. Dir. of Audience Insights Steve Mulder

March 9, 2021

Journalism is one of the keystones of American democracy. For centuries, reporters and editors have kept those in power accountable by seeking out the truth and reporting it.


However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights.


Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into:

  • How Steve’s design and UX background influences his approach to building analytical tools and insights. (1:04)
  • Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03)
  • What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down. (10:52)
  • How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40)
  • 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what.’ (22:57)

Quotes from Today’s Episode

People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44)


The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32)


The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06)


I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about this model of the career growth [...] that is really in sync with how I've been thinking about it. [...] you begin as an explorer of data — you're swimming in the data and finding insights from the data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50)


I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30)


There's a healthy skepticism in some parts of NPR around data and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use data-driven decisions. At NPR, we talk about data-informed decisions because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34)

059 - How Design Thinking Helps Organizations and Data Science Teams Create Economic Value with Machine Learning and Analytics feat. Bill Schmarzo

February 23, 2021

With a 30+ year career in data warehousing, BI and advanced analytics under his belt, Bill has become a leader in the field of big data and data science – and, not to mention, a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland-Galway.

I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders that I specifically noticed was leveraging design thinking on a regular basis in his work. In this long overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics, and Digital Transformation.”

In total, we covered:

  • Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04)
  • How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09)
  • How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software. (11:15)
  • How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49)
  • Bill’s take on how to properly prioritize use cases. (17:40)
  • How to identify a data product’s problems ahead of time. (21:36)
  • The trait that Bill sees in the best data scientists and design thinkers. (25:41)
  • How Bill helps transition the practice of data science from being a focus on analytic outputs to operational and business outcomes. (28:40)
  • Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34)
  • Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22)

Quotes from Today’s Episode

There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25)

Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15)

Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30)

I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55)

The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34)

From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03)


058 - IoT Spotlight: 8 UI / UX Strategies for Designing Indispensable Monitoring Applications

February 9, 2021

On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.

Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis – analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, “how do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have – and it is a recipe for a very hard-to-use application.

The reality is that a successful application may require little to no human interaction at all – that may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.

So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.

In total, I covered:

  • Why it’s important to consider that a monitoring application’s user experience may span multiple screens, interfaces, departments, or people. (2:32)
  • The design considerations and benefits of building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09)
  • Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03)
  • How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03)
  • The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49)
  • How to keep the customer (user’s) business top of mind within the application UX. (23:19)
  • One secret to making time-series charts in particular more powerful and valuable to users. (25:24)
  • Some of the common features and use cases I see monitoring applications needing to support on out-of-the-box dashboards. (27:15)


Quotes from Today’s Episode

Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)

When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)

Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which this application exists. These applications learn and take into consideration new information as it comes in. – Brian (11:03)

The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)

With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer may not really care about the objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business value perspective helps guide solid UX design choices. – Brian (23:18)



057 - How to Design Successful Enterprise Data Products When You Have Multiple User Types to Satisfy

January 26, 2021

Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, for example, you may find that different users have different expectations. As a leader, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and a need for control vary greatly across these personas. So what should you do?

To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges.

In total, I covered:

  • Why UX research is imperative and the types of research I think are important. (4:43)
  • Why it’s important for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28)
  • The pros and cons of using the design tool called “personas” to help guide design decision making for multiple different user types. (19:44)
  • The idea of a “minimum valuable product” and how to balance this with multiple user types. (24:26)
  • The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution. (29:26)
  • The difference between declarative and exploratory analytics and why it matters. (32:48)
  • My take on offering customization as a means to satisfy multiple customer types. (35:15)
  • Expectations leaders should have – particularly if you do not have trained product designers or UX professionals on your team. (43:56)

056 - How Design Helps Drive Adoption of Data Products Used for Social Work with Chief Data Officer Dr. Besa Bauta of MercyFirst

January 12, 2021

There’s a lot at stake in the decisions that social workers have to make when they care for people — and Dr. Besa Bauta keeps this in mind when her teams are designing the data products that care providers use in the field.

As Chief Data Officer at MercyFirst, a New York-based social service nonprofit, Besa explains how her teams use design and design thinking to create useful decision support applications that lead to improved clinician-client interactions, health and well-being outcomes, and better decision making.

In addition to her work at MercyFirst, Besa currently serves as an adjunct assistant professor at New York University’s Silver School of Social Work where she teaches public health, social science theories and mental/behavioral health. On today’s episode, Besa and I talked about how MercyFirst’s focus on user experience improves its delivery of care and the challenges Besa and her team have encountered in driving adoption of new technology.

In total, we covered:

  • How data digitization is improving the functionality of information technologies. (1:40)
  • Why MercyFirst, a social service organization, partners with technology companies to create useful data products. (3:30)
  • How MercyFirst decides which applications are worth developing. (7:06)
  • Evaluating effectiveness: How MercyFirst’s focus on user experience improves the delivery of care. (10:45)
  • “With anything new, there is always fear”: The challenges MercyFirst has with getting buy-in on new technology from both patients and staff. (15:07)
  • Besa’s take on why it is important to engage the correct stakeholders early on in the design of an application — and why she engages the naysayers. (20:05)
  • The challenges MercyFirst faces with getting its end-users to participate in providing feedback on an application’s design and UX. (24:10)
  • Why Besa believes it is important to be thinking of human-centered design from the inception of a project. (27:50)
  • Why it is imperative to involve key stakeholders in the design process of artificial intelligence and machine learning products. (31:20)

Quotes from Today’s Episode

We're not a technology company … so, for us, it’s about finding the right partners who understand our use cases and who are also willing to work alongside us to actually develop something that our end-users — our physicians, for example — are able to use in their interaction with a patient. - Besa

No one wants to have a different type of application every other week, month, or year. We want to have a solution that grows with the organization. - Besa on the importance of creating a product that is sustainable over time

If we think about data as largely about providing decision support or decision intelligence, how do you measure that it's designed to do a good job? What's the KPI for choosing good KPIs? - Brian

Earlier on, engaging with the key stakeholders is really important. You're going to have important gatekeepers, who are going to say, ‘No, no, no,’ — the naysayers. I start with the naysayers first — the harder nuts to crack — and say, ‘How can this improve your process or your service?’ If I could win them over, the rest is cake. Well, almost. Not all the time. - Besa

Failure is how some orgs learn about just how much design matters. At some point, they realize that data science, engineering, and technical work doesn't count if no human will use that app, model, product, or dashboard when it rolls out. - Brian

Besa: It was a dud. [laugh].

Brian: Yeah — if it doesn’t get used, it doesn't matter.

What my team has done is create workgroups with our vendors and others to sort of shift developmental timelines [...] and change what needs to go into development and production first—and then ensure there's a tiered approach to meet [everyone’s] needs because we work as a collective. It’s not just one healthcare organization: there are many health and social service organizations in the same boat. - Besa

It's really important to think about the human in the middle of this entire process. Sometimes products get developed without really thinking, ‘is this going to improve the way I do things? Is it going to improve anything?’ … The more personalized a product is, the better it is and the greater the adoption. - Besa


055 - What Can Carol Smith’s Ethical AI Work at the DoD Teach Us About Designing Human-Machine Experiences?

December 29, 2020

It’s not just science fiction: As AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me – take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol – who also works as an adjunct professor at the university’s Human-Computer Interaction Institute – does research on Ethical AI in her work with the US Department of Defense.

Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI. On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.

In total, we covered:

  • “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17)
  • Why Carol focuses on the ethical implications of the user experience research she is doing (4:20)
  • Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24)
  • How knowing a user’s information needs can drive immense value in AI products (9:14)
  • Carol explains how teams can improve their AI product by considering ethics (11:45)
  • “Thinking through the worst-case scenarios”: Why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11)
  • The intersection between soldiers and artificial intelligence (19:34)
  • Making AI flexible to human oddities and complexities (25:11)
  • How exactly diverse teams help us design better AI solutions (29:00)
  • Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01)

Quotes from Today’s Episode

“The craft of design – particularly for #analytics and #AI solutions – is figuring out who this customer is – your user – and exactly what amount of evidence they need, at what time they need it, and the format they need it in.” – Brian

“From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through – what is the worst thing that could happen with the system?” – Carol

“[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system

“I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol

“The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol

“Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them.” – Carol


