If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Building the tech is getting easier, but getting users to use it, and buyers to buy, remains difficult—but you’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every two weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData. PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
Tuesday Jul 27, 2021
As much as AI has the ability to change the world in very positive ways, it also can be incredibly destructive. Sean McGregor knows this well, as he is currently developing the Partnership on AI’s AI Incident Database, a searchable collection of news articles that covers questionable use, failures, and other incidents that affect people when AI solutions are poorly designed.
On this episode of Experiencing Data, Sean takes us through his notable work using machine learning for fire suppression, and explains why human-centered design is critical to ensuring these decision support solutions are actually used and trusted by users. We also cover the social implications of new decision-making tools leveraging AI, and:
- Sean's focus on ensuring his models and interfaces were interpretable by users when designing his fire-suppression system and why this was important. (0:51)
- How Sean built his fire suppression model so that different stakeholders can optimize the system for their unique purposes. (8:44)
- The social implications of new decision-making tools. (11:17)
- Tailoring to the needs of 'high-investment' and 'low-investment' people when designing visual analytics. (14:58)
- The AI Incident Database: Preventing future AI deployment harm by collecting and displaying examples of the unintended and negative consequences of AI. (18:20)
- How human-centered design could prevent many incidents of harmful AI deployment — and how it could also fall short. (22:13)
- 'It's worth the time and effort': How taking time to agree on key objectives for a data product with stakeholders can lead to greater adoption. (30:24)
Quotes from Today’s Episode
“As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise.” - Sean (12:28)
“I think that a lot of times, unfortunately, the assumption [in data science is], ‘Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.’ But my feeling is, ‘Well, do you want your work to matter or not? Because if no one’s using it, then it effectively doesn’t exist.’” - Brian (17:41)
“[The AI Incident Database is] a collection of largely news articles [about] bad things that have happened from AI [so we can] try and prevent history from repeating itself, and [understand] more of [the] unintended and bad consequences from AI....” - Sean (19:44)
“Human-centered design will prevent a great many of the incidents [of AI deployment harm] that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems — as they are currently constituted — are just tremendously bad at the open-world, open-set problem.” - Sean (22:21)
“It’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization — the ones that assure adoption — to kind of move them through the wireframes and examples and things that at the end of the engineering effort you believe are going to be possible. … Sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is. [This] is incredibly important and very often overlooked.” - Sean (31:36)
“The things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.” - Sean (33:09)
Links Referenced
- seanbmcgregor.com: https://seanbmcgregor.com
- AI Incident Database: https://incidentdatabase.ai
- Partnership on AI: https://www.partnershiponai.org
- Twitter: https://twitter.com/seanmcgregor
Tuesday Jul 13, 2021
Doug Laney is the preeminent expert in the field of infonomics — and it’s not just because he literally wrote the book on it.
As the Data & Analytics Strategy Innovation Fellow at consulting firm West Monroe, Doug helps businesses use infonomics to measure the economic value of their data and monetize it. He is also a visiting professor at the University of Illinois at Urbana-Champaign, where he teaches classes on analytics and infonomics.
On this episode of Experiencing Data, Doug and I talk about his book Infonomics, the many different ways that businesses can monetize data, the role of creativity and product management in producing innovative data products, and the ever-evolving role of the Chief Data Officer.
In our chat, we covered:
- Why Doug's book Infonomics argues that measuring data for its value potential is key to effectively managing and monetizing it. (2:21)
- A 'regenerative asset': Innovative methods for deploying and monetizing data — and the differences between direct, indirect, and inverted data monetization. (5:10)
- The responsibilities of a Chief Data Officer (CDO) — and how taking a product management approach to data can generate additional value. (13:28)
- Why Doug believes that a 'lack of vision and leadership' is partly behind organizational hesitancy of data monetization efforts. (17:10)
- ‘A pretty unique skill’: The importance of bringing in people with experience creating and marketing data products when monetizing data. (19:10)
- Insurance and torrenting: Creative ways companies have leveraged their data to generate additional value. (24:27)
- Ethical data monetization: Why Doug believes consumers must receive a benefit when organizations leverage their data for profit. (27:14)
- The data monetization workshops Doug runs for businesses looking to generate new value streams from their data. (29:42)
Quotes from Today’s Episode
“Many organizations [endure] a vicious cycle of not measuring [their data], and therefore not managing, and therefore not monetizing their data as well as they can. The idea behind my book Infonomics is, flip that. I’ll just start with measuring your data, understanding what you have, its quality characteristics, and its value potential. But vision is important as well, and so that’s where we start with monetization, and thinking more broadly about the ways to generate measurable economic benefits from data.” - Doug (4:13)
“A lot of people will compare data to oil and say that ‘Data is the new oil.’ But you can only use a drop of oil one way at a time. When you consume a drop of oil, it creates heat and energy and pollution, and when you use a drop of oil, it doesn’t generate more oil. Data is very different. It has unique economic qualities that economists would call a non-rivalrous, non-depleting, and regenerative asset.” - Doug (7:52)
“The Chief Data Officer (CDO) role has come on strong in organizations that really want to manage their data as an actual asset, ensure that it is accounted for as generating value and is being managed and controlled effectively. Most CDOs play both offense and defense in controlling and governing data on one side and in enabling it on the other side to drive more business value.”- Doug (14:17)
“The more successful teams that I read about and I see tend to be of a mixed skill set, they’re cross-functional; there’s a space for creativity and learning, there’s a concept of experimentation that’s happening there.” - Brian (19:10)
“Companies that become more data-driven have a market-to-book value that’s nearly two times higher than the market average. And companies that make the bulk of their revenue by selling data products or derivative data have a market-to-book value that’s nearly three times the market average. So, there's a really compelling reason to do this. It’s just that not a lot of executives are really comfortable with it. Data continues to be something that’s really amorphous and they don’t really have their heads around.” - Doug (21:38)
“There’s got to be a benefit to the consumer in the way that you use their data. And that benefit has to be clear, and defined, and ideally measured for them, that we’re able to reduce the price of this product that you use because we’re able to share your data, even if it’s anonymously; this reduces the price of your product.” - Doug (28:24)
Links Referenced
- Infonomics: https://www.amazon.com/Infonomics-Monetize-Information-Competitive-Advantage/dp/1138090387
- Email: dlaney@westmonroe.com
- LinkedIn: https://www.linkedin.com/in/douglaney/
- Westmonroe.com: https://westmonroe.com
- Coursera: https://www.coursera.org/instructor/dblaney
Tuesday Jun 29, 2021
Drew Smith knows how much value data analytics can add to a business when done right.
Having worked at the IKEA Group for 17 years, Drew helped the company become more data-driven, implementing successful strategies for data analytics and governance across multiple areas of the company.
Now, Drew serves as the Executive Vice President of the Analytics Leadership Consortium at the International Institute for Analytics, where he helps Fortune 1000 companies successfully leverage analytics and data science.
On this episode of Experiencing Data, Drew and I talk a lot about the factors contributing to low adoption rates of data products, how product and design-thinking approaches can help, and the value of proper one-on-one research with customers.
In our chat, we covered:
- 'It’s bad and getting worse': Drew's take on the factors behind low adoption of data products. (1:08)
- Decentralizing data analytics: How understanding a user's business problems by including them in the design process can lead to increased adoption of data products. (6:22)
- The importance for business leaders to have a conceptual understanding of the algorithms used in decision-making data products. (14:43)
- Why data analysts need to focus more on providing business value with the models they create. (18:14)
- Looking for restless curiosity in new hires for data teams — and the importance of nurturing new learning through training. (22:19)
- The value of spending one-on-one time with end-users to research their decision-making process before creating a data product. (27:00)
- User-informed data products: The benefits of design and product-thinking approaches when creating data analytics solutions. (33:04)
- How Drew's view of data analytics has changed over 15 years in the field. (45:34)
Quotes from Today’s Episode
“I think as we [decentralize analytics back to functional areas] — as firms keep all of the good parts of centralizing, and pitch out the stuff that doesn’t work — I think we’ll start to see some changes [when it comes to the adoption of data products.]” - Drew (10:07)
“I think data people need to accept that the outcome is not the model — the outcome is a business performance which is measurable, material, and worth the change.” - Drew (21:52)
“We talk about the concept of outcomes over outputs a lot on this podcast, and it’s really about understanding what is the downstream [effect] that emerges from the thing I made. Nobody really wants the thing you made; they just want the result of the thing you made. We have to explore what that is earlier in the process — and asking, “Why?” is very important.” - Brian (22:21)
“I have often said that my favorite people in the room, wherever I am, aren’t the smartest, it’s the most curious.” - Drew (23:55)
“For engineers and people that make things, it’s a lot more fun to make stuff that gets used. Just at the simplest level, the fact that someone cared and it didn’t just get shelved, and especially when you spent half your year on this thing, and your performance review is tied to it, it’s just more enjoyable to work on it when someone’s happy with the outcome.” - Brian (33:04)
“Product thinking starts with the assumption that ‘this is a good product,’ it’s usable and it’s making our business better, but it’s not finished. It’s a continuous loop. It’s feeding back in data through its exhaust. The user is using it — maybe even in ways I didn’t imagine — and those ways are better than I imagined, or worse than I imagined, or different than I imagined, but they inform the product.” - Drew (36:35)
Links Referenced
- Email: dsmith@iiaanalytics.com
- Company site: https://iiaanalytics.com
- LinkedIn: https://www.linkedin.com/in/andrewjsmithknownasdrew/
- Analytics Leadership Consortium: https://iianalytics.com/services/analytics-leadership-consortium
Tuesday Jun 15, 2021
On today’s episode of Experiencing Data, I’m so excited to have Omar Khawaja on to talk about how his team is integrating user-centered design into data science, BI and analytics at Roche Diagnostics.
In this episode, Omar and I have a great discussion about techniques for creating more user-centered data products that produce value — as well as how taking such an approach can lead to needed change management on how data is used and interpreted.
In our chat, we covered:
- What Omar is responsible for in his role as Head of BI & Analytics at Roche Diagnostics — and why a human-centered design approach to data analytics is important to him. (0:57)
- Understanding the end-user's needs: Techniques for creating more user-centric products — and the challenges of taking on such an approach. (6:10)
- Dissecting 'data culture': Why Omar believes greater implementation of data-driven decision-making begins with IT 'demonstrating' the approach's benefits. (9:31)
- Understanding user personas: How Roche is delivering better outcomes for medical patients by bringing analytical insights to life. (15:19)
- How human-centered design yields early 'actionable insights' that can lead to needed change management on how data is used and interpreted. (22:12)
- The journey of learning: Why 'it's everybody's job' to be focused on user experience — and how field research can help determine an end-user's needs. (27:26)
- Omar's love of cricket and the statistics collected about the sport! (31:23)
Resources and Links:
- Roche Diagnostics: https://www.roche.com/
- LinkedIn: https://www.linkedin.com/in/kmaomar/
- Twitter: https://twitter.com/kmaomar
Quotes from Today’s Episode
“I’ve been in the area of data and analytics since two decades ago, and out of my own learning — and I’ve learned it the hard way — at the end of the day, whether we are doing these projects or products, they have to be used by the people. The human factor naturally comes in.” - Omar (2:27)
“Especially when we’re talking about enterprise software, and some of these more complex solutions, we don’t really want people noticing the design to begin with. We just want it to feel valuable, and intuitive, and useful right out of the box, right from the start.” - Brian (4:08)
“When we are doing interviews with [end-users] as part of the whole user experience [process], you learn to understand what’s being said in between the lines, and then you learn how to ask the right questions. Those exploratory questions really help you understand: What is the real need?” - Omar (8:46)
“People are talking about data-driven [cultures], data-informed [cultures] — but at the end of the day, it has to start by demonstrating what change we want. ... Can we practice what we are trying to preach? Am I demonstrating that with my team when I’m making decisions in my day-to-day life? How do I use the data? IT is very good at asking our business colleagues and sometimes fellow IT colleagues to use various enterprise IT and business tools. Are we using, ourselves, those tools nicely?” - Omar (11:33)
“We focus a lot on what’s technically possible, but to me, there’s often a gap between the human need and what the data can actually support. And the bigger that gap is, the less chance things get used. The more we can try to close that gap when we get into the implementation stage, the more successful we probably will be with getting people to care and to actually use these solutions.” - Brian (22:20)
“When we are working in the area of data and analytics, I think it’s super important to know how this data and insights will be used — which requires an element of putting yourself in the user’s shoes. In the case of an enterprise setup, it’s important for me to understand the end-user in different roles and personas: What they are doing and how their job is. [This involves] sitting with them, visiting them, visiting the labs, visiting the factory floors, sitting with the finance team, and learning what they do in the system. These are the places where you have your learning.” - Omar (29:09)
Tuesday Jun 01, 2021
Earlier this year, the always informative Women in Analytics Conference took place online. I didn’t go — but a blog post about one of the conference’s presentations on the International Institute for Analytics’ website caught my attention.
The post highlighted key points from a talk called Design Thinking in Analytics that was given at the conference by Alison Magyari, an IT Manager at Eaton. In her presentation, Alison explains the four design steps she utilizes when starting a new project — as well as what “design thinking” means to her.
Human-centered design is one of the main themes of Experiencing Data, so given Alison’s talk about tapping into the emotional state of customers to create better designed data products, I knew she would be a great guest. In this episode, Alison and I have a great discussion about building a “design thinking mindset” — as well as the importance of keeping the design process flexible.
In our chat, we covered:
- How Alison employs design thinking in her role at Eaton to better understand the 'voice of the customer.' (0:28)
- Same frustrations, no excitement, little use: The factors that led to Alison's pursuit of a design thinking mindset when building data products at Eaton. (3:35)
- Alleviating the 'pain points' with design thinking: The importance of understanding how a data tool makes users feel. (10:24)
- How Eaton's business analysts (and end users) take ownership of the design process — and the challenges Alison faced in building a team of business analysts committed to design thinking. (15:51)
- 'It's not one size fits all': The benefits of keeping the design process flexible — and why curiosity and empathy are traits of successful designers. (21:06)
- 'Pay me now or pay me later': How Alison dealt with pushback to spending more time and resources on design — and how she dealt with skepticism from business users. (24:09)
Resources and Links:
- Blog post on International Institute for Analytics: https://www.iianalytics.com/blog/2021/2/25/utilizing-human-centered-design-to-inform-products-and-reach-communities
- Eaton: https://www.eaton.com/
- LinkedIn: https://www.linkedin.com/in/alisonmagyari/
- Email: alisonmagyari@eaton.com
Quotes from Today’s Episode
“In IT, it’s really interesting how sometimes we get caught up in just looking at the technology for what it is, and we forget that the technology is there to serve our business partners.” - Alison (2:00)
“You can give people exactly what they asked for, but if you’re designing solutions and data-driven products with someone, and they’re really for somebody else, you actually have to dig in to figure out the unarticulated needs. And they may not know how to invite you in to ask for that. They may not even know how they’re going to make a decision with data about something. You could say, ‘Well, you’re not prepared to talk to us yet.’ Or, you can be part of helping them work it out: how will you make a decision with this information? Let us be part of that problem-finding exercise with you, not just the solution part. You can fail if you just give people what they asked for, so it’s best to be part of the problem finding, not just the solving.” - Brian (8:42)
“During our design process, we noted down what the sentiment of our users was while they were using our data product. … Our users so appreciated when we would mirror back to them our observations about what they were feeling, and we were right about it. I mean, they were much more open to talking to us. They were much more open and they shared exactly what they were feeling.” - Alison (12:51)
“In our case, we did have the business analyst team really own the design process. Towards the end, we were the champions for it, but then our business users really took ownership, which I was proud of. They realized that if they didn’t embrace this, that they were going to have to deal with the same pain points for years to come. They didn’t want to deal with that, so they were really good partners in taking ownership at the end of the day.” - Alison (16:56)
“The way you learn how to do design is by doing it. … The second thing is that you don’t have to do all of it to get some value out of it. You could just do prototyping, you could do usability evaluation, you could do ‘what if’ analyses. You can do a little of one thing and probably get some value out of that fairly early, and it’s fairly safe. And then over time, you can learn other techniques. Eventually, you will have a library of techniques that you can apply. It’s a mindset; it’s really about changing the mind. It’s heads not hands, as I sometimes say: It’s not really about hands. It’s about how we think and approach problem-solving.” - Brian (20:16)
“I think everybody can do design, but I think the ones that have been incredibly successful at it have a natural curiosity. They don’t just stop with the first answer that they get. They want to know, ‘If I were doing this job, would I be satisfied with compiling a 50-column spreadsheet every single day in my life?’ Probably not. It’s curiosity and empathy — if you have those traits naturally, then design is just kind of a better fit.” - Alison (23:15)
Tuesday May 18, 2021
I once saw a discussion on LinkedIn about a fraud detection model that had been built but never used. The model worked — and it had been expensive to build — but it simply didn’t get used, because the humans in the loop were not incentivized to use it.
It was on this very thread that I first met Salesforce Director of Product Management Pavan Tuvu, who chimed in about a similar experience he went through. When I heard about it, I asked him if he would share it with you, and he agreed. So, today on the Experiencing Data podcast, I’m excited to have Pavan on to talk about some lessons he learned while designing ad-spend software that utilized advanced analytics — and the role of the humans in the loop. We discussed:
- Pavan's role as Director of Product Management at Salesforce and how he works to make data easier to use for teams. (0:40)
- Pavan's work protecting large-dollar advertising accounts from bad actors by designing an ML system that predicts and caps ad spending. (6:10)
- 'Human override of the machine': How Pavan addressed concerns that the advertising security system would incorrectly police legitimate large-dollar ad spends. (12:22)
- How the advertising security model Pavan worked on learned from human feedback. (24:49)
- How leading with "why" when designing data products will lead to a better understanding of what customers need to solve. (29:05)
Tuesday May 04, 2021
Reed Sturtevant sees a lot of untapped potential in “tough tech.”
As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world.
It’s been about 15 years since I last caught up with Reed—who was CTO at a startup where we worked together—so I’m excited to welcome him to this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges.
In our chat, we covered:
- How Reed's venture capital firm, The Engine, is investing in technology-driven businesses focused on making a positive social impact. (0:28)
- The challenges that technical PhDs and postdocs face when transitioning from academia to entrepreneurship. (2:22)
- Focusing on a greater mission: The importance of self-examining whether an invention would be a good business. (5:16)
- How one technology business invested in by The Engine, The Routing Company, is leveraging AI and data to optimize public transportation and bridge service gaps. (9:05)
- Understanding and solving a problem: Using ‘design exercises’ to find successful market fits for existing technological solutions. (16:53)
- Solutions first, problems second: Why asking the right questions is key to mapping a technological solution back to a problem in the market. (19:31)
- Understanding and articulating a product’s value to potential buyers. (22:54)
- How the go-to-market strategies of software companies have changed over the last few decades. (26:16)
Resources and Links:
- The Engine: https://www.engine.xyz/
Quotes from Today’s Episode
There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business. - Reed (5:59)
For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems. - Brian (19:42)
You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams. - Brian (24:45)
I think there really is a maturity among even the early stage teams now, where they can have a shelf full of techniques that they can just pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp. - Reed, on startups (27:29)
Tuesday Apr 20, 2021
Debbie Reynolds is known as “The Data Diva” — and for good reason.
In addition to being founder, CEO, and chief data privacy officer of her own successful consulting firm, Debbie was named one of the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She has also written several books, including The GDPR Challenge: Privacy, Technology, and Compliance in an Age of Accelerating Change, as well as articles for other publications.
If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience, instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly.
In our chat, we covered:
- 'Humans are using your product': What it means to be a 'data steward' when building software. (0:27)
- 'Privacy by design': The importance for software creators to think about privacy throughout the entire product creation process. (4:32)
- The different laws (and lack thereof) regarding data privacy — and the importance to think about a product's potential harm during the design process. (6:58)
- The importance of having 'diversity at all levels' when building data products. (16:41)
- The role of transparency in data collection. (19:41)
- Fostering a positive and collaborative relationship between a product or service’s designers, product owners, and legal compliance experts. (24:55)
- The future of data monetization and how it relates to privacy. (29:18)
Quotes from Today’s Episode
When it comes to your product, humans are using it. Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28)
Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you could tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. .... It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20)
I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. … I think people in the EU — because privacy is a fundamental human right — feel about it differently than we do here in the US where our privacy rights don’t really kick in unless it’s a transaction. ... The parallel I say is that people in Europe feel about privacy like we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about your product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you a biased example that someone had showed me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection or whatever. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — you really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31)
Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. - Debbie (24:35)
In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also—some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design, digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge because that’s what I’m here to do is to stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01)
Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well and it’s part of the reason I think design is important because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do is to keep that part in check because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. - Brian (34:21)
Tuesday Apr 06, 2021
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).
Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
- Ben's career studying human-computer interaction and computer science. (0:30)
- 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
- 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
- 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
- A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
- Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters. (30:34)
- Ben's upcoming book on human-centered AI. (35:55)
Resources and Links:
- People-Centered Internet: https://peoplecentered.net/
- Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
- Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
- Partnership on AI: https://www.partnershiponai.org/
- AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
- University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
- ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
- Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
- Ben on Twitter: https://twitter.com/benbendc
Quotes from Today’s Episode
The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)
Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—program, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, letting the user walk through the steps—like the Amazon checkout process, a seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
Tuesday Mar 23, 2021
Marty Cagan has had a storied career as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy.
This week, Marty joins me on Experiencing Data to talk about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. We also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Products. I also spoke with Marty about:
- The responsibilities of a data product team. (0:59)
- Whether an internally-facing software solution can be considered a 'product.' (5:02)
- Customer-facing vs. customer-enabling: Why Marty tries hard not to confuse the terminology of internal employees as customers. (7:50)
- The common personality characteristics and skill sets of effective product managers. (12:53)
- The importance of 'customer exposure time.' (17:56)
- The role of product managers in upholding ethical standards. (24:57)
- The value of a good designer on a product team. (28:07)
- Why Marty decided to write his latest book, Empowered, about leadership. (30:52)
Quotes from Today’s Episode
We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. It’s important that we keep that in mind. - Marty (9:13)
I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle, some people are a little bit heavier to one side or the other. There’s a lot of focus on empathy, and on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47)
One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57)
Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45)
My interest has always been product teams because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generate great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody that really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things, when it’s done well, nobody even notices it. - Marty (28:54)
Links Referenced
- Silicon Valley Product Group: https://svpg.com/
- Empowered: https://svpg.com/empowered-ordinary-people-extraordinary-products/
- Inspired: https://svpg.com/inspired-how-to-create-products-customers-love/
- Twitter: https://twitter.com/cagan
- LinkedIn: https://www.linkedin.com/in/cagan/