If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Building the tech is getting easier, but getting users to use it—and buyers to buy—remains difficult. You’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every 2 weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData.

PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
Tuesday Jul 27, 2021
As much as AI has the ability to change the world in very positive ways, it also can be incredibly destructive. Sean McGregor knows this well, as he is currently developing the Partnership on AI’s AI Incident Database, a searchable collection of news articles that covers questionable use, failures, and other incidents that affect people when AI solutions are poorly designed.
On this episode of Experiencing Data, Sean takes us through his notable work applying machine learning to fire suppression, and explains how human-centered design is critical to ensuring these decision support solutions are actually used and trusted by users. We also cover the social implications of new decision-making tools leveraging AI, and:
- Why Sean focused on making his models and interfaces interpretable to users when designing his fire-suppression system. (0:51)
- How Sean built his fire-suppression model so that different stakeholders could optimize the system for their unique purposes. (8:44)
- The social implications of new decision-making tools. (11:17)
- Tailoring visual analytics to the needs of 'high-investment' and 'low-investment' users. (14:58)
- The AI Incident Database: Preventing future AI deployment harm by collecting and displaying examples of the unintended and negative consequences of AI. (18:20)
- How human-centered design could prevent many incidents of harmful AI deployment — and how it could also fall short. (22:13)
- 'It's worth the time and effort': How taking time to agree on key objectives for a data product with stakeholders can lead to greater adoption. (30:24)
Quotes from Today’s Episode
“As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise.” - Sean (12:28)
“I think that a lot of times, unfortunately, the assumption [in data science is], ‘Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.’ But my feeling is, ‘Well, do you want your work to matter or not? Because if no one’s using it, then it effectively doesn’t exist.’” - Brian (17:41)
“[The AI Incident Database is] a collection of largely news articles [about] bad things that have happened from AI [so we can] try and prevent history from repeating itself, and [understand] more of [the] unintended and bad consequences from AI....” - Sean (19:44)
“Human-centered design will prevent a great many of the incidents [of AI deployment harm] that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems — as they are currently constituted — are just tremendously bad at the open-world, open-set problem.” - Sean (22:21)
“It’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization — the ones that assure adoption — to kind of move them through the wireframes and examples and things that at the end of the engineering effort you believe are going to be possible. … Sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is. [This] is incredibly important and very often overlooked.” - Sean (31:36)
“The things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.” - Sean (33:09)
Links Referenced
- seanbmcgregor.com: https://seanbmcgregor.com
- AI Incident Database: https://incidentdatabase.ai
- Partnership on AI: https://www.partnershiponai.org
- Twitter: https://twitter.com/seanmcgregor