If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Building the tech keeps getting easier, but getting users to use it, and buyers to buy it, remains difficult. You’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data, one of the top 2% of podcasts in the world, I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products, and how can data product management help you increase user adoption of ML/analytics, so that stakeholders can finally see the business value of your data? Every two weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData.
PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
Tuesday May 07, 2019
Dr. Andrey Sharapov is a senior data scientist and machine learning engineer at Lidl. He is currently working on various projects related to machine learning and data product development, including analytical planning tools that help with business issues such as stocking and purchasing. Previously, he spent two years at Xaxis, where he led data science initiatives, and he developed tools for customer analytics at TeamViewer. Andrey and I met at a Predictive Analytics World conference where we were both speaking, and I found out he is very interested in “explainable AI,” an aspect of user experience that I think is worth talking about, so that’s what today’s episode focuses on.
In our chat, we covered:
- Lidl’s planning tool for their operational teams and what it predicts
- The lessons learned from Andrey’s first attempt to build an explainable AI tool, and other human factors in designing data products
- What explainable AI is, and why it is critical in certain situations
- How explainable AI is useful for debugging other data models
- Why explainable AI isn’t always used
- Andrey’s thoughts on the importance of including your end users in the data product creation process from the very beginning
Also, here’s a little post-episode thought from a design perspective:
I know there are countervailing opinions stating that model explainability is “overhyped.” One popular argument points to professions such as medicine, where practitioners make decisions all the time that cannot be fully explained, yet people trust those decisions without expecting a full explanation. The reality is that while not every model or end UX needs explainability, I think there are human factors it can satisfy, such as building customer trust more rapidly, or helping convince customers/users how a new technology solution may be better than “the old way” of doing things. This is not a blanket recommendation to “always include explainability” in your service/app/UI; many factors come into play, and as with any design choice, I think you should let your customer/user feedback help you decide whether your service needs explainability to be valuable, useful, and engaging.
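If you want a concrete picture of what “an explanation for every single prediction” can look like, here is a minimal sketch using the open-source shap library with a scikit-learn model. This is purely illustrative: the feature names, data, and model are invented for this example and are not from Lidl’s actual planning tool.
```python
# Minimal sketch of per-prediction explanations (illustrative only).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical planning data: predict order quantity from price and demand signals.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "price": rng.uniform(1.0, 5.0, 500),
    "promo": rng.integers(0, 2, 500),
    "weekday": rng.integers(0, 7, 500),
})
y = 100 - 15 * X["price"] + 20 * X["promo"] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Explain a single prediction: each feature's contribution to this one
# forecast, the kind of per-prediction reasoning a planner can sanity-check
# against their own rules of thumb.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
for name, value in zip(X.columns, contributions[0]):
    print(f"{name}: {value:+.2f}")
```
In a real product, raw feature contributions like these would typically be translated into the planner’s own vocabulary rather than shown as-is.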
Resources and Links:
Explainable AI - XAI Group (LinkedIn)
Quotes from Today’s Episode
“I hear frequently there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work. I also hear how some data scientists may think, ‘explainable AI is not going to improve the model’ or ‘help me get published’ – so maybe that’s responsible for why [explainable AI] is not as widely in use.” – Brian O’Neill
“When you go and talk to an operational person, who has in mind a certain number of basic rules, say three, five, or six rules [they use] when doing planning, and then when you come to him with a machine learning model, something that is let’s say, ‘black box,’ and then you tell him ‘okay, just trust my prediction,’ then in most of the cases, it just simply doesn’t work. They don’t trust it. But the moment when you come with an explanation for every single prediction your model does, you are increasing your chances of a mutual conversation between this responsible person and the model…” – Andrey Sharapov
“We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary, every country, we try to talk to these people [our users] directly. [We] try to get the requirements directly from them and then show the results back to them…” – Andrey Sharapov
“The sole purpose of the tool we built was to make their work more efficient, in a sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves because we created a plot for elasticity curves. They could play with the price and see if they made the price too high, too low, and how much the order quantity would change.” – Andrey Sharapov
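If the elasticity-curve idea is hard to picture, here is a small continuation of the hypothetical sketch above (again, invented for illustration, not Lidl’s actual tool): sweep the price input and plot the model’s predicted order quantity.
```python
# Continues the hypothetical model, np, and pd from the sketch above.
import matplotlib.pyplot as plt

prices = np.linspace(1.0, 5.0, 50)
# Hold the other features fixed at representative values while varying price.
scenario = pd.DataFrame({"price": prices, "promo": 1, "weekday": 2})
predicted_quantity = model.predict(scenario)

plt.plot(prices, predicted_quantity)
plt.xlabel("price")
plt.ylabel("predicted order quantity")
plt.title("Predicted order quantity vs. price (hypothetical)")
plt.show()
```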