If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Making the tech gets easier, but getting users to use it, and buyers to buy, remains difficult—but you’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every 2 weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData. PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
Monday Mar 08, 2021
Journalism is one of the keystones of American democracy. For centuries, reporters and editors have kept those in power accountable by seeking out the truth and reporting it.
However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights.
Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into:
- How Steve’s design and UX background influences his approach to building analytical tools and insights (1:04)
- Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03)
- What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down (10:52)
- How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40)
- 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what’ (22:57)
Quotes from Today’s Episode
People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44)
The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32)
The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06)
I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about this model of the career growth [...]that is really in sync with how I've been thinking about it. [...]you begin as an explorer of data — you're swimming in the data and finding insights from the data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50)
I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30)
There's a healthy skepticism in some parts of NPR around data and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use data-driven decisions. At NPR, we talk about data-??? decisions because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34)
Resources and Links:
- Twitter: https://twitter.com/muldermedia
Tuesday Feb 23, 2021
With a 30+ year career in data warehousing, BI, and advanced analytics under his belt, Bill has become a leader in the field of big data and data science – not to mention a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland-Galway.
I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders I noticed leveraging design thinking on a regular basis in his work. In this long overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics and Digital Transformation.”
In total, we covered:
- Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04)
- How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09)
- How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software. (11:15)
- How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49)
- Bill’s take on how to properly prioritize use cases (17:40)
- How to identify a data product’s problems ahead of time. (21:36)
- The trait that Bill sees in the best data scientists and design thinkers (25:41)
- How Bill helps transition the practice of data science from being a focus on analytic outputs to operational and business outcomes. (28:40)
- Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34)
- Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22)
Quotes from Today’s Episode
There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25)
Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15)
Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30)
I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55)
The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34)
From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03)
Tuesday Feb 09, 2021
On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.
Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Often, product or technical teams see the game as, “how do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have, and it is a recipe for a very hard-to-use application.
The reality is that a successful application may require little to no human interaction at all. That may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.
So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.
In total, I covered:
- Why it’s important to consider that a monitoring application’s user experience may happen across multiple screens, interfaces, departments, or people. (2:32)
- The design considerations and benefits of building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09)
- Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03)
- How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03)
- The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49)
- How to keep the customer (user’s) business top of mind within the application UX. (23:19)
- One secret to making time-series charts in particular more powerful and valuable to users. (25:24)
- Some of the common features and use cases I see monitoring applications needing to support in out-of-the-box dashboards. (27:15)
Quotes from Today’s Episode
Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)
When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)
Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which the application exists. These applications learn and take into consideration new information as it comes in. – Brian (11:03)
The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)
With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer probably doesn’t really care about the objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business value perspective helps guide solid UX design choices. – Brian (23:18)
Tuesday Jan 26, 2021
Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, for example, you may find that different users have different expectations. As a leader, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and a need for control vary greatly across these personas. So what should you do?
To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges.
In total, I covered:
- Why UX research is imperative and the types of research I think are important (4:43)
- The importance for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28)
- The pros and cons of using the design tool called “personas” to help guide design decision making for multiple different user types. (19:44)
- The idea of a “minimum valuable product” and how you balance this with multiple user types (24:26)
- The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution (29:26)
- Declaratory vs. exploratory analytics, and why the distinction matters. (32:48)
- My take on offering customization as a means to satisfy multiple customer types. (35:15)
- Expectations leaders should have, particularly if you do not have trained product designers or UX professionals on your team. (43:56)
Resources and Links
- My training seminar, Designing Human-Centered Data Products: http://designingforanalytics.com/theseminar
- Designing for Analytics Self-Assessment Guide: http://designingforanalytics.com/guide
- (Book) The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web by Steve Mulder https://www.amazon.com/User-Always-Right-Practical-Creating/dp/0321434536
- My C-E-D Design Framework for Integrating Advanced Analytics into Decision Support Software: https://designingforanalytics.com/resources/c-e-d-ux-framework-for-advanced-analytics/
- Homepage for all of my free resources on designing innovative machine learning and analytics solutions: designingforanalytics.com/resources
Tuesday Jan 12, 2021
There’s a lot at stake in the decisions that social workers have to make when they care for people — and Dr. Besa Bauta keeps this in mind when her teams are designing the data products that care providers use in the field.
As Chief Data Officer at MercyFirst, a New York-based social service nonprofit, Besa explains how her teams use design and design thinking to create useful decision support applications that lead to improved clinician-client interactions, health and well-being outcomes, and better decision making.
In addition to her work at MercyFirst, Besa currently serves as an adjunct assistant professor at New York University’s Silver School of Social Work where she teaches public health, social science theories and mental/behavioral health. On today’s episode, Besa and I talked about how MercyFirst’s focus on user experience improves its delivery of care and the challenges Besa and her team have encountered in driving adoption of new technology.
In total, we covered:
- How data digitization is improving the functionality of information technologies. (1:40)
- Why MercyFirst, a social service organization, partners with technology companies to create useful data products. (3:30)
- How MercyFirst decides which applications are worth developing. (7:06)
- Evaluating effectiveness: How MercyFirst’s focus on user experience improves the delivery of care. (10:45)
- “With anything new, there is always fear”: The challenges MercyFirst has with getting buy-in on new technology from both patients and staff. (15:07)
- Besa’s take on why it is important to engage the correct stakeholders early on in the design of an application — and why she engages the naysayers. (20:05)
- The challenges MercyFirst faces with getting its end-users to participate in providing feedback on an application’s design and UX. (24:10)
- Why Besa believes it is important to be thinking of human-centered design from the inception of a project. (27:50)
- Why it is imperative to involve key stakeholders in the design process of artificial intelligence and machine learning products. (31:20)
Quotes from Today’s Episode
We're not a technology company … so, for us, it’s about finding the right partners that understand our use cases and who are also willing to work alongside us to actually develop something that our end-users — our physicians, for example — are able to use in their interaction with a patient. - Besa
No one wants to have a different type of application every other week, month, or year. We want to have a solution that grows with the organization. - Besa on the importance of creating a product that is sustainable over time
If we think about data as largely about providing decision support or decision intelligence, how do you measure that it's designed to do a good job? What's the KPI for choosing good KPIs? - Brian
Earlier on, engaging with the key stakeholders is really important. You're going to have important gatekeepers, who are going to say, ‘No, no, no,’ — the naysayers. I start with the naysayers first — the harder nuts to crack — and say, ‘How can this improve your process or your service?’ If I could win them over, the rest is cake. Well, almost. Not all the time. - Besa
Failure is how some orgs learn about just how much design matters. At some point, they realize that data science, engineering, and technical work doesn't count if no human will use that app, model, product, or dashboard when it rolls out. -Brian
Besa: It was a dud. [laugh].
Brian: Yeah, if it doesn’t get used, it doesn’t matter.
What my team has done is create workgroups with our vendors and others to sort of shift developmental timelines [...] and change what needs to go into development and production first—and then ensure there's a tiered approach to meet [everyone’s] needs because we work as a collective. It’s not just one healthcare organization: there are many health and social service organizations on the same boat. - Besa
It's really important to think about the human in the middle of this entire process. Sometimes products get developed without really thinking, ‘is this going to improve the way I do things? Is it going to improve anything?’ … The more personalized a product is, the better it is and the greater the adoption. - Besa
Tuesday Dec 29, 2020
It’s not just science fiction: as AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me – take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol, who also works as an adjunct professor at the university’s Human-Computer Interaction Institute, does research on ethical AI in her work with the US Department of Defense.
Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI. On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.
In total, we covered:
- “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17)
- Why Carol focuses on the ethical implications of the user experience research she is doing (4:20)
- Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24)
- How knowing a user’s information needs can drive immense value in AI products (9:14)
- Carol explains how teams can improve their AI product by considering ethics (11:45)
- “Thinking through the worst-case scenarios”: Why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11)
- The intersection between soldiers and artificial intelligence (19:34)
- Making AI flexible to human oddities and complexities (25:11)
- How exactly diverse teams help us design better AI solutions (29:00)
- Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01)
Quotes from Today’s Episode
“The craft of design, particularly for #analytics and #AI solutions, is figuring out who this customer is (your user) and exactly what amount of evidence they need, at what time they need it, and the format they need it in.” – Brian
“From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through – what is the worst thing that could happen with the system?” – Carol
“[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system
“I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol
“The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol
“Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them.” – Carol
Tuesday Dec 15, 2020
054 - Jared Spool on Designing Innovative ML/AI and Analytics User Experiences
Jared Spool is arguably the most well-known name in the field of design and user experience. For more than a decade, he has been a witty, powerful voice for why UX is critical to value creation within businesses. Formerly an engineer, Jared started working in UX in 1978, founded UIE (User Interface Engineering) in 1988, and has helped establish the field over the last 30 years. In addition, he advised the US Digital Service / Executive Office of President Obama, and in 2016, Jared co-founded the Center Centre, the user experience design school that’s creating a new generation of industry-ready UX designers.
Today, however, we turned to the topic of UX in the context of analytics, ML, and AI, and what teams, especially those without trained designers on staff, need to know about creating successful data products.
In our chat, we covered:
- Jared’s definition of “design”
- The definition of UX outcomes, and who should be responsible for defining and delivering them
- Understanding the “value chain” of user experience and the idea that “everyone” creating the solution is a designer and responsible for UX
- Brian’s take on the current state of data and AI-awareness within the field of UX —and whether Jared agrees with Brian’s perceptions
- Why teams should use visual aids to drive change and innovation, and two tools they can use to execute this
- The relationship between data literacy and design
- The type of math training Jared thinks is missing in education and why he thinks it should replace calculus in high school
- Examples of how UX design directly addresses privacy and ethical issues with intelligent devices
- Some example actions that leaders who are new to the UX profession can do immediately to start driving more value with data products
Quotes from Today’s Episode
“Center Centre is a school in Chattanooga for creating UX designers, and it's also the name of the professional development business that we've created around it that helps organizations create and exude excellence in terms of making UX design and product services…” - Jared
“The reality is this: on the other side of all that data, there are people. There's the direct people who are interacting with the data directly, interacting with the intelligence interacting with the various elements of what's going on, but at the same time, there's indirect folks. If someone is making decisions based on that intelligence, those decisions affect somebody else's life.” - Jared
“I think what’s frequently missing here is the ability to think beyond the immediate customer who requests a solution.” - Brian
“The fact that there are user experience teams anywhere is sort of a new and novel thing. A decade ago, that was very unlikely that you'd go into a business and there’d be a user experience team of any note that had any sort of influence across the business.” - Jared
[At Netflix], we'd probably put the people who work in the basement on [server and network] performance at the opposite side of the chart from the people who work on the user interface or what we consider the user experience of Netflix […] Except at that one moment where someone's watching their favorite film, and that little spinny thing comes up, and the film pauses, and the experience is completely interrupted. And it's interrupted because the latency, and the throughput, and the resilience of the network are coming through to the user interface. And suddenly, that group of people in the basement are the most important UX designers at Netflix. - Jared
My feeling is, with the exception of perhaps the FANG companies, the idea of designers being required, or part of the equation when we're developing probabilistic solutions that use machine learning etc., well, it's not even part of the conversation with most user experience leaders that I talk to. - Brian
Links
- Center Centre website
Tuesday Dec 01, 2020
053 - Creating (and Debugging) Successful Data Product Teams with Jesse Anderson
In this episode of Experiencing Data, I speak with Jesse Anderson, who is Managing Director of the Big Data Institute and author of a new book titled Data Teams: A Unified Management Model for Successful Data-Focused Teams. Jesse opens up about why teams often run into trouble in their efforts to build data products, and what can be done to drive better outcomes.
In our chat, we covered:
- Jesse’s concept of debugging teams
- How Jesse defines a data product, how he distinguishes them from software products
- What users care about in useful data products
- Why your tech leads need to be involved with frontline customers, users, and business leaders
- Brian’s take on Jesse’s definition of a “data team” and the roles involved, especially around two particular disciplines
- The role that product owners tend to play in highly productive teams
- What conditions lead teams to building the wrong product
- How data teams are challenged to bring together parts of the company that never talk to each other – like business, analytics, and engineering teams
- The differences in how tech companies create software and data products, versus how non-digital natives often go about the process
Quotes from Today’s Episode
“I have a sneaking suspicion that leads and even individual contributors will want to read this book, but it’s more [to provide] suggestions for middle, upper, and executive management.” – Jesse
“With data engineering, we can’t make v1 and v2 of data products. We actually have to make sure that our data products can be changed and evolve, otherwise we will be constantly shooting ourselves in the foot. And this is where the experience or the difference between a data engineer and software engineer comes into place.” – Jesse
“I think there’s high value in lots of interfacing between the tech leads and whoever the frontline customers are…” – Brian
“In my opinion, and this is what I talked about in some of the chapters, the business should be directly interacting with the data teams.” – Jesse
“[The reason] I advocate so strongly for having skilled product management in [a product design] group is because they need to be shielding teams that are doing implementation from the thrashing that may be going on upstairs.” – Brian
“One of the most difficult things of data teams is actually bringing together parts of the company that never talk to each other.” – Jesse
Links
- Big Data Institute
- Data Teams: A Unified Management Model for Successful Data-Focused Teams
- Follow Jesse on Twitter
- Connect with Jesse on LinkedIn
Tuesday Nov 17, 2020
In this episode of Experiencing Data, I sat down with James Taylor, the CEO of Decision Management Solutions. Our discussion centers on how enterprises build ML-driven software to make decisions faster, more precisely, and more consistently, and why this pursuit may fail.
We covered:
- The role that decision management plays in business, especially when making decisions quickly, reliably, consistently, transparently, and at scale
- The concept of the "last mile," and why many companies fail to get their data products across it
- James's take on the operationalization of ML models, and why Brian dislikes this term
- Why James thinks it is important to distinguish between technology problems and organizational change problems when leveraging ML.
- Why machine learning is not a substitute for hard work.
- What happens when human-centered design is combined with decision management.
- James's book, Digital Decisioning: How to Use Decision Management to Get Business Value from AI, which lays out a methodology for automating decision making.
Quotes from Today's Episode
"If you're a large company, and you have a high-volume transaction where it's not immediately obvious what you should do in response to that transaction, then you have to make a decision: quickly, at scale, reliably, consistently, transparently. We specialize in helping people build solutions to that problem." - James
"Machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, or doing things. It's a way of adding value. It doesn't substitute for things." - James
"One thing that I kind of have a distaste for in the data science space when we're talking about models and deploying models is thinking about 'operationalization' as something that's distinct from the technology-building process." - Brian
"People tend to define an analytical solution, frankly, that will never work because […] they're solving the wrong problem. Or they build a solution that in theory would work, but they can't get it across the last mile. Our experience is that you can't get it across the last mile if you don't begin by thinking about the last mile." - James
"When I look at a problem, I'm looking at how I use analytics to make that better. I come in as an analytics person." - James
"We often joke that you have to work backwards. Instead of saying, 'here's my data, here's the analytics I can build from my data […], you have to say, 'what's a better decision look like? How do I make the decision today? What analytics will help me improve that decision?' How do I find the data I need to build those analytics?' Because those are the ones that will actually change my business." - James
"We talk about [the last mile] a lot ... which is ensuring that when the human beings come in and touch, use, and interface with the systems and interfaces that you've created, that this is the make-or-break point where technology goes to succeed or die." - Brian
Links
- Decision Management Solutions
- Digital Decisioning: How to Use Decision Management to Get Business Value from AI
- James' Personal Blog
- Connect with James on Twitter
- Connect with James on LinkedIn
Tuesday Nov 03, 2020
Chenda Bunkasem is head of machine learning at Undock, where she focuses on using quantitative methods to influence ethical design. In this episode of Experiencing Data, Chenda and I explore her methods for designing ethical AI solutions, as well as how she works with UX and product teams on ML solutions.
We covered:
- How data teams can actually design ethical ML models, after understanding if ML is the right approach to begin with
- How Chenda aligns her data science work with the desired UX, so that technical choices are always in support of the product and user instead of “what’s cool”
- An overview of Chenda’s role at Undock, where she works very closely with product and marketing teams, advising them on uses for machine learning
- How Chenda’s approaches to using AI may change when there are humans in the loop
- What NASA’s Technology Readiness Level (TRL) evaluation is, and how Chenda uses it in her machine learning work
- What ethical pillars are and how they relate to building AI solutions
- What the Delphi method is and how it relates to creating and user-testing ethical machine learning solutions
Quotes From Today’s Episode
“There's places where machine learning should be used and places where it doesn't necessarily have to be.” - Chenda
“The more interpretability, the better off you always are.” - Chenda
“The most advanced AI doesn't always have to be implemented. People usually skip past this, and they're looking for the best transformer or the most complex neural network. It's not the case. It’s about whether or not the product sticks and the product works alongside the user to aid whatever their endeavor is, or whatever the purpose of that product is. It can be very minimalist in that sense.” - Chenda
“First we bring domain experts together, and then we analyze the use case at hand, and whatever goes in the middle — the meat, between that — is usually decided through many iterations after meetings, and then after going out and doing some sort of user testing, or user research, coming back, etc.” - Chenda, explaining the Delphi method.
“First you're taking answers on someone's ethical pillars or a company's ethical pillars based off of their intuition, and then you're finding how that solution can work in a more engineering or systems-design fashion.” - Chenda
“I'm kind of very curious about this area of prototyping, and figuring out how fast can we learn something about what the problem space is, and what is needed, prior to doing too much implementation work that we or the business don't want to rewind and throw out.” - Brian
“There are a lot of data projects that get created that end up not getting used at all.” - Brian
Links
Connect with Chenda on LinkedIn