

134.4K Downloads · 167 Episodes
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions. Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work you need to hear about, and people whose strategies I hope you can borrow. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better. Hashtag: #ExperiencingData.
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes

Tuesday Mar 08, 2022
Today, I’m flying solo to introduce you to CED: my three-part UX framework for designing your ML / predictive / prescriptive analytics UI around trust, engagement, and indispensability. Why this, why now? Several people have told me that this framework has been incredibly helpful to them in designing useful, usable analytics tools and decision support applications.
I have written about the CED framework before at the following link:
https://designingforanalytics.com/ced
There you will find an example of the framework put into a real-world context. In this episode, I wanted to add some extra color to what is discussed in the article. If you’re an individual contributor, the best part is that you don’t have to be a professional designer to begin applying this to your own data products. And for leaders of teams, you can use the ideas in CED as a “checklist” when auditing your team’s solutions in the design phase—before it’s too late or expensive to make meaningful changes.
CED is definitely easier to implement if you understand the basics of human-centered design, including research, problem finding and definition, journey mapping, consulting, and facilitation. If you need a step-by-step method to develop these foundational skills, my training program, Designing Human-Centered Data Products, might help. It comes in two formats: a Self-Guided Video Course and a bi-annual Instructor-Led Seminar.
Quotes from Today’s Episode
- “‘How do we visualize the data?’ is the wrong starting question for designing a useful decision support application. That makes all kinds of assumptions that we have the right information, that we know what the users' goals and downstream decisions are, and we know how our solution will make a positive change in the customer or users’ life.”- Brian (@rhythmspice) (02:07)
- “The CED is a UX framework for designing analytics tools that drive decision-making. Three letters, three parts: Conclusions (C), Evidence (E), and Data (D). The tough pill for some technical leaders to swallow is that the application, tool, or product they are making may need to present what I call a ‘conclusion’—or if you prefer, an ‘opinion.’ Why? Because many users do not want an ‘exploratory’ tool—even when they say they do. They often need an insight to start with, before exploration time becomes valuable.” - Brian (@rhythmspice) (04:00)
- “CED requires you to do customer and user research to understand what the meaningful changes, insights, and things that people want or need actually are. Well-designed ‘Conclusions’—when experienced in an analytics tool using the CED framework—often manifest themselves as insights such as unexpected changes, confirmation of expected changes, meaningful change versus meaningful benchmarks, scoring how KPIs track to predefined and meaningful ranges, actionable recommendations, and next best actions. Sometimes these Conclusions are best experienced as charts and visualizations, but not always—and this is why visualizing the data is rarely the right place to begin designing the UX.” - Brian (@rhythmspice) (08:54)
- “If I see another analytics tool that promises ‘actionable insights’ but is primarily experienced as a collection of gigantic data tables with 10, 20, or 30+ columns of data to parse, your design is almost certainly going to frustrate, if not alienate, your users. Not because all table UIs are bad, but because you’ve put a gigantic tool-time tax on the user, forcing them to derive what the meaningful conclusions should be.” - Brian (@rhythmspice) (20:20)

Tuesday Feb 22, 2022
Why design matters in data products is a question that, at first glance, may not be easy to answer until you see users try to use ML models and analytics to make decisions. For Bill Báez, a data scientist and VP of Strategy at Ascend Innovations, the realization that design and UX matter in this context grew over the course of a few years. Bill’s origins in the Air Force, and his transition to Ascend Innovations, instilled lessons about the importance of using design thinking with both clients and users.
After observing solutions built in total isolation with zero empathy and knowledge of how they were being perceived in the wild, Bill realized the critical need to bring developers “upstairs” to actually observe the people using the solutions that were being built.
Currently, Ascend Innovations’ consulting is primarily rooted in healthcare and community services, and in this episode, Bill provides some real-world examples where their machine learning and analytics solutions were informed by approaching the problems from a human-centered design perspective. Bill also dives into where he is on his journey to integrate his UX and data science teams at Ascend so they can create better value for their clients and their clients’ constituents.
Highlights in this episode include:
- What caused Bill to notice design for the first time and its importance in data products (03:12)
- Bridging the gap between data science, UX, and the client’s needs at Ascend (08:07)
- How to deal with the “presenting problem” and working with feedback (16:00)
- Bill’s advice for getting designers, UX, and clients on the same page based on his experience to date (23:56)
- How Bill provides unity for his UX and data science teams (32:40)
- The effects of UX in medicine (41:00)
Quotes from Today’s Episode
- “My journey into Design Thinking started in earnest when I started at Ascend, but I didn’t really have the terminology to use. For example, Design Thinking and UX were actually terms I was not personally aware of until last summer. But now that I know and have been exposed to it and have learned more about it, I realize I’ve been doing a lot of that type of work in earnest since 2018.” - Bill (03:37)
- “Ascend Innovations has always been product-focused, although again, services is our main line of business. As we started hiring a more dedicated UX team, people who’ve been doing this for their whole career, it really helped me to understand what I had experienced prior to coming to Ascend. Having that UX framework and that Design Thinking lens for part of the time I’ve been here at Ascend—it really brings a lot more firepower to what data science is trying to achieve at the end of the day.” - Bill (08:29)
- “Clients were surprised that we were asking such rudimentary questions. They’ll say ‘Well, we’ve already talked about that,’ or, ‘It should be obvious,’ or, ‘Well, why are you asking me such a simple question?’ And we had to explain to them that we wanted to start at the bottom to move to the top. We don’t want to start somewhere midway and get to the top. We want to make sure that we are all in alignment with what we’re trying to do, so we want to establish that baseline of understanding. So, we’re going to start off asking very simple questions and work our way up from there...” - Bill (21:09)
- “We’re building a thing, but the thing only has value if it creates a change in the world. The world being, in the mind of the stakeholder, in the minds of the users, maybe some third parties that are affected by that stuff, but it’s the change that matters. So what is the better state we want in the future for our client or for our customers and users? That’s the thing we’re trying to create. Not the thing; the change from the thing is what we want, and getting to that is the hard part.” - Brian (@rhythmspice) (26:33)
- “This is a gift that you’re giving to [stakeholders] to save time, to save money, to avoid building something that will never get used and will not provide value to them. You do need to push back against this, and if they say no, that’s fine. Paint the picture of the risk, though, by not doing design. It’s very easy for us to build an ML model. It’s hard for us to build a model that someone will actually use to make the world better. And in this case, it’s healthcare, or support, intervention support for addicts. Do you really want a model, or do you want an improvement in the lives of these addicts? That’s ultimately where we’re going with this, and if we don’t do this, the risk of us pushing out an output that doesn’t get used is high. So, design is a gift, not a tax...” - Brian (@rhythmspice) (34:34)
- “I’d say to anybody out there right now who’s currently working on data science efforts: the sooner you get your people comfortable with the idea of doing Design Thinking, get them implemented into the projects that are currently going on. [...] I think that will be a real game-changer for your data scientists and your organization as a whole...” - Bill (42:19)

Tuesday Feb 08, 2022
Building a SaaS business around a research tool, more than around a data product, is how Jonathan Kay, CEO and Co-Founder of Apptopia, frames his company’s work. Jonathan and I worked together when Apptopia pivoted from its prior business into a mobile intelligence platform for brands. Part of the reason I wanted Jonathan to talk to you all is because I knew he would strip away all the easy-to-see shine and varnish from their success and get really candid about what worked…and what hasn’t…during their journey to turn a data product into a successful SaaS business. So get ready: Jonathan is going to reveal the very curvy line that Apptopia has taken to get where they are today.
In this episode, Jonathan also describes one of the core product design frameworks that Apptopia is currently using to help deliver actionable insights to their customers. For Jonathan, Apptopia’s research-centric approach changes the ways in which their customers can interact with data and is helping eliminate the lull between “the why” and “the actioning” with data.
Here are some of the key parts of the interview:
- An introduction to Apptopia and how they serve brands in the world of mobile app data (00:36)
- The current UX gaps that Apptopia is working to fill (03:32)
- How Apptopia balances flexibility with ease-of-use (06:22)
- How Apptopia establishes the boundaries of its product when it’s just one part of a user’s overall workflow (10:06)
- The challenge of “low use, low trust” and getting “non-data” people to act (13:45)
- Developing strong conclusions and opinions and presenting them to customers (18:10)
- How Apptopia’s product design process has evolved when working with data, particularly at the UI level (21:30)
- The relationship between Apptopia’s buyers versus the users of the product, and how they balance the two (24:45)
- Jonathan’s advice for hiring good data product design and management staff (29:45)
- How data fits into Jonathan’s own decision making as CEO of Apptopia (33:21)
- Jonathan’s advice for emerging data product leaders (36:30)
Quotes from Today’s Episode
- “I want to just give you some props on the work that you guys have done and seeing where it's gone from when we worked together. The word grit, I think, is the word that I most associate with you and Eli [former CEO, co-founder] from those times. It felt very genuine that you believed in your mission and you had a long-term vision for it.” - Brian T. O’Neill (@rhythmspice) (02:08)
- “A research tool gives you the ability to create an input, which might be, ‘I want to see how Netflix is performing.’ And then it gives you a bunch of data. And it gives you good user experience that allows you to look for the answer to the question that’s in your head, but you need to start with a question. You need to know how to manipulate the tool. It requires a huge amount of experience and understanding of the data consumer in order to actually get the answer to the question. For me, that feels like a miss because I think the amount of people who need and can benefit from data, and the amount of people who know how to instrument the tools to get the answers from the data—well, I think there’s a huge disconnect in those numbers. And just like when I take my car in for service, I expect that the car mechanic knows exactly what the hell is going on in there, right? Like, our obligation as a data provider should be to help people get closer to the answer. And I think we still have some room to go in order to get there.” - Jonathan Kay (@JonathanCKay) (04:54)
- “You need to present someone the what, the why, etc.—then the research component [of your data product] is valuable. And so it’s not that having a research tool isn’t valuable. It’s just, you can’t have the whole thing be that. You need to give them the what and the why first.” - Jonathan Kay (@JonathanCKay) (08:45)
- “You can't put equal resources into everything. Knowing the boundaries of your data product is important, but it's sometimes hard to know where to draw them. A leader has to ask, ‘Am I getting outside of my sweet spot? Is this outside of the mission?’ Figuring out the right boundaries goes back to customer research.” - Brian T. O’Neill (@rhythmspice) (12:54)
- “What would I have done differently if I was starting Apptopia today? I would have invested into the quality of the data earlier. I let the product design move me into the clouds a little bit, because sometimes you're designing a product and you're designing visuals, but we were doing it without real data. One of the biggest things that I've learned over a lot of mistakes over a long period of time, is that we've got to incorporate real data in the design process.” - Jonathan Kay (@JonathanCKay) (20:09)
- “We work with one of the biggest food manufacturer distributors in the world, and they were choosing between us and our biggest competitor, and what they essentially did was [say], ‘I need to put this report together every two weeks. I used your competitor’s platform during a trial and your platform during the trial, and I was able to do it two hours faster in your platform, so I chose you—because all the other checkboxes were equal. At the end of the day, if we could get two hours a week back by using your tool, saving time and saving money and making better decisions, they’re all equal ROI contributors.’” - Jonathan Kay on UX (@JonathanCKay) (27:23)
- “In terms of our product design and management hires, we're typically looking for people who have not worked at one company for 10 years. We've actually found a couple phenomenal designers that went from running their own consulting company to wanting to join full time. That was kind of a big win because one of them had a huge breadth of experience working with a bunch of different products in a bunch of different spaces.”- Jonathan Kay (@JonathanCKay) (30:34)
- “In terms of how I use data when making decisions for Apptopia, here’s an example. If you break our business down into different personas, my understanding at one time was that one of our personas was more stagnant. The data, however, did not support that. And so we're having a resource planning meeting, and I'm saying, ‘let's pull back resources a little bit,’ but [my team is] showing me data that says my assumption on that customer segment is actually incorrect. I think entrepreneurs and passionate people need data more because we have so much conviction in our decisions—and because of that, I'm more likely to make bad decisions. Theoretically, good entrepreneurs should have good instincts, and you need to trust those, but what I’m saying is, you also need to check those. It's okay to make sure that your instinct is correct, right? And one of the ways that I’ve gotten more mature is by forcing people to show me data to back up my decision in either direction and being comfortable being wrong. And I am wrong at least half of the time with those things!” - Jonathan Kay (@JonathanCKay) (34:09)

Tuesday Jan 25, 2022
Design takes many forms and shapes. It is an art, a science, and a method for problem solving. For Bob Goodman, a product management and design executive, the way to view design is as a story and a narrative that conveys the solution to the customer. As a former journalist with 20 years of experience in consumer and enterprise software, Bob has a unique perspective on enabling end-user decision making with data.
Having worked in both product management and UX, Bob shapes the narrative on approaching product management and product design as parts of a whole, and we talked about how data products fit into this model. Bob also shares why he believes design and product need to be under the same umbrella to prevent organizational failures. We also discussed the challenges and complexities that come with delivering data-driven insights to end users when ML and analytics are behind the scenes.
In this episode, we cover:
- An overview of Bob’s recent work as an SVP of product management - and why design, UX, and product management were unified. (00:47)
- Bob’s thoughts on centralizing the company data model - and how this data and storytelling are integral to the design process. (06:10)
- How product managers and data scientists can gain perspective on their work. (12:22)
- Bob describes a recent dashboard and analytics product, and how customers were involved in its creation. (18:30)
- How “being wrong” is a method of learning - and a look at what Bob calls the “spotlight challenge.” (23:04)
- Why productizing data science is challenging. (30:14)
- Bob’s advice for making trusted data products. (33:46)
Quotes from Today’s Episode
- “[I think of] product management and product design as a unified function. How do those work together? There’s that Steve Jobs quote that we all know and love, that design is not just what it looks like but also how it works, and when you think of it that way, kind of end-to-end, you start to see product management and product design as very unified.” - Bob Goodman (@bob_goodman) (01:34)
- “I have definitely experienced that some people see product management and design and UX as quite separate [...] And this has been a fascinating discovery because I think as a hybrid person, I didn’t necessarily draw those distinctions. [...] From a product and design standpoint, I personally was often used to, especially in startup contexts, starting with the data that we had to work with [...] and saying, ‘Oh, this is our object model, and this is where we have context, [...] and this is the end-to-end workflow.’ And I think it’s an evolution of the industry that there’s been more and more specialization [and] training, and it’s maybe added some barriers that didn’t exist between these disciplines [in the past].” - Bob Goodman (@bob_goodman) (03:30)
- “So many projects tend to fail because no one can really define what good means at the beginning. The strategy is not clear, the problem set is not clear. If you have a data team that thinks the job is to surface the insights from this data, a designer is thinking about the users’ discrete tasks, feelings, and objectives. They are not there to look at the data set; they are there to answer a question and inform a decision. For example, the objective is not to look at sleep data; it may be to understand, ‘am I getting enough rest?’” - Brian T. O’Neill (@rhythmspice) (08:22)
- “I imagine that when one is fascinated by data, it might be natural to presume that everyone will share this equal fascination with a sort of sleuthing or discovery. And then it’s not the case; it’s TL;DR. And so, often users want the headline, or they even need the kind of headline news to start at a glance. And so this is where this idea of storytelling with data comes in, and some of the research [that helps us] understand the mindset that consumers come to the table with.” - Bob Goodman (@bob_goodman) (09:51)
- “You were talking about this technologist’s idea of being ‘not user right, but data right.’ I call this technically right, effectively wrong. This is not an infrequent thing that I hear about, where the analysis might be sound, or the visualization might technically be the right thing for a certain type of audience. The difference is, are we designing for decision-making, or are we designing to display data that tells some story, whether or not it informs the human decision-making that we’re trying to support? The former is what most analytics solutions should strive for.” - Brian T. O’Neill (@rhythmspice) (16:11)
- “We were working to have a really unified approach and data strategy, and to deliver on that in the best possible way for our clients and our end-users [...]. There are many solutions for custom reports, and drill-downs and data extracts, and we have all manner of data tooling. But in the part that we’re really productizing with an experience layer on top, we’re definitely optimizing on the meaningful part versus the display side [which] maybe is a little bit of a ‘less is more’ type of approach.”- Bob Goodman (@bob_goodman) (17:25)
- “Delivering insights is simply the topic that we’re starting with, which is just, as a user, as a reader, especially a business reader, ‘how much can I intake? And what do I need to make sense of it?’ How declarative can you be, responsibly and appropriately, to bring the meaning and the insights forward? There might be a line that’s too much.” - Bob Goodman (@bob_goodman) (33:02)
Links Referenced
- LinkedIn: https://www.linkedin.com/in/bobgoodman/

Tuesday Jan 11, 2022
Episode Description
As the conversation around AI continues, Professor Cynthia Rudin, Computer Scientist and Director at the Prediction Analysis Lab at Duke University, is here to discuss interpretable machine learning and her incredible work in this complex and evolving field. To begin, she is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity.
In this episode, we explore the distinction between explainable and interpretable machine learning and how black boxes aren’t necessarily “better” than more interpretable models. Cynthia offers up real-world examples to illustrate her perspective on the role of humans and AI, and shares takeaways from her previous work, which ranges from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because Cynthia is thinking about both the end users of ML applications as well as the humans who are “out of the loop,” but nonetheless impacted by the decisions made by the users of these AI systems.
In this episode, we cover:
- Background on the Squirrel AI Award – and Cynthia unpacks the differences between Explainable and Interpretable ML. (00:46)
- Using real-world examples, Cynthia demonstrates why black boxes should be replaced. (04:49)
- Cynthia’s work on the New York City power grid project, exploding manhole covers, and why it was the messiest dataset she had ever seen. (08:20)
- A look at the future of machine learning and the value of human interaction as it moves into the next frontier. (15:52)
- Cynthia’s thoughts on collecting end-user feedback and keeping humans in the loop. (21:46)
- The current problems Cynthia and her team are exploring—the Rashomon set, optimal sparse decision trees, sparse linear models, causal inference, and more. (32:33)
Quotes from Today’s Episode
- “I’ve been trying to help humanity my whole life with AI, right? But it’s not something I tried to earn because there was no award like this in the field while I was trying to do all of this work. But I was just totally amazed, and honored, and humbled that they chose me.”- Cynthia Rudin on receiving the AAAI Squirrel AI Award. (@cynthiarudin) (1:03)
- “Instead of trying to replace the black boxes with inherently interpretable models, they were just trying to explain the black box. And when you do this, there's a whole slew of problems with it. First of all, the explanations are not very accurate—they often mislead you. Then you also have problems where the explanation methods are giving more authority to the black box, rather than telling you to replace them.”- Cynthia Rudin (@cynthiarudin) (03:25)
- “Accuracy at all costs assumes that you have a static dataset and you’re just trying to get as high accuracy as you can on that dataset. [...] But that is not the way we do data science. In data science, if you look at a standard knowledge discovery process, [...] after you run your machine learning technique, you’re supposed to interpret the results and use that information to go back and edit your data and your evaluation metric. And you update your whole process and your whole pipeline based on what you learned. So when people say things like, ‘Accuracy at all costs,’ I’m like, ‘Okay. Well, if you want accuracy for your whole pipeline, maybe you would actually be better off designing a model you can understand.’”- Cynthia Rudin (@cynthiarudin) (11:31)
- “When people talk about the accuracy-interpretability trade-off, it just makes no sense to me because it’s like, no, it’s actually reversed, right? If you can actually understand what this model is doing, you can troubleshoot it better, and you can get overall better accuracy.“- Cynthia Rudin (@cynthiarudin) (13:59)
- “Humans and machines obviously do very different things, right? Humans are really good at having a systems-level way of thinking about problems. They can look at a patient and see things that are not in the database and make decisions based on that information, but no human can calculate probabilities really accurately in their heads from large databases. That’s why we use machine learning. So, the goal is to try to use machine learning for what it does best and use the human for what it does best. But if you have a black box, then you’ve effectively cut that off because the human has to basically just trust the black box. They can’t question the reasoning process of it because they don’t know it.”- Cynthia Rudin (@cynthiarudin) (17:42)
- “Interpretability is not always equated with sparsity. You really have to think about what interpretability means for each domain and design the model to that domain, for that particular user.”- Cynthia Rudin (@cynthiarudin) (19:33)
- “I think there's sometimes this perception that there's the truth from the data, and then there's everything else that people want to believe about whatever it says.”- Brian T. O’Neill (@rhythmspice) (23:51)
- “Surveys have their place, but there's a lot of issues with how we design surveys to get information back. And what you said is a great example, which is 7 out of 7 people said, ‘this is a serious event.’ But then you find out that they all said serious for a different reason—and there's a qualitative aspect to that. […] The survey is not going to tell us if we should be capturing some of that information if we don't know to ask a question about that.”- Brian T. O’Neill (@rhythmspice) (28:56)
Links
- Squirrel AI Award: https://aaai.org/Pressroom/Releases/release-21-1012.php
- “Machine Bias”: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Users.cs.duke.edu/~cynthia: https://users.cs.duke.edu/~cynthia
- Teaching: https://users.cs.duke.edu/~cynthia/teaching.html

Tuesday Dec 28, 2021
Episode Description
The relationship between humans and artificial intelligence has been an intricate topic of conversation across many industries. François Candelon, Global Director at Boston Consulting Group Henderson Institute, has been a significant contributor to that conversation, most notably through an annual research initiative that BCG and MIT Sloan Management Review have been conducting about AI in the enterprise. In this episode, we’re digging particularly into the findings of the 2020 and 2021 studies that were just published at the time of this recording.
Through these yearly findings, the study has shown that organizations with the most competitive advantage are the ones that are focused on effectively designing AI-driven applications around the humans in the loop. As these organizations continue to generate value with AI, the gap between them and companies that do not embrace AI has only increased. To close this gap, companies will have to learn to design trustworthy AI applications that actually get used, produce value, and are designed around mutual learning between the technology and users. François claims that a “human plus AI” approach—what former Experiencing Data guest Ben Shneiderman calls HCAI (see Ep. 062)—can create organizational learning, trust, and improved productivity.
In this episode, we cover:
- How the Henderson Institute is conducting its multi-year study with MIT Sloan Management Review. (00:43)
- The core findings of the 2020 study, what the 10/20/70 rule is, how François uses it to gauge a company’s level of success in deploying AI, and specific examples of what leading companies are doing in terms of user experience around AI. (03:08)
- The core findings of the 2021 study, and how mutual learning between human and machine (i.e. the experience of learning from and contributing to ML applications) increases the success rate of AI deployments. (07:53)
- The AI driving license for CxOs: A discussion about the gap between C-suite and data scientists and why it’s critical for teams to be agile and integrate both capabilities. (14:44)
- Why companies should embed AI as the core of their operating process. (22:07)
- François’ perspective on leveraging AI and why it is meant to solve problems and impact cultural change. (29:28)
Quotes from Today’s Episode
- “What makes the real difference is when you have what we call organizational learning, which means that at the same time you learn from AI as an individual, as a human, AI will learn from you. And this is relatively easy to understand because as we’re in a world, which is always more uncertain, the rate of learning, the ability for an organization to learn, is one of the most important competitive advantages.”- François Candelon (04:58)
- “When there is an additional effectiveness linked to AI, people will feel more comfortable, will feel augmented, not replaced, and then they will trust AI. As they trust, they are ready to have additional use cases implemented and therefore you are entering into a virtuous cycle.”- François Candelon (08:06)
- “If you try to optimize human plus AI and build on their respective capabilities—humans are much better at dealing with ambiguity, and AI deals with large amounts of data. If you’re able to combine both, then you’re in a situation to be ready to create a source of competitive advantage.” - François Candelon (09:36)
- “I think that’s largely the point of my show and what I’m trying to focus on is to talk to the people who do want to go beyond the technical work. Building technically right, effectively wrong solutions is something nobody needs, and at some point, not only is it not good for your career, but you might find it more rewarding to work on things that actually matter, that get used, that go into the world, that produce value. It’s more personally gratifying, not just for the business, but yourself.” - Brian T. O’Neill (@rhythmspice) (20:55)
- “Making sure that AI becomes the core of your operating process and your operating model [is] very important. I think that very often companies ask themselves, ‘how could AI help me optimize my process?’ I believe that they should now move—or at least the most advanced—are now moving to, ‘how should I make sure that I redesign my process to get the full potential of AI, to bring AI at the core of my operating model?’”- François Candelon (24:40)
- “AI is a way to solve problems, not an objective in itself. So, this is why when I used to say we are an AI-enabled or an AI-powered company, it shows a capability. It shows a way of thinking and the ability to deal with the foundational capabilities of AI. It’s not something else. And this is why—for the data scientists that will be open to better understanding business—they will learn a lot, and it will be very enlightening to be able to solve these issues and to solve these problems.”- François Candelon (30:51)
- “The humans in the loop matter, folks. For now at least, we’re still here. It’s not all machines running machines. So, you have to figure out the human-machine interaction. It’s not going away, and so when you’re ready, it’s time to face that we need to design for the human in the loop, and we need to think about the last mile, and we need to think about change, adoption, and all the human factors that go into the solution, as well as the technologies.” - Brian T. O’Neill (@rhythmspice) (35:35)
Links
- BCG Henderson Institute: https://bcghendersoninstitute.com/
- François on LinkedIn: https://www.linkedin.com/in/françois-candelon

Tuesday Dec 14, 2021
Finding it hard to measure the impact of your data products on the business or your end users? Do you struggle to understand the impact your data science, analytics, or product team is having on the people they serve?
Many times, the challenge comes down to figuring out WHAT to measure, and HOW. Clients, users, and customers often don’t even know what the right success or progress metrics are, let alone how to quantify them. Learning how to measure what might seem impossible is a highly valuable skill for leaders who want to track their progress with data—but it’s not all black and white. It’s not always about “more data,” and measurement is not about finding “the one, right answer.” Analytical minds, get ready to embrace subjectivity and uncertainty in this episode!
In this insightful chat, Doug Hubbard and I explore examples from his book, How to Measure Anything, and we discuss its applicability to the world of data and data products. From defining trust to identifying cognitive biases in qualitative research, Doug shares how he views the world in ways that we can actually measure. We also discuss the relationship between data and uncertainty, forecasting, and why people who are trying to measure something usually believe they have a lot less data than they really do.
In this episode, we cover:
- A discussion about measurement, defining “trust”, and why it is important to collect data in a systematic way. (01:35)
- Doug explores “concept, object and methods of measurement” - and why most people have more data than they realize when investigating questions. (09:29)
- Why asking the right questions is more important than “needing to be the expert” - and a look at cognitive biases. (16:46)
- The Dunning-Kruger effect and how it applies to the way people measure outcomes - and Doug discusses progress metrics vs. success metrics and the illusion of cognition. (25:13)
- How one of the challenges with machine learning also creates valuable skepticism - and the three criteria for experience to convert into learning. (35:35)
Quotes from Today’s Episode
- “Often things like trustworthiness or collaboration, or innovation, or any—all the squishy stuff, they sound hard to measure because they’re actually an umbrella term that bundles a bunch of different things together, and you have to unpack it to figure out what it is you’re talking about. The beginning of all scientific inquiry is to figure out what your terms mean; what question are you even asking?” - Doug Hubbard (@hdr_frm) (02:33)
- “Another interesting phenomenon about measurement in general and uncertainty is that it’s in the cases where you have a lot of uncertainty that you don’t need many data points to greatly reduce it. [People] might assume that if [they] have a lot of uncertainty about something, that [they are] going to need a lot of data to offset that uncertainty. Mathematically speaking, just the opposite is true. The more uncertainty you have, the bigger uncertainty reduction you get from the first observation. In other words, if you know almost nothing, almost anything will tell you something. That’s the way to think of it.” - Doug Hubbard (@hdr_frm) (07:05)
- “I think one of the big takeaways there that I want my audience to hear is that if we start thinking about when we’re building these solutions, particularly analytics and decision support applications, instead of thinking about it as we’re trying to give the perfect answer here, or the model needs to be as accurate as possible, changing the framing to be, ‘if we went from something like a wild-ass guess, to maybe my experience and my intuition, to some level of data, what we’re doing here is we’re chipping away at the uncertainty, right?’ We’re not trying to go from zero to 100. Zero to 20 may be a substantial improvement if we can just get rid of some of that uncertainty, because no solution will ever predict the future perfectly, so let’s just try to reduce some of that uncertainty.” - Brian T. O’Neill (@rhythmspice) (08:40)
- “So, this is really important: [...] you have more data than you think, and you need less than you think. People just throw up their hands far too quickly when it comes to measurement problems. They just say, ‘Well, we don’t have enough data for that.’ Well, did you look? Tell me how much time you spent actually thinking about the problem or did you just give up too soon? [...] Assume there is a way to measure it, and the constraint is that you just haven’t thought of it yet. ”- Doug Hubbard (@hdr_frm) (15:37)
- “I think people routinely believe they have a lot less data than they really do. They tend to believe that each situation is more unique than it really is [to the point] that you can’t extrapolate anything from prior observations. If that were really true, your experience means nothing.” - Doug Hubbard (@hdr_frm) (29:42)
- “When you have a lot of uncertainty, that’s exactly when you don’t need a lot of data to reduce it significantly. That’s the general rule of thumb here. [...] If what we’re trying to improve upon is just the subjective judgment of the stakeholders, all the research today—and by the way, here’s another area where there’s tons of data—there’s literally hundreds of studies where naive statistical models are compared to human experts […] and the consistent finding is that even naive statistical models outperform human experts in a surprising variety of fields.”- Doug Hubbard (@hdr_frm) (32:50)
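Doug’s point above that “you have more data than you think, and you need less than you think” has a concrete illustration in his book: the “Rule of Five,” which says that for a random sample of just five items from any population, there is a 93.75% chance the population median lies between the sample’s smallest and largest values. Below is a minimal Python sketch (standard library only) that checks this both analytically and by simulation; the lognormal population used here is just an arbitrary stand-in for illustration, not data from the episode.
```python
import random
import statistics

# Hubbard's "Rule of Five": for a random sample of 5 from any population,
# the chance that the population median falls between the sample's min and
# max is 1 - 2 * (0.5 ** 5) = 93.75%, regardless of the distribution.
exact = 1 - 2 * (0.5 ** 5)
print(f"Exact probability: {exact:.4f}")  # 0.9375

# Monte Carlo sanity check against an arbitrary skewed population
# (this particular lognormal population is only an illustrative assumption).
random.seed(0)
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_median = statistics.median(population)

trials, hits = 20_000, 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"Simulated probability: {hits / trials:.4f}")  # should land near 0.94
```
The exact figure follows from the fact that each independent draw has a 50% chance of landing above the median, so the only ways the median can fall outside the sample range are “all five above” or “all five below,” each with probability 0.5^5.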
Links Referenced
- How to Measure Anything: https://www.amazon.com/gp/product/1118539273/
- Hubbard Decision Research: https://hubbardresearch.com

Tuesday Nov 30, 2021
Berit Hoffmann, Chief Product Officer at Sisu, tackles design from a customer-centric perspective with a focus on finding problems at their source and enabling decision making. However, she learned some of these lessons the hard way, and in this episode, we dig into those experiences and what she’s now doing differently in her current role as CPO.
In particular, Berit reflects on her “ivory tower design” experience at a past startup called Bebop. During that time, she quickly realized the importance of engaging with customer needs and building intuitive, simple solutions for complex problems. Berit also discusses the Double Diamond Process, how it shapes her own decision-making, and the various ways she carries out her work at Sisu.
In this episode, we also cover:
- How Berit’s “ivory tower design experience” at Bebop taught her the importance of dedicating time to focus on the customer. (01:31)
- What Berit looked for as she researched Sisu prior to joining - and how she and Peter Bailis, Founder and CEO, share the same philosophy on what a product’s user experience should look like. (03:57)
- Berit discusses the Double Diamond Process and the life cycle of designing a project - and shares her take on designing for decision making. (10:17)
- Sisu’s shift from answering the why to the what - and how they approach user testing using product as a metric layer. (19:10)
- Berit explores the tension that can arise when designing a decision support tool. (31:03)
Quotes from Today’s Episode
- “I kind of learned the hard way, the importance of spending that time with customers upfront and really digging into understanding what problems are most challenging for them. Those are the problems to solve, not the ones that you as a product manager or as a designer think are most important. It is a lesson I carry forward with me in terms of how I approach anything I'm going to work on now. The sooner I can get it in front of users, the sooner I can get feedback and really validate or invalidate my assumptions, the better because they're probably going to tell me why I'm wrong.”- Berit Hoffmann (03:15)
- “As a designer and product thinker, the problem finding is almost more important than the solutioning because the solution is easy when you really understand the need. It's not hard to come up with good solutions when the need is so clear, which you can only get through conversation, inquiry, shadowing, and similar research and design methods.” - Brian T. O’Neill (@rhythmspice) (10:54)
- “Decision-making is a human process. There's no world in which you're going to spit out an answer and say, ‘just go do it.’ Software is always going to be missing the rich context and expertise that humans have about their business and the context in which they're making the decision. So, what that says to me is inherently, decision-making is also going to be an iterative process. [...] What I think technology can do is it can automate and accelerate a lot of the manual repetitive steps in the analysis that are taking up a bunch of time today. Especially as data is getting exponentially more complex and multi-dimensional.”- Berit Hoffmann (17:44)
- “When we talk to people about solving problems, 9 out of 10 people say they would add something to whatever it is that you're making to make it better. So often, when designers think about modernism, it is very much about ‘what can I take away that will make it better?’ And I think this gets lost. The tendency with data, when you think about how much we're collecting and the scale of it, is to assume that adding more is always going to make it better, and it doesn't make it better all the time. It can slow things down and cause noise. It can make people ask even more questions. When in reality, the goal is to make a decision.” - Brian T. O’Neill (@rhythmspice) (30:11)
- “I’m trying to resist the urge to get industry-specific or metric-specific in any of the kind of baseline functionality in the product. And instead, say that we can experiment in a lightweight way outside of the product: help content, guidance on best practices, etc. That is going to be a constant tension because the types of decisions that you enact and the types of questions you're digging into are really different depending on whether you're a massive hotel chain compared to a quick-service restaurant compared to a B2B SaaS company. The personas and the questions are so different. So that's a tension that I think is really interesting when you think about the decision-making workflow and who those stakeholders are.” - Berit Hoffmann (32:05)
Links Referenced
- Sisu: https://sisudata.com
- Berit Hoffmann on LinkedIn: https://www.linkedin.com/in/Hoffmannn-berit/
- Sisu on LinkedIn: https://www.linkedin.com/company/sisu-data/

Tuesday Nov 16, 2021
Eric Weber, Head of Data Product at Yelp, has spent his career developing a product-minded approach to producing data-driven solutions that actually deliver value. For Eric, the data product mindset is still quite new, and today, we’re digging into all things “data product management” and why thinking of data with a product mindset matters.
In our conversation, Eric defines what data products are and explains the value that data product managers can bring to their companies. Eric’s own ethos, centered on empathy and balanced equally with technical credibility, is central to his perspective on data product management. We also discussed how Eric is bringing all of this to bear at Yelp and the various ways they’re tackling their customers' data product needs.
In this episode, we also cover:
- What is a data product and why do we need data product management? (01:34)
- Why successful data product managers carry two important traits - empathy and technical credibility. (10:47)
- A discussion about the levels of problem-solving maturity, the challenge behind delivering solutions, and where product managers can be the most effective during the process. (16:54)
- A look at Yelp’s customer research strategy and what they are focusing on to optimize the user experience. (21:28)
- How Yelp’s product strategy is influenced by classes of problems – and Yelp’s layers of experimentation. (27:38)
- Eric reflects on unlearning and talks about his newsletter, From Data to Product. (34:36)
Quotes from Today’s Episode
- “Data products bring companies a way to think about the long-term viability and sustainability of their data investments. [...] And part of that is creating things that are sustainable, that have a strategy, that have a customer in mind. And a lot of these things people do - maybe they don't call it out explicitly, but this is a packaging that I think focuses us in the right places rather than hoping for the best.”- Eric Weber (@edweber1) (02:43)
- “My hypothesis right now is that by introducing [product management] as a role, you create a vision for our product that is not just tied to a person, it's not just tied to a moment in time of the company. It's something where you can actually have another product manager come in and understand where things are headed. I think that is really the key to seeing the 10 to 20-year sustainability, other than crossing your fingers and hoping that one person stays for a long time, which is kind of a tough bet in this environment.”- Eric Weber (@edweber1) (07:27)
- “My background is in design and one of the things that I have to work on a lot with my clients and with data scientists in particular, is getting out of the head of wanting to work on “the thing” and learning how to fall in love with the customer's problem and their need. And this whole idea of empathy, not being a squishy thing, but do you want your work to matter? Or, do you just write code or work on models all day long and you don't care if it ships and makes a difference? I think good product-minded people care a lot about that outcome. So, this output versus outcome thing is a mindset change that has to happen.”- Brian T. O’Neill (@rhythmspice) (10:56)
- “The question about whether you focus on internal development or external buying often goes back to, what is your business trying to do? And how much is this going to cost us over time? And it's fascinating because I want [anyone listening] to come across [the data product] field as an area in motion. It's probably going to look pretty different a year from now, which I find pretty awesome and fascinating myself.”- Eric Weber (@edweber1) (27:02)
- “If you don't have a deep understanding of what your customer is trying to do and are able to abstract it to some general class of problem, you're probably going to end up building a solution that's too narrow and not sustainable because it will solve something in the short term. But, what if you have to re-architect the whole thing? That's where it becomes really expensive and where having a product strategy pays off.”- Eric Weber (@edweber1) (31:28)
- “I've had to unlearn that idea that I need to create a definitive framework of what someone does. I just need to be able to put on different lenses. [For example] if I'm talking to design today, these are probably the things that they're going to be focused on and concerned about. If I'm talking to our executive team, this is probably how they're going to break this problem down and look at it. So, I think it's not necessarily dropping certain frameworks, it's being able to understand that some of them are useful in certain scenarios and they're not in others. And that ability is something that I think has created this chance for me to look at the data product from different spaces and think about why it might be valuable.”- Eric Weber (@edweber1) (35:54)

Tuesday Nov 02, 2021
Even in the performing arts world, data and analytics is serving a purpose. Jordan Gross Richmond is the Chief Product Officer at AMS Analytics, where they provide benchmarking and performance reporting to performing arts organizations. As many of you know, I’m also a musician who tours and performs in the performing arts market and so I was curious to hear how data plays a role “off the stage” within these organizations. In particular, I wanted to know how Jordan designed the interfaces for AMS Analytics’s product, and what’s unique (or not!) about using data to manage arts organizations.
Jordan also talks about the beginnings of AMS, their relationship with leaders in the performing arts industry, and the “birth of benchmarking” in this space. What began as an almost entirely manual process is now a SaaS platform that lets performing arts centers see the data that helps drive their organizations. Given that many performing arts centers are non-profit organizations, I also asked Jordan how these organizations balance their artistic mission against the colder, harder facts of data such as ticket sales, revenue, and “the competition.”
In this episode, we also cover:
- How the AMS platform helps leaders manage their performing arts centers and the evolution of the AMS business model. (01:10)
- Benchmarking as a measure of success in the performing arts industry and the “two buckets of context” AMS focuses on. (06:00)
- Strategies for measuring intangible success and how performing arts data is about more than just the number of seats filled at concerts and shows. (15:48)
- The relationships between AMS and its customers, their organizational structure, and how AMS has shaped it into a useful SaaS product. (26:27)
- The role of users in designing the solution and soliciting feedback and what Jordan means when he says he “focuses on the problems, and not the solutions” in his role as Chief Product Officer. (35:38)
Quotes from Today’s Episode
- “I think [AMS] is a one-of-a-kind thing, and what it does now is it provides what I consider to be a steering wheel for these leaders. It’s not the kind of thing that’s going to help anybody figure out what to do tomorrow; it’s more about what’s going on in a year from now and in five years from now. And I think the need for this particular vision comes from the evolution in the business model in general of the performing arts and the cultural arts in America.”- Jordan Gross Richmond (@the1jordangross) (03:07)
- “No one metric can solve everything. It’s a one-to-one relationship in terms of data model to analytical point. So, we have to be really careful that we don't think that just because there's a lot of charts on the screen, we must be able to answer all of our [customers'] questions.”- Jordan Gross Richmond (@the1jordangross) (18:18)
- “We are absolutely a product-led organization, which essentially means that the solutions are built into the product, and the relationship with the clients and the relationship with future clients is actually all engineered into the product itself. And so I never want to create anything in a black box. Nobody benefits from a feature that nobody cares about.”- Jordan Gross Richmond (@the1jordangross) (29:16)
- “This is an evolution that's driven not by the technology itself, but [...] by the key stakeholders amongst this community. And we found that to be really successful. In terms of product line growth, when you listen to your users and they feel heard, the sky's the limit. Because at that point, they have buy-in, so you have a real relationship. ”- Jordan Gross Richmond (@the1jordangross) (31:11)
- “Successful product leaders don't focus on the solutions. We focus on the problems. And that's where I like to stay, because sometimes we kind of get into lots of proposals. My role in these meetings is often to help identify the problem and make sure we're all solving the same problem because we can get off pretty easily on a solution that sounds sexy [or] interesting, but if we're not careful, we might be solving a problem that doesn't even exist.”- Jordan Gross Richmond (@the1jordangross) (35:09)
- “It’s about starting with the customer’s problems and working backwards from that. I think that you have to start with the problem space that they're in, and then you do the best job you can with the data that's available. [...] So, I love the fact that you're having these working groups. Sometimes we call these design partners in the design world, and I think that kind of regular interaction and exposure, especially early and as frequently as possible, is a great habit.”- Brian T. O’Neill (@rhythmspice) (40:26)