Experiencing Data with Brian T. O’Neill
083 - Why Bob Goodman Thinks Product Management and Design Must Dance Together to Create “Experience Layers” for Data Products

January 25, 2022

Design takes many forms and shapes. It is an art, a science, and a method for problem solving. For Bob Goodman, a product management and design executive, the way to view design is as a story and a narrative that conveys the solution to the customer. As a former journalist with 20 years of experience in consumer and enterprise software, Bob has a unique perspective on enabling end-user decision making with data.

Having worked in both product management and UX, Bob shapes the narrative on approaching product management and product design as parts of a whole, and we talked about how data products fit into this model. Bob also shares why he believes design and product need to be under the same umbrella to prevent organizational failures. We also discussed the challenges and complexities that come with delivering data-driven insights to end users when ML and analytics are behind the scenes.

  • An overview of Bob’s recent work as an SVP of product management - and why design, UX and product management were unified. (00:47)
  • Bob’s thoughts on centralizing the company data model - and how this data and storytelling are integral to the design process. (06:10)
  • How product managers and data scientists can gain perspective on their work. (12:22)
  • Bob describes a recent dashboard and analytics product, and how customers were involved in its creation. (18:30)
  • How “being wrong” is a method of learning - and a look at what Bob calls the “spotlight challenge.” (23:04)
  • Why productizing data science is challenging. (30:14)
  • Bob’s advice for making trusted data products. (33:46)

Quotes from Today’s Episode

  • “[I think of] product management and product design as a unified function. How do those work together? There’s that Steve Jobs quote that we all know and love that design is not just what it looks like but it’s also how it works, and when you think of it that way, kind of end-to-end, you start to see product management and product design as very unified.”- Bob Goodman (@bob_goodman) (01:34)
  • “I have definitely experienced that some people see product management and design and UX as quite separate [...] And this has been a fascinating discovery because I think as a hybrid person, I didn’t necessarily draw those distinctions. [...] From a product and design standpoint, I personally was often used to, especially in startup contexts, starting with the data that we had to work with [...] and saying, ‘Oh, this is our object model, and this is where we have context, [...] and this is the end-to-end workflow.’ And I think it’s an evolution of the industry that there’s been more and more specialization, [and] training, and it’s maybe added some barriers that didn’t exist between these disciplines [in the past].”- Bob Goodman (@bob_goodman) (03:30)
  • “So many projects tend to fail because no one can really define what good means at the beginning. The strategy is not clear, the problem set is not clear. If you have a data team that thinks the job is to surface the insights from this data, a designer is thinking about the users’ discrete tasks, feelings, and objectives. They are not there to look at the data set; they are there to answer a question and inform a decision. For example, the objective is not to look at sleep data; it may be to understand, ‘am I getting enough rest?’”- Brian T. O’Neill (@rhythmspice) (08:22)
  • “I imagine that when one is fascinated by data, it might be natural to presume that everyone will share this equal fascination with a sort of sleuthing or discovery. And then it’s not the case; it’s TL;DR. And so, often users want the headline, or they even need the kind of headline news to start at a glance. And so this is where this idea of storytelling with data comes in, and some of the research [that helps us] understand the mindset that consumers come to the table with.”- Bob Goodman (@bob_goodman) (09:51)

 

  • “You were talking about this technologist’s idea of being ‘not user right, but it’s data right.’ I call this technically right, effectively wrong. This is not an infrequent thing that I hear about where the analysis might be sound, or the visualization might technically be the right thing for a certain type of audience. The difference is, are we designing for decision-making or are we designing to display the data that does tell some story, whether or not it informs the human decision-making that we’re trying to support? The latter is what most analytics solutions should strive to be.”- Brian T. O’Neill (@rhythmspice) (16:11)
  • “We were working to have a really unified approach and data strategy, and to deliver on that in the best possible way for our clients and our end-users [...]. There are many solutions for custom reports, and drill-downs and data extracts, and we have all manner of data tooling. But in the part that we’re really productizing with an experience layer on top, we’re definitely optimizing on the meaningful part versus the display side [which] maybe is a little bit of a ‘less is more’ type of approach.”- Bob Goodman (@bob_goodman) (17:25)

 

  • “Delivering insights is simply the topic that we’re starting with, which is just as a user, as a reader, especially a business reader, ‘how much can I intake? And what do I need to make sense of it?’ How declarative can you be, responsibly and appropriately, to bring the meaning and the insights forward? There might be a line that’s too much.”- Bob Goodman (@bob_goodman) (33:02)

Links Referenced

082 - What the 2021 $1M Squirrel AI Award Winner Wants You To Know About Designing Interpretable Machine Learning Solutions w/ Cynthia Rudin

January 11, 2022

Episode Description

As the conversation around AI continues, Professor Cynthia Rudin, Computer Scientist and Director at the Prediction Analysis Lab at Duke University, is here to discuss interpretable machine learning and her incredible work in this complex and evolving field. To begin, she is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity.

In this episode, we explore the distinction between explainable and interpretable machine learning and how black boxes aren’t necessarily “better” than more interpretable models. Cynthia offers up real-world examples to illustrate her perspective on the role of humans and AI, and shares takeaways from her previous work, which ranges from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because Cynthia is thinking about both the end users of ML applications as well as the humans who are “out of the loop,” but nonetheless impacted by the decisions made by the users of these AI systems.

In this episode, we cover:

  • Background on the Squirrel AI Award – and Cynthia unpacks the differences between Explainable and Interpretable ML. (00:46)
  • Using real-world examples, Cynthia demonstrates why black boxes should be replaced. (04:49)
  • Cynthia’s work on the New York City power grid project, exploding manhole covers, and why it was the messiest dataset she had ever seen. (08:20)
  • A look at the future of machine learning and the value of human interaction as it moves into the next frontier. (15:52)
  • Cynthia’s thoughts on collecting end-user feedback and keeping humans in the loop. (21:46)
  • The current problems Cynthia and her team are exploring—the Rashomon set, optimal sparse decision trees, sparse linear models, causal inference, and more. (32:33)

Quotes from Today’s Episode

  • “I’ve been trying to help humanity my whole life with AI, right? But it’s not something I tried to earn because there was no award like this in the field while I was trying to do all of this work. But I was just totally amazed, and honored, and humbled that they chose me.”- Cynthia Rudin on receiving the AAAI Squirrel AI Award. (@cynthiarudin) (01:03)
  • “Instead of trying to replace the black boxes with inherently interpretable models, they were just trying to explain the black box. And when you do this, there's a whole slew of problems with it. First of all, the explanations are not very accurate—they often mislead you. Then you also have problems where the explanation methods are giving more authority to the black box, rather than telling you to replace them.”- Cynthia Rudin (@cynthiarudin) (03:25)
  • “Accuracy at all costs assumes that you have a static dataset and you’re just trying to get as high accuracy as you can on that dataset. [...] But that is not the way we do data science. In data science, if you look at a standard knowledge discovery process, [...] after you run your machine learning technique, you’re supposed to interpret the results and use that information to go back and edit your data and your evaluation metric. And you update your whole process and your whole pipeline based on what you learned. So when people say things like, ‘Accuracy at all costs,’ I’m like, ‘Okay. Well, if you want accuracy for your whole pipeline, maybe you would actually be better off designing a model you can understand.’”- Cynthia Rudin (@cynthiarudin) (11:31)
  • “When people talk about the accuracy-interpretability trade-off, it just makes no sense to me because it’s like, no, it’s actually reversed, right? If you can actually understand what this model is doing, you can troubleshoot it better, and you can get overall better accuracy.”- Cynthia Rudin (@cynthiarudin) (13:59)
  • “Humans and machines obviously do very different things, right? Humans are really good at having a systems-level way of thinking about problems. They can look at a patient and see things that are not in the database and make decisions based on that information, but no human can calculate probabilities really accurately in their heads from large databases. That’s why we use machine learning. So, the goal is to try to use machine learning for what it does best and use the human for what it does best. But if you have a black box, then you’ve effectively cut that off because the human has to basically just trust the black box. They can’t question the reasoning process of it because they don’t know it.”- Cynthia Rudin (@cynthiarudin) (17:42)
  • “Interpretability is not always equated with sparsity. You really have to think about what interpretability means for each domain and design the model to that domain, for that particular user.”- Cynthia Rudin (@cynthiarudin) (19:33)
  • “I think there's sometimes this perception that there's the truth from the data, and then there's everything else that people want to believe about whatever it says.”- Brian T. O’Neill (@rhythmspice) (23:51)
  • “Surveys have their place, but there's a lot of issues with how we design surveys to get information back. And what you said is a great example, which is 7 out of 7 people said, ‘this is a serious event.’ But then you find out that they all said serious for a different reason—and there's a qualitative aspect to that. […] The survey is not going to tell us if we should be capturing some of that information if we don't know to ask a question about that.”- Brian T. O’Neill (@rhythmspice) (28:56)

Links

081 - The Cultural and $ Benefits of Human-Centered AI in the Enterprise: Digging Into BCG/MIT Sloan’s AI Research w/ François Candelon

December 28, 2021

Episode Description

The relationship between humans and artificial intelligence has been an intricate topic of conversation across many industries. François Candelon, Global Director at Boston Consulting Group Henderson Institute, has been a significant contributor to that conversation, most notably through an annual research initiative that BCG and MIT Sloan Management Review have been conducting about AI in the enterprise. In this episode, we’re digging particularly into the findings of the 2020 and 2021 studies that were just published at the time of this recording. 

Through these yearly findings, the study has shown that organizations with the most competitive advantage are the ones that are focused on effectively designing AI-driven applications around the humans in the loop. As these organizations continue to generate value with AI, the gap between them and companies that do not embrace AI has only increased. To close this gap, companies will have to learn to design trustworthy AI applications that actually get used, produce value, and are designed around mutual learning between the technology and users. François claims that a “human plus AI” approach—what former Experiencing Data guest Ben Shneiderman calls HCAI (see Ep. 062)—can create organizational learning, trust, and improved productivity.

In this episode, we cover:

  • How the Henderson Institute is conducting its multi-year study with MIT Sloan Management Review. (00:43)
  • The core findings of the 2020 study, what the 10/20/70 rule is, how François uses it to determine a company’s level of successful AI deployment, and specific examples of what leading companies are doing in terms of user experience around AI. (03:08)
  • The core findings of the 2021 study, and how mutual learning between human and machine (i.e. the experience of learning from and contributing to ML applications) increases the success rate of AI deployments. (07:53)
  • The AI driving license for CxOs: A discussion about the gap between C-suite and data scientists and why it’s critical for teams to be agile and integrate both capabilities. (14:44)
  • Why companies should embed AI as the core of their operating process. (22:07)
  • François’ perspective on leveraging AI and why it is meant to solve problems and impact cultural change. (29:28)

Quotes from Today’s Episode

  • “What makes the real difference is when you have what we call organizational learning, which means that at the same time you learn from AI as an individual, as a human, AI will learn from you. And this is relatively easy to understand because as we’re in a world, which is always more uncertain, the rate of learning, the ability for an organization to learn, is one of the most important competitive advantages.”- François Candelon (04:58)
  • “When there is an additional effectiveness linked to AI, people will feel more comfortable, will feel augmented, not replaced, and then they will trust AI. As they trust, they are ready to have additional use cases implemented and therefore you are entering into a virtuous cycle.”- François Candelon (08:06)
  • “If you try to optimize human plus AI and build on their respective capabilities—humans are much better at dealing with ambiguity, and AI deals with large amounts of data. If you’re able to combine both, then you’re in a situation to be ready to create a source of competitive advantage.”- François Candelon (09:36)
  • “I think that’s largely the point of my show: what I’m trying to focus on is talking to the people who do want to go beyond the technical work. Building technically right, effectively wrong solutions is something nobody needs, and at some point, not only is it not good for your career, but you might find it more rewarding to work on things that actually matter, that get used, that go into the world, that produce value. It’s more personally gratifying, not just for the business, but yourself.”- Brian T. O’Neill (@rhythmspice) (20:55)
  • “Making sure that AI becomes the core of your operating process and your operating model [is] very important. I think that very often companies ask themselves, ‘how could AI help me optimize my process?’ I believe that they should now move—or at least the most advanced—are now moving to, ‘how should I make sure that I redesign my process to get the full potential of AI, to bring AI at the core of my operating model?’”- François Candelon (24:40)
  • “AI is a way to solve problems, not an objective in itself. So, this is why when I used to say we are an AI-enabled or an AI-powered company, it shows a capability. It shows a way of thinking and the ability to deal with the foundational capabilities of AI. It’s not something else. And this is why—for the data scientists that will be open to better understanding business—they will learn a lot, and it will be very enlightening to be able to solve these issues and to solve these problems.”- François Candelon (30:51)
  • “The human in the loops matter, folks. For now at least, we’re still here. It’s not all machines running machines. So, you have to figure out the human-machine interaction. It’s not going away, and so when you’re ready, it’s time to face that we need to design for the human in the loop, and we need to think about the last mile, and we need to think about change, adoption, and all the human factors that go into the solution, as well as the technologies.”- Brian T. O’Neill (@rhythmspice) (35:35)

Links 

080 – How to Measure the Impact of Data Products…and Anything Else with Forecasting and Measurement Expert Doug Hubbard

December 14, 2021

Finding it hard to know the value of your data products to the business or your end users? Do you struggle to understand the impact your data science, analytics, or product team is having on the people they serve?

Many times, the challenge comes down to figuring out WHAT to measure, and HOW. Clients, users, and customers often don’t even know what the right success or progress metrics are, let alone how to quantify them. Learning how to measure what might seem impossible is a highly valuable skill for leaders who want to track their progress with data—but it’s not all black and white. It’s not always about “more data,” and measurement is also not about “the finite, right answer.” Analytical minds, get ready to embrace subjectivity and uncertainty in this episode!

In this insightful chat, Doug and I explore examples from his book, How to Measure Anything, and we discuss its applicability to the world of data and data products. From defining trust to identifying cognitive biases in qualitative research, Doug shares how he views the world in ways that we can actually measure. We also discuss the relationship between data and uncertainty, forecasting, and why people who are trying to measure something usually believe they have a lot less data than they really do. 

In this episode, we cover:

  • A discussion about measurement, defining “trust”, and why it is important to collect data in a systematic way. (01:35)
  • Doug explores “concept, object and methods of measurement” - and why most people have more data than they realize when investigating questions. (09:29)
  • Why asking the right questions is more important than “needing to be the expert” - and a look at cognitive biases. (16:46)
  • The Dunning-Kruger effect and how it applies to the way people measure outcomes - and Doug discusses progress metrics vs success metrics and the illusion of cognition. (25:13)
  • How one of the challenges with machine learning also creates valuable skepticism - and the three criteria for experience to convert into learning. (35:35)

Quotes from Today’s Episode

  • “Often things like trustworthiness or collaboration, or innovation, or any—all the squishy stuff, they sound hard to measure because they’re actually an umbrella term that bundles a bunch of different things together, and you have to unpack it to figure out what it is you’re talking about. The beginning of all scientific inquiry is to figure out what your terms mean; what question are you even asking?”- Doug Hubbard (@hdr_frm) (02:33)

  • “Another interesting phenomenon about measurement in general and uncertainty is that it’s in the cases where you have a lot of uncertainty that you don’t need many data points to greatly reduce it. [People] might assume that if [they] have a lot of uncertainty about something, that [they are] going to need a lot of data to offset that uncertainty. Mathematically speaking, just the opposite is true. The more uncertainty you have, the bigger uncertainty reduction you get from the first observation. In other words, if you know almost nothing, almost anything will tell you something. That’s the way to think of it.”- Doug Hubbard (@hdr_frm) (07:05)

  • “I think one of the big takeaways there that I want my audience to hear is that if we start thinking about when we’re building these solutions, particularly analytics and decision support applications, instead of thinking about it as we’re trying to give the perfect answer here, or the model needs to be as accurate as possible, changing the framing to be, ‘if we went from something like a wild-ass guess, to maybe my experience and my intuition, to some level of data, what we’re doing here is we’re chipping away at the uncertainty, right?’ We’re not trying to go from zero to 100. Zero to 20 may be a substantial improvement if we can just get rid of some of that uncertainty, because no solution will ever predict the future perfectly, so let’s just try to reduce some of that uncertainty.”- Brian T. O’Neill (@rhythmspice) (08:40)

  • “So, this is really important: [...] you have more data than you think, and you need less than you think. People just throw up their hands far too quickly when it comes to measurement problems. They just say, ‘Well, we don’t have enough data for that.’ Well, did you look? Tell me how much time you spent actually thinking about the problem or did you just give up too soon? [...] Assume there is a way to measure it, and the constraint is that you just haven’t thought of it yet.”- Doug Hubbard (@hdr_frm) (15:37)
  • “I think people routinely believe they have a lot less data than they really do. They tend to believe that each situation is more unique than it really is [to the point] that you can’t extrapolate anything from prior observations. If that were really true, your experience means nothing.”- Doug Hubbard (@hdr_frm) (29:42)

  • “When you have a lot of uncertainty, that’s exactly when you don’t need a lot of data to reduce it significantly. That’s the general rule of thumb here. [...] If what we’re trying to improve upon is just the subjective judgment of the stakeholders, all the research today—and by the way, here’s another area where there’s tons of data—there’s literally hundreds of studies where naive statistical models are compared to human experts […] and the consistent finding is that even naive statistical models outperform human experts in a surprising variety of fields.”- Doug Hubbard (@hdr_frm) (32:50)
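Hubbard’s point that the first observations do the most work can be checked with back-of-the-envelope arithmetic: for a simple estimate of a mean, the standard error shrinks like 1/√n, so the uncertainty reduction from the first extra observation dwarfs all later ones. A minimal sketch (the spread value of 10 is an arbitrary assumption chosen purely for illustration):

```python
import math

# Rule of thumb from the episode: when uncertainty is high, the first
# few observations remove the most of it. For a mean estimate, the
# standard error is sigma / sqrt(n), so each added sample helps less.
def standard_error(sigma, n):
    """Standard error of the mean after n observations."""
    return sigma / math.sqrt(n)

sigma = 10.0  # assumed spread of the quantity being measured
reductions = [
    standard_error(sigma, n) - standard_error(sigma, n + 1)
    for n in range(1, 6)
]
print([round(r, 2) for r in reductions])
# → [2.93, 1.3, 0.77, 0.53, 0.39]
```

The drop in uncertainty from the 1st to the 2nd observation is several times larger than the drop from the 5th to the 6th, which is the mathematical shape behind “if you know almost nothing, almost anything will tell you something.”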

Links Referenced

 

079 - How Sisu’s CPO, Berit Hoffmann, Is Approaching the Design of Their Analytics Product…and the UX Mistakes She Won’t Make Again

November 30, 2021

Berit Hoffmann, Chief Product Officer at Sisu, tackles design from a customer-centric perspective with a focus on finding problems at their source and enabling decision making. However, she had to learn some lessons the hard way, and in this episode, we dig into those experiences and what she’s now doing differently in her current role as a CPO.

In particular, Berit reflects on her “ivory tower design” experience at a past startup called Bebop. In that role, she quickly realized the importance of engaging with customer needs and building intuitive and simple solutions for complex problems. Berit also discusses the Double Diamond Process, how it shapes her own decision-making, and the various ways she carries out her work at Sisu.

 

In this episode, we also cover:

  • How Berit’s “ivory tower design experience” at Bebop taught her the importance of dedicating time to focus on the customer. (01:31)
  • What Berit looked for as she researched Sisu prior to joining - and how she and Peter Bailis, Founder and CEO, share the same philosophy on what a product’s user experience should look like. (03:57)
  • Berit discusses the Double Diamond Process and the life cycle of designing a project - and shares her take on designing for decision making. (10:17)
  • Sisu’s shift from answering the why to the what - and how they approach user testing using product as a metric layer. (19:10)
  • Berit explores the tension that can arise when designing a decision support tool. (31:03)

Quotes from Today’s Episode

  • “I kind of learned the hard way the importance of spending that time with customers upfront and really digging into understanding what problems are most challenging for them. Those are the problems to solve, not the ones that you as a product manager or as a designer think are most important. It is a lesson I carry forward with me in terms of how I approach anything I'm going to work on now. The sooner I can get it in front of users, the sooner I can get feedback and really validate or invalidate my assumptions, the better because they're probably going to tell me why I'm wrong.”- Berit Hoffmann (03:15)

 

  • “As a designer and product thinker, the problem finding is almost more important than the solutioning because the solution is easy when you really understand the need. It's not hard to come up with good solutions when the need is so clear, which you can only get through conversation, inquiry, shadowing, and similar research and design methods.” - Brian T. O’Neill (@rhythmspice) (10:54)

 

  • “Decision-making is a human process. There's no world in which you're going to spit out an answer and say, ‘just go do it.’ Software is always going to be missing the rich context and expertise that humans have about their business and the context in which they're making the decision. So, what that says to me is inherently, decision-making is also going to be an iterative process. [...] What I think technology can do is it can automate and accelerate a lot of the manual repetitive steps in the analysis that are taking up a bunch of time today. Especially as data is getting exponentially more complex and multi-dimensional.”- Berit Hoffmann  (17:44)

 

  • “When we talk to people about solving problems, 9 out of 10 people say they would add something to whatever it is that you're making to make it better. So often, when designers think about modernism, it is very much about ‘what can I take away that will help make it better?’ And, I think this gets lost. The tendency with data, when you think about how much we're collecting and the scale of it, is to assume that adding more is always going to make it better, and it doesn't make it better all the time. It can slow things down and cause noise. It can make people ask even more questions. When in reality, the goal is to make a decision.”- Brian T. O’Neill (@rhythmspice) (30:11)

 

  • “I’m trying to resist the urge to get industry-specific or metric-specific in any of the kind of baseline functionality in the product. And instead, say that we can experiment in a lightweight way outside of the product: help content, guidance on best practices, etc. That is going to be a constant tension because the types of decisions that you enact and the types of questions you're digging into are really different depending on whether you're a massive hotel chain compared to a quick-service restaurant compared to a B2B SaaS company. The personas and the questions are so different. So that's a tension that I think is really interesting when you think about the decision-making workflow and who those stakeholders are.”- Berit Hoffmann (32:05)

Links Referenced

078 - From Data to Product: What is Data Product Management and Why Do We Need It with Eric Weber

November 16, 2021

Eric Weber, Head of Data Product at Yelp, has spent his career developing a product-minded approach to producing data-driven solutions that actually deliver value. For Eric, developing a data product mindset is still quite new and today, we’re digging into all things “data product management” and why thinking of data with a product mindset matters.

In our conversation, Eric defines what data products are and explains the value that data product managers can bring to their companies. Eric’s own ethos of centering on empathy, equally balanced with technical credibility, is central to his perspectives on data product management. We also discussed how Eric is bringing all of this to bear at Yelp and the various ways they’re tackling their customers' data product needs.

In this episode, we also cover:

  • What is a data product and why do we need data product management? (01:34)
  • Why successful data product managers carry two important traits - empathy and technical credibility. (10:47)
  • A discussion about the levels of problem-solving maturity, the challenge behind delivering solutions, and where product managers can be the most effective during the process. (16:54)
  • A look at Yelp’s customer research strategy and what they are focusing on to optimize the user experience. (21:28)
  • How Yelp’s product strategy is influenced by classes of problems – and Yelp’s layers of experimentation. (27:38)
  • Eric reflects on unlearning and talks about his newsletter, From Data to Product. (34:36)

Quotes from Today’s Episode

  • “Data products bring companies a way to think about the long-term viability and sustainability of their data investments. [...] And part of that is creating things that are sustainable, that have a strategy, that have a customer in mind. And a lot of these things people do - maybe they don't call it out explicitly, but this is a packaging that I think focuses us in the right places rather than hoping for the best.”-  Eric Weber (@edweber1) (02:43)
  • “My hypothesis right now is that by introducing [product management] as a role, you create a vision for our product that is not just tied to a person, it's not just tied to a moment in time of the company. It's something where you can actually have another product manager come in and understand where things are headed. I think that is really the key to seeing the 10 to 20-year sustainability, other than crossing your fingers and hoping that one person stays for a long time, which is kind of a tough bet in this environment.”- Eric Weber (@edweber1) (07:27)
  • “My background is in design and one of the things that I have to work on a lot with my clients and with data scientists in particular, is getting out of the head of wanting to work on “the thing” and learning how to fall in love with the customer's problem and their need. And this whole idea of empathy, not being a squishy thing, but do you want your work to matter? Or, do you just write code or work on models all day long and you don't care if it ships and makes a difference? I think good product-minded people care a lot about that outcome. So, this output versus outcome thing is a mindset change that has to happen.”- Brian T. O’Neill (@rhythmspice) (10:56)
  • “The question about whether you focus on internal development or external buying often goes back to, what is your business trying to do? And how much is this going to cost us over time? And it's fascinating because I want [anyone listening] to come across [the data product] field as an area in motion. It's probably going to look pretty different a year from now, which I find pretty awesome and fascinating myself.”- Eric Weber (@edweber1) (27:02)
  • “If you don't have a deep understanding of what your customer is trying to do and are able to abstract it to some general class of problem, you're probably going to end up building a solution that's too narrow and not sustainable because it will solve something in the short term. But, what if you have to re-architect the whole thing? That's where it becomes really expensive and where having a product strategy pays off.”- Eric Weber (@edweber1) (31:28)
  • “I've had to unlearn that idea that I need to create a definitive framework of what someone does. I just need to be able to put on different lenses. [For example] if I'm talking to design today, these are probably the things that they're going to be focused on and concerned about. If I'm talking to our executive team, this is probably how they're going to break this problem down and look at it. So, I think it's not necessarily dropping certain frameworks, it's being able to understand that some of them are useful in certain scenarios and they're not in others. And that ability is something that I think has created this chance for me to look at the data product from different spaces and think about why it might be valuable.”- Eric Weber (@edweber1) (35:54)

077 - Productizing Analytics for Performing Arts Organizations with AMS Analytics CPO Jordan Gross Richmond

November 2, 2021

Even in the performing arts world, data and analytics serve a purpose. Jordan Gross Richmond is the Chief Product Officer at AMS Analytics, which provides benchmarking and performance reporting to performing arts organizations. As many of you know, I’m also a musician who tours and performs in the performing arts market, so I was curious to hear how data plays a role “off the stage” within these organizations. In particular, I wanted to know how Jordan designed the interfaces for the AMS Analytics product, and what’s unique (or not!) about using data to manage arts organizations.

Jordan also talks about the beginnings of AMS, their relationship with leaders in the performing arts industry, and the “birth of benchmarking” in this space. What began as an almost entirely manual process has evolved into a SaaS platform that lets performing arts centers see the data that helps drive their organizations. Given that many performing arts centers are non-profit organizations, I also asked Jordan how these organizations balance their artistic mission against the colder, harder facts of data such as ticket sales, revenue, and “the competition.”

In this episode, we also cover:

  • How the AMS platform helps leaders manage their performing arts centers and the evolution of the AMS business model. (01:10)
  • Benchmarking as a measure of success in the performing arts industry and the “two buckets of context” AMS focuses on. (06:00)
  • Strategies for measuring intangible success and how performing arts data is about more than just the number of seats filled at concerts and shows. (15:48)
  • The relationships between AMS and its customers, their organizational structure, and how AMS has shaped it into a useful SaaS product. (26:27)
  • The role of users in designing the solution and soliciting feedback and what Jordan means when he says he “focuses on the problems, and not the solutions” in his role as Chief Product Officer. (35:38)

 Quotes from Today’s Episode

  • “I think [AMS] is a one-of-a-kind thing, and what it does now is it provides what I consider to be a steering wheel for these leaders. It’s not the kind of thing that’s going to help anybody figure out what to do tomorrow; it’s more about what’s going on in a year from now and in five years from now. And I think the need for this particular vision comes from the evolution in the business model in general of the performing arts and the cultural arts in America.”- Jordan Gross Richmond (@the1jordangross) (03:07)
  • “No one metric can solve everything. It’s a one-to-one relationship in terms of data model to analytical point. So, we have to be really careful that we don't think that just because there's a lot of charts on the screen, we must be able to answer all of our [customers'] questions.”- Jordan Gross Richmond (@the1jordangross) (18:18)
  • “We are absolutely a product-led organization, which essentially means that the solutions are built into the product, and the relationship with the clients and the relationship with future clients is actually all engineered into the product itself. And so I never want to create anything in a black box. Nobody benefits from a feature that nobody cares about.”- Jordan Gross Richmond (@the1jordangross) (29:16)
  • “This is an evolution that's driven not by the technology itself, but [...] by the key stakeholders amongst this community. And we found that to be really successful. In terms of product line growth, when you listen to your users and they feel heard, the sky's the limit. Because at that point, they have buy-in, so you have a real relationship.”- Jordan Gross Richmond (@the1jordangross) (31:11)
  • “Successful product leaders don't focus on the solutions. We focus on the problems. And that's where I like to stay, because sometimes we kind of get into lots of proposals. My role in these meetings is often to help identify the problem and make sure we're all solving the same problem because we can get off pretty easily on a solution that sounds sexy [or] interesting, but if we're not careful, we might be solving a problem that doesn't even exist.”- Jordan Gross Richmond (@the1jordangross) (35:09)
  • “It’s about starting with the customer’s problems and working backwards from that. I think that you have to start with the problem space that they're in, and then you do the best job you can with the data that's available. [...] So, I love the fact that you're having these working groups. Sometimes we call these design partners in the design world, and I think that kind of regular interaction and exposure, especially early and as frequently as possible, is a great habit.”- Brian T. O’Neill (@rhythmspice) (40:26)

Links Referenced

https://www.ams-analytics.com/

076 - How Bedrock’s “Data by Design” Mantra Helps Them Build Human-Centered Solutions with Jesús Templado

October 19, 2021

Why do we need or care about design in the work of data science? Jesús Templado, Managing Director at Bedrock, is here to tell us about how Bedrock executes their mantra, “data by design.” 

 

Bedrock has found ways to bring to their clients a design-driven, human-centered approach by utilizing a “hybrid model” to synthesize technical possibilities with human needs. In this episode, we explore Bedrock’s vision for how to achieve this synthesis as part of the firm’s DNA, and how Bedrock adopted their vision to make data more approachable with the client being central to their design efforts. Jesús also discusses a time when he championed making “data by design” a successful strategy with a large chain of hotels, and he offers insight on how making clients feel validated and heard plays a part.

 

In our chat, we also covered: 

  • “Data by design” and how Bedrock implements this design-driven approach. (00:43)
  • Bedrock’s vision for how they support their clients and why design has always been part of their DNA. (08:53)
  • Jesús shares a time when he successfully implemented a design process for a large chain of hotels, and some of the challenges that came with that approach. (14:47)
  • The importance of making clients feel heard by dedicating time to research and UX and how the team navigates conversations about risk with customers. (24:12)
  • More on the client experience and how Bedrock covers a large spectrum of areas to ensure that they deliver a product that makes sense for the customer. (33:01)
  • Jesús’ opinion on why companies should consider change management when building products and systems - and a look at the Data Stand-Up podcast. (35:42)

Quotes from Today’s Episode

“Many people in corporations don’t have the technical background to understand the possibilities when it comes to analyzing or using data. So, bringing a design-based framework, such as design thinking, is really important for all of the work that we do for our clients.” - Jesús Templado (2:33)

 

“We’ve mentioned “data by design” before as our mantra; we very much prefer building long-lasting relationships based on [understanding] our clients' business and their strategic goals. We then design and ideate an implementation roadmap with them and then based on that, we tackle different periods for building different models. But we build the models because we understand what’s going to bring us an outcome for the business—not because the business brings us in to deliver only a model for the sake of predicting what the weather is going to be in two weeks.”- Jesús Templado (14:07)

 

“I think as consultants and people in service, it’s always nice to make friends. And, I like when I can call a client a friend, but I feel like I’m really here to help them deliver a better future state [...] And the road may be bumpy, especially if design is a new thing. And it is often new in the context of data science and analytics projects.”- Brian T. O’Neill (@rhythmspice) (26:49)

 

“When we do data science [...] that’s a means to an end. We do believe it’s important that the client understands the reasoning behind everything that we do and build, but at the end of the day, it’s about understanding that business problem, understanding the challenge that the company is facing, knowing what the expected outcome is, and knowing how you will deliver or predict that outcome to be used for something meaningful and relevant for the business.”- Jesús Templado (33:06)

 

“The appetite for innovation is high, but a lot of the companies that want to do it are more concerned about risk. Risk and innovation are at opposite ends of the spectrum. And so, if you want to be innovative, by definition—you’re signing up for failure on the way to success. [...] It’s about embracing an iterative process, it’s about getting feedback along the way, it’s about knowing that we don’t know everything, and we’re signing up for that ambiguity along the way to something better.”- Brian T. O’Neill (@rhythmspice) (38:20)

 

075 - How CDW is Integrating Design Into Its Data Science and Analytics Teams with Prasad Vadlamani

October 5, 2021

How do we get the most breadth out of design and designers when building data products? One way is to put designers at the front, leading the charge in creating data products that must be useful, usable, and valuable.

 

For this episode, Prasad Vadlamani, CDW’s Director of Data Science and Advanced Analytics, joins us for a chat about how CDW is making design a larger focus of how it creates useful, usable data products. Prasad talks about the importance of making technology—including AI-driven solutions—human-centered, and how CDW tries to keep the end user in mind.


Prasad and I also discuss his perspective on how to build designers into a data product team and how to successfully navigate the grey areas between various areas of expertise. When this is done well, the entire team can draw on each other’s strengths to create a more robust product. We also discuss the role a UI-free user experience plays in some data products, some differences between external and internally facing solutions, and some of Prasad’s valuable takeaways that have helped shape the way he thinks design, data science, and analytics can collaborate.

 

In our chat, we covered: 

 

  • Prasad’s first introduction to designers and how he leverages the disciplines of design and product in his data science and analytics work. (1:09)
  • The terminology behind product manager and designer and how these functions play a role in an enterprise AI team. (5:18)
  • How teams can use their wide range of competencies to their advantage. (8:52)
  • A look at one UI-less experience and the value of the “invisible interface.” (14:58)
  • Understanding the model development process and why the model takes up only a small percentage of the effort required to successfully bring a data product to end users. (20:52)
  • The differences between building an internal vs. external product, what to consider, and Prasad’s “customer zero” approach. (29:17)
  • Expectations Prasad sets with customers (stakeholders) about the life expectancy of data products when they are in their early stage of development. (35:02)

074 - Why a Former Microsoft ML/AI Researcher Turned to Design to Create Intelligent Products from Messy Data with Abhay Agarwal, Founder of Polytopal

September 21, 2021


The challenges of design and AI are exciting ones to face. The key to being successful in that space lies in many places, but one of the most important is instituting the right design language.

For Abhay Agarwal, Founder of Polytopal, it was during his time at Microsoft, working on systems to help the visually impaired, that he realized the necessity of a design language for AI. Stepping away from that experience, he leaned into creating a new methodology of design centered around human needs. His efforts have helped shift the lens of design toward how people solve problems.

In this episode, Abhay and I go into detail on a snippet from his course page for the Stanford d.school, where he claimed that “the foreseeable future would not be well designed, given the difficulty of collaboration between disciplines.” Abhay breaks down how he thinks his design language for AI should work and how to build it out so that everyone in an organization can come to a more robust understanding of AI. We also discuss the future of designers and AI and the ebb and flow of changing, learning, and moving forward with the AI narrative.

In our chat, we covered:

  • Abhay’s background in AI research and what happened to make him move towards design as a method to produce intelligence from messy data. (1:01)
  • Why Abhay has come up with a new design language called Lingua Franca for machine learning products [and his course on this at Stanford’s d.school]. (3:21)
  • How to become more human-centered when building AI products, what ethnographers can uncover, and some of Abhay’s real-world examples. (8:06)
  • Biases in design and the challenges in developing a shared language for both designers and AI engineers. (15:59)
  • Discussing interpretability within black box models using music recommendation systems, like Spotify, as an example. (19:53)
  • How “unlearning” solves one of the biggest challenges teams face when collaborating and engaging with each other. (27:19) 
  • How Abhay is shaping the field of design and ML/AI -- and what’s in store for Lingua Franca. (35:45)

Quotes from Today's Episode

“I certainly don’t think that one needs to hit the books on design thinking or listen to a design thinker describe their process in order to get the fundamentals of a human-centered design process. I personally think it’s something that one can describe to you within the span of a single conversation, and someone who is listening to that can then interpret that and say, ‘Okay well, what am I doing that could be more human-centered?’ In the AI space, I think this is the perennial question.” - Abhay Agarwal (@Denizen_Kane) (6:30)

 

“Show me a company where designers feel at an equivalent level to AI engineers when brainstorming technology? It just doesn’t happen. There’s a future state that I want us to get to that I think is along those lines. And so, I personally see this as, kind of, a community-wide discussion, engagement, and multi-strategy approach.” - Abhay Agarwal (@Denizen_Kane) (18:25)

 

“[Discussing ML data labeling for music recommenders] I was just watching a video about drum and bass production, and they were talking about, ‘Or you can write your bass lines like this’—and they call it reggaeton. And it’s not really reggaeton at all, which was really born in Puerto Rico. And Brazil does the same thing with their versions of reggae. It’s not the one-drop reggae we think of with Bob Marley and Jamaica. So already, we’ve got labeling issues—and they’re not even wrong; it’s just that that’s the way one person might interpret what these musical terms mean.” - Brian O’Neill (@rhythmspice) (25:45)

 

“There is a new kind of hybrid role that is emerging that we play into...which is an AI designer, someone who is very proficient with understanding the dynamics of AI systems. The same way that we have digital UX designers, app designers—there had to be apps before they could be app designers—there is now AI, and there can thus be AI designers.” - Abhay Agarwal (@Denizen_Kane) (33:47)

 
