127.5K Downloads · 161 Episodes
If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Making the tech is getting easier, but getting users to use, and buyers to buy, remains difficult—but you’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every 2 weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData. PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
12 hours ago
Today, I’m chatting with Adam Berke, the Chief Product Officer at The Predictive Index. For 70 years, The Predictive Index has helped customers hire the right employees, and after its merger with Charma, its products now also nurture the employee/manager relationship. This is right up Adam’s alley: he co-founded Charma, the employee and workflow performance management software company, before the two organizations merged in 2023.
You’ll hear Adam talk about the first-time challenges (and successes) that come with integrating two products and two product teams, and why squashing ambiguity by overindexing on preparation (e.g., arriving with new org charts ASAP) is essential during the process.
Integrating behavioral science into the world of data is what has allowed The Predictive Index to thrive since the 1950s. While this is the company’s main selling point, Adam explains how the science-forward approach can still create some disagreements (and learning opportunities) with The Predictive Index’s legacy customers.
Highlights/ Skip to:
- What is The Predictive Index and how does the product team conduct their work (1:24)
- Why Charma merged with The Predictive Index (5:11)
- The challenges Adam has faced as a CPO since the Charma/Predictive Index merger (9:21)
- How Predictive Index has utilized behavioral science to remove the guesswork of hiring (14:22)
- The makeup of the product team that designs and delivers The Predictive Index's products (20:24)
- Navigating the clashes between changing science and Predictive Index's legacy customers (22:37)
- How The Predictive Index analyzes the quality of their products with multiple user data metrics (27:21)
- What Adam would do differently if he had to redo the merger (37:52)
- Where you can find more from Adam and The Predictive Index (41:22)
Quotes from Today’s Episode
- “Acquisitions are complicated. Outside of a few select companies, there are very few that have mergers and acquisitions as a repeatable discipline. More often than not, neither [company in the merger] has an established playbook for how to do this. You’re [acquiring a company] because of its product, team, or maybe even one feature. You have different theories on how the integration might look, but experiencing it firsthand is a whole different thing. My initial role didn’t exist in [The Predictive Index] before. The rest of the whole PI organization knows how to get their work done before this, and now there’s this new executive. There’s just tons of [questions and confusion] if you don’t go in assuming good faith and aren’t willing to work through the bumps. It’s going to get messy.” - Adam Berke (9:41)
- “We integrated the teams and relaunched the product. Charma became [a part of the product called] PI Perform, and right away there was re-skinning, redesign, and some back-end architecture that needed to happen to make it its own module. From a product perspective, we’re trying to deliver [Charma’s] unique value prop. That’s when we can start [figuring out how to] infuse PI’s behavioral science into these workflows. We have this foundation. We got the thing organized. We got the teams organized. We were 12 people when we were acquired… and here we are a year later. 150+ new customers have been added to PI Perform because it’s accelerating now that we’re figuring out the product.” - Adam Berke (12:18)
- “Our product team has the roles that you would expect: a PM, a researcher, a UX designer, and then one atypical role: a PhD behavioral scientist. [Our product already had] suggested topics and templates [for manager/IC one-on-one meetings], but now we want to make those templates and suggested topics more dynamic. There might be different questions to draw out a better discussion, and our behavioral scientists help us determine [those questions]... [Our behavioral scientists] look at the science, other research, and calibrate [the one-on-one questions] before we implement them into the product.” - Adam Berke (21:04)
- “We’ve adapted the technology and science over time as they move forward. We want to update the product with the most recent science, but there are customers who have used this product in a certain way for decades in some cases. Our desire is to follow the science… but you can’t necessarily stop people from using the stuff in a way that they used it 20 years ago. We sometimes end up with disagreements [with customers over product changes based on scientific findings], and those are tricky conversations. But even in that debate… it comes down to all the best practices you would follow in product development in general–listening to your customers, asking that additional ‘why’ question, and trying to get to root causes.” - Adam Berke (23:36)
- “We’re doing an upgrade to our platform right now, trying to figure out how to manage user permissions in the new version of the product. The way that we did it in the old version had a lot of problems associated… and we put out a survey. ‘Hey, do you use this to do X?’ We got hundreds of responses and found that half of them were not using it for the reason that we thought they were. At first, we thought thousands of people were going to have deep, deep sensitivities to tweaks in how this works, and now we realize that it might be half that, at best. A simple one-question survey, asked about the right problem in the right way, can help you avoid a lot of unnecessary thrashing on a product problem that might not have even existed in the first place.” - Adam Berke (35:22)
Links Referenced
- The Predictive Index: https://www.predictiveindex.com/
- LinkedIn: https://www.linkedin.com/in/adamberke/
Tuesday Dec 24, 2024
Today, I’m talking to Andy Sutton, GM of Data and AI at Endeavour Group, Australia's largest liquor and hospitality company. In this episode, Andy—who is also a member of the Data Product Leadership Community (DPLC)—shares his journey from traditional, functional analytics to a product-led approach that drives their mission to leverage data and personalization to build the “Spotify for wines.” This shift has greatly transformed how Endeavour’s digital and data teams work together, and Andy explains how their advanced analytics work has paid off in terms of the company’s value and profitability.
You’ll learn about the often overlooked importance of relationships in a data-driven world, and why Andy values understanding how users do their jobs in the wild (with and without your product(s) in hand). Earlier this year, Andy also gave the DPLC community a deeper look at how they brew data products at EDG, and that recording is available to our members in the archive.
We covered:
- What it was like at EDG before Andy started adopting a producty approach to data products and how things have now changed (1:52)
- The moment that caused Andy to change how his team was building analytics solutions (3:42)
- The financial value Andy’s scaling team has unlocked as a result of their data product work (5:19)
- How Andy and Endeavour use personalization to help build “the Spotify of wine” (9:15)
- What the team under Andy required in order to make the transition to being product-led (10:27)
- The successes seen by Endeavour through the digital and data teams’ working relationship (14:04)
- What data product management looks like for Andy’s team (18:45)
- How Andy and his team find solutions to bridging the adoption gap (20:53)
- The importance of exposure time to end users for the adoption of a data product (23:43)
- How talking to the pub staff at EDG’s bars and restaurants helps his team build better data products (27:04)
- What Andy loves about working for Endeavour Group (32:25)
- What Andy would change if he could rewind back to 2022 and do it all over (34:55)
- Final thoughts (38:25)
Quotes from Today’s Episode
- “I think the biggest thing is the value we unlock in terms of incremental dollars, right? I’ve not worked in an analytics team before where we’ve been able to deliver a measurable value… So, we’re actually—in theory—we’re becoming a profit center for the organization, not just a cost center. And so, there’s kind of one key metric. The second one, we do measure the voice of the team and how engaged our team are, and that’s on an upward trend since we moved to the new operating model, too. We also measure [a type of] “voice of partner” score [and] get something like a 4.1 out of 5 on that scale. Those are probably the three biggest ones: we’re putting value in, and we’re delivering products, I guess, our internal team wants to use, and we are building an enthused team at the same time.” - Andy Sutton (16:18)
- “You can put an [unfinished] product in front of an end customer, and they will give you quality feedback that you can then iterate on quickly. You can do that with an internal team, but you’ll lose credibility. Internal teams hold their analytics colleagues to a higher standard than the external customers. We’re trying to change how people do their roles. People feel very passionate about the roles they do, and how they do them, and what they bring to that role. We’re trying to build some of that into products. It requires probably more design consideration than I’d anticipated, and we’re still bringing in more designers to help us move closer to the start line.” - Andy Sutton (19:25)
- “[Customer research] is becoming critical in terms of the products we’re building. You’re building a product, a set of products, or a process for an operations team. In our context, an operations team can mean a team of people who run a pub. It’s not just about convincing me, my product managers, or my data scientists that you need research; we want to take some of the resources out of running that bar for a period of time because we want to spend time with [the pub staff] watching, understanding, and researching. We’ve learned some of these things along the way… we’ve earned the trust, we’ve earned that seat at the table, and so we can have those conversations. It’s not trivial to get people to say, ‘I’ll give you a day-long workshop, or give you my team off of running a restaurant and a bar for the day so that they can spend time with you, and so you can understand our processes.’” - Andy Sutton (24:42)
- “I think what is very particular to pubs is the importance of the interaction between the customer and the person serving the customer. [Pubs] are about the connections between the staff and the customer, and you don’t get any of that if you’re just looking at things from a pure data perspective… You don’t see the [relationships between pub staff and customer] in the [data], so how do you capture some of that in your product? It’s about understanding the context of the data, not just the data itself.” - Andy Sutton (28:15)
- “Every winery, every wine grower, every wine has got a story. These conversations [and relationships] are almost natural in our business. Our CEO started work on the shop floor in one of our stores 30 years ago. That kind of relationship stuff percolates through the organization. Having these conversations around the customer and internal stakeholders in the context of data feels a lot easier because storytelling and relationships are the way we get things done. An analytics team may get frustrated with people who can’t understand data, but it’s [the analytics team’s job] to help bridge that gap.” - Andy Sutton (32:34)
Links Referenced
- LinkedIn: https://www.linkedin.com/in/andysutton/
- Endeavour Group: https://www.endeavourgroup.com.au/
- Data Product Leadership Community: https://designingforanalytics.com/community
Tuesday Dec 10, 2024
After getting started in construction management, Anna Jacobson traded in the hard hat for the world of data products and operations at a VC company. Anna, who has a structural engineering undergrad and a master’s in data science, is also a Founding Member of the Data Product Leadership Community (DPLC). However, her work with data products is more “accidental” and is just part of her responsibility at Operator Collective. Nonetheless, Anna had a lot to share about building data products, dashboards, and insights for users—including resistant ones!
That resistance is precisely what I wanted to talk to her about in this episode: how does Anna get somebody to adopt a data product to which they may be apathetic, if not completely resistant?
At the end of the episode, Anna gives us a sneak peek at what she’s planning to talk about in our final 2024 live DPLC group discussion coming up on 12/18/2024.
We covered:
- (1:17) Anna's background and how she got involved with data products
- (3:32) The ways Anna applied her experiences working in construction management to her current work with data products at a VC firm
- (5:32) Explaining one of the main data products she works on at Operator Collective
- (9:55) How Anna defines success for her data products
- (15:21) The process of designing data products for "non-believers"
- (21:08) How to think about "super users" and their feedback on a data product
- (27:11) How a company's cultural problems can be a blocker for product adoption
- (38:21) A preview of what you can expect from Anna's talk and live group discussion in the DPLC
- (40:24) Closing thoughts from Anna
- (42:54) Where you can find more from Anna
Quotes from Today’s Episode
- “People working with data products are always thinking about how to [gain user adoption of their product]... I can’t think of a single one where [all users] were immediately on board. There’s a lot to unpack in what it takes to get non-believers on board, and it’s something that none of us ever get any training on. You just learn through experience, and it’s not something that most people took a class on in college. All of the social science around what we do gets really passed over for all the technical stuff. It takes thinking through and understanding where different [users] are coming from, and [understanding] that my perspective alone is not enough to make it happen.” - Anna Jacobson (16:00)
- “If you only bring together the super users and don’t try to get feedback from the average user, you are missing the perspective of the person who isn’t passionate about the product. A non-believer is someone who is just over capacity. They may be very hard-working, they may be very smart, but they just don’t have the bandwidth for new things. That’s something that has to be overcome when you’re putting a new product into place.” - Anna Jacobson (22:35)
- “If a company can’t find budget to support [a data product], that’s a cultural decision. It’s not a financial decision. They find the money for the things that they care about. Solving the technology challenge is pretty easy, but you have to have a company that’s motivated to do that. If you want to implement something new, be it a data product or any change in an organization, identifying the cultural barriers and figuring out how to bring [people in an organization] on board is the crux of it. The money and the technology can be found.” - Anna Jacobson (27:58)
- “I think people are actually very bad at explaining what they want, and asking people what they want is not helpful. If you ask people what they want to do, then I think you have a shot at being able to build a product that does [what they want]. The executive sponsors typically have a very different perspective on what the product [should be] than the users do. If all of your information is getting filtered through the executive sponsor, you’re probably not getting the full picture.” - Anna Jacobson (31:45)
- “You want to define what the opportunity is, the problem, the solution, and you want to talk about costs and benefits. You want to align [the data product] with corporate strategy, and those things are fairly easy to map out. But as you get down to the user, what they want to know is, ‘How is this going to make my life easier? How is this going to make [my job] faster? How is it going to result in better outcomes?’ They may have an interest in how it aligns with corporate strategy, but that’s not what’s going to motivate them. It’s really just easier, faster, better.” - Anna Jacobson (35:00)
Links Referenced
LinkedIn: https://www.linkedin.com/in/anna-ching-jacobson/
DPLC (Data Product Leadership Community): https://designingforanalytics.com/community
Tuesday Nov 26, 2024
R&D for materials-based products can be expensive because improving a product’s materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g. strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today there is a way to leverage machine learning and AI to reduce the number of experiments a materials scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab—away from a computer screen—so how do you design a desirable informatics SaaS that actually works and fits into the workflow of these end users?
As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge and how his PM, UX, and data science teams work together to produce a SaaS product so valuable that materials scientists depend on it to keep their R&D efforts time- and cost-efficient.
We covered:
- (0:45) Explaining what Ori does at MaterialsZone and who their product serves
- (2:28) How Ori and his team help make materials science testing more efficient through their SaaS product
- (9:37) How they design a UX that can work across various scientific domains
- (14:08) How “doing product” at MaterialsZone matured over the past five years
- (17:01) Explaining the "Wizard of Oz" product development technique
- (21:09) The importance of integrating UX designers into the "Wizard of Oz"
- (23:52) The challenges MaterialsZone faces when trying to get users to adopt their product
- (32:42) Advice Ori would've given himself five years ago
- (33:53) Where you can find more from MaterialsZone and Ori
Quotes from Today’s Episode
- “The fascinating thing about materials science is that you have this variety of domains, but all of these things follow the same process. One of the problems [consumer goods companies] face is that they have to do lengthy testing of their products. This is something you can use machine learning to shorten. [Product research] is an iterative process that typically takes a long time. Using your data effectively and using machine learning to predict what can happen, what’s better to try out, and what will reduce costs can accelerate time to market.” - Ori Yudilevich (3:47)
- “The difference [in time spent testing a product] can be up to 70% [i.e. you can run 70% fewer experiments using ML.] That [also] means 70% less resources you’re using. Under the ‘old system’ of trial and error, you were just trying out a lot of things. The human mind cannot process a large number of parameters at once, so [a materials scientist] would just start playing only with [one parameter at a time]. You’ll have many experiments where you just try to optimize [for] one parameter, but then you might have 20, 30, or 100 more [to test]. Using machine learning, you can change a lot of parameters at once. The model can learn what has the most effect, what has a positive effect, and what has a negative effect. The differences can be really huge.” - Ori Yudilevich (5:50)
- “Once you go deeper into a use case, you see that there are a lot of differences. The types of raw materials, the data structure, the quantity of data, etc. For example, with batteries, you have lots of data because you can test hundreds all at once. Whereas with something like ceramics, you don’t try so many [experiments]. You just can’t. It’s much slower. You can’t do so many [experiments] in parallel. You have much less data. Your models are different, and your data structure is different. But there’s also quite a lot of commonality because you’re storing the data. In the end, you have each domain, some raw materials, formulations, tests that you’re doing, and different statistical plots that are very common.” - Ori Yudilevich (11:24)
- “We’ll typically do what we call the ‘Wizard of Oz’ technique. You simulate as if you have a feature, but you’re actually working for your client behind the scenes. You tell them [the simulated feature] is what you’re doing, but then measure [the client’s response] to understand if there’s any point in further developing that feature. Once you validate it, have enough data, and know where the feature is going, then you’ll start designing it and releasing it in incremental stages. We’ve made a lot of progress in how we discover opportunities and how we build something iteratively to make sure that we’re always going in the right direction.” - Ori Yudilevich (15:56)
- “The main problem we’re encountering is changing the mindset of users. Our users are not people who sit in front of a computer. These are researchers who work in [a materials science] lab. The challenge [we have] is getting people to use the platform more. To see it’s worth [their time] to look at some insights, and run the machine learning models. We’re always looking for ways to make that transition faster… and I think the key is making [the user experience] just fun, easy, and intuitive.” - Ori Yudilevich (24:17)
- “Even if you make [the user experience] extremely smooth, if [users] don’t see what they get out of it, they’re still not going to [adopt your product] just for the sake of doing it. What we find is if this [product] can actually make them work faster or develop better products– that gets them interested. If you’re adopting these advanced tools, it makes you a better researcher and worker. People who [adopt those tools] grow faster. They become leaders in their team, and they slowly drag the others in.” - Ori Yudilevich (26:55)
- “Some of [MaterialsZone’s] most valuable employees are the people who have been users. Our product manager is a materials scientist. I’m not a material scientist, and it’s hard to imagine being that person in the lab. What I think is correct turns out to be completely wrong because I just don’t know what it’s like. Having [material scientists] who’ve made the transition to software and data science? You can’t replace that.” - Ori Yudilevich (31:32)
Links Referenced
Website: https://www.materials.zone
LinkedIn: https://www.linkedin.com/in/oriyudilevich/
Email: ori@materials.zone
Thursday Nov 14, 2024
Jeremy Forman joins us to open up about the hurdles (and successes) that come with building data products for pharmaceutical companies. Although he’s new to Pfizer, Jeremy has years of experience leading data teams at organizations like Seagen and the Bill & Melinda Gates Foundation. He currently serves in a more specialized role in Pfizer’s R&D department, building AI and analytical data products for scientists and researchers.
Jeremy gave us a good look at his team makeup, and in particular, how his data product analysts and UX designers work with pharmaceutical scientists and domain experts to build data-driven solutions. We talked a good deal about how and when UX design plays a role in Pfizer’s data products, including a GenAI-based application they recently launched internally.
Highlights/ Skip to:
- (1:26) Jeremy's background in analytics and transition into working for Pfizer
- (2:42) Building an effective AI analytics and data team for pharma R&D
- (5:20) How Pfizer finds data product managers
- (8:03) Jeremy's philosophy behind building data products and how he adapts it to Pfizer
- (12:32) The moment Jeremy heard a Pfizer end-user use product management research language and why it mattered
- (13:55) How Jeremy's technical team members work with UX designers
- (18:00) The challenges that come with producing data products in the medical field
- (23:02) How to justify spending the budget on UX design for data products
- (24:59) The results we've seen having UX design work on AI / GenAI products
- (25:53) What Jeremy learned at the Bill & Melinda Gates Foundation with regards to UX and its impact on him now
- (28:22) Managing the "rough dance" between data science and UX
- (33:22) Breaking down Jeremy's GenAI application demo from CDIOQ
- (36:02) What would Jeremy prioritize right now if his team got additional funding
- (38:48) Advice Jeremy would have given himself 10 years ago
- (40:46) Where you can find more from Jeremy
Quotes from Today’s Episode
- “We have stream-aligned squads focused on specific areas such as regulatory, safety and quality, or oncology research. That’s so we can create functional career pathing and limit context switching and fragmentation. They can become experts in their particular area and build a culture within that small team. It’s difficult to build good [pharma] data products. You need to understand the domain you’re supporting. You can’t take somebody with a financial background and put them in an Omics situation. It just doesn’t work. And we have a lot of the scars, and the failures to prove that.” - Jeremy Forman (4:12)
- “You have to have the product mindset to deliver the value and the promise of AI data analytics. I think small, independent, autonomous, empowered squads with a product leader is the only way that you can iterate fast enough with [pharma data products].” - Jeremy Forman (8:46)
- “The biggest challenge is when we say data products. It means a lot of different things to a lot of different people, and it’s difficult to articulate what a data product is. Is it a view in a database? Is it a table? Is it a query? We’re all talking about it in different terms, and nobody’s actually delivering data products.” - Jeremy Forman (10:53)
- “I think when we’re talking about [data products] there’s some type of data asset that has value to an end-user, versus a report or an algorithm. I think it’s even hard for UX people to really understand how to think about an actual data product. I think it’s hard for people to conceptualize, how do we do design around that? It’s one of the areas I think I’ve seen the biggest challenges, and I think some of the areas we’ve learned the most. If you build a data product, it’s not accurate, and people are getting results that are incomplete… people will abandon it quickly.” - Jeremy Forman (15:56)
- “I think that UX design and AI development or data science work is a magical partnership, but they often don’t know how to work with each other. That’s been a challenge, but I think investing in that has been critical to us. Even though we’ve had struggles… I think we’ve also done a good job of understanding the [user] experience and impact that we want to have. The prototype we shared [at CDIOQ] is driven by user experience and trying to get information in the hands of the research organization to understand some portfolio types of decisions that have been made in the past. And it’s been really successful.” - Jeremy Forman (24:59)
- “If you’re having technology conversations with your business users, and you’re focused only on the technology output, you’re just building reports. [After we adopted a human-centered design approach], it was talking [with end-users] about outcomes, value, and adoption. Having that resource transformed the conversation, and I felt like our quality went up. I felt like our output went down, but our impact went up. [End-users] loved the tools, and that wasn’t what was happening before… I credit a lot of that to the human-centered design team.” - Jeremy Forman (26:39)
- “When you’re thinking about automation through machine learning or building algorithms for [clinical trial analysis], it becomes a harder dance between data scientists and human-centered design. I think there’s a lack of appreciation and understanding of what UX can do. Human-centered design is an empathy-driven understanding of users’ experience, their work, their workflow, and the challenges they have. I don’t think there’s an appreciation of that skill set.” - Jeremy Forman (29:20)
- “Are people excited about it? Is there value? Are we hearing positive things? Do they want us to continue? That’s really how I’ve been judging success. Is it saving people time, and do they want to continue to use it? They want to continue to invest in it. They want to take their time as end-users, to help with testing, helping to refine it. Those are the indicators. We’re not generating revenue, so what does the adoption look like? Are people excited about it? Are they telling friends? Do they want more? When I hear that the ten people [who were initial users] are happy and that they think it should be rolled out to the whole broader audience, I think that’s a good sign.” - Jeremy Forman (35:19)
Links Referenced
LinkedIn: https://www.linkedin.com/in/jeremy-forman-6b982710/
Tuesday Oct 29, 2024
The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today’s episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences.
While professionals and the general public are eager to embrace new products and recent breakthroughs, we still need to have some guardrails in place. If we don’t, data can easily get mishandled, and people could get hurt. Ovetta possesses firsthand experience working on these issues as they sprout up. We look at who should be on a team designing an AI UX, the risks associated with GenAI, the ethics involved, and what we need to be thinking about going forward.
Highlights/ Skip to:
- (1:48) Ovetta's background and what she brings to Google’s Core ML group
- (6:03) How Ovetta and her team work with data scientists and engineers deep in the stack
- (9:09) How AI is changing the front-end of applications
- (12:46) The type of people you should seek out to design your AI and LLM UXs
- (16:15) Explaining why we’re only at the very start of major GenAI breakthroughs
- (22:34) How GenAI tools will alter the roles and responsibilities of designers, developers, and product teams
- (31:11) The potential harms of carelessly deploying GenAI technology
- (42:09) Defining acceptable levels of risk when using GenAI in real-world applications
- (53:16) Closing thoughts from Ovetta and where you can find her
Quotes from Today’s Episode
- “If artificial intelligence is just another technology, why would we build entire policies and frameworks around it? The reason why we do that is because we realize there are some real thorny ethical issues [surrounding AI]. Who owns that data? Where does it come from? Data is created by people, and all people create data. That’s why companies have strong legal, compliance, and regulatory policies around [AI], how it’s built, and how it engages with people. Think about having a toddler and then training the toddler on everything in the Library of Congress and on the internet. Do you release that toddler into the world without guardrails? Probably not.” - Ovetta Sampson (10:03)
- “[When building a team] you should look for a diverse thinker who focuses on the limitations of this technology- not its capability. You need someone who understands that the end destination of that technology is an engagement with a human being. You need somebody who understands how they engage with machines and digital products. You need that person to be passionate about testing various ways that relationships can evolve. When we go from execution on code to machine learning, we make a shift from [human] agency to a shared-agency relationship. The user and machine both have decision-making power. That’s the paradigm shift that [designers] need to understand. You want somebody who can keep that duality in their head as they’re testing product design.” - Ovetta Sampson (13:45)
- “We’re in for a huge taxonomy change. There are words that have very specific definitions today. Software engineer. Designer. Technically skilled. Digital. Art. Craft. AI is changing all that. It’s changing what it means to be a software engineer. Machine learning used to be the purview of data scientists only, but with GenAI, all of that is baked into Gemini. So, now you start at a checkpoint, and you’re like, all right, let’s go make an API, right? So, the skills, the understanding, the knowledge, the taxonomy even, how we talk about these things: how do we talk about the machine that speaks to us, that could create a podcast out of just voice memos?” - Ovetta Sampson (24:16)
- “We have to be very intentional [when building AI tools], and that’s the kind of folks you want on teams. [Designers] have to go and play scary scenarios. We have to do that. No designer wants to be “Negative Nancy,” but this technology has huge potential to harm. It has harmed. If we don’t have the skill sets to recognize, document, and minimize harm, that needs to be part of our skill set. If we’re not looking out for the humans, then who actually is?” - Ovetta Sampson (32:10)
- “[Research shows] things happen to our brain when we’re exposed to artificial intelligence… there are real human engagement risks that are an opportunity for design. When you’re designing a self-driving car, you can’t just let the person go to sleep unless the car is fully [automated] and every other car on the road is self-driving. If there are humans behind the wheel, you need to have a feedback loop system—something that’s going to happen [in case] the algorithm is wrong. If you don’t have that designed, there’s going to be a large human engagement risk that a car is going to run over somebody who’s [for example] pushing a bike up a hill[...] Why? The car could not calculate the right speed and pace of a person pushing their bike. It had the speed and pace of a person walking, the speed and pace of a person on a bike, but not the two together. Algorithms will be wrong, right?” - Ovetta Sampson (39:42)
- “Model goodness used to be the purview of companies and the data scientists. Think about the first search engines. Their model goodness was [about] 77%. That’s good, right? And then people started seeing photos of apes when [they] typed in ‘black people.’ Companies have to get used to going to their customers in a wide spectrum and asking them when their [models or apps] are right and wrong. They can’t take on that burden themselves anymore. Having ethically sourced data input and variables is hard work. If you’re going to use this technology, you need to put into place the governance that needs to be there.” - Ovetta Sampson (44:08)
Tuesday Oct 15, 2024
Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SaaS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So, today, I want to share some of the common “themes” I see that usually are the root causes of what leads to a phone call with me.
By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption/renewal worries (user apathy); customer complaints about the UI/UX being hard to use; velocity (the team is doing tons of work, but the leader isn’t seeing progress)—and the like.
I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work!
Highlights/ Skip to:
- (10:47) Design != "front-end development" or analyst work
- (12:34) Liking doing UI/UX/viz design work vs. knowing it
- (15:04) When a leader sees lots of work being done, but the UX/design isn’t progressing
- (17:31) Your product’s UX needs to convey some magic IP/special sauce…but it doesn’t
- (20:25) Understanding the tradeoffs of using libraries, templates, and other solutions’ designs as a foundation for your own
- (25:28) The sunk cost bias associated with POCs and “we’ll iterate on it”
- (28:31) Relying on UI/UX "customization" to please all customers
- (31:26) The hidden costs of abstraction of system objects, UI components, etc. to make life easier for engineering and technical teams
- (32:32) Believing you’ll know the design is good “when you see it” (and what you don’t know you don’t know)
- (36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it’s automatically worth paying for
Quotes from Today’s Episode
- The challenge is often not knowing what you don’t know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that’s not what the objective is. The objective is benefits creation. (5:47)
- When we start doing design on a project that’s not design actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19)
- While you don’t need to always start at zero with your UI/UX design, what are the parts of your product or application where it does make sense to borrow, “steal,” and cheat from others’ designs? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don’t produce outsized results—unless you know what a good shortcut looks like. (22:28)
- A proof of concept is not a minimum valuable product. There’s a difference between proving the tech can work and making it into a product that’s so valuable, someone would exchange money for it because it’s so useful to them. Whatever that value is, these are two different things. (26:40)
- Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact that the team has decided not to make difficult choices. If you’re coming into a crowded space… it’s likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26)
- Watch out for the sunk cost bias [in product development]. [Buyers] don’t care how the sausage was made. Many don’t understand how the AI stuff works, and they probably don’t need to. They want the benefits downstream from the technology, wrapped up in something so invaluable they can’t live without it. Watch out for technically right, effectively wrong. (39:27)
Tuesday Oct 01, 2024
In today’s episode, I’m joined by John Felushko, a product manager at LabStats who impressed me after we recently had a 1:1 call together. John and his team have developed a successful product that helps universities track and optimize their software and hardware usage so schools make smart investments. However, John also shares how culture and value are very tied together—and why their product isn’t a fit for every school, and every country. John shares how important customer relationships are, how his team designs great analytics user experiences, how they do user research, and what he learned making high-end winter sports products that’s relevant to leading a SaaS analytics product. Combined with John’s background in history and the political economy of finance, John paints some very colorful stories about what they’re getting right—and how they’ve course-corrected over the years at LabStats.
Highlights/ Skip to:
- (0:46) What is the LabStats product
- (2:59) Orienting analytics around customer value instead of IT/data
- (5:51) "Producer of Persistently Profitable Product Process"
- (11:22) How they make product adjustments based on previous failures
- (15:55) Why a lack of cultural understanding caused LabStats to fail internationally
- (18:43) Quantifying value beyond dollars and cents
- (25:23) How John is able to work so closely with his customers without barriers
- (30:24) Who makes up the LabStats product research team
- (35:04) How strong customer relationships help inform the UX design process
- (38:29) Getting senior management to accept that you can't regularly and accurately predict when you’ll be feature-complete and ship
- (43:51) Where John learned his skills as a successful product manager
- (47:20) Where you can go to cultivate the non-technical skills to help you become a better SaaS analytics product leader
- (51:00) What advice would John Felushko have given himself 10 years ago?
- (56:19) Where you can find more from John Felushko
Quotes from Today’s Episode
- “The product process is [essentially] really nothing more than the scientific method applied to business. Every product is an experiment - it has a hypothesis about a problem it solves. At LabStats [we have a process] where we go out and clearly articulate the problem. We clearly identify who the customers are, and who are [people at other colleges] having that problem. Incrementally and as inexpensively as possible, [we] test our solutions against those specific customers. The success rate [of testing solutions by cross-referencing with other customers] has been extremely high.” - John Felushko (6:46)
- “One of the failures I see in Americans is that we don’t realize how much culture matters. Americans have this bias to believe that whatever is valuable in my culture is valuable in other cultures. Value is entirely culturally determined and subjective. Value isn’t a number on a spreadsheet. [LabStats positioned our product] as something that helps you save money and be financially efficient. In French government culture, financial efficiency is not a top priority. Spending government money on things like education is seen as a positive good. The more money you can spend on it, the better. So, the whole message of financial efficiency wasn’t going to work in that market.” - John Felushko (16:35)
- “What I’m really selling with data products is confidence. I’m selling assurance. I’m selling an emotion. Before I was a product manager, I spent about ten years in outdoor retail, selling backpacks and boots. What I learned from that is you’re always selling emotion, at every level. If you can articulate the ROI, the real value is that the buyer has confidence they bought the right thing.” - John Felushko (20:29)
- “[LabStats] has three massive, multi-million dollar horror stories in our past where we [spent] millions of dollars in development work for no results. No ROI. Horror stories are what shape people’s values more than anything else. Negative outcomes are what people avoid more than anything else. [It’s important to] tell those stories and perpetuate those [lessons] through the culture of your organization. These are the times we screwed up, and this is what we learned from it—we don’t want to screw up like that again because we learned not to do that.” - John Felushko (38:45)
- “There’s an old description of a product manager, like, ‘Oh, they come across as the smartest person in the room.’ Well, how do you become that person? Expand your view, and expand the amount of information you consume as widely as possible. That’s so important to UX design and thinking about what went wrong. Why are some customers super happy and some customers not? What is the difference between those two groups of people? Is it culture? Is it time? Is it mental ability? Is it the size of the screen they’re looking at my product on? What variables can I define and rule out, and what data sources do I have to answer all those questions? It’s just the normal product manager thing—constant curiosity.” -John Felushko (48:04)
Tuesday Sep 17, 2024
In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SaaS analytics or AI product(s), today I’m going to tell you when you should NOT get help!
Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin.
Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready?
Highlights/ Skip to:
- (1:52) Going for short, easy wins
- (4:29) When you think you have good design sense/taste
- (7:09) The impending changes coming with GenAI
- (11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible
- (15:36) Agile and process FTW?
- (18:59) UX design for and with platform products
- (21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations
- (30:09) Designing after the ML models have been trained—and it’s too late to go back
- (34:59) Not tapping professional design help when your user base is small and you have routine access and exposure to them
- (40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions
Quotes from Today’s Episode
- “It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18)
- “If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there,’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52)
- “Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power – especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs regardless of your stated business outcome goals is a red flag—and a reason not to invest in UX design—at least with them!” (12:28)
- “I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33)
- “Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is greater than whatever the insurance plan costs. If your solution can build feelings of confidence and security, what is that worth? Does “hard to measure precisely” necessarily mean “low value”?” (47:26)
Tuesday Sep 03, 2024
Due to a technical glitch that ended up unpublishing this episode right after it was originally released, Episode 151 is a replay of my conversation with Zalak Trivedi from this past March. Please enjoy our chat if you missed it the first time around!
Thanks,
Brian
Links
Sigma Computing: https://sigmacomputing.com
Email: zalak@sigmacomputing.com
LinkedIn: https://www.linkedin.com/in/trivedizalak/
Sigma Computing Embedded: https://sigmacomputing.com/embedded
About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted