

134.4K Downloads · 167 Episodes
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData.

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes

Tuesday Aug 25, 2020
When you think of Steelcase, their office furniture probably comes to mind. However, Steelcase is much more than just a manufacturer of office equipment. The company enables its customers (workplace/workspace designers) to help those designers’ clients create useful, effective workplaces and offices that are also safe and compliant.
Jorge Lozano is a data science manager at Steelcase and recently participated as a practitioner and guest on an IIA webinar I gave about product design and management being the missing links in many data science and analytics initiatives. I was curious to dig deeper with Jorge about how Steelcase is enabling its customers to adjust workspaces to account for public health guidelines around COVID-19 and employees returning to their physical offices. The data science team was trying to make it easy for its design customers to understand health guidelines around seat density, employee proximity and other relevant metrics so that any workspace designs could be “checked” against public health guidelines.
Figuring out the what, when, and how to present these health guidelines in a digital experience was a journey that Jorge was willing to share.
We covered:
- Why the company was struggling to understand how their [office] products came together, and how the data science group tried to help answer this.
- The digital experience Steelcase is working on to re-shape offices for safe post-pandemic use.
- How Steelcase is evaluating whether its health and safety recommendations are in fact safe and making a difference.
- How Jorge’s team transitioned from delivering “static data science” outputs into providing an enabling capability to the business.
- What Steelcase did to help dealer designers when engaging with customers, in order to help them explain the health risks associated with their current office layouts and plans.
- What it was like for Jorge’s team to work with a product manager and UX designer, and how it improved the process of making the workspace health guidelines useful.
Resources and Links:
- Steelcase: https://www.steelcase.com/
- LinkedIn: https://www.linkedin.com/in/jorge-lozano-flores/
Quotes from Today’s Episode
“We really pride ourselves in research-based design” - Jorge
“This [source data from design software] really enabled us to make very specific metrics to understand the current state of the North American office.” - Jorge
“Using the data that we collected, we came up with samples of workstations that are representative of what our customers are more likely to have. We retrofitted them, and then we put the retrofitted desk in the lab that basically simulates the sneeze of a person, or somebody coughing, or somebody kind of spitting a little bit while they're talking, and all of that. And we're collecting some really amazing insights that can quantify the extent to which certain retrofits work in disease transmission.” - Jorge
“I think one of the challenges is that, especially when you're dealing with a software design solution that involves probabilities, someone has to be the line-drawer.” - Brian
“The challenge right now is how to set up a system where we can swarm at things faster, where we're more efficient at understanding the needs and [are able to get] it in the hands of the right people to make those important decisions fast? It's all pointing towards data science as an enabling capability. It's a team sport.” - Jorge

Tuesday Aug 11, 2020
Healthcare professionals need access to decision support tools that deliver the right information at the right time. In a busy healthcare facility, where countless decisions are made on a daily basis, it is crucial that any analytical tools provided actually yield useful decision support to the target customer.

In this episode, I talked to Karl Hightower from Novant Health about how he and his team define “quality” when it comes to data products, and what they do to meet that definition in their daily work. Karl Hightower is the Chief Data Officer and SVP of Data Products at Novant Health, a busy hospital and medical group in the Southeast United States with over 16 hospitals and more than 600 clinics.

Karl and I took a deep dive into data product management, and how Karl and his team are designing products and services that help empower all of the organization’s decision makers. In our chat, we covered:
- How a non-tech company like Novant Health approaches data product management
- The challenges of designing data products with empathy in mind while being in an environment involving physicians and healthcare professionals
- The metric Karl’s team uses to judge the quality and efficacy of their data products, and how executive management contributed to defining these success criteria
- How Karl encourages deep empathy between analytics teams and their users by investigating how the users served by the team make decisions with data
- How and why Novant embraces design and UX in their data product work
- The types of outcomes Karl sees when designers and user experience professionals work with analytics and data science practitioners.
- How Karl was able to obtain end-user buy-in and support
- The strategy Karl used to deal with a multitude of “information silos” resulting from the company’s numerous analytics groups.
Resources and Links:
- Novant Health website: https://www.novanthealth.org/
- Novant Health LinkedIn: https://www.linkedin.com/company/novanthealth/
- Karl Hightower LinkedIn: https://www.linkedin.com/in/karl-hightower-4528123/
Quotes from Today’s Episode
“I tend to think of product management as a core role along with a technical lead and product designer in the software industry. Outside the software industry, I feel like product management is often this missing hub.” - Brian
“I really want to understand why the person is asking for what they're asking for, so there is much more of a closer relationship between that portfolio team and their end-user community that they're working with. It's almost a day-to-day living and breathing with and understanding not just what they're asking for and why are they asking for it, but you need to understand how they use information to make decisions.” - Karl
“I think empathy can sound kind of hand-wavy at times. Soft and fluffy, like whipped cream. However, more and more at senior levels, I am hearing how much leaders feel these skills are important because the technology can be technically right and effectively wrong.” - Brian
“The decision that we got to on executive governance was how are we going to judge success criteria? How do we know that we're delivering the right products and that we're getting better on the maturity scale? And the metric is actually really simple. Ask the people that we're delivering for, does this give you what you need when you need it to make those decisions?” - Karl
“The number one principle is, if I don't know how something is done [created with data], I'm very unlikely to trust it. And as you look at just the nature of healthcare, transparency absolutely has to be there because we want the clinicians to poke holes in it, and we want everyone to be able to trust it. So, we are very open. We are very transparent with everything that goes in it.” - Karl
“You need to really understand the why. You’ve got to understand what business decisions are being made, what's driving the strategy of the people who are asking for all that information.” - Karl

Tuesday Jul 28, 2020
If there’s one thing that strikes fear into the heart of every business executive, it’s having your company become the next Blockbuster or Neiman Marcus — that is, ignoring change, and getting wiped out by digital competitors. In this episode, I dived into the changing business landscape with Karim Lakhani who is a Professor at Harvard Business School and co-author of the new book Competing in the Age of AI: When Algorithms and Networks Run the World, which he wrote with his friend and colleague at HBS, Marco Iansiti.
We discuss how AI, machine learning, and digital operating models are changing business architecture, and disrupting traditional business models. I also pressed Karim to go a bit deeper on how, and whether, he thinks product mindset and design factor into the success of AI in today’s businesses. We also go off on a fun tangent about the music industry, which just might have to be a future episode! In any case, I highly recommend the book. It’s particularly practical for those of you working in organizations that are not digital natives and want to hear how the featured companies in the book are setting themselves apart by leveraging data and AI in customer-facing products and in internal applications/operations. Our conversation covers:
- Karim’s new book, Competing in the Age of AI: When Algorithms and Networks Run the World, co-authored with Marco Iansiti.
- How digital operating models are colliding with traditional product-oriented businesses, and the impact this is having on today’s organizations.
- The critical role of data product management that is frequently missing when companies try to leverage AI
- Karim’s thoughts on ethics in AI and machine learning systems, and how they need to be baked into business and engineering.
- The similarity Karim sees between COVID-19 and AI
- The role of design, particularly in human-in-the-loop systems and how companies need to consider the human experience in applications of AI that augment decision making vs. automate it.
- How Karim sees the ability to adapt in business as being critical to survival in the age of AI
Resources and Links
- Book Link: https://www.amazon.com/Competing-Age-AI-Leadership-Algorithms/dp/1633697622/
- Twitter: https://twitter.com/klakhani
- LinkedIn: https://www.linkedin.com/in/professorkl/
- Harvard Business Analytics Program: https://analytics.hbs.edu/
Quotes from Today’s Episode
“Our thesis in the book is that a new type of an organization is emerging, which has eliminated bottlenecks in old processes.” - Karim
“Digital operating models have exponential scaling properties, in terms of the value they generate, versus traditional companies that have value curves that basically flatten out, and have fixed capacity. Over time, these digital operating models collide with these traditional product models, win over customers, and gather huge amounts of market share….” - Karim
“This whole question about human-in-the-loop is important, and it's not going to go away, but we need to start thinking about, well, how good are the humans, anyway? - Karim
“Somebody once said, “Ethics defines the boundaries of what you care about.” And I think that's a really important question…” - Brian
“Non-digital natives worry about these tech companies coming around and eating them up, and I can’t help but wonder ‘why aren't you also copying the way they design and build software?’” - Brian
“...These established companies have a tough time with the change process.” - Karim

Tuesday Jul 14, 2020
I am a firm believer that one of the reasons that data science and analytics has a high failure rate is a lack of product management and design. To me, product is about a mindset just as much as a job title, and I am repeatedly hearing how more and more voices in the data community are agreeing with me on this (Gartner CDO v4, International Inst. for Analytics, several O’Reilly authors, Karim Lakhani’s new book on AI, and others). This is even more true as more companies begin to leverage AI. So many of these companies fear what startups and software companies are doing, yet they do not copy the way tech companies build software applications and enable specific user experiences that unlock the desired business value.
Integral to building software is the product management function—and when these applications and tools have humans in the loop, the product/UX design function is equally as important to ensure adoption, usability, engagement, and alignment with the business objectives.
In modern tech companies, the overlap between product design and product management can be significant; product leaders frequently come up through both the design and engineering ranks, and indeed my own work heavily overlaps with product. What this tells me is that product is a mindset, and it’s a role many can learn if they believe it’s critical.
So why aren’t more data science and analytics leaders forming strong product design and analytics functions? I don’t know, so I decided to bring Carlos onto the show to talk about his company, Product School, which offers product management training taught by instructors from many of the big tech companies. In this episode, Carlos provides a comprehensive overview of why he launched Product School, what makes an effective product manager, and the importance of having structured vision and alignment when developing products.
This conversation explores:
- Why Carlos launched the Product School for professionals who want to learn on the side without quitting their job and putting their life on hold.
- The type of mentality product managers need to have and whether specialization matters within product management.
- Whether being a product manager in machine learning and AI is different than working with a traditional software product.
- How product management is not project management
- Advice for approaching executive decision makers about product management education
- How to avoid the trap of focusing too heavily on process
- How product management often leads to executive leadership roles
- The “power trio” of engineering, product management, and design, and the value of aligning all three groups.
- Understanding the difference between applied and academic experience
- How the relationship between design and PM has changed over the last five years
- What the gap looks like between a skilled PM and an exceptional one.
Resources and Links
- The State of Product Analytics (also referred to as The Future of Product Analytics in the audio)
- Mixpanel, the company they partnered with to create the above report
- Episode 17 of Experiencing Data
Quotes from Today’s Episode
“You can become a product manager by building products. You don't need to be a software engineer. You don’t need to have an MBA. You don't need to be an incredible, inspiring visionary. This is stuff that you can learn, and the best way to learn it is by doing it.” - Carlos
“A product manager is a generalist. And in order to become a generalist, usually you have to have some sort of [specialty] before. So, we define product management as the intersection in between business, engineering, and design. And you can become a good product manager from either of those options.” - Carlos
“If you have [a power trio of technology, product, and design] and the energy is right, and the relationships are really strong, boy, you can get a lot of stuff done, and you can iterate quickly, and really produce some great stuff.” - Brian
“I think part of the product management mindset... is to realize part of your job now is to be a problem finder, it’s to help set the strategy, it's to help ensure that a model is not the solution.” - Brian
“I think about a bicycle wheel with the hub in the center and the spokes coming out. Product management is that hub, and it reports up into the business, but you have all these different spokes, QA, and software engineering, maybe data science and analytics, product design, and user experience design. These are all kind of spokes.” - Brian
“These are people who are constantly learning, but not just about their products. They’re constantly learning in general. Reading books, practicing sports, doing whatever it is, but always looking at what's new and wanting to play around with it, just to be dangerous enough. So, I think those three areas: obsession with a customer based on data; obsession with empathy; and then obsession with learning, or just being curious are really critical.” - Carlos

Tuesday Jun 30, 2020
“What happened in Minneapolis and Louisville and Chicago and countless other cities across the United States is unconscionable (and to be clear, racist). But what makes me the maddest is how easy this problem is to solve, just by the police deciding it’s a thing they want to solve.” - Allison Weil on Medium

Before Allison Weil became an investor and Senior Associate at Hyde Park Ventures, she was a co-founder at Flag Analytics, an early intervention system for police departments designed to help identify officers at risk of committing harm. Unfortunately, Flag Analytics—as a business—was set up for failure from the start, regardless of its predictive capability. As Allison explains so candidly and openly in her recent Medium article (thanks Allison!), the company had “poor product-market fit, a poor problem-market fit, and a poor founder-market fit.” The technology was not the problem, and as a result, it did not help them succeed as a business or in producing the desired behavior change, because the customers were not ready to act on the insights.

Yet the key takeaways from her team’s research during the design and validation of their product—and the uncomfortable truths they uncovered—are extremely valuable, especially now as we attempt to understand why racial injustice and police brutality continue to persist in law enforcement agencies. As it turns out, simply having the data to support a decision doesn’t mean the decision will be made using the data. This is what Allison found out in her interactions with several police chiefs and departments, and it’s also what we discussed in this episode. I asked Allison to go deeper into her Medium article, and she agreed. Together, we covered:
- How Allison and a group of researchers tried to streamline the identification of urban police officers at risk of misconduct or harm using machine learning.
- Allison’s experience of trying to build a company and program to solve a critical societal issue, and dealing with police departments that weren’t ready to take action on the analytical insights her product revealed
- How she went about creating a “single pane of glass,” where officers could monitor known problem officers and also discover officers who may be in danger of committing harm.
- The barriers that prevented the project from being a success, from financial ones to a general unwillingness among certain departments to take remedial action against officers despite historical or predicted data
- The key factors and predictors Allison’s team found in the data set of thousands of officers that correlated highly with poor officer behavior in the future—and how it seemed to fall on deaf ears
- How Allison and her team approached the sensitive issue of race in the data, and a [perhaps unexpected] finding they discovered about how prevalent racism seemed to be in departments in general.
- Allison’s experience of conducting “ride-alongs” (qualitative 1x1 research) where she went on patrol with officers to observe their work and how the experience influenced how her team designed the product and influenced her perspective while analyzing the police officer data set.
Resources and Links:
Quotes from Today’s Episode
“The folks at the police departments that we were working with said they were well-intentioned, and said that they wanted to talk through, and fix the problem, but when it came to their actions, it didn't seem like [they were] really willing to make the choices that they needed to make based off of what the data said, and based off of what they knew already.” - Allison
“I don't come from a policing background, and neither did any of my co-founders. And that made it really difficult to relate to different officers, and relate to departments. And so the combination of all of those things really didn't set me up for a whole lot of business success in that way.” - Allison
“You can take a whole lot of data and do a bunch of analysis, but what I saw was the data didn't show anything that the police department didn't know already. It amplified some of what they knew, but [the problem here] wasn't about the data.” - Allison
“It was really frustrating for me, as a founder, sure, because I was putting all this energy into trying to build a software and trying to build a company, but also just frustrating for me as a person and a citizen… you fundamentally want to solve a problem, or help a community solve a problem, and realize that the people at the center of it just aren't ready for it to be solved.” - Allison
“...We did have race data, but race was not the primary predictor or reason for [brutality]. It may have been a factor, but it was not that there were racist cops wandering around, using force only against people of particular races. What we found was….” - Allison
“The way complaints are filed department to department is really, really different. And so that results in complaints looking really, really different from department to department and counts looking different. But how many are actually reviewed and sustained? And that looks really, really different department to department.” - Allison
“...Part of [diversity] is asking the questions you don't know to ask. And that's part of what you get out of having a diverse team—they're going to surface questions that no one else is asking about. And then you can have the discussion about what to do about them.” - Brian

Tuesday Jun 16, 2020
The job of many internally-facing data scientists in business settings is to discover, explore, interpret, and share data, turning it into actionable insight that can benefit the company and improve outcomes. Yet, data science teams often struggle with the very basic question of how the company’s data assets can best serve the organization. Problem statements are often vague, leading to data outputs that don’t turn into value or actionable decision support in the last mile.
This is where Martin Szugat and his team at Datentreiber step in, helping clients to develop and implement successful data strategy through hands-on workshops and training. Martin is based in Germany and specializes in helping teams learn to identify specific challenges data can solve, and think through the problem solving process with a human focus. This in turn helps teams to select the right technology and be objective about whether they need advanced tools such as ML/AI, or something more simple to produce value.
In our chat, we covered:
- How Datentreiber helps clients understand and derive value from their data — identifying assets, and determining relevant use cases.
- An example of how one client changed not only its core business model, but also its culture by working with Datentreiber, transitioning from a data-driven perspective to a user-driven perspective.
- Martin’s strategy of starting with small analytics projects, and slowly gaining buy-in from end users, with a special example around social media analytics that led to greater acceptance and understanding among team members.
- The canvas tools Martin likes to use to visualize abstract concepts related to data strategy, data products, and data analysis.
- Why it helps to mix team members from different departments like marketing, sales, and IT and how Martin goes about doing that
- How cultural differences can impact design thinking, collaboration, and visualization processes.
Resources and Links:
- Company site (German) (English machine translation)
- Datentreiber Open-Source Design Tools
- Data Strategy Design (German) (English machine translation)
- Martin’s LinkedIn
Quotes from Today’s Episode
“Often, [clients] already have this feeling that they're on the wrong path, but they can't articulate it. They can't name the reason why they think they are on the wrong path. They learn that they built this shiny dashboard or whatever, but the people—their users, their colleagues—don't use this dashboard, and then they learn something is wrong.” - Martin
“I usually like to call this technically right and effectively wrong solutions. So, you did all the pipelining and engineering and all that stuff is just fine, but it didn't produce a meaningful outcome for the person that it was supposed to satisfy with some kind of decision support.” - Brian
“A simple solution is becoming a trainee in other departments. So, ask, for example, the marketing department to spend a day, or a week and help them do their work. And just look over the shoulder, what they are doing, and really try to understand what they are doing, and why they are doing it, and how they are doing it. And then, come up with solution proposals.” - Martin
“...I tend to think of design as a team sport, and it's a lot about facilitating groups of these different cross-departmental groups of arriving at a solution for a particular audience; a specific audience that needs a specific problem solved.” - Brian
“[One client said] we are very good at implementing the right solutions for the wrong problems. And I think this is what often happens in data science, or business intelligence, or whatever, also in IT departments: that they are too quick in starting thinking about the solution before they understand the problem.” - Martin
“If people don't understand what you're doing or what your analytic solution is doing, they won't use it and there will be no acceptance.” - Martin
“One thing we practice a lot, [...] is in visualizing those abstract things like data strategy, data product, and analytics. So, we work a lot with canvas tools because we learned that if you show people—and it doesn't matter if it's just on a sticky note on a canvas—then people start realizing it, they start thinking about it, and they start asking the right questions and discussing the right things. ” - Martin

Tuesday Jun 02, 2020
Innovation doesn’t just happen out of thin air. It requires a conscious effort and team-wide collaboration. At the same time, innovation will be critical for NASA if the organization hopes to remain competitive and successful in the coming years.

Enter Steve Rader. Steve has spent the last 31 years at NASA, working in a variety of roles including flight control under the legendary Gene Kranz, software development, and communications architecture. A few years ago, Steve was named Deputy Director for the Center of Excellence for Collaborative Innovation. As Deputy Director, Steve is spearheading the use of open innovation as well as diversity of thought. In doing so, Steve is helping the organization find more effective ways of approaching and solving problems.

In this fascinating discussion, Steve and Brian discuss design, divergent thinking, and open innovation, plus:
- Why Steve decided to shift away from hands-on engineering and management to the emerging field of open innovation, and why NASA needs this as well as diversity in order to remain competitive.
- The challenge of convincing leadership that diversity of thought matters, and why the idea of innovation often receives pushback.
- How NASA is starting to make room for diversity of thought, and leveraging open innovation to solve challenges and bring new ideas forward.
- Examples of how experts from unrelated fields help discover breakthroughs to complex (and greasy) problems, such as potato chips!
- How the rate of technological change is different today, why innovation is more important than ever, and how crowdsourcing can help streamline problem solving.
- Steve’s thoughts on the type of leader that’s needed to drive diversity at scale, and why that person should be a generalist
- Prioritizing outcomes over outputs, defining problems, and determining what success looks like early on in a project.
- The metrics a team can use to measure whether one is “doing innovation.”
Resources and Links
- Designingforanalytics.com/theseminar
- Steve Rader’s LinkedIn: https://www.linkedin.com/in/steve-rader-92b7754/
- NASA Solve: nasa.gov/solve
- Steve Rader’s Twitter: https://twitter.com/SteveRader
- NASA Solve Twitter: https://twitter.com/NASAsolve
Quotes from Today’s Episode
“The big benefit you get from open innovation is that it brings diversity into the equation […] and forms this collaborative effort that is actually really, really effective.” – Steve
“When you start talking about innovation, the first thing that almost everyone does is what I call the innovation eye-roll. Because management always likes to bring up that we’re innovative or we need innovation. And it just sounds so hand-wavy, like you say. And in a lot of organizations, it gets lots of lip service, but almost no funding, almost no support. In most organizations, including NASA, you’re trying to get something out the door that pays the bills. Ours isn’t to pay the bills, but it’s to make Congress happy. And, when you’re doing that, that is a really hard, rough space for innovation.” – Steve
“We’ve run challenges where we’re trying to improve a solar flare algorithm, and we’ve got, like, a two-hour prediction that we’re trying to get to four hours, and the winner of that in the challenge ends up to be a cell phone engineer who had an undergraduate degree from, like, 30 years prior that he never used in heliophysics, but he was able to take that extracting signal from noise math that they use in cell phones, and apply it to heliophysics to get an eight-hour prediction capability.” – Steve
“If you look at how long companies stay around, the average in 1958 was 60 years, it is now less than 18. The rate of technology change and the old model isn’t working anymore. You can’t actually get all the skills you need, all the diversity. That’s why innovation is so important now, is because it’s happening at such a rate, that companies—that didn’t used to have to innovate at this pace—are now having to innovate in ways they never thought.” – Steve
“…Innovation is being driven by this big technology machine that’s happening out there, where people are putting automation to work. And there’s amazing new jobs being created by that, but it does take someone who can see what’s coming, and can see the value of augmenting their experts with diversity, with open innovation, with open techniques, with innovation techniques, period.” – Steve
“…You have to be able to fail and not be afraid to fail in order to find the real stuff. But I tell people, if you’re not willing to listen to ideas that won’t work, and you reject them out of hand and shut people down, you’re probably missing out on the path to innovation because oftentimes, the most innovative ideas only come after everyone’s thrown in 5 to 10 ideas that actually won’t work.” – Steve

Tuesday May 19, 2020
Every now and then, I like to insert a music-and-data episode into the show since hey, I’m a musician, and I’m the host 😉 Today is one of those days!
Rasty Turek is founder and CEO of Pex, a leading analytics and rights management platform used for discovering and tracking video and audio content using data science.
Pex’s AI crawls the internet for user-generated content (UGC), identifies copyrighted audio/visual content, indexes the media, and then enables rights holders to understand where their art is being used so it can be monetized. Pex’s goal is to help its customers understand who is using their licensed content, and what they are using it for — along with key insights to support monetization initiatives and negotiations with UGC platform providers.
In this episode of Experiencing Data, we discuss:
- How the data science behind Pex works in terms of being able to fingerprint actual songs (the underlying IP of a composition) vs. masters (actual audio recordings of songs)
- The challenges Pex faces in identifying complex, audio-rich user-generated content and cover recordings, and ensuring it indexes as many usages as possible.
- The transitioning UGC market, and how Pex is trying to facilitate change. One item that Rasty discusses is Europe’s new Copyright Directive law, and how it’s impacting UGC from a licensing standpoint.
- How analytics are empowering publishers, giving them key insights and firepower to negotiate with UGC platforms over licensed content.
- Key product design and UX considerations that Pex has taken to make their analytics useful to customers
- What Rasty learned through his software iteration journey at Pex, including a memorable example about bias that influenced future iterations of the design/UI/UX
- How Pex predicts and prioritizes monetization opportunities for customers, and how they surface infringements.
- Why copyright education is the “last bastion of the internet” — and the role that Pex is playing in streamlining the handling of copyrighted material.
Brian also challenges Rasty directly, asking him how the Pex platform balances flexibility with complexity when dealing with extremely large data sets.
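To make the fingerprinting discussion more concrete: many audio-identification systems use a landmark-hashing scheme, pairing spectral peaks into compact hashes and matching them by set overlap. The sketch below is a simplified, hypothetical illustration of that general technique — it is not Pex’s actual algorithm, and the function names and parameters are invented for this example.

```python
import hashlib

def fingerprint(peaks, fan_out=3):
    """Build a set of landmark hashes from (time, frequency) peaks.

    Each hash pairs an anchor peak with a few subsequent peaks,
    encoding (f1, f2, time_delta) -- the classic landmark idea
    behind many audio matchers.
    """
    peaks = sorted(peaks)
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            token = f"{f1}|{f2}|{t2 - t1}"
            hashes.add(hashlib.sha1(token.encode()).hexdigest()[:16])
    return hashes

def match_score(query_peaks, reference_peaks):
    """Fraction of the query's landmarks found in the reference."""
    q = fingerprint(query_peaks)
    r = fingerprint(reference_peaks)
    return len(q & r) / len(q) if q else 0.0
```

A short excerpt of a recording shares nearly all of its landmarks with the full reference and scores near 1.0, while an unrelated recording scores near 0.0 — which hints at why matching masters (specific recordings) is far more tractable than matching compositions, where a cover version produces entirely different peaks.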
Resources and Links
Designingforanalytics.com/theseminar
Twitter: https://twitter.com/synopsi
Quotes from Today’s Episode
“I will say, 80 to 90 percent of the population eventually will be rights owners of some sort, since this is how copyright works. Everybody that produces something is immediately a rights owner, but I think most of us will eventually generate our livelihood through some form of IP, especially if you believe that the machines are going to take the manual labor from us.” - Rasty
“When people ask me how it is to run a big data company, I always tell them I wish we were not [a big data company], because I would much rather have “small data,” and have a very good business, rather than big data.” - Rasty
“There's a lot of these companies that [have operated] in this field for 20 to 30 years, we just took it a little bit further. We adjusted it towards the UGC world, and we focused on simplicity.” - Rasty
“We don't follow users, we follow content. And so, at some point [during our design process] we were exploring if we could follow users [of our customers’ copyrighted content].... As we explored this more, we started noticing that [our customers] started making incorrect decisions because they were biased towards users [of their copyrighted content].” - Rasty
“If you think that your general customer is a coastal elite, but the reality is that they are Midwest farmers, you don't want to see that as the reality and you start being biased towards that. So, we immediately started removing that data and really focused on the content itself—because that content is not biased.” - Rasty
“[Re: Pex’s design process] We always started with the guiding principles. What is the task that you're trying to solve? So, for instance, if your task is to monetize your content, then obviously you want to monetize the most obvious content that will get the most views, right?” - Rasty

Tuesday May 05, 2020
Mark Bailey is a leading UX researcher and designer, and host of the Design for AI podcast — a
program which, similar to Experiencing Data, explores the strategies and considerations around designing data-driven human-centered applications built with machine learning and AI.
In this episode of Experiencing Data — co-released with the podcast Design for AI — Brian and Mark share the host and guest role, and discuss 10 different UX concepts teams may need to consider when approaching ML-driven data products and AI applications. A great discussion on design and #MLUX ensued, covering:
- Recognizing the barrier of trust and adoption that exists with ML, particularly at non-digital native companies, and how to address it when designing solutions.
- Why designers need to dig beyond surface level knowledge of ML, and develop a comprehensive understanding of the space
- How companies attempt to “separate reality from the movies” with AI and ML, deploying creative strategies to build trust with end users (with specific examples from Apple and Tesla)
- Designing for “undesirable results” (how to gracefully handle the UX when a model produces unexpected predictions)
- The ongoing dance of balancing UX with organizational goals and engineering milestones
- What designers and solution creators need to be planning for and anticipating with AI products and applications
- Accessibility considerations with AI products and applications – and how it can be improved
- Mark’s approach to ethics and community as part of the design process.
- The importance of systems design thinking when collecting data and designing models
- The different model types and deployment considerations that affect a solution’s UX — and what solution designers need to know to stay ahead
- Collaborating, and visualizing — or storyboarding — with developers, to help understand data transformation and improve model design
- The role that designers can play in developing model transparency (i.e. interpretability and explainable AI)
- Thinking about pain points or problems that can be outfitted with decision support or intelligence to make an experience better
Resources and Links:
Experiencing Data – Episode 35
Designing for Analytics Seminar
Quotes from Today’s Episode
“There’s not always going to be a software application that is the output of a machine learning model or something like that. So, to me, designers need to be thinking about decision support as being the desired outcome, whatever that may be.” – Brian
“… There are [about] 30 to 40 different types of machine learning models that are the most popular ones right now. Knowing what each one of them is good for, as the designer, really helps to conform the machine learning to the problem instead of vice versa.” – Mark
“You can be technically right and effectively wrong. All the math part [may be] right, but it can be ineffective if the human adoption piece wasn’t really factored into the solution from the start.” – Brian
“I think it’s very interesting to see what some of the big companies have done, such as Apple. They won’t use the term AI, or machine learning in any of their products. You’ll see their chips, they call them neural engines instead [of having] anything to do with AI. I mean, so building the trust, part of it is trying to separate out reality from movies.” – Mark
“Trust and adoption is really important because of the probabilistic nature of these solutions. They’re not always going to spit out the same thing all the time. We don’t manually design every single experience anymore. We don’t always know what’s going to happen, and so it’s a system that we need to design for.” – Brian
“[Thinking about] a small piece of intelligence that adds some type of value for the customer, that can also be part of the role of the designer.” – Brian
“For a lot of us that have worked in the software industry, our power trio has been product management, software engineering lead, and some type of design lead. And then, I always talk about these rings, like, that’s the close circle. And then, the next ring out, you might have some domain experts, and some front end developer, or prototyper, a researcher, but at its core, there were these three functions there. So, with AI, is it necessary, now, that we add a fourth function to that, especially if our product was very centered around this? That’s the role of the data scientist. And so, it’s no longer a trio anymore.” – Brian

Tuesday Apr 21, 2020
Rob May is a general partner at PJC, a leading venture capital firm. He was previously CEO of Talla, a platform for AI and automation, as well as co-founder and CEO of Backupify. Rob is an angel investor who has invested in numerous companies, and author of InsideAI, which is said to be one of the most widely-read AI newsletters on the planet.
In this episode, Rob and I discuss AI from a VC perspective. We look into the current state of AI, service as a software, and what Rob looks for in his startup investments and portfolio companies. We also investigate why so many companies are struggling to push their AI projects forward to completion, and how this can be improved. Finally, we outline some important things that founders can do to make products based on machine intelligence (machine learning) attractive to investors.
In our chat, we covered:
- The emergence of service as a software, which can be understood as a logical extension of “software eating the world,” and the two hard things to get right (yes, you read that correctly; Rob will explain what this new SAAS acronym means!)
- How automation can enable workers to complete tasks more efficiently and focus on bigger problems machines aren’t as good at solving
- Why AI will become ubiquitous in business—but not for 10-15 years
- Rob’s Predict, Automate, and Classify (PAC) framework for deploying AI for business value, and how it can help achieve maximum economic impact
- Economic and societal considerations that people should be thinking about when developing AI – and what we aren’t ready for yet as a society
- Dealing with biases and stereotypes in data, and the ethical issues they can create when training models
- How using synthetic data in certain situations can improve AI models and facilitate usage of the technology
- Concepts product managers of AI and ML solutions should be thinking about
- Training, UX and classification issues when designing experiences around AI
- The importance of model-market fit. In other words, whether a model satisfies a market demand, and whether it will actually make a difference after being deployed.
Resources and Links:
The PAC Framework for Deploying AI
Quotes from Today’s Episode
“[Service as a software] is a logical extension of software eating the world. Software eats industry after industry, and now it’s eating industries using machine learning that are primarily human labor focused.” — Rob
“It doesn’t have to be all digital. You could also think about it in terms of restaurant automation, and some of those things where if you keep the interface the same to the customer—the service you’re providing—you strip it out, and everything behind that, if it’s digital it’s an algorithm and if it’s physical, then you use a robot.” — Rob, on service as a software.
“[When designing for] AI you really want to find some way to convey to the user that the tool is getting smarter and learning.”— Rob
“There’s a gap right now between the business use cases of AI and the places it’s getting adopted in organizations.” — Rob
“The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob
“If you are changing things and your business is changing, which is most businesses these days, then it’s going to help to have models around that can learn and grow and adapt. I think as we get better with different data types—not just text and images, but more and more types of data types—I think every business is going to deploy AI at some stage.” — Rob
“The general sense I get is that overall, putting these models and AI solutions [into production] is pretty difficult still.” — Brian
“They’re not looking at what’s the actual best use of AI for their business, [and thinking] ‘Where could you really apply to have the most economic impact?’ There aren’t a lot of people that have thought about it that way.” — Rob, on how AI is being misapplied in the enterprise.
“You have to focus on the outcome, not just the output.” — Brian
“We need more heuristics for how, as a product manager, you think of AI and building it into products.” — Rob
“When the internet came about, it impacted almost every business in some way, shape, or form. […] The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob
“Some biases and stereotypes are true, and so what happens if the AI uncovers one that we’re really uncomfortable with?” — Rob