127.8K Downloads · 161 Episodes
If you’re a leader tasked with generating business and organizational value through ML/AI and analytics, you’ve probably struggled with low user adoption. Making the tech gets easier, but getting users to use, and buyers to buy, remains difficult—but you’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every 2 weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData.
PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes
Tuesday Jun 02, 2020
Innovation doesn’t just happen out of thin air. It requires conscious effort and team-wide collaboration. At the same time, innovation will be critical for NASA if the organization hopes to remain competitive and successful in the coming years. Enter Steve Rader. Steve has spent the last 31 years at NASA, working in a variety of roles including flight control under the legendary Gene Kranz, software development, and communications architecture. A few years ago, Steve was named Deputy Director for the Center of Excellence for Collaborative Innovation. In that role, he is spearheading the use of open innovation and diversity of thought, helping the organization find more effective ways of approaching and solving problems. In this fascinating discussion, Steve and Brian discuss design, divergent thinking, and open innovation, plus:
- Why Steve decided to shift away from hands-on engineering and management to the emerging field of open innovation, and why NASA needs this as well as diversity in order to remain competitive.
- The challenge of convincing leadership that diversity of thought matters, and why the idea of innovation often receives pushback.
- How NASA is starting to make room for diversity of thought, and leveraging open innovation to solve challenges and bring new ideas forward.
- Examples of how experts from unrelated fields help discover breakthroughs on complex (and greasy) problems, such as potato chips!
- How the rate of technological change is different today, why innovation is more important than ever, and how crowdsourcing can help streamline problem solving.
- Steve’s thoughts on the type of leader that’s needed to drive diversity at scale, and why that person should be a generalist
- Prioritizing outcomes over outputs, defining problems, and determining what success looks like early on in a project
- The metrics a team can use to measure whether one is “doing innovation.”
Resources and Links
Designingforanalytics.com/theseminar
Steve Rader’s LinkedIn: https://www.linkedin.com/in/steve-rader-92b7754/
NASA Solve: nasa.gov/solve
Steve Rader’s Twitter: https://twitter.com/SteveRader
NASA Solve Twitter: https://twitter.com/NASAsolve
Quotes from Today’s Episode
“The big benefit you get from open innovation is that it brings diversity into the equation […] and forms this collaborative effort that is actually really, really effective.” – Steve
“When you start talking about innovation, the first thing that almost everyone does is what I call the innovation eye-roll. Because management always likes to bring up that we’re innovative or we need innovation. And it just sounds so hand-wavy, like you say. And in a lot of organizations, it gets lots of lip service, but almost no funding, almost no support. In most organizations, including NASA, you’re trying to get something out the door that pays the bills. Ours isn’t to pay the bills, but it’s to make Congress happy. And, when you’re doing that, that is a really hard, rough space for innovation.” – Steve
“We’ve run challenges where we’re trying to improve a solar flare algorithm, and we’ve got, like, a two-hour prediction that we’re trying to get to four hours, and the winner of that in the challenge ends up to be a cell phone engineer who had an undergraduate degree from, like, 30 years prior that he never used in heliophysics, but he was able to take that extracting signal from noise math that they use in cell phones, and apply it to heliophysics to get an eight-hour prediction capability.” – Steve
“If you look at how long companies stay around, the average in 1958 was 60 years, it is now less than 18. The rate of technology change and the old model isn’t working anymore. You can’t actually get all the skills you need, all the diversity. That’s why innovation is so important now, is because it’s happening at such a rate, that companies—that didn’t used to have to innovate at this pace—are now having to innovate in ways they never thought.” – Steve
“…Innovation is being driven by this big technology machine that’s happening out there, where people are putting automation to work. And there’s amazing new jobs being created by that, but it does take someone who can see what’s coming, and can see the value of augmenting their experts with diversity, with open innovation, with open techniques, with innovation techniques, period.” – Steve
“…You have to be able to fail and not be afraid to fail in order to find the real stuff. But I tell people, if you’re not willing to listen to ideas that won’t work, and you reject them out of hand and shut people down, you’re probably missing out on the path to innovation because oftentimes, the most innovative ideas only come after everyone’s thrown in 5 to 10 ideas that actually won’t work.” – Steve
Tuesday May 19, 2020
Every now and then, I like to insert a music-and-data episode into the show since hey, I’m a musician, and I’m the host 😉 Today is one of those days!
Rasty Turek is founder and CEO of Pex, a leading analytics and rights management platform used for discovering and tracking video and audio content using data science.
Pex’s AI crawls the internet for user-generated content (UGC), identifies copyrighted audio/visual content, indexes the media, and then enables rights holders to understand where their art is being used so it can be monetized. Pex’s goal is to help its customers understand who is using their licensed content, and what they are using it for — along with key insights to support monetization initiatives and negotiations with UGC platform providers.
In this episode of Experiencing Data, we discuss:
- How the data science behind Pex works in terms of being able to fingerprint actual songs (the underlying IP of a composition) vs. masters (actual audio recordings of songs)
- The challenges Pex has in identifying complex, audio-rich user-generated content and cover recordings, and ensuring it is indexing as many usages as possible.
- The transitioning UGC market, and how Pex is trying to facilitate change. One item that Rasty discusses is Europe’s new Copyright Directive law, and how it’s impacting UGC from a licensing standpoint.
- How analytics are empowering publishers, giving them key insights and firepower to negotiate with UGC platforms over licensed content.
- Key product design and UX considerations that Pex has taken to make their analytics useful to customers
- What Rasty learned through his software iteration journey at Pex, including a memorable example about bias that influenced future iterations of the design/UI/UX
- How Pex predicts and prioritizes monetization opportunities for customers, and how they surface infringements.
- Why copyright education is the “last bastion of the internet” — and the role that Pex is playing in streamlining copyrighted material.
Brian also challenges Rasty directly, asking him how the Pex platform balances flexibility with complexity when dealing with extremely large data sets.
Resources and Links
Designingforanalytics.com/theseminar
Twitter: https://twitter.com/synopsi
Quotes from Today’s Episode
“I will say, 80 to 90 percent of the population eventually will be rights owners of some sort, since this is how copyright works. Everybody that produces something is immediately a rights owner, but I think most of us will eventually generate our livelihood through some form of IP, especially if you believe that the machines are going to take the manual labor from us.” - Rasty
“When people ask me how it is to run a big data company, I always tell them I wish we were not [a big data company], because I would much rather have “small data,” and have a very good business, rather than big data.” - Rasty
“There's a lot of these companies that [have operated] in this field for 20 to 30 years, we just took it a little bit further. We adjusted it towards the UGC world, and we focused on simplicity” - Rasty
“We don't follow users, we follow content. And so, at some point [during our design process] we were exploring if we could follow users [of our customers’ copyrighted content].... As we explored this more, we started noticing that [our customers] started making incorrect decisions because they were biased towards users [of their copyrighted content].” - Rasty
“If you think that your general customer is a coastal elite, but the reality is that they are Midwest farmers, you don't want to see that as the reality and you start being biased towards that. So, we immediately started removing that data and really focused on the content itself—because that content is not biased.” - Rasty
“[Re: Pex’s design process] We always started with the guiding principles. What is the task that you're trying to solve? So, for instance, if your task is to monetize your content, then obviously you want to monetize the most obvious content that will get the most views, right?” - Rasty
Tuesday May 05, 2020
Mark Bailey is a leading UX researcher and designer, and host of the Design for AI podcast — a program which, similar to Experiencing Data, explores the strategies and considerations around designing data-driven, human-centered applications built with machine learning and AI.
In this episode of Experiencing Data — co-released with the podcast Design for AI — Brian and Mark share the host and guest role, and discuss 10 different UX concepts teams may need to consider when approaching ML-driven data products and AI applications. A great discussion on design and #MLUX ensued, covering:
- Recognizing the barrier of trust and adoption that exists with ML, particularly at non-digital native companies, and how to address it when designing solutions.
- Why designers need to dig beyond surface level knowledge of ML, and develop a comprehensive understanding of the space
- How companies attempt to “separate reality from the movies,” with AI and ML, deploying creative strategies to build trust with end users (with specific examples from Apple and Tesla)
- Designing for “undesirable results” (how to gracefully handle the UX when a model produces unexpected predictions)
- The ongoing dance of balancing UX with organizational goals and engineering milestones
- What designers and solution creators need to be planning for and anticipating with AI products and applications
- Accessibility considerations with AI products and applications – and how accessibility can be improved
- Mark’s approach to ethics and community as part of the design process.
- The importance of systems design thinking when collecting data and designing models
- The different model types and deployment considerations that affect a solution’s UX — and what solution designers need to know to stay ahead
- Collaborating, and visualizing — or storyboarding — with developers, to help understand data transformation and improve model design
- The role that designers can play in developing model transparency (i.e. interpretability and explainable AI)
- Thinking about pain points or problems that can be outfitted with decision support or intelligence to make an experience better
Resources and Links:
Experiencing Data – Episode 35
Designing for Analytics Seminar
Quotes from Today’s Episode
“There’s not always going to be a software application that is the output of a machine learning model or something like that. So, to me, designers need to be thinking about decision support as being the desired outcome, whatever that may be.” – Brian
“… There are [about] 30 to 40 different types of machine learning models that are the most popular ones right now. Knowing what each one of them is good for, as the designer, really helps to conform the machine learning to the problem instead of vice versa.” – Mark
“You can be technically right and effectively wrong. All the math part [may be] right, but it can be ineffective if the human adoption piece wasn’t really factored into the solution from the start.” – Brian
“I think it’s very interesting to see what some of the big companies have done, such as Apple. They won’t use the term AI, or machine learning in any of their products. You’ll see their chips, they call them neural engines instead [of] anything to do with AI. I mean, so building the trust, part of it is trying to separate out reality from movies.” – Mark
“Trust and adoption is really important because of the probabilistic nature of these solutions. They’re not always going to spit out the same thing all the time. We don’t manually design every single experience anymore. We don’t always know what’s going to happen, and so it’s a system that we need to design for.” – Brian
“[Thinking about] a small piece of intelligence that adds some type of value for the customer, that can also be part of the role of the designer.” – Brian
“For a lot of us that have worked in the software industry, our power trio has been product management, software engineering lead, and some type of design lead. And then, I always talk about these rings, like, that’s the close circle. And then, the next ring out, you might have some domain experts, and some front end developer, or prototyper, a researcher, but at its core, there were these three functions there. So, with AI, is it necessary, now, that we add a fourth function to that, especially if our product was very centered around this? That’s the role of the data scientist. And so, it’s no longer a trio anymore.” – Brian
Tuesday Apr 21, 2020
Rob May is a general partner at PJC, a leading venture capital firm. He was previously CEO of Talla, a platform for AI and automation, as well as co-founder and CEO of Backupify. Rob is an angel investor who has invested in numerous companies, and the author of InsideAI, which is said to be one of the most widely-read AI newsletters on the planet.
In this episode, Rob and I discuss AI from a VC perspective. We look into the current state of AI, service as a software, and what Rob looks for in his startup investments and portfolio companies. We also investigate why so many companies are struggling to push their AI projects forward to completion, and how this can be improved. Finally, we outline some important things that founders can do to make products based on machine intelligence (machine learning) attractive to investors.
In our chat, we covered:
- The emergence of service as a software, which can be understood as a logical extension of “software eating the world,” and the two hard things to get right (yes, you read that correctly; Rob will explain what this new SAAS acronym means!)
- How automation can enable workers to complete tasks more efficiently and focus on bigger problems machines aren’t as good at solving
- Why AI will become ubiquitous in business—but not for 10-15 years
- Rob’s Predict, Automate, and Classify (PAC) framework for deploying AI for business value, and how it can help achieve maximum economic impact
- Economic and societal considerations that people should be thinking about when developing AI – and what we aren’t ready for yet as a society
- Dealing with biases and stereotypes in data, and the ethical issues they can create when training models
- How using synthetic data in certain situations can improve AI models and facilitate usage of the technology
- Concepts product managers of AI and ML solutions should be thinking about
- Training, UX and classification issues when designing experiences around AI
- The importance of model-market fit. In other words, whether a model satisfies a market demand, and whether it will actually make a difference after being deployed.
Resources and Links:
The PAC Framework for Deploying AI
Quotes from Today’s Episode
“[Service as a software] is a logical extension of software eating the world. Software eats industry after industry, and now it’s eating industries using machine learning that are primarily human labor focused.” — Rob
“It doesn’t have to be all digital. You could also think about it in terms of restaurant automation, and some of those things where if you keep the interface the same to the customer—the service you’re providing—you strip it out, and everything behind that, if it’s digital it’s an algorithm and if it’s physical, then you use a robot.” — Rob, on service as a software.
“[When designing for] AI you really want to find some way to convey to the user that the tool is getting smarter and learning.”— Rob
“There’s a gap right now between the business use cases of AI and the places it’s getting adopted in organizations.” — Rob
“The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob
“If you are changing things and your business is changing, which is most businesses these days, then it’s going to help to have models around that can learn and grow and adapt. I think as we get better with different data types—not just text and images, but more and more types of data types—I think every business is going to deploy AI at some stage.” — Rob
“The general sense I get is that overall, putting these models and AI solutions [into production] is pretty difficult still.” — Brian
“They’re not looking at what’s the actual best use of AI for their business, [and thinking] ‘Where could you really apply to have the most economic impact?’ There aren’t a lot of people that have thought about it that way.” — Rob, on how AI is being misapplied in the enterprise.
“You have to focus on the outcome, not just the output.” — Brian
“We need more heuristics for how, as a product manager, you think of AI and building it into products.” — Rob
“When the internet came about, it impacted almost every business in some way, shape, or form.” — Rob
“Some biases and stereotypes are true, and so what happens if the AI uncovers one that we’re really uncomfortable with?” — Rob
Tuesday Apr 07, 2020
Simon Buckingham Shum is Professor of Learning Informatics at Australia’s University of Technology Sydney (UTS) and Director of the Connected Intelligence Centre (CIC)—an innovation center where students and staff can explore education data science applications. Simon holds a Ph.D. from the University of York, and is known for bringing a human-centered approach to analytics and development. He also co-founded the Society for Learning Analytics Research (SoLAR), which is committed to advancing learning through ethical, educationally sound data science.
In this episode, Simon and I discuss the state of education technology (edtech), privacy, human-centered design in the context of using AI in higher ed, and the numerous technological advancements that are re-shaping the higher level education landscape.
Our conversation covered:
- How the hype cycle around big data and analytics is starting to pervade education
- The differences between using BI and analytics to streamline operations and improve retention rates, vs. the ways AI and data are used to increase learning and engagement
- Creating systems that teachers see as interesting and valuable, in order to drive user adoption and avoid friction.
- The more difficult-to-design-for, but more important skills and competencies researchers are working on to prepare students for a highly complex future workplace
- The data and privacy issues that must be factored into ethical solution designs
- Why “learning is not shopping,” meaning we, the creators of the tech, have to infer what goes on in the minds of the humans we study, mostly by studying behavior.
- Why learning scientists and educational professionals play an important role in the edtech design process, in addition to technical workers
- How predictive modeling can be used to identify students who are struggling—and the ethical questions that such solutions raise.
Resources and Links
Designing for Analytics Podcast
Quotes from Today’s Episode
“We are seeing AI products coming out. Some of them are great, and are making a huge difference for learning STEM type subjects— science, tech, engineering, and medicine. But some of them are not getting the balance right.” — Simon
“The trust break-down will come, and has already come in certain situations, when students feel they’re being tracked…” — Simon, on students perceiving BI solutions as surveillance tools rather than tools that benefit them
“Increasingly, it’s great to see so many people asking critical questions about the biases that you can get in training data, and in algorithms as well. We want to ask questions about whether people are trusting this technology. It’s all very well to talk about big data and AI, etc., but ultimately, no one’s going to use this stuff if they don’t trust it.” — Simon
“I’m always asking what’s the user experience going to be? How are we actually going to put something in front of people that they’re going to understand…” — Simon
“There are lots of success stories, and there are lots of failure stories. And that’s just what you expect when you’ve got edtech companies moving at high speed.” — Simon
“We’re dealing, on the one hand, with poor products that give the whole field a bad name, but on the other hand, there are some really great products out there that are making a tangible difference, and teachers are extremely enthusiastic about.” — Simon
“There’s good evidence now, about the impact that some of these tools can have on learning. Teachers can give some homework out, and the next morning, they can see on their dashboard which questions were the students really struggling with.” — Simon
“The area that we’re getting more and more interested in, and which educators are getting more and more interested in, are the kinds of skills and competencies you need for a very complex future workplace.” — Simon
“We obviously want the students’ voice in the design process. But that has to be balanced with all the other voices [that] are there as well, like the educators’ voice, as well as the technologists, and the interaction designers and so forth.” — Simon on the nuance of UX considerations for students
“…you have to balance satisfying the stakeholder with actually what is needed.” — Brian
“…we’re really at the mercy of behavior. We have to try and infer, from behavior or traces, what’s going on in the mind, of the humans we are studying.” — Simon
“We might say, ‘Well, if we see a student writing like this, using these kinds of textual features that we can pick up using natural language processing, and they revise their draft writing in response to feedback that we’ve provided automatically, well, that looks like progress. It looks like they’re thinking more critically, or it looks like they’re reflecting more deeply on an experience they’ve had, for example, like a work placement.’” — Simon
“They’re in products already, and when they’re used well, they can be effective. But they can also be [a] sort of weapon of mass destruction if you use them badly.” — Simon, on predictive models
Tuesday Mar 24, 2020
Cennydd Bowles is a London-based digital product designer and futurist, with almost two decades of consulting experience working with some of the largest and most influential brands in the world. Cennydd has earned a reputation as a trusted guide, helping companies navigate complex issues related to design, technology, and ethics. He’s also the author of Future Ethics, a book which outlines key ethical principles and methods for constructing a fairer future.
In this episode, Cennydd and I explore the role that ethics plays in design and innovation, and why so many companies today—in Silicon Valley and beyond—are failing to recognize the human element of their technological pursuits. Cennydd offers his unique perspective, along with some practical tips that technologists can use to design with greater mindfulness and consideration for others.
In our chat, we covered topics from Cennydd’s book and expertise including:
- Why there is growing resentment towards the tech industry and the reason all companies and innovators need to pay attention to ethics
- The importance of framing so that teams look beyond the creation of an “ethical product / solution” and out towards a better society and future
- The role that diversity plays in ethics and the reason why homogenous teams working in isolation can be dangerous for an organization and society
- Cennydd’s “front-page test,” “designated dissenter,” and other actionable ethics tips that innovators and data product teams can apply starting today
- Navigating the gray areas of ethics and how large companies handle them
- The unfortunate consequences that arise when data product teams are complacent
- The fallacy that data is neutral—and why there is no such thing as “raw” data
- Why stakeholders must take part in ethics conversations
Resources and Links:
Future Ethics (book)
Quotes from Today’s Episode
“There ought to be a clearer relationship between innovation and its social impacts.” — Cennydd
“I wouldn’t be doing this if I didn’t think there was a strong upside to technology, or if I didn’t think it couldn’t advance the species.” — Cennydd
“I think as our power has grown, we have failed to use that power responsibly, and so it’s absolutely fair that we be held to account for those mistakes.” — Cennydd
“I like to assume most creators and data people are trying to do good work. They’re not trying to do ethically wrong things. They just lack the experience or tools and methods to design with intent.” — Brian
“Ethics is about discussion and it’s about decisions; it’s not about abstract theory.” — Cennydd
“I have seen many times diversity act as an ethical early warning system [for] people who firmly believe the solution they’re about to put out into the world is, if not flawless, pretty damn close.” — Cennydd
“The ethical questions around the misapplication or the abuse of data are strong and prominent, and actually have achieved maybe even more recognition than other forms of harm that I talk about.” — Cennydd
“There aren’t a whole lot of ethical issues that are black and white.” — Cennydd
“When you never talk to a customer or user, it’s really easy to make choices that can screw them at the benefit of increasing some KPI or business metric.” — Brian
“I think there’s really talented people in the data space who actually understand bias really well, but when they think about bias, they think they’re thinking more about, ‘how is it going to skew the insight from the data?’ Not the human impact.” — Brian
“I think every business has almost a moral duty to take their consequences seriously.” — Cennydd
Tuesday Mar 10, 2020
Eric Siegel, Ph.D. is founder of the Predictive Analytics World and Deep Learning World conference series, executive editor of “The Predictive Analytics Times,” and author of “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.” A former Columbia University professor and host of the Dr. Data Show web series, Siegel is a renowned speaker and educator who has been commissioned for more than 100 keynote addresses across multiple industries. Eric is best known for making the “how” and “why” of predictive analytics (aka machine learning) understandable and captivating to his audiences.
In our chat, we covered:
- The value of defining business outcomes and end users’ needs prior to starting the technical work of predictive modeling, algorithms, or software design.
- The idea of data prototypes being used before engaging in data science to determine where models could potentially fail—saving time while improving your odds of success.
- The first and most important step of Eric’s five-step analytics deployment plan
- Getting multiple people aligned and coordinated about pragmatic considerations and practical constraints surrounding ML project deployment.
- The score (1-10) Eric gave the data community on its ability to turn data into value
- The difference between decision support and decision automation and what the Central Intelligence Agency’s CDAO thinks about these two methods for using machine learning.
- Understanding how human decisions are informed by quantitative predictions from predictive models, and what’s required to deliver information in a way that aligns with their needs.
- How Eric likes to bring agility to machine learning by deploying and scaling models incrementally to mitigate risk
- Where the analytics field currently stands in its overall ability to generate value in the last mile.
Resources and Links:
Quotes from Today’s Episode
“The greatest pitfall that hinders analytics is not to properly plan for its deployment.” — Brian, quoting Eric
“You don’t jump to number crunching. You start [by asking], ‘Hey, how is this thing going to actually improve business?’ “ — Eric
“You can do some preliminary number crunching, but don’t greenlight, trigger, and go ahead with the whole machine learning project until you’ve planned accordingly, and iterated. It’s a collaborative effort to design, target, define scope, and ultimately greenlight and execute on a full-scale machine learning project.” — Eric
“If you’re listening to this interview, it’s your responsibility.” — Eric, commenting on whose job it is to define the business objective of a project.
“Yeah, so in terms of if 10 were the highest potential [score], in the sort of ideal world where it was really being used to its fullest potential, I don’t know, I guess I would give us a score of [listen to find out!]. Is that what Tom [Davenport] gave!?” — Eric, when asked to rate the analytics community on its ability to deliver value with data
“We really need to get past our outputs, and the things that we make, the artifacts and those types of software, whatever it may be, and really try to focus on the downstream outcome, which is sometimes harder to manage, or measure … but ultimately, that’s where the value is created.” — Brian
“Whatever the deployment is, whatever the change from the current champion method, and now this is the challenger method, you don’t have to jump entirely from one to the other. You can incrementally deploy it. So start by saying well, 10 percent of the time we’ll use the new method which is driven by a predictive model, or by a better predictive model, or some kind of change. So in the change in the transition, you sort of do it incrementally, and you mitigate your risk in that way.”— Eric
Tuesday Feb 25, 2020
Greg Nelson is VP of data analytics at Vidant Health, as well as an adjunct faculty member at Duke University. He is also the author of the “Analytics Lifecycle Toolkit,” which is a manual for integrating data management technologies. A data evangelist with over 20 years of experience in analytics and advisory, Nelson is widely known for his human-centered approach to analytics. In this episode, Greg and I explore what makes a data product or decision support application indispensable, specifically in the complex world of healthcare. In our chat, we covered:
- Seeing through the noise and identifying what really matters when designing data products
- The type of empathy training Greg and his COO are rolling out to help technical data teams produce more useful data products
- The role of data analytics product management and why this is a strategic skillset at Vidant
- The AI Playbook Greg uses at Vidant Health and their risk-based approach to assessing how they will validate the quality of a data product
- The process Greg uses to test and handle algorithmic bias and how this is linked to credibility in the data products they produce
- How exactly design thinking helps Greg’s team achieve better results, trust and credibility
- How Greg aligns workflows, processes, and best practice protocols when developing predictive models
Resources and Links:
Vidant Health
Analytics Lifecycle Toolkit
Greg Nelson’s article “Bias in Artificial Intelligence”
Greg Nelson on LinkedIn
Twitter: @GregorySNelson
Video: Tuning a card deck for human-centered co-design of Learning Analytics
Quotes from Today's Episode
“We'd rather do fewer things and do them well than do lots of things and fail.”— Greg
“In a world of limited resources, our job is to make sure we're actually building the things that matter and that will get used. Product management focuses the light on use case-centered approaches and design thinking to actually come up with and craft the right data products that start with empathy.”— Greg
“I talk a lot about whole-brain thinking and whole-problem thinking. And when we understand the whole problem, the whole ‘why’ about someone's job, we recognize pretty quickly why Apple was so successful with their initial iPod.”— Greg
“The technical people have to get better [...] at extracting needs in a way that is understandable, interpretable, and really actionable, from a technology perspective. It's like teaching someone a language they never knew they needed. There's a lot of resistance to it.” — Greg
“I think deep down inside, the smart executive knows that you don’t bat .900 when you're doing innovation.” — Brian
“We can use design thinking to help us fail a little bit earlier, and to know what we learned from it, and then push it forward so that people understand why this is not working. And then you can factor what you learned into the next pass.” — Brian
“If there's one thing that I've heard from most of the leaders in the data and analytics space, with regards particularly to data scientists, it’s [the importance of] finding this “other” missing skill set, which is not the technical skillset. It's understanding the human behavioral piece and really being able to connect the fact that your technical work does have this soft skill stuff.” — Brian
“At the end of the day, I tell people our mission is to deliver data that people can trust in a way that's usable and actionable, built on a foundation of data literacy and dexterity. That trust in the first part of our core mission is essential.”— Greg
Tuesday Feb 11, 2020
Nancy Duarte is a communication expert and the leader of the largest design firm in Silicon Valley, Duarte, Inc. She has more than 30 years of experience working with global companies and counts eight of the top ten Fortune 500 brands among her clientele. She is the author of six books, and her work has appeared in Fortune, Time Magazine, Forbes, Wired, Wall Street Journal, New York Times, Los Angeles Times, Cosmopolitan Magazine, and CNN.
In this episode, Nancy and I discussed some of the reasons analytics and data experts fail to effectively communicate the insights and value around data. Drawing on key findings from her work as a communication expert, detailed in her new book DataStory, she made the case for communicating data through the natural structure of storytelling.
In our chat, we covered:
- How empathy is tied to effective communication.
- Biases that cloud our own understanding of our communication skills
- How to communicate an enormous amount of data effectively and engagingly
- What’s wrong with sharing traditional presentations as a reading asset and Nancy’s improved replacement for them in the enterprise
- The difference in presenting data in business versus scientific settings
- Why STEAM, not STEM, is relevant to effective communication for data professionals and what happens when creativity and communication aren’t taught
- How the brain reacts differently when it is engaged through a story
Resources and Links:
Quotes from Today’s Episode
“I think the biggest struggle for analysts is they see a lot of data.” —Nancy
“In a business context, the goal is not to do perfect research most of the time. It’s actually to probably help inform someone else’s decision-making.” —Nancy
“Really understand empathy, become a bit of a student of story, and when you start to apply these, you’ll see a lot of traction around your ideas.” — Nancy
“We’ve so heavily rewarded the analytical mindset that now we can’t back out of that and be dual-modal about being an analytical mindset and then also really having discipline around a creative mindset.” — Nancy
“There’s a bunch of supporting data, but there’s also all this intuition and other stuff that goes into it. And so I think just learning to accept the ambiguity as part of that human experience, even in business.” — Brian
“If your software application doesn’t produce meaningful decision support, then you didn’t do anything. The data is just sitting there and it’s not actually activating.” — Brian
“People can’t draw a direct line from what art class or band does for you, and it’s the first thing that gets cut. Then we complain on the backend when people are working in professional settings that they can’t talk to us.” — Brian
Tuesday Jan 28, 2020
Ganes Kesari is the co-founder and head of analytics and AI labs at Gramener, a software company that helps organizations tell more effective stories with their data through robust visualizations. He’s also an advisor, public speaker, and author who talks about AI in plain English so that a general audience can understand it. Prior to founding Gramener, Ganes worked at companies like Cognizant, Birlasoft, and HCL Technologies serving in various management and analyst roles.
Join Ganes and me as we talk about how design, as a core competency, has enabled Gramener’s analytics and machine learning work to produce better value for clients. We also touched on:
- Why Ganes believes the gap between the business and data analytics organizations is getting smaller
- How AI (and some other buzzwords) are encouraging more and more investments in understanding data
- Ganes’ opinions about the “analytics translator” role
- How companies might think they are unique for not using “traditional agile”—when in fact that’s what everyone is doing
- Ganes’ thoughts on the similarities of use cases across verticals and the rise of verticalized deep data science solutions
- Why Ganes believes organizations are increasingly asking for repeatable data science solutions
- The pivotal role that empathy plays in convincing someone to use your software or data model
- How Ganes’ team approaches client requests for data science projects, the process they follow to identify use cases for AI, and how they use AI to identify the biggest business problem that can be solved
- What Ganes believes practitioners should consider when moving data projects forward at their organizations
Resources and Links
Ganes Kesari on Twitter: @Kesaritweets
Ganes Kesari on LinkedIn: https://www.linkedin.com/in/ganes-kesari/
Quotes from Today’s Episode
“People tend to have some in-house analytics capability. They’re reaching out for design. Then it’s more of where people feel that the adoption hasn’t happened. They have that algorithm but no one understands its use. And then they try to buy some license or some exploratory visualization tools and they try their hand at it and they’ve figured out that it probably needs a lot more than some cute charts or some dashboards. It can’t be an afterthought. That’s when they reach out.” — Ganes
“Now a lot more enquiries, a lot more engagements are happening centrally at the enterprise level where they have realized the need for data science and they want to run it centrally so it’s no longer isolated silos.” — Ganes
“I see that this is a slightly broader movement where people are understanding the value of data and they see that it is something that they can’t avoid or they can’t prioritize it lower anymore.” — Ganes
“While we have done a few hundred consulting engagements and help[ed] with bespoke solutions, there is still an element of commonality. So that’s where we abstracted some of those, the common or technology requirements and common solutions into our platform.” — Ganes
“My general perception is that most data science and analytics firms don’t think about design as a core competency or part of analytics and data science—at least not beyond perhaps data visualization.” —Brian
“I was in a LinkedIn conversation today about this and some comments that Tom Davenport had made on this show a couple of episodes ago. He was talking about how we need this type of role that goes out and understands how data is used and how systems and software are used such that we can better align the solutions with what people are doing. And I was like, ‘amen.’ That’s actually not a new role though; it’s what good designers do!” — Brian