

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right...but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.
Hashtag: #ExperiencingData.
JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Episodes

Tuesday Jun 04, 2019
Today we are joined by the analytics “man of steel,” Steve Bartos, the Manager of the Predictive Analytics team in the steel processing division at Worthington Industries. 😉 At Worthington, Steve is tasked with strategically driving impactful analytics wider and deeper across the division and, as part of this effort, helps ensure an educated, supported, and connected analytics community. In addition, Steve also serves as a co-leader of the Columbus Tableau User Group.
On today’s episode, Steve and I discuss how analytics are providing internal process improvements at Worthington. We also cover the challenges Steve faces designing effective data-rich products, the importance of the “last mile,” and how his PhD in science education shapes his work in predictive analytics.
In addition, we talk about:
- Internal tools that Steve has developed and how they help Worthington Industries
- Preplanning and its importance for creating a solution that works for the client
- Using analytics to inform daily decisions, aid in monthly meetings, and assist with Kaizen (Lean) focused decisions
- How Steve pulls out the meaningful pieces of information that can improve the division’s performance
- How Steve tries to avoid data-rich and insight-poor customer solutions
- The importance of engaging the customer/user throughout the process
- How Steve leverages his science education background to communicate with his peers and with executives at Worthington
Quotes from Today’s Episode
“Seeing the way analytics can help facilitate better decision making doesn't necessarily come with showing someone every single question they can possibly answer, waiting for them to applaud how much time and how much energy and effort you'd saved them.” - Steve Bartos
“It's hard to talk about the influence of different machine parameters on quality if every operator is setting it up based on their own tribal knowledge of how it runs best.” - Steve Bartos
“I think bringing the question back to the user much more frequently, much sooner, and at a much more focused scale has paid dividends.” - Steve Bartos
“It's getting the people that are actually going to sit and use these interfaces involved in the creation process… they should be helping you define the goals and the problems… by getting them involved, it makes the adoption process a lot easier.” - Brian O’Neill
“It's real easy to throw up some work that you've done in Tableau around a question that a manager or an executive had. It's real easy to do that. It's really difficult to do that well and have some control of the conversation, being able to say: here's what we did, here was the question, here's the data we used, here's how we analyzed it, and here's a suggestion we're making, and now let's talk about why, and do that in a way that doesn't lead to an in-the-weeds session and frustration.” - Steve Bartos

Tuesday May 21, 2019
Paul Mattal is the Director of Network Systems at Akamai, one of the largest content delivery networks in the U.S. Akamai is a major part of the backbone of the internet, and on today’s episode, Paul talks about the massive amount of telemetry that comes into Akamai and the various decision support tools his group is in charge of providing to internal customers. On top of the analytics aspect of our chat, we also discussed how Paul is approaching his team’s work as someone relatively new at Akamai.
Additionally, we covered:
- How Paul accesses and uses internal customer knowledge to improve the quality of the applications his team makes
- When to build a custom decision support tool vs. using a BI tool like Tableau
- How Akamai measures whether its analytics are creating customer value
- The process Paul uses with the customer to design a new data product MVP
- How Paul decides which of the many analytics applications and services “get love” when resources are constrained
- Paul’s closing advice about taking the time to design and plan before you code
Quotes from Today’s Episode
“I would say we have a lot of engagement with [customers] here. People jump to answering questions with data and they’re quick. They know how to do that and they have very good ideas about how to make sure that the approaches they take are backed by data and backed by evidence.” — Paul Mattal
“There’s actually a very mature culture here at Akamai of helping each other. Not necessarily taking on an enormous project if you don’t have the time for it, but opening your door and helping somebody solve a problem, if you have expertise that can help them.” — Paul Mattal
“I’m always curious about feedback cycles because there’s a lot of places that they start with telemetry and data, then they put technology on top of it, they build a bunch of software, and look at releases and outputs as the final part. It’s actually not. It’s the outcomes that come from the stuff we built that matter. If you don’t know what outcomes those look like, then you don’t know if you actually created anything meaningful.” — Brian O’Neill
“We’ve talked a little bit about the MVP approach, which is about doing that minimal amount of work, which may or may not be working code, but you did a minimum amount of stuff to figure out whether or not it’s meeting a need that your customer has. You’re going through some type of observation process to fuel the first thing, asset or output that you create. It’s fueled by some kind of observation or research upfront so that when you go up to bat and take a swing with something real, there’s a better chance of at least a base hit.” — Brian O’Neill
“Pretend to be the new guy for as long as you can. Go ask [about their needs/challenges] again and get to really understand what that person [customer] is experiencing, because I know you’re going to be able to meet the need much better.” — Paul Mattal

Tuesday May 07, 2019
Dr. Andrey Sharapov is a senior data scientist and machine learning engineer at Lidl. He is currently working on various projects related to machine learning and data product development, including analytical planning tools that help with business issues such as stocking and purchasing. Previously, he spent 2 years at Xaxis, where he led data science initiatives, and he developed tools for customer analytics at TeamViewer. Andrey and I met at a Predictive Analytics World conference where we were both speaking, and I found out he is very interested in “explainable AI,” an aspect of user experience that I think is worth talking about, so that’s what today’s episode focuses on.
In our chat, we covered:
- Lidl’s planning tool for their operational teams and what it predicts
- The lessons learned from Andrey’s first attempt to build an explainable AI tool, and other human factors related to designing data products
- What explainable AI is, and why it is critical in certain situations
- How explainable AI is useful for debugging other data models
- Why explainable AI isn’t always used
- Andrey’s thoughts on the importance of including your end user in the data product creation process from the very beginning
Also, here’s a little post-episode thought from a design perspective:
I know there are countervailing opinions that say explainability of models is “over-hyped.” One popular rationalization points to certain professions (e.g. medical practitioners) that make decisions all the time which cannot be fully explained, yet people trust the decision making without necessarily expecting a full explanation. The reality is that while not every model or end UX necessarily needs explainability, I think there are human factors that explainability can satisfy, such as building customer trust more rapidly, or helping convince customers/users why/how a new technology solution may be better than “the old way” of doing things. This is not a blanket recommendation to “always include explainability” in your service/app/UI; many factors come into play, and as with any design choice, I think you should let your customer/user feedback help you decide whether your service needs explainability to be valuable, useful, and engaging.
Resources and Links:
Explainable AI - XAI Group (LinkedIn)
Quotes from Today’s Episode
“I hear frequently there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work. I also hear how some data scientists may think, ‘explainable AI is not going to improve the model’ or ‘help me get published’ – so maybe that’s responsible for why [explainable AI] is not as widely in use.” – Brian O’Neill
“When you go and talk to an operational person, who has in mind a certain number of basic rules, say three, five, or six rules [they use] when doing planning, and then when you come to him with a machine learning model, something that is let’s say, ‘black box,’ and then you tell him ‘okay, just trust my prediction,’ then in most of the cases, it just simply doesn’t work. They don’t trust it. But the moment when you come with an explanation for every single prediction your model does, you are increasing your chances of a mutual conversation between this responsible person and the model…” – Andrey Sharapov
“We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary; in every country, we try to talk to these people [our users] directly. [We] try to get the requirements directly from them and then show the results back to them…” – Andrey Sharapov
“The sole purpose of the tool we built was to make their work more efficient, in a sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves because we created a plot for elasticity curves. They could play with the price and see if they made the price too high, too low, and how much the order quantity would change.” – Andrey Sharapov

Tuesday Apr 23, 2019
My guest today is Gadi Oren, the VP of Product for LogicMonitor. Gadi is responsible for the company’s strategic vision and product initiatives. Previously, Gadi was the CEO and Co-Founder of ITculate, where he was responsible for developing world-class technology and a product that created contextual monitoring by discovering and leveraging application topology. Before that, Gadi served as the CTO and Co-Founder of Cloudscope. He has a management degree from MIT Sloan.
Today we are going to talk with Gadi about analytics in the context of monitoring applications. This was a fun chat, as Gadi and I have both worked on several applications in this space, and it was great to hear how Gadi is habitually integrating customers into his product development process. You’re also going to hear Gadi’s way of framing declarative analytics as casting “opinions,” which I thought was really interesting from a UX standpoint. We also discussed:
- How to define what is “normal” for an environment being monitored, and when to be concerned about variations
- Gadi’s KPI for his team regarding customer interaction and why it is important
- What kind of data is needed for effective prototypes
- How to approach design/prototyping for new vs. existing products
- Mistakes product owners make by falling in love with early prototypes
- Interpreting common customer signals that may identify a latent problem needing to be solved in the application
Resources and Links:
LogicMonitor
Twitter: @gadioren
LinkedIn: Gadi Oren
Quotes from Today’s Episode
“The barrier of replacing software goes down. Bad software will go out and better software will come in. If it’s easier to use, you will actually win in the marketplace because of that. It’s not a secondary aspect.” – Gadi Oren
“…ultimately, [not talking to customers] is going to take you away from understanding what’s going on and you’ll be operating on interpolating from information you know instead of listening to the customer.” – Gadi Oren
“Providing the data or the evidence for the conclusion is a way not to black box everything. You’re providing the human with the relevant analysis and evidence that went into the conclusion and hope if that was modeled on their behavior, then you’re modeling the system around what they would have done. You’re basically just replacing human work with computer work.” — Brian O’Neill
“What I found in my career and experience with clients is that sometimes if they can’t get it perfect, they’re worried about doing anything at all. I like this idea of [software analytics] casting an opinion.” — Brian O’Neill
“LogicMonitor’s mission is to provide a monitoring solution that just works, that’s simple enough to just go in, install it quickly, and get coverage on everything you need so that you as a company can focus on what you really care about, which is your business.” — Gadi Oren

Tuesday Apr 09, 2019
My guest today is Carl Hoffman, the CEO of Basis Technology, and a specialist in text analytics. Carl founded Basis Technology in 1995, and in 1999, the company shipped its first products for website internationalization, enabling Lycos and Google to become the first search engines capable of cataloging the web in both Asian and European languages. In 2003, the company shipped its first Arabic analyzer and began development of a comprehensive text analytics platform. Today, Basis Technology is recognized as the leading provider of components for information retrieval, entity extraction, and entity resolution in many languages. Carl has been directly involved with the company’s activities in support of U.S. national security missions and works closely with analysts in the U.S. intelligence community.
Many of you work all day in the world of analytics: numbers, charts, metrics, data visualization, etc. But, today we’re going to talk about one of the other ingredients in designing good data products: text! As an amateur polyglot myself (I speak decent Portuguese, Spanish, and am attempting to learn Polish), I really enjoyed this discussion with Carl. If you are interested in languages, text analytics, search interfaces, entity resolution, and are curious to learn what any of this has to do with offline events such as the Boston Marathon Bombing, you’re going to enjoy my chat with Carl. We covered:
- How text analytics software is used by border patrol agencies, and its limitations
- The role of humans in the loop, even with good text analytics in play
- What actually happened in the case of the Boston Marathon Bombing
- Carl’s article “Exact Match” Isn’t Just Stupid. It’s Deadly.
- The two lessons Carl has learned regarding working with native-tongue source material
- Why Carl encourages Unicode compliance when working with text, why having a global perspective is important, and how Carl actually implements this at his company
- Carl’s parting words on why hybrid architectures are a core foundation for building better data products involving text analytics
Resources and Links:
- Basis Technology
- Carl’s article: “Exact Match” Isn’t Just Stupid. It’s Deadly.
- Carl Hoffman on LinkedIn
Quotes from Today’s Episode
“One of the practices that I’ve always liked is actually getting people that aren’t like you, that don’t think like you, in order to intentionally tease out what you don’t know. You know that you’re not going to look at the problem the same way they do…” — Brian O’Neill
“Bias is incredibly important in any system that tries to respond to human behavior. We have our own innate cultural biases that we’re sometimes not even aware of. As you [Brian] point out, it’s impossible to separate human language from the underlying culture and, in some cases, geography and the lifestyle of the people who speak that language…” — Carl Hoffman
“What I can tell you is that context and nuance are equally important in both spoken and written human communication…Capturing all of the context means that you can do a much better job of the analytics.” — Carl Hoffman
“It’s sad when you have these gaps like what happened in this border crossing case where a name spelling is responsible for not flagging down [the right] people. I mean, we put people on the moon and we get something like a name spelling [entity resolution] wrong. It’s shocking in a way.” — Brian O’Neill
“We live in a world which is constantly shades of gray and the challenge is getting as close to yes or no as we can.”– Carl Hoffman

Tuesday Mar 26, 2019
Nancy Hensley is the Chief Digital Officer for IBM Analytics, a multi-billion dollar IBM software business focused on helping customers transform their companies with data science and analytics. Nancy has over 20 years of experience working in the data business, in many facets ranging from development and product management to sales and marketing.
Today’s episode is probably going to appeal to those of you in product management or working on SaaS/cloud analytics tools. It is a bit different than our previous episodes in that we focused a lot on what “big blue” is doing to simplify its analytics suite as well as facilitate access to those tools. IBM has many different analytics-related products, and they rely on good design to make sure that there is a consistent feel and experience across the suite, whether it’s Watson, statistics, or modeling tools. Nancy also talked about how central user experience is to making IBM’s tools more cloud-like (try/buy online) vs. forcing customers to go through a traditional enterprise salesperson.
If you’ve got a “dated” analytics product or service that is hard to use or feels very “enterprisey” (in that not-so-good way), then I think you’ll enjoy the “modernization” theme of this episode. We covered:
- How Nancy is taking a 50-year-old product such as SPSS and making it relevant and accessible for an audience that is 60% under 25 years of age
- The two components Nancy’s team looks at when designing an analytics product
- What “Metrics Monday” is all about at IBM Analytics
- How IBM follows up with customers, communicates with legacy users, and how the digital market has changed consumption models
- Nancy’s thoughts on growth hacking and the role of simplification
- Why you should always consider product-market fit first and Nancy’s ideas on MVPs
- The role design plays in successfully onboarding customers into IBM Analytics’ tools, and what Brian refers to as the “honeymoon” experience
Quotes from Today’s Episode
“It’s really never about whether it’s a great product. It’s about whether the client thinks it’s great when they start using it.” – Nancy
“Every time we add to the tool, we’re effectively reducing the simplicity of everything else around it.” – Brian
“The design part of it for us is so eye-opening, because again, we’ve built a lot of best-in-class enterprise products for years, and as we shift into this digital go-to-market, it is all about the experience…” – Nancy
“Filling in that ‘why’ piece is really important if you’re going to start changing design, because you may not really understand the reasons someone’s abandoning.” – Brian
“Because a lot of our products weren’t born in the cloud originally, weren’t born to be digital, doesn’t mean they can’t be digitally consumed. We just have to really focus on the experience, and one of those things is onboarding.” – Nancy
“If they [users] can’t figure out how to jump in and use the product, we’re not nailing it. It doesn’t matter how great the product is if they can’t figure out how to effectively interact with it.” – Nancy

Tuesday Mar 12, 2019
Dr. Puneet Batra is the Associate Director of Machine Learning at the Broad Institute, where his team builds machine learning algorithms and pipelines to help discover new biological insights and impact disease. Puneet has spent his career stitching together data-driven solutions: in location data analysis, as co-founder of LevelTrigger; in health care, as Chief Data Scientist at Kyruus; as lead Analytic Scientist at Aster Data (acquired by Teradata); and in fundamental models of particle physics, developing theories for Fermilab’s Tevatron and CERN’s Large Hadron Collider. He has held research positions at Harvard, Stanford, and Columbia Universities. Puneet completed his BA at Harvard University and has a Ph.D. from Stanford University.
A friend of mine introduced me to Puneet because he was kicking off a side project using machine learning to explore what creativity is through the lens of jazz. Since Puneet is not a musician by training, he was looking for some domain-specific knowledge to inform his experiment, and I really liked his design-oriented thinking. While jazz kicks off our conversation, we went much deeper into the contemporary role of the data scientist in this episode, including:
- The discussions that need to happen between users, stakeholders, and subject matter experts so teams get a clear picture of the problem that actually needs to be solved
- Dealing with situations where the question you start with isn’t the question that gets answered in the end
- When to sacrifice model quality for the sake of user experience and higher user engagement (i.e., the “good enough” approach)
- The role of a data scientist in product design
Quotes from Puneet Batra
"Sometimes, accuracy isn't the most important thing you should be optimizing for; it’s the rest of the package…if you can make that a good process, then I think you're more on the road to making users happy [vs.] trying to live in this idealized world where you never make a mistake at all."
"The question you think you're answering from the beginning probably isn't the one that you're going to stay answering the entire time and you've just got to be flexible around that."
"Even data scientists and engineers should be able to listen with empathy and ask questions. I got a good number of tips from people teaching me how to do things like that. Basically, we ask a question or basically shut up and hear what their answer is."
"I'm not really sure what creativity is. I'm not really sure if machines will ever be creative. A good experiment to try to prove that out is to try to get a machine to be as creative as possible and see where it falls flat."

Tuesday Feb 26, 2019
Jim Psota is the Co-Founder and CTO of Panjiva, which was named one of the top 10 Most Innovative Data Science Companies in the World by Fast Company in 2018. Panjiva has mapped the global supply chain using a combination of over 1B shipping transactions and machine learning, and recently the company was acquired by S&P Global.
Jim has spoken about artificial intelligence and entrepreneurship at Harvard Business School, MIT, and The White House, and at numerous academic and industry conferences. He also serves on the World Economic Forum’s Working Group for Artificial Intelligence and has done Ph.D. research in computer science at MIT. Some of the topics we discuss in today’s episode include:
- What Jim learned from starting Panjiva from a data-first approach
- Brian and Jim’s thoughts on problem solving driven by use cases and people vs. data and AI
- 3 things Jim wants teams to get right with data products
- Jim and Brian’s thoughts on “black box” analytics that try to mask complex underlying data to make the UX easier
- How Jim balances the messiness of 20+ third-party data sources, designing a good product, and billions of data points
Quotes from Jim Psota
“When you’re dealing with essentially resolving 1.5 billion records, you can think of it as needing to compute 1.5 billion squared pairs of potential similarities.”
“It’s much more fulfilling to be building for a person or a set of people that you’ve actually talked to… The engineers are going to develop a much better product and frankly be much happier having that connection to the user.”
“We have crossed a pretty cool threshold where a lot of value can be created now because we have this nice combination of data availability, strong algorithms, and compute power.”
“In our case and many other companies’ cases, taking third-party data, no matter where you’re getting your data, there’s going to be issues with it, there’s going to be delays, format changes, granularity differences.”
“As much as possible, we try to use the tools of data science to actually correct the data deficiency or impute or whatever technique is actually going to be better than nothing, but then say this was imputed or this is reported versus imputed…then over time, the user starts to understand if it’s gray italics [the data] was imputed, and if it’s black regular text, that’s reported data, for example.”

Wednesday Feb 13, 2019
We’re back with a special music-related analytics episode! Following Next Big Sound’s acquisition by Pandora, Julien Benatar moved from engineering into product management and is now responsible for the company’s analytics applications in the Creator Tools division. He and his team of engineers, data scientists and designers provide insights on how artists are performing on Pandora and how they can effectively grow their audience. This was a particularly fun interview for me since I have music playing on Pandora and occasionally use Next Big Sound’s analytics myself. Julien and I discussed:
- How Julien’s team accounts for designing for a huge range of customers (artists) that have wildly different popularity, song plays, and followers
- How the service generates benchmark values in order to make analytics more useful to artists
- How email notifications can be useful or counter-productive in analytics services
- How Julien thinks about the Data Pyramid when building out their platform
- Having a “North Star” and driving analytics toward customer action
- The types of predictive analytics Next Big Sound is doing
Quotes from Julien Benatar
"I really hope we get to a point where people don’t need to be data analysts to look at data."
"People don’t just want to look at numbers anymore, they want to be able to use numbers to make decisions."
"One of our goals was to basically check every artist in the world and give them access to these tools and by checking millions of artists, it allows us to do some very good and very specific benchmarks"
“The way it works is you can thumb up or thumb down songs. If you thumb up a song, you’re giving us a signal that this is something that you like and something you want to listen to more. That’s data that we give back to artists.”
“I think the great thing today is that, compared to when Next Big Sound started in 2009, we don’t need to make a point for people to care about data. Everyone cares about data today.”

Tuesday Jan 29, 2019
Jason Krantz is the Director of Business Analytics & Insights for the 135-year-old company Weil McLain and Marley Engineered Products. While the company is responsible for helping keep homes and businesses warm, Jason is responsible for the creation and growth of analytical capabilities at Weil McLain, and he was recognized in 2017 as a “Top 40 Under 40” in the HVAC industry. I’m not surprised, given his posts on LinkedIn; Jason seems very focused on satisfying his internal customers and ensuring that there is practical business value anchoring their analytics initiatives. We talked about:
- How Jason’s team keeps their data accessible and relevant to the issue they need to solve for their customer
- How Jason strives to keep the information simple and clean for the customer
- How Jason helps drive analytics in a company culture with a lot of legacy (from its people to its parts)
- The importance of focusing on context
- How Jason drives his team to be business partners, not report generators
Quotes from Jason Krantz:
“You realize that small quick wins are very effective because, at its core, it’s really important to get executive buy-in.”
“I’m a huge fan of simplicity. As analytics pros, we could very easily make very complex, very intricate models, and just, ‘Oh, look at how smart we are.’ It doesn’t help our customers. …we only use about two or three different visual types and we use mostly the exact same visual set-up. I can train a sales rep for probably five minutes on all of our reporting because if you understand one, you’re going to understand everything. That gets to the theme again of just simplicity. Don’t over complicate, keep it simple, keep it clean.”
“…To get buy-in, you really got to have your business case, even to your internal customers, really dialed in. If you just bring them a bunch of crap, that’s how you’re going to lose credibility. They’re going to be like, ‘I don’t have the time to waste with you,’ even though we’re trying to be helpful.”
“What my team and I do is we really help companies weaponize their data assets.”