
163.8K Downloads · 193 Episodes
Does the value of your insights, analytics, or automated intelligence product sometimes feel invisible to buyers and users? Does your product have impressive analytics and AI technology, but user adoption and sales still are not where you want them to be?
While it has never been easier to build data-driven products, why does it still seem so hard to build indispensable data products that users can't live without—and will gladly pay for?
I’m Brian T. O’Neill, and on Experiencing Data — a Listen Notes top 2% global podcast — I help founders and B2B software product leaders close the Invisible Intelligence Gap through solo episodes and interviews with leaders at the intersection of product management, UX design, analytics, and AI.
If you’re building analytics, BI, or automated intelligence (AI) products, this non-technical show will help you better connect your product to outcomes, value, and the human factors that still matter — even in the age of AI.
Subscribe today on all major platforms or browse the episode archive.
Get 1-Page Episode Summaries:
https://designingforanalytics.com/experiencing-data-podcast/
About the Host, Brian T. O'Neill:
https://designingforanalytics.com/bio/
Episodes

2 days ago
I’ve seen this challenge again and again with teams building analytics and AI products: nobody can define what quality means to the end user, or how to measure it. The answer? “Adoption.” The problem is that “amount of usage” tells you nothing useful about your customer’s experience with your product beyond “it’s not zero.” So what should you be measuring instead, so your buyers don’t quickly abandon the product once the end users get their hands on the keyboard (or agent!)?
The answer is to understand, through qualitative measures, what users’ experiences are like now, so you have a baseline from which to compare future product investment. When you can define the quality of their current experience, it’s much easier to imagine their better future, and you now have a change you can measure. Measurable outcomes are the foundation of high-value, sticky B2B analytics and intelligence products—and when your end users’ lives are improved, the sales close and the renewals aren’t questioned. So today, I jump into “how do you measure UX?” so you aren’t blindsided when the sale doesn’t close or that renewal doesn’t come through.
Highlights / Skip to:
- Why I think product adoption (i.e., product usage analytics) is misleading as a means to define whether your solution is valuable to users (1:34)
- Getting a better baseline reading of user experience so you can improve their life and your sales/retention KPIs (4:56)
- How to measure, hypothesize, and observe if your product is working “well” (7:35)
- Discovering where your product is being appreciated (20:28)
- What about when AI is in the loop? (23:05)
- The risk of creating bigger messes with AI capabilities (28:20)
- How to gain useful insights from your customer exposure time (31:28)
- The quantitative metrics you can use to help measure UX outcomes (36:17)
- Why "ship it and see if it gets used" isn't a product strategy (40:52)
Links
- More Resources
- Get 1x1 Help from me if you know your product’s value is opaque, or the user experience is hindering your sales or adoption goals
191 - Turning Agents into Software that Sells [Smarter!] with Zig.ai CEO Steve Ancheta
Tuesday Mar 31, 2026
I'm talking with Steve Ancheta, CEO of Zig, a platform designed to free sales teams from repetitive, non-revenue-generating tasks. CRM and logistical tasks can consume up to 72% of a sales rep’s week, but Zig’s AI agents handle them so reps can focus on closing deals. Unlike tools built for managers, Zig follows a rep-first design—simple, intuitive, and aligned with the motivation to sell more—while also creating an intelligence layer that preserves institutional knowledge and accelerates onboarding for new hires.
I wanted to chat with Steve about how he built a product that is both used—and worth paying for—with AI under the hood. Rather than relying on chat prompts, Zig surfaces prioritized tasks in panels and cards, integrates with CRMs and Slack, and builds confidence scores from user interactions.
Because Steve comes from the world of sales—and that’s the domain his product sits in—I wanted to explore his “problem clarity” and share that with you, since I often find data and technical founders to be more solution-oriented and lacking in this area. Steve was an open book with me, and I’m hoping other founders trying to turn analytical complexity into commercial clarity can see how Steve is using AI and agents to make data work for end users—and worth paying for.
Finally, I also challenge Steve to answer whether Zig.ai is a software company or a services company with a product behind the scenes—a question you might also ask yourself depending on your GTM model.
Highlights/ Skip to:
- What is Zig.ai? (00:48)
- When managers see the value of a product but end-users don’t—and how product leaders need to react (5:20)
- What Zig’s UX is like and how it was designed (9:45)
- The sales process and risks salespeople face when demoing Zig (16:12)
- How Zig addressed their time-to-value challenge during the product experience (20:14)
- How Zig found a problem people were willing to pay to solve (24:16)
- We discuss whether an AI product company might be a services company with technology or a traditional software company (24:16)
- The Invisible Intelligence Gap Steve has observed within Zig’s business space (AI and analytics-powered sales tooling) (27:57)
- Why Steve isn’t worried about the major CRMs building internal solutions to circumvent third-party tools like Zig (35:37)
- Steve Ancheta’s advice for trying to bring sophisticated data products to market (39:26)

Tuesday Mar 17, 2026
I’ve seen this pattern repeatedly with teams building analytics and AI products: the issue usually isn’t the quality of the models or the sophistication of the data. The technology often works just fine. The real breakdown happens earlier—when teams begin with the data they already have and try to figure out what to build, instead of starting with the decisions their customers need to make.
That approach often produces polished dashboards and compelling features that generate interest, but fail to drive real action. The missing piece is context. Decisions in the real world depend on incentives, habits, risk tolerance, and uncertainty—not just clean data. If your product doesn’t reflect that reality, it won’t meaningfully change behavior.
Another common trap is assuming all available data is *evidence* worth surfacing. This “more is better” mindset leads to cluttered analytics tools that offload interpretation onto users. Even conversational AI interfaces can fall into this, encouraging open-ended exploration without helping users reach decisions.
The analytics and AI products that succeed take a different approach. They’re designed around decision-making to reduce uncertainty, fit into real workflows, and guide users toward clear actions. In doing so, they bridge the gap between analytical capability and real-world value, making the product’s intelligence tangible, usable, and worth paying for.
Highlights/ Skip to:
- The core mistake I see people making during the discovery process of building an insights product (2:07)
- Improve your product strategy by working “backwards” and understanding what decisions customers are trying to make (6:06)
- Insights don’t equal decisions in the real world (7:39)
- Designing with a goal of improving the lives of users in mind (11:17)
- Prototypes as a means of discovery (vs. product/solution validation) (13:48)
- The bias of data availability (20:39)
- Using AI and LLMs for discovery and product UX (24:17)
- Why AI-assisted analytics products should shape UX around making structured decisions (31:03)
- Overcoming the Invisible Intelligence Gap (34:57)
- Final thoughts (37:21)
Links
- CED: My UX Framework for Designing Analytics Tools That Drive Decision Making https://designingforanalytics.com/ced
- Need my help finding the right use cases for your analytics or AI product? Book a complimentary 1x1 discovery call with me: https://designingforanalytics.com/contact/

Wednesday Mar 04, 2026
189 - The Invisible Intelligence Gap
I’ve worked with a lot of teams building analytics and insights products and decision-support systems. The pattern I keep seeing isn’t that the math is wrong or the ML / AI models are weak. Much of the time, the technology is fine.
The challenge is that all that [not always artificial!] intelligence is not surfacing as value to your customer. Dashboards look impressive. AI features demo well. Pilots get strong reactions. And then… usage stalls. Sales cycles drag. Teams quietly revert to spreadsheets. Buyers, or rather, prospective buyers, say they “like the vision,” but deals don’t move into the “closed” stage.
If your gut tells you the primary blocker is not your sales process, pricing/packaging, procurement, data quality, or risk/compliance, then you may be suffering from what I call the Invisible Intelligence Gap.
Your product’s intelligence simply isn’t visible to them. Three forces tend to amplify this gap. First, the value translation gap: buyers and users can’t easily connect insights to their own goals. Second, the workflow alignment gap: the product doesn’t fit how work actually gets done. Third, the trust and control gap: users lack confidence in how the system reaches conclusions. My frameworks, like CED, FOWA, and MIRRR, are designed to close these gaps by making value obvious, workflows smoother, and AI more trustworthy.
Highlights/ Skip to:
- The challenge of insights not providing value to buyers, end-users, and stakeholders (3:20)
- How the invisible intelligence gap manifests itself (6:42)
- Common symptoms of the invisible intelligence gap (8:10)
- Examples of how changes in human behavior cause the gap (10:00)
- The three amplifiers of the invisible intelligence gap (11:47)
- The CED framework for addressing the intelligence gap problem (18:28)
- Addressing the invisible intelligence gap with FOWA (20:14)
- Using MIRRR to solve the invisible intelligence gap (21:25)

Tuesday Feb 17, 2026
I’m continuing my exploration of a hard truth many leaders of analytics software companies run into: deals don’t stall because the tech is weak. Instead, they stall because prospects can’t see the value soon enough or the risk of changing the status quo is too high. This is often a product problem, not a sales one, and obtaining Flow-of-Work Alignment (FOWA) may help you start closing more evals and deals. So what is FOWA? The idea is simple, but demanding: stop showcasing features and start designing experiences that fit into how customers already do their work, create value, and add delight when your product is added into the loop.
Getting to FOWA means tailoring demos with realistic, industry-specific data, reducing mental translation, and minimizing behavior change. In this scenario, improvements become small, testable bets tied to outcomes, not feature checklists. UX and usability are not cosmetic; they should shape trust, adoption, and buyability.
When prospects can clearly see themselves succeeding with your product, value feels obvious, evals progress, and deals close.
Highlights/ Skip to:
- Steps to implementing Flow-of-Work Alignment (FOWA):
- Tailor your demo or POC to map to the prospects' world and their workflow (1:53)
- Treat product improvements as bets that have to be tested so that observable outcomes are what you’re holding your product team accountable for (3:57)
- Reducing perceived behavior change (6:39)
- Realize that your product’s visual design is likely impacting your product’s clarity and its desirability (12:29)
- Aligning your sales and product teams around customer outcomes and not feature gaps (18:03)

Tuesday Feb 03, 2026
I’m digging into a frustrating reality many teams face: even technically superior analytics and AI products routinely lose deals—not because the KPIs or models aren’t good enough, but because buyers and users can’t clearly see how the product fits into their day-to-day work. Your demos and POCs may prove what’s possible, but long time-to-understanding, heavy thinking burden on the user, and required behavior or process changes introduce risk—and risk kills momentum. When value feels complicated, sales don’t move forward.
Adding to the challenge is that many sales efforts focus almost entirely on the fiscal buyer while overlooking the end users who actually have to adopt the product to create outcomes. This buyer–user mismatch, combined with status quo bias, often leads to indecision rather than change.
To address this, I explore the idea of thinking about the sales challenge as a product problem—and I introduce the idea of achieving Flow of Work Alignment (FOWA). The goal isn’t better persuasion—it’s clearer value. Strong FOWA means transitioning from demonstrating capabilities to helping customers see themselves—and their workflows—represented in your demos and POCs. The result? Prospects understand your value quickly, ask deeper, contextual questions, and deals move forward.
Highlights/ Skip to:
- Data products must work harder to expose value clearly to avoid the dreaded “closed-lost” deal stage in your CRM (1:38)
- Making your data product’s value instantly obvious (5:18)
- How the “old model” of selling based on capabilities and feature demos can lead to lost sales (7:22)
- What Flow-of-Work Alignment is and how it can help you unlock deals (13:02)
- How to know if you have achieved FOWA or not in your product and sales process (13:58)

Tuesday Jan 20, 2026
186 - Why Powerful AI & Analytics Products Feel Useless to Buyers
I’m back! After about 7 years (or more) of bi-weekly publishing, I gave myself a break (to have the flu, in part), but now it’s back to business! In 2026, I’ll be focusing the podcast more on the commercial side of data products. This means more guests who are founders, CEOs, and product leaders at small and mid-sized B2B software companies building technically impressive analytics and AI products. With all the attention on AI, I want to focus on the things that don’t change: what do value and outcomes look like to buyers and users, and how do we recreate them with analytics and AI? What learnings and changes have leaders had to make on the product and UI/UX side to get buyers to buy and users to use?
So, that brings us to today’s episode. Today, I’ll explain why I think model quality, analytics data, and raw AI capability are quickly becoming commodities, shifting the real challenge to how effectively companies can translate their data and intelligence into value that buyers and users can clearly understand and defend.
I dig into a core tension in B2B products: fiscal buyers and end users want different things. Buyers need confidence, risk reduction, and defensible ROI, while users care about making their daily work easier and safer. When products try to appeal broadly or force customers to figure out how AI fits into their workflows, adoption breaks down. Instead, I make the case for tightly scoped, workflow-aware solutions that make value obvious, deliver fast time-to-value, and support real decisions and actions.
Highlights/ Skip to:
- Refocusing the trajectory of the show for 2026 (00:31)
- Turning your product’s intelligence into clear, actionable solutions so users can see the value without having to figure it out themselves (4:32)
- You’re selling capability, but buyers are buying relief from a specific pain point (7:33)
- Asking customers where AI fits into their workflow is poor design (16:57)
- Buyers and users both require proof of value, but in different ways (20:05)
- Why incomplete workflows kill trust (24:18)
- The importance of translating technical capability into something a human is willing to own (30:09)

Tuesday Dec 23, 2025
Bill Saltmarsh joins me to discuss where a modern CDO gets the inspiration to “operate in the producty way” in his domain, which is healthcare. Now Vice President of Enterprise Data and Transformation and Chief Data Officer at Children’s Mercy Kansas City, Bill saw in his early days as an analyst a gap between what stakeholders asked for and the outcomes they actually sought. This convinced him that data teams need to pause, ask better questions, and prioritize meaningful outcomes over quickly churning out dashboards and reports.
Bill and I discuss how a producty mindset can be embedded across an organization. He also talks about why data leaders must set firm expectations. We explore the personal and cultural shifts needed for analysts and data scientists to embrace design, facilitation, and deeper discovery, even when it initially seems to slow things down. We also examine how to define value and ROI in healthcare, where a data team's impact is often indirect.
By tying data efforts to organizational OKRs and investing in governance, strong data foundations, and data literacy, he argues that analytics, data, and AI can drive better decisions, enhance patient care, and create durable organizational value.
Highlights/ Skip to:
- What led Bill Saltmarsh to run his team at Children’s Mercy “the producty way” (1:42)
- The kinds of environments Bill worked in prior that influenced his current management philosophy (4:36)
- Why data teams shouldn’t be report factories (6:37)
- Setting the standard at the leadership level vs the everyday work (10:53)
- How Bill is skilling and hiring for non-technical skills (i.e. product, design, etc) (13:51)
- Patterns that data professionals go through to know if they’re guiding stakeholders correctly (20:54)
- The point when Bill has to think about the financial side of the hospital (26:30)
- How Bill thinks about measuring the data team’s contributions to the hospital’s success (30:28)
- Bill’s philosophy on generative AI (36:00)

Tuesday Dec 09, 2025
In this final part of my three-episode series on accelerating sales and adoption in B2B analytics and AI products, I unpack a growing challenge in the age of generative AI: what to do when your product automates a major chunk of a user’s workflow only to reveal an entirely new problem right behind it.
Building on Part I and Part II, I look at how AI often collapses the “front half” of a process, pushing the more complex, value-heavy work directly to users. This raises critical questions about product scope, market readiness, competitive risks, and whether you should expand your solution to tackle these newly surfaced problems or stay focused and validate what buyers will actually pay for.
I also discuss why achieving customer delight—not mere satisfaction—is essential for earning trust, reducing churn, and creating the conditions where customers become engaged design partners. Finally, I highlight the common pitfalls of DIY product design and why intentional, validated UX work is so important, especially when AI is changing how work gets done faster than ever.
Highlights/ Skip to:
- Finishing the journey: staying focused, delighting users, and intentional UX (00:35)
- AI solves problems—and can create new ones for your customers—now what? (2:17)
- Do AI products have to solve your customers’ downstream “tomorrow” problems too before they’ll pay? (6:24)
- Questions that reveal whether buyers will pay for expanded scope (6:45)
- UX outcomes: moving customers from satisfied to delighted before tackling new problems (8:11)
- How obtaining “delight” status in the customer’s mind creates trust, lock-in, and permission to build the next solution (9:54)
- Designing experiences with intention (not hope) as AI changes workflows (10:40)
- My “Ten Risks of DIY Product Design…” — why DIY UX often causes self-inflicted friction (11:46)
Links
- Listen to Part I (Episode 182) and Part II (Episode 183)
- Read: “Ten Risks of DIY Product Design On Sales And Adoption Of B2B Data Products”
- Stop guessing what is blocking your own product’s adoption and sales:
Schedule a Design-Eyes Assessment with me, and in 90 minutes, I'll diagnose whether you're facing a design problem, a product management gap, a positioning issue, or something else entirely. You'll walk away knowing exactly what's standing between your product and the traction you need—so you don't waste time and money on product design "improvements" that won't move your critical KPIs.

Wednesday Nov 26, 2025
In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.
To unpack this, I explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a dependency for getting sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small “bets” helps you build small and learn fast so you can accelerate value creation.
Highlights/ Skip to:
- Continuing the journey: designing for users, workflows, and tasks (00:32)
- How UX impacts sales—not just usage and adoption (02:16)
- Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11)
- Definition of a UX outcome (7:30)
- Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
- Spotting friction and solving the right customer problems first (15:34)
- Collecting actionable user feedback (20:02)
- Moving users along the scale from frustration to satisfaction to delight (23:04)
- Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)
Quotes from Today’s Episode
One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve.
People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment.
And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality.
What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work.
Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor.
And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.
__
Want people to pay for your product? Solve an *observable* problem—not a vague information or data problem. What do I mean?
“When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones.
Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem.
If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around.
Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why.
Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort.
And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory.”
__
One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product *changes*—or you’re making *improvements* that might make the product worth paying for at all, worth paying more for, or easier to buy.
"It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like.
If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be.
Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work.
When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous.
That act of grounding yourself in the current state allows every subsequent decision, prioritizing fixes, determining scope, measuring progress, to be aligned with reality rather than assumptions.
And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."
__
Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you.
"People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself.
Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal.
Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them.
That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses.
Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted.
And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."
__
Most founders and data people tend to measure UX quality by “counting usage” of their solution. Tracking usage stats, analytics on sessions, etc. The problem with this is that it tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics—but they don’t reflect how people feel.
There are better measurements to use for evaluating users’ experience that go beyond “willingness to pay.”
Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.
"In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing.
Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well.
A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow.
What you want are visible, measurable, user-centric outcomes, outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take.
When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling.
And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it’s anchored in the lived reality of the people who use it."
Links
- Listen to part one: Episode 182
- Schedule a Design-Eyes Assessment with me and get clarity, now.
