Beyond UX Design

By: Jeremy Miller

About this listen

Beyond UX Design’s mission is to give you the tools you need to be a truly effective UX designer by diving into the soft skills they won’t teach you in school or a boot camp. These soft skills are critical for your success as a UX professional.
Episodes
  • Survivorship Bias: Success Theater and the Data You Never See
    Apr 23 2026

    We've built our careers on case studies, portfolios, and success stories, but what if we're only ever seeing a fraction of the full picture? This week, we dig into survivorship bias and how it quietly shapes the decisions your product team makes every day.

    What if every case study and "here's how we did it" success story you've ever learned from was missing the most important part of the story?

    Every startup founder story sounds the same. Crazy idea. Doubters everywhere. Bet on themselves. Changed everything. It's a great narrative, but it's a narrative written entirely by the people who made it through. For every founder who ignored the critics and won, thousands did the exact same thing and quietly disappeared. It may sound like pessimism, but that's the math we've been ignoring.
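
    To make that math concrete, here's a minimal Python sketch (the numbers are hypothetical, ours rather than the episode's). It simulates a pool of founders where a trait like "ignored the critics" has nothing to do with success, then shows why the trait still looks like a winning formula when you only ever hear from the survivors:

        import random

        random.seed(42)

        N = 100_000        # hypothetical pool of founders
        P_TRAIT = 0.5      # half of them ignore the critics
        P_SUCCESS = 0.01   # success is rare and, in this toy model, independent of the trait

        founders = [(random.random() < P_TRAIT, random.random() < P_SUCCESS)
                    for _ in range(N)]

        # The stories we hear come only from the winners...
        survivors = [trait for trait, success in founders if success]
        print(f"Ignored the critics, among survivors: {sum(survivors) / len(survivors):.0%}")

        # ...but the trait is exactly as common among the failures nobody writes up.
        failures = [trait for trait, success in founders if not success]
        print(f"Ignored the critics, among failures:  {sum(failures) / len(failures):.0%}")

    Both numbers land around 50 percent: the trait predicts nothing. But if the only data you ever see is the survivor half of the table, it looks like a template.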

    This week's Cognition Catalog episode breaks down survivorship bias: why we instinctively focus on the outcomes we can see while the failures stay invisible. It shows up everywhere: in the startup mythology we've absorbed, in the portfolios we scroll through on LinkedIn, and in the way product teams anchor their planning on the projects that shipped rather than the ones that got quietly shelved.

    The good news is that this isn't a bias you're stuck with. There are practical ways to build better habits into how your team makes decisions, and it starts by asking a different question. Give this one a listen if you've ever wondered why your career feels like it doesn't quite measure up to everyone else's highlight reel.

    Topics:

    • 00:00 - The startup founder myth and why we only hear from the survivors

    • 01:23 - Welcome to the Cognition Catalog

    • 02:45 - The small business failure numbers that most people never talk about

    • 04:23 - What survivorship bias actually is and why it matters

    • 04:35 - Why portfolio case studies only show the work that succeeded

    • 05:35 - Why your career probably looks worse than everyone else's, and why that's an illusion

    • 08:16 - How the college dropout mythology turns exceptions into templates

    • 08:40 - How survivorship bias quietly shapes product team decisions

    • 09:02 - Why your active user research is a filtered sample

    • 09:28 - How survivorship bias shows up in team culture

    • 10:09 - Five practical ways to fight survivorship bias on your team

    • 12:11 - The one question you should always be asking about success stories


    Thanks for listening!

    We hope you dug today’s episode. If you liked what you heard, be sure to like and subscribe wherever you listen to podcasts! And if you really enjoyed today’s episode, why don’t you leave a five-star review? Or tell some friends! It will help us out a ton.

    If you haven’t already, sign up for our email list. We won’t spam you. Pinky swear.

    • Get a FREE audiobook AND support the show

    • Support the show on Patreon

    • Check out show transcripts

    • Check out our website

    • Subscribe on Apple Podcasts

    • Subscribe on Spotify

    • Subscribe on YouTube

    • Subscribe on Stitcher

    15 mins
  • Democratize Without Destroying: The Case for Research Charters with Ned Dwyer
    Mar 31 2026

    AI is making it easier than ever to run research, but faster doesn't always mean better. In this episode, we dig into what it really means to democratize research responsibly, and why your team probably needs a charter before someone does something they can't take back.

    Your team is already running research without you. So the real question is: are you going to help them do it well, or just hope for the best?

    Ned Dwyer is the co-founder and CEO of Great Question, an all-in-one UX research platform built to bring research to everyone in an organization, not just the people with "researcher" in their title. He's spent years thinking about how teams can democratize access to customer insights without turning research into a free-for-all, and his talk at UX Con is what first put him on my radar.

    In this conversation, we dig into one of the more divisive topics in our industry right now: research democratization. Ned makes a pretty compelling case that it's not the all-or-nothing argument a lot of people make it out to be. It's a spectrum, and where your organization should land on that spectrum depends on who you're researching, what decisions are being made, and how much risk is on the table. We also get into AI's role in all of this, from AI-moderated interviews to synthesized insights, and where teams tend to get themselves into trouble when they hand over too much to the machine without any real governance in place.

    The thing I found most useful in this conversation is Ned's concept of a democratization charter, a practical framework for defining who should be doing what kind of research, with which populations, and under what guardrails. It's something I honestly hadn't thought much about before meeting Ned, and I think it's one of the most actionable ideas we've talked about on the show. If your team is already using AI research tools (and let's be honest, they probably are), this conversation is worth your time.
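
    To make the idea tangible, here's a rough sketch of what a charter's rules might look like when written down as data. This is our illustration, not Ned's actual framework; every rule in it is hypothetical:

        # A hypothetical democratization charter, sketched as Python data:
        # who can run which methods, with which populations, under what guardrails.
        CHARTER = [
            {
                "who": "any PM or designer",
                "methods": ["unmoderated usability tests", "surveys"],
                "populations": ["opted-in beta users"],
                "guardrails": "research team reviews the plan async before launch",
            },
            {
                "who": "trained researchers only",
                "methods": ["moderated interviews", "diary studies"],
                "populations": ["vulnerable users", "churned customers"],
                "guardrails": "full review, consent script, and data-handling plan required",
            },
        ]

    The format matters far less than the fact that the rules exist before someone goes rogue.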

    Topics:

    • 01:45 - Ned's origin story and why he built Great Question

    • 04:10 - The pressure to move fast, and what gets lost when speed wins

    • 06:11 - The 80/20 rule: how to use AI without publishing slop

    • 09:45 - Democratization is a spectrum, not a binary

    • 12:35 - Where guardrails matter most: vulnerable populations and one-way-door decisions

    • 13:12 - The case for a democratization charter

    • 19:00 - AI moderation demystified: closer to a talking survey than a human interviewer

    • 23:00 - Ned's GoDaddy confession: how rogue research goes wrong

    • 27:00 - Participant fatigue and insight overload: the new risks AI introduces

    • 31:45 - Rogue research will happen regardless... your job is to make it safer

    • 43:28 - The Will Smith spaghetti analogy and where AI tools are headed

    Thanks for listening! We hope you dug today’s episode. If you liked what you heard, be sure to like and subscribe wherever you listen to podcasts! And if you really enjoyed today’s episode, why don’t you leave a five-star review? Or tell some friends! It will help us out a ton.

    If you haven’t already, sign up for our email list. We won’t spam you. Pinky swear.

    • Get a FREE audiobook AND support the show

    • Support the show on Patreon

    • Check out show transcripts

    • Check out our website

    • Subscribe on Apple Podcasts

    • Subscribe on Spotify

    • Subscribe on YouTube

    • Subscribe on Stitcher

    42 mins
  • Expectation Bias: Your Prediction Is Showing
    Mar 19 2026

    Have you ever walked out of a usability session completely confident in your findings, only to ship something that quietly missed the mark?

    What if the signal was there the whole time, and your brain just decided it wasn't worth logging?

    This week on the Cognition Catalog, we tackle expectation bias, which shapes what you notice before you've even decided what to think about it. Your brain has generated a prediction before the first participant clicks a button or a teammate presents their work, and that prediction quietly decides what registers as signal and what gets explained away, long before you've made a single conscious judgment about what any of it means.
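
    To see why a strong expectation swamps weak evidence, here's a quick Bayes calculation (our illustration with made-up numbers, not from the episode). Suppose you walk into a session 90 percent sure the design is good, and a participant's hesitation is twice as likely if it's actually bad:

        # Hypothetical numbers for illustration only.
        prior_good = 0.90                # you expect the design to test well
        p_hesitate_if_good = 0.30        # hesitation still happens with good designs
        p_hesitate_if_bad = 0.60         # ...but it's twice as likely with bad ones

        # Bayes' rule: P(good | hesitation)
        p_hesitate = prior_good * p_hesitate_if_good + (1 - prior_good) * p_hesitate_if_bad
        posterior_good = prior_good * p_hesitate_if_good / p_hesitate
        print(f"Belief the design is good after the hesitation: {posterior_good:.0%}")  # ~82%

    Evidence that favors "bad" two to one barely dents the prior, which is exactly how a hesitation ends up in the notes as "minor."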

    We get into the science behind why this happens, and trace the research back to psychologist Robert Rosenthal's work in the early 1960s. His experiments, including the landmark Pygmalion in the Classroom study with Lenore Jacobson, showed that expectations don't just color our perceptions; they can actually change outcomes. That's a sobering thought when you consider how many design decisions are built on research we assumed was neutral.

    We also dig into where this plays out on real teams: in usability sessions where hesitations get logged as "minor," in design reviews where leadership-championed features get a generous read while quietly doubted projects get interrogated at every turn, and in how we evaluate colleagues whose reputations have already done the evaluating for us. If any of that sounds familiar, this episode offers five concrete habits to help you catch the filter before it's already done its job. Give it a listen.

    Topics:

    • 00:00 - Perception is prediction

    • 02:04 - A UX research cautionary tale

    • 03:23 - Defining expectation bias

    • 03:42 - Prediction errors explained

    • 04:31 - Pygmalion effect origins

    • 06:03 - Expectation vs confirmation

    • 06:30 - How it warps team decisions

    • 08:31 - Habits to reduce bias

    • 10:47 - Wrap up and next steps


    Thanks for listening! We hope you dug today’s episode. If you liked what you heard, be sure to like and subscribe wherever you listen to podcasts! And if you really enjoyed today’s episode, why don’t you leave a five-star review? Or tell some friends! It will help us out a ton.
    If you haven’t already, sign up for our email list. We won’t spam you. Pinky swear.
    • Get a FREE audiobook AND support the show
    • Support the show on Patreon
    • Check out show transcripts
    • Check out our website
    • Subscribe on Apple Podcasts
    • Subscribe on Spotify
    • Subscribe on YouTube
    • Subscribe on Stitcher

    13 mins