Judgment shows up when designers make clear decisions, explain tradeoffs, understand context, and take responsibility for outcomes. AI can generate options quickly, but it cannot decide what matters, what to ignore, or when something is good enough.
Designers stay relevant by forming strong points of view, grounding choices in reality, and knowing when and why to say yes or no. That ability to choose, not just create, is the real advantage.
Let’s jump into the discussion:
Taste is often treated as the designer’s edge, but taste is subjective and easy to copy. Judgment is different. Judgment shows up when choices have consequences. When tradeoffs are real. When someone has to stand behind a decision.
That raises an uncomfortable question for teams using AI every day.
If AI is doing more of the making, who is responsible for the deciding?
Mike is a design leader who joined me for one of my first Glaringly Obvious conversations, where we covered an overlapping topic: rethinking “Good Enough.”
My first big question, as a design leader: where does judgment show up in your work today? There are a lot of moving pieces in decision making. How much of your judgment shapes how your team uses AI versus what you personally bring to the work?
That’s a great question, and one I’ve been circling indirectly for a while.
I think design leaders today don’t just exercise judgment themselves—they’re responsible for enabling good judgment across the systems, teams, and decisions they influence.
In my article, I talk about judgment at the individual level, using my art school background as a metaphor. I still believe that’s true. But as AI tooling accelerates the pace and volume of “making,” what matters more is whether teams are given clear moments where judgment can actually happen.
We don’t need to reinvent user-centered design, the double diamond, or the scientific method because of AI. Those models still work. What does need to change is how explicit we are about decision points—where tradeoffs are made, where context matters, and where someone is accountable for saying “this is good enough” or “this is not.”
That’s where leadership shows up.
Good judgment doesn’t come from slowing everything down—it comes from creating the conditions for the right decisions to surface: bringing in the right SMEs at the right time, grounding choices in research, and helping teams think strategically through the eyes of users rather than defaulting to what AI makes easiest.
AI can accelerate options, but judgment is still how we choose.
…helping teams think strategically through the eyes of users rather than defaulting to what AI makes easiest.
This touches on why I think taste is still important, though I take your point that the word is applied too broadly. Taste is often the first indicator we have that someone has the experience to recognize and differentiate between the best options.
I’m curious, @mschindler, how we can be more explicit about decision points. Are you speaking to recognizing junction points in a design process? Or is this something new that’s needed because of AI workflows?
This is great. So, from a judgment perspective in design, is there a hierarchy that matters when you bring discernment to a decision? What is the most important thing to consider in these decisions at a leadership level? Can you rank these?
Question: do you think AI will get to the point where it can make decisions better?
I’m always going to refer to programming and its history as a great example:
During the early days, engineers had to determine where the data was going, when data would get cleaned up, and how the smaller intricacies worked together in primitive languages like assembly and even C.
Nowadays, we have frameworks (higher-level languages with automatic garbage collection) that make all of those decisions, taking on that responsibility because they do a better job than 99% of humans, moving the effort toward business logic.
Of course, we have no idea how great models will truly get (I personally think we’re already hitting a wall), but figured I’d pose the question and get us to think further out!
Thank you, @ben. And I like the programming analogy.
I do think AI will continue to get better at certain kinds of decisions, ones that reward accuracy, pattern recognition, optimization, and the like. I think that’s similar to what higher-level languages and frameworks did for engineering.
But I think an important distinction is that those tools never eliminated judgment; they just moved it. Engineers may have stopped worrying about memory allocation, but they still own system behavior, failure modes, and tradeoffs.
Most decisions in design are less mechanical and more contextual, tied to consequential tradeoffs in problem solving and human outcomes. We’re the ones who will always have to decide what “good enough” looks like.
AI can absolutely help us with that by surfacing options and relevant info faster. But the responsibility for choosing still sits with us, especially when the options impact real people.
Yeah, I’m not dismissing taste outright. I agree it matters. Much of my writing has focused on the rhetorical delivery of AI, and my point is that the concept of taste has been overstated relative to judgment and decision-making, IMO.
And you’re right that I’m talking about design process junctions (great word, BTW). That’s something I’m actively working through at an enterprise level right now.
I don’t see this as entirely new because of AI. I think those junctions have always existed. What’s new is that AI accelerates things so quickly that those moments can slip past without teams noticing they’ve made decisions at all.
These junctions aren’t theoretical; they always happen. They’re specific moments where someone needs to stop, make a choice, and accept the consequences, and AI now makes those moments easier to skip.
Interestingly, the way AI systems perform path disambiguation is actually a useful metaphor. Not because AI should decide for us, but because it forces us to clarify intent before executing. That’s the kind of judgment I think design leaders need to be making visible.
@mschindler Great read here. Since I’m leaning into focusing on outcomes this year, I’m interested in your POV on this framing that came to mind.
This made me think less about taste and more about time spent noticing outcomes. In your opinion, at what point does someone’s domain expertise (and judgment) become trustworthy?
Sometimes my gut disagrees with something, and I use that disagreement to push me to dig deeper into these AI patterns. Here’s an example from our work: the AI agent decided that two links should open in a separate window and also included a sync button to call an API. The agent designed this in a way that I understand, but that I believe creates more friction and extra work for the user, who does not understand what it does.
Curious, if judgment depends on clear decision points, what are the decisions design leaders must explicitly own, and which ones should never be delegated to AI? Or is it a “feel” thing based on lived experience, just interacting with interfaces and code?
Great question, @nathaliesmith. And I like the way you’re framing this around outcomes.
Herbert Simon once suggested that it takes roughly ten years of working in a domain for someone to be considered an expert. I don’t think that’s about time alone so much as exposure to consequences — seeing decisions play out, watching where things break, and learning how context changes outcomes.
AI complicates this because it can surface answers faster than ever. But speed isn’t the same as trustworthiness. Trust, at least as I think about it, comes from interpretation over time. It’s noticing patterns, understanding effects, and knowing when an answer might be technically correct but contextually or emotionally wrong.
That’s where human judgment still matters. Domain expertise becomes trustworthy when someone has repeatedly made decisions, seen the results, adjusted their thinking, and can explain why they’d make a similar or different call next time. AI can assist that process, but it can’t accumulate responsibility for outcomes in the same way people do.
That is getting deep, @Bryan, but I think we can still draw some clear lines, even without a specific tool or use case.
At a minimum, I don’t think design leaders should ever delegate decisions about problem framing, success criteria, or acceptable risk to AI. Those are human calls because they’re value-laden and consequential. They define what we’re optimizing for and who bears the cost if we’re wrong.
I do think AI can responsibly participate downstream by generating options, exploring variations, finding patterns, even recommending paths. But a human should always be in the loop where a decision commits the team to an outcome.
And I agree that trust here doesn’t come from intuition alone. What we can still rely on is the scientific method—forming hypotheses, testing, observing outcomes, and adjusting. Judgment is essential in how we design those tests, interpret results, and decide when something is “predictable enough” to ship.
So for me, it’s not a “feel” thing — it’s lived experience plus evidence, applied repeatedly over time. AI can certainly help with forming and making sense of evidence. Humans still own the judgment, though.
Thank you, @mschindler. We appreciate you sharing your thinking and collaborating with the community in the discussion. Lots here for design leaders to sit with as AI becomes part of the everyday workflow.
Your framing around judgment as something leaders must design for, not just exercise, really makes sense. Making decision points explicit, owning tradeoffs, and staying accountable for outcomes are ideas we need to keep leaning into as AI speeds up the making side of the work.
Yeah, this is a constant conversation for us at this point. Definitely appreciate this thinking as a starting point toward tailoring these considerations to each project.
@mschindler regarding an excerpt from your article:
“What we’re really reaching for isn’t taste. It’s a strong point of view, shaped by memory, practice, and a willingness to stay in the work long enough to know what matters.”
If taste is actually a strong point of view, how do we measure whether that perspective is actually good or bad? What leads to “good taste” or “bad taste”?
Google seems to suggest taste can be measured:
“Taste is a neurobiological and psychological feedback loop wherein the brain’s medial orbitofrontal cortex assigns a “hedonic value” to aesthetic stimuli, triggering dopaminergic rewards that reinforce specific visual patterns as core components of the Aesthetic Self. This internal valuation is expressed through enclothed cognition, a process where the symbolic meaning of chosen styles modulates cognitive performance and identity consistency, creating a self-perpetuating cycle between neural pleasure and external expression.”
My interpretation of what @mschindler is saying is that taste still matters, but that the feedback loops “to know what matters” ultimately drive our ability to deliver tasteful outcomes that create business value, user benefit, and system impact. This is judgment.
Been thinking about this a lot lately. As I talk with more leaders, it’s becoming clearer that the question is…which loops do they need to actually own?
Feels like what’s crystallizing for me is that leaders can’t outsource these loops. AI can help generate and interpret, but owning the loop, seeing the patterns, and deciding what matters still sits with them.