Designing with AI Patterns (Glaringly Obvious)

I had a great chat with Sharang Sharma about why old software patterns don’t cut it for AI, and how design teams can build new patterns that make AI trustworthy and transparent.

Unlike traditional systems, AI products can’t just be bolted onto existing UI models. Without new design patterns, AI risks feeling confusing or unreliable. Sharang argues that success comes from putting AI where it belongs in the workflow: making its decisions clear, preparing for mistakes, and ensuring people stay in control.

Watch my full Glaringly Obvious interview with Sharang Sharma:

Key Takeaways:

  • New UX patterns → Old models break with AI’s shifting behavior.
  • Don’t add AI just to add it → Use it only when it solves real problems.
  • Human control first → Keep people in the loop, not full automation.
  • Transparent AI decisions → Show confidence, data, and reasoning.
  • Plan for errors → Build recovery paths and feedback loops.
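To make the transparency and error-planning takeaways concrete, here is a minimal sketch of what a response shape might look like in code. All names (`AIResponse`, `present`, the confidence threshold) are hypothetical and illustrative, not any real product's API:

```python
from dataclasses import dataclass, field

# Hypothetical shape for an AI answer that surfaces confidence,
# sources, and reasoning -- illustrating the takeaways above.
@dataclass
class AIResponse:
    answer: str
    confidence: float                      # 0.0-1.0, the model's own estimate
    sources: list = field(default_factory=list)
    reasoning: str = ""

def present(response: AIResponse, threshold: float = 0.7) -> str:
    """Keep the human in control: surface confidence and sources,
    and when confidence is low, route to review instead of auto-applying."""
    if response.confidence < threshold:
        # Recovery path: low confidence asks the person to decide.
        return f"Unsure ({response.confidence:.0%}). Please review: {response.answer}"
    cited = ", ".join(response.sources) or "no sources"
    return f"{response.answer} (confidence {response.confidence:.0%}; based on {cited})"

print(present(AIResponse("Renewal is due March 3.", 0.92, ["billing records"])))
print(present(AIResponse("Cancel the order?", 0.41)))
```

The point isn't the exact fields; it's that confidence, provenance, and a low-confidence fallback are designed into the interface from the start rather than bolted on.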

Sharang’s approach is a practical one for teams facing AI’s complexity. Instead of treating AI like another feature, he frames it as a new design challenge: building patterns that ensure safety, clarity, and reliability. Here’s his article, Design needs new patterns to build AI experiences.

When paired with UX metrics like comprehension, control, and trust, these patterns help teams see whether their AI designs are really working, where users get lost, or where systems need to improve before launch.

:speech_balloon: Discussion:
How is your team building patterns to make AI systems feel transparent and trustworthy instead of overwhelming?

4 Likes

Woah, super cool. This ties into a development framework that I read about recently.

These ideas are hitting the nail on the head. Human-in-the-loop is becoming even more important as AI helps automate a lot of basic work.

Still thinking about this framework and @nathaliesmith’s post around the latest trends: 5 Job Trends in Product Design Right Now

If soft skills matter, how is that becoming a part of the product? Is that through “social” interfaces?

I know for a fact that outcomes are something everyone is starting to be able to reach for, if that makes sense. Less pure execution and output.

I feel like this is a bit up @stefaan_vuylsteke’s alley in terms of focusing on outcomes. Have you had any experience with AI processes yourself?

Good callout here @ben - I think that soft skills become part of the product when the product itself starts communicating like a human would:

Clear reasoning → Trust
Good boundaries → Control
Graceful recovery → Confidence

This is where my brain goes.

1 Like

Yeah, that makes sense to me. I think human → human interaction actually becomes more important within a product. The question would be: what does that look like, and how might it be scalable?

I don’t know if AI will completely be able to fill the communication gap…

1 Like