Learning Loops With Pavel (Glaringly Obvious)

I had a great conversation with Pavel Samsonov about why speed alone isn’t enough, and how AI without strong learning loops just creates more noise. He reframes AI not as an engine for endless prototypes, but as a tool that only multiplies value when paired with critique and reflection. With weak loops, AI multiplies waste.

Watch the full interview here:

Here are the themes that came up:

  • Speed ≠ impact
  • Three loops, three questions
  • Critique drives learning
  • People make decisions, not AI
  • Good > done

Pavel’s take is that great product work isn’t about how many things you ship, but how much you learn and improve with each loop. That’s why his ideas connect so well with Glare: by continuously measuring signals like success, comprehension, and confidence, teams can see not just whether they’re building faster, but whether they’re building better.

:speech_balloon: Discussion:
How is your team making sure learning loops are built into your design process? Do you have ways of checking not just “did we build it right,” but also “is this the right problem to solve?”

5 Likes

Expanded on this idea by exploring how learning loops require user input.

The idea is simple. Design has always centered on users, but its feedback loops with the business have been blunt.

When those loops become fulcrums, design gains real leverage, and it starts to shape better experiences and drive business outcomes.

2 Likes

You mean that after finding “the right problem to solve” and “building the right solution to that problem,” the process should include a check on whether “the right problem to solve” has changed? No, we don’t have that. Our metrics are more about optimising the already existing solution.

1 Like

Thanks for chiming in Raul!

We use UX metrics, which are slightly different from plain analytics.

When paired with qualitative responses, they reveal patterns that are hard to see otherwise. This creates strong design signals you can use to generate ideas, evaluate approaches, and validate solutions.

2 Likes

Thanks, Bryan. Yes, I’m studying and getting familiar with these UX metrics.

2 Likes

They’re great tools, but they require understanding how people work.

Here’s a great example: a competitive test across 500 participants, where each concept was evaluated blind. If you look at the data, you’ll see that Kleen drew negative sentiment around “unoriginal.”

What did it reveal? The photos on the site had a black overlay to make the text easier to read, which made the images appear muddy and less bright. Those were changed, and the site got an immediate revenue boost the next week.

(Once I point this out, you can see it even at the thumbnail level; it’s impossible to miss.)
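For anyone curious what that kind of sentiment data looks like under the hood, here’s a minimal sketch of tallying blind-test responses per concept and descriptor. The data shape and numbers are invented for illustration, not from the actual 500-participant study:

```python
from collections import defaultdict

def sentiment_by_attribute(responses):
    """Tally blind-test responses into per-concept, per-word sentiment scores.

    Each response records the word a participant chose to describe a concept
    and whether they meant it positively (+1) or negatively (-1).
    """
    tallies = defaultdict(lambda: defaultdict(int))
    for r in responses:
        tallies[r["concept"]][r["word"]] += r["sentiment"]
    # Convert nested defaultdicts to plain dicts for readability.
    return {concept: dict(words) for concept, words in tallies.items()}

# Toy data standing in for real participant responses (hypothetical values).
responses = [
    {"concept": "Kleen", "word": "unoriginal", "sentiment": -1},
    {"concept": "Kleen", "word": "unoriginal", "sentiment": -1},
    {"concept": "Kleen", "word": "clean", "sentiment": +1},
]
print(sentiment_by_attribute(responses))
# {'Kleen': {'unoriginal': -2, 'clean': 1}}
```

A negative total on a word like “unoriginal” is the kind of signal that points you back at the design (in this case, the muddy overlaid photos) rather than at the analytics.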

4 Likes

Big agree, in the sense that people have to be able to tell signal from noise.

Feedback (from the people who will actually want the product) should be the biggest driver.

One thing I’m reminded of is that it’s tough to find the “right” people to provide that feedback.

@Raul_Jimenez thanks for sharing! What metrics have you used or seen used for “optimizing the already existing solution”?

1 Like

This interview got my wheels turning a bit… Love this framing. Speed without structure just amplifies chaos.

Curious: what kinds of critique loops have you seen work best with AI in practice?

1 Like

Learning loops align with the idea of evaluation loops with humans. @ben’s post on this is an interesting breakdown of what goes on in each loop!

1 Like

OooO will dig in further, thanks!

1 Like