I had a great conversation with Pavel Samsonov about why speed alone isn’t enough, and how AI without strong learning loops just creates more noise. He reframes AI not as an engine for endless prototypes, but as a tool that only multiplies value when paired with critique and reflection. With weak loops, AI multiplies waste.
Pavel’s take is that great product work isn’t about how many things you ship, but about how much you learn and improve with each loop. That’s why his ideas connect so well with Glare: by continuously measuring signals like success, comprehension, and confidence, teams can see not just whether they’re building faster, but whether they’re building better.
Discussion:
How is your team making sure learning loops are built into your design process? Do you have ways of checking not just “did we build it right,” but also “is this the right problem to solve?”
You mean that after finding “the right problem to solve” and “building the right solution to that problem”, the process should include a check on whether “the right problem to solve” has changed? No, we don’t have that. Our metrics are more about optimising the existing solution.
We use UX metrics, which are slightly different from plain analytics.
When paired with qualitative responses, they reveal patterns that are hard to see otherwise. This creates strong design signals you can use to generate ideas, evaluate approaches, and validate solutions.
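One way to make that pairing concrete is to code each qualitative response with tags, then average the quantitative score within each tag. This is a minimal sketch with made-up data and hypothetical tag names, not any particular tool’s pipeline:

```python
from collections import defaultdict

# Hypothetical sample: each response pairs a task-success score (0-1)
# with tags coded from the participant's free-text comment.
responses = [
    {"success": 1.0, "tags": ["clear-cta"]},
    {"success": 0.2, "tags": ["muddy-images", "low-contrast"]},
    {"success": 0.4, "tags": ["muddy-images"]},
    {"success": 0.9, "tags": ["clear-cta"]},
]

def signal_by_tag(responses):
    """Average the quantitative score within each qualitative tag."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in responses:
        for tag in r["tags"]:
            totals[tag][0] += r["success"]
            totals[tag][1] += 1
    return {tag: total / count for tag, (total, count) in totals.items()}

print(signal_by_tag(responses))
```

A tag that consistently co-occurs with low scores (here, “muddy-images”) is exactly the kind of pattern that is hard to see in either data source alone.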
They are great tools, but require understanding how people work.
Here’s a great example: a competitive test across 500 participants, with each concept presented blind. If you look at the data, you’ll see that Kleen drew negative sentiment, clustering around “unoriginal”.
What did it reveal? The photos on the site had a black overlay to make the text easier to read, which made the images appear muddy and less bright. Those were changed, and the site got an immediate revenue boost the next week.
(Once pointed out, you can see it even at thumbnail size; impossible to miss.)
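The blind-test analysis above boils down to a simple tally: code each participant comment as positive or negative, then take positives minus negatives per blinded concept. A minimal sketch, with invented concept labels and data (not the actual study):

```python
from collections import Counter

# Hypothetical slice of blind-test data: (concept, sentiment) pairs,
# where sentiment is coded from each participant's free-text comment.
ratings = [
    ("concept-A", "positive"), ("concept-A", "negative"),
    ("concept-B", "negative"), ("concept-B", "negative"),
    ("concept-B", "positive"), ("concept-A", "positive"),
]

def net_sentiment(ratings):
    """Net sentiment per blinded concept: positives minus negatives."""
    score = Counter()
    for concept, sentiment in ratings:
        score[concept] += 1 if sentiment == "positive" else -1
    return dict(score)

print(net_sentiment(ratings))  # concept-A nets +1, concept-B nets -1
```

Because concepts are blinded, a strongly negative net score points at the design itself, which is what surfaced the dark-overlay problem here.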