#86: Building in the Age of AI Agents: Insights from Techspace Berlin
This week's newsletter is sponsored by UserTesting, the human insight platform.
Berlin, Spring Days, and the Truth About AI Agents
Last Wednesday, Berlin had one of those perfect spring days. The kind where you feel guilty being indoors. And still, 40+ people packed into Techspace for a conversation about user test validation and AI agents.
Ronja Quester took us behind the curtain of building a menopause app in the corporate world.
Her team used synthetic users in their research, and she was refreshingly candid about both sides of that:
✅ Yes, they can reflect real audience needs almost immediately.
❗ But they can also miss the outliers. The edge cases. The experiences that are so specific and so human that no algorithm would think to surface them.
Then Csaba Tamas got up and said the quiet part out loud: most AI projects just don’t deliver. We’re talking 5–15% success rates. And yet nearly half of enterprise AI users made at least one major business decision in 2024 based on information that turned out to be hallucinated, while trust in AI output remains high.
We’re at this weird inflection point where the technology moves faster than our ability to be appropriately skeptical of it.
That’s not a reason to stop. It’s a reason to get smarter about how we build and validate.
Big thanks to our sponsors UserTesting and Techspace for providing the food and space, and to the whole Productlab team.
Grateful to everyone who showed up, asked hard questions, and kept the conversation real. That’s the only kind worth having.
See you at the next one. 🙌👇
PMM Pub Special Edition
with Vladimir Liashenko
🗓️ 7th of May
📍 Berlin
📰 Product Leaders’ Wisdom
Brought to you weekly by Leila Montazeri
The Art of Not Using AI (And Using It Better)!
We’ve moved past the initial AI hype. In 2026, the real “flex” isn’t just plugging in an LLM; it’s knowing exactly when to step back and rely on core product principles. These three articles have been on my mind because they challenge the “AI-first” status quo in all the right ways:
Decision Surfaces: The Fine Line Between Helpful and Bossy
Writing on Mind the Product, Priyank Sharma hits on a feeling many of us have had: why do AI features often feel “off” even when they work? He argues that because LLMs always speak with unearned confidence (even when they’re hallucinating), we have to design “Decision Surfaces.” It’s about deciding where the system should be an authority (giving an answer) versus where it should be an assistant (just showing options).
The Peril of Laziness Lost: Why “More Code” is a Trap
Is shipping 37,000 lines of code in a day actually a win? Probably not. In this piece, Bryan Cantrill defends “virtuous laziness”: the human instinct to find the simplest, cleanest solution to avoid future headaches. In an age where AI can churn out infinite, messy code for free, our job is to be the friction. We need to make sure we aren’t building “layer cakes of garbage” just because the AI makes it easy to do so.
The Build: Why Math Still Beats GPT-4 for Recommendations
Jay Stansell shares on Product Coalition why he literally hit “delete” on a perfectly good GPT-4 prompt and went back to classic math for his product’s recommendation engine. Why? Because in a coaching product, users deserve a “why.” They need to know that a suggestion is based on their specific skill gaps, not a black-box “vibe.” It’s a great reminder that if a problem can be solved with a transparent algorithm, that’s usually the smarter bet.
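To make the “transparent algorithm” idea concrete, here is a minimal sketch of what a rule-based, explainable recommender could look like. This is purely illustrative: the function, field names, and weighting are assumptions for the sake of the example, not Jay Stansell’s actual implementation. The point is that every recommendation carries its own “why.”

```python
# Hypothetical sketch of a transparent, score-based recommender.
# All names and the scoring scheme are illustrative assumptions,
# not the implementation described in the article.

def recommend(skill_gaps: dict[str, float], content: list[dict], top_n: int = 3):
    """Rank content items by how well they cover the user's skill gaps.

    skill_gaps: skill name -> gap size (0.0 = no gap, 1.0 = largest gap)
    content:    items with a 'title' and the list of 'skills' they teach
    Returns (title, score, reasons) tuples so the UI can show a 'why'.
    """
    scored = []
    for item in content:
        # Only skills the user actually lacks contribute to the score.
        covered = [s for s in item["skills"] if s in skill_gaps]
        score = sum(skill_gaps[s] for s in covered)
        reasons = [f"addresses your gap in {s}" for s in covered]
        scored.append((item["title"], round(score, 2), reasons))
    # Highest coverage of the user's gaps first.
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_n]
```

Because the score is just a sum over named skill gaps, the `reasons` list falls out of the computation for free, which is exactly the property a black-box prompt can’t guarantee.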
💪 Open Roles
Brought to you weekly by Alessia Marchi
🎶 Song of the Newsletter
From our DJ Marco D’Avila