AI Makes Building Faster. Pretotyping Tells You What to Build.
AI compresses the obvious. Humans apply judgement. Together, you get to evidence faster without skipping the parts that matter.
Welcome to the Experimenter’s Edge newsletter. Great to have you here! Our goal is to take you from AI and product ideas to evidence in weeks. We help you validate fast, stop the wrong ideas early, and make decisions with data, not opinions. Join 1,000+ pretotypers, product leaders, and experimenters getting hands-on tactics for rapid experimentation. Not theory: real techniques you can use this week.
The Talk That Changed the Conversation
I was invited to speak with the VicRoads Product Management team last month about pretotyping and AI. About 30 people in the room, lunch-and-learn format, part demo, part honest conversation.
Halfway through, someone asked the question I hear in every session: “This makes sense, but how do we actually start?”
It’s the right question. And it’s the one most AI conversations skip entirely.
Here’s what I’ve noticed over 4,000 experiments. The organisations that waste the most money aren’t the ones with bad ideas. They’re the ones that build before they test. And AI is making that problem worse, not better.
Before AI, most teams could manage four to eight ideas a year. The failure rate was 80 to 90 percent. Astro Teller from Google X has talked about this publicly. But with only a handful of bets, the losses were survivable.
Now AI can generate and build at 10 to 100 times the speed. That’s genuinely exciting. But the failure rate hasn’t changed. You’re still going to get it wrong 80 to 90 percent of the time. The difference is that you can now get it wrong much faster and at a much greater scale.
That’s where pretotyping comes in.
The 4-Minute Demo
In another session, I ran a live demo. Idea to Lean Canvas to XYZ hypothesis to experiment design. All in about four minutes, using the Idea Validator.
The room went quiet. Not because it was flashy, but because it removed every excuse. The process that used to take a team the better part of a week now takes minutes.
But here’s the thing. The experiments still run in the real world, with real customers, measuring real behaviour. AI compresses the obvious. Humans apply judgement. Together, you get to evidence faster without skipping the parts that matter.
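For readers new to the XYZ format: it’s Alberto Savoia’s template for a testable hypothesis, “At least X% of Y will Z.” Here’s a toy sketch of the shape it takes. All numbers and wording below are hypothetical illustrations, not output from the demo:

```python
def xyz_hypothesis(x_pct: float, y: str, z: str) -> str:
    """Format an idea as Alberto Savoia's XYZ hypothesis:
    'At least X% of Y will Z.'"""
    return f"At least {x_pct}% of {y} will {z}."


# Hypothetical example in the spirit of the SMS story below:
print(xyz_hypothesis(10, "customers who receive the SMS",
                     "reply to try the new service"))
# At least 10% of customers who receive the SMS will reply to try the new service.
```

The point of the format is that X, Y, and Z are each concrete and measurable, which is what makes the hypothesis falsifiable in a real-world experiment rather than an opinion.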
The SMS Story
I walked through a real example from a previous engagement. The original plan was a full integration project. Business case. Approvals. Six months of development. Significant cost.
Instead, the team bought 10 mobile phones and sent manual text messages to customers to test whether they’d engage with a new service. Two days. Roughly zero dollars.
Same answer. 60 times faster. 180 times cheaper.
That’s not a shortcut. That’s better decision-making.
Three Questions I Left the Room With
1. Are your decisions based on data or opinion? Check your last three product or project decisions. Were they backed by customer behaviour data, or by someone’s conviction in a meeting?
2. What’s your experiment velocity? How many experiments did your team run on real customers, measuring real actions, last quarter? If the answer is fewer than five, you’re guessing at scale.
3. What should you stop? Every organisation has at least one project that everyone quietly suspects won’t work but nobody can kill. That’s your first experiment. Find the $1 million save.
Spotted in the Wild
It’s been a busy few weeks. Here’s what’s been happening in the world of experimentation and AI.
Three talks in one month
I presented “Know Which Ideas Will Win: Pretotyping and AI for Smarter Innovation” to VicRoads and to a gaming company, and walked a third product team through the rapid experimentation approach. Each audience was different, but the core message landed the same way every time: AI solves “how will we build it?” but it doesn’t answer “should we build it?”
$5M to under $1M
I met with a previous client, and they’ve already demonstrated something powerful. By applying Lean Canvas thinking to a planned $5 million vendor implementation, they reduced the project scope to under $1 million. Same business outcome. A fraction of the risk. They’re now building experimentation into their core approach, and we’re looking at how AI can supercharge that process even further.
GenAI Advanced Professionals session
I ran a session with a group of senior professionals covering the practical AI toolkit. We walked through setting up an autonomous AI agent, the real costs involved ($200/month for Claude Max is the sweet spot), and how to get started without burning thousands on API usage. The standout moment was Chris demonstrating how he transformed a basic pricing document into a comprehensive three-way financial model with full P&L, cash flow, and sensitivity analysis. All with AI.
Exponentially Platform updates
The platform has had a significant refresh. We’ve built out full subscription billing, redesigned the single idea page, and continued optimising the Idea Validator tool. The focus is on making it easier and faster for teams to get from idea to experiment.
Tool of the Month
This is the one that’s changed how I work. OpenClaw is an open-source AI agent framework that went from zero to one of the most-starred GitHub repositories in history in four months. It lets you set up an autonomous AI agent that works across your calendar, email, CRM, Google Drive, and files.
I’ve been running my agent, “Blue,” since January. It has full access to my business systems (except customer data) and does things like:
Proactively gathers context from my calendar, emails, and CRM before meetings
Summarises deals, drafts follow-ups, and flags stale opportunities
Self-learns from interactions and retains memory across sessions
Runs research and content tasks while I sleep
The key difference from standard AI tools is that it doesn’t just respond when you ask. It builds context over time, learns your preferences, and acts proactively. It’s the difference between having a chatbot and having a team member who’s always on top of everything.
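To make that difference concrete, here is a minimal, generic sketch of the proactive-agent pattern in Python. This is not OpenClaw’s actual API; every name below is illustrative, and a real agent would persist memory to disk and call an LLM rather than fill in string templates:

```python
class ProactiveAgent:
    """Toy illustration of the proactive-agent pattern: persistent
    memory, pre-meeting context gathering, and unprompted drafting."""

    def __init__(self):
        # A real agent would persist this to disk so it survives sessions.
        self.memory = {}

    def observe(self, contact, note):
        """Retain something learned about a contact for future sessions."""
        self.memory.setdefault(contact, []).append(note)

    def brief_for(self, meeting):
        """Proactively assemble a briefing before a meeting,
        instead of waiting to be asked."""
        history = self.memory.get(meeting["contact"], [])
        return {
            "title": meeting["title"],
            "contact": meeting["contact"],
            "history": history,
            "follow_up_draft": (
                f"Hi {meeting['contact']}, following up on {meeting['title']}."
            ),
        }


agent = ProactiveAgent()
agent.observe("Acme Corp", "asked about pricing tiers in the last call")
briefing = agent.brief_for({"title": "Q3 renewal", "contact": "Acme Corp"})
print(briefing["history"])  # context surfaced without being prompted
```

The design choice that matters is the `memory` that outlives any single exchange: a chatbot starts every conversation from zero, while an agent accumulates context and uses it to act before you ask.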
Setup tips:
Pair it with a Claude Max subscription or an OpenAI plan. Claude Sonnet is fine and, in my experience, has a better personality than OpenAI’s models.
Run it on a dedicated machine (a Mac Mini is a popular choice) isolated from personal data, or on a VPS. To save you time: I’ve tried both methods and found the VPS much easier, more cost-effective, and lower-overhead. I’ve got a guide to set it up easily, so just reply to this email if you want it.
Start with read-only access and expand as you build trust.
The agent does its own security hardening and penetration testing if set up correctly.
If you’re only going to try one new AI tool this quarter, make it this one.
Reading
A few recent posts from the blog:
Find the Best Ideas to Invest In
I work with teams to go from ideas to evidence in weeks. We embed rapid experimentation using pretotyping as a core capability to validate fast, stop the wrong ideas early, and invest in the winners.
👉 If you want to hear more about how we do this, happy to do a 15 or 30 min free call. Just reply.
👉 If you’d like to try the Exponentially Platform or any of the tools I’ve mentioned, reach out, and I’m happy to share access.
“AI compresses the obvious. Humans apply judgement. Together, you get to evidence faster without skipping the parts that matter.”
Until next month, happy innovating!
Leslie


