Day 1: "The login button should be blue, not green."
Day 2: "Can you add a dark mode?"
Day 3: "The font is too small for me."
Day 4: "I wish it had more customisation options."
Day 5: "Actually, it's too complicated now. Can you simplify it?"
Every founder faces this. Beta testers start sending feedback, and suddenly you're jumping to implement every suggestion. Green button becomes blue. Dark mode gets added. Font size increases. More options appear. Then you're simplifying again.
Haven’t we been here before? Six weeks later, your product looks nothing like what you started with, and you're not sure if it's better or worse. You've been reacting instead of thinking strategically. Sure, you’ve been busy…but to what end?
There's a famous piece of writing advice that applies perfectly here. Stephen King wrote about feedback: if one person tells you something's wrong, they might be mistaken. If multiple people tell you the same thing, pay attention.
We’re going to apply this principle and help you work out what’s worth doing and what can safely go on the back burner.
Let’s get started:
Stephen King understood something crucial about feedback that most creators miss. Individual opinions often reflect personal preferences, not universal problems.
The rule: if one person mentions an issue, consider it but don't act immediately unless it’s literally a critical bug (as we discussed yesterday). If multiple people mention the same issue independently, that's a pattern worth addressing.
This isn't about ignoring feedback. Well…it sort of is, but strategically! It's about distinguishing between individual preferences and actual problems that affect most users. This is made harder by the fact that some testers just want to be helpful, so they'll give you lots of feedback for the sake of it.
When someone says "the button should be blue," that's probably preference. When three different people say "I couldn't find the submit button," that's a usability problem. Very different!
The difference matters because you can't build for everyone's personal preferences without creating a confusing mess. But you must fix problems that prevent people from using your product successfully.
Most feedback leads to iteration, not pivoting. Let’s quickly unpack this as it’s key.
Iteration means making incremental improvements to your existing approach. Better button placement, clearer labels, simpler workflows, bug fixes. These changes improve what you've built without changing your core strategy.
Pivoting means fundamentally changing your approach. Different target market, different core problem, different solution architecture. These are strategic decisions that affect your entire product direction.
Most beta feedback is about iteration. "This is confusing" suggests better design. "I can't figure out how to save" suggests interface improvements. "It's too slow" suggests performance optimisation.
Pivoting feedback is rarer but more serious. "I don't understand what problem this solves" or "This isn't actually useful for my situation" suggests deeper issues with your core premise.
The mistake most first-time founders make is treating iteration feedback like pivoting decisions, or pivoting based on individual preferences rather than systematic problems. We need to protect ourselves against this to preserve our time and sanity. And to make sure the product doesn’t become an absolute mess!
As the saying goes, a camel is a horse designed by a committee. Personally, I think this is far too mean to the poor camels. But you get the point!
OK, so the feedback is coming in. Here's how to practically categorise and prioritise what you've collected:
High Priority (Fix Immediately)
Medium Priority (Fix Soon)
Low Priority (Consider Later)
No Priority (Ignore)
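If you'd rather see the core rule as logic than as prose, here's a minimal sketch in Python. It's purely illustrative (the tester names and issues are made up), and it collapses the four tiers into the two that matter most for the "multiple mentions" test: issues raised independently by two or more testers become high priority, one-offs default to low.

```python
from collections import Counter

def prioritise(feedback):
    """feedback: list of (tester, issue) pairs.
    Returns a dict mapping each issue to "High" or "Low" priority."""
    # Deduplicate first so one tester repeating themselves counts once,
    # then count how many *independent* testers raised each issue.
    counts = Counter(issue for tester, issue in set(feedback))
    return {
        issue: "High" if n >= 2 else "Low"  # 2+ testers = a pattern
        for issue, n in counts.items()
    }

feedback = [
    ("Alice", "can't find the submit button"),
    ("Bob",   "can't find the submit button"),
    ("Cara",  "button should be blue"),
]
print(prioritise(feedback))
# The shared usability issue is High; the lone colour preference is Low.
```

In practice the prompt below does this sorting for you (and handles the Medium and No Priority tiers too), but the counting logic is the whole trick.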
Got that? Let’s build it into a prompt that will help you sort and filter everything you've collected this week:
I need to prioritise feedback from my beta testing week. Here's all the feedback I received: [PASTE ALL FEEDBACK]
My product is: [DESCRIBE YOUR PRODUCT]
My core value proposition is: [MAIN PROBLEM YOU SOLVE]
Help me categorise this feedback using these priorities:
- High Priority: Multiple people mention same issue OR bugs that break functionality
- Medium Priority: Single person mentions significant issue OR improvements to core workflow
- Low Priority: Individual preferences OR nice-to-have features
- No Priority: Contradicts strategy OR would confuse most users
For high and medium priority items, suggest the order I should tackle them and why. For low priority items, explain why they can wait. For no priority items, explain why I should ignore them.
Give me a clear action plan for what to fix first.
This prompt will help you sort signal from noise and create a logical improvement plan. You still need the confidence to deprioritise feedback, though! That will be hard at first. Like all things, you’ll get better with practice. Just try not to jump at every single piece of feedback!
Now we’ll pull together everything we’ve been working on and create our first “update”.
You know how software often comes in versions? That’s basically what we’re doing here. V1 becomes V1.1 or (if it’s a big change) maybe even V2. Generally we use decimals for relatively minor updates as a useful convention. And to stop us ending up on V148 down the line!
Once you've prioritised your feedback, it's time to implement the obvious winners. Start with the simplest fixes first. Button placement issues, confusing labels, missing functionality that everyone expects. These quick wins build momentum and show your testers that you're listening. Here’s a prompt to help put together an action plan:
Based on my prioritised feedback, I want to implement these high-priority fixes: [LIST YOUR TOP 3-5 ITEMS]
My product was built with: [LOVABLE/CURSOR/OTHER]
Current functionality: [DESCRIBE CURRENT STATE]
For each fix, help me:
1. Create clear development instructions I can implement
2. Estimate how long each fix should take
3. Identify any potential complications or dependencies
4. Suggest the best order to implement them
Also help me write update messages I can send to my beta testers explaining what I've fixed based on their feedback.
As we discussed yesterday remember to tell your testers when you make changes based on feedback! This completes the feedback loop and reinforces that their input matters. That’s what builds relationships! Making the fix is just one part.
And any changes you didn’t implement? Just don’t say anything. Testers aren’t expecting updates. So we only update them with good news (I took your advice and made a change) rather than bad news (your feedback was stupid so I didn’t do it). We control the flow of info here!
Keep the updates simple and specific: "Thanks for mentioning the confusing save button - just moved it to a more obvious location" or "Based on feedback from several testers, simplified the main navigation." And avoid over-explaining your decision-making process. Just acknowledge their input and describe the improvement.
Create your first strategic update:
Share your strategic approach:
"Day 25 of AI Summer Camp / Week 5 wrap: How beta feedback changed my product (and what I ignored).

Got loads of feedback from beta testers this week.

Here’s what I’ve updated based on my amazing testers: [list out changes and your journey]"