In his memoir "On Writing", Stephen King shares a brilliant insight about handling feedback. His rule of thumb, roughly: if everyone who reads your draft flags something different, you can shrug it all off; if they all flag the same thing, you've got a real problem to fix.
This might be the best advice I've ever heard about handling feedback - whether you're writing horror novels or building AI tools.
I used to treat every piece of feedback like it was gospel.
"Add GPT-4 support!" one user would say. "It needs to handle images!" said another. "Could it write poetry?" asked a third.
On it! I’d run off and start adding whatever was being asked. I’m a good little worker!
Before I knew it, my simple, focused tool was turning into a bloated mess trying to be everything to everyone.
In this Part we're going to talk about the art of selective listening - knowing which feedback to act on, which to ignore, and how to stay focused on actually launching your AI tool.
Let’s get started:
Can’t Please Everyone
Here's a real example: I recently built an AI social media post generator. During testing, I got tons of feedback:
"Can it schedule posts too?" "Could it generate images?" "What about hashtag research?" "It should integrate with every social platform!"
But there was one piece of feedback I heard again and again: "The outputs are too generic."
That's the piece that actually matters. When several users independently raise the same issue, that's the kind of consensus you need to pay attention to.
Everything else? Nice ideas for version 2.0, maybe, but not critical for launch! They need to be back-burnered or else you’ll never launch the darn thing.
Following Mr. King's principle, feedback demands attention when the same point keeps coming up from different people.
This is also why it's so important to talk to as many customers (or potential customers) as possible: the more conversations you have, the faster the patterns emerge.
For example, for our AI Workshop Kit we have over 700 applications, and in each application we ask "why are you applying?". This gives us a TONNE of feedback we can mine to work out what problems people have and what language they use to describe those problems. This is extremely valuable when you are in the business of solving problems for them!
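If you want a quick, low-tech way to spot these patterns before reaching for AI, a few lines of Python will do. This is a minimal sketch, not our actual tooling: the file name, column name, and theme keywords below are all placeholders you'd swap for your own export.

```python
import csv
from collections import Counter

# Hypothetical theme -> keyword map; tune it to your own market's language.
THEMES = {
    "too generic": ["generic", "bland", "boring"],
    "saves time": ["time", "hours", "faster"],
    "cost": ["expensive", "cost", "budget"],
}

counts = Counter()
# Hypothetical file and column names - swap in your own export.
with open("applications.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        answer = row["why_applying"].lower()
        for theme, keywords in THEMES.items():
            if any(keyword in answer for keyword in keywords):
                counts[theme] += 1

# The themes mentioned most often are your consensus signals.
for theme, mentions in counts.most_common():
    print(f"{theme}: {mentions} mentions")
```

Crude keyword matching misses nuance, of course - which is exactly why we'll hand the heavy lifting to a prompt shortly.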
Getting feedback on AI tools is slightly harder than usual, because it's a (relatively) new and immature market.
Users often ask for capabilities that AI can't (yet!) deliver. Basically, people's ideas about what AI should be able to do seep in. This makes it doubly important to keep the conversation around the problems they are trying to solve - as we covered in the interview process in the last Part.
Remember: your job isn't to build a general AI assistant. It's to solve a specific problem really well. Keep coming back to this when in doubt!
OK, so we've gathered up feedback. Let's work out what we should pay attention to - and we start that process by first eliminating what we don't pay attention to!
Here's what you can usually ignore: one-off requests, nice-to-haves, and anything that doesn't touch your core functionality.
A rule of thumb: if implementing the feedback would delay your launch by more than a day or two, it's probably not essential for version 1.0. These items stack up fast, especially if you're trying to please everyone. Leave them for future iterations - maybe. It depends on how consistently people ask!
Right, let's wrap all of the above into a prompt to help us out a bit!
You are an AI product feedback analyser. Analyse the following user feedback and categorise issues based on frequency and impact on core functionality. Focus on finding consensus rather than one-off requests.
Analyse for:
1. Recurring Issues
- Count how many users mentioned similar problems
- Group feedback into common themes
- Identify patterns in user behaviour/confusion
2. Priority Classification
HIGH: Issues that:
- Block core functionality
- Mentioned by >25% of users
- Prevent successful task completion
MEDIUM: Issues that:
- Impact user experience but don't block usage
- Mentioned by 10-25% of users
- Create friction but have workarounds
LOW: Issues that:
- Are feature requests/nice-to-haves
- Mentioned by <10% of users
- Don't impact core functionality
Output:
1. Top recurring issues (with count of mentions)
2. Priority list categorised as High/Medium/Low
3. Quick wins (high impact, easy fixes)
4. Items to defer until post-launch
Remember: Focus on issues affecting core functionality and launch readiness.
Use this prompt and copy/paste or attach a CSV of all the feedback you received from your customer interviews. Don't worry terribly about formatting - this is precisely what AI is good at! It'll go through and pull out the information needed.
This prompt will not just pull out the recurring themes but also prioritise them into High, Medium and Low priority. Honestly, for v1.0 I'd only worry about the High priority items and keep the rest for later. Or indeed ignore them entirely!
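If you'd rather run this from a script than a chat window, here's a minimal sketch using the OpenAI Python SDK. The model name and file name are assumptions, and any chat-capable model and provider would work just as well:

```python
from openai import OpenAI  # pip install openai

# Paste the full analyser prompt from above here.
ANALYSER_PROMPT = """You are an AI product feedback analyser. Analyse the
following user feedback and categorise issues based on frequency and impact
on core functionality. Focus on finding consensus rather than one-off
requests."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical file name - point it at your interview feedback export.
with open("feedback.csv", encoding="utf-8") as f:
    feedback = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model will do
    messages=[
        {"role": "system", "content": ANALYSER_PROMPT},
        {"role": "user", "content": feedback},
    ],
)

print(response.choices[0].message.content)
```

Handy if you want to re-run the analysis every time a new batch of interviews lands, rather than pasting it all in by hand.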
Crucial to remember: your goal right now is to launch. Not to build the perfect tool, not to please everyone - to launch! This is about speed and momentum.
Every piece of feedback you act on delays that launch. There are no ifs, ands, or buts about this. So it's a question of how vital the improvements actually are.
Sure, the tool might be slightly better, but is it worth pushing your launch back another week? Another month?
Usually, the answer is no. A big fat no.
Now for my cat-attracting ps ps ps section!
P.S. Remember, even Stephen King doesn't try to please everyone. Neither should you.
P.P.S. (!!!) If you've got this far: we're exploring launching a 30 Day AI Agent Accelerator where we:
1. Hone a business idea
2. Build a focused AI tool
3. Test and refine the tool
4. Market and launch
Course, community and live sessions.
Waitlist here: https://heyform.net/f/ZCCsfMqx