I was working with an NGO in Switzerland - one of those serious, UN-affiliated organisations. Proper big boys. The kind where "data privacy" isn't just a checkbox, it's an entire department.
They had this brilliant idea for an AI tool that would help process and analyse medical data to speed up assistance delivery. The potential impact was huge. Lives could literally be changed.
My client was buzzing with excitement. The use case was clear. The value was obvious. The technology was feasible.
And we had to kill it. Stone dead. Before we wrote a single line of code.
Why? Because when you're dealing with vulnerable populations and UN-level data privacy requirements, some doors are firmly locked. And they're locked for good reason.
It just wasn’t feasible because of the rules and regulations in place. The pain of killing that idea early was nothing compared to the pain we would have felt after spending months building something we could never deploy.
We’re going to hyper-accelerate this process in this Part and begin to cut down our long list of ideas.
Let’s get started:
“You can’t do that!”
In the last Part we went wild with ideas. We generated dozens of potential AI solutions for your industry. It was the business equivalent of a brainstorming party - no idea too small, no concept too ambitious.
Now comes the hangover. Time to get brutal.
The whole reason we went wide last time was so that we could aggressively narrow down now.
Because here's the thing about ideas in the AI space - they don't just need to be good. They need to be legal, ethical, technically feasible, and practically deployable. I know, I know…you don’t want to hear it!
Let me be clear - I'm not some regulation-loving bureaucrat who gets excited about compliance documents. And I'm not saying these more complex ideas aren't doable. They absolutely are!
Almost any AI project can be made to work with enough time, money, and legal expertise. Need to handle sensitive data? There are frameworks for that. Worried about GDPR? There are compliance experts who can guide you through it. Got regulatory hurdles? They can be cleared.
All doable.
But here's the thing: for your first AI app, we want to move fast. We want to get something out there, get feedback, and start learning. We want to remove every possible roadblock between you and your first deployed solution.
Why? Because speed matters more than complexity right now.
Those complex, highly regulated projects? They're like trying to run before you can walk. They involve multiple stakeholders, legal reviews, compliance checks, special infrastructure... each one adding weeks or months to your timeline. No bueno.
Save those for later! We can absolutely loop back to those once you've got a few wins under your belt, once you're comfortable with the basic process of building and deploying AI solutions.
Right now, we're looking for the path of least resistance.
We want to be able to deploy in a week, not a year. Yes, that fast.
Let's get our first win before we try to change the world. Cool?
OK, let’s use some prompts to help us begin the winnowing process. First up, everyone’s favourite!
GDPR isn't just some annoying popup on websites. It's a set of rules that can absolutely demolish your AI dreams if you're not careful. And it's not just GDPR - every region has its own flavour of data protection laws. California's got CCPA, China's got PIPL, and the list keeps growing.
Here's your first reality-check prompt:
You are a data privacy expert and regulatory consultant. Evaluate these AI solution ideas for potential privacy and regulatory concerns.
My ideas: [Paste your ideas here]
For each idea, identify:
1. What sensitive data might be involved
2. Which regulations could apply
3. Whether it's a red flag (stop), yellow flag (proceed with caution), or green flag (relatively safe)
4. What specific issues need to be addressed
Run this first. Any ideas that come back with red flags? Bin them. Right now. Don't even think about trying to find workarounds. That takes time and saps energy.
Next up: technical feasibility. Just because ChatGPT can write poetry doesn't mean AI can do everything. Here's your technical reality-check prompt:
You are an AI systems architect with extensive practical experience. Review these AI solution ideas for technical feasibility with current technology.
My ideas: [List remaining ideas]
For each idea, assess:
1. What AI capabilities are required
2. Whether these capabilities exist and are accessible
3. Known limitations or challenges
4. Whether it's a red flag (stop), yellow flag (proceed with caution), or green flag (relatively safe)
Yellow flags here are okay - technology moves fast. Red flags? Those ideas go in the bin too. We want to fill that bin up!
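If you're tracking your ideas in a simple list, the binning rule from both checks can be sketched in a few lines of Python. The idea names and flag values below are made-up placeholders - in practice you'd fill them in from the results the two prompts give you:

```python
# Hypothetical flag results copied in from the privacy and technical
# review prompts. "red" means stop, "yellow" means proceed with
# caution, "green" means relatively safe.
ideas = {
    "Medical-record summariser": {"privacy": "red", "technical": "green"},
    "Meeting-notes assistant": {"privacy": "green", "technical": "green"},
    "Drone damage inspector": {"privacy": "green", "technical": "red"},
    "Contract clause explainer": {"privacy": "yellow", "technical": "yellow"},
}

def winnow(ideas):
    """Bin any idea with a red flag in either check; keep the rest."""
    survivors, binned = [], []
    for name, flags in ideas.items():
        (binned if "red" in flags.values() else survivors).append(name)
    return survivors, binned

survivors, binned = winnow(ideas)
print("Keep:", survivors)  # yellow flags survive, for now
print("Bin:", binned)      # red flags are gone - no workarounds
```

Note that yellows survive this pass: a "proceed with caution" is still in play, it just carries a warning label into the next Part.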
But we're not done. Now comes the practical reality check. For each remaining idea, ask yourself whether you genuinely have the access, the time, and the appetite to build it. These are questions AI can't answer for you - they're personal to you.
If you can't answer "yes" to all of them, that idea goes in the bin too.
"But Kyle," I hear you say, "this seems really negative. We're killing so many ideas!"
Exactly. That's the point! We’re killing our darlings.
Remember that NGO project? Killing it early saved months of work and thousands of francs. More importantly, it saved us from the nightmare scenario of building something we couldn't deploy. Wasted time, energy and money.
By the end of this process, you should have eliminated at least 70% of your ideas.
If you haven't, you're not being brutal enough.
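If you want to put a number on your brutality, that 70% target is a one-line check. The starting and remaining counts here are just example figures:

```python
def elimination_rate(started: int, remaining: int) -> float:
    """Fraction of the original long list that has been binned."""
    return (started - remaining) / started

# Example: 20 ideas long-listed, 5 still standing.
rate = elimination_rate(started=20, remaining=5)
print(f"Eliminated {rate:.0%}")  # Eliminated 75%
if rate < 0.70:
    print("Not brutal enough - go back and bin more.")
```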
What remains are ideas that are:
1. Legally and regulatorily safe
2. Technically feasible with today's tools
3. Practical for you to actually build
In the next Part we'll take these survivors and identify which ones are actually worth building. Because passing our reality checks doesn't automatically make an idea good - it just makes it possible!
PS. If you’ve got this far, we’re exploring launching a 30 Day AI Entrepreneurship Accelerator where we:
1. Hone a business idea
2. Build a focused AI tool
3. Test and refine the tool
4. Market and launch
Course, community and live sessions.
Waitlist here: https://promptentrepreneur.beehiiv.com/c/waitlist