There’s a gap between AI tools that look impressive in a demo and the ones that actually produce reliable output in real work. After building operational systems for clients, here’s the stack I use daily — and more importantly, why it works.
The Stack
Five tools. Each has a specific job. None of them are interchangeable, and understanding what each one is actually good at is more valuable than owning all of them.
Claude
Better for long-form work, extended reasoning, and honest pushback. When I’m writing a proposal, working through a system design, or reviewing a long document, Claude is where I start. It holds context better over long conversations and is more likely to tell you when something doesn’t make sense rather than just completing the task. For anything that requires sustained, careful thought — this is the default.
ChatGPT (with Search) / Gemini
For fast lookups, current information, and when I need a response grounded in something that happened last week rather than last year. ChatGPT with search is fast and reliable for factual retrieval. Gemini is useful when I need to process a very long document in a single pass — its context window handles 100k+ tokens without breaking. Different jobs, different tools.
Claude Code + GSD
For anything involving actual code — building tools, creating automations, or producing production-ready systems. Claude Code is the difference between AI that writes a function and AI that ships a working project. The GSD (Get Stuff Done) workflow solves the biggest problem with coding agents: context drift. By structuring the project into phases with a persistent plan, the agent stays on target across a full build, not just the first few prompts.
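To make that concrete, here is a minimal sketch of what a persistent phase plan could look like, assuming a plain JSON file the agent re-reads at the start of every session. The file name, phase names, and status values are illustrative; they are not GSD's actual format.

```python
import json
from pathlib import Path

PLAN_FILE = Path("plan.json")  # illustrative location, not GSD's actual file layout

def init_plan() -> None:
    """Write the project plan once, so every session starts from the same source of truth."""
    plan = {
        "goal": "Lead-capture automation: form -> CRM -> notifications",
        "phases": [
            {"name": "Scaffold project and define the data model", "status": "todo"},
            {"name": "Build webhook intake and validation", "status": "todo"},
            {"name": "Wire up CRM and notification integrations", "status": "todo"},
            {"name": "Add logging, error handling, and tests", "status": "todo"},
        ],
    }
    PLAN_FILE.write_text(json.dumps(plan, indent=2))

def next_phase() -> dict | None:
    """Return the first unfinished phase: the only thing the agent should be working on now."""
    plan = json.loads(PLAN_FILE.read_text())
    return next((p for p in plan["phases"] if p["status"] != "done"), None)

if __name__ == "__main__":
    if not PLAN_FILE.exists():
        init_plan()
    print("Current focus:", next_phase())
```

The format is not the point; what matters is that the plan lives outside the conversation, so a new or drifting session can always be pointed back at the same list of phases.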
n8n / Make
This is the infrastructure layer. Claude answers questions; n8n takes actions. When a lead fills in a form, n8n creates the CRM contact, sends the confirmation, notifies the right team member, and logs everything — without anyone touching it. Make is similar and slightly more beginner-friendly, but n8n gives more control when the workflow gets complex. For most of the systems I build for clients, this is the backbone.
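As a rough illustration of what that workflow is doing, here is a Python sketch of the same four steps. In practice n8n handles this with its own webhook, CRM, email, and chat nodes; the endpoints, field names, and `requests` calls below are placeholders, not any specific vendor's API.

```python
import logging
import requests  # placeholder HTTP calls; n8n/Make would use built-in nodes instead

CRM_URL = "https://crm.example.com/api/contacts"       # placeholder endpoint
NOTIFY_URL = "https://hooks.example.com/team-channel"  # placeholder webhook

logging.basicConfig(level=logging.INFO)

def handle_new_lead(form: dict) -> None:
    """Mirror of the flow described above: create contact, confirm, notify, log."""
    # 1. Create the CRM contact from the form submission
    contact = requests.post(CRM_URL, json={
        "name": form["name"],
        "email": form["email"],
        "source": "website_form",
    }, timeout=10).json()

    # 2. Send the confirmation (stubbed here; n8n would use an email node)
    logging.info("Confirmation queued for %s", form["email"])

    # 3. Notify the right team member in their channel
    requests.post(NOTIFY_URL, json={
        "text": f"New lead: {form['name']} ({form['email']})",
    }, timeout=10)

    # 4. Log the outcome so there is a record even if a later step needs retrying
    logging.info("Lead captured: contact_id=%s", contact.get("id"))
```

In n8n, each of those steps is a separate visual node you can watch execute and rerun on failure, which is what keeps the workflow maintainable by someone who isn't a developer.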
Perplexity
Web search with citations. When I need to verify a fact, understand a market, or get current information with sources I can actually check — Perplexity is faster than Google for research-style queries because it synthesises results rather than just listing them. It’s particularly useful when you need to understand a topic quickly and want to see where the information is coming from.
The Multiplier That Makes All of Them Better
Here’s the thing nobody talks about enough: the tool matters less than what you put into it. Context is the multiplier. The same AI model will produce completely different output depending on how much relevant information you give it upfront.
That context is the difference between generic output and something you can actually use. The tool is secondary. What you give it is everything.
Compare these two starting points for the same task:
“Write a follow-up email to a prospect who attended our demo.”
“Write a follow-up email to a prospect who attended our CRM automation demo. They’re a 12-person recruitment firm in Carlisle. Main concern was integration with their existing ATS. They asked about pricing for a 6-month build. Tone: direct, no fluff.”
The second version produces something usable in the first pass. The first version produces something you’ll spend 20 minutes editing into shape — or never send at all. That difference compounds across every piece of work you do.
Context engineering — giving AI the right information in the right structure — is the skill that actually matters. The tools are commodities. How you use them isn’t.
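Here is a minimal sketch of what that looks like when you call a model programmatically rather than typing into a chat box: the context is assembled as structure, not remembered ad hoc. The field names and the commented-out `send_to_model` call are illustrative, not any particular provider's API.

```python
def build_followup_prompt(ctx: dict) -> str:
    """Combine the task with every piece of relevant context into one structured prompt."""
    return "\n".join([
        "Task: write a follow-up email to a prospect who attended our demo.",
        f"Product demoed: {ctx['product']}",
        f"Prospect: {ctx['company_size']}-person {ctx['industry']} firm in {ctx['location']}",
        f"Main concern raised: {ctx['concern']}",
        f"They asked about: {ctx['question']}",
        f"Tone: {ctx['tone']}",
    ])

prompt = build_followup_prompt({
    "product": "CRM automation",
    "company_size": 12,
    "industry": "recruitment",
    "location": "Carlisle",
    "concern": "integration with their existing ATS",
    "question": "pricing for a 6-month build",
    "tone": "direct, no fluff",
})

# send_to_model(prompt)  # whichever model/API you use; the point is what goes in, not which tool
print(prompt)
```

The same discipline applies in a chat window: give the context block first, then the ask.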
What These Tools Power at Scale
Individually, each tool saves time on a single task. Connected into a system, they create leverage that compounds. These are the production use cases I build for clients using this exact stack:
- Lead scoring: n8n captures inbound enquiries, Claude analyses them against historical conversion data, and the CRM is updated with a priority score before anyone picks up the phone (a rough sketch of this flow follows the list).
- Personalised follow-up sequences: When a lead goes cold, an automated workflow sends a contextually relevant follow-up based on what they enquired about, not a generic drip sequence.
- CRM intelligence: Before a client call, the relevant account history, last interaction, and open actions are surfaced automatically — not retrieved manually from three different places.
- Churn signals: Patterns in client engagement data flag accounts showing early disengagement, so retention action happens before the cancellation email arrives.
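To make the first of those concrete, here is a hedged sketch of the scoring step, assuming the enquiry arrives as a dict from n8n. A crude heuristic stands in for the model call so the sketch stays self-contained; in the real workflow, Claude receives the enquiry alongside a summary of historical conversion data and returns the score. The CRM endpoint is a placeholder.

```python
import requests  # placeholder CRM update; in production this is an n8n HTTP node

CRM_URL = "https://crm.example.com/api/leads"  # placeholder endpoint

def score_enquiry(enquiry: dict) -> int:
    """Stand-in for the model call: return a 0-100 priority score.
    The heuristic keeps the sketch runnable; the real version sends the enquiry
    plus historical conversion context to the model and parses its answer."""
    score = 50
    if enquiry.get("budget_stated"):
        score += 25
    if enquiry.get("timeline") == "this_quarter":
        score += 15
    return min(score, 100)

def handle_enquiry(enquiry: dict) -> None:
    score = score_enquiry(enquiry)
    # Write the score back to the CRM so priority is visible before anyone picks up the phone
    requests.post(CRM_URL, json={
        "email": enquiry["email"],
        "priority_score": score,
    }, timeout=10)

# handle_enquiry({"email": "lead@example.com", "budget_stated": True, "timeline": "this_quarter"})
```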
None of this requires a bespoke AI model or a six-figure budget. It requires the right tools, connected properly, with enough context to make intelligent decisions.
That’s the gap most small businesses are sitting in: the tools exist, the capabilities are real, but the connection and the context are missing. That’s what operational AI infrastructure solves.
See What’s Possible for Your Business
A free architecture audit maps your current stack, identifies where AI can deliver the most leverage, and proposes a connected system — not a list of tools.
Book Free Audit
Available to UK service firms. Remote or Cumbria-based.