Every era of software has been defined by one thing, and it has never been the technology itself — it’s been the gap between what the technology could do and what the business actually needed it to do. That gap has a name, and the name is context.
I’ve been thinking about this a lot lately as I watch people struggle with AI-assisted coding tools, prompting platforms, and agent-based workflows. The conversation keeps circling back to “how do I get better outputs?”, and the answer keeps being the same one it’s been for thirty years of software: give the system better context. The tools may have changed, but the underlying problem has remained stubbornly, persistently the same.
Let me walk you through what I mean.
The Context Problem Wore Different Clothes in Every Era
In the on-prem era, context lived entirely with your IT team, and they were the translators between what the business needed and what the system could actually do. The quality of your software deployment was directly proportional to how well those people understood your business. Everything was custom; everything required deep institutional knowledge to configure, maintain, and evolve. The technology solved maybe 20% of the problem out of the box, while the remaining 80% was human-driven context work happening behind the scenes.
Then SaaS came along and flipped that ratio. SaaS products solved for the 80% through usability, standardized workflows, and design patterns that codified the most common business processes into ready-made software, which meant you didn’t need an IT team to translate anymore because the interface did most of that work for you. Log in, configure a few settings, and you’re running.
But that last 20% — the part where the software actually needed to understand your business, your customers, your edge cases — that’s where Customer Success was born. The entire discipline of Customer Success emerged specifically to close the context gap between what the product could do generically and what your specific business needed it to do, and CSMs weren’t selling so much as they were contextualizing. They made sure the implementation actually matched the way your team worked, that your data was structured correctly, that the workflows reflected your real processes and not the idealized ones the product was originally designed for.
Now we’re in the AI era, and the context problem has shifted again. The chat interface made it dramatically easier for nontechnical people to interact with AI systems, and model quality has improved to the point where LLMs can interpret natural language inputs with remarkable accuracy, which means the floor for entry has dropped significantly and more people than ever before can sit down and get something useful out of a conversation with an AI tool.
And yet, the gap persists. The onus of providing correct context has shifted from the IT team in the on-prem era, to the CSM in the SaaS era, to you — the individual user — through the quality of what you prompt into the system. The tools got friendlier, but the burden got more personal.
The Gap That Still Needs Filling
Here’s the thing that most people miss when they sit down with an AI tool: the difference between getting a passable output and getting a genuinely useful one has almost nothing to do with the model’s capability, and has everything to do with how clearly you’ve communicated what you actually need.
A nontechnical person can now walk up to any AI tool and ask a question in plain English, which is a massive leap forward from where we were even two years ago. But getting the specific output they need — the one that accounts for their constraints, their audience, their use case, their definition of “good” — still requires understanding what context the AI is missing and what assumptions it might make if you don’t fill those gaps yourself.
Think of it this way: if you walk into a coffee shop and say “I want coffee,” the barista can make you something, but if you say “I want a medium cold brew with oat milk and a shot of espresso,” you get exactly what you want. The chat interface made it so anyone can walk in and order, but the quality of your result still depends entirely on how well you’ve told the system what you’re actually trying to accomplish — your constraints, your use case, what you’re building it for, and what success looks like when you get there.
This is the skill that hasn’t been democratized yet. The interface got easier, but the thinking behind what goes into the interface did not, and that distinction matters more than most people realize.
Think Like a Product Manager, Not a Power User
I’ve written about the product management mindset before (Issue #10), and this is where it becomes directly applicable to every single person working with AI today. The people who get the most out of these tools are the ones who approach them the way a PM approaches a product: with an obsession for problem definition, user understanding, and iterative refinement rather than jumping straight to the solution.
The key insight is this — AI has access to far more data on the various ways that technology can solve a given problem than any individual user does, which means your job is to define the problem clearly enough that the system can work backwards from it. You are the problem definer, not the solution architect, and that is a fundamentally different skill than most people realize they need to develop.
A Framework for Defining the Problem
If you take one thing from this issue, let it be this: start with the problem, not the tool. Here’s how to think about it:
- Start with the friction point. What’s the observable moment where things break down, take too long, or produce the wrong result? Don’t jump to “I need AI to do X” — instead, stay with “this is where things go sideways,” because the more specific you are about the friction, the more useful the AI’s response will be.
- Layer in who’s experiencing it and why it matters to them. Context requirements shift depending on the user, and a report that’s too slow for a VP is a different problem than a report that’s too slow for an analyst because the downstream impact is different, the tolerance for detail is different, and the definition of “done” is different.
- Separate the symptom from the root cause. “My reports take too long” is a symptom, whereas “I don’t have a standardized template for what data belongs in each section, so I rebuild the structure from scratch every time” is the root cause. AI can help with both, but it will give you a fundamentally better answer if you hand it the root cause instead of the symptom.
- Define what success looks like concretely. Not “it should be faster” but “we ship reports in half the time” or “stakeholders stop asking for rewrites,” because the more specific your success criteria, the more precisely the AI can target its output to what you actually need.
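If you like to work in a more structured way, the four steps above can be captured as a written brief before you ever open a prompt box. Here’s a minimal sketch in Python; the field names and the example values are my own illustration, not a standard template:

```python
from dataclasses import dataclass, field


@dataclass
class ProblemBrief:
    """A pre-prompt checklist mirroring the four steps above."""
    friction_point: str       # the observable breakdown, not "I need AI to do X"
    who_and_why: str          # who experiences it and the downstream impact
    root_cause: str           # the underlying cause, not the symptom
    success_criteria: list[str] = field(default_factory=list)  # concrete "done"

    def to_prompt(self) -> str:
        """Render the brief as the opening context block of a prompt."""
        criteria = "\n".join(f"- {c}" for c in self.success_criteria)
        return (
            f"Problem: {self.friction_point}\n"
            f"Who it affects and why it matters: {self.who_and_why}\n"
            f"Root cause: {self.root_cause}\n"
            f"Success looks like:\n{criteria}"
        )


brief = ProblemBrief(
    friction_point="Weekly reports take a full day to assemble",
    who_and_why="An analyst rebuilds the structure each week; the VP gets it late",
    root_cause="No standardized template for what data belongs in each section",
    success_criteria=["Reports ship in half the time", "No rewrite requests"],
)
print(brief.to_prompt())
```

The point isn’t the code itself; it’s that forcing yourself to fill in every field, especially root cause and success criteria, is exactly the thinking work that makes the AI’s first answer usable.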
A Framework for Setting Context
Once you’ve defined the problem, the next step is making sure the AI has what it needs to actually help you solve it, and I find it helpful to think about context setting in three layers.
First, understand your baseline — what does the AI already know versus what’s specific to your situation? If you’re asking it to draft a sales email, it knows what sales emails generally look like, but it does not know your product, your buyer, your pricing model, or your competitive positioning, and that delta between general knowledge and your specific reality is exactly where your context needs to live.
Second, identify the gaps — where might the AI make assumptions that don’t apply to you? If you ask for a project plan without specifying team size, budget constraints, or timeline, the AI will fill in those blanks with generic defaults, and every assumption it makes is a place where your output drifts further from what you actually need.
Third, stress-test those gaps by asking yourself: if the AI got this wrong, what would break? If the answer is “nothing much,” maybe the gap is fine and you can move on. But if the answer is “we’d send the wrong message to a key client” or “we’d scope the project incorrectly,” that’s a gap you absolutely need to fill before you hit enter.
And finally, map your own workflow first — where does this problem show up in your day, what information do you already have at hand versus what’s missing, and what would a great outcome actually look like in the context of the work you’re doing right now? The better you’ve mapped your own context before you start prompting, the less back-and-forth you’ll need with the AI to get to something you can actually use.
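The stress-test in the third layer can be sketched as a simple pre-flight check: list each assumption the AI might make, note what breaks if it guesses wrong, and only the gaps with real breakage need filling. This structure is my own illustration of the idea, not a formal method:

```python
def preflight(context_gaps: dict[str, str]) -> list[str]:
    """Return the gaps worth filling before you hit enter.

    context_gaps maps each unstated assumption to what breaks if the AI
    guesses wrong; an empty string means "nothing much," so it can wait.
    """
    return [gap for gap, breakage in context_gaps.items() if breakage]


gaps = {
    "team size": "the project gets scoped incorrectly",
    "budget constraints": "the plan assumes tooling we can't buy",
    "preferred doc format": "",  # harmless if the AI guesses
}
must_fill = preflight(gaps)
print(must_fill)  # the gaps to close before prompting
```

In this example the check flags team size and budget constraints, while the doc format can safely be left to the AI’s defaults.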
Your Takeaway This Week
The people who will get the most out of AI over the next few years are the ones who’ve gotten better at two things: defining problems clearly and providing the right context for the systems they work with, and those skills compound in ways that extend far beyond your AI interactions. Every time you articulate a problem well enough for an AI to solve it on the first or second pass, you’ve trained yourself to think more precisely about what you actually need, and that makes you better at communicating with your team, your stakeholders, and your customers too.
Every era of software rewarded the people who could bridge the gap between what the tool could do and what the business actually needed, whether that meant deep technical translators in the on-prem era, strong implementers and customer success operators in the SaaS era, or clear thinkers who can define problems and provide context with precision in the AI era. The interface changed with each wave, but the underlying skill — the ability to close the context gap — has been the throughline across all of them.
So before you open your next AI tool and start typing, pause for a moment and ask yourself: have I defined the problem clearly, or am I asking the AI to guess at what I need? Have I provided the context it requires to do this well, or am I hoping it fills in the blanks correctly on its own? The difference between those two approaches is the difference between an output you throw away and one that actually moves your work forward.
What context are you going to get better at providing this week?
If you found this issue useful, subscribe and share The Business Side of AI. I built this newsletter for people who are building the future at work.
– Annie