Context is All You Need
What interpreted programming languages, emojis, and MCP teach us about the future of human communication
You ever have one of those conversations where you barely have to explain yourself—where someone just gets what you're saying? Like you’re on the same wavelength, sharing brain bandwidth or something?
That feeling—where ideas flow without all the backstory—isn’t magic. It’s a function of context. Real, shared context.
And it turns out, the same thing that makes human conversations click is now making machines a whole lot better at understanding us, and each other.
The Same Wavelength Protocol
When two people understand each other without having to explain every detail, it’s not telepathy. It’s compression.
You’ve both got context—on each other, the topic, maybe even the history around it. In venture capital, this is the difference between a surface-level chat about a startup and a quick, nuanced take with someone who knows both you and the space.
It’s like jamming with other musicians. You’re in key, anticipating each other's next bar—not because you're guessing, but because you've built up shared knowledge. It's faster, smoother, and just… easier. It's a kind of conversational jazz.
We don’t often think about it this way, but human conversation is mostly lossy compression. We compress meaning into words, tone, and facial expressions, then toss them into the air, hoping the other person decompresses them accurately. When we share context, that decompression gets better. Less guesswork, more signal.
From Punch Cards to Python
Consider the evolution of programming. In the early days, people interacted with machines through physical punch cards, which must have felt like operating alien technology. Programmers managed registers and memory directly, freeing every byte by hand; every instruction had to be precise, with no room for error. They had to literally think like the machine.
Then came languages like Fortran and COBOL, offering slightly more human-like syntax. Eventually, we reached high-level languages like Python, where a single import statement can conjure vast frameworks seemingly out of thin air. Each step added a new layer of abstraction, simplifying the human's task.
But every abstraction carries what computer scientists call the abstraction penalty—a slight cost in performance or control to gain ease of use for humans.
Do most developers mind? Not really. Unless you're coding for a pacemaker or a Mars rover where every cycle counts, you're likely not micromanaging CPU usage. You're focused on higher-level logic, trusting the underlying layers to handle the intricate details. Writing in Python instead of C is like saying, "Make me a sandwich," rather than dictating the recipe, ingredient measurements, and precise knife angles. You get the same result with far less friction.
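To make that concrete, here's a toy sketch (the filename is a placeholder): a few lines of Python that quietly do work a C programmer would have to spell out by hand.

```python
# Reading a file in Python: the runtime handles buffering,
# encoding, error propagation, and cleanup for you.
with open("notes.txt") as f:  # "notes.txt" is illustrative
    text = f.read()
print(len(text))

# The C equivalent would juggle a FILE*, a growable buffer,
# an fread() loop, and an explicit fclose(): the same sandwich,
# but you're holding the knife the whole time.
```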
Fluff In, Fluff Out
Yet the one place where we, as a species, somehow remain as verbose as we possibly can is the world of business and day-to-day speak.
Ever find yourself writing an email like, “Hi Jane, hope you’re well. Just wanted to gently circle back on our previous conversation from last Tuesday regarding the project update to see if you’ve had a chance to look at the latest figures…”?
Of course. We do this because simply writing, “Jane, status update on Project X figures?” can feel abrupt or even rude in many professional settings.
We lack guaranteed shared context about Jane's current workload, priorities, or even her preferred communication style.
So, we wrap our core message in layers of politeness and preamble – conversational padding – hoping to ensure the essential signal ("need update") survives transmission across uncertain terrain.
We're stuck in a loop: core message → fluff → transmission → hopeful extraction of core message.
It’s often inefficient, encoding and decoding social cues alongside the actual request.
Now, with LLMs becoming integrated into tools like email auto-complete (Gmail) and document assistants (Notion AI), there's a risk of amplifying this. We generate language intended for both human readers and machine processing, potentially multiplying the padding if the underlying context isn't clear to the AI.
MCP: The USB-C of AI Context
This challenge highlights why recent developments like MCP (Model Context Protocol) are so significant. First introduced by Anthropic in November 2024 and since adopted by OpenAI, MCP is a standardized interface that lets LLMs interact directly and meaningfully with applications.
Imagine LLMs speak English and some application speaks Polish. MCP is the translation layer that lets them talk to each other, and it works the same way for every application.
Without MCP, teaching an LLM to use a specific tool, like navigating a file system, might involve complex, custom-built translation layers or intricate prompts describing how the file system works. With MCP, the application can present its capabilities through a standardized protocol. It’s like saying, "Here is the file system interface; these are the available functions (list_files, read_file, check_permissions), here's how to use them." No bespoke hand-holding required.
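As a sketch of what that looks like in practice, here's a minimal MCP server exposing two of those hypothetical file-system functions, using the FastMCP helper from the official Python SDK. The tool bodies are deliberately simplistic and error handling is omitted.

```python
# A minimal sketch of an MCP server exposing file-system tools.
# Assumes the official Python SDK (pip install mcp); tool names
# mirror the hypothetical interface above.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("filesystem-demo")

@mcp.tool()
def list_files(directory: str) -> list[str]:
    """List the entries in a directory."""
    return sorted(p.name for p in Path(directory).iterdir())

@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a text file."""
    return Path(path).read_text()

if __name__ == "__main__":
    mcp.run()  # serves the protocol over stdio by default
```

An MCP-aware client can now discover both functions, read their signatures and docstrings, and call them, with no bespoke prompt describing how the file system works.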
In other words, MCP enables LLMs to actually “do stuff” instead of just regurgitating what they already know. It's akin to giving the AI not just pattern recognition abilities, but functional agency: the capacity to use tools effectively. Before, LLMs were like brilliant minds disconnected from hands; now they're gaining the means to interact purposefully with digital environments.
And the best part? Context is built-in.
The LLM doesn't just know a tool exists; through MCP, it understands its structure, its functions, its APIs, its permissions. It receives a labeled floor plan, not just confirmation that a kitchen exists.
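That floor plan is quite literal. When a client asks an MCP server for its tools, each one comes back as a structured descriptor, roughly like this (the shape follows the MCP specification's tools/list response; the values here are illustrative):

```python
# One tool descriptor from an MCP tools/list response: a name,
# a human-readable description, and a JSON Schema for the inputs.
tool_descriptor = {
    "name": "read_file",
    "description": "Return the contents of a text file.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path of the file to read"},
        },
        "required": ["path"],
    },
}
```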
This fundamentally changes the nature of interaction, starting prominently with programming. The common pattern is shifting:
Human to LLM: You provide the high-level logic, the goals, the constraints – perhaps as bullet points, sketches, or even spoken ideas. You are the architect, defining what needs to be done and the context in which it operates.
LLM to Machine: The LLM translates these ideas into functional code, interacts with APIs (potentially via MCP), manages data flows, and checks outputs. It handles the intricate syntax and implementation details.
You transition from being the "keyboard monkey" meticulously typing every line, to becoming the logic architect and the primary provider of context.
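In code, the pattern might look something like this minimal sketch, assuming the OpenAI Python client; the model name and the spec are illustrative, not a recommendation.

```python
# The human supplies intent as bullet points; the model supplies
# the implementation. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

spec = """
Write a Python function dedupe(emails) that:
- lowercases each address and strips surrounding whitespace
- returns the unique addresses, preserving original order
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": spec}],
)
print(resp.choices[0].message.content)  # the generated code
```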
Does that sound too sci-fi? Too abstract? Maybe. But then again, you’re already not managing memory or tuning CPU cores. That ship sailed decades ago. This is just the next level of abstraction, powered by context-rich LLMs.
Human Speak 2.0
Language is a compression format for thought.
Each sentence we construct is an imperfect snapshot—a quick sketch. We write in shorthand, speak in emotion, and trust the listener to fill in the blanks.
Now, large language models and systems like MCP are embedding that same principle into code: compressing meaning, inferring intent, and stripping away unnecessary verbosity. In doing so, they’re making us increasingly allergic to fluff.
Fluff won’t disappear entirely, but the tools to manage it are getting smarter—more contextual, less needy, better attuned to what actually matters.
To communicate at the level of abstraction we enjoy in languages like Python, we must improve our ability to transmit context. Because context isn’t just king—it’s the glue. The connective tissue. In fact, you could argue that context is all you need.