Context is a bottleneck
Feb 4, 2025
Most discussions about LLMs focus on their capabilities. But capability isn't the bottleneck; context is.
"Context" is the data you provide to an LLM to work with: the information it needs for summarizing, asking questions, or generating tasks. This becomes critical with agentic or automated systems, where the context must evolve as tasks progress.
Coding co-pilots are a good example. To be effective, the LLM needs to understand the codebase, grasp relevant technical concepts, identify the right files to modify, and, most importantly, understand what you actually want to achieve. Miss any of these and the quality drops dramatically.
Some challenges, like understanding code, will improve with time. But getting humans to distill what they want in a way that works for a specific LLM remains incredibly hard, even for prompt engineering experts.
The process of instructing a coding co-pilot requires thinking through my own reasoning flow. This comes naturally when I have a clear solution in mind, but that's rarely the case for complex tasks. And sometimes an open-ended text box is more daunting than comforting.
Many LLM-based products focus solely on making LLMs better at deduction. We should be spending just as much, if not more, energy on making it easier for us humans to distill what we want to do. The answer cannot be just "better prompt engineering".