Context Density: How to Survive the AI Tidal Wave

As AI matures, how can humans continue to contribute value? Context density gives us the answer.

As the AI tidal wave continues to break on our shores, there are two existential questions we’re all struggling to answer:

  1. Knowledge workers and other content producers – how can we survive the AI wave with some kind of defensible capability to offer our employers and audiences, one that AI won’t be able to replace even as it matures?
  2. Software vendors – how can we survive the AI wave with some kind of defensible product capability to offer our customers, one that AI agents won’t be able to replace even as they mature?

If you’re a pessimist, the situation may seem hopeless. AI is getting so much better so quickly that even if it can’t quite replace us or our software products today, it’s only a matter of time, right? Should we abandon hope?

Or perhaps you’re an optimist. There must be some aspect of what we as humans bring to the table that AI won’t be able to replace, no matter how good it gets.

If only we had a way of understanding and measuring just what essential value-add we humans can bring to the table, whether we are creating content, addressing business needs as knowledge workers, or building software products that provide value to their users.

The good news: there is hope. Here is a way of looking at the problem that will help illuminate that je ne sais quoi – that ineffable human contribution that AI will never be able to replace.

First, understand semantic density

Generative AI (genAI) depends upon large language models (LLMs) that deal well with content that has specific, well-defined meaning. The better defined our inputs – training data, retrieval-augmented generation (RAG) data, and information in prompts – the better formed our outputs.

In contrast, when the meaning of input data contains too many nuances – implications, unspoken references, intuitive leaps and the like – then LLMs fall short. The models’ creators simply have no way to build them to account for such subtleties.

Language experts have a term for how to understand such differences in meaning: semantic density.

You create a message with high semantic density by cramming a lot of meaning into a few words. In contrast, a message has low semantic density if it takes a lot of words to express a simple idea.

Humans are particularly good at creating semantically dense content – and in fact, we generally identify higher semantic density with better-written content.

On the other hand, LLMs excel at both consuming and producing content with low semantic density. Such output is especially useful when we are looking for clear, precise explanations, accurate summaries, etc. – just the sorts of content we’ve come to expect and demand from genAI.

Is semantic density the answer?

An obvious conclusion at this point would be for humans to focus on creating semantically dense content to survive the onslaught of AI. Unfortunately, there are problems with this argument.

First, LLMs can also generate semantically dense content, especially when the source data are also semantically dense – for example, when we ask genAI to create an abstract for a semantically dense academic paper.

Asking an LLM to write the paper is a recipe for plagiarism and hallucinations (as many students have learned to their chagrin), but the models are quite skilled at summarizing such content.

Second, it’s overly simplistic to equate semantically dense human-generated content with good writing and less dense content with poor writing.

After all, sometimes we want human-generated content to be less semantically dense. A simple example would be writing for children – something genAI can do for sure, but the best child-oriented content still comes from real people.

On the flip side, extreme semantic density typically makes the text obscure and difficult to read – clearly not hallmarks of excellent writing.

So, while semantic density correlates loosely with how well LLMs can perform, it’s not the whole story. The missing piece: context density.

The importance of context density

While semantic density measures the internal complexity of meaning within a message, context density measures the meaningful content around a message.

Context density is similar to semantic density: in both cases, more meaning crammed into fewer words means greater density, so it’s easy to confuse the two.

The reason context density is so important, however, is the role context plays in how LLMs behave – in particular, agentic behavior.

In fact, we could even say that what makes an LLM-based application into an AI agent is how it understands and takes action based upon context.

Such context can include:

  • Information about available local files, databases, and APIs
  • Available tools and how to access them
  • Security information necessary to access required assets
  • Other metadata relevant to each query
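
To make this concrete, here’s a minimal sketch of what such explicit, low-density context might look like as a data structure. The field names, tool definition, and credential reference are purely illustrative assumptions, not drawn from any particular agent framework:

    import json

    # Illustrative only: an explicit context payload an agent might receive.
    # Every file, tool, and credential reference is spelled out – nothing is
    # left to implication or intuition.
    agent_context = {
        "files": ["/data/quarterly_sales.csv"],
        "databases": [{"name": "crm", "dialect": "postgres"}],
        "tools": [
            {
                "name": "lookup_customer",  # hypothetical tool
                "description": "Fetch a customer record by ID from the CRM",
                "parameters": {"customer_id": "string"},
            }
        ],
        # A reference to a credential, not the secret itself
        "auth": {"token_ref": "vault://agent/crm-readonly"},
        "metadata": {"query_id": "q-1234", "locale": "en-US"},
    }

    print(json.dumps(agent_context, indent=2))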

Such context must be clear and unambiguous for the agents to behave properly. In other words, agents require context that has low context density.

In fact, this requirement for low context density is one of the reasons why the Model Context Protocol (MCP) has been such a rapid success.

The MCP is an open protocol standard for integrating LLM-based applications with external tools and data sources. It’s built on JSON-RPC, which expresses every interaction as plain JSON – a flexible format for representing data with low semantic density, or in the case of MCP, low context density.
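
To see what low context density looks like in practice, here is roughly the shape of an MCP tool-call request on the wire – a JSON-RPC 2.0 message, built here in Python. The tools/call method comes from the MCP specification; the tool name and arguments are hypothetical:

    import json

    # A simplified MCP-style tool invocation (JSON-RPC 2.0). The "tools/call"
    # method is part of MCP; the tool name and arguments are made up.
    request = {
        "jsonrpc": "2.0",
        "id": 42,
        "method": "tools/call",
        "params": {
            "name": "get_weather",
            "arguments": {"city": "Amsterdam", "units": "metric"},
        },
    }

    # Every field is explicit and unambiguous – low context density by design.
    print(json.dumps(request, indent=2))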

While the creators of MCP didn’t explicitly design it with low context density in mind, they did intend for the protocol to prioritize clarity and structure over density.

Given that each system in an agentic interaction must understand the relevant context without hidden assumptions or other nuances of meaning, explicit context with low density is essential to the success of agentic systems.

What, then, is the role of high context density?

Human-to-human interactions, aka conversations, have inherently high context density – even though we rarely notice it.

Every human conversation contains layers of subtext and hidden meaning via facial expressions, hand gestures, tone of voice, words with ambiguous meaning, patterns of pauses in speech, and other subtle aspects of human communication.

Such nuance goes right over the proverbial head of AI – even LLMs that do such a good job of mimicking human conversation. In other words, it’s virtually impossible for LLMs to deal with high context density.

Agentic interactions in particular are quite sensitive to excessive context density. Agents rely so heavily on the precision that low context density makes possible that any nuance in the context may throw them off entirely – and at the very least, they will simply ignore it.

How context density helps us humans

Where agents (and genAI in general) are weak, humans are strong. Context density, therefore, helps us answer the questions at the top of this article. If we look at various applications of AI, context density drives essential distinctions:

  • Knowledge work – ask your favorite copilot to handle tasks with low context density. Focus human attention and activity on those tasks that require high context density.
  • Automation – processes with low context density are easy for AI to automate. Processes with high context density require human input and control.
  • Building software – anyone can leverage code-generation tools to build applications with low context density. For applications that require high context density, code-generation tools must remain secondary to skilled human effort, insight, and control.

Context density thus becomes the differentiating metric between the activities and applications that LLMs are well suited for and those that will continue to require human input and control, even as AI technologies mature.

The Intellyx take

The most important part of this story is not identifying where AI is useful. It’s identifying where it is not.

As AI inevitably transforms how we work and live, we must all come to terms with the fact that AI will take various tasks off our respective plates, leaving us wondering what our purpose will be in this arguably dystopian future.

Take heart: there will always be roles for us humans. We are the masters of insight, creativity, nuance, and hidden meaning – the essence of context density.

Our challenge moving forward: identifying those activities where we can provide value as individuals by offering just those capabilities that AI is so woefully unable to provide.

The opportunity for software vendors: make sure your products have high context density. That way agents won’t be able to do what your products do. Instead, agents will need to call upon your products to accomplish their tasks successfully.

The opportunity for humans: make sure your work is both semantically and contextually dense. Focus on the meaning that LLMs can’t grasp. Express your intuition, insight, and creativity in terms of meaning, both within your work as well as its human context.

AI gives us an amazing set of tools. Knowing how to use them well means focusing our efforts on providing the value that we as humans are uniquely qualified to contribute.

Copyright © Intellyx BV. Intellyx is the change agent industry analysis and advisory firm focused on enterprise transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies empowers business executives, IT professionals, and software vendors to leverage disruptive trends to succeed in a dynamic business environment. No AI was used to write this article. Image credit: Craiyon.