
Why Active Agentic Memory is the Next Shift

Human-crafted knowledge works perfectly fine, until data-driven learning surpasses it. Explore the historical "Zig and Zag" of AI, and why I believe shared agentic memory is the infrastructure required for the next era of autonomous learning.

Valentin Tablan · Co-founder & CTO, Memco · 4 min read · Essay #01
[Figure: "The Zig and Zag of AI" — six moments alternating between codifying human rules (Zig) and learning from data (Zag): Dartmouth, 1956 (Zig: codify expert rules and symbols); Perceptron, 1958 (Zag: learn weights from examples); Minsky/Papert, 1969 (Zig: rules win again, first AI winter); AlexNet, 2012 (Zag: deep nets beat hand-crafted features); harnesses, 2025 (Zig: prompts, guardrails, markdown rules); next (Zag: agents learn from shared experience via active memory).]

Fig. 01 — Seven decades, one pendulum: codify, then let the machine learn. The next swing is already underway.

Over my 25 years in AI I have watched a recurring historical pattern unfold: a constant oscillation between two opposing philosophies, which I call the "Zig and Zag" of AI. Understanding this pattern can help us predict what comes next. I believe the next AI frontier is continually learning agents built on top of active shared memory.

The "Zig" is when we tell machines what to know. We codify human knowledge into rules, build elaborate structures, and engineer specific features. The "Zag" is when we let machines discover what to know. We step out of the way and let them learn dynamically from data and experience.

We are currently stuck in a massive Zig. But the pendulum is about to swing, and shared agentic memory is the infrastructure that will make the next Zag possible.

The Historical Pendulum

If we look back, the entire history of artificial intelligence is defined by this tension.

AI began with a Zig at the 1956 Dartmouth workshop, built on the bold premise that we could simply codify human knowledge into expert rules. Frank Rosenblatt soon offered a Zag with the Mark I Perceptron, a machine that learned purely from examples without human rules. The pendulum swung back in 1969 when Minsky and Papert proved that single-layer perceptrons couldn't learn even simple functions like XOR, triggering the first AI winter.

But human-crafted rules always hit a ceiling. In speech recognition, the turning point was famously captured by Fred Jelinek at IBM: "Every time I fire a linguist, the performance of the speech recogniser goes up". In computer vision, we spent years hand-crafting features (like SIFT or HOG) so the model knew what to look for. Then, in 2012, AlexNet arrived, proving that deep learning from raw data beats human-engineered features by an unbridgeable margin.

Human-crafted knowledge works perfectly fine, until data-driven learning surpasses it. Every single time.

The Great Illusion of Modern Agentic Workflows

Despite the phenomenal capabilities of modern LLMs, we are firmly back in a Zig phase today.

Look at how we build AI agents in professional software engineering. We build elaborate harnesses around our foundation models: rigid prompts, strict guardrails, retrieval pipelines, and manual human reviews. We manage their state using markdown files, an approach that, as I wrote in April, suffers from severe structural limits as your team scales.

Right now, the intelligence of our systems is increasingly living in the scaffolding, rather than the learning process itself. We treat the model as the system's intelligence while simultaneously forcing its ephemeral context window to act as its memory. As I recently highlighted in my piece on context rot, this architecture is fundamentally flawed. When you force an agent to rely on a context window for memory, compaction silently erases up to half of your non-default project rules and conventions. The agent doesn't even know what it has forgotten; it just confidently starts violating your team's architectural decisions.
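To make the failure mode concrete, here is a minimal sketch of what naive compaction does to a context window. The names and the item-based budget are illustrative assumptions (real agents compact by tokens, often with summarisation), but the outcome is the same: rules loaded at the start of the context are the first to go, and nothing records that they were dropped.

```python
# Hypothetical sketch of naive context compaction. All names are
# illustrative; real systems budget by tokens, not item counts.

def compact(context, budget):
    """Keep only the most recent `budget` items, silently dropping
    everything older -- including the project rules loaded up front."""
    return context[-budget:]

rules = [f"rule-{i}" for i in range(10)]     # team conventions, loaded first
dialogue = [f"turn-{i}" for i in range(15)]  # the conversation that follows
context = rules + dialogue

compacted = compact(context, budget=18)
surviving = [m for m in compacted if m.startswith("rule")]
print(f"{len(surviving)}/{len(rules)} rules survive compaction")
# prints "3/10 rules survive compaction" -- and the agent has no
# record that the other seven rules ever existed
```

Because the drop is silent, the agent's subsequent behaviour looks confident, not degraded: it simply acts as if the missing conventions were never stated.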

The Next Zag: The Era of Experience

To break free from this brittle scaffolding, we must enter the next Zag. Agents must shift from relying on static, human-provided prompts to learning dynamically from every interaction.

Over a year ago, I wrote about David Silver and Richard Sutton's paper describing the "Era of Experience". Their premise was that AI systems, bound by the theoretical limits of mimicking human data, must start interacting directly with their environment to discover original solutions. That logic remains as robust today as it was then. However, to learn from unmediated experience, agents fundamentally require memory. Without it, they are effectively amnesiacs, forced to re-derive solutions from first principles on every single run.

Shared Institutional Memory is the Catalyst

The shift to the next Zag will not come from giving agents isolated, personal memory. As I mapped out recently in my overview of the agentic memory design space, the most critical, compounding value for enterprise teams lives in shared, institutional memory.

This is exactly why we built Spark at Memco. We separated storage from the processing model entirely. When one agent on your team wrestles with your idiosyncratic legacy code and discovers a solution, that interaction is captured, abstracted, and stored. It is instantly distributed to every other agent across the company.
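Spark's internals aren't public here, but the loop the paragraph describes can be sketched in a few lines. Everything below is a hypothetical illustration, not Spark's actual API: one agent captures an abstracted lesson into a store shared by the whole team, and any other agent can recall it on its next run instead of re-deriving it.

```python
# Illustrative sketch of a shared institutional memory loop.
# Class and method names are assumptions, not Spark's real interface.
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    lessons: dict = field(default_factory=dict)

    def capture(self, topic: str, lesson: str) -> None:
        # One agent's hard-won solution, abstracted into a reusable lesson
        self.lessons[topic] = lesson

    def recall(self, topic: str):
        # Any other agent on the team retrieves it before starting work
        return self.lessons.get(topic)

memory = SharedMemory()

# Agent A wrestles with the legacy module and records what worked
memory.capture("legacy-billing-module",
               "use the v2 adapter; direct calls deadlock")

# Agent B, on a different task days later, starts from the lesson
print(memory.recall("legacy-billing-module"))
```

The essential design choice is the one named in the text: storage is separate from the processing model, so lessons persist across runs and across agents rather than living and dying inside one context window.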

The economic impact of this shared learning loop is undeniable. As we discovered in our recent evaluations, access to shared memory cuts compute costs by roughly 50% even on tasks agents can already solve independently, simply by making their path to the solution straighter and more predictable. And as I noted last month, even when an agent completely fails at a task that exceeds its current capability ceiling, having a shared memory reduces the cost of that failure by 34% by preventing it from exploring known dead ends.

We are moving away from solitary agents reading manual guidelines, and toward a collective intelligence where every agent learns and every team gets smarter. The scaffolding is coming down. The next Zag is here, and it is built on shared memory.

Valentin Tablan · Co-founder & CTO, Memco

Previously NLP at Apple Siri and the University of Sheffield. Working on systems that make AI agents collectively smarter without making any single one of them larger.
