Vector databases are built to read fast. Write embeddings, build an index, serve similarity queries at low latency. For static knowledge bases that rarely change, this works well.
For AI agent memory — where facts update hourly and preferences shift daily — the write path is just as important as the read path. And this is where vector databases fall short.
Writes Are Not Just Inserts
Adding new data means computing embeddings and inserting them. This is well-optimized. The problem is everything else.
Updating a fact requires finding the old embedding, removing it, computing a new one, and inserting the replacement. There is no native concept of "this new fact supersedes that old fact." Merging duplicate entities requires resolution logic entirely outside the database. Deprecating outdated information without deleting it requires custom metadata management.
Each of these operations is essential for accurate AI agent memory, and none are native to the vector database paradigm.
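As a concrete illustration, here is a minimal sketch of what "updating a fact" actually involves when the store only supports insert and delete. The `FactStore` class and its field names are hypothetical stand-ins for a vector database client, not a real API; the point is that supersession lives entirely in application-level metadata.

```python
from dataclasses import dataclass, field

@dataclass
class FactStore:
    # Toy in-memory stand-in for a vector index plus a separate metadata store.
    vectors: dict = field(default_factory=dict)        # fact_id -> embedding
    superseded_by: dict = field(default_factory=dict)  # old fact_id -> new fact_id

    def insert(self, fact_id, embedding):
        self.vectors[fact_id] = embedding

    def update(self, old_id, new_id, new_embedding):
        # "Update" is really three operations the database knows nothing about:
        # delete the old vector, insert the new one, and record the
        # supersession link in metadata the application must maintain itself.
        self.vectors.pop(old_id, None)
        self.insert(new_id, new_embedding)
        self.superseded_by[old_id] = new_id
```

If any one of the three steps fails partway through, the store is left inconsistent, which is exactly the coordination burden described above.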
What Happens When Writes Are Hard
Teams take shortcuts. The most common is never updating — new information gets appended, old information stays forever. The knowledge base accumulates layers of outdated, contradictory content.
A query about location might surface three different addresses from three time periods, with no temporal signal indicating which is current. The agent picks arbitrarily or presents all three.
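The ambiguity is easy to reproduce. In this toy sketch (the three-dimensional embeddings are fabricated stand-ins, not real model output), three appended address facts all score nearly identically against the query, so similarity alone gives the agent no basis for choosing the current one:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Three address facts appended over time; the vectors are toy stand-ins.
facts = [
    ("HQ is at 12 Elm St",  [0.90, 0.10, 0.0]),
    ("HQ is at 40 Oak Ave", [0.88, 0.12, 0.0]),
    ("HQ is at 7 Pine Rd",  [0.91, 0.09, 0.0]),
]
query = [0.90, 0.10, 0.0]  # "Where is HQ?"

# All three score above 0.99 -- similarity carries no temporal signal,
# so the "top" result is effectively arbitrary.
ranked = sorted(facts, key=lambda f: cosine(query, f[1]), reverse=True)
for text, emb in ranked:
    print(f"{cosine(query, emb):.3f}  {text}")
```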
Another shortcut is periodic full reindexing — recomputing all embeddings from scratch on a schedule. This works for small datasets but becomes expensive at scale, and it creates staleness windows that are unacceptable for time-sensitive applications.
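The cost structure of that shortcut is visible even in a toy sketch (`ToyIndex` and the length-based `embed` function are fabricated placeholders for a real index and embedding model): every cycle pays O(N) embedding calls regardless of how little changed, and queries see stale data until the rebuilt index is swapped in.

```python
class ToyIndex:
    # Minimal stand-in for a vector index.
    def __init__(self):
        self.vectors = {}

    def add(self, key, vec):
        self.vectors[key] = vec

def embed(text):
    # Toy embedding: real systems would call a model here, which is
    # precisely the expensive step repeated for every record each cycle.
    return [float(len(text))]

def full_reindex(records):
    # Rebuild from scratch: O(N) embedding calls per cycle, plus a
    # staleness window while the new index is built and swapped in.
    idx = ToyIndex()
    for r in records:
        idx.add(r["id"], embed(r["text"]))
    return idx
```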
Memory Requires Version Control
Production agent memory needs something closer to version control than write-append storage. When a user updates a preference, the new version should link to the old one — enabling both "What does this user prefer now?" and "What did they prefer before?"
This requires a write path that understands relationships between facts — temporal relationships like supersession and expiration. Git-style knowledge graphs treat every update as a new node connected to its predecessor, making the write path as powerful as the read path.
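The supersession chain described above can be sketched as a minimal linked structure (class and function names here are illustrative, not the API of any particular graph store). Each update creates a new node pointing at the version it replaces, so both "what now?" and "what before?" are single traversals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactVersion:
    # Each update becomes a new node linked to the version it supersedes.
    fact_id: int
    text: str
    supersedes: Optional["FactVersion"] = None

def record_update(old: Optional[FactVersion], fact_id: int, text: str) -> FactVersion:
    # The write path creates a new head rather than overwriting in place.
    return FactVersion(fact_id, text, supersedes=old)

def history(head: FactVersion) -> list:
    # Walk the supersession chain, newest first.
    out, node = [], head
    while node:
        out.append(node.text)
        node = node.supersedes
    return out

v1 = record_update(None, 1, "prefers email")
v2 = record_update(v1, 2, "prefers Slack")
print(history(v2))  # newest to oldest: ['prefers Slack', 'prefers email']
```

"What does this user prefer now?" is just the head node; "what did they prefer before?" is the rest of the chain.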
The Operational Burden
Custom write paths require coordinating across the embedding pipeline, vector index, metadata store, and application logic. Index fragmentation accumulates. Consistency is difficult to maintain when these systems are separate.
Systems that learn from retrieval outcomes add another dimension — updating not just facts but learned retrieval behaviors. This circular dependency is nearly impossible to manage when write infrastructure is bolted on.
Frequently Asked Questions
Can I use soft deletes and metadata flags?
Many teams do — flag old embeddings as deprecated and filter at query time. This works for simple cases but scales poorly as the index grows with accumulated deprecated entries.
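A minimal sketch of that pattern, assuming entries carry a hypothetical `deprecated` flag in their metadata (field names are illustrative): the index still stores and scans every deprecated entry; filtering only hides them at query time, so cost grows with each deprecation.

```python
# Hypothetical metadata-flag approach: deprecated entries are never removed,
# only flagged, and every query must remember to filter them out.
entries = [
    {"id": 1, "text": "old address", "deprecated": True},
    {"id": 2, "text": "new address", "deprecated": False},
]

def search(entries):
    # The flagged entries still occupy the index and are still scanned;
    # the filter hides them from results but pays their cost on every query.
    return [e for e in entries if not e["deprecated"]]
```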
How often does agent memory need to update?
More than expected. Preferences change within sessions. Business data updates daily. If your agent interacts over weeks and months, the write path is active constantly.
Conclusion
A memory system that reads but cannot efficiently write is a memory system that decays. Vector databases were optimized for the read path. AI agents need an equally capable write path — one supporting updates, versioning, and deprecation natively. Without it, memory does not evolve. It accumulates.