
To clarify further how this might translate into a real-world advantage: I'm not trying to displace databases or reinvent grep. I'm aiming at use cases where agents need durable, fast-access memory that doesn't disappear when a container shuts down.

Imagine an agent running inside a microVM or container. It doesn't call Pinecone or Redis; it mounts VexFS. When it stores an embedding, the vector isn't going to a remote vector store; it's written alongside the file it came from, locally and semantically indexed. The agent can reboot, restart, or crash and still recall the same memory, because that memory lives inside the OS, not in RAM or ephemeral middleware.
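
To make the write path concrete, here's a minimal sketch in C. This is not VexFS's actual API; it just illustrates the idea using Linux extended attributes, with a made-up attribute name (user.vex.embedding) standing in for wherever the index really keeps the vector:

    #include <stdio.h>
    #include <sys/xattr.h>

    /* Illustrative only: attach an embedding to the file it came from,
     * so the vector lives next to the data instead of in a remote store.
     * "user.vex.embedding" is a hypothetical attribute name. */
    int store_embedding(const char *path, const float *vec, size_t dim)
    {
        return setxattr(path, "user.vex.embedding",
                        vec, dim * sizeof(float), 0);
    }

    int main(void)
    {
        float vec[4] = {0.12f, -0.48f, 0.91f, 0.03f};  /* toy 4-dim embedding */
        if (store_embedding("notes.txt", vec, 4) != 0)
            perror("setxattr");
        return 0;
    }

The point isn't xattrs specifically; it's that storing a vector is one local syscall against the file itself, not a round trip to a service.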

This also means the agent can snapshot its cognitive state: "what I knew before I re-planned," or "the thoughts I embedded before the prompt changed." These aren't just filesystem snapshots; they're points in vector space tied to contextual memory. You could even branch them, like cognitive git commits.
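
The mechanics of that could be as boring as a reflink: clone the memory file cheaply, label the clone, and you have a branchable point-in-time copy. A sketch, assuming VexFS supported FICLONE the way btrfs and XFS do; the file names and the user.vex.label attribute are invented:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/xattr.h>
    #include <unistd.h>

    #ifndef FICLONE
    #define FICLONE _IOW(0x94, 9, int)  /* reflink ioctl, per <linux/fs.h> */
    #endif

    /* Sketch: snapshot an agent's memory file by reflinking it, then
     * tag the clone so it can be found (and branched) later. */
    int snapshot(const char *src, const char *dst, const char *label)
    {
        int in = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (in < 0 || out < 0)
            return -1;
        if (ioctl(out, FICLONE, in) != 0)  /* cheap copy-on-write clone */
            return -1;
        fsetxattr(out, "user.vex.label", label, strlen(label), 0);
        close(in);
        close(out);
        return 0;
    }

    int main(void)
    {
        /* "what I knew before I re-planned", kept as a branchable commit */
        return snapshot("memory.vex", "memory.pre-replan.vex", "pre-replan");
    }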

Even search becomes something different. Instead of path listings or grep hits, you get the files most semantically relevant to a query, right at the kernel level, with mmap access or zero-copy responses.
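
The zero-copy half is easy to demo with stock syscalls: mmap a packed array of float32 vectors and scan it in place, with no deserialization and no copies. The layout assumed here (row-major float32, fixed dimension) is my own stand-in, not VexFS's on-disk format:

    #include <fcntl.h>
    #include <math.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define DIM 384  /* assumed embedding width (MiniLM-sized) */

    static float cosine(const float *a, const float *b, int n)
    {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < n; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (sqrtf(na) * sqrtf(nb) + 1e-9f);
    }

    /* Scan a file of packed float32 embeddings in place via mmap: the
     * kernel pages data in as it's touched; nothing is copied or parsed. */
    int best_match(const char *index_path, const float *query)
    {
        int fd = open(index_path, O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) != 0)
            return -1;
        const float *vecs = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (vecs == MAP_FAILED)
            return -1;
        size_t count = st.st_size / (DIM * sizeof(float));
        int best = -1;
        float best_score = -2.0f;
        for (size_t i = 0; i < count; i++) {
            float s = cosine(&vecs[i * DIM], query, DIM);
            if (s > best_score) { best_score = s; best = (int)i; }
        }
        munmap((void *)vecs, st.st_size);
        close(fd);
        return best;  /* row of the most semantically similar vector */
    }

    int main(void)
    {
        float query[DIM] = {0};  /* stand-in for a real query embedding */
        query[0] = 1.0f;
        printf("best row: %d\n", best_match("embeddings.bin", query));
        return 0;
    }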

Most importantly, all of this runs without needing an external stack. No HTTP, no gRPC, no network calls. Just a vector-native FS the agent can think through.
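
So the whole round trip is plain file I/O. If the query surface were a control file, the way /proc and sysfs expose kernel state (the /mnt/vexfs/.query path below is invented for illustration), a search is literally a write followed by a read:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical: /mnt/vexfs/.query is a made-up control file, shown
     * only to make the point that the round trip is open/write/read,
     * not an RPC to a sidecar service. */
    int main(void)
    {
        char results[4096];
        int fd = open("/mnt/vexfs/.query", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        const char *q = "notes about container snapshots";
        write(fd, q, strlen(q));                             /* submit query */
        ssize_t n = read(fd, results, sizeof(results) - 1);  /* ranked paths */
        if (n > 0) {
            results[n] = '\0';
            fputs(results, stdout);
        }
        close(fd);
        return 0;
    }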

Still early, very early. Still rough. But if we’re going to build systems where agents operate autonomously, they’ll need more than tokens—they’ll need memory. And I think that needs to live closer to the metal.


