Pillar 1: It Remembers Everything

Memory is the foundation of CORE - the first pillar of your digital brain. It’s a temporal knowledge graph that captures conversations, decisions, preferences, and relationships, and organizes them so the right context surfaces at the right time. Unlike simple note-taking or RAG systems that store text chunks, CORE’s memory understands what kind of information it’s storing, who it’s about, and when it changed.

How Memory is Structured

  • Episodes are the documents you see on your CORE dashboard. Every conversation, email, or synced app activity becomes an episode - the raw source of truth for everything CORE knows.
  • Entities are the people, projects, companies, and concepts extracted from your episodes. When the same entity (say “Sarah”) appears in multiple episodes, those episodes are automatically linked through that entity - connecting a Slack conversation about a bug to a GitHub PR to a Linear issue, all because Sarah was mentioned in each. Learn more about entity types.
  • Statements are atomic facts extracted from each episode. “User works on TaskMaster” or “User prefers TypeScript” - each traceable back to its source episode. Every statement is classified into one of 11 aspects (preference, decision, directive, goal, etc.) so CORE can filter precisely - asking for your coding preferences doesn’t surface your meeting schedule.
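The three layers above can be sketched in code. This is a minimal illustration, not CORE’s actual schema - all class and field names here are assumptions chosen to mirror the description:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    id: str
    source: str   # e.g. "slack", "github", "core-agent"
    content: str  # raw source of truth

@dataclass
class Entity:
    name: str
    episode_ids: set = field(default_factory=set)  # episodes linked via this entity

@dataclass
class Statement:
    fact: str        # atomic fact, e.g. "User prefers TypeScript"
    aspect: str      # one of the 11 aspects: preference, decision, ...
    episode_id: str  # traceable back to its source episode

# The same entity appearing in two episodes links them together.
sarah = Entity("Sarah")
for ep in (Episode("ep1", "slack", "Sarah reported a login bug"),
           Episode("ep2", "github", "Sarah opened a PR fixing it")):
    sarah.episode_ids.add(ep.id)

stmt = Statement("User prefers TypeScript", "preference", "ep1")
print(sorted(sarah.episode_ids))  # both episodes reachable through Sarah
```

Because statements carry an aspect, a retriever can keep preference facts and meeting-schedule facts apart with a simple filter.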

How Your Memory Grows

CORE automatically syncs conversations from your connected AI tools - Claude Code, Cursor, Windsurf, and others - once you connect them via MCP. Every conversation those tools have with your memory becomes a new episode, so your knowledge keeps evolving without any manual effort. You can also add information manually: upload documents directly from the dashboard, or ask the CORE Agent to pull and save information from your connected apps (“Save my last 10 emails” or “Sync my recent Linear issues”). Any conversation you have with the CORE Agent is automatically added to memory too. The more you talk to it, the more it knows about you - your preferences, decisions, and how you work. Learn more about how CORE ingests memory.
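Whatever the input path - MCP sync, a manual upload, or an agent command - everything lands in memory the same way: as a new episode. A hypothetical sketch of that shared funnel (the function and field names are assumptions, not CORE’s API):

```python
def ingest(memory: list, source: str, content: str) -> dict:
    """Wrap raw input into an episode and append it to memory."""
    episode = {
        "id": f"ep{len(memory) + 1}",
        "source": source,   # where this came from
        "content": content, # raw source of truth
    }
    memory.append(episode)
    return episode

memory = []
ingest(memory, "mcp:claude-code", "Discussed refactoring the auth module")
ingest(memory, "upload", "Design doc for TaskMaster")
ingest(memory, "core-agent", "Save my last 10 emails")
print(len(memory))  # every input path produced one episode
```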

How CORE Searches Memory

CORE’s memory isn’t a simple Graph RAG that embeds text and returns the top-k similar chunks. It uses intent-driven retrieval - first classifying what kind of question you’re asking, then applying the optimal search strategy. Asking for your coding preferences? CORE filters by the Preference aspect and returns only preference facts. Asking what happened last week? It scopes to that time range. Asking how two people are connected? It traverses the entity graph across multiple hops. Under the hood, CORE combines vector search (semantic similarity), BM25 keyword search (precise term matching), and graph traversal (relationship discovery) - running them in parallel and fusing the results. This makes searches 3-4x faster and significantly more precise than traditional RAG. Learn more about the search pipeline.
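The “run in parallel, fuse the results” step can be illustrated with reciprocal rank fusion, a standard technique for merging ranked lists. This is a sketch under assumptions - the three retrievers are stubs, and CORE’s actual fusion method and scoring are not described here:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked result lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Stub results from the three strategies, each already ranked best-first.
vector_hits = ["ep3", "ep1", "ep7"]  # semantic similarity
bm25_hits   = ["ep1", "ep9", "ep3"]  # precise keyword match
graph_hits  = ["ep1", "ep3"]         # multi-hop entity traversal

fused = reciprocal_rank_fusion([vector_hits, bm25_hits, graph_hits])
print(fused[0])  # "ep1" - ranked highly by all three strategies
```

A document that scores well under several independent strategies rises to the top, which is why hybrid retrieval tends to be more precise than any single method alone.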

Organizing Memory

CORE classifies every query into one of 5 query types - aspect, entity, temporal, exploratory, and relationship - each with its own optimized search strategy. This means you don’t need to think about how to search; just ask naturally and CORE routes to the right approach. You can also organize your memory with labels - workspace-scoped tags that let you group episodes by project, topic, or context. Labels narrow the search space before execution, making filtered searches faster and more precise.
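The routing idea above can be sketched as a classifier that maps a query to one of the five types. The keyword rules here are toy heuristics for illustration only - CORE’s real classifier is not keyword matching:

```python
QUERY_TYPES = ("aspect", "entity", "temporal", "exploratory", "relationship")

def classify_query(query: str) -> str:
    """Toy heuristic router: pick a search strategy from surface cues."""
    q = query.lower()
    if "preference" in q or "decision" in q:
        return "aspect"        # filter statements by aspect
    if "last week" in q or "yesterday" in q:
        return "temporal"      # scope to a time range
    if "connected" in q or "related" in q:
        return "relationship"  # traverse the entity graph across hops
    if q.startswith(("who", "what is")):
        return "entity"        # look up a specific entity
    return "exploratory"       # fall back to broad hybrid search

print(classify_query("What are my coding preferences?"))           # aspect
print(classify_query("How are Sarah and the auth bug connected?")) # relationship
```

Labels would then shrink the candidate set before whichever strategy runs, so a query scoped to one project never scans unrelated episodes.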