Overview
Ingestion is how raw information becomes structured knowledge in your graph. Every conversation, app event, and document goes through a pipeline that extracts entities, identifies relationships, classifies facts by aspect, and links everything into your growing knowledge graph.
The Ingestion Pipeline
Raw Input Capture
Information arrives from conversations with the CORE Agent, integration activity (GitHub commits, Slack messages, Linear issues), manual document uploads, or scheduled syncs from connected apps.
Episode Creation
The raw content becomes an Episode, the atomic unit of memory. The original content is preserved as the source of truth, metadata is captured (timestamp, source, channel), and labels are applied based on context.
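A minimal sketch of what an Episode might look like as a data structure. The class name, fields, and the trivial labeling rule are illustrative assumptions, not CORE's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Episode:
    """Atomic unit of memory: raw content preserved alongside capture metadata."""
    content: str                      # original text, kept verbatim as the source of truth
    source: str                       # e.g. "github", "slack", "agent-chat" (hypothetical values)
    channel: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    labels: list = field(default_factory=list)

def create_episode(raw: str, source: str, channel: Optional[str] = None) -> Episode:
    # Labels come from context; this one-line rule is a stand-in for real classification.
    labels = ["code-activity"] if source == "github" else ["conversation"]
    return Episode(content=raw, source=source, channel=channel, labels=labels)
```

Note that the original content is never rewritten; downstream extraction stages read from it but leave the Episode itself intact.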
Entity Extraction
CORE identifies all entities mentioned - people, projects, technologies, concepts, companies. Entities are normalized and deduplicated (e.g., “Sarah”, “sarah”, “@sarah” resolve to the same entity).
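The deduplication step can be sketched as a normalization function that collapses surface variants to one canonical key. The normalization rules here (strip a leading "@", lowercase) are illustrative assumptions:

```python
import re

def normalize_entity(mention: str) -> str:
    """Collapse surface variants of an entity mention to one canonical key."""
    return re.sub(r"^@", "", mention.strip()).lower()

def dedupe_entities(mentions: list) -> dict:
    """Group raw mentions under their canonical entity key."""
    canonical = {}
    for m in mentions:
        canonical.setdefault(normalize_entity(m), []).append(m)
    return canonical
```

With this, `["Sarah", "sarah", "@sarah"]` all resolve to the single key `"sarah"`, so the graph gains one entity node rather than three.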
Statement Extraction
Facts are extracted as structured triples: subject → predicate → object. Each statement is classified into one of 11 aspects (Identity, Preference, Decision, etc.).
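A sketch of a statement as a typed triple with an aspect tag. Only three of the eleven aspects are spelled out in this section, so the set below is deliberately partial; the class shape is an assumption for illustration:

```python
from dataclasses import dataclass

# Partial: the docs name 3 of the 11 aspects explicitly.
KNOWN_ASPECTS = {"Identity", "Preference", "Decision"}

@dataclass(frozen=True)
class Statement:
    """A fact as a subject -> predicate -> object triple, classified by aspect."""
    subject: str
    predicate: str
    obj: str
    aspect: str

    def __post_init__(self):
        if self.aspect not in KNOWN_ASPECTS:
            raise ValueError(f"unknown aspect: {self.aspect}")

# Example: the preference from the manual-ingestion example below.
stmt = Statement("user", "prefers", "Fastify over Express for APIs", "Preference")
```

Keeping statements as immutable triples makes them easy to index by any of the three positions when linking them into the graph.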
Automatic vs Manual Ingestion
Automatic: Connected integrations feed activity into memory in real time or via scheduled syncs, and conversations with the CORE Agent are captured automatically. Zero effort: it happens in the background.

Manual: Tell the agent directly: "Remember this: I prefer using Fastify over Express for APIs because of better TypeScript support." Or use the MCP memory_ingest tool from any connected AI agent.
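As a rough illustration, a manual ingestion via the memory_ingest tool might send a payload like the one built below. The argument names (`content`, `source`) and envelope shape are assumptions; consult the actual MCP tool schema for the real contract:

```python
import json

def build_ingest_payload(text: str, source: str = "manual") -> str:
    """Assemble a hypothetical memory_ingest tool call as a JSON string."""
    payload = {
        "tool": "memory_ingest",          # tool name from the docs; argument names are assumed
        "arguments": {"content": text, "source": source},
    }
    return json.dumps(payload)

payload = build_ingest_payload(
    "I prefer using Fastify over Express for APIs because of better TypeScript support."
)
```

Either path (agent instruction or tool call) lands in the same pipeline: the text becomes an Episode and flows through entity and statement extraction like any other input.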
