Persistent, cross-session memory for AI agents. Not a bigger context window — real memory that survives across calls, users, and days.
5–10K tokens in GPU VRAM. Instant recall within a session. $0.003 per 1K tokens on RecordorAI hardware — 800x cheaper than GPT-5.4 for the same context density.
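A quick sanity check on the cost claim above, assuming "800x cheaper" compares per-1K-token prices at equal context density (that interpretation is our assumption):

```python
# Back-of-envelope check of the stated cost advantage.
recordor_per_1k = 0.003          # USD per 1K tokens (stated above)
multiplier = 800                 # stated cost advantage
implied_competitor_per_1k = recordor_per_1k * multiplier
# implied comparison price: roughly $2.40 per 1K tokens
```

In other words, the claim implies the comparison model runs at about $2.40 per 1K tokens of equivalent context.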
Full conversation history in pgvector + Supabase. Infinite retention, queried on demand. No per-token cost scaling with history length.
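On-demand recall over a pgvector archive can look like the sketch below. The table and column names (`conversation_turns`, `embedding`) are assumptions for illustration, not RecordorAI's actual schema:

```python
def recall_sql(k: int = 5) -> str:
    """Build a nearest-neighbor query over archived conversation turns.

    `<->` is pgvector's distance operator; the session id and the
    embedding of the current user message are bound as parameters
    at execution time.
    """
    return (
        "SELECT content, created_at "
        "FROM conversation_turns "
        "WHERE session_id = %(session_id)s "
        "ORDER BY embedding <-> %(query_embedding)s "
        f"LIMIT {k}"
    )

# Execute with psycopg or Supabase's client, passing the query
# embedding as %(query_embedding)s.
sql = recall_sql(k=3)
```

Because only the top-k matches are pulled into the prompt, per-call token cost stays flat no matter how long the history grows.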
Each entity (user, project, decision) compressed to ~500 tokens. No hallucination risk — extractive, not generative. Costs almost nothing to store.
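A minimal sketch of extractive entity compression, assuming the simplest possible policy: keep verbatim transcript lines that mention the entity, oldest first, within a fixed budget. The real extractor is surely more sophisticated, but the key property holds for any extractive scheme: no text is generated, so nothing can be hallucinated.

```python
def compress_entity(name: str, transcript: list[str], budget_chars: int = 2000) -> str:
    """Extractive compression: keep only verbatim lines mentioning the
    entity until the budget is hit. No generation, no hallucination."""
    kept, used = [], 0
    for line in transcript:
        if name.lower() in line.lower():
            if used + len(line) > budget_chars:
                break
            kept.append(line)
            used += len(line)
    return "\n".join(kept)
```

The budget (here in characters; the product's ~500-token figure would map to roughly 2,000 characters) caps storage per entity.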
Auto-selects Gemma 4 or GLM-5.1 based on task complexity and cost. Fast tasks run on Gemma; deep reasoning runs on GLM. No thought required.
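The routing policy can be as simple as the sketch below. The 400-word cutoff and the reasoning flag are illustrative, not RecordorAI's actual selection logic:

```python
def route_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Send long or reasoning-heavy tasks to GLM-5.1; everything else
    goes to the cheaper, faster Gemma 4. Thresholds are illustrative."""
    if needs_reasoning or len(prompt.split()) > 400:
        return "glm-5.1"
    return "gemma-4"
```

Keeping the routine traffic on the fast model is where the cost savings come from; only the hard tail pays for deep reasoning.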
Python and JavaScript SDKs. Works with LangChain, AutoGPT, CrewAI, or your own stack. Drop-in memory backend — no rewrites.
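"Drop-in" means the backend exposes the small save/load surface agent frameworks expect. The method names below are illustrative, not the actual SDK; the in-memory class stands in for the hosted service:

```python
from typing import Protocol

class MemoryBackend(Protocol):
    """The kind of interface a drop-in memory backend exposes
    (method names are illustrative, not RecordorAI's SDK)."""
    def save(self, session_id: str, role: str, text: str) -> None: ...
    def load(self, session_id: str) -> list[dict]: ...

class InMemoryBackend:
    """Toy implementation satisfying the protocol, for local testing."""
    def __init__(self) -> None:
        self._store: dict[str, list[dict]] = {}

    def save(self, session_id: str, role: str, text: str) -> None:
        self._store.setdefault(session_id, []).append({"role": role, "text": text})

    def load(self, session_id: str) -> list[dict]:
        return self._store.get(session_id, [])

backend = InMemoryBackend()
backend.save("sess-1", "user", "I prefer dark mode")
```

Any framework that accepts a memory object with this shape can swap backends without touching agent code.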
Browser-based UI for end users, connected to the same memory layer. Every conversation benefits from tiered memory automatically.
Supabase is SOC 2 Type II certified and HIPAA-eligible. Customer data stays isolated. Works for security-sensitive workloads.
vLLM, pgvector, Supabase — no vendor lock-in. The memory layer is yours. Deploy anywhere.
Memory events stream in real time. Build reactive workflows on top of your agent's memory.
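A reactive workflow over the event stream reduces to registering handlers per event type. The event names (`entity.updated`) and payload shape here are assumptions; the actual stream's schema may differ:

```python
from collections import defaultdict
from typing import Callable

class MemoryEventBus:
    """Minimal reactive layer over a memory event stream (event
    names and payload fields are illustrative assumptions)."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, event_type: str, fn: Callable[[dict], None]) -> None:
        self._handlers[event_type].append(fn)

    def dispatch(self, event: dict) -> None:
        for fn in self._handlers[event.get("type", "")]:
            fn(event)

bus = MemoryEventBus()
seen = []
bus.on("entity.updated", lambda e: seen.append(e["entity"]))
bus.dispatch({"type": "entity.updated", "entity": "user:42"})
```

The same pattern works whether events arrive over a websocket, a webhook, or a Supabase realtime subscription.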
No API markups. No per-token royalties. You pay RecordorAI — we run the infra.
Free during beta. No credit card. Just access.