> Multi-scope vector memory for contextual AI conversations.
The AI Memory system provides persistent, semantic memory for your AI features. Memories are stored with vector embeddings and retrieved via similarity search. Multiple scopes (chat, project, user) keep context relevant to the current conversation. Qdrant backs production storage, with an in-memory fallback for development, so the system degrades gracefully when services are unavailable.
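To make the fallback concrete, here is a minimal sketch of how an in-memory store with similarity search could work: records live in a plain array, filtered by scope and ranked by cosine similarity against a query embedding. The types and function names below are illustrative assumptions, not the actual module API, and the 2-dimensional "embeddings" are toy values.

```typescript
// Illustrative in-memory fallback: scope-filtered cosine-similarity search.
type Scope = 'chat' | 'project' | 'user';

interface MemoryRecord {
  id: number;
  content: string;
  scope: Scope;
  scopeId: string;
  embedding: number[];
}

const store: MemoryRecord[] = [];
let nextId = 1;

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function add(content: string, scope: Scope, scopeId: string, embedding: number[]): number {
  const id = nextId++;
  store.push({ id, content, scope, scopeId, embedding });
  return id;
}

// Filter by scope, then rank remaining records by similarity.
function search(queryEmbedding: number[], scope: Scope, scopeId: string, limit: number): MemoryRecord[] {
  return store
    .filter((m) => m.scope === scope && m.scopeId === scopeId)
    .map((m) => ({ m, score: cosine(queryEmbedding, m.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((x) => x.m);
}

// Toy 2-dimensional embeddings keep the example self-contained.
add('User prefers dark mode', 'user', 'u1', [1, 0]);
add('User lives in Berlin', 'user', 'u1', [0, 1]);
const top = search([0.9, 0.1], 'user', 'u1', 1);
console.log(top[0].content); // → 'User prefers dark mode'
```

A production backend like Qdrant replaces the array scan with an approximate-nearest-neighbor index, but the scope filter plus similarity ranking shown here is the same retrieval contract.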
Set `SERVICE_VECTOR_DB=true` in your environment to use Qdrant for production memory storage.

```bash
# .env.local
SERVICE_VECTOR_DB=true
FEATURE_MEMORY=true

# Qdrant settings (defaults shown)
QDRANT_HOST=localhost
QDRANT_PORT=6333
QDRANT_COLLECTION=fabrk_memories
```
Run Qdrant locally via Docker, or uncomment the qdrant service in docker-compose.yml.

```bash
docker run -p 6333:6333 qdrant/qdrant
```
Add and search memories via the REST API or directly in server code.

```typescript
// Server-side usage
import { addMemory, searchMemory } from '@/lib/ai/memory';

// Store a memory
await addMemory('User prefers dark mode', 'user', userId);

// Search memories
const results = await searchMemory(
  'theme preferences',
  'user',
  userId,
  5 // limit
);

// REST API
// POST /api/ai/memory { content, scope, scopeId }
// GET /api/ai/memory?query=...&scope=user&scopeId=...
// DELETE /api/ai/memory { ids: [...] }
```
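For client-side callers, the REST endpoints listed in the comments above can be wrapped in small request builders. The endpoint paths and body shapes come from this document; the helper names are illustrative assumptions, and the return values are plain request descriptions you would pass to `fetch`.

```typescript
// Hypothetical request builders for the memory REST endpoints.
const BASE = '/api/ai/memory';

interface JsonRequest {
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build the URL for GET /api/ai/memory?query=...&scope=...&scopeId=...
function searchUrl(query: string, scope: string, scopeId: string): string {
  const params = new URLSearchParams({ query, scope, scopeId });
  return `${BASE}?${params.toString()}`;
}

// Build the POST request body: { content, scope, scopeId }
function storeRequest(content: string, scope: string, scopeId: string): JsonRequest {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content, scope, scopeId }),
  };
}

// Build the DELETE request body: { ids: [...] }
function deleteRequest(ids: string[]): JsonRequest {
  return {
    method: 'DELETE',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ids }),
  };
}

// Example: the GET URL for a user-scoped search.
console.log(searchUrl('theme preferences', 'user', 'u1'));
// → /api/ai/memory?query=theme+preferences&scope=user&scopeId=u1
```

Each builder mirrors one of the documented endpoints, so a call site only needs `fetch(searchUrl(...))` or `fetch(BASE, storeRequest(...))`.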