AI memory OS for LLM and agent systems (moltbot, clawdbot, openclaw), enabling persistent Skill memory for cross-task skill reuse and evolution.
Updated Apr 20, 2026 - Python
Mirix is a multi-agent personal assistant designed to track on-screen activities and answer user questions intelligently. By capturing real-time visual data and consolidating it into structured memories, Mirix transforms raw inputs into a rich knowledge base that adapts to your digital experiences.
User Profile-Based Long-Term Memory for AI Chatbot Applications.
Awesome AI Memory | LLM Memory | A curated knowledge base on AI memory for LLMs and agents, covering long-term memory, reasoning, retrieval, and memory-native system design. Awesome-AI-Memory is a centralized, continuously updated knowledge base on AI memory, systematically organizing cutting-edge research, engineering frameworks, system designs, evaluation benchmarks, and real-world application practice related to LLM memory and agent memory.
Give Claude Code photographic memory in ONE portable file. No database, no SQLite, no ChromaDB - just a single .mv2 file you can git commit, scp, or share. Native Rust core with sub-ms operations.
Curated systems, benchmarks, papers, and more on memory for LLMs/MLLMs: long-term context, retrieval, and reasoning.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
Lightweight Long-Term Memory for LLM Agents.
[COLM 2025] Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale
HaluMem is the first operation-level hallucination evaluation benchmark tailored to agent memory systems.
The world's first local-only AI memory to exceed 74% retrieval and 60% zero-LLM accuracy on LoCoMo. No cloud, no APIs, no data leaves your machine. Additionally, mode C (LLM/cloud) reaches 87.7% on LoCoMo. Research-backed. arXiv: 2603.14588
✨ mem0 MCP Server: a memory system using mem0 for AI applications, with Model Context Protocol (MCP) integration. Enables long-term memory for AI agents as a drop-in MCP server.
Reliable and Efficient Semantic Prompt Caching with vCache
Research on the DeepSeek Engram architecture, based on Qwen-3 and the Stable Diffusion series.
Reproducible Structured Memory for LLMs
Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support & GraphQL. 91% Stanford STARK accuracy, <100ms on-device retrieval
Next-gen AI memory layer with importance scoring, temporal decay, hierarchical memory, and YMYL prioritization
LLM-supervised persistent memory for AI agents — graph-based recall, cross-session knowledge, single binary. Works with Claude Code, OpenClaw, and any CLI agent.
Persistent memory extension for pi with daily logs, scratchpad, and optional qmd-powered semantic search.
Recursive Language Models for Claude Code - Infinite memory solution inspired by MIT CSAIL paper