Engineering

Add Memory to OpenClaw: The Complete Mem0 Integration Guide (2026)

We recently built memory for OpenClaw, and since then a large number of people have been testing OpenClaw with Mem0.

If you haven't tried that setup yet, this tutorial is for you.

When you run OpenClaw for the first time, it collects information about you and writes it into memory files that the agent can reference during conversations. The agent may ask about your work, your preferences, or how you want it to behave. As you keep using it, responses may begin to reflect that information, which makes the agent feel like it is learning over time.

But as conversations grow longer or span multiple sessions, that expectation starts to break down. Details you shared earlier stop showing up in responses. Information that felt important to the task is no longer recalled. In some cases, the agent behaves as if the information was never provided at all.

This happens because OpenClaw's default memory system does not guarantee persistence or memory recall. Memory storage and retrieval are left to the LLM, guided by prompts, heuristics, and a small set of markdown files.

The model decides what to save, when to search memory, and whether previously stored information is relevant enough to be loaded back into the current context. There is simply no guarantee that information will be persisted or reloaded when needed.

This article shows how to add enforced, persistent memory to OpenClaw using the Mem0 plugin, @mem0/openclaw-mem0.

TLDR

  • OpenClaw provides memory files and memory tools, but it does not guarantee when information is saved or recalled

  • Memory persistence and retrieval are optional behaviors controlled by prompts and model heuristics

  • Long conversations and context compaction reduce the reliability of recall

  • @mem0/openclaw-mem0 enforces automatic memory capture outside the agent lifecycle

  • Relevant memory is injected into every response automatically

  • Memory survives restarts and session boundaries, making agents reliable across runs

What persistent memory means in OpenClaw agents

In OpenClaw, persistent memory refers to memory that is stored outside the agent's execution lifecycle and can be reintroduced after a session ends or the process restarts.

Agents do not run forever. Sessions end. Context gets trimmed. Processes restart. If memory only lives inside the active prompt, it will disappear. Persistent memory solves that by living outside the agent lifecycle and being reintroduced when needed.

Without it, agents rely on short term context and best effort recall. With it, agents can actually build on past interactions over time.
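The distinction is easy to see in a few lines of Python. This sketch is not how OpenClaw or Mem0 store anything; the file path and class are invented purely to illustrate memory that outlives the process:

```python
import json
from pathlib import Path

PATH = Path("/tmp/demo_memories.json")  # illustrative storage location
PATH.unlink(missing_ok=True)            # start clean for the demo

class PersistentMemory:
    """Toy memory that lives outside the agent's execution lifecycle."""

    def __init__(self, path: Path):
        self.path = path
        # Reload whatever a previous session left behind.
        self.items = json.loads(path.read_text()) if path.exists() else []

    def remember(self, fact: str) -> None:
        self.items.append(fact)
        self.path.write_text(json.dumps(self.items))  # durable: survives restarts

# "Session 1" stores a fact, then the process ends.
m = PersistentMemory(PATH)
m.remember("User builds backend APIs in Python")
del m

# "Session 2" is a fresh instance: the fact is reintroduced from disk.
m2 = PersistentMemory(PATH)
print(m2.items)  # ['User builds backend APIs in Python']
```

Context-bound memory is the same class without the `write_text` call: the moment the process ends, everything is gone.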

How OpenClaw's memory system works, and why it fails for long-term recall

Out of the box, OpenClaw stores memory as markdown files on disk.

ls

You will see files like:

AGENTS.md
IDENTITY.md
MEMORY.md
USER.md

As you talk to your agent, OpenClaw gives the LLM access to memory tools such as memory_search and memory_get. At first glance, this looks reasonable. Memory exists and tools exist.

The problem is how those tools are used.

  • Saving memory: When you tell your agent something important, OpenClaw does not force it into memory. The LLM decides if the information is worth saving. If it decides no, the information is ignored forever. There is no guarantee it will be saved.

  • Recall: Even when something was saved, recall is still not guaranteed. OpenClaw provides tools like memory_search, but the agent must decide to call them. Most of the time, it chooses to answer from its training data instead.

User: I usually build backend APIs in Python
Agent: Okay, noted

[new session]

User: Suggest a project idea for me
Agent: You could build a mobile app or a game
  • Context compaction: To avoid hitting token limits, OpenClaw compacts context: older messages are summarized or removed from the active conversation. If the agent does not decide to search memory again after compaction, it answers without that context entirely.

  • Built-in memory search: OpenClaw builds a vector index over markdown memory files, but in practice search results are inconsistent, search calls may fail silently, and the agent may not call search at all.

By now the pattern is obvious. Information may exist on disk, but there is no guarantee it will be saved, searched, or reintroduced when needed.

For short demos, the built-in memory is usually fine. But once you start doing real work, with long sessions and agents that run across days, things fall apart quickly.
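Compaction is easy to model. The sketch below is a deliberate simplification (real compaction summarizes rather than deleting outright), but it shows the effect on recall: once the budget is exceeded, the oldest messages, and the preferences in them, leave the active context:

```python
def compact(messages, max_tokens=20):
    """Toy compaction: drop oldest messages until a rough token count fits."""
    def tokens(msgs):
        return sum(len(m.split()) for m in msgs)  # crude word-count proxy
    msgs = list(messages)
    while len(msgs) > 1 and tokens(msgs) > max_tokens:
        msgs.pop(0)  # oldest message is evicted first
    return msgs

history = [
    "user: I usually build backend APIs in Python",   # the important fact
    "user: here is a long unrelated debugging session " + "log " * 15,
]
active = compact(history)
print("Python" in " ".join(active))  # False: the preference is no longer in context
```

Unless something outside the context window re-injects that fact, the agent now answers as if it was never told.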

How Mem0 adds persistent memory to OpenClaw agents

@mem0/openclaw-mem0 moves memory control out of the agent loop and into the system layer. It does this through two mechanisms that run on every turn, silently, with no manual configuration required.

  • Auto-Capture: After the agent responds, the exchange is sent to Mem0, which decides what is worth keeping and stores it as structured memory outside the session. Memory capture does not depend on the agent deciding what is important.

  • Auto-Recall: Before the agent responds, memories matching the current message are retrieved and injected directly into context, long-term memories first, then session memories. The agent reasons with the memory already present. No memory_search call required.

User: I usually build backend APIs in Python

Auto-Capture: detects user preference, stores as user-scoped memory
Auto-Recall: next turn, that memory is already in context before the agent responds
  • Memory survives sessions: You can stop the agent, restart it, and continue the conversation. The memory still exists because it lives outside the session. This is what makes it a real memory system for AI agents.
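Conceptually, the plugin wraps every turn in a recall-then-capture loop. The sketch below is a paraphrase of that flow, not the plugin's source; `FakeMemory` and its keyword-overlap search are stand-ins for Mem0's actual vector retrieval:

```python
class FakeMemory:
    """Stand-in for Mem0: naive keyword overlap instead of vector search."""
    def __init__(self):
        self.facts = []

    def search(self, query: str) -> list[str]:
        q = set(query.lower().split())
        return [f for f in self.facts if q & set(f.lower().split())]

    def capture(self, user_msg: str, reply: str) -> None:
        self.facts.append(user_msg)  # stored unconditionally, outside the agent

def handle_turn(user_msg, memory, agent_respond):
    # Auto-Recall: retrieval runs in the system layer, before the model sees anything.
    context = [f"[memory] {m}" for m in memory.search(user_msg)] + [user_msg]
    reply = agent_respond(context)
    # Auto-Capture: the exchange is stored whether or not the agent thought it mattered.
    memory.capture(user_msg, reply)
    return reply

mem = FakeMemory()
respond = lambda ctx: f"responding with {len(ctx) - 1} recalled memories"

handle_turn("I usually build backend APIs in Python", mem, respond)
print(handle_turn("suggest a Python project", mem, respond))
# prints "responding with 1 recalled memories"
```

The point of the structure is that neither step is a tool the model can skip: recall and capture happen around the agent call, not inside it.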

Short-term vs long-term memory

The plugin organises memory into two scopes, and understanding the difference matters for how you use the tools.

  • Session memory (short-term): Auto-capture stores memories scoped to the current session using Mem0's run_id parameter. These are contextual to the ongoing conversation and do not carry forward indefinitely.

  • User memory (long-term): Persists across all sessions for the user. When the agent calls memory_store explicitly, it defaults to long-term storage (longTerm: true).

During auto-recall, both scopes are searched and presented separately: long-term memories first, then session memories, so the agent has full context before it reasons.
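A rough model of scoped recall looks like this. It assumes memories are keyed by user and an optional run id, with `None` marking long-term; this mirrors the ordering described above but is not the plugin's implementation:

```python
def recall(store, user_id, run_id, query, scope="all"):
    """Return matching memories, long-term (run_id=None) before session ones."""
    def in_scope(rid):
        if scope == "long-term":
            return rid is None
        if scope == "session":
            return rid == run_id
        return rid is None or rid == run_id  # "all"

    hits = [(rid, text) for uid, rid, text in store
            if uid == user_id and in_scope(rid) and query.lower() in text.lower()]
    hits.sort(key=lambda h: h[0] is not None)  # long-term first, then session
    return [text for _, text in hits]

store = [
    ("alice", "run-1", "Debugging a Python API this session"),  # session-scoped
    ("alice", None, "Prefers Python for backend work"),          # long-term
]
print(recall(store, "alice", "run-1", "python"))
# ['Prefers Python for backend work', 'Debugging a Python API this session']
print(recall(store, "alice", "run-1", "python", scope="long-term"))
# ['Prefers Python for backend work']
```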

Setting up @mem0/openclaw-mem0 step by step

You no longer need manual config editing to get started. Everything happens inside the OpenClaw chat itself.

Step 1: Get the setup command

Go to mem0.ai/claw-setup. You'll see a single command ready to copy:

Setup Mem0 from mem0.ai/claw-setup

Step 2: Send it to your OpenClaw agent

Open any OpenClaw channel: Telegram, WhatsApp, your default chat, wherever your agent lives. Paste and send the command from the previous step.

OpenClaw responds with a Mem0 setup card and immediately asks:

"What's your email address? I'll send you a verification code to connect your Mem0 account."

Step 3: Enter your email

Type your email address and send it. Mem0 sends back:

"Check your email for a 6-digit code and paste it here."

Step 4: Paste the OTP

Copy the 6-digit code from your inbox and paste it into the chat:

223716

You'll see the confirmation:

"Connected to Mem0."

That's it. No API key. No config file editing. No environment variables. The plugin is now active and auto-capture and auto-recall are running on every turn.

Prefer to self-host? Use open-source mode

If you want to run everything locally without connecting to Mem0 Cloud, you can still use open-source mode. This path does require a manual config edit.

Open your config file:

Add this under plugins.entries:

"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id"
  }
}
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id"
  }
}
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id"
  }
}

To customise the embedder, vector store, or LLM:

"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id",
    "oss": {
      "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
      "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
      "llm": { "provider": "openai", "config": { "model": "gpt-4o" } }
    }
  }
}
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id",
    "oss": {
      "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
      "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
      "llm": { "provider": "openai", "config": { "model": "gpt-4o" } }
    }
  }
}
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "your-user-id",
    "oss": {
      "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
      "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
      "llm": { "provider": "openai", "config": { "model": "gpt-4o" } }
    }
  }
}

Restart the gateway after saving:

All oss fields are optional. The defaults use OpenAI embeddings (text-embedding-3-small), an in-memory vector store, and an OpenAI LLM. See the Mem0 OSS docs for the full list of available providers.

What tools your agent now has access to

Once the plugin is enabled, your agent gains five memory tools automatically:

| Tool | Description |
| --- | --- |
| memory_search | Search memories by natural language |
| memory_list | List all stored memories for a user |
| memory_store | Explicitly save a fact |
| memory_get | Retrieve a memory by ID |
| memory_forget | Delete by ID or by query |

memory_search and memory_list both accept a scope parameter to control which memories are queried: "session" for short-term only, "long-term" for cross-session only, or "all" for both.

For normal usage you do not need to call these manually. Auto-capture and auto-recall handle most cases. They are there when you need explicit control.
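When you do want explicit control, the agent can call these tools directly. As an illustration only (the exact tool-call envelope depends on your model provider), a scoped search might look like:

```json
{
  "tool": "memory_search",
  "arguments": {
    "query": "deployment preferences",
    "scope": "long-term"
  }
}
```

Passing "session" instead restricts the search to the current run, and "all" queries both scopes.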

Verifying that persistent memory works

At this point, everything should be wired up. The only thing left is to confirm memory actually persists.

Start by telling your agent something worth remembering:

User: I usually build backend APIs in Python
Agent: Got it. I've noted that you build backend APIs in Python.

You should see this in your logs immediately:

21:49:09 [plugins] openclaw-mem0: auto-captured 1 memories

Stop the agent. Start it again so you are in a new session:

Ask something that depends on that memory:

User: Suggest a project idea for me
Agent: Since you build backend APIs in Python, you could build a small API 
       gateway with rate limiting and API key support


Then confirm the memory exists directly using the CLI:

# Search by exact phrase
openclaw mem0 search "backend APIs in Python"

# Search by natural language
openclaw mem0 search "what does the user usually build"

# Search only long-term memories
openclaw mem0 search "backend APIs" --scope long-term

# Search only session memories
openclaw mem0 search "backend APIs" --scope session

# View overall memory stats


You should see:

Found 1 memory
- User usually builds backend APIs in Python

At this point there is nothing left to assume. The memory exists. It survives restarts. It is injected into every response. The agent is no longer guessing.

Configuration reference

Core options

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| mode | "platform" / "open-source" | "platform" | Which backend to use |
| userId | string | "default" | Scope memories per user |
| autoRecall | boolean | true | Inject memories before each turn |
| autoCapture | boolean | true | Store facts after each turn |
| topK | number | 5 | Max memories injected per recall |
| searchThreshold | number | 0.3 | Minimum similarity score (0–1) |

Platform mode options

| Key | Description |
| --- | --- |
| apiKey | Supports ${MEM0_API_KEY} env var syntax |
| orgId | Optional. Organisation ID for multi-org setups |
| projectId | Optional. Project ID for scoping |
| customInstructions | Override what gets extracted and how it is formatted |
| customCategories | Override the 12 default memory category tags |
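Putting the core and platform options together, a platform-mode entry under plugins.entries might look like this (all values here are illustrative, not recommendations):

```json
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "platform",
    "apiKey": "${MEM0_API_KEY}",
    "userId": "alice",
    "autoRecall": true,
    "autoCapture": true,
    "topK": 8,
    "searchThreshold": 0.4
  }
}
```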

Open-source mode options (oss)

| Key | Default | Description |
| --- | --- | --- |
| oss.embedder.provider | "openai" | Embedding provider ("openai", "ollama", etc.) |
| oss.vectorStore.provider | "memory" | Vector store ("memory", "qdrant", "chroma", etc.) |
| oss.llm.provider | "openai" | LLM provider ("openai", "anthropic", "ollama", etc.) |
| oss.historyDbPath | | SQLite path for memory edit history |

Start building agents that actually remember

OpenClaw agents forget because memory is treated as a suggestion, not a requirement. Facts may or may not be saved. Memory may or may not be searched. Context may disappear at any time due to compaction. When all of that is left to the LLM, forgetting is the expected outcome.

@mem0/openclaw-mem0 changes this by enforcing memory capture and recall at the system layer rather than leaving it to the prompt. Memory is captured outside the agent session. Relevant memory is reintroduced on every turn. Restarts do not matter. Long conversations do not matter. The agent reasons with the same facts every time.

You do not need to rewrite prompts or change how your agent works. You only replace the memory layer.

If you are building agents that run across sessions, handle real user preferences, or are expected to behave consistently over time, persistent memory is not optional. It is the foundation.

The simplest next step is to install the plugin, restart your agent, and watch it stop guessing and start remembering.

FAQs

Do I need to change my agent prompts to use @mem0/openclaw-mem0?

No. The plugin works at the memory layer. Your prompts and agent logic stay the same.

Does this replace OpenClaw's built-in memory tools?

Yes. The plugin replaces the default memory behavior with persistent memory backed by Mem0.
