Cross-tool memory — same memory in Claude Code, Codex, Cursor
Point all three clients at the same memory MCP. Discuss tradeoffs of local-stdio vs SaaS for cross-device sharing.
You finished Recipe 4.1 — Claude Code has memory. Now you open Codex to do a quick task and Codex has no memory. Open Cursor for a code review, Cursor has no memory either. Same machine, same person, three different brains. That is the failure mode this recipe fixes.
Point all three clients at the same memory MCP server. Use the same email for the OAuth login. Now what you save in Claude is searchable in Codex one minute later, and what Cursor learns about your codebase is visible from Claude on your phone.
Step 1: Decide local stdio vs SaaS
This decision is bigger than it looks because it determines the cross-tool story.
- Local stdio (local-memory-mcp or self-hosted mcp-nex) — runs as a subprocess of each client. The data lives on your machine. Each client launches its own subprocess, but they all read and write the same file backend. Works on one device. Does NOT sync to other devices unless you put the storage directory on iCloud / Syncthing / Dropbox.
- SaaS (memory.studiomeyer.io is the canonical option; others exist) — runs as an HTTP MCP server. Same URL works from any device. Same OAuth account = same data. Cross-device by design.
For one-machine setups either works. For "I open Claude on Mac, Codex on Windows, Cursor on Linux", SaaS is the only path that does not require network drive mounts.
The rest of this recipe assumes SaaS for the cross-device story. If you go local, the install commands change but the principle is identical.
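If you go local but still want a crude cross-device story, the trick mentioned above is pointing the storage directory at a synced folder. The shape of the client config might look like this — a sketch only: the local-memory-mcp package name comes from this recipe, but the --storage-dir flag and exact arguments are assumptions, so check the server's own docs for the real option names:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "local-memory-mcp", "--storage-dir", "/Users/you/Dropbox/mcp-memory"]
    }
  }
}
```

Be aware that file-sync services can race if two machines write at the same time; SaaS avoids that class of problem entirely.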
Step 2: Connect Claude Code
claude mcp add --transport http memory https://memory.studiomeyer.io/mcp
Restart Claude Code. Type /mcp and you should see "memory" in the list. The first tool call opens a browser for OAuth — magic link in your email, click it, done.
If the browser does not auto-open, the magic-link URL is printed in the Claude Code log. On Mac the log is at ~/Library/Logs/Claude/mcp-*.log.
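If you need to fish the link out of the log manually, a grep along these lines works. The log path is the one named above; the URL pattern is an assumption about how the link is printed, so loosen it if nothing matches:

```shell
# Pull the most recent https:// URL out of the Claude Code MCP logs
# (macOS path; adjust for your platform).
grep -hoE 'https://[^"[:space:]]+' ~/Library/Logs/Claude/mcp-*.log 2>/dev/null | tail -1
```

Paste whatever it prints into a browser and the magic-link flow continues from there.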
Step 3: Connect Claude Desktop
Settings → Developer → Edit Config. The file is claude_desktop_config.json.
{
  "mcpServers": {
    "memory": {
      "url": "https://memory.studiomeyer.io/mcp",
      "type": "http"
    }
  }
}
Save, completely quit Claude Desktop, restart. First tool call → magic link → done.
For older Claude Desktop builds that do not yet support "type": "http", fall back to the mcp-remote stdio bridge:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://memory.studiomeyer.io/mcp"
      ]
    }
  }
}
Step 4: Connect Codex
Codex has its own config file at ~/.codex/config.toml. Edit it:
[mcp_servers.memory]
url = "https://memory.studiomeyer.io/mcp"
type = "http"
Restart Codex (Desktop app or VS Code Extension). First tool call → magic link.
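A quick sanity check that the entry actually landed in the file, using the path from above:

```shell
# Show the memory server block from Codex's config, if present.
# No output means the section is missing or named differently.
grep -A2 '^\[mcp_servers\.memory\]' ~/.codex/config.toml
```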
Important: use the same email you used for Claude. OAuth lookup is by email, so same email = same tenant = same data. Different email = a brand new empty memory tenant — your Claude entries will not appear.
Known port collision: if you run Codex Desktop and the Codex VS Code Extension in parallel, they fight over the OAuth callback port (1455). Workaround: completely quit VS Code, finish the magic-link login in the Desktop app, then restart VS Code — the extension reuses the existing session.
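To see whether something is already holding the callback port before you start the login, a bash-only probe works (port number from above; /dev/tcp is a bash feature, so this will not run under plain sh):

```shell
# Try to open a TCP connection to the OAuth callback port.
# A successful connect means some process is already listening there.
port=1455
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port busy: quit the other Codex client first"
else
  echo "port $port free"
fi
```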
Step 5: Connect Cursor
Settings → Features → Model Context Protocol. Add:
{
  "mcpServers": {
    "memory": {
      "url": "https://memory.studiomeyer.io/mcp",
      "type": "http"
    }
  }
}
Same magic-link flow. Use the same email.
Step 6: Verify
Run aiguide_validate_step from Claude Code. The validator confirms a memory MCP is registered.
The real verification is the cross-tool round-trip:
In Claude:
Save to memory: I work in TypeScript strict, vitest for tests, Docker for deploys.
Wait one minute. In Codex:
What do you know about my preferences?
Codex calls nex_search, returns the entry you just saved in Claude. That round-trip is the proof. Memory is portable.
In Cursor, same test in a code-related context:
Save to memory: For ProjectAcme, the deploy script lives in scripts/deploy.sh and runs prisma generate first.
Then back in Claude:
How do I deploy ProjectAcme?
Claude finds the entry that Cursor wrote. Same data, three clients.
You can also confirm the registration from the shell:
claude mcp list 2>&1 | grep -iE "memory|nex|local-memory|mem0" | head -5
Step 7: Standing rule for session start
To close the loop, add a rule to your global CLAUDE.md (and via the symlink, AGENTS.md):
## Memory hygiene
Every session, after greeting:
1. Call nex_session_start to load my context.
2. Call nex_proactive to surface stale items, open decisions, knowledge gaps.
Then we begin.
Every session end:
1. Call nex_summarize to write a short session log.
2. Call nex_session_end to close cleanly.
That makes the memory system load and persist around every session in every client. With the symlink to AGENTS.md, Codex picks up the same rule on its end.
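If you have not created that symlink yet, it is a one-liner. The paths here are assumptions — adjust them to wherever your global CLAUDE.md actually lives and wherever Codex reads AGENTS.md from:

```shell
# Make AGENTS.md point at the same file Claude reads, so one edit
# updates both clients. -s = symbolic link, -f = replace if it exists.
ln -sf ~/.claude/CLAUDE.md ~/.codex/AGENTS.md
```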
What you have now
A memory layer that survives client switches, machine switches, and even tool switches between Claude / Codex / Cursor. Nothing else in the AI tooling world ships this out of the box — Anthropic's own Claude Memory only works on claude.ai and Claude Desktop Web, not on Claude Code, not on the API, not in Cursor or Codex. Building this yourself with MCP is the only way to get true cross-tool memory in 2026.
The strategic upside: once memory is portable, the tools become interchangeable. If OpenAI doubles Codex prices tomorrow, you switch to Claude Code and your data comes with you. If Anthropic ships a feature you do not like in 6 months, you go to Cursor — same memory, no migration. Memory is the layer that makes vendor lock-in optional.