Importing ChatGPT and Claude.ai history
Export your existing conversations and pipe them into nex_import. Years of context become searchable memory in one batch.
You already have years of context locked inside ChatGPT, Claude.ai, Gemini, Copilot, or Perplexity. Most of it is forgotten — those tools' built-in memory only surfaces a fraction of what is in your conversation history. Importing the full export into a memory MCP server makes all of it searchable.
The nex_import tool handles five platforms with auto-detection: ChatGPT, Claude (claude.ai), Gemini, Copilot, Perplexity. It uses Haiku for fact extraction (turns long conversations into searchable learnings) and the same gatekeeper that handles deduplication on regular writes (Recipe 4.1).
This is a feature you do not get anywhere else — Anthropic's own Claude Memory does not import ChatGPT history, and ChatGPT does not import Claude.
Step 1: Decide what to import
Three paths, increasing in scope:
- Just the most recent few months. Quickest, highest signal. The past three months are what you actually remember and use.
- Last year. Larger import, more deduplication work. Still high-value because you forgot half of it.
- Everything. Years of conversations. Powerful but expensive — every conversation goes through Haiku for extraction. Reserve for the case where you really want all your past context discoverable.
Most setups start with the recent few months and decide whether to go further once they see what comes back.
Step 2: Export from ChatGPT
ChatGPT exports take a few hours to generate.
- Settings → Data Controls → Export Data.
- Click "Export". OpenAI emails you a download link within a few hours.
- Download the zip. It contains conversations.json — that is the file you want.
Same flow for Claude.ai: Settings → Privacy → Export Data. You get conversations.json in the zip.
For Gemini: Google Takeout → select "Gemini Apps Activity". For Copilot: GitHub Settings → Export Account Data. For Perplexity: Profile → Settings → Export.
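Before running the import, it can be worth sanity-checking the export file. Here is a minimal sketch of structure-based platform detection, assuming ChatGPT exports are a JSON array of conversations with a "mapping" message graph and Claude.ai exports carry a "chat_messages" list — the field names are illustrative guesses, not nex_import's actual detection logic:

```python
import json
from pathlib import Path

def detect_platform(path: str) -> str:
    """Guess the export's source platform from its top-level JSON shape.
    Field names here are assumptions, not nex_import's real internals."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    if isinstance(data, list) and data:
        first = data[0]
        if "mapping" in first:        # ChatGPT stores each conversation as a message graph
            return "chatgpt"
        if "chat_messages" in first:  # Claude.ai stores a flat message list
            return "claude"
    return "unknown"
```

If this returns "unknown" on a file you expected to import, the export may be truncated or in a different format than you assumed.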
Step 3: Run the import
Drop the export file somewhere accessible (e.g. ~/Downloads/chatgpt-export/conversations.json).
In Claude Code:
Import my ChatGPT history from ~/Downloads/chatgpt-export/conversations.json
Claude calls nex_import with the file path. Auto-detection identifies the platform from the file structure. Three things happen:
- Conversations are parsed.
- Each conversation is run through Haiku to extract atomic facts (what you decided, what you learned, who you talked about).
- Each extracted fact goes through the gatekeeper — ADDED, UPDATED, or NOOP'd against what is already in memory.
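In outline, that pipeline looks something like the sketch below. The function names and verdict strings are assumptions drawn from the description above, not the server's actual code: extract_facts stands in for the Haiku call, and gatekeep stands in for the deduplication gatekeeper.

```python
def run_import(conversations, extract_facts, gatekeep):
    """Parse -> extract -> gatekeep, tallying what happened to each fact.
    gatekeep returns one of 'ADDED', 'UPDATED', or 'NOOP' (deduplicated).
    All names here are illustrative assumptions."""
    stats = {"added": 0, "updated": 0, "deduplicated": 0}
    for conv in conversations:
        for fact in extract_facts(conv):
            verdict = gatekeep(fact)
            if verdict == "ADDED":
                stats["added"] += 1
            elif verdict == "UPDATED":
                stats["updated"] += 1
            else:  # NOOP: already known, skipped
                stats["deduplicated"] += 1
    return stats
```

The tally this returns is what the "1,203 added, 312 updated, 332 deduplicated" style summary is reporting.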
The output reports a count: e.g. "Imported 412 conversations → extracted 1,847 facts → 1,203 added, 312 updated, 332 deduplicated."
For SaaS memory (memory.studiomeyer.io), there is also a REST endpoint: POST /api/import with the file as multipart form data. Useful when the file is too large for a Claude Code tool call to hold.
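A sketch of calling that endpoint from Python. The "file" form-field name and the bearer-token auth header are assumptions — check the API docs for the actual contract:

```python
def upload_export(path, api_key, base_url="https://memory.studiomeyer.io", session=None):
    """POST an export file to /api/import as multipart form data.
    Form-field name and auth scheme are assumptions, not the documented API."""
    if session is None:
        import requests  # third-party: pip install requests
        session = requests.Session()
    with open(path, "rb") as fh:
        resp = session.post(
            f"{base_url}/api/import",
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": fh},
            timeout=600,  # large exports take a while to process server-side
        )
    resp.raise_for_status()
    return resp.json()
```

Streaming the file from disk this way avoids loading a multi-hundred-megabyte export into memory, which is the same reason the REST path beats a tool call for large files.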
Step 4: Verify
Run aiguide_validate_step. The validator confirms a memory MCP is active. After the import finishes, count the new entries:
How many memory entries were added today?
Claude calls nex_recall with a date filter and returns the count. To double-check that a memory MCP server is registered at all:
claude mcp list 2>&1 | grep -iE "memory|nex|local-memory|mem0" | head -5
Step 5: Search what you imported
The point of import is retrieval. Try things you remember discussing months ago:
Search my memory for that side-project idea I had in February about a recipe builder
The fuzzy + semantic search finds it even if the words you use now differ from the ones you used then. Recent imports rank slightly lower than fresh writes (because the original conversation date is the timestamp), but they are fully searchable.
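Why imported entries rank a little lower can be seen in a toy recency-weighted score. The formula, weights, and half-life below are illustrative assumptions, not the server's actual ranking math:

```python
import time

def rank_score(semantic_sim, entry_ts, now=None, half_life_days=180.0):
    """Blend semantic similarity with an exponential recency decay.
    Imported entries keep their original conversation date, so at equal
    similarity they score below a fresh write. Weights are assumptions."""
    now = time.time() if now is None else now
    age_days = (now - entry_ts) / 86400.0
    recency = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return 0.8 * semantic_sim + 0.2 * recency
```

Because the similarity term dominates, an old-but-relevant import still beats a fresh-but-off-topic write — which is the behavior described above.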
For decisions specifically:
Find decisions in my imported history about hosting providers
nex_decide entries are extracted from your conversations when you said something that sounded like a decision ("I'm going to go with..."). They are searchable separately from flat learnings, which makes "what did I decide about X" queries clean.
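As a toy illustration of what "sounded like a decision" means — the real extraction is done by Haiku, not by pattern matching, and these regexes are made up for the example:

```python
import re

# Illustrative decision-shaped phrasings; not the extractor's actual rules.
DECISION_PATTERNS = [
    r"\bi'?m going to go with\b",
    r"\bi'?ve decided (?:to|on)\b",
    r"\blet'?s go with\b",
]

def looks_like_decision(sentence):
    """Cheap heuristic for decision-shaped phrasing (illustrative only)."""
    return any(re.search(p, sentence, re.IGNORECASE) for p in DECISION_PATTERNS)
```

A model-based extractor catches paraphrases these patterns would miss ("after comparing both, Hetzner it is"), which is why the import uses Haiku rather than regexes.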
Step 6: Cleanup after import
Two things worth doing once the import finishes:
- Tag old entries by project. Recipe 4.2 covers tagging. The importer cannot guess which project each conversation belonged to. Use search to find clusters and re-tag the important ones with their project slug.
- Run a contradictions scan. Two years of conversations contain contradictions ("I prefer Postgres" → six months later "I prefer SQLite"). The nex_contradictions scan_learnings tool finds them. You decide which one is current.
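Conceptually, a contradictions scan groups statements about the same subject and flags subjects whose recorded value changed over time. The structured subject/value/date fields below are an assumption for the sketch; the real tool works over free-text learnings via the model:

```python
from collections import defaultdict

def contradiction_candidates(learnings):
    """Flag subjects where the recorded value changed over time.
    The dict shape is an illustrative assumption, not the tool's schema."""
    by_subject = defaultdict(list)
    for entry in learnings:
        by_subject[entry["subject"]].append(entry)
    flagged = {}
    for subject, entries in by_subject.items():
        if len({e["value"] for e in entries}) > 1:
            # Sort oldest-first so the last entry is the most recent claim
            flagged[subject] = sorted(entries, key=lambda e: e["date"])
    return flagged
```

The output surfaces both sides of each conflict in date order; deciding which value is still current stays a human call.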
Cost and time
For SaaS memory with the gatekeeper enabled:
- Time: roughly 1-2 minutes per 100 conversations on a normal connection. Most of the time is Haiku extraction, not network.
- Cost: Haiku is cheap. A two-year ChatGPT export with ~2000 conversations costs less than one dollar in extraction calls. Import is one of the highest ROI operations you do because the data is permanent — pay once, query forever.
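A back-of-envelope version of that cost math. The per-token prices and average conversation lengths below are assumptions for illustration — check current Haiku pricing before relying on the numbers:

```python
def estimate_extraction_cost(num_conversations,
                             avg_input_tokens=1000,
                             avg_output_tokens=150,
                             usd_per_mtok_in=0.25,
                             usd_per_mtok_out=1.25):
    """Rough extraction cost for an import run.
    All defaults are assumptions, not quoted rates."""
    input_usd = num_conversations * avg_input_tokens / 1e6 * usd_per_mtok_in
    output_usd = num_conversations * avg_output_tokens / 1e6 * usd_per_mtok_out
    return input_usd + output_usd
```

Under these assumed averages, a ~2000-conversation export comes out under a dollar; doubling the average conversation length roughly doubles the input side of the bill.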
For local-memory-mcp without gatekeeper, import is free but you lose deduplication — duplicates from across the years will show up in search until you manually clean them up. SaaS is the easier path here.
After the import
Two-thirds of the value comes from the next time you ask Claude something you do not consciously remember discussing. "Did I ever look into framework X?" — turns out yes, six months ago, and you decided against it for a specific reason. That answer would not have surfaced without the import.
The other third comes from being able to truly leave a tool. If you ever switch from ChatGPT to Claude or vice versa, your context comes with you instead of staying locked behind that vendor's UI. Memory is portable. The vendors are interchangeable.