
How to Search Your Claude Conversations (the Four Levels)

From the in-app search at claude.ai to jq against your full export — four escalating tools for finding what Claude said three months ago, and the moment each one runs out.

TL;DR

Level 1: in-app search in the claude.ai sidebar — fast, works for the last few weeks, weak on Artifact bodies.
Level 2: ripgrep against the export's conversations.json after a flatten step — every message you've ever sent, including the inside of every Artifact.
Level 3: jq queries — structured filtering by date, model, project, sender, conversation.
Level 4: decision extraction — when you've stopped looking for words and started looking for outcomes.

Why search isn't built-in (well)

Anthropic's product is the chat and the Artifact, not the archive. Sidebar search exists at claude.ai, and Project views have their own scoped search, but neither is designed for "give me every conversation last quarter where I weighed Postgres against MongoDB." For real retrieval, you have to take the data local. The good news: the Claude export is complete and structurally simpler than ChatGPT's — a flat chat_messages array per conversation, no DAG, no edit branches. The four levels below escalate from zero-effort to engineering-effort, in that order.
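For orientation, here is a trimmed sketch of the shape every recipe below assumes. The field names are the ones the recipes actually touch; the real export carries more fields per record, and the values here are purely illustrative:

```json
[
  {
    "uuid": "…",
    "name": "Postgres vs MongoDB for billing",
    "created_at": "2026-01-14T09:12:03Z",
    "updated_at": "2026-01-14T10:40:11Z",
    "project_uuid": "abc-…",
    "chat_messages": [
      { "sender": "human",     "created_at": "…", "text": "Should billing stay on Postgres…" },
      { "sender": "assistant", "created_at": "…", "text": "…<antartifact …>…</antartifact>…" }
    ]
  }
]
```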

Level 1 — In-app search at claude.ai

Click the magnifying-glass icon at the top of the sidebar, or focus the conversation list and start typing. It indexes conversation titles and message bodies for recent chats. Inside a Project, the project view has its own scoped search that limits hits to that project's conversations. Best for: "the chat with Claude about pricing last week, where is it?" Worst for: anything more than 30 days old, anything inside an Artifact body, or any structured query (date range, model, sender).

Two known limits: (1) Artifact bodies index unreliably. The <antartifact> wrapper gets stripped before indexing, but heavily formatted Artifacts (long code, multi-section markdown) often surface only when you search for the Artifact title. (2) The index lags. Conversations from the last hour may not be searchable yet. If a query you're sure should hit returns nothing, that's the signal to drop to level 2 — don't keep refining the query; the index doesn't have it.

Level 2 — ripgrep against the export

The Claude export ships conversations.json as a flat JSON array. The fastest "find any text I or Claude ever said" search is to flatten it once into newline-separated records and ripgrep that:

# one-time flatten: every message body on its own line
jq -r '.[] | .chat_messages[] | "\(.created_at)\t\(.sender)\t\((.text // "") | gsub("\n"; " "))"' \
   conversations.json > messages.tsv

# then ripgrep
rg -i 'postgres.*mongo' messages.tsv

Pros: case-insensitive, real regex, finds matches inside Artifact bodies (the <antartifact> wrapper survives the flatten and shows up in the output as a visual marker — useful for narrowing). Cons: you lose the conversation grouping in the output, so you'll need a second pass to find the surrounding context. For literal-string lookups, ripgrep is the fastest path; for anything structural, drop to level 3.
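That second pass doesn't have to be manual. A minimal Python sketch (stdlib only, assuming the flat export shape described above) that takes a regex and prints each matching message with its neighbors for context — function and field handling are illustrative, not an official tool:

```python
import json
import re
import sys

def show_context(path, pattern, radius=1):
    """Print each matching message with `radius` messages of context around it."""
    rx = re.compile(pattern, re.IGNORECASE)
    conversations = json.load(open(path, encoding="utf-8"))
    for conv in conversations:
        msgs = conv.get("chat_messages", [])
        for i, msg in enumerate(msgs):
            if rx.search(msg.get("text") or ""):
                # Header: which conversation, and when the hit happened.
                print(f"=== {conv.get('name')} ({msg.get('created_at')}) ===")
                for ctx in msgs[max(0, i - radius): i + radius + 1]:
                    body = (ctx.get("text") or "").replace("\n", " ")
                    print(f"[{ctx.get('sender')}] {body[:200]}")
                print()

if __name__ == "__main__" and len(sys.argv) > 1:
    show_context("conversations.json", sys.argv[1])
```

Run it as `python show_context.py 'postgres.*mongo'` next to your export; bump `radius` when the rationale spans more turns.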

Level 3 — jq against conversations.json

When you need to filter by date, sender, model, or project, drop to jq. The shape is documented at Claude conversation export format; the recipes that come up most often:

# every conversation title, sorted by recency
jq -r '.[] | "\(.updated_at) \(.name)"' conversations.json | sort -r

# every human message containing "postgres" (any case)
jq -r '.[] | .chat_messages[]
       | select(.sender=="human")
       | select(.text|test("postgres";"i"))
       | .text' conversations.json

# every conversation in a specific Project
jq -r '.[] | select(.project_uuid=="abc-...") | .name' conversations.json

# every Artifact body across the entire export
jq -r '.[] | .chat_messages[] | (.text // "")
       | scan("<antartifact[^>]*>([\\s\\S]*?)</antartifact>"; "i")[]' \
   conversations.json

# conversations created between two ISO dates
jq -r '.[] | select(.created_at >= "2026-01-01" and .created_at < "2026-04-01")
       | .name' conversations.json

Slow on multi-hundred-megabyte exports — expect 2–10 seconds per query — but precise and composable. Pipe to fzf for interactive narrowing. The "every Artifact body" recipe is the one most level-2 ripgrep users wish they'd known about earlier — it gives you the structured payload, not the prose around it.
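If jq's scan syntax feels opaque, the same Artifact-body extraction works as a short Python sketch (stdlib only; the regex mirrors the jq recipe above, matched case-insensitively in case the export capitalizes the wrapper tag — an assumption, not a documented guarantee):

```python
import json
import re

# Body between the opening and closing wrapper tags.
# re.DOTALL lets the body span newlines; re.IGNORECASE guards against
# capitalization differences in the tag name.
ARTIFACT_RE = re.compile(r"<antartifact[^>]*>(.*?)</antartifact>",
                         re.IGNORECASE | re.DOTALL)

def artifact_bodies(path):
    """Yield every Artifact body in the export, in conversation order."""
    for conv in json.load(open(path, encoding="utf-8")):
        for msg in conv.get("chat_messages", []):
            yield from ARTIFACT_RE.findall(msg.get("text") or "")
```

Pipe the output through your level-2 ripgrep habits, or dump each body to a file per match for diffing.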

Level 4 — Decision extraction

Levels 1–3 all answer the same shape of question: "find me text matching X." There's a class of question they can't answer well: "what architecture decisions did I make in Q1, and what trade-offs did Claude help me weigh?" The answer requires understanding which messages contained a decision, which contained a clarification question, and which were Claude's brainstorming. No keyword search resolves that — half the decisions don't contain the word "decision," and the rationale is often spread across three turns.

This is the level WhyChose's open-source extractor targets. It walks conversations.json, sniffs the format (Claude vs ChatGPT, automatically), applies the heuristic patterns documented in patterns.md, and emits a structured decision record per match — chose / rejected / rationale / source-conversation-uuid / model. Artifact bodies are first-class: a decision often is the Artifact, so the extractor surfaces them as the canonical source of truth and uses the prose around them as the rationale. For a quarterly review of "what did we actually decide and why?" extraction is the right tool — search will exhaust you long before you've found everything.
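To see why extraction emits records rather than matches, here is a deliberately crude toy — not WhyChose's extractor, and the phrases below are illustrative stand-ins, not the patterns from patterns.md. It scans human messages for decision-shaped language and emits one structured record per hit:

```python
import json
import re

# Illustrative decision phrases only — a real extractor uses a much
# richer pattern set plus multi-turn context, which this toy ignores.
DECISION_RE = re.compile(
    r"\b(?:decided to|went with|we(?:'ll| will) use|chose)\s+([A-Za-z0-9_.-]+)",
    re.IGNORECASE,
)

def naive_decisions(path):
    """Emit one crude record per decision-looking human message."""
    records = []
    for conv in json.load(open(path, encoding="utf-8")):
        for msg in conv.get("chat_messages", []):
            if msg.get("sender") != "human":
                continue
            for m in DECISION_RE.finditer(msg.get("text") or ""):
                records.append({
                    "chose": m.group(1),
                    "conversation": conv.get("name"),
                    "uuid": conv.get("uuid"),
                    "when": msg.get("created_at"),
                })
    return records
```

Even this toy returns something you can sort, count, and audit — which is the point of the level. Its misses (decisions phrased without keywords, rationale spread over turns) are exactly what the real heuristics exist to cover.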


Picking the right level

Question shape | Right level
"That chat from Tuesday about the React onboarding component…" | Level 1 (sidebar)
"Did Claude ever give me a regex for matching ULIDs?" | Level 2 (ripgrep on the flatten)
"Every conversation in my Project X, sorted by last activity" | Level 3 (jq)
"Every Artifact I shipped last quarter, with what was in the conversation around it" | Level 4 (extraction)

How this differs from searching ChatGPT

The mental model is the same — four escalating levels — but the moving parts are different in two places. (1) No DAG to flatten. Claude's chat_messages array is already in render order, so level-2 ripgrep gets the right text without a leaf-walk. ChatGPT requires you to walk the mapping tree first; Claude doesn't. (2) Artifacts are inline, not separate. Every Artifact body lives inside an assistant message's text field wrapped in <antartifact> tags. So level-3 Artifact extraction is just a regex; you don't need a second file or a separate API call. The trade-off: ChatGPT preserves edit branches in the export, Claude overwrites in place — if you need to see what you originally typed before editing it, ChatGPT's export has it and Claude's doesn't.

For practitioners running both: the ChatGPT-side companion walks the same four levels for the OpenAI export.

Related questions

Does Claude.ai have a search bar?

Yes — the sidebar has a search input at the top of the conversation list, and Projects have their own scoped search. Both index titles and recent message bodies, but Artifact contents index unreliably (the wrapper is stripped, so heavily formatted Artifacts often only surface by title). Drop to a local search of the export for full coverage.

Why is searching Artifacts so flaky?

Artifacts live inside the assistant's text field, wrapped in <antartifact> pseudo-XML. The in-app search strips that wrapper before indexing, but the indexer doesn't always pick up the inner text in long Artifacts. To search Artifact contents reliably, run ripgrep against the flattened export — the wrapper tag survives the flatten and acts as a visual marker for "this match is inside an Artifact, not the prose around it."

Can I search across Projects?

The in-app search at claude.ai is global by default — it returns hits from every conversation regardless of Project ownership. To restrict to one Project, open the Project first and use its scoped search. To filter by Project from the export, use jq '.[] | select(.project_uuid == "abc-...")'.

When should I escalate from search to extraction?

When the question shifts from "find me a phrase" to "find me an outcome." Search returns matching messages; extraction returns structured records you can audit. Use search to recall a single chat, extraction to audit a quarter of work.

Further reading