Topic: claude project export
How to Export a Claude Project (system prompt, knowledge base, all conversations)
Claude Projects bundle a system prompt, custom instructions, a knowledge base, and dozens of conversations under one project_uuid. The standard export includes the bundle, but flattens it: project conversations sit in the same array as your non-project chats. Here's how to rebuild the per-project view with a short bash + jq script — plus the one piece the export deliberately omits and how to handle that gap.
TL;DR
The Claude export is one big conversations.json. Project metadata lives at the top of the file (system prompt, custom instructions, file-name list); every conversation object carries a project_uuid if it belonged to a Project, or null if it was a free chat. To get a clean per-project archive: read the metadata, write the system prompt to system-prompt.md, group conversations by project_uuid, and write each one to {project-name}/{date}-{title}.md. The script below does it in about 30 lines of bash + jq. Caveat: the knowledge-base files themselves (PDFs, .txt uploads, code attachments) are NOT in the export — only their file names. Download those manually from the Knowledge tab before exporting, or accept the gap and document it in your archive.
What's in a Claude Project (and what gets exported)
Claude's Project feature, as of early 2026, packages five things under one project_uuid:
1. A name and description — visible in the left sidebar.
2. A system prompt + custom instructions — applied to every conversation in the project as background context.
3. A knowledge base — uploaded files (PDFs, .txt, .docx, code) that Claude can reference during any conversation in the project.
4. All conversations created inside the project — each carrying the project_uuid so the binding is preserved.
5. (Pro/Max only) Custom integrations — toolset bindings, currently not exported.
The export at Settings → Privacy → Export your data includes 1, 2, 4, and the file-name list from 3 — but excludes the file bytes themselves. That's an intentional Anthropic choice: the knowledge-base files are often copyrighted material the user uploaded, and re-distributing them via export creates a compliance surface Anthropic doesn't want to own. From an archival perspective it's the gap to plan around.
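Before processing anything, it's worth a quick inventory of what the export actually holds. This is a sketch, assuming the field names shown in the next section (projects / workspaces at the top level, project_uuid on each conversation):

```shell
# Sketch: list each project and how many conversations reference it.
# Assumes the top-level keys shown in the schema below.
jq -r '
  (.projects // .workspaces // [])[] as $p
  | "\($p.name): \([.conversations[] | select(.project_uuid == $p.uuid)] | length) conversation(s)"
' conversations.json
```

If a project shows zero conversations, it existed but nothing was ever chatted inside it — still worth archiving for its system prompt.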
The export's shape
If you've already exported, your conversations.json looks roughly like this (simplified):
{
"projects": [
{
"uuid": "8f3a-...",
"name": "Mobile rewrite Q2",
"description": "RN → Flutter migration scoping",
"prompt_template": "You are helping me scope a migration from React Native to Flutter for a 200k-LOC app. Always ask clarifying questions before recommending. Code style: Effective Dart.",
"files": [
{ "file_name": "current-arch.md", "size": 4821 },
{ "file_name": "perf-baseline.csv", "size": 11290 }
],
"created_at": "2026-02-18T11:14:02Z"
},
...
],
"conversations": [
{
"uuid": "1b22-...",
"name": "Bottom-sheet navigation patterns",
"project_uuid": "8f3a-...",
"created_at": "2026-03-04T09:11:43Z",
"chat_messages": [...]
},
{
"uuid": "9a01-...",
"name": "Random debugging chat",
"project_uuid": null,
"created_at": "2026-03-05T14:22:01Z",
"chat_messages": [...]
},
...
]
}
Older exports keep the projects under a different top-level key (sometimes workspaces); newer exports nest project metadata at the same level as conversations. The script below probes both keys and falls back gracefully.
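You can check which variant your export uses before running anything; a minimal sketch assuming only the two key names mentioned above:

```shell
# Report which top-level key this export uses.
jq -r 'if .projects then "projects"
       elif .workspaces then "workspaces"
       else "no project metadata found" end' conversations.json
```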
The bash + jq script
Save it as export-claude-projects.sh. It reads conversations.json and writes one directory per project containing the system prompt, the file list, and one Markdown file per conversation:
#!/usr/bin/env bash
set -euo pipefail
mkdir -p projects
# 1. Walk every project entry; write metadata + system prompt + file list
jq -c '(.projects // .workspaces // [])[]' conversations.json |
while IFS= read -r project; do
uuid=$(echo "$project" | jq -r '.uuid')
name=$(echo "$project" | jq -r '.name' | tr ' /' '-_')
dir="projects/${name}-${uuid:0:8}"
mkdir -p "$dir"
echo "$project" | jq -r '.prompt_template // empty' > "$dir/system-prompt.md"
echo "$project" | jq -r '.description // empty' > "$dir/description.md"
echo "$project" | jq -r '.files // [] | .[].file_name' > "$dir/knowledge-base-files.txt"
done
# 2. Walk every conversation; route to its project's directory (or "unfiled")
jq -c '.conversations[]' conversations.json |
while IFS= read -r conv; do
puuid=$(echo "$conv" | jq -r '.project_uuid // "none"')
pname=$(jq -r --arg u "$puuid" '(.projects // .workspaces // [])[] | select(.uuid==$u) | .name // "unfiled"' conversations.json | tr ' /' '-_')
[[ -z "$pname" ]] && pname="unfiled"
dir="projects/${pname}-${puuid:0:8}"
mkdir -p "$dir"
title=$(echo "$conv" | jq -r '.name // .uuid' | tr ' /' '-_' | cut -c1-60)
date=$(echo "$conv" | jq -r '.created_at // "undated"' | cut -c1-10)
out="$dir/${date}-${title}.md"
{
echo "# $(echo "$conv" | jq -r '.name')"
echo "_conversation $(echo "$conv" | jq -r '.uuid') · ${date}_"
echo
echo "$conv" | jq -r '.chat_messages[] | "## " + ((.sender // "unknown")|ascii_upcase) + "\n\n" + (.text // "") + "\n"'
} > "$out"
done
What this leaves behind:
projects/
├── Mobile-rewrite-Q2-8f3a/
│ ├── system-prompt.md ← the project's persistent instructions
│ ├── description.md ← the project's tagline
│ ├── knowledge-base-files.txt ← list of uploaded files (just names!)
│ ├── 2026-03-04-Bottom-sheet-navigation-patterns.md
│ ├── 2026-03-09-Async-state-management-comparison.md
│ ├── ...
├── Pricing-experiments-2c1d/
│ ├── system-prompt.md
│ ├── ...
└── unfiled-none/ ← conversations with project_uuid=null
├── 2026-03-05-Random-debugging-chat.md
└── ...
Each project directory is now self-contained — system prompt at the top, every conversation that ran inside that context as a separate Markdown file, the knowledge-base manifest available even though the bytes aren't. The "unfiled" bucket holds your free-floating chats.
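After a run, a quick cross-check catches dropped conversations. The sketch below counts conversations per project_uuid straight from the export; the numbers should match the Markdown files in each directory (minus the system-prompt.md and description.md the script also writes):

```shell
# Count conversations per project_uuid in the raw export.
jq -r '.conversations
  | group_by(.project_uuid)
  | .[]
  | "\(.[0].project_uuid // "unfiled")\t\(length)"' conversations.json
```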
The knowledge-base gap
The most common surprise: your uploaded files are not in the export. knowledge-base-files.txt shows you what was attached, but the actual PDFs, .txt files, and code uploads have to be retrieved separately. Three options:
- Pre-export download — before you run the export, open each Project, click into the Knowledge tab, and download every file. This is the only way to get the bytes. Save them into the matching projects/{name}-{uuid}/knowledge/ directory after the script runs.
- Document the gap — if you don't need the files in the archive (just need to know they existed), let knowledge-base-files.txt stand as the manifest and add a note to system-prompt.md: "Knowledge base contained N files: [list]. Files themselves not in this archive — see {original location} or re-upload."
- Re-upload as part of restore — if you ever need to reconstitute the project, the system prompt + file names tell you which assets to re-attach. The reasoning preserved in the conversation Markdown is enough that you don't need the original files to follow what was decided; you only need them to re-run new conversations with the same context.
Option 1 is the only complete archive. Options 2 and 3 are pragmatic when the knowledge base wasn't load-bearing for the decisions made in the project. (For a typical engineering project — system prompt + a couple of architecture diagrams — option 2 is usually fine.)
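The "document the gap" option can be scripted. A sketch, assuming the directory layout the export script produces (projects/*/knowledge-base-files.txt and system-prompt.md):

```shell
# Append a knowledge-base gap note to every project directory.
# Assumes the layout written by export-claude-projects.sh.
for dir in projects/*/; do
  manifest="$dir/knowledge-base-files.txt"
  [ -s "$manifest" ] || continue   # skip projects with no uploaded files
  n=$(wc -l < "$manifest" | tr -d ' ')
  printf '\n---\nKnowledge base contained %s file(s); see knowledge-base-files.txt.\nFile bytes are NOT in this archive - download from the Project Knowledge tab.\n' "$n" >> "$dir/system-prompt.md"
done
```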
Why this matters for decision archaeology
The Project boundary IS context. A conversation about pricing inside the "Pricing experiments" Project means something different from the same conversation outside any project — the system prompt was setting the frame, the knowledge base was constraining the answer space, and the trade-offs being weighed were specific to what the project was set up to evaluate. Flatten the project boundary and you lose the frame.
This is especially load-bearing for senior engineers who use Projects the way they used to use design docs: each Project corresponds to one durable architecture initiative (mobile rewrite, billing migration, auth system swap), the system prompt encodes the constraints, the conversations work through the trade-offs. Extracting decisions from a Project-bundled conversation set is more valuable than extracting from free chats because the project name itself is a tag — every decision in the Mobile-rewrite-Q2 project shares the same architectural context, even if the individual conversations didn't repeat it.
If your team treats Claude Projects as the workspace for in-progress architecture work, the export-and-extract path is how you turn that work into a durable record at quarter-end — without the team having to remember to write ADRs alongside their thinking.
How WhyChose helps
Manually running the script gets you a structured archive — useful, but flat as text. WhyChose's open-source extractor goes a step further: it preserves project_uuid as a first-class column in the extracted decision log, so each row carries (chosen option, alternatives, rationale, conversation_uuid, project_uuid, project_name). You can filter the log by project to see "every durable decision made inside Mobile rewrite Q2" without manually grouping. Claude Artifacts are extracted alongside as the structured decision-output layer; the surrounding prose becomes the rationale field.
The hosted product adds team sharing scoped per-project (so only the mobile team sees the mobile-rewrite log), Notion / Linear export with project as the folder structure, and a "promote to ADR" action that drops a Nygard-formatted file into your doc/decisions/ directory with the project context preserved as a header link. Same MIT-licensed parser under the hood; everything else is convenience.
Related questions
Does the standard Claude export include Projects?
Yes — conversations.json includes every conversation that lived inside a Project, tagged with its project_uuid. What it doesn't do is preserve the project boundary as a folder or separate file: project conversations sit in the same flat array as your non-project chats. To rebuild the per-project view (system prompt + instructions + all conversations + name), group by project_uuid yourself. The jq script above does exactly that.
What about the knowledge base — are uploaded files in the export?
No. The export excludes uploaded files (PDFs, .txt, .docx, code attachments); you only get file names referenced in the project metadata, not the bytes. Recover the actual content by downloading each file from the Project's Knowledge tab in the Claude UI before the export — there's no API path. Document the gap explicitly in your archive ("knowledge base contained N files, listed but not included") so future-you doesn't think the export is complete.
Where does the project's system prompt live in the JSON?
Inside the project metadata block at the top of the export, keyed by project_uuid. Each project entry carries name, description, prompt_template (the system prompt + custom instructions Claude sees on every turn), and a list of file references for the knowledge base. The script reads this block first, writes prompt_template to system-prompt.md inside the per-project directory, then walks the conversations array filtered by project_uuid.
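As a standalone sketch (using the example project name from the JSON sample earlier), the same extraction is one jq call:

```shell
# Extract a single project's system prompt from the metadata block by name.
jq -r --arg n "Mobile rewrite Q2" \
  '(.projects // .workspaces // [])[]
   | select(.name == $n)
   | .prompt_template // empty' conversations.json > system-prompt.md
```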
Can I tell which conversations were part of which project after the export?
Yes — every conversation object carries its project_uuid (or null for free-floating chats outside any project). Group by that field and you can reconstruct the project's full conversation history. WhyChose's extractor preserves project_uuid as a column in the decision log so each extracted decision links back to the project it was made in.
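In practice that group-by is a single jq filter. A sketch (PROJECT_UUID is a placeholder for a real uuid from the projects block):

```shell
# List every conversation title that belonged to a given project.
jq -r --arg u "PROJECT_UUID" \
  '.conversations[] | select(.project_uuid == $u) | .name' conversations.json
```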
Further reading
- How to export your Claude conversations — produce conversations.json first; this page is the prerequisite step.
- Claude conversation export format reference — full schema, including the project_uuid field and the chat_messages shape.
- How to export your Claude Artifacts — Artifacts ride inside chat_messages; once you've grouped conversations by project, the same artifact-extractor runs per-project.
- How to search your Claude conversations (the four levels) — once the per-project archive exists, all four search levels apply per directory.
- How to extract decisions from your AI chats — the level-4 step beyond search; works on Claude Project bundles too, with project_uuid preserved.
- How to extract decisions from your Claude conversations — Claude-specific extraction recipe; preserves project_uuid as a first-class column so per-project decision queries are one filter, not a manual group-by.
- The open-source extractor — MIT, runs locally, preserves project_uuid in the decision log output.