ChatGPT Team Export — Differences From Plus, Workspace Admin Flow, and the Compliance API
ChatGPT Team and Enterprise plans don't export the way Plus does. Workspace admins (not individual members) run the export, the ZIP is workspace-scoped not account-scoped, and the audit log is the artifact a compliance auditor will actually ask for. This page documents the four ways Team exports differ from Plus, the admin-managed flow end to end, the structure of audit-log.jsonl, and the Compliance API path that Enterprise plans get for programmatic export.
TL;DR
Team-tier exports are workspace-scoped, not account-scoped — every member's workspace conversations land in one ZIP. The per-member Settings → Data Controls → Export Data button is usually disabled by the admin (it bypasses the audit trail the workspace pays for), so export requests route through Workspace Settings → Data Controls → Export Workspace Data instead. The four Team-vs-Plus differences are scope, audit log, members + shared GPTs, and Project scoping. Enterprise plans add the Compliance API (/v1/compliance/exports) for programmatic export with SIEM ingest. The audit log at audit-log.jsonl is the single most valuable file for compliance audits — it answers "who had what access to which Project at which time" in a way per-conversation timestamps alone cannot. The conversion script at the bottom of this page rebuilds a per-member, per-Project archive shape that matches the symmetric Claude Team workspace export recipe.
Why Team exports are a separate flow
Three structural reasons. First, the audit-trail bypass risk. The whole point of a Team or Enterprise plan is workspace-level governance — admin permissions, shared GPTs, audit logging, SOC 2 attestation. If every member could click Settings → Export Data and walk away with their own conversations, the audit log would lose its primary value (the workspace can't track exfiltration its admins didn't authorize). So Team plans default the per-member export off, and route every legitimate request through the admin-managed flow that writes export.requested into the audit log itself.
Second, missing workspace context. A per-member personal export attributes everything to the account holder and shows their personal view of every Project. Workspace exports carry workspace-level metadata — the workspace name, member roster with roles, per-Project membership, audit log, shared GPT configurations — that no individual member's personal view contains. Without that, a multi-month archive can't answer questions like "who else had access to the rationale when this decision was made" or "what was the Project's custom-instructions context six months ago."
Third, per-member-not-per-workspace shape. Even if a workspace admin chose to run twenty separate per-member exports (one per teammate), the resulting twenty ZIPs lack the workspace-level metadata above and require manual stitching to produce a workspace view. The workspace-managed export does the stitching server-side and produces one archive whose shape the WhyChose extractor and most downstream tooling expects.
The four Team-vs-Plus differences
Concrete answers to "what's different" — for the workspace admin who's run a Plus export before and is about to run their first Team export.
| Aspect | ChatGPT Plus export | ChatGPT Team / Enterprise export |
|---|---|---|
| Scope | Account-scoped — only the requesting account's conversations. | Workspace-scoped — every member's workspace conversations in one ZIP. |
| Audit log | Not present — no admin layer to audit. | audit-log.jsonl at top level, one JSONL event per admin action since workspace creation. |
| Members & GPTs | Not present — no workspace. | members.json (roster + roles) and shared-gpts/ directory with workspace-scoped Custom GPT configurations. |
| Project scoping | Includes the requesting member's view of Projects. | Workspace-shared Project boundaries with per-Project member roster. |
Each of these is the answer to a question a compliance auditor will eventually ask. Plus exports answer "what did this account do" — useful for personal archival. Team exports answer "what did the workspace do, who participated, and who had access to what" — necessary for any environment with a real audit surface.
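The scope difference shows up immediately on disk. A minimal sketch that tells the two ZIP shapes apart after extraction, assuming the file layout documented later on this page (audit-log.jsonl and members.json exist only in workspace exports; a top-level conversations.json is the Plus shape) — the helper name is ours, not part of any tool:

```shell
# Hypothetical helper: classify an extracted export directory by probing
# for the workspace-only files (audit-log.jsonl, members.json).
classify_export() {
  dir=$1
  if [ -f "$dir/audit-log.jsonl" ] && [ -f "$dir/members.json" ]; then
    echo "workspace"
  elif [ -f "$dir/conversations.json" ]; then
    echo "personal"
  else
    echo "unknown"
  fi
}

# Demo against a throwaway fixture directory
tmp=$(mktemp -d)
touch "$tmp/audit-log.jsonl" "$tmp/members.json"
classify_export "$tmp"    # prints: workspace
rm -rf "$tmp"
```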
Three request paths (admin-managed)
- In-app at Workspace Settings → Data Controls → Export Workspace Data (Workspace Owner or Admin role only). The default and recommended path for Team plans. Generates a downloadable ZIP within minutes to hours depending on workspace size; the download link is delivered to the admin's account email and expires 24 hours after generation. The export request itself is logged as export.requested in the audit log; export completion (and the requesting admin's identity) is logged as export.completed. Most workspaces find this path sufficient.
- DSAR via privacy@openai.com with the workspace ID and admin verification, for the rare case of a workspace whose admin-portal access is broken (departed admin, billing-frozen plan, custom contract with self-serve export contractually disabled). Response time is typically 5 business days for a full export and 2 business days for a scoped export (specific date range or specific member). Use this as the fallback when path 1 isn't available; it is not the default.
- Compliance API at /v1/compliance/exports (Enterprise plan only). Programmatic export-and-delete endpoint with fully audit-trailed access. Documented in the OpenAI platform docs; covered in the next section because the API shape is genuinely different from the manual flows.
The workspace export ZIP — what's inside
A real Team export unpacks to roughly this layout (paths are stable across observed exports; presence of optional files depends on workspace usage):
chatgpt-workspace-export-<date>.zip
├── workspace.json # workspace metadata (id, name, plan, created_at)
├── members.json # roster: id, email, role, added_at, removed_at, last_active
├── audit-log.jsonl # one JSONL event per admin action since workspace creation
├── projects/
│ └── <project_uuid>/
│ ├── project.json # name, description, custom_instructions, created_by, file_manifest
│ ├── members.json # per-Project member roster
│ └── conversations/
│ └── <conv_id>.json # full conversation with DAG mapping, project_id set
├── unscoped-conversations/
│ └── <conv_id>.json # workspace conversations not pinned to a Project
├── shared-gpts/
│ └── <gpt_id>.json # workspace-scoped Custom GPT configurations
└── export-manifest.json # generated_at, requested_by_admin, file_inventory, version
Per-conversation files inside projects/<uuid>/conversations/ carry two top-level additions absent from Plus exports: created_by_user_id (which member actually authored the conversation, not just which account ran the export) and participants[] (every member who contributed turns to the conversation, useful for shared Projects with multiple contributors). The DAG-shaped mapping tree documented in the conversations.json format reference is otherwise identical.
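The two added fields are easy to pull with jq. A sketch against an invented fixture conversation (only the field names — created_by_user_id, participants — come from the layout above); in a real export, run the same filter over projects/*/conversations/*.json:

```shell
# Fixture conversation file, invented for illustration
cat > conv-example.json <<'EOF'
{"title":"schema debate","created_by_user_id":"u_sara","participants":["u_sara","u_alex"]}
EOF

# Author and contributor list, tab-separated
jq -r '[.created_by_user_id, (.participants | join(","))] | @tsv' conv-example.json
# prints: u_sara <tab> u_sara,u_alex
```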
Sample audit-log.jsonl entries
The audit log is the differentiator. A typical hour of workspace activity looks like:
{"event":"member.invited","ts":"2026-04-15T09:12:08Z","actor":"u_admin1","subject":"sara@acme.com","role":"member"}
{"event":"member.accepted","ts":"2026-04-15T09:14:22Z","actor":"u_sara","workspace_id":"w_abc123"}
{"event":"project.created","ts":"2026-04-15T10:08:01Z","actor":"u_sara","project_id":"p_xyz","name":"Q2 Mobile Rewrite"}
{"event":"project.member_added","ts":"2026-04-15T10:09:15Z","actor":"u_sara","project_id":"p_xyz","added":"u_alex"}
{"event":"project.kb_uploaded","ts":"2026-04-15T10:14:33Z","actor":"u_alex","project_id":"p_xyz","filename":"current-arch-v1.pdf","size":482113}
{"event":"gpt.shared","ts":"2026-04-15T10:31:07Z","actor":"u_admin1","gpt_id":"g-arch-review","scope":"workspace"}
{"event":"member.role_changed","ts":"2026-04-15T11:02:44Z","actor":"u_admin1","subject":"u_sara","from":"member","to":"admin"}
{"event":"export.requested","ts":"2026-04-15T11:45:00Z","actor":"u_admin1","scope":"workspace"}
{"event":"export.completed","ts":"2026-04-15T11:51:33Z","actor":"u_admin1","scope":"workspace","size":"2.3GB"}
Three things to know. (1) The schema is stable but undocumented. OpenAI doesn't publish the full event taxonomy; most fields are obvious from real exports. Treat the schema as best-effort — additions show up across releases (e.g. the gpt.shared scope field was added when workspace-scoped GPTs went GA). (2) Timestamps are ISO 8601 UTC. Don't try to parse the older epoch-seconds format; that's only used inside per-conversation timestamps. (3) The actor field is always a user ID, not an email — you have to join against members.json to get human-readable names. A short jq join handles this cleanly.
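The actor-to-email join can be sketched as follows, using the members.json and audit-log.jsonl shapes shown on this page (the fixture data below is invented for illustration):

```shell
# Fixture roster and audit log, invented for illustration
cat > members.json <<'EOF'
{"members":[{"id":"u_admin1","email":"pat@acme.com","role":"admin"},
            {"id":"u_sara","email":"sara@acme.com","role":"member"}]}
EOF
cat > audit-log.jsonl <<'EOF'
{"event":"export.requested","ts":"2026-04-15T11:45:00Z","actor":"u_admin1","scope":"workspace"}
EOF

# Build an id->email table once, then rewrite each event's actor field
jq -r --slurpfile m members.json '
  ($m[0].members | map({key: .id, value: .email}) | from_entries) as $who
  | "\(.ts)\t\($who[.actor] // .actor)\t\(.event)"
' audit-log.jsonl
# prints: 2026-04-15T11:45:00Z <tab> pat@acme.com <tab> export.requested
```

Unknown actor IDs (e.g. tombstoned members missing from the roster) fall through to the raw ID via the `// .actor` alternative.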
The Compliance API (Enterprise only)
Enterprise plans get programmatic export via three endpoints under /v1/compliance/. The shape:
# 1. Trigger an export (returns an export_id)
curl https://api.openai.com/v1/compliance/exports \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{ "scope": "workspace", "format": "zip" }'
# 2. Poll until ready
curl https://api.openai.com/v1/compliance/exports/$EXPORT_ID \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY"
# Response includes status: pending|in_progress|complete|failed
# When complete, includes signed download URL valid for 24h
# 3. Optional — programmatic deletion (GDPR right-to-be-forgotten)
curl https://api.openai.com/v1/compliance/deletions \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-d '{ "subject_id": "u_alex", "scope": "all_conversations" }'
Three reasons to use the Compliance API instead of the admin portal. (a) Audit-trailed programmatic access — every API call is logged with the admin key's identity, so SIEM ingestion happens without polluting the audit log with manual click events. (b) Operational scale — workspaces producing weekly or monthly compliance dumps shouldn't require a human in the admin portal each time. (c) Bidirectional integration — programmatic deletion satisfies GDPR/CCPA right-to-be-forgotten requests without manual portal action, and the response carries a deletion confirmation event the workspace can store as evidence of completion.
The Compliance API is gated behind the Enterprise contract; Team plans without Enterprise get the admin portal only. If your workspace is Team-tier and the manual flow is operationally painful, the upgrade path to Enterprise has the Compliance API as a primary justification — the manual flow doesn't scale past about a hundred members because the export ZIP gets large enough to make the 24-hour download window matter.
The conversion script — workspace-shape to per-member archive
This script rebuilds a per-member, per-Project view from the workspace ZIP — useful when handing a member their own conversations on departure (DSAR/right-of-access) without exposing other members' chats, or when running the WhyChose extractor scoped to one member's history.
#!/usr/bin/env bash
# split-chatgpt-workspace.sh — rebuild per-member archive from a workspace export
# Usage: ./split-chatgpt-workspace.sh path/to/workspace-extracted/ ./out
set -euo pipefail
src=$1
out=$2
mkdir -p "$out"
# 1. Build a member lookup table (id -> email)
jq -r '.members[] | "\(.id)\t\(.email)\t\(.role)"' "$src/members.json" > "$out/.members.tsv"
# 2. Per-member directories with their conversations
while IFS=$'\t' read -r mid email role; do
mdir="$out/$email"
mkdir -p "$mdir/projects" "$mdir/unscoped" "$mdir/audit"
# README captures the member's role + who they collaborated with
{
echo "# $email"
echo
echo "Role: $role"
echo "Member ID: $mid"
echo
echo "Conversations authored by this member, plus audit-log entries where they were actor or subject."
} > "$mdir/README.md"
# Project conversations authored by this member
find "$src/projects" -name '*.json' -path '*/conversations/*' | while read -r conv; do
auth=$(jq -r '.created_by_user_id // empty' "$conv")
if [ "$auth" = "$mid" ]; then
pid=$(basename "$(dirname "$(dirname "$conv")")")
mkdir -p "$mdir/projects/$pid"
cp "$conv" "$mdir/projects/$pid/"
fi
done
# Unscoped workspace conversations
for conv in "$src"/unscoped-conversations/*.json; do
[ -e "$conv" ] || continue
auth=$(jq -r '.created_by_user_id // empty' "$conv")
[ "$auth" = "$mid" ] && cp "$conv" "$mdir/unscoped/"
done
# Audit-log slice — events where this member was actor or subject
jq -c --arg mid "$mid" 'select(.actor == $mid or .subject == $mid)' \
"$src/audit-log.jsonl" > "$mdir/audit/member-events.jsonl"
done < "$out/.members.tsv"
rm -f "$out/.members.tsv"
echo "split $(ls "$out" | wc -l) members into $out/"
The output layout intentionally mirrors the symmetric Claude Team workspace export recipe so multi-platform compliance archives use the same downstream tooling. For the WhyChose extractor specifically, the per-member projects/<pid>/ directories drop in directly — the extractor's --scope-member flag reads exactly this shape and produces a decision log scoped to one member's authored conversations.
Three sanity checks for the workspace export
- Member count matches roster. jq '.members | length' members.json should equal the count visible in Workspace Settings → Members (counting both active and removed-but-tombstoned members). An off-by-one usually means a member was removed during the export request window; the audit log shows whether their conversations were retained or deleted alongside.
- Project count matches workspace UI. ls projects/ | wc -l should equal the Project count visible to the admin in the workspace's Projects sidebar. A mismatch usually means a recently created Project hadn't yet propagated to the export builder; re-requesting after 24 hours captures it.
- Audit-log span covers the workspace lifetime. head -1 audit-log.jsonl should be a workspace.created or earliest member.invited event; tail -1 audit-log.jsonl should be the export.completed event for this export request itself. Gaps in the middle (multi-day jumps between timestamps) usually mean low-activity periods, not missing events; truncation only happens for workspaces older than three years that have hit OpenAI's audit-log retention cap (rare).
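The three checks can run as one pass over the extracted archive. A sketch under the layout documented above; the expected counts come from the workspace UI and are passed in by hand, and the function name is ours:

```shell
# check_export <extracted-dir> <expected-member-count> <expected-project-count>
check_export() {
  src=$1; expected_members=$2; expected_projects=$3
  got_members=$(jq '.members | length' "$src/members.json")
  got_projects=$(ls "$src/projects" | wc -l | tr -d ' ')
  first_event=$(head -1 "$src/audit-log.jsonl" | jq -r '.event')
  last_event=$(tail -1 "$src/audit-log.jsonl" | jq -r '.event')
  [ "$got_members" = "$expected_members" ]   || echo "member count $got_members != roster $expected_members"
  [ "$got_projects" = "$expected_projects" ] || echo "project count $got_projects != UI $expected_projects"
  [ "$last_event" = "export.completed" ]     || echo "audit log does not end with export.completed (got $last_event)"
  echo "audit log opens with: $first_event"
}
```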
Three known gaps
Things missing from Team exports despite reasonable expectations. Mitigations for each.
- Project knowledge-base file binaries. Same gap as Plus exports — the file_manifest ships filename/size/upload-date/MIME but not the bytes. Recovery is the same: pre-export download via the Projects UI is the only path; there's no bulk-download API. Document the gap in project.json's description so a future reader knows which files were on disk and which were referenced only.
- Custom GPT configurations created outside the workspace. Workspace-scoped GPTs (created via Workspace Settings → GPTs → Create) ARE in shared-gpts/. Personal GPTs that workspace members happened to use in workspace conversations are NOT — those belong to the creator's personal account and require their separate Plus/Team export. The conversation transcript still shows the GPT's outputs, but the GPT's system prompt, knowledge files, and actions are unrecoverable from the workspace export alone.
- Personal-context conversations under workspace members. When a member was on Plus before joining the workspace, their pre-Team conversations belong to their personal account and are NOT in the workspace export — regardless of whether the workspace pays for that member's seat now. This is the strongest contrarian-but-cited fact about Team exports: the workspace owns workspace conversations, not the member's history. Cross-platform parallel: the same gap exists in Claude Team workspace exports for personal-context conversations.
Cadence — when to run a workspace export
Recommended cadences vary by what the workspace is used for.
- Quarterly minimum for any Team workspace doing real work. Quarterly aligns with audit cycles, matches the WhyChose extractor's quarterly-batch model, and produces archives small enough to be operationally manageable.
- Monthly for SOC 2 / ISO 27001 / HIPAA workspaces. The compliance frameworks don't require monthly export specifically, but monthly is the cadence at which an auditor expects you to be able to produce a workspace dump on demand without scrambling.
- On-demand triggered on member departure (always — depart-day archive is the cleanest evidence of what the member had access to), major decision sequences (acquisition diligence, board review of architecture decisions), or compliance investigations.
- Programmatic via Compliance API for Enterprise workspaces with SIEM ingest. Daily or weekly cadence becomes practical when no human is in the loop; the audit-log delta becomes the operational artifact (each new export ingests only events since the last completed export).
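The audit-log delta mentioned in the last bullet can be sketched with a single jq filter: keep only events newer than the last ingested export. ISO 8601 UTC timestamps sort lexicographically, so a plain string comparison is enough (the fixture events below are invented):

```shell
# Fixture audit log, invented for illustration
cat > audit-log.jsonl <<'EOF'
{"event":"project.created","ts":"2026-04-15T10:08:01Z","actor":"u_sara"}
{"event":"export.completed","ts":"2026-04-15T11:51:33Z","actor":"u_admin1"}
{"event":"member.invited","ts":"2026-04-16T09:00:00Z","actor":"u_admin1"}
EOF

# Timestamp of the last export already ingested by the SIEM
last_ingested="2026-04-15T11:51:33Z"

# Strictly-newer events only — the delta to ingest this cycle
jq -c --arg since "$last_ingested" 'select(.ts > $since)' audit-log.jsonl
# prints only the 2026-04-16 member.invited event
```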
One anti-pattern to avoid: running per-member personal exports as a substitute for workspace exports. Per-member exports lack the workspace-level metadata, the audit log, and the per-Project member roster — and the merge cost (stitching twenty per-member ZIPs into one workspace shape) is high enough that running the admin-managed workspace export from the start is always less work.
How WhyChose fits in
The workspace export is the input layer for shared decision archaeology — every member's conversations, the access boundary metadata, and the audit log all in one archive. The WhyChose extractor reads per-Project conversation files and surfaces decision-shaped exchanges with the original chat snippet, the member who authored the call (via created_by_user_id), the Project the call lived inside, and the trade-offs considered at the time. Per-Project decision audits become a one-filter query; per-member decision-history reviews become a two-filter query; the audit log integrates as a second source so "who had access to the rationale when this decision was made" is answerable months later. The Pro tier exports decision logs to Notion / Linear / Obsidian; the Team tier ships shared decision logs scoped by Project membership — the same access boundary the workspace export carries forward, with the audit log driving the access-control history.
Related questions
Can workspace members run their own export in parallel with the admin's?
Only if the admin has explicitly enabled per-member self-serve export in Workspace Settings → Member Permissions. Most audited workspaces leave this off. When enabled, a per-member export only contains that member's authored conversations and lacks all workspace metadata (members.json, audit-log, shared-gpts, per-Project rosters).
Does the workspace export include conversations members deleted?
No — deletion is honored at the conversation level. The audit log includes the conversation.deleted event with timestamp and actor, so the deletion is itself part of the audit record, but the conversation body is unrecoverable. Workspaces that need delete-immutable retention should either disable per-member deletion via Workspace Settings (admins only), or rely on a SIEM ingesting the audit log for delete events.
How big is a typical Team workspace export?
For a 10-person workspace doing real work for six months, expect 200-800 MB compressed. For 50-person workspaces over a year, 2-8 GB is normal. Most of the weight is in conversation content; shared-gpts/ and members.json are tiny. Audit-log files at workspace scale rarely exceed 50 MB even for high-activity year-old workspaces. Plan for the export-link 24-hour expiration window when sizing your archival flow.
What if I need just one member's data (DSAR right-of-access)?
Two paths. (1) Run the workspace export and use the conversion script in this page to slice the per-member view; this gives you the workspace-context-aware archive but exports everyone's data along the way. (2) Email privacy@openai.com requesting a member-scoped DSAR; OpenAI will produce a scoped export covering only the named member's conversations + audit-log entries where they were actor or subject. Path 2 is cleaner for legal-sensitive cases (e.g. employee separation with active disputes); path 1 is operationally simpler for routine DSARs.
Does the export include conversations sent through ChatGPT API keys associated with the workspace?
No — the workspace export covers the ChatGPT product (chat.openai.com), not the API. API usage is exported separately via the platform's usage and audit log endpoints under /v1/organization/audit_logs for Enterprise plans. If your workspace uses both the chat product and the API, an honest audit needs both export streams; the conversation export covers chat conversations, the API audit log covers programmatic completions.
Further reading
- How to export your ChatGPT history — the prerequisite for Plus accounts; covers the personal export flow, what's in the ZIP, and the 72-hour download window.
- Claude Team workspace export — the symmetric Anthropic-side recipe; output layout matches so multi-platform compliance archives use the same tooling.
- ChatGPT Projects export — Project-boundary preservation in personal exports; Team exports preserve the same boundary plus the per-Project member roster.
- ChatGPT conversations.json format reference — the per-conversation schema; the same DAG mapping applies inside Team exports, with two added top-level fields (created_by_user_id, participants[]).
- ChatGPT export not working — eight failure modes — Mode 7 (Enterprise admin block) covers the workspace-context override that this page documents end to end.
- Convert your ChatGPT export to Markdown — the per-conversation Markdown conversion works identically on workspace exports; route by created_by_user_id for per-member Markdown bundles.
- How to search your ChatGPT history (the four levels) — Level 3 jq recipes work better on workspace exports because the per-Project directory structure makes member + Project filters trivial.
- Extract decisions from your ChatGPT chats — the level-4 step beyond search; the extractor reads workspace exports natively, exposing created_by_user_id and project_id as filterable columns.
- Claude export not working — eight failure modes — Mode 7 (Team workspace admin block) is the symmetric mirror of the ChatGPT Team flow, and the recovery patterns mirror it.
- The open-source extractor — reads per-Project archives natively; created_by_user_id and project_id are first-class columns in the decision log output.