Context
Users often need to migrate their data out of LangSmith for analysis, backup, or a transition between environments. A migration can include traces, datasets, and experiments. The supported method today is bulk export, which delivers the data in a format that can be stored or analyzed externally.
This guide covers the available migration paths, setup instructions, and common troubleshooting steps.
Answer
To migrate your LangSmith data, follow the steps below.
Migration Context
Refer to the LangSmith data migration tool for scripts that help export datasets, experiments, and traces.
Important notes:
- For traces, use the bulk export feature (available on the Plus and Enterprise tiers).
- Re-importing traces into LangSmith is not currently supported.
- Exported data can be stored in S3 and analyzed externally using tools of your choice.
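For datasets, the export scripts boil down to pulling examples through the SDK and writing them to a portable format. The sketch below is a minimal, hypothetical version of that pattern: it assumes the langsmith Python SDK's `Client.list_examples` API and a JSONL output file, and is not the official migration tool.

```python
import json


def to_jsonl(records):
    """Serialize a list of dicts to JSON Lines for external storage."""
    return "\n".join(json.dumps(r, default=str) for r in records)


def export_dataset(client, dataset_name, path):
    """Dump one dataset's examples to a JSONL file (hypothetical helper)."""
    # Assumes the langsmith SDK's Client.list_examples(dataset_name=...) API.
    examples = client.list_examples(dataset_name=dataset_name)
    records = [
        {"id": str(e.id), "inputs": e.inputs, "outputs": e.outputs}
        for e in examples
    ]
    with open(path, "w") as f:
        f.write(to_jsonl(records))


# Usage (requires a LANGSMITH_API_KEY in the environment):
#   from langsmith import Client
#   export_dataset(Client(), "my-dataset", "my-dataset.jsonl")
```

The JSONL layout keeps each example on its own line, which most external analysis tools (DuckDB, pandas, jq) can read directly.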
Setting Up Bulk Export
Bulk export currently supports Amazon S3 as the destination.
Follow the bulk export setup guide.
If you want to filter your exports, see the trace query language documentation.
Checklist for setup:
- Ensure your S3 bucket permissions are configured:
  - The IAM role/user has PutObject permissions.
  - The bucket policy allows writes from LangSmith.
- Confirm the bucket configuration:
  - The bucket name and path are correct.
  - The endpoint URL format is valid (common error: extra "s" in amazonaws.com).
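The endpoint-format item above can be checked before saving the destination. The helper below is a hypothetical pre-flight validator (not part of LangSmith) that flags the "extra s" typo and other obviously malformed S3 endpoint URLs:

```python
import re


def validate_s3_endpoint(url):
    """Return a list of problems found in an S3 endpoint URL (empty if OK)."""
    problems = []
    if not url.startswith("https://"):
        problems.append("endpoint should use https://")
    # Isolate the hostname from the URL.
    host = url.split("//", 1)[-1].split("/", 1)[0]
    if "amazonawss.com" in host or "amazonaws.coms" in host:
        problems.append('extra "s" in amazonaws.com')
    elif host != "s3.amazonaws.com" and not re.match(
        r"^s3[.-][a-z0-9-]+\.amazonaws\.com$", host
    ):
        problems.append("host does not look like a standard S3 endpoint")
    return problems
```

Running it on each candidate endpoint before configuring the export catches the typo class of failures early, without waiting for an export job to error out.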
Initial Troubleshooting
If your migration export is not completing:
- Verify that the IAM role or user has the correct S3 permissions.
- Check that your bucket policy explicitly allows LangSmith to write objects.
- Confirm that the bucket name and path are accurate.
- Review job completion logs for error messages.
- If exports show "Completed" but no files appear in S3, check for credential-related errors:
  - Look for messages such as "The AWS Access Key Id you provided does not exist in our records" or "The blob store credentials provided are not valid" in the run details.
  - If the credentials are invalid, rotate your AWS access keys and update the bulk export destination configuration.
See the LangSmith Data Export Debugging Guide for more details.
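If you monitor export runs programmatically, the credential check above can be automated by scanning the run-detail text for the known error strings. This is a hypothetical helper, not a LangSmith API:

```python
# Known credential error messages quoted in the troubleshooting steps above.
CREDENTIAL_ERRORS = (
    "The AWS Access Key Id you provided does not exist in our records",
    "The blob store credentials provided are not valid",
)


def needs_key_rotation(run_details: str) -> bool:
    """True if the run details contain a known credential error message."""
    return any(msg in run_details for msg in CREDENTIAL_ERRORS)
```

A run flagged by this check is the "Completed but no files in S3" case: rotate the AWS access keys and update the destination configuration, then re-run the export.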
Logging & Errors
Successful job logs will show entries like:

    @tenant_id:<tenant_id> service:saq-export-queue
    @bulk_export_run_id:<run_id> status:warn

Common error example:

    OverflowError in fastparquet/writer.py

This usually occurs during parquet file writing. Retry after confirming the bucket configuration and permissions.
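If you re-submit failed export jobs from a script rather than the UI, transient errors like the parquet writer failure above are a natural fit for a retry with backoff. A minimal, generic sketch (the delays and attempt count are illustrative, not LangSmith defaults):

```python
import time


def retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying on any exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** i)
```

Wrap whatever call re-triggers the export (for example, your own submit function) in `retry(...)`; persistent failures still surface after the final attempt, so misconfiguration is not silently swallowed.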
Common Issues
- "No partitions to export, setting to Completed"
  → No matching data was found for the applied filters.
- Permission errors
  → Typically caused by missing IAM role permissions or a misconfigured bucket policy.
- Exports marked "Completed" but no files in S3
  → Usually caused by invalid or expired AWS credentials. Check the run details for credential error messages and rotate the access keys if needed.
- Traceback in fastparquet
  → Internal parquet writer error; retry after verifying the configuration.