When creating a custom evaluator that connects to an on-premise LiteLLM proxy, you may encounter the error "The api_key client option must be set" even if your proxy doesn't require authentication.
LangSmith's evaluator configuration validates that an API key is present before allowing evaluator creation. The error appears as:
Failed to save evaluator: Evaluator failed validation:
{"detail":"Failed to validate chain: 400: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"}
This validation happens even when connecting to custom LiteLLM proxies that ignore authorization headers, and it differs from how the Prompt Engineering section handles authentication.
Solution
Set a Workspace OpenAI API key in your LangSmith workspace settings before creating the evaluator:
1. Navigate to workspace settings in LangSmith.
2. Go to the API keys or secrets section.
3. Add an OpenAI API key (this can be a dummy/placeholder value like sk-dummy-key if your proxy ignores the authorization header).
4. Return to the evaluator creation screen.
5. Configure your evaluator with the same LiteLLM proxy settings used in Prompt Engineering.
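To confirm the proxy side behaves as expected, you can hit its OpenAI-compatible chat-completions endpoint directly with the same dummy key. The proxy address and model alias below are hypothetical; substitute your deployment's values:

```python
import json
import urllib.request

# Hypothetical on-premise LiteLLM proxy endpoint; adjust to your deployment.
PROXY_BASE = "http://litellm.internal:4000/v1"

# Placeholder key: the proxy ignores the Authorization header, but the
# OpenAI-compatible client layer insists that some key be present.
DUMMY_KEY = "sk-dummy-key"

payload = json.dumps({
    "model": "gpt-4o",  # whatever model alias your proxy routes
    "messages": [{"role": "user", "content": "ping"}],
}).encode()

req = urllib.request.Request(
    f"{PROXY_BASE}/chat/completions",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {DUMMY_KEY}",  # satisfies validation only
    },
)

# urllib.request.urlopen(req) would send the request; it is not called here
# because the proxy address above is illustrative.
print(req.get_header("Authorization"))
```

If the proxy answers normally with this header, the evaluator configured with the same settings should work once the workspace key is in place.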
The evaluator will now pass validation, and your on-premise proxy can continue to ignore the authorization header as configured.
Why This Happens
The Prompt Engineering and Evaluator sections handle API key validation differently:
Prompt Engineering: Allows bypassing authorization headers by passing a dummy key directly in the configuration
Evaluators: Require a workspace-level OpenAI API key to be set before creation, validating its presence even for custom proxies
This workspace-level validation ensures consistent credential management across all evaluators while still supporting custom proxy configurations that may ignore the actual key value.
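On the proxy side, ignoring client keys typically just means running without a master key configured. A minimal LiteLLM proxy config sketch (model names and hosts are illustrative):

```yaml
# Hypothetical config.yaml for a LiteLLM proxy; adjust names to your setup.
model_list:
  - model_name: gpt-4o              # alias the evaluator will request
    litellm_params:
      model: ollama/llama3          # actual backend behind the proxy
      api_base: http://ollama.internal:11434

# No master_key under general_settings: the proxy accepts requests
# regardless of the Authorization header the client sends, so the
# workspace-level dummy key passes through harmlessly.
```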
Relevant docs: