Context
When setting up tracing projects programmatically as part of a deployment script, you may need to create LLM-as-judge evaluators via API calls since this functionality is not yet available in the SDK. This allows you to automate the creation of evaluators that reference hub prompts for evaluation purposes.
Answer
To create LLM-as-judge evaluators programmatically, send a POST request to the LangSmith API with a payload that includes both the hub prompt reference and the model configuration.
Here's the complete API call structure:
```python
import requests

response = requests.post(
    url='https://api.smith.langchain.com/api/v1/runs/rules',
    json={
        'display_name': 'Your Evaluator Name',
        'session_id': 'your-session-id',  # ID of the tracing project the rule applies to
        'sampling_rate': 0.3,  # fraction of runs the evaluator will score
        'evaluators': [
            {
                'structured': {
                    'hub_ref': 'your_prompt:latest',  # hub prompt reference
                    'model': {  # model config in LangChain's serialization format
                        'lc': 1,
                        'type': 'constructor',
                        'id': ['langchain', 'chat_models', 'openai', 'ChatOpenAI'],
                        'name': 'ChatOpenAI',
                        'kwargs': {
                            'model_name': 'gpt-4.1-mini',
                            'temperature': 0,
                            'openai_api_key': {
                                'lc': 1,
                                'type': 'secret',
                                'id': ['OPENAI_API_KEY'],
                            },
                        },
                    },
                },
            },
        ],
    },
    headers={
        "x-api-key": your_api_key,
        "x-tenant-id": your_workspace_id,
    },
)
```

For Azure OpenAI users:
If you're using Azure OpenAI, replace the model configuration with:
```python
'model': {
    'lc': 1,
    'type': 'constructor',
    'id': ['langchain', 'chat_models', 'azure_openai', 'AzureChatOpenAI'],
    'name': 'AzureChatOpenAI',
    'kwargs': {
        'azure_deployment': 'your-deployment-name',
        'azure_endpoint': 'https://your-resource.openai.azure.com/',
        'api_version': '2024-08-01-preview',
        'temperature': 0,
        'openai_api_key': {
            'lc': 1,
            'type': 'secret',
            'id': ['AZURE_OPENAI_API_KEY'],
        },
    },
}
```

Key requirements:
- The prompt must be a `StructuredPrompt` type.
- You must include both the `hub_ref` and `model` objects in the `structured` section.
- The model configuration follows LangChain's serialization format.
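In a deployment script it can help to check these requirements before posting. The sketch below is a hypothetical validation helper, not part of the LangSmith SDK or API; the function name and error messages are illustrative:

```python
def validate_evaluator(evaluator: dict) -> list[str]:
    """Return a list of problems with an evaluator entry; empty means OK.

    Checks the requirements above: the entry must carry a 'structured'
    section containing both 'hub_ref' and a serialized 'model'.
    """
    problems = []
    structured = evaluator.get('structured')
    if not isinstance(structured, dict):
        return ["missing 'structured' section"]
    if 'hub_ref' not in structured:
        problems.append("missing 'hub_ref' (e.g. 'your_prompt:latest')")
    model = structured.get('model')
    if not isinstance(model, dict):
        problems.append("missing 'model' configuration")
    elif model.get('lc') != 1 or model.get('type') != 'constructor':
        # LangChain's serialization format marks models as lc-v1 constructors
        problems.append("'model' is not in LangChain serialization format")
    return problems


# A well-formed entry passes; one without a model does not.
ok = {'structured': {'hub_ref': 'your_prompt:latest',
                     'model': {'lc': 1, 'type': 'constructor',
                               'id': ['langchain', 'chat_models', 'openai', 'ChatOpenAI'],
                               'name': 'ChatOpenAI', 'kwargs': {}}}}
bad = {'structured': {'hub_ref': 'your_prompt:latest'}}
print(validate_evaluator(ok))   # → []
print(validate_evaluator(bad))  # → ["missing 'model' configuration"]
```

Running this against each entry in the `evaluators` list before the POST surfaces payload mistakes locally instead of as opaque API errors.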
For additional API usage guidance, chat.langchain.com can also help generate correct API queries.