What’s happening
When you use LangChain’s @tool decorator with nested Pydantic schemas, the LLM may fail to follow the structure correctly.
A common failure mode is flattening nested fields — for example, placing vendor_guid directly under params instead of inside params.filters.
This behavior is non-deterministic. It may work several times in a row and then suddenly break on a later call.
The fix
Enable strict schema enforcement when binding tools:
```python
llm_with_tools = llm.bind_tools([your_tool], strict=True)
```
Setting strict=True tells OpenAI to enforce your schema at the API level, so the model is constrained to return exactly the structure you defined.
The catch
Strict mode does not work with Pydantic fields that have default values.
For example, this will break strict mode:
```python
from pydantic import BaseModel, Field

class Filters(BaseModel):
    vendor_guid: list[str] | None = Field(default=None, description="...")
```
You must remove the default:
```python
from pydantic import BaseModel, Field

class Filters(BaseModel):
    vendor_guid: list[str] | None = Field(description="...")
```
The field can still be nullable (| None), but it cannot have a default value. When the field is not needed, the model will explicitly return null.
Why it works this way
Strict mode requires the model to return every field in the schema.
Default values imply that a field can be omitted, which conflicts with this requirement. As a result, OpenAI disallows default values when strict mode is enabled.
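For reference, this is roughly the shape strict mode expects the generated function schema to take (tool name `search_vendors` is hypothetical): every property is listed under `required`, every object carries `additionalProperties: false`, and optionality is expressed as a nullable type rather than an omittable field.

```python
# Hypothetical strict-mode tool schema for the nested Filters example:
strict_tool = {
    "type": "function",
    "function": {
        "name": "search_vendors",   # hypothetical tool name
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "filters": {
                    "type": "object",
                    "properties": {
                        # Nullable instead of optional: type includes "null"
                        "vendor_guid": {
                            "type": ["array", "null"],
                            "items": {"type": "string"},
                        },
                    },
                    "required": ["vendor_guid"],   # every property listed
                    "additionalProperties": False,
                },
            },
            "required": ["filters"],
            "additionalProperties": False,
        },
    },
}
```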
What to check
- Add `strict=True` to your `bind_tools()` call
- Remove all `default=` arguments from your Pydantic models
- Check nested models as well (`Filters`, `AggregationDict`, and similar schemas), not just the top-level model
https://docs.langchain.com/oss/python/integrations/chat/openai#strict-mode