- `OutputMode::Json` - generate structured output via the LLM's native JSON generation
- `OutputMode::JsonSchema` - use the LLM's strict JSON Schema mode to enforce the provided JSON Schema
- `OutputMode::Tools` - use the tool calling API to get the LLM to follow the provided schema
- `OutputMode::MdJson` - use prompting to generate structured output; a fallback for models that do not support JSON generation or tool calling

Additionally, you can use the `Text` and `Unrestricted` modes to get the LLM to generate text output without any structured data extraction. These modes are not useful for the `StructuredOutput` class (as it is focused on structured output generation), but they can be used with the `Inference` class.
- `OutputMode::Text` - generate text output
- `OutputMode::Unrestricted` - generate unrestricted output based on the inputs provided by the user (with no enforcement of a specific output format)

The output mode can be passed to the `StructuredOutput::create()` method. The default mode is `OutputMode::Tools`, which leverages OpenAI-style tool calls.
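As a minimal sketch of selecting a mode (the namespaces, the `messages`/`responseModel`/`mode` parameter names, and the `get()` call are assumptions based on the instructor-php style API and may differ in your version):

```php
<?php
// Assumed namespaces - verify against your installed version of the library.
use Cognesy\Instructor\StructuredOutput;
use Cognesy\Instructor\Enums\OutputMode;

// A plain PHP class with typed public properties serves as the response model.
class User {
    public string $name;
    public int $age;
}

// Pass the desired mode when creating the request. OutputMode::Tools is the
// default, so the mode argument is only needed when overriding it.
$user = (new StructuredOutput)->create(
    messages: 'Jason is 25 years old',
    responseModel: User::class,
    mode: OutputMode::Json,
)->get();
```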
OutputMode::Tools
OutputMode::Json
OutputMode::JsonSchema
In contrast to `OutputMode::Json`, which may not always manage to meet the schema requirements, `OutputMode::JsonSchema` is strict and guarantees the response to be a valid JSON object that matches the provided schema. It is currently supported only by newer OpenAI models (check their docs for details).
NOTE: OpenAI's JsonSchema mode does not support optional properties. If you need optional properties in your schema, use `OutputMode::Tools` or `OutputMode::Json` instead.
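To illustrate the constraint: under OpenAI's strict schema rules, every property must be listed in `required` and `additionalProperties` must be `false`, so a nullable type is the usual workaround for a value that may be absent (the schema below is a hypothetical example, not part of the library):

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "middle_name": { "type": ["string", "null"] }
  },
  "required": ["name", "middle_name"],
  "additionalProperties": false
}
```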
See more about JSON Schema mode in OpenAI's Structured Outputs documentation.
OutputMode::MdJson