This section covers how to create requests to LLM providers using the Polyglot library. It includes examples of basic requests, handling multiple messages, and using different message formats.
## Creating Requests
The `with()` method is the main way to set the parameters of requests to LLM providers. It accepts several parameters:
```php
public function with(
    string|array $messages = [],         // The messages to send to the LLM
    string $model = '',                   // The model to use (overrides the default)
    array $tools = [],                    // Tools/functions the model can use
    string|array $toolChoice = [],        // Tool selection preference
    array $responseFormat = [],           // Response format specification
    array $options = [],                  // Additional request options
    OutputMode $mode = OutputMode::Text   // Output mode (Text, JSON, etc.)
) : self
```
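Beyond `messages`, the other parameters let you tune the request in a single call. Below is a minimal sketch of a fuller `with()` call; the model name and option keys (`temperature`, `max_tokens`) are illustrative OpenAI-style values, and the `OutputMode` import path is an assumption, so adjust it to the version you have installed.

```php
<?php
use Cognesy\Polyglot\Inference\Inference;
use Cognesy\Polyglot\Inference\Enums\OutputMode; // assumed import path - check your version

$response = (new Inference())
    ->with(
        messages: 'List three facts about France as JSON.',
        model: 'gpt-4o-mini',                                 // illustrative model name
        responseFormat: ['type' => 'json_object'],            // OpenAI-style response format spec
        options: ['temperature' => 0.2, 'max_tokens' => 256], // provider-specific request options
        mode: OutputMode::Json,                               // request JSON output mode
    )
    ->create()
    ->get();

echo "Response: $response";
```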
### Basic Request Example
Here’s a simple example of creating a request:
```php
<?php
use Cognesy\Polyglot\Inference\Inference;

$inference = new Inference();

$response = $inference
    ->with(
        messages: 'What is the capital of France?'
    )
    ->create() // create a pending inference
    ->get();   // get the data - this is where the request is executed

echo "Response: $response";
```
### Request with Multiple Messages
For chat-based interactions, you can pass an array of messages:
```php
<?php
use Cognesy\Polyglot\Inference\Inference;

$inference = new Inference();

$response = $inference
    ->withMessages([
        ['role' => 'system', 'content' => 'You are a helpful assistant who provides concise answers.'],
        ['role' => 'user', 'content' => 'What is the capital of France?'],
        ['role' => 'assistant', 'content' => 'Paris.'],
        ['role' => 'user', 'content' => 'And what about Germany?']
    ])
    ->get();

echo "Response: $response";
```
Polyglot supports different message formats depending on the provider:
- String: a simple string is converted to a single user message (see the equivalence sketch below)
- Array of messages: each message should have a `role` and a `content` field
- Multimodal content: some providers support images in messages
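The string form is just shorthand for a single user message, so the two calls below should produce equivalent requests (a sketch based on the conversion rule above):

```php
<?php
use Cognesy\Polyglot\Inference\Inference;

// Shorthand: a plain string becomes a single user message...
$a = (new Inference())
    ->with(messages: 'What is the capital of France?')
    ->create()
    ->get();

// ...which is equivalent to passing the message array explicitly.
$b = (new Inference())
    ->withMessages([
        ['role' => 'user', 'content' => 'What is the capital of France?'],
    ])
    ->get();
```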
Example with image (for providers that support it):
```php
<?php
use Cognesy\Polyglot\Inference\Inference;

$imageData = base64_encode(file_get_contents('image.jpg'));

$messages = [
    [
        'role' => 'user',
        'content' => [
            [
                'type' => 'text',
                'text' => 'What\'s in this image?'
            ],
            [
                'type' => 'image_url',
                'image_url' => [
                    'url' => "data:image/jpeg;base64,$imageData"
                ]
            ]
        ]
    ]
];

$response = (new Inference())
    ->using('openai')
    ->withModel('gpt-4o') // use a multimodal model
    ->with(messages: $messages)
    ->toText();
```