Configure the Client
Reference documentation on preparing and configuring the Mindee client.
Requirements
Before proceeding you'll need to have one of the official Mindee client libraries installed.
You'll also need an API key (see the Integration Overview) and one or several Models configured.
Overview
Before sending any files to the Mindee servers for processing, you'll need to initialize your client and set inference options.
These settings will determine how your files are sent, including any extra options, and how you want to process the result.
Initialize the Mindee Client
This should be the first step in your code. It will determine which organization is used to make the calls.
You should reuse the same client instance for all calls of the same organization.
The client instance is thread-safe where applicable.
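As a rough illustration, here is a minimal Python sketch of that pattern; the process_document helper and the file names are hypothetical placeholders, and the actual enqueue calls are covered in later sections:
from concurrent.futures import ThreadPoolExecutor
from mindee import ClientV2

# Build the client once, at application start-up, and share it everywhere.
# With no argument, the API key is read from the environment (see below).
mindee_client = ClientV2()

def process_document(path: str) -> None:
    # Hypothetical helper: use the shared mindee_client here to enqueue the file
    # (see the Send a File or URL section for the actual calls).
    ...

# The same client instance can serve several worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(process_document, ["invoice_1.pdf", "invoice_2.pdf"])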
First import the needed classes:
from mindee import ClientV2, InferenceParameters
For the API key, you can pass it directly to the client. This is useful for quick testing.
api_key = "MY_API_KEY"
mindee_client = ClientV2(api_key)
Instead of passing the key directly, you can also set the following environment variable:
MINDEE_V2_API_KEY
This is recommended for production use.
This way, there is no need to pass the api_key when initializing the client.
mindee_client = ClientV2()
First import the needed classes. We recommend using TypeScript.
const mindee = require("mindee");
// for TS or modules:
// import * as mindee from "mindee";
For the API key, you can pass it directly to the client. This is useful for quick testing.
const apiKey = "MY_API_KEY";
const mindeeClient = new mindee.ClientV2({ apiKey: apiKey });
Instead of passing the key directly, you can also set the following environment variable:
MINDEE_V2_API_KEY
This is recommended for production use.
This way, there is no need to pass the apiKey argument when initializing the client.
const mindeeClient = new mindee.ClientV2();
First import the needed classes:
use Mindee\ClientV2;
use Mindee\Input\InferenceParameters;
use Mindee\Error\MindeeException;
For the API key, you can pass it directly to the client. This is useful for quick testing.
$apiKey = "MY_API_KEY";
$mindeeClient = new ClientV2($apiKey);
Instead of passing the key directly, you can also set the following environment variable:
MINDEE_V2_API_KEY
This is recommended for production use.
This way, there is no need to pass the $apiKey when initializing the client.
$mindeeClient = new ClientV2();
First import the Mindee package:
require 'mindee'
For the API key, you can pass it directly to the client. This is useful for quick testing.
api_key = 'MY_API_KEY'
mindee_client = Mindee::ClientV2.new(api_key: api_key)
Instead of passing the key directly, you can also set the following environment variable:
MINDEE_V2_API_KEY
This is recommended for production use.
This way, there is no need to pass the api_key when initializing the client.
mindee_client = Mindee::ClientV2.new
First import the needed classes:
import com.mindee.MindeeClientV2;
import com.mindee.InferenceParameters;
For the API key, you can pass it directly to the client. This is useful for quick testing.
String apiKey = "MY_API_KEY";
MindeeClientV2 mindeeClient = new MindeeClientV2(apiKey);
Instead of passing the key directly, you can also set the following environment variable:
MINDEE_V2_API_KEY
This is recommended for production use.
This way, there is no need to pass the apiKey argument when initializing the client.
MindeeClientV2 mindeeClient = new MindeeClientV2();
First add the required namespaces:
using Mindee;
using Mindee.Input;
For the API key, you can pass it directly to the client. This is useful for quick testing.
string apiKey = "MY_API_KEY";
MindeeClientV2 mindeeClient = new MindeeClientV2(apiKey);
Instead of passing the key directly, you can also set the following environment variable:
MindeeV2__ApiKey
This is recommended for production use.
This way, there is no need to pass the apiKey argument when initializing the client.
MindeeClientV2 mindeeClient = new MindeeClientV2();
Set Inference Parameters
Inference parameters control:
which model to use
server-side processing options
how the results will be returned to you
Use an Alias
You can set an alias for linking the sent file to your own system.
For example, you could use an internal PO number, or your database ID.
Aliases are not unique in our database; you can use the same alias multiple times.
Only the model_id is required.
inference_params = InferenceParameters(
# ID of the model, required.
model_id="MY_MODEL_ID",
# Use an alias to link the file to your own DB.
# If empty, no alias will be used.
alias="MY_ALIAS",
# ... any other options ...
)
Only the modelId is required.
const inferenceParams = {
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Use an alias to link the file to your own DB.
// If empty, no alias will be used.
alias: "MY_ALIAS",
// ... any other options ...
};
Only the modelId is required.
$inferenceParams = new InferenceParameters(
// ID of the model, required.
"MY_MODEL_ID",
// Use an alias to link the file to your own DB.
// If empty, no alias will be used.
alias: "MY_ALIAS",
// ... any other options ...
);
Only the model_id is required.
inference_params = Mindee::Input::InferenceParameters.new(
# ID of the model, required.
'MY_MODEL_ID',
# Use an alias to link the file to your own DB.
# If empty, no alias will be used.
file_alias: 'MY_ALIAS',
# ... any other options ...
)
Only the modelId is required.
InferenceParameters params = InferenceParameters
// ID of the model, required.
.builder("MY_MODEL_ID")
// Use an alias to link the file to your own DB.
// If empty, no alias will be used.
.alias("MY_ALIAS")
// ... any other options ...
// complete the builder
.build();
Only the modelId is required.
var inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID"
// Use an alias to link the file to your own DB.
// If empty, no alias will be used.
, alias: "MY_ALIAS"
// ... any other options ...
);
Optional Features Configuration
Enable or disable Optional Features.
The default activation states for Optional Features are set on the platform. Any values set here will override the defaults.
Leave empty or null to use the default values.
For example: if the Polygon feature is enabled on the platform but polygon is explicitly set to false in the parameters, then the Polygon feature will not be enabled for that API call.
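As an illustration, here is a minimal Python sketch of that exact situation, using the same InferenceParameters class shown below to force the Polygon feature off for one call:
from mindee import InferenceParameters

inference_params = InferenceParameters(
    # ID of the model, required.
    model_id="MY_MODEL_ID",
    # Explicitly disable Polygon for this call, even if it is enabled on the platform.
    polygon=False,
    # Features left as None keep the defaults configured on the platform.
)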
Only the model_id is required.
inference_params = InferenceParameters(
# ID of the model, required.
model_id="MY_MODEL_ID",
# Optional Features: set to `True` or `False` to override defaults
# Enhance extraction accuracy with Retrieval-Augmented Generation.
rag=None,
# Extract the full text content from the document as strings.
raw_text=None,
# Calculate bounding box polygons for all fields.
polygon=None,
# Boost the precision and accuracy of all extractions.
# Calculate confidence scores for all fields.
confidence=None,
# ... any other options ...
)
Only the modelId is required.
const inferenceParams = {
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Optional Features: set to `true` or `false` to override defaults
// Enhance extraction accuracy with Retrieval-Augmented Generation.
rag: undefined,
// Extract the full text content from the document as strings.
rawText: undefined,
// Calculate bounding box polygons for all fields.
polygon: undefined,
// Boost the precision and accuracy of all extractions.
// Calculate confidence scores for all fields.
confidence: undefined,
// ... any other options ...
};
Only the modelId is required.
$inferenceParams = new InferenceParameters(
// ID of the model, required.
"MY_MODEL_ID",
// Optional Features: set to `true` or `false` to override defaults
// Enhance extraction accuracy with Retrieval-Augmented Generation.
rag: null,
// Extract the full text content from the document as strings.
rawText: null,
// Calculate bounding box polygons for all fields.
polygon: null,
// Boost the precision and accuracy of all extractions.
// Calculate confidence scores for all fields.
confidence: null,
// ... any other options ...
);
Only the model_id is required.
inference_params = Mindee::Input::InferenceParameters.new(
# ID of the model, required.
'MY_MODEL_ID',
# Options: set to `true` or `false` to override defaults
# Enhance extraction accuracy with Retrieval-Augmented Generation.
rag: nil,
# Extract the full text content from the document as strings.
raw_text: nil,
# Calculate bounding box polygons for all fields.
polygon: nil,
# Boost the precision and accuracy of all extractions.
# Calculate confidence scores for all fields.
confidence: nil,
# ... any other options ...
)
Only the modelId is required.
InferenceParameters params = InferenceParameters
// ID of the model, required.
.builder("MY_MODEL_ID")
// Optional Features: set to `true` or `false` to override defaults
// Enhance extraction accuracy with Retrieval-Augmented Generation.
.rag(null)
// Extract the full text content from the document as strings.
.rawText(null)
// Calculate bounding box polygons for all fields.
.polygon(null)
// Boost the precision and accuracy of all extractions.
// Calculate confidence scores for all fields.
.confidence(null)
// ... any other options ...
// complete the builder
.build();
Only the modelId is required.
var inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID"
// Optional Features: set to `true` or `false` to override defaults
// Enhance extraction accuracy with Retrieval-Augmented Generation.
, rag: null
// Extract the full text content from the document as strings.
, rawText: null
// Calculate bounding box polygons for all fields.
, polygon: null
// Boost the precision and accuracy of all extractions.
// Calculate confidence scores for all fields.
, confidence: null
// ... any other options ...
);
Polling Configuration
The client library will POST the request for you, and then automatically poll the API.
When polling, you really only need to set the model_id.
inference_params = InferenceParameters(model_id="MY_MODEL_ID")
You can also set the various polling parameters. However, we do not recommend setting these options unless you are encountering timeout problems.
from mindee import PollingOptions
inference_params = InferenceParameters(
# ID of the model, required.
model_id="MY_MODEL_ID",
# Set only if having timeout issues.
polling_options=PollingOptions(
# Initial delay before the first polling attempt.
initial_delay_sec=3,
# Delay between each polling attempt.
delay_sec=1.5,
# Total number of polling attempts.
max_retries=80,
),
# ... any other options ...
)
When polling, you really only need to set the modelId.
const inferenceParams = {modelId: "MY_MODEL_ID"};
You can also set the various polling parameters. However, we do not recommend setting these options unless you are encountering timeout problems.
const inferenceParams = {
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Set only if having timeout issues.
pollingOptions: {
// Initial delay before the first polling attempt.
initialDelaySec: 3.0,
// Delay between each polling attempt.
delaySec: 1.5,
// Total number of polling attempts.
maxRetries: 80
}
// ... any other options ...
};
When polling, you really only need to set the modelId.
$inferenceParams = new InferenceParameters(modelId: "MY_MODEL_ID");
You can also set the various polling parameters. However, we do not recommend setting these options unless you are encountering timeout problems.
use Mindee\Input\PollingOptions;
$inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Set only if having timeout issues.
pollingOptions: new PollingOptions(
// Initial delay before the first polling attempt.
initialDelaySec: 3.0,
// Delay between each polling attempt.
delaySec: 1.5,
// Total number of polling attempts.
maxRetries: 80,
),
// ... any other options ...
);
When polling, you really only need to set the model_id.
inference_params = Mindee::Input::InferenceParameters.new("MY_MODEL_ID")You can also set the various polling parameters. However, we do not recommend setting this option unless you are encountering timeout problems.
require 'mindee'
inference_params = Mindee::Input::InferenceParameters.new(
# ID of the model, required.
'MY_MODEL_ID',
# Set only if having timeout issues.
polling_options: Mindee::Input::PollingOptions.new(
# Initial delay before the first polling attempt.
initial_delay_sec: 3,
# Delay between each polling attempt.
delay_sec: 1.5,
# Total number of polling attempts.
max_retries: 80,
),
# ... any other options ...
)
When polling, you really only need to set the modelId.
InferenceParameters params = InferenceParameters
.builder("MY_MODEL_ID")
.build();
You can also set the various polling parameters. However, we do not recommend setting these options unless you are encountering timeout problems.
InferenceParameters params = InferenceParameters
// ID of the model, required.
.builder("MY_MODEL_ID")
// Set only if having timeout issues.
.pollingOptions(
AsyncPollingOptions.builder()
// Initial delay before the first polling attempt.
.initialDelaySec(3.0)
// Delay between each polling attempt.
.intervalSec(1.5)
// Total number of polling attempts.
.maxRetries(80)
// complete the polling builder
.build()
)
// ... any other options ...
.build();
When polling, you really only need to set the modelId.
var inferenceParams = new InferenceParameters(modelId: "MY_MODEL_ID");
You can also set the various polling parameters. However, we do not recommend setting these options unless you are encountering timeout problems.
var inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID"
// Set only if having timeout issues.
, pollingOptions: new AsyncPollingOptions(
// Initial delay before the first polling attempt.
initialDelaySec: 3.0,
// Delay between each polling attempt.
intervalSec: 1.5,
// Total number of polling attempts.
maxRetries: 80
)
// ... any other options ...
);
Webhook Configuration
The client library will POST the request for you; the results will then be delivered to your Web server, as configured by your webhook endpoint.
For more information on webhooks, take a look at the Webhook Results page.
When using a webhook, you'll need to set the model ID and the webhook ID(s) to use.
inference_params = InferenceParameters(
# ID of the model, required.
model_id="MY_MODEL_ID",
# Add any number of webhook IDs here.
webhook_ids=["ENDPOINT_1_UUID"],
# ... any other options ...
)
const inferenceParams = {
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Add any number of webhook IDs here.
webhookIds: ["ENDPOINT_1_UUID"],
// ... any other options ...
};
$inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID",
// Add any number of webhook IDs here.
// Note: PHP 8.1 only allows a single ID to be passed.
webhookIds: array("ENDPOINT_1_UUID"),
// ... any other options ...
);
inference_params = Mindee::Input::InferenceParameters.new(
# ID of the model, required.
'MY_MODEL_ID',
# Add any number of webhook IDs here.
webhook_ids: ["ENDPOINT_1_UUID"],
# ... any other options ...
)
InferenceParameters params = InferenceParameters
// ID of the model, required.
.builder("MY_MODEL_ID")
// Add any number of webhook IDs here.
.webhookIds(new String[]{"ENDPOINT_1_UUID"})
// ... any other options ...
.build();
var inferenceParams = new InferenceParameters(
// ID of the model, required.
modelId: "MY_MODEL_ID"
// Add any number of webhook IDs here.
, webhookIds: new List<string>{ "ENDPOINT_1_UUID" }
// ... any other options ...
);
You can specify any number of webhook endpoint IDs; each one will be sent the payload.
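For example, here is a minimal Python sketch (the endpoint IDs are placeholders) that delivers the same result to two webhook endpoints:
from mindee import InferenceParameters

inference_params = InferenceParameters(
    # ID of the model, required.
    model_id="MY_MODEL_ID",
    # Both endpoints will receive the inference payload.
    webhook_ids=["ENDPOINT_1_UUID", "ENDPOINT_2_UUID"],
)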
Next Steps
Now that everything is ready, it's time to send your files to the Mindee servers.
If you're sending a local file, head on over to the Load and Adjust a File section for details on the next step.
If you're sending a URL, head on over to the Send a File or URL section.