Configure the Client

Reference documentation on preparing and configuring the Mindee client.

This is reference documentation. Looking for a quick TL;DR?

  • Take a look at the Integrating Mindee page.

  • Use the search bar above to ask our documentation AI to write code samples for you.

Requirements

Before proceeding you'll need to have one of the official Mindee client libraries installed.

You'll also need one of your API Keys and at least one Model configured.

Overview

Before sending any files to the Mindee servers for processing, you'll need to initialize your client and set inference options.

These settings will determine how your files are sent, including any extra options, and how you want to process the result.

Initialize the Mindee Client

This should be the first step in your code. It will determine which organization is used to make the calls.

You should reuse the same client instance for all calls of the same organization.

The client instance is thread-safe where applicable.

First import the needed classes:

from mindee import ClientV2, InferenceParameters

For the API key, you can pass it directly to the client. This is useful for quick testing.

api_key = "MY_API_KEY"

mindee_client = ClientV2(api_key)

Instead of passing the key directly, you can also set the following environment variable:

MINDEE_V2_API_KEY

This is recommended for production use: there is no need to pass the api_key when initializing the client.

mindee_client = ClientV2()
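For example, in a POSIX shell you can export the variable before starting your application (the value below is a placeholder; use one of your actual API Keys):

```shell
# Placeholder value; substitute one of your actual API Keys.
export MINDEE_V2_API_KEY="MY_API_KEY"
```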

Set Inference Parameters

Inference parameters control:

  • which model to use

  • server-side processing options

  • how the results will be returned to you

Processing Options

These are mostly the same options as those available in the Web API.

Only the model_id is required.

inference_params = InferenceParameters(
    # ID of the model, required.
    model_id="MY_MODEL_ID",
    
    # Options:

    # If set to `True`, enables Retrieval-Augmented Generation (RAG).
    rag=True,

    # Use an alias to link the file to your own DB.
    # If empty, no alias will be used.
    alias="MY_ALIAS",
)

Polling Configuration

The client library will POST the request for you, and then automatically poll the API.

When polling, only the model_id is strictly required.

inference_params = InferenceParameters(model_id="MY_MODEL_ID")

You can also set the various polling parameters. However, we do not recommend setting this option unless you are encountering timeout problems.

from mindee import PollingOptions

inference_params = InferenceParameters(
    model_id="MY_MODEL_ID",
    
    # Set only if having timeout issues.
    polling_options=PollingOptions(
        # Initial delay before the first polling attempt.
        initial_delay_sec=3,
        # Delay between each polling attempt.
        delay_sec=1.5,
        # Total number of polling attempts.
        max_retries=80,
    ),
    # ... any other options ...
)
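As a rough rule of thumb (ignoring network and server time), the longest the client will wait before giving up is initial_delay_sec plus max_retries times delay_sec. With the values above:

```python
# Polling settings from the example above.
initial_delay_sec = 3
delay_sec = 1.5
max_retries = 80

# Worst-case total wait before the client stops polling.
max_wait_sec = initial_delay_sec + max_retries * delay_sec
print(max_wait_sec)  # 123.0 seconds
```

If your documents routinely take longer than this to process, increase max_retries or delay_sec rather than initial_delay_sec.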

Webhook Configuration

The client library will POST the request to the Mindee servers, which will then send the result to your Web server, as configured by your webhook endpoint.

For more information on webhooks, take a look at the Webhooks page.

When using a webhook, you'll need to set the model ID and the webhook ID(s) to use.

inference_params = InferenceParameters(
    model_id="MY_MODEL_ID",
    webhook_ids=["ENDPOINT_1_UUID"],
    
    # ... any other options ...
)

You can specify any number of webhook endpoint IDs; each will be sent the payload.
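For example, a configuration sketch with two endpoints (the UUIDs are placeholders for your own configured endpoint IDs):

```python
from mindee import InferenceParameters

# Both configured endpoints will receive the inference payload.
inference_params = InferenceParameters(
    model_id="MY_MODEL_ID",
    webhook_ids=["ENDPOINT_1_UUID", "ENDPOINT_2_UUID"],
)
```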

Next Steps

Now that everything is ready, it's time to send your files to the Mindee servers.

If you're sending a local file, head on over to the Load and Adjust a File page for details on the next step.

If you're sending a URL, head on over to the Send a File or URL section.
