AI Providers

This guide provides an overview of the AI providers supported by CentralMind Gateway, along with configuration options and examples.

Supported Providers

We support the following AI providers:

  • OpenAI (and OpenAI-compatible endpoints)
  • Anthropic
  • Amazon Bedrock
  • Google Gemini
  • Anthropic on Google Vertex AI

We’ve tested the gateway with OpenAI o3-mini, Anthropic Claude 3.7, and Gemini 2.0 Flash Thinking.

Google Gemini provides a generous free tier.

For best performance, we recommend using:

  • OpenAI: o3-mini
  • Anthropic: Claude 3.7
  • Google: Gemini 2.0 Flash Thinking (Free tier available)

These models provide a good balance of performance, speed, and cost for most use cases.
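For instance, to try the free Gemini tier, you can point the gateway at the `gemini` provider. This is a sketch based on the flags documented below; the model identifier is an assumption and may differ from the exact name Google exposes:

```shell
# Hypothetical invocation: provider and flags come from the configuration
# schema in this guide; the --ai-model value is a placeholder for whichever
# Gemini 2.0 Flash Thinking model name your account has access to.
./gateway discover \
  --ai-provider gemini \
  --ai-api-key your-gemini-api-key \
  --config connection.yaml
```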

Configuration Schema

Below is the configuration schema for all supported AI providers:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `ai-provider` | string | No | AI provider to use. Options: `openai`, `anthropic`, `bedrock`, `gemini`, `anthropic-vertexai`. Defaults to `openai` |
| `ai-endpoint` | string | No | Custom OpenAI-compatible API endpoint URL |
| `ai-api-key` | string | No | AI API token for authentication |
| `bedrock-region` | string | No | AWS region for Amazon Bedrock |
| `vertexai-region` | string | No | Google Cloud region for Vertex AI |
| `vertexai-project` | string | No | Google Cloud project ID for Vertex AI |
| `ai-model` | string | No | AI model to use (provider-specific) |
| `ai-max-tokens` | integer | No | Maximum tokens to use in the response (0 = provider default) |
| `ai-temperature` | float | No | Temperature for AI responses (-1.0 = provider default) |
| `ai-reasoning` | boolean | No | Enable reasoning mode for supported models (default: `true`) |
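To illustrate the cloud-provider fields, here are two hedged sketches using the Bedrock and Vertex AI options from the schema. The region names and project ID are placeholders, not recommendations:

```shell
# Sketch: Amazon Bedrock, authenticated via your AWS environment;
# us-east-1 is a placeholder region.
./gateway discover \
  --ai-provider bedrock \
  --bedrock-region us-east-1 \
  --config connection.yaml

# Sketch: Anthropic models served through Google Vertex AI;
# my-gcp-project and us-central1 are placeholders.
./gateway discover \
  --ai-provider anthropic-vertexai \
  --vertexai-project my-gcp-project \
  --vertexai-region us-central1 \
  --config connection.yaml
```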

Example

First, set the OPENAI_API_KEY environment variable. You can get an OpenAI API key from the OpenAI Platform.

export OPENAI_API_KEY='yourkey'

./gateway discover \
--ai-provider openai \
--config connection.yaml

Additional Configuration Options

You can further customize the AI behavior with these optional parameters:

./gateway discover \
--ai-provider openai \
--ai-api-key your-openai-api-key \
--ai-model o3-mini \
--ai-max-tokens 8192 \
--ai-temperature 1.0 \
--ai-reasoning=true \
--config connection.yaml
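The `ai-endpoint` field from the schema lets you target any OpenAI-compatible server, such as a self-hosted model. A minimal sketch, where the URL is a placeholder for your own deployment:

```shell
# Sketch: route requests to a self-hosted OpenAI-compatible endpoint;
# the URL below is a placeholder, not a real service.
./gateway discover \
  --ai-provider openai \
  --ai-endpoint https://llm.example.internal/v1 \
  --ai-api-key your-api-key \
  --config connection.yaml
```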