# AI Providers
This guide provides an overview of the AI providers supported by CentralMind Gateway, along with configuration options and examples.
## Supported Providers
We support the following AI providers:
- OpenAI and all OpenAI-compatible providers
- Anthropic
- Amazon Bedrock
- Google Vertex AI (Anthropic)
- Google Gemini
We've tested with OpenAI o3-mini, Anthropic Claude 3.7, and Gemini 2.0 Flash Thinking, which we recommend for optimal performance.
Google Gemini provides a generous free tier.
## Recommended Models
For best performance, we recommend using:
- OpenAI: o3-mini
- Anthropic: Claude 3.7
- Google: Gemini 2.0 Flash Thinking (Free tier available)
These models provide a good balance of performance, speed, and cost for most use cases.
## Configuration Schema
Below is the configuration schema for all supported AI providers:
| Field | Type | Required | Description |
|---|---|---|---|
| `ai-provider` | string | No | AI provider to use. Options: `openai`, `anthropic`, `bedrock`, `gemini`, `anthropic-vertexai`. Defaults to `openai` |
| `ai-endpoint` | string | No | Custom OpenAI-compatible API endpoint URL |
| `ai-api-key` | string | No | AI API token for authentication |
| `bedrock-region` | string | No | AWS region for Amazon Bedrock |
| `vertexai-region` | string | No | Google Cloud region for Vertex AI |
| `vertexai-project` | string | No | Google Cloud project ID for Vertex AI |
| `ai-model` | string | No | AI model to use (provider-specific) |
| `ai-max-tokens` | integer | No | Maximum tokens in the response (0 = provider default) |
| `ai-temperature` | float | No | Temperature for AI responses (-1.0 = provider default) |
| `ai-reasoning` | boolean | No | Enable reasoning mode for supported models (default: true) |
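As the examples below show, these fields are passed to the `discover` command as flags of the same name. For instance, switching to Anthropic only requires changing the provider and supplying the matching API key; this is a minimal sketch where the key value is a placeholder:

```bash
# Minimal sketch: select Anthropic via the ai-provider field and pass the API key directly.
# Replace the placeholder with your own Anthropic API key.
./gateway discover \
  --ai-provider anthropic \
  --ai-api-key your-anthropic-api-key \
  --config connection.yaml
```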
## Example
First, set `OPENAI_API_KEY` in your environment. You can get an OpenAI API key from the OpenAI Platform.
```bash
export OPENAI_API_KEY='yourkey'

./gateway discover \
  --ai-provider openai \
  --config connection.yaml
```
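If you would rather use Google Gemini (which offers a generous free tier), the same pattern applies; this is a minimal sketch where the key is a placeholder passed via the documented `--ai-api-key` flag:

```bash
# Minimal sketch: the same discover command, pointed at Google Gemini.
# Replace the placeholder with your own Gemini API key.
./gateway discover \
  --ai-provider gemini \
  --ai-api-key your-gemini-api-key \
  --config connection.yaml
```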
## Additional Configuration Options
You can further customize the AI behavior with these optional parameters:
```bash
./gateway discover \
  --ai-provider openai \
  --ai-api-key your-openai-api-key \
  --ai-model o3-mini \
  --ai-max-tokens 8192 \
  --ai-temperature 1.0 \
  --ai-reasoning=true \
  --config connection.yaml
```
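For the cloud-hosted providers, the region and project fields from the schema are supplied the same way. The sketch below assumes each of those schema fields is exposed as a CLI flag of the same name (mirroring the other flags above) and that your cloud credentials are already configured; the region and project values are placeholders:

```bash
# Sketch: Amazon Bedrock, assuming AWS credentials are already configured
# and that the bedrock-region schema field is exposed as --bedrock-region.
./gateway discover \
  --ai-provider bedrock \
  --bedrock-region us-east-1 \
  --config connection.yaml

# Sketch: Anthropic models on Google Vertex AI, assuming the vertexai-region and
# vertexai-project schema fields are exposed as flags of the same name.
./gateway discover \
  --ai-provider anthropic-vertexai \
  --vertexai-region us-central1 \
  --vertexai-project your-gcp-project-id \
  --config connection.yaml
```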