OpenAI
The OpenAI provider enables direct integration with OpenAI’s models, including o3-mini. This guide covers setup, configuration, authentication, and usage options for connecting to OpenAI’s API.
Overview
The OpenAI provider allows direct access to OpenAI’s language models, including the latest o-series and GPT models. It offers a streamlined integration with robust authentication, flexible configuration options, and support for advanced features such as high reasoning effort and JSON responses.
Authentication
The provider uses API key authentication to access OpenAI’s services. You’ll need to obtain an API key from OpenAI to use this provider:
- Environment Variable: Set `OPENAI_API_KEY` to provide your credentials.
- Command Line: Pass your API key using the appropriate flag.
For more details on obtaining and managing OpenAI API keys, refer to OpenAI’s documentation.
Example Usage
    export OPENAI_API_KEY='yourkey'
Below is a basic example of how to use the provider with a connection configuration file:
    ./gateway discover \
      --ai-provider openai \
      --config connection.yaml
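If you want to confirm that the key itself is valid before running the gateway, here is a minimal sketch using OpenAI’s official Python SDK (illustrative only; this step is not required by the gateway):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Listing available models is a cheap way to verify the key and connectivity.
models = client.models.list()
print([m.id for m in models.data][:5])
```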
Endpoint Configuration
By default, the provider connects to OpenAI’s standard API endpoint. You can specify a custom endpoint using one of the following methods:
- Configuration File: Include the endpoint in your `connection.yaml`.
- Environment Variable: Set the `OPENAI_ENDPOINT` variable.
Example with a custom endpoint:
    export OPENAI_ENDPOINT="https://custom-openai-endpoint.example.com"

    ./gateway discover \
      --ai-provider openai \
      --config connection.yaml
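For context, pointing the provider at a custom endpoint plays the same role as overriding the base URL in a direct SDK call. A minimal Python sketch, where the URL is a placeholder for any OpenAI-compatible endpoint:

```python
from openai import OpenAI

# Placeholder URL for illustration; substitute your own OpenAI-compatible endpoint.
client = OpenAI(base_url="https://custom-openai-endpoint.example.com/v1")
```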
Model Selection
By default, the OpenAI provider uses `o3-mini`. You can specify a different model using one of the following methods:
- Command-line Flag: Use the `--ai-model` flag.
- Environment Variable: Set the `OPENAI_MODEL_ID` variable.
Examples:
    # Specify model via command line
    ./gateway discover \
      --ai-provider openai \
      --ai-model o3-mini \
      --config connection.yaml

    # Or via environment variable
    export OPENAI_MODEL_ID=o3-mini

    ./gateway discover \
      --ai-provider openai \
      --config connection.yaml
Advanced Configuration
Reasoning Mode
Enable OpenAI’s high reasoning effort for complex tasks:
    ./gateway discover \
      --ai-provider openai \
      --ai-reasoning=true \
      --config connection.yaml
When reasoning mode is enabled, the provider sets the `reasoning_effort` parameter to `high`.
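For context, `reasoning_effort` is a parameter of OpenAI’s Chat Completions API for reasoning models. A rough sketch of an equivalent direct call with the Python SDK (illustrative only; the prompt is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Mirrors what the provider requests when --ai-reasoning=true:
# reasoning_effort="high" spends more reasoning tokens for deeper analysis.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[{"role": "user", "content": "Summarize the trade-offs of table partitioning."}],
)
print(response.choices[0].message.content)
```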
Response Length Control
Control the maximum token count in responses:
    ./gateway discover \
      --ai-provider openai \
      --ai-max-tokens 8192 \
      --config connection.yaml
If not specified, the limit defaults to 100,000 tokens.
Temperature Adjustment
Adjust the randomness of responses with the temperature parameter:
    ./gateway discover \
      --ai-provider openai \
      --ai-temperature 0.7 \
      --config connection.yaml
Lower values produce more deterministic outputs, while higher values increase creativity and randomness.
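To see the effect directly, here is a small sketch using OpenAI’s Python SDK outside the gateway: it sends the same prompt at a low and a high temperature, and also caps the response length, which is analogous to `--ai-max-tokens`. Low-temperature outputs tend to repeat across runs, while high-temperature outputs vary:

```python
from openai import OpenAI

client = OpenAI()

prompt = "Suggest a one-line name for a schema discovery report."

for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # a model that accepts the temperature parameter
        temperature=temperature,
        max_tokens=30,        # response length cap, analogous to --ai-max-tokens
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```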
Usage Costs
The OpenAI provider includes cost estimation for various models:
| Model | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) |
|---|---|---|
| gpt-4o-mini | $0.15 | $0.60 |
| gpt-4o | $2.50 | $10.00 |
| o3-mini | $1.10 | $4.40 |
| o1 | $15.00 | $60.00 |
Costs are calculated based on the actual token usage for each request.
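As a worked illustration of how such an estimate follows from the rates above (a sketch, not the provider’s actual accounting code):

```python
# Per-million-token rates (USD) from the table above: (input, output).
RATES = {
    "gpt-4o-mini": (0.15, 0.60),
    "gpt-4o": (2.50, 10.00),
    "o3-mini": (1.10, 4.40),
    "o1": (15.00, 60.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request, given its token usage."""
    input_rate, output_rate = RATES[model]
    return input_tokens / 1_000_000 * input_rate + output_tokens / 1_000_000 * output_rate

# Example: 12,000 input and 3,000 output tokens on o3-mini
# -> 0.012 * $1.10 + 0.003 * $4.40 = $0.0132 + $0.0132 = $0.0264
print(f"${estimate_cost('o3-mini', 12_000, 3_000):.4f}")
```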
Recommended Best Practices
- API Key Security: Use environment variables for sensitive settings like API keys.
- Manage Token Count: Set a reasonable maximum token count to control costs.
- Enable Reasoning Mode: Use for complex analytical tasks.
- Select the Right Model: Choose models based on your specific needs and budget constraints.