haive.core.models.llm.providers.openai¶
OpenAI Provider Module.
This module implements the OpenAI language model provider for the Haive framework, supporting GPT-3.5, GPT-4, and other OpenAI models through a clean, consistent interface.
The provider handles API key management, model configuration, and safe imports of the langchain-openai package dependencies.
Examples
Basic usage:
from haive.core.models.llm.providers.openai import OpenAIProvider

provider = OpenAIProvider(
    model="gpt-4",
    temperature=0.7,
    max_tokens=1000
)
llm = provider.instantiate()
With rate limiting:
provider = OpenAIProvider(
    model="gpt-3.5-turbo",
    requests_per_second=10,
    tokens_per_minute=90000
)
llm = provider.instantiate()
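The rate-limiting fields (requests_per_second, burst_size) describe a token-bucket style limit. The sketch below illustrates that mechanism in isolation; it is an assumption for explanatory purposes, not Haive's actual limiter implementation.

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch of a requests_per_second limit with a
    burst_size cap. Illustrative only; Haive's real limiter may differ."""

    def __init__(self, rate, burst):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum stored tokens
        self.tokens = burst       # start with a full bucket
        self.last = time.monotonic()

    def acquire(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)
results = [bucket.acquire() for _ in range(6)]
# the first 5 calls drain the burst; the 6th may have to wait for refill
```

A request that fails to acquire a token would typically sleep briefly and retry, which is the behavior the max_retries and retry_delay fields parameterize.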
Classes¶
OpenAIProvider: OpenAI language model provider configuration.
Module Contents¶
- class haive.core.models.llm.providers.openai.OpenAIProvider(/, **data)[source]¶
Bases:
haive.core.models.llm.providers.base.BaseLLMProvider
OpenAI language model provider configuration.
This provider supports all OpenAI chat models including GPT-3.5-turbo, GPT-4, and GPT-4-turbo variants. It handles API authentication, model selection, and advanced parameters like temperature and token limits.
- Parameters:
data (Any)
requests_per_second (float | None)
tokens_per_second (int | None)
tokens_per_minute (int | None)
max_retries (int)
retry_delay (float)
check_every_n_seconds (float | None)
burst_size (int | None)
provider (LLMProvider)
model (str | None)
name (str | None)
api_key (SecretStr)
cache_enabled (bool)
cache_ttl (int | None)
debug (bool)
temperature (float | None)
max_tokens (int | None)
top_p (float | None)
frequency_penalty (float | None)
presence_penalty (float | None)
n (int | None)
organization (str | None)
- provider¶
Always LLMProvider.OPENAI
- model¶
Model name (default: “gpt-3.5-turbo”)
- temperature¶
Sampling temperature (0-2)
- max_tokens¶
Maximum tokens to generate
- top_p¶
Nucleus sampling parameter
- frequency_penalty¶
Frequency penalty (-2 to 2)
- presence_penalty¶
Presence penalty (-2 to 2)
- n¶
Number of completions to generate
- Environment Variables:
OPENAI_API_KEY: API key for authentication
OPENAI_ORG_ID: Optional organization ID
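A common pattern with such environment variables is to prefer an explicitly passed value and fall back to the environment. The helper below is a hypothetical sketch of that resolution order using the documented variable names; it is not the provider's actual code.

```python
import os

def resolve_openai_credentials(api_key=None, organization=None):
    """Illustrative sketch: fall back to the documented environment
    variables when explicit values are not supplied. The fallback order
    shown here is an assumption, not Haive's implementation."""
    key = api_key or os.environ.get("OPENAI_API_KEY")
    org = organization or os.environ.get("OPENAI_ORG_ID")
    if key is None:
        raise ValueError("No API key: pass api_key or set OPENAI_API_KEY")
    return key, org

# Example: credentials supplied via the environment (dummy value)
os.environ["OPENAI_API_KEY"] = "sk-example"
key, org = resolve_openai_credentials()
```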
Examples
Creating a GPT-4 instance:
provider = OpenAIProvider(
    model="gpt-4",
    temperature=0.7,
    max_tokens=2000
)
llm = provider.instantiate()
response = llm.invoke("Explain quantum computing")
Using with custom parameters:
provider = OpenAIProvider(
    model="gpt-3.5-turbo-16k",
    temperature=0.2,
    top_p=0.9,
    frequency_penalty=0.5
)
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
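As an illustration of the documented numeric ranges (temperature 0-2, frequency and presence penalties -2 to 2), here is a hypothetical standalone range check. The provider itself validates through pydantic; this helper only restates the constraints.

```python
def check_sampling_params(temperature=None, frequency_penalty=None,
                          presence_penalty=None):
    """Hypothetical helper enforcing the ranges documented above.
    Not the provider's actual validation logic."""
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError(f"temperature {temperature} outside [0, 2]")
    for name, value in (("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)):
        if value is not None and not -2 <= value <= 2:
            raise ValueError(f"{name} {value} outside [-2, 2]")
    return True

check_sampling_params(temperature=0.7, frequency_penalty=0.5)
```

With the real provider, an out-of-range value would surface as a pydantic ValidationError at construction time rather than a plain ValueError.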
- classmethod get_models()[source]¶
Get available OpenAI models.
- Returns:
List of available model names
- Raises:
ImportError – If openai package is not installed
Exception – If API call fails
- Return type:
list[str]
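Since get_models queries the OpenAI API, the raw listing includes non-chat models (embeddings, audio, and so on). The sketch below shows one plausible way such a listing could be narrowed to the GPT chat models this provider targets; the filtering rule is an assumption, not Haive's actual code.

```python
def filter_chat_models(model_ids):
    """Illustrative sketch: given raw model ids (as a models-list API call
    might return them), keep the GPT chat models. The gpt- prefix rule is
    an assumption for demonstration, not the provider's real filter."""
    return sorted(m for m in model_ids if m.startswith("gpt-"))

available = filter_chat_models(
    ["gpt-4", "text-embedding-ada-002", "gpt-3.5-turbo", "whisper-1"]
)
```

In practice you would call OpenAIProvider.get_models() and let it raise ImportError (openai not installed) or propagate API errors, as documented above.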