haive.core.models.llm.providers.azure¶

Azure OpenAI Provider Module.

This module implements the Azure OpenAI language model provider for the Haive framework, supporting GPT models deployed on Microsoft Azure with enhanced security and compliance.

The provider handles Azure-specific authentication, endpoint configuration, and model deployment access via the langchain-openai package.

Examples

Basic usage:

from haive.core.models.llm.providers.azure import AzureOpenAIProvider

provider = AzureOpenAIProvider(
    deployment_name="gpt-4-deployment",
    azure_endpoint="https://myresource.openai.azure.com/",
    api_version="2024-02-15-preview",
    temperature=0.7
)
llm = provider.instantiate()

With Azure AD authentication:

provider = AzureOpenAIProvider(
    deployment_name="gpt-35-turbo",
    azure_endpoint="https://myresource.openai.azure.com/",
    use_azure_ad=True
)
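With use_azure_ad=True, the provider presumably acquires a bearer token through Azure AD instead of a static API key. In langchain-openai this is typically wired through a zero-argument callable (azure_ad_token_provider) that returns a fresh token string before each request. A self-contained sketch of that callable pattern, with a stub standing in for the real credential (the stub and its token value are illustrative only):

```python
from typing import Callable

# In production you would use azure-identity:
#   from azure.identity import DefaultAzureCredential, get_bearer_token_provider
#   token_provider = get_bearer_token_provider(
#       DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
#   )
# A stub stands in here so the pattern is self-contained:

def make_stub_token_provider(token: str) -> Callable[[], str]:
    """Return a zero-argument callable yielding a bearer token."""
    def provider() -> str:
        return token
    return provider

token_provider = make_stub_token_provider("fake-bearer-token")
token = token_provider()  # the client calls this before each request
```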

AzureOpenAIProvider(*[, ...])

Azure OpenAI language model provider configuration.

Classes¶

AzureOpenAIProvider

Azure OpenAI language model provider configuration.

Module Contents¶

class haive.core.models.llm.providers.azure.AzureOpenAIProvider(/, **data)[source]¶

Bases: haive.core.models.llm.providers.base.BaseLLMProvider

Azure OpenAI language model provider configuration.

This provider supports all OpenAI models deployed on Microsoft Azure, including GPT-4, GPT-3.5-turbo, and others with enterprise-grade security.

Parameters:
  • data (Any)

  • requests_per_second (float | None)

  • tokens_per_second (int | None)

  • tokens_per_minute (int | None)

  • max_retries (int)

  • retry_delay (float)

  • check_every_n_seconds (float | None)

  • burst_size (int | None)

  • provider (LLMProvider)

  • model (str | None)

  • name (str | None)

  • api_key (SecretStr)

  • cache_enabled (bool)

  • cache_ttl (int | None)

  • extra_params (dict[str, Any] | None)

  • debug (bool)

  • deployment_name (str)

  • azure_endpoint (str)

  • api_version (str)

  • use_azure_ad (bool)

  • temperature (float | None)

  • max_tokens (int | None)

  • top_p (float | None)

  • frequency_penalty (float | None)

  • presence_penalty (float | None)
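The retry parameters above (max_retries, retry_delay) suggest an exponential-backoff pattern for transient failures. A minimal standalone sketch of that idea; this is an assumed behavior, not the provider's actual implementation, and the helper names are illustrative:

```python
import time


def call_with_retries(fn, max_retries: int = 3, retry_delay: float = 1.0):
    """Invoke fn, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the last error
            # Back off: retry_delay, 2*retry_delay, 4*retry_delay, ...
            time.sleep(retry_delay * (2 ** attempt))


# Example: a flaky callable that succeeds on the third attempt
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky, max_retries=3, retry_delay=0.01)
```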

provider¶

Always LLMProvider.AZURE

Type:

LLMProvider

deployment_name¶

Azure deployment name for the model

Type:

str

azure_endpoint¶

Azure OpenAI resource endpoint URL

Type:

str

api_version¶

Azure OpenAI API version

Type:

str

use_azure_ad¶

Whether to use Azure AD authentication

Type:

bool

temperature¶

Sampling temperature (0.0-2.0)

Type:

float

max_tokens¶

Maximum tokens in response

Type:

int

top_p¶

Nucleus sampling parameter

Type:

float

frequency_penalty¶

Frequency penalty parameter

Type:

float

presence_penalty¶

Presence penalty parameter

Type:

float

Examples

Standard deployment:

provider = AzureOpenAIProvider(
    deployment_name="gpt-4",
    azure_endpoint="https://myresource.openai.azure.com/",
    temperature=0.7,
    max_tokens=1000
)

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_endpoint(v)[source]¶

Validate Azure endpoint format.

Parameters:

v (str)

Return type:

str
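The validator body is not shown in this page. A plausible standalone sketch of what Azure endpoint validation might check; the rules below (HTTPS scheme, *.openai.azure.com host, trailing-slash normalization) are assumptions for illustration, not the actual implementation:

```python
def validate_endpoint(v: str) -> str:
    """Sketch: ensure the endpoint is an HTTPS Azure OpenAI URL."""
    if not v.startswith("https://"):
        raise ValueError("azure_endpoint must use https://")
    if ".openai.azure.com" not in v:
        raise ValueError("azure_endpoint must be an *.openai.azure.com URL")
    return v.rstrip("/") + "/"  # normalize to a single trailing slash


ok = validate_endpoint("https://myresource.openai.azure.com")
```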

max_tokens: int | None = None¶

Maximum number of tokens to generate in the response.