haive.core.models.llm.providers.base¶

Base provider module for LLM configurations.

This module provides the base classes and utilities for all LLM provider implementations in the Haive framework. It includes the base configuration class with metadata support, rate limiting capabilities, and common functionality shared across all providers.

The module structure ensures consistent interfaces, proper error handling for optional dependencies, and clean separation of concerns between different LLM providers.

Classes:

BaseLLMProvider: Abstract base class for all LLM provider configurations

ProviderImportError: Custom exception for provider import failures

Examples

Creating a custom provider:

from haive.core.models.llm.providers.base import BaseLLMProvider
from haive.core.models.llm.provider_types import LLMProvider

class CustomLLMProvider(BaseLLMProvider):
    provider = LLMProvider.CUSTOM

    def _get_chat_class(self):
        # Import lazily so the dependency stays optional
        from langchain_custom import ChatCustom
        return ChatCustom

    def _get_default_model(self):
        return "custom-model-v1"

    def _get_import_package(self):
        # Pip package name surfaced in ProviderImportError messages
        return "langchain-custom"


Exceptions¶

ProviderImportError

Custom exception for provider-specific import failures.

Classes¶

BaseLLMProvider

Abstract base class for all LLM provider configurations.

Module Contents¶

exception haive.core.models.llm.providers.base.ProviderImportError(provider, package, message=None)[source]¶

Bases: ImportError

Custom exception for provider-specific import failures.

This exception provides clearer error messages when LLM provider dependencies are not installed, including the package name needed for installation.

provider¶

The provider that failed to import

package¶

The package name to install

message¶

Custom error message

Initialize the provider import error.

Parameters:
  • provider (str) – Name of the provider

  • package (str) – Package name for pip install

  • message (str | None) – Optional custom message
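
For illustration, a hedged sketch of how a provider implementation might raise this error when its optional dependency is missing; the exact call site inside the framework may differ:

from haive.core.models.llm.providers.base import ProviderImportError

try:
    import langchain_custom  # optional dependency
except ImportError as exc:
    # Re-raise with an actionable message that names the pip package
    raise ProviderImportError("custom", "langchain-custom") from exc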

class haive.core.models.llm.providers.base.BaseLLMProvider(/, **data)[source]¶

Bases: haive.core.common.mixins.secure_config.SecureConfigMixin, haive.core.models.metadata_mixin.ModelMetadataMixin, haive.core.models.llm.rate_limiting_mixin.RateLimitingMixin, pydantic.BaseModel, abc.ABC

Abstract base class for all LLM provider configurations.

This class provides the common functionality and interface that all LLM provider implementations must follow. It includes:

  • Secure API key management with environment variable fallbacks

  • Model metadata access (context windows, capabilities, pricing)

  • Rate limiting configuration

  • Common configuration parameters

  • Safe import handling for optional dependencies

Subclasses must implement:
  • _get_chat_class(): Return the LangChain chat class

  • _get_default_model(): Return the default model name

  • _get_import_package(): Return the pip package name

Parameters:
  • data (Any)

  • requests_per_second (float | None)

  • tokens_per_second (int | None)

  • tokens_per_minute (int | None)

  • max_retries (int)

  • retry_delay (float)

  • check_every_n_seconds (float | None)

  • burst_size (int | None)

  • provider (LLMProvider)

  • model (str | None)

  • name (str | None)

  • api_key (SecretStr)

  • cache_enabled (bool)

  • cache_ttl (int | None)

  • extra_params (dict[str, Any] | None)

  • debug (bool)

provider¶

The LLM provider enum value

model¶

The specific model identifier

name¶

Optional friendly name for the model

api_key¶

Secure storage of API key with env fallback

cache_enabled¶

Whether to enable response caching

cache_ttl¶

Time-to-live for cached responses

extra_params¶

Additional provider-specific parameters

debug¶

Enable detailed debug output

Examples

Creating a provider configuration:

from haive.core.models.llm.providers.openai import OpenAIProvider

provider = OpenAIProvider(
    model="gpt-4",
    temperature=0.7,
    max_tokens=1000
)

llm = provider.instantiate()
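
Rate limiting uses the same configuration surface; a sketch based on the parameters listed above:

throttled = OpenAIProvider(
    model="gpt-4",
    requests_per_second=2.0,  # documented rate-limiting parameter
    burst_size=5,
)

llm = throttled.instantiate()  # wrapped with rate limiting when limits are set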

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

create_graph_transformer()[source]¶

Create an LLMGraphTransformer using this LLM.

Returns:

LLMGraphTransformer instance

Return type:

Any
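
A usage sketch; convert_to_graph_documents is the method LangChain's LLMGraphTransformer exposes, and is an assumption about the returned object:

from langchain_core.documents import Document
from haive.core.models.llm.providers.openai import OpenAIProvider

provider = OpenAIProvider(model="gpt-4")
transformer = provider.create_graph_transformer()
docs = [Document(page_content="Marie Curie won two Nobel Prizes.")]
graph_documents = transformer.convert_to_graph_documents(docs)  # assumed transformer API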

abstractmethod classmethod get_models()[source]¶

Get available models for this provider.

This method attempts to retrieve the list of available models from the provider’s API. Not all providers support this.

Returns:

List of available model names

Raises:

NotImplementedError – If the provider doesn’t support listing models

Return type:

list[str]
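
Because listing is optional, callers should guard the call; a minimal sketch:

try:
    models = OpenAIProvider.get_models()
except NotImplementedError:
    # This provider does not expose a model-listing API
    models = []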

instantiate(**kwargs)[source]¶

Instantiate the LLM with rate limiting if configured.

This method creates an instance of the LLM using the provider’s chat class and configuration. It also applies rate limiting if any rate limit parameters are configured.

Parameters:

**kwargs – Additional parameters to pass to the LLM

Returns:

The instantiated LLM, potentially wrapped with rate limiting

Raises:

ProviderImportError – If the provider’s required package is not installed

Return type:

Any
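
Keyword arguments are forwarded to the underlying chat class; a sketch continuing the OpenAIProvider example above, where streaming is assumed to be a parameter the chat class accepts:

llm = provider.instantiate(streaming=True)  # forwarded to the chat class
response = llm.invoke("Hello")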

classmethod load_api_key(v, info)[source]¶

Load API key from environment if not provided.

Parameters:
  • v (pydantic.SecretStr) – The provided API key value

  • info – Validation info containing the instance

Returns:

The API key (from input or environment)

Return type:

pydantic.SecretStr
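
A sketch of the environment fallback, assuming the OpenAI provider reads OPENAI_API_KEY (the exact variable name is provider-specific):

import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key for illustration
provider = OpenAIProvider(model="gpt-4")  # api_key resolved from the environment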

set_defaults()[source]¶

Set default values after initialization.

This validator ensures that model and name have appropriate default values if not provided during initialization.

Returns:

The validated instance

Return type:

Self
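
The effect of this validator, sketched with the OpenAI provider; the concrete defaults come from _get_default_model() and are provider-defined:

provider = OpenAIProvider()  # no model or name given
print(provider.model)  # provider's default model identifier
print(provider.name)  # defaulted friendly name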

model_config¶

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.