haive.core.models.llm.base

Base LLM configuration with model metadata support.

This module provides base classes and implementations for LLM providers with support for model metadata, context windows, and capabilities.

Deprecated since version 0.2.0: Use haive.core.models.llm.providers instead. The individual provider configurations have been moved to separate modules for better organization and maintainability.
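
A minimal migration sketch, assuming the provider configs keep their class names under the new package (verify the exact module layout in haive.core.models.llm.providers):

# Deprecated import path (this module):
from haive.core.models.llm.base import OpenAILLMConfig

# Preferred import path going forward:
from haive.core.models.llm.providers import OpenAILLMConfig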

Classes

AI21LLMConfig

Configuration for AI21 models.

AlephAlphaLLMConfig

Configuration for Aleph Alpha models.

AnthropicLLMConfig

Configuration for Anthropic models.

AzureLLMConfig

Configuration specific to Azure OpenAI.

BedrockLLMConfig

Configuration for AWS Bedrock models.

CohereLLMConfig

Configuration for Cohere models.

DatabricksLLMConfig

Configuration for Databricks models.

DeepSeekLLMConfig

Configuration for DeepSeek models.

FireworksAILLMConfig

Configuration for Fireworks AI models.

GeminiLLMConfig

Configuration for Google Gemini models.

GooseAILLMConfig

Configuration for GooseAI models.

GroqLLMConfig

Configuration for Groq models.

HuggingFaceLLMConfig

Configuration for Hugging Face models.

LLMConfig

Base configuration for Language Model providers with security and metadata support.

LlamaCppLLMConfig

Configuration for Llama.cpp local models.

MistralLLMConfig

Configuration for Mistral models.

MosaicMLLLMConfig

Configuration for MosaicML models.

NLPCloudLLMConfig

Configuration for NLP Cloud models.

NVIDIALLMConfig

Configuration for NVIDIA AI Endpoints models.

OllamaLLMConfig

Configuration for Ollama local models.

OpenAILLMConfig

Configuration for OpenAI models.

OpenLMLLMConfig

Configuration for OpenLM models.

PerplexityLLMConfig

Configuration for Perplexity AI models.

PetalsLLMConfig

Configuration for Petals distributed models.

ReplicateLLMConfig

Configuration for Replicate models.

TogetherAILLMConfig

Configuration for Together AI models.

UpstageLLMConfig

Configuration for Upstage models.

VertexAILLMConfig

Configuration for Google Vertex AI models.

WatsonxLLMConfig

Configuration for IBM watsonx models.

XAILLMConfig

Configuration for xAI models.

Module Contents

class haive.core.models.llm.base.AI21LLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for AI21 models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the AI21 chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.AlephAlphaLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Aleph Alpha models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Aleph Alpha chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.AnthropicLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Anthropic models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

classmethod get_models()[source]

Get all available Anthropic models.

Return type:

list[str]

instantiate(**kwargs)[source]

Instantiate the Anthropic chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

classmethod load_model(v)[source]

Load the model name from the environment if not provided.

Parameters:

v (str)

Return type:

str

class haive.core.models.llm.base.AzureLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration specific to Azure OpenAI.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Azure OpenAI chat model with robust error handling.

Return type:

Any

classmethod load_api_base(v)[source]

Load the API base URL from the environment if not provided.

Parameters:

v (str)

Return type:

str

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

classmethod load_api_type(v)[source]

Load the API type from the environment if not provided.

Parameters:

v (str)

Return type:

str

classmethod load_api_version(v)[source]

Load the API version from the environment if not provided.

Parameters:

v (str)

Return type:

str
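
A hedged Azure sketch. The environment variable names below are the conventional Azure OpenAI ones and are assumptions here; confirm them against the load_* validators above:

import os

os.environ["AZURE_OPENAI_API_KEY"] = "your-key"  # assumed variable name
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://my-resource.openai.azure.com/"  # assumed
os.environ["OPENAI_API_VERSION"] = "2024-02-01"  # assumed

config = AzureLLMConfig(model="gpt-4o")  # deployment/model name
llm = config.instantiate()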

class haive.core.models.llm.base.BedrockLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for AWS Bedrock models.

AWS Bedrock provides access to foundation models from various providers, including Anthropic, AI21, Cohere, and Amazon's own models.

Parameters:

data (Any)

model_id

The Bedrock model ID (e.g., 'anthropic.claude-v2')

region_name

AWS region for the Bedrock service

aws_access_key_id

AWS access key (optional; falls back to the AWS credentials chain)

aws_secret_access_key

AWS secret key (optional; falls back to the AWS credentials chain)

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

instantiate(**kwargs)[source]

Instantiate the AWS Bedrock chat model.

Return type:

Any
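
A minimal sketch using the documented fields (values are illustrative):

config = BedrockLLMConfig(
    model_id="anthropic.claude-v2",
    region_name="us-east-1",
    # aws_access_key_id / aws_secret_access_key omitted: the default
    # AWS credentials chain is used when they are not set.
)
llm = config.instantiate()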

class haive.core.models.llm.base.CohereLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Cohere models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Cohere chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.DatabricksLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Databricks models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Databricks chat model.

Return type:

Any

class haive.core.models.llm.base.DeepSeekLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for DeepSeek models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

classmethod get_models()[source]

Get all available DeepSeek models.

Return type:

list[str]

instantiate(**kwargs)[source]

Instantiate the DeepSeek chat model.

Return type:

Any

class haive.core.models.llm.base.FireworksAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Fireworks AI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Fireworks AI chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.GeminiLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Google Gemini models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Google Gemini chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.GooseAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for GooseAI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the GooseAI chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.GroqLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Groq models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Groq chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.HuggingFaceLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Hugging Face models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Hugging Face model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.LLMConfig(/, **data)[source]

Bases: haive.core.common.mixins.secure_config.SecureConfigMixin, haive.core.models.metadata_mixin.ModelMetadataMixin, haive.core.models.llm.rate_limiting_mixin.RateLimitingMixin, pydantic.BaseModel

Base configuration for Language Model providers with security and metadata support.

This class provides:

1. Secure API key handling with environment variable fallbacks

2. Model metadata access (context windows, capabilities, pricing)

3. Common configuration parameters

4. Graph transformation utilities

5. Rate limiting capabilities via RateLimitingMixin

All LLM configurations inherit from this base class, providing a consistent interface for configuration, instantiation, and management of language models from various providers.

Parameters:

data (Any)

provider

The LLM provider enum value

model

The specific model identifier

name

Optional friendly name for the model

api_key

Secure storage of API key with env fallback

cache_enabled

Whether to enable response caching

cache_ttl

Time-to-live for cached responses

extra_params

Additional provider-specific parameters

debug

Enable detailed debug output

Examples

Direct instantiation (not recommended):

config = LLMConfig(
    provider=LLMProvider.OPENAI,
    model="gpt-4",
    api_key=SecretStr("your-key")
)

Using provider-specific config (recommended):

config = OpenAILLMConfig(
    model="gpt-4",
    temperature=0.7
)
llm = config.instantiate()
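
The environment fallback in action, a minimal sketch. Each provider's load_api_key validator defines which variable it reads; OPENAI_API_KEY is the conventional name assumed here:

import os

# Assumed variable name; confirm against the provider's validator.
os.environ["OPENAI_API_KEY"] = "sk-..."

config = OpenAILLMConfig(model="gpt-4")  # api_key resolved from the environment
llm = config.instantiate()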

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

check_context_window_fit(messages, tools=None, reserve_output_tokens=1000)[source]

Check if messages fit within the model's context window.

This method helps prevent "context length exceeded" errors by validating message length before making API calls.

Args:

messages: Sequence of chat messages to check

tools: Optional sequence of function schemas for tool calls

reserve_output_tokens: Number of tokens to reserve for output

Returns:

Dictionary with fit analysis:

{
    "fits": bool,
    "input_tokens": int,
    "context_window": int,
    "available_tokens": int,
    "tokens_over_limit": int,  # 0 if fits, positive if over
}

Examples

>>> config = OpenAILLMConfig(model="gpt-3.5-turbo")
>>>
>>> # Check if messages fit
>>> fit_check = config.check_context_window_fit(messages)
>>> if not fit_check["fits"]:
...     print(f"Messages exceed context window by {fit_check['tokens_over_limit']} tokens")

Return type:

dict[str, bool | int]
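
A pre-flight sketch built on the documented return keys: trim the oldest messages until the conversation fits, keeping room for the reply.

fit = config.check_context_window_fit(messages, reserve_output_tokens=1000)
while not fit["fits"] and len(messages) > 1:
    messages = messages[1:]  # drop the oldest message and re-check
    fit = config.check_context_window_fit(messages, reserve_output_tokens=1000)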

create_graph_transformer()[source]

Creates an LLMGraphTransformer instance using the LLM.

Return type:

Any
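
A usage sketch; convert_to_graph_documents is the LangChain LLMGraphTransformer entry point at the time of writing, so confirm it against your installed version:

from langchain_core.documents import Document

transformer = config.create_graph_transformer()
docs = [Document(page_content="Marie Curie won two Nobel Prizes.")]
graph_docs = transformer.convert_to_graph_documents(docs)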

estimate_cost_from_messages(messages, tools=None, include_output_estimate=True, estimated_output_tokens=None)[source]

Estimate the cost of processing messages with this model.

This method combines token counting with pricing metadata to estimate costs before making API calls, helping with budget management and cost optimization.

Args:

messages: Sequence of chat messages

tools: Optional sequence of function schemas for tool calls

include_output_estimate: Whether to include estimated output costs

estimated_output_tokens: Manual override for output token estimation

Returns:

Dictionary with cost breakdown:

{
    "input_tokens": int,
    "input_cost": float,
    "estimated_output_tokens": int,
    "estimated_output_cost": float,
    "total_estimated_cost": float,
}

Examples

>>> from langchain_core.messages import HumanMessage
>>>
>>> config = OpenAILLMConfig(model="gpt-4")
>>> messages = [HumanMessage(content="Write a short story about AI.")]
>>>
>>> cost_estimate = config.estimate_cost_from_messages(messages)
>>> print(f"Estimated total cost: ${cost_estimate['total_estimated_cost']:.6f}")

Return type:

dict[str, float]
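
A simple budget guard using the documented total_estimated_cost key (the threshold is illustrative):

BUDGET_USD = 0.01
estimate = config.estimate_cost_from_messages(messages)
if estimate["total_estimated_cost"] > BUDGET_USD:
    raise RuntimeError(
        f"Estimated ${estimate['total_estimated_cost']:.6f} exceeds budget"
    )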

estimate_cost_from_text(text, include_output_estimate=True, estimated_output_tokens=None)[source]

Estimate the cost of processing a single text string.

Args:

text: Raw text string to estimate cost for

include_output_estimate: Whether to include estimated output costs

estimated_output_tokens: Manual override for output token estimation

Returns:

Dictionary with cost breakdown (same format as estimate_cost_from_messages)

Examples

>>> config = AnthropicLLMConfig(model="claude-3-opus-20240229")
>>> text = "Explain quantum computing in simple terms."
>>>
>>> cost_estimate = config.estimate_cost_from_text(text)
>>> print(f"Input cost: ${cost_estimate['input_cost']:.6f}")

Parameters:

  • text (str)

  • include_output_estimate (bool)

  • estimated_output_tokens (int | None)

Return type:

dict[str, float]

format_metadata_for_display()[source]

Format metadata for structured display or comparison.

Returns:

Dictionary with formatted metadata

Return type:

dict[str, Any]

get_num_tokens(text)[source]

Count tokens in a single text string.

This method instantiates the model temporarily to count tokens, preserving the serializability of the configuration object.

Args:

text: Raw text string to count tokens for

Returns:

Integer count of tokens in the text

Examples

>>> config = OpenAILLMConfig(model="gpt-3.5-turbo")
>>> text = "Hello, world!"
>>> token_count = config.get_num_tokens(text)
>>> print(f"Tokens in text: {token_count}")

Parameters:

text (str)

Return type:

int

get_num_tokens_from_messages(messages, tools=None)[source]

Count tokens in a sequence of messages.

This method instantiates the model temporarily to count tokens, preserving the serializability of the configuration object.

Args:

messages: Sequence of chat messages (HumanMessage, AIMessage, etc.)

tools: Optional sequence of function schemas for tool calls

Returns:

Integer count of tokens across all messages

Examples

>>> from langchain_core.messages import HumanMessage, AIMessage
>>>
>>> config = OpenAILLMConfig(model="gpt-3.5-turbo")
>>> messages = [
...     HumanMessage(content="Translate 'Hello' to French."),
...     AIMessage(content="Bonjour"),
... ]
>>>
>>> token_count = config.get_num_tokens_from_messages(messages)
>>> print(f"Total tokens: {token_count}")

Return type:

int

abstractmethod instantiate(**kwargs)[source]

Abstract method to instantiate the configured LLM.

This method must be implemented by all provider-specific subclasses to handle the actual creation of the LLM instance.

Parameters:

**kwargs – Additional parameters to pass to the LLM constructor

Returns:

Instantiated LLM object ready for use

Raises:

NotImplementedError – If not overridden by a subclass

Return type:

langchain.chat_models.base.BaseChatModel
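
A hypothetical subclass sketch showing the expected shape of an implementation. MyProviderLLMConfig and my_provider_sdk are illustrative names, not real modules; the fields used are the documented LLMConfig attributes:

from typing import Any


class MyProviderLLMConfig(LLMConfig):
    def instantiate(self, **kwargs) -> Any:
        # Hypothetical SDK import; a real provider would import its
        # LangChain chat model class here.
        from my_provider_sdk import ChatMyProvider

        return ChatMyProvider(
            model=self.model,
            api_key=self.api_key.get_secret_value() if self.api_key else None,
            **{**self.extra_params, **kwargs},
        )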

load_model_metadata()[source]

Load and validate model metadata after initialization.

Return type:

Self

set_default_name()[source]

Set a default name for the model if not provided.

Return type:

Self

model_config

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class haive.core.models.llm.base.LlamaCppLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Llama.cpp local models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Llama.cpp chat model.

Return type:

Any

class haive.core.models.llm.base.MistralLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Mistral models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

classmethod get_models()[source]

Get all available Mistral models.

Return type:

list[str]

instantiate(**kwargs)[source]

Instantiate the Mistral chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.MosaicMLLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for MosaicML models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the MosaicML chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.NLPCloudLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for NLP Cloud models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the NLP Cloud chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.NVIDIALLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for NVIDIA AI Endpoints models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the NVIDIA chat model.

Return type:

Any

class haive.core.models.llm.base.OllamaLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Ollama local models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Ollama chat model.

Return type:

Any
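
A local-model sketch; it assumes an Ollama server is running locally and the model (name illustrative) has already been pulled:

config = OllamaLLMConfig(model="llama3")  # no API key needed for local models
llm = config.instantiate()
print(llm.invoke("Say hello in one word."))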

class haive.core.models.llm.base.OpenAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for OpenAI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

classmethod get_models()[source]

Get all available OpenAI models.

Return type:

list[str]

instantiate(**kwargs)[source]

Instantiate the OpenAI chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr
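
Discovering the available model identifiers and instantiating one, a minimal sketch:

models = OpenAILLMConfig.get_models()
print(models[:5])  # inspect the available identifiers

config = OpenAILLMConfig(model="gpt-4", temperature=0.7)
llm = config.instantiate()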

class haive.core.models.llm.base.OpenLMLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for OpenLM models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the OpenLM chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.PerplexityLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Perplexity AI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Perplexity AI chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.PetalsLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Petals distributed models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Petals chat model.

Return type:

Any

class haive.core.models.llm.base.ReplicateLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Replicate models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Replicate chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.TogetherAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Together AI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Together AI chat model.

Return type:

Any

classmethod load_api_key(v)[source]

Load the API key from the environment if not provided.

Parameters:

v (pydantic.SecretStr)

Return type:

pydantic.SecretStr

class haive.core.models.llm.base.UpstageLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Upstage models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Upstage chat model.

Return type:

Any

class haive.core.models.llm.base.VertexAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for Google Vertex AI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the Google Vertex AI chat model.

Return type:

Any

classmethod load_project(v)[source]

Load the project from the environment if not provided.

Parameters:

v (str)

Return type:

str

class haive.core.models.llm.base.WatsonxLLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for IBM watsonx models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the IBM watsonx chat model.

Return type:

Any

class haive.core.models.llm.base.XAILLMConfig(/, **data)[source]

Bases: LLMConfig

Configuration for xAI models.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

instantiate(**kwargs)[source]

Instantiate the xAI chat model.

Return type:

Any