haive.core.models.llm.factory¶
LLM Factory Module for Haive Framework.
This module provides a universal factory pattern for creating Language Model instances from various providers with a consistent interface. It supports dynamic provider detection, optional dependency handling, and rate limiting capabilities.
The factory pattern allows for clean instantiation of LLMs from 20+ different providers including OpenAI, Anthropic, Google, AWS, and many others, with automatic configuration and error handling.
Examples
Basic usage with provider enum:
from haive.core.models.llm.factory import create_llm
from haive.core.models.llm.provider_types import LLMProvider
# Create an OpenAI model
llm = create_llm(
    provider=LLMProvider.OPENAI,
    model="gpt-4",
    api_key="your-api-key"
)
Using string provider name:
# Provider can also be specified as string
llm = create_llm(
    provider="anthropic",
    model="claude-3-opus-20240229",
    requests_per_second=10
)
With rate limiting:
# Add rate limiting to any provider
llm = create_llm(
    provider=LLMProvider.GROQ,
    model="llama3-70b-8192",
    requests_per_second=5,
    tokens_per_minute=10000
)
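Because provider dependencies are optional, a missing package surfaces as an ImportError at creation time. The following is a minimal sketch of falling back across providers; the preference order and model names are illustrative, not part of the API:
from haive.core.models.llm.factory import create_llm

# Hypothetical preference order; adjust to the providers you actually use.
candidates = [
    ("openai", "gpt-4"),
    ("anthropic", "claude-3-opus-20240229"),
    ("ollama", "llama3"),
]

llm = None
for provider, model in candidates:
    try:
        llm = create_llm(provider, model)
        break
    except ImportError:
        # This provider's optional dependency is not installed; try the next.
        continue

if llm is None:
    raise RuntimeError("No LLM provider available")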
- Module Structure:
LLMFactory: Main factory class for creating LLM instances
create_llm(): Convenience function for creating LLMs
get_available_providers(): List all available providers
get_provider_models(): Get available models for a provider
Classes¶
LLMFactory: Factory class for creating Language Model instances.
Functions¶
create_llm(): Create an LLM instance using the global factory.
get_available_providers(): Get list of all available LLM providers.
get_provider_models(): Get available models for a specific provider.
Module Contents¶
- class haive.core.models.llm.factory.LLMFactory[source]¶
Factory class for creating Language Model instances.
This class provides a centralized way to create LLM instances from various providers with consistent configuration and error handling. It supports dynamic import of provider-specific dependencies and graceful fallback when dependencies are not installed.
The factory maintains a registry of provider configurations and handles the complexity of instantiating models with provider-specific parameters while presenting a unified interface.
- _provider_configs¶
Internal registry mapping providers to config classes
- _provider_imports¶
Internal registry of required imports per provider
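These registries are private implementation details, but conceptually each maps a provider to its configuration class or its required package. A purely hypothetical sketch of their shape (only the openai entries are confirmed by the get_provider_info() example below; the anthropic entries are assumptions for illustration):
# Hypothetical shape of the internal registries (not the actual contents).
_provider_configs = {
    "openai": "OpenAILLMConfig",        # provider -> config class name
    "anthropic": "AnthropicLLMConfig",  # assumed name, for illustration
}
_provider_imports = {
    "openai": "langchain-openai",       # provider -> required package
    "anthropic": "langchain-anthropic", # assumed name, for illustration
}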
Examples
Creating models from different providers:
factory = LLMFactory()

# OpenAI
openai_llm = factory.create(
    provider=LLMProvider.OPENAI,
    model="gpt-4",
    temperature=0.7
)

# Anthropic with rate limiting
anthropic_llm = factory.create(
    provider=LLMProvider.ANTHROPIC,
    model="claude-3-opus-20240229",
    requests_per_second=10
)

# Local Ollama model
ollama_llm = factory.create(
    provider=LLMProvider.OLLAMA,
    model="llama3",
    base_url="http://localhost:11434"
)
Initialize the LLM Factory.
- create(provider, model=None, **kwargs)[source]¶
Create an LLM instance for the specified provider.
This method creates and configures an LLM instance based on the provider and parameters. It handles provider-specific configuration, optional imports, and rate limiting if specified.
- Parameters:
provider (haive.core.models.llm.provider_types.LLMProvider | str) – The LLM provider (enum or string)
model (str | None) – The model name/ID (provider-specific)
**kwargs – Additional configuration parameters including:
- api_key: API key for the provider
- temperature: Sampling temperature
- max_tokens: Maximum tokens to generate
- requests_per_second: Rate limiting parameter
- tokens_per_minute: Rate limiting parameter
- Any provider-specific parameters
- Returns:
Configured LLM instance ready for use
- Raises:
ValueError – If provider is not supported or required config missing
ImportError – If provider dependencies are not installed
RuntimeError – If LLM instantiation fails
- Return type:
Any
Examples
Basic creation:
llm = factory.create( provider="openai", model="gpt-4", temperature=0.7 )
With rate limiting:
llm = factory.create(
    provider=LLMProvider.ANTHROPIC,
    model="claude-3-sonnet-20240229",
    api_key="your-key",
    requests_per_second=10,
    tokens_per_minute=100000
)
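Because create() distinguishes its failure modes through the exceptions listed above, callers can separate a missing dependency from a bad configuration; a minimal sketch:
factory = LLMFactory()
try:
    llm = factory.create(provider="openai", model="gpt-4")
except ImportError:
    # Optional provider package (e.g. langchain-openai) is not installed.
    llm = None
except ValueError as exc:
    # Unsupported provider or missing required configuration.
    raise SystemExit(f"Bad LLM configuration: {exc}")
except RuntimeError as exc:
    # Dependencies were present but instantiation still failed.
    raise SystemExit(f"LLM instantiation failed: {exc}")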
- get_available_providers()[source]¶
Get list of all available LLM providers.
Examples
List available providers:
factory = LLMFactory()
providers = factory.get_available_providers()
print(providers)
# ['openai', 'anthropic', 'azure', ...]
- get_provider_info(provider)[source]¶
Get information about a specific provider.
- Parameters:
provider (haive.core.models.llm.provider_types.LLMProvider | str) – The provider to get info for
- Returns:
Dictionary containing provider information including:
name: Provider name
config_class: Configuration class name
import_required: Required import package
available: Whether dependencies are installed
- Return type:
dict
Examples
Get provider information:
info = factory.get_provider_info("openai")
print(info)
# {
#     'name': 'openai',
#     'config_class': 'OpenAILLMConfig',
#     'import_required': 'langchain-openai',
#     'available': True
# }
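The available flag pairs naturally with get_available_providers() to filter out providers whose dependencies are missing; a small sketch built only on the two methods shown above:
factory = LLMFactory()
usable = [
    name
    for name in factory.get_available_providers()
    if factory.get_provider_info(name)["available"]
]
print(f"Providers ready to use: {usable}")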
- haive.core.models.llm.factory.create_llm(provider, model=None, **kwargs)[source]¶
Create an LLM instance using the global factory.
This is a convenience function that uses a global LLMFactory instance to create LLM instances. It provides a simpler interface for common use cases.
- Parameters:
provider (LLMProvider | str) – The LLM provider (enum or string)
model (str | None) – The model name/ID (provider-specific)
**kwargs – Additional configuration parameters
- Returns:
Configured LLM instance
- Raises:
ValueError – If provider is not supported
ImportError – If provider dependencies are not installed
RuntimeError – If LLM instantiation fails
- Return type:
Any
Examples
Create OpenAI model:
llm = create_llm("openai", "gpt-4", temperature=0.7)
Create Anthropic model with rate limiting:
llm = create_llm(
    provider=LLMProvider.ANTHROPIC,
    model="claude-3-opus-20240229",
    requests_per_second=5
)
Create local Ollama model:
llm = create_llm("ollama", "llama3", base_url="http://localhost:11434")
- haive.core.models.llm.factory.get_available_providers()[source]¶
Get list of all available LLM providers.
Examples
List providers:
providers = get_available_providers()
print(f"Available providers: {', '.join(providers)}")
- haive.core.models.llm.factory.get_provider_models(provider)[source]¶
Get available models for a specific provider.
This function attempts to retrieve the list of available models from the provider’s API. Not all providers support this functionality.
- Parameters:
provider (LLMProvider | str) – The provider to get models for
- Returns:
List of available model names
- Raises:
ValueError – If provider is not supported
NotImplementedError – If provider doesn’t support listing models
- Return type:
list[str]
Examples
Get OpenAI models:
models = get_provider_models("openai")
print(f"OpenAI models: {models}")
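Because some providers raise NotImplementedError here, a caller may want a fallback; a minimal sketch with a hypothetical default model name:
from haive.core.models.llm.factory import get_provider_models

try:
    models = get_provider_models("ollama")
except NotImplementedError:
    # This provider does not expose a model-listing API; fall back to a
    # hypothetical known-good default.
    models = ["llama3"]
print(models)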