haive.core.models.llm.providers.xai¶
xAI Provider Module.
This module implements the xAI language model provider for the Haive framework, supporting the Grok family of models developed by xAI.
The provider handles API key management, model configuration, and safe imports of the langchain-xai package dependencies.
Examples
Basic usage:

    from haive.core.models.llm.providers.xai import XAIProvider

    provider = XAIProvider(
        model="grok-beta",
        temperature=0.7,
        max_tokens=1000,
    )
    llm = provider.instantiate()
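The module description mentions API key management, and the `api_key` field below is a `SecretStr`. A minimal stdlib-only sketch of the kind of key resolution and masking such a provider is likely to perform; the `XAI_API_KEY` variable name and the `resolve_api_key`/`mask` helpers are illustrative assumptions, not part of this API:

```python
import os

def resolve_api_key(explicit_key=None, env_var="XAI_API_KEY"):
    """Prefer an explicitly passed key, then fall back to the environment.

    Raising early on a missing key mirrors the fail-fast behavior a
    provider's key management would typically implement.
    """
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f"No API key found: pass one or set {env_var}")
    return key

def mask(key):
    """SecretStr-style masking so the key never appears in logs or reprs."""
    return "**********" if key else ""

os.environ["XAI_API_KEY"] = "xai-demo-key"
masked = mask(resolve_api_key())
```

Masking on display (rather than storing the key in plain text) is the main reason `api_key` is typed as `SecretStr` instead of `str`.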
With custom parameters:

    provider = XAIProvider(
        model="grok-1",
        temperature=0.1,
        top_p=0.9,
        stream=True,
    )
Classes¶
XAIProvider: xAI language model provider configuration.
Module Contents¶
- class haive.core.models.llm.providers.xai.XAIProvider(/, **data)[source]¶
Bases: haive.core.models.llm.providers.base.BaseLLMProvider
xAI language model provider configuration.
This provider supports xAI’s Grok family of models known for their real-time information access and conversational capabilities.
- Parameters:
data (Any)
requests_per_second (float | None)
tokens_per_second (int | None)
tokens_per_minute (int | None)
max_retries (int)
retry_delay (float)
check_every_n_seconds (float | None)
burst_size (int | None)
provider (LLMProvider)
model (str | None)
name (str | None)
api_key (SecretStr)
cache_enabled (bool)
cache_ttl (int | None)
debug (bool)
temperature (float | None)
max_tokens (int | None)
top_p (float | None)
stream (bool)
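Several of the parameters above (`requests_per_second`, `burst_size`, `max_retries`) describe rate limiting. A token-bucket sketch of how `requests_per_second` and `burst_size` style parameters typically interact; this is an illustration of the general technique, not the framework's actual limiter:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to burst_size,
    then refills at requests_per_second."""

    def __init__(self, requests_per_second: float, burst_size: int):
        self.rate = requests_per_second
        self.capacity = burst_size
        self.tokens = float(burst_size)  # start full, so bursts succeed
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(requests_per_second=2.0, burst_size=5)
results = [bucket.try_acquire() for _ in range(6)]
# The first burst_size calls succeed immediately; the next is throttled
# until enough time passes for the bucket to refill.
```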
- provider¶
Always LLMProvider.XAI
- Type:
LLMProvider
Examples
Grok Beta for general conversation:

    provider = XAIProvider(
        model="grok-beta",
        temperature=0.7,
        max_tokens=2000,
    )

Grok with streaming:

    provider = XAIProvider(
        model="grok-1",
        temperature=0.1,
        stream=True,
        top_p=0.9,
    )
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
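Because the provider is a Pydantic model, invalid keyword data fails at construction time rather than at the first API call. A stdlib-only sketch of that fail-fast validation pattern; the specific range checks (`temperature` in [0, 2], positive `max_tokens`) are illustrative assumptions, not the provider's actual validators:

```python
def validate_config(temperature=None, max_tokens=None):
    """Reject out-of-range values up front, the way a Pydantic model does."""
    errors = []
    if temperature is not None and not (0.0 <= temperature <= 2.0):
        errors.append("temperature must be between 0.0 and 2.0")
    if max_tokens is not None and max_tokens <= 0:
        errors.append("max_tokens must be positive")
    if errors:
        # Pydantic similarly collects all field errors into one exception.
        raise ValueError("; ".join(errors))
    return {"temperature": temperature, "max_tokens": max_tokens}

try:
    validate_config(temperature=5.0)
except ValueError as exc:
    message = str(exc)
```

Collecting all field errors before raising (instead of stopping at the first) is what makes Pydantic's `ValidationError` reports useful for debugging misconfigured providers.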