haive.core.models.llm.providers.groq

Groq Provider Module.

This module implements the Groq language model provider for the Haive framework, supporting ultra-fast inference with Groq’s Language Processing Units (LPUs).

The provider handles API key management, model configuration, and safe imports of the langchain-groq package dependencies for high-speed LLM inference.

Examples

Basic usage:

from haive.core.models.llm.providers.groq import GroqProvider

provider = GroqProvider(
    model="mixtral-8x7b-32768",
    temperature=0.7,
    max_tokens=1000
)
llm = provider.instantiate()
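
The returned object is a standard LangChain chat model (assuming instantiate() wraps langchain-groq's ChatGroq), so the usual invoke API applies:

response = llm.invoke("Explain LPUs in one sentence.")
print(response.content)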

With streaming for real-time responses:

provider = GroqProvider(
    model="llama2-70b-4096",
    stream=True,
    temperature=0.1
)
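
A streaming provider can then be consumed incrementally through LangChain's stream interface (again assuming a LangChain chat model is returned):

llm = provider.instantiate()
for chunk in llm.stream("Write a haiku about speed."):
    print(chunk.content, end="", flush=True)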

Classes

GroqProvider

Groq language model provider configuration.

Module Contents

class haive.core.models.llm.providers.groq.GroqProvider(/, **data)[source]

Bases: haive.core.models.llm.providers.base.BaseLLMProvider

Groq language model provider configuration.

This provider supports Groq’s high-speed LLM inference including Mixtral, Llama 2, and other optimized models running on Language Processing Units.

Parameters:
  • data (Any)

  • requests_per_second (float | None)

  • tokens_per_second (int | None)

  • tokens_per_minute (int | None)

  • max_retries (int)

  • retry_delay (float)

  • check_every_n_seconds (float | None)

  • burst_size (int | None)

  • provider (LLMProvider)

  • model (str | None)

  • name (str | None)

  • api_key (SecretStr)

  • cache_enabled (bool)

  • cache_ttl (int | None)

  • extra_params (dict[str, Any] | None)

  • debug (bool)

  • temperature (float | None)

  • max_tokens (int | None)

  • top_p (float | None)

  • stream (bool)

  • stop (list[str] | None)
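
The rate-limiting and retry fields inherited from BaseLLMProvider can be set alongside the model options. A minimal sketch with illustrative values (the exact throttling semantics depend on the base class):

provider = GroqProvider(
    model="mixtral-8x7b-32768",
    requests_per_second=2.0,  # illustrative throttle value
    max_retries=3,            # retry failed requests up to 3 times
    retry_delay=1.0,          # seconds between retries
)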

provider

Always LLMProvider.GROQ

Type: LLMProvider

model

The Groq model to use

Type: str

temperature

Sampling temperature (0.0-2.0)

Type: float

max_tokens

Maximum tokens in response

Type: int

top_p

Nucleus sampling parameter

Type: float

stream

Enable streaming responses

Type: bool

stop

Stop sequences for generation

Type: list[str]

Examples

High-speed inference:

provider = GroqProvider(
    model="mixtral-8x7b-32768",
    temperature=0.7,
    max_tokens=2000
)

Streaming responses:

provider = GroqProvider(
    model="llama2-70b-4096",
    stream=True,
    temperature=0.1
)
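
Constraining generation with stop sequences (values are illustrative):

provider = GroqProvider(
    model="mixtral-8x7b-32768",
    stop=["\n\n", "Observation:"],
    max_tokens=500,
)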

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
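
Because this is a Pydantic model, invalid field values fail at construction time. A minimal sketch, assuming standard Pydantic coercion of the typed fields above:

from pydantic import ValidationError

try:
    GroqProvider(model="mixtral-8x7b-32768", temperature="hot")  # not coercible to float
except ValidationError as exc:
    print(exc)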

classmethod get_models()[source]

Get available Groq models.

Return type: list[str]
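
Model discovery can be done without instantiating a provider; the exact identifiers returned depend on the installed package version:

available = GroqProvider.get_models()
print(available)  # e.g. ["mixtral-8x7b-32768", "llama2-70b-4096", ...]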

max_tokens: int | None = None

Maximum tokens to generate in the response.