haive.core.models.llm.providers.together¶

Together AI Provider Module.

This module implements the Together AI language model provider for the Haive framework, supporting a wide variety of open-source models through Together’s inference platform.

The provider handles API key management, model configuration, and safe imports of the langchain-together package dependencies.
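If no api_key is passed explicitly, the key is conventionally resolved from the TOGETHER_API_KEY environment variable (an assumption here, based on the langchain-together convention):

import os

# Assumed fallback: the key is read from the environment when not passed explicitly
os.environ["TOGETHER_API_KEY"] = "your-api-key"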

Examples

Basic usage:

from haive.core.models.llm.providers.together import TogetherProvider

provider = TogetherProvider(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    temperature=0.7,
    max_tokens=1000
)
llm = provider.instantiate()

With custom parameters:

provider = TogetherProvider(
    model="meta-llama/Llama-2-70b-chat-hf",
    temperature=0.1,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.1
)


Classes¶

TogetherProvider

Together AI language model provider configuration.

Module Contents¶

class haive.core.models.llm.providers.together.TogetherProvider(/, **data)¶

Bases: haive.core.models.llm.providers.base.BaseLLMProvider

Together AI language model provider configuration.

This provider supports a wide variety of open-source models through Together’s inference platform, including Llama, Mixtral, CodeLlama, and many others.

Parameters:
  • data (Any)

  • requests_per_second (float | None)

  • tokens_per_second (int | None)

  • tokens_per_minute (int | None)

  • max_retries (int)

  • retry_delay (float)

  • check_every_n_seconds (float | None)

  • burst_size (int | None)

  • provider (LLMProvider)

  • model (str | None)

  • name (str | None)

  • api_key (SecretStr)

  • cache_enabled (bool)

  • cache_ttl (int | None)

  • extra_params (dict[str, Any] | None)

  • debug (bool)

  • temperature (float | None)

  • max_tokens (int | None)

  • top_p (float | None)

  • top_k (int | None)

  • repetition_penalty (float | None)

  • stop (list[str] | None)
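A minimal sketch combining several of the rate-limiting and caching fields above (field names are taken from this list; the exact throttling and caching semantics are assumptions):

provider = TogetherProvider(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    requests_per_second=2.0,  # assumed: throttle outgoing requests
    max_retries=3,            # retry transient API failures
    retry_delay=1.0,          # assumed: seconds to wait between retries
    cache_enabled=True,       # cache responses
    cache_ttl=300             # assumed: cache entries expire after 300 seconds
)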

provider¶

Always LLMProvider.TOGETHER_AI

Type: LLMProvider

model¶

The Together model to use (full model path)

Type: str

temperature¶

Sampling temperature (0.0-1.0)

Type: float

max_tokens¶

Maximum tokens in response

Type: int

top_p¶

Nucleus sampling parameter

Type: float

top_k¶

Top-k sampling parameter

Type: int

repetition_penalty¶

Repetition penalty parameter

Type: float

stop¶

Stop sequences for generation

Type: list

Examples

Mixtral model for reasoning:

provider = TogetherProvider(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    temperature=0.3,
    max_tokens=2000
)

Llama 2 for conversation:

provider = TogetherProvider(
    model="meta-llama/Llama-2-70b-chat-hf",
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1
)
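Stop sequences to end generation at a turn boundary (the sequences shown are illustrative):

provider = TogetherProvider(
    model="meta-llama/Llama-2-70b-chat-hf",
    temperature=0.7,
    stop=["</s>", "User:"]  # generation halts when either sequence appears
)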

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
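Because validation happens at construction time, invalid field values surface as a pydantic ValidationError (a minimal sketch; the bad value is illustrative):

from pydantic import ValidationError

try:
    provider = TogetherProvider(model=123)  # invalid: model must be str | None
except ValidationError as exc:
    print(exc)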

classmethod get_models()¶

Get popular Together models.

Return type: list[str]
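Usage sketch (the returned model paths are illustrative):

models = TogetherProvider.get_models()
print(models[:3])  # e.g. ["mistralai/Mixtral-8x7B-Instruct-v0.1", ...]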

max_tokens: int | None = None¶

Maximum number of tokens to generate in the response.
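Once instantiated, the provider yields a model usable through the standard LangChain chat interface (a sketch, assuming instantiate() returns a LangChain chat model as the Basic usage example suggests):

provider = TogetherProvider(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    max_tokens=500
)
llm = provider.instantiate()
response = llm.invoke("Summarize what Together AI offers in one sentence.")
print(response.content)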