haive.core.models.llm.providers.ai21

AI21 Labs Provider Module.

This module implements the AI21 Labs language model provider for the Haive framework, supporting Jurassic models known for their strong performance on various NLP tasks.

The provider handles API key management, model configuration, and safe imports of the langchain-ai21 package dependencies.

Examples

Basic usage:

from haive.core.models.llm.providers.ai21 import AI21Provider

provider = AI21Provider(
    model="j2-ultra",
    temperature=0.7,
    max_tokens=1000
)
llm = provider.instantiate()
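
The instantiated object can then be called directly. A minimal sketch, assuming instantiate() returns a LangChain-compatible model that exposes the standard invoke() method:

# Send a single prompt and print the response (invoke() is LangChain's standard Runnable entry point).
response = llm.invoke("Write a one-sentence summary of the Jurassic model family.")
print(response)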

With custom parameters:

provider = AI21Provider(
    model="j2-grande-instruct",
    temperature=0.1,
    top_p=0.9,
    frequency_penalty=0.2
)

Classes

AI21Provider

AI21 Labs language model provider configuration.

Module Contents

class haive.core.models.llm.providers.ai21.AI21Provider(/, **data)[source]

Bases: haive.core.models.llm.providers.base.BaseLLMProvider

AI21 Labs language model provider configuration.

This provider supports AI21's Jurassic family of models including J2-Ultra, J2-Mid, and instruction-tuned variants optimized for various tasks.

Parameters:
  • data (Any)

  • requests_per_second (float | None)

  • tokens_per_second (int | None)

  • tokens_per_minute (int | None)

  • max_retries (int)

  • retry_delay (float)

  • check_every_n_seconds (float | None)

  • burst_size (int | None)

  • provider (LLMProvider)

  • model (str | None)

  • name (str | None)

  • api_key (SecretStr)

  • cache_enabled (bool)

  • cache_ttl (int | None)

  • extra_params (dict[str, Any] | None)

  • debug (bool)

  • temperature (float | None)

  • max_tokens (int | None)

  • top_p (float | None)

  • top_k_return (int | None)

  • frequency_penalty (dict[str, Any] | None)

  • presence_penalty (dict[str, Any] | None)

  • count_penalty (dict[str, Any] | None)

provider

Always LLMProvider.AI21

Type: LLMProvider

model

The AI21 model to use

Type: str

temperature

Sampling temperature (0.0-2.0)

Type: float

max_tokens

Maximum tokens in response

Type: int

top_p

Nucleus sampling parameter

Type: float

top_k_return

Number of top tokens to consider

Type: int

frequency_penalty

Frequency penalty settings

Type: dict

presence_penalty

Presence penalty settings

Type: dict

count_penalty

Count penalty settings

Type: dict

Examples

Ultra model for complex tasks:

provider = AI21Provider(
    model="j2-ultra",
    temperature=0.7,
    max_tokens=2000
)

Instruct model with penalties:

provider = AI21Provider(
    model="j2-grande-instruct",
    temperature=0.1,
    frequency_penalty={"scale": 0.2, "apply_to_whitespaces": False}
)
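
Rate limiting and caching can also be configured through the base-provider parameters listed above. A hedged sketch, assuming these fields are accepted as constructor keyword arguments (their exact semantics are defined by BaseLLMProvider):

from pydantic import SecretStr

from haive.core.models.llm.providers.ai21 import AI21Provider

provider = AI21Provider(
    model="j2-ultra",
    api_key=SecretStr("your-ai21-api-key"),  # placeholder key
    requests_per_second=2.0,   # throttle outgoing requests
    max_retries=3,             # retry transient API failures
    retry_delay=1.0,           # seconds between retries
    cache_enabled=True,        # enable response caching
    cache_ttl=300,             # cached entries expire after 300 seconds
)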

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
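
Because the provider is a Pydantic model, invalid constructor arguments surface as a ValidationError. A minimal sketch (the exact error details depend on the field validators defined in BaseLLMProvider):

from pydantic import ValidationError

try:
    AI21Provider(model="j2-ultra", temperature="not-a-number")  # temperature must coerce to float
except ValidationError as exc:
    print(exc)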

classmethod get_models()[source]

Get available AI21 models.

Return type: list[str]
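
A usage sketch (the exact model identifiers returned are not listed here):

models = AI21Provider.get_models()
print(models)                             # e.g. ["j2-ultra", "j2-mid", ...] -- illustrative only
provider = AI21Provider(model=models[0])  # configure the provider with the first available model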

max_tokens: int | None = None

Maximum total tokens for this model.