haive.core.models.llm.providers.google¶
Google AI Providers Module.
This module implements both Google Generative AI (Gemini) and Vertex AI providers for the Haive framework. It supports Gemini models through the standard API and enterprise Vertex AI deployments.
The providers handle API key management, model configuration, and safe imports of the langchain-google packages.
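Assuming the providers wrap the langchain-google-genai and langchain-google-vertexai packages (the exact dependency names may differ), both integrations can be installed with:
pip install langchain-google-genai langchain-google-vertexai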
Examples
Using Gemini:
from haive.core.models.llm.providers.google import GeminiProvider
provider = GeminiProvider(
    model="gemini-1.5-pro",
    temperature=0.7
)
llm = provider.instantiate()
Using Vertex AI:
from haive.core.models.llm.providers.google import VertexAIProvider
provider = VertexAIProvider(
    model="gemini-1.5-pro",
    project="my-project",
    location="us-central1"
)
llm = provider.instantiate()
Classes¶

GeminiProvider
Google Gemini language model provider configuration.

VertexAIProvider
Google Vertex AI language model provider configuration.
Module Contents¶
- class haive.core.models.llm.providers.google.GeminiProvider(/, **data)[source]¶
Bases:
haive.core.models.llm.providers.base.BaseLLMProvider
Google Gemini language model provider configuration.
This provider supports Google’s Gemini models through the Generative AI API. It’s suitable for general use with API key authentication.
- Parameters:
data (Any)
requests_per_second (float | None)
tokens_per_second (int | None)
tokens_per_minute (int | None)
max_retries (int)
retry_delay (float)
check_every_n_seconds (float | None)
burst_size (int | None)
provider (LLMProvider)
model (str | None)
name (str | None)
api_key (SecretStr)
cache_enabled (bool)
cache_ttl (int | None)
debug (bool)
temperature (float | None)
max_output_tokens (int | None)
top_p (float | None)
top_k (int | None)
n (int | None)
- provider¶
Always LLMProvider.GEMINI
- model¶
Model name (default: “gemini-1.5-pro”)
- temperature¶
Sampling temperature (0-1)
- max_output_tokens¶
Maximum tokens to generate
- top_p¶
Nucleus sampling parameter
- top_k¶
Top-k sampling parameter
- n¶
Number of responses to generate
- Environment Variables:
GOOGLE_API_KEY: API key for authentication
GEMINI_API_KEY: Alternative API key environment variable
An environment-based example appears under Examples below.
- Model Variants:
gemini-1.5-pro: Most capable, 1M token context
gemini-1.5-flash: Faster, more efficient
gemini-pro: Previous generation
gemini-pro-vision: Multimodal support
Examples
Basic usage:
provider = GeminiProvider(
    model="gemini-1.5-pro",
    temperature=0.7,
    max_output_tokens=2048
)
llm = provider.instantiate()
With advanced sampling:
provider = GeminiProvider(
    model="gemini-1.5-flash",
    temperature=0.9,
    top_p=0.95,
    top_k=40
)
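Relying on environment variables for authentication (a minimal sketch; per the environment variables above, the key is read from GOOGLE_API_KEY or GEMINI_API_KEY rather than passed explicitly):
import os

# Normally exported in your shell rather than set in code
os.environ["GOOGLE_API_KEY"] = "your-api-key"

provider = GeminiProvider(model="gemini-1.5-flash")
llm = provider.instantiate()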
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
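Since the provider is a Pydantic model, construction fails fast on invalid input. A minimal sketch assuming standard Pydantic validation behavior:
from pydantic import ValidationError

try:
    provider = GeminiProvider(temperature="hot")  # not coercible to float
except ValidationError as exc:
    print(exc)  # reports which field failed validation and why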
- class haive.core.models.llm.providers.google.VertexAIProvider(/, **data)[source]¶
Bases:
haive.core.models.llm.providers.base.BaseLLMProvider
Google Vertex AI language model provider configuration.
This provider supports Google’s models through Vertex AI, suitable for enterprise deployments with project-based authentication and regional control.
- Parameters:
data (Any)
requests_per_second (float | None)
tokens_per_second (int | None)
tokens_per_minute (int | None)
max_retries (int)
retry_delay (float)
check_every_n_seconds (float | None)
burst_size (int | None)
provider (LLMProvider)
model (str | None)
name (str | None)
api_key (SecretStr)
cache_enabled (bool)
cache_ttl (int | None)
debug (bool)
project (str | None)
location (str)
temperature (float | None)
max_output_tokens (int | None)
top_p (float | None)
top_k (int | None)
- provider¶
Always LLMProvider.VERTEX_AI
- model¶
Model name (default: “gemini-1.5-pro”)
- project¶
Google Cloud project ID
- location¶
Google Cloud region (default: “us-central1”)
- temperature¶
Sampling temperature (0-1)
- max_output_tokens¶
Maximum tokens to generate
- top_p¶
Nucleus sampling parameter
- top_k¶
Top-k sampling parameter
- Environment Variables:
GOOGLE_CLOUD_PROJECT: Default project ID
GOOGLE_APPLICATION_CREDENTIALS: Path to service account JSON
- Authentication:
Vertex AI uses Google Cloud authentication. You can authenticate by:
1. Setting GOOGLE_APPLICATION_CREDENTIALS to a service account key path
2. Using gcloud auth application-default login
3. Running on Google Cloud with appropriate IAM roles
See the Application Default Credentials example under Examples below.
Examples
Basic usage:
provider = VertexAIProvider(
    model="gemini-1.5-pro",
    project="my-project",
    location="us-central1"
)
llm = provider.instantiate()
With custom parameters:
provider = VertexAIProvider(
    model="gemini-1.5-flash",
    project="my-project",
    location="europe-west1",
    temperature=0.5,
    max_output_tokens=1024
)
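Using Application Default Credentials (a minimal sketch; assumes one of the authentication methods above has already been completed, so no key is passed in code):
# Prior setup, outside Python:
#   gcloud auth application-default login
# or:
#   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
provider = VertexAIProvider(
    model="gemini-1.5-pro",
    project="my-project",  # per the environment variables above, GOOGLE_CLOUD_PROJECT supplies a default
    location="us-central1"
)
llm = provider.instantiate()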
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.