dataflow.api.routes.llm_routes¶

LLM model API routes and generation endpoints.

This module provides FastAPI routes for interacting with various LLM providers and models. It supports generating text completions, chat completions, and streaming responses from models like OpenAI GPT, Anthropic Claude, Google Gemini, and others.

The routes handle provider-specific configurations, authentication, and proper error handling. They serve as the interface between clients and the underlying LLM functionality provided by the Haive core modules.

Key features:
  • Multi-provider support (OpenAI, Azure, Anthropic, Gemini, etc.)
  • Text and chat completion endpoints
  • Streaming response support
  • Model information endpoints
  • Authentication and rate limiting

Typical usage example:

    # Client-side code to generate text
    import requests

    response = requests.post(
        "http://localhost:8000/api/llm/generate",
        json={
            "provider": "openai",
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Tell me about AI."},
            ],
            "temperature": 0.7,
            "max_tokens": 500,
        },
        headers={"Authorization": "Bearer YOUR_TOKEN"},
    )

    generated_text = response.json()["generated_text"]
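The module also lists streaming response support among its features. A hedged client-side sketch, assuming the same /api/llm/generate path accepts a stream flag and returns line-delimited chunks (both the flag and the response format are assumptions, not confirmed by these docs):

```python
import requests

# Hypothetical streaming request. The `stream` flag, the endpoint path,
# and the line-delimited response format are assumptions modeled on the
# non-streaming example above.
payload = {
    "provider": "openai",
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Tell me about AI."}],
    "stream": True,
}

def stream_completion(base_url="http://localhost:8000", token="YOUR_TOKEN"):
    """Yield text chunks from the (assumed) streaming endpoint."""
    with requests.post(
        f"{base_url}/api/llm/generate",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        stream=True,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line:  # skip keep-alive blank lines
                yield line
```

Iterating the generator (`for chunk in stream_completion(): ...`) lets a client render tokens as they arrive rather than waiting for the full completion.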

Classes¶

LLMConfigRequest

Request model for LLM configuration.

LLMGenerationResponse

Response model for LLM generation.

ToolConfig

Configuration for a tool to be used with the LLM.

Functions¶

batch_generate(request[, user_id])

Generate responses from multiple LLM configurations in parallel.

generate_response(request[, query, user_id])

Generate a response using a dynamically configured LLM.

get_env_api_key(provider)

Retrieve API key from environment variables based on provider.

Module Contents¶

class dataflow.api.routes.llm_routes.LLMConfigRequest(/, **data)¶

Bases: pydantic.BaseModel

Request model for LLM configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

model_config¶

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class dataflow.api.routes.llm_routes.LLMGenerationResponse(/, **data)¶

Bases: pydantic.BaseModel

Response model for LLM generation.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

model_config¶

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class dataflow.api.routes.llm_routes.ToolConfig(/, **data)¶

Bases: pydantic.BaseModel

Configuration for a tool to be used with the LLM.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

model_config¶

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
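The fields of ToolConfig are not documented above. A minimal sketch of what such a Pydantic model typically looks like, where every field name below is an illustrative assumption rather than the real schema:

```python
from pydantic import BaseModel

# Hypothetical shape of a tool configuration; the real ToolConfig's
# fields are not documented here, so name/description/parameters
# are assumptions.
class ToolConfigSketch(BaseModel):
    name: str
    description: str
    parameters: dict = {}

tool = ToolConfigSketch(
    name="web_search",
    description="Search the web for current information",
)
```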

async dataflow.api.routes.llm_routes.batch_generate(request, user_id=Depends(require_auth))¶

Generate responses from multiple LLM configurations in parallel.

Parameters:
  • request (fastapi.Request) – The HTTP request containing the configurations

  • user_id (str) – Authenticated user ID
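A hedged client-side sketch of calling the batch endpoint; the /api/llm/batch_generate path and the `configs` payload key are assumptions modeled on the single-generation example, not confirmed by these docs:

```python
import requests

# Two hypothetical LLM configurations to run in parallel server-side.
batch_payload = {
    "configs": [
        {
            "provider": "openai",
            "model": "gpt-4",
            "messages": [{"role": "user", "content": "Summarize LLMs."}],
        },
        {
            "provider": "anthropic",
            "model": "claude-3-opus",
            "messages": [{"role": "user", "content": "Summarize RAG."}],
        },
    ]
}

def batch_generate_client(base_url="http://localhost:8000", token="YOUR_TOKEN"):
    resp = requests.post(
        f"{base_url}/api/llm/batch_generate",  # assumed route path
        json=batch_payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()
```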

async dataflow.api.routes.llm_routes.generate_response(request, query=Query(..., description='The input query or message to generate a response for'), user_id=Depends(require_auth))¶

Generate a response using a dynamically configured LLM.

Parameters:
  • request (LLMConfigRequest) – LLM configuration details

  • query (str) – User’s input query

  • user_id (str) – Authenticated user ID

dataflow.api.routes.llm_routes.get_env_api_key(provider)¶

Retrieve API key from environment variables based on provider.

Parameters:

provider (haive.dataflow.api.routes.models.llm.provider_types.LLMProvider)

Return type:

str | None
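As a rough illustration of the provider-to-environment-variable lookup this function performs, a minimal sketch; the variable names below are conventional defaults and an assumption, not taken from the source:

```python
import os

# Assumed provider -> environment-variable mapping; the actual
# implementation may use different names or the LLMProvider enum.
_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

def get_env_api_key_sketch(provider: str) -> "str | None":
    """Return the provider's API key from the environment, or None."""
    env_var = _ENV_VARS.get(provider.lower())
    return os.environ.get(env_var) if env_var else None
```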