haive.core.engine.aug_llm.config¶

AugLLM configuration system for enhanced LLM chains.

This module provides a comprehensive configuration system for creating and managing enhanced LLM chains within the Haive framework. The AugLLMConfig class serves as a central configuration point that integrates prompts, tools, output parsers, and structured output models with extensive validation and debugging capabilities.

Key features:
- Flexible prompt template creation and management with support for few-shot learning
- Comprehensive tool integration with automatic discovery and configuration
- Structured output handling via two approaches (v1: parser-based, v2: tool-based)
- Rich debugging and validation to ensure proper configuration
- Pre/post processing hooks for customizing input and output
- Support for both synchronous and asynchronous execution

The configuration system is designed to be highly customizable while providing sensible defaults and automatic detection of configuration requirements.

Classes¶

AugLLMConfig

Configuration for creating enhanced LLM chains with flexible message handling.

Functions¶

debug_print(*args, **kwargs)

Print debug output only if DEBUG_OUTPUT is enabled.

Module Contents¶

class haive.core.engine.aug_llm.config.AugLLMConfig[source]¶

Bases: *_get_augllm_base_classes()

Configuration for creating enhanced LLM chains with flexible message handling.

AugLLMConfig provides a structured way to configure and create LLM chains with prompts, tools, output parsers, and structured output models, backed by comprehensive validation and automatic updates. It serves as the central configuration class for language model interactions in the Haive framework.

This class integrates several key functionalities:
1. Prompt template management with support for few-shot learning
2. Tool integration and discovery with automatic routing
3. Structured output handling (both parser-based and tool-based approaches)
4. Message handling for chat-based LLMs
5. Pre/post processing hooks for customization

The configuration system is designed to be highly flexible while enforcing consistent patterns and proper validation, making it easier to create reliable language model interactions.

engine_type¶

The type of engine (always LLM).

Type:

EngineType

llm_config¶

Configuration for the LLM provider.

Type:

LLMConfig

prompt_template¶

Template for structuring prompts.

Type:

Optional[BasePromptTemplate]

system_message¶

System message for chat models.

Type:

Optional[str]

tools¶

Tools that can be bound to the LLM.

Type:

Sequence[Union[Type[BaseTool], Type[BaseModel], Callable, StructuredTool, BaseModel]]

structured_output_model¶

Pydantic model for structured outputs.

Type:

Optional[Type[BaseModel]]

structured_output_version¶

Version of structured output handling (v1: parser-based, v2: tool-based).

Type:

Optional[StructuredOutputVersion]

temperature¶

Temperature parameter for the LLM.

Type:

Optional[float]

max_tokens¶

Maximum number of tokens to generate.

Type:

Optional[int]

preprocess¶

Function to preprocess input before sending to LLM.

Type:

Optional[Callable]

postprocess¶

Function to postprocess output from LLM.

Type:

Optional[Callable]
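The preprocess/postprocess hooks wrap the chain invocation in the usual way: transform the input, call the chain, transform the result. A minimal, framework-independent sketch of that pattern (plain Python; `run_with_hooks` is illustrative, not haive's internals):

```python
def run_with_hooks(call, value, preprocess=None, postprocess=None):
    """Apply an optional preprocess hook, invoke the chain, then postprocess."""
    if preprocess is not None:
        value = preprocess(value)
    result = call(value)
    if postprocess is not None:
        result = postprocess(result)
    return result

# Example: lowercase the input, then strip whitespace from the output.
result = run_with_hooks(
    call=lambda text: f"  echo: {text}  ",  # stands in for the LLM chain
    value="Hello",
    preprocess=str.lower,
    postprocess=str.strip,
)
# result == "echo: hello"
```

Both hooks are optional, which is why the attributes above are typed `Optional[Callable]`.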

Examples

>>> from haive.core.engine.aug_llm.config import AugLLMConfig
>>> from haive.core.models.llm.base import AzureLLMConfig
>>> from pydantic import BaseModel, Field
>>>
>>> # Define a structured output model
>>> class MovieReview(BaseModel):
...     title: str = Field(description="Title of the movie")
...     rating: int = Field(description="Rating from 1-10")
...     review: str = Field(description="Detailed review of the movie")
>>>
>>> # Create a basic configuration
>>> config = AugLLMConfig(
...     name="movie_reviewer",
...     llm_config=AzureLLMConfig(model="gpt-4"),
...     system_message="You are a professional movie critic.",
...     structured_output_model=MovieReview,
...     temperature=0.7
... )
>>>
>>> # Create a runnable from the configuration
>>> reviewer = config.create_runnable()
>>>
>>> # Use the runnable
>>> result = reviewer.invoke("Review the movie 'Inception'")
add_format_instructions(model=None, as_tools=False, var_name='format_instructions')[source]¶

Add format instructions to partial_variables without changing structured output configuration.

Parameters:
  • model (type[pydantic.BaseModel] | None)

  • as_tools (bool)

  • var_name (str)

Return type:

AugLLMConfig
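In the real API, the instruction text is derived from the Pydantic model (or its tool schema) and stored under `var_name` in the template's partial variables. The helper below is only a hypothetical stand-in illustrating what such instruction text looks like:

```python
def format_instructions(fields):
    """Build a minimal JSON format-instructions string from {name: description} pairs.

    Illustrative only -- the real instructions come from the model's schema.
    """
    lines = [f'  "{name}": <{desc}>' for name, desc in fields.items()]
    return "Respond with a JSON object of the form:\n{\n" + ",\n".join(lines) + "\n}"

instructions = format_instructions({"title": "string", "rating": "integer 1-10"})
print(instructions)
```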

add_human_message(content)[source]¶

Add a human message to the prompt template.

Parameters:

content (str)

Return type:

AugLLMConfig

add_optional_variable(var_name)[source]¶

Add an optional variable to the prompt template.

Parameters:

var_name (str)

Return type:

AugLLMConfig

add_prompt_template(prompt_template)[source]¶

Add a prompt template to the configuration.

Parameters:

prompt_template (langchain_core.prompts.BasePromptTemplate)

Return type:

AugLLMConfig

add_system_message(content)[source]¶

Add or update system message in the prompt template.

Parameters:

content (str)

Return type:

AugLLMConfig

add_tool(tool, name=None, route=None)[source]¶

Add a single tool with optional name and route.

Parameters:
  • tool (Any)

  • name (str | None)

  • route (str | None)

Return type:

AugLLMConfig

add_tool_with_route(tool, route, name=None, metadata=None)[source]¶

Add a tool with explicit route and metadata.

Parameters:
  • tool (Any)

  • route (str)

  • name (str | None)

  • metadata (dict[str, Any] | None)

Return type:

AugLLMConfig

apply_runnable_config(runnable_config=None)[source]¶

Extract parameters from runnable_config relevant to this engine.

Parameters:

runnable_config (langchain_core.runnables.RunnableConfig | None)

Return type:

dict[str, Any]
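Conceptually, this method filters the `runnable_config` mapping down to the entries this engine understands. The sketch below shows that general pattern; the key set and the `configurable` nesting are assumptions for illustration, not haive's actual behavior:

```python
# Hypothetical set of keys this engine would recognize.
ENGINE_KEYS = {"temperature", "max_tokens", "tags"}

def extract_engine_params(runnable_config=None):
    """Return only the entries of runnable_config relevant to this engine."""
    if not runnable_config:
        return {}
    configurable = runnable_config.get("configurable", {})
    return {k: v for k, v in configurable.items() if k in ENGINE_KEYS}

params = extract_engine_params({"configurable": {"temperature": 0.2, "run_id": "abc"}})
# params == {"temperature": 0.2}  -- unrelated keys are dropped
```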

clear_tools()[source]¶

Clear all tools.

Returns:

Self for method chaining

Return type:

AugLLMConfig

comprehensive_validation_and_setup()[source]¶

Comprehensive validation and setup after initialization.

Return type:

Self

create_runnable(runnable_config=None)[source]¶

Create a runnable LLM chain based on this configuration.

Parameters:

runnable_config (langchain_core.runnables.RunnableConfig | None)

Return type:

langchain_core.runnables.Runnable

create_tool_from_config(config, name=None, route=None, **kwargs)[source]¶

Create a tool from another config object.

Parameters:
  • config (Any) – Configuration object that has a to_tool method

  • name (str | None) – Tool name

  • route (str | None) – Tool route to set

  • **kwargs – Additional kwargs for tool creation

Returns:

Created tool

Return type:

Any

debug_tool_configuration()[source]¶

Print detailed debug information about tool configuration.

Return type:

AugLLMConfig

classmethod default_schemas_to_tools(data)[source]¶

Default the schemas field to the tools list when schemas isn't provided but tools has values.

Parameters:

data (dict[str, Any])

classmethod ensure_structured_output_as_tool(data)[source]¶

Ensure structured output model is properly configured for both v1 and v2.

Parameters:

data (dict[str, Any])

classmethod from_few_shot(examples, example_prompt, prefix, suffix, input_variables, llm_config=None, **kwargs)[source]¶

Create with few-shot examples.

Parameters:
  • examples

  • example_prompt

  • prefix

  • suffix

  • input_variables

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

classmethod from_few_shot_chat(examples, example_prompt, system_message=None, llm_config=None, **kwargs)[source]¶

Create with few-shot examples for chat templates.

Parameters:
  • examples

  • example_prompt

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

classmethod from_format_instructions(model, system_message=None, llm_config=None, as_tool=False, var_name='format_instructions', **kwargs)[source]¶

Create config with format instructions but without structured output.

Parameters:
  • model (type[pydantic.BaseModel])

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

  • as_tool (bool)

  • var_name (str)

classmethod from_llm_config(llm_config, **kwargs)[source]¶

Create from an existing LLMConfig.

Parameters:

llm_config (haive.core.models.llm.base.LLMConfig)

classmethod from_prompt(prompt, llm_config=None, **kwargs)[source]¶

Create from a prompt template.

Parameters:
  • prompt (langchain_core.prompts.BasePromptTemplate)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

classmethod from_pydantic_tools(tool_models, system_message=None, llm_config=None, include_instructions=True, force_tool_use=False, **kwargs)[source]¶

Create with Pydantic tool models.

Parameters:
  • tool_models (list[type[pydantic.BaseModel]])

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

  • include_instructions (bool)

  • force_tool_use (bool)

classmethod from_structured_output_v1(model, system_message=None, llm_config=None, include_instructions=True, **kwargs)[source]¶

Create with v1 structured output using traditional parsing.

Parameters:
  • model (type[pydantic.BaseModel])

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

  • include_instructions (bool)

classmethod from_structured_output_v2(model, system_message=None, llm_config=None, include_instructions=False, output_field_name=None, **kwargs)[source]¶

Create with v2 structured output using the tool-based approach.

Parameters:
  • model (type[pydantic.BaseModel])

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

  • include_instructions (bool)

  • output_field_name (str | None)

classmethod from_system_and_few_shot(system_message, examples, example_prompt, prefix, suffix, input_variables, llm_config=None, **kwargs)[source]¶

Create with system message and few-shot examples.

Parameters:
  • system_message (str)

  • examples

  • example_prompt

  • prefix

  • suffix

  • input_variables

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

classmethod from_system_prompt(system_prompt, llm_config=None, **kwargs)[source]¶

Create from a system prompt string.

Parameters:
  • system_prompt (str)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

classmethod from_tools(tools, system_message=None, llm_config=None, use_tool_for_format_instructions=None, force_tool_use=False, **kwargs)[source]¶

Create with specified tools.

Parameters:
  • tools

  • system_message (str | None)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

  • use_tool_for_format_instructions

  • force_tool_use (bool)
get_active_template()[source]¶

Get the name of the currently active template.

Return type:

str | None

get_format_instructions(model=None, as_tools=False)[source]¶

Get format instructions for a model without changing the config.

Parameters:
  • model (type[pydantic.BaseModel] | None)

  • as_tools (bool)

Return type:

str

get_input_fields()[source]¶

Get schema fields for input.

Return type:

dict[str, tuple[type, Any]]

get_output_fields()[source]¶

Get schema fields for output.

Return type:

dict[str, tuple[type, Any]]

instantiate_llm()[source]¶

Instantiate the LLM based on the configuration.

Return type:

Any

list_prompt_templates()[source]¶

List available template names.

Return type:

list[str]

model_post_init(__context)[source]¶

Proper Pydantic post-initialization.

Return type:

None

remove_message(index)[source]¶

Remove a message from the prompt template.

Parameters:

index (int)

Return type:

AugLLMConfig

remove_prompt_template(name=None)[source]¶

Remove a template or disable the active one.

Parameters:

name (str | None) – Template name to remove. If None, disables active template.

Returns:

Self for method chaining

Return type:

AugLLMConfig

remove_tool(tool)[source]¶

Remove a tool and update all related configurations.

Parameters:

tool (Any)

Return type:

AugLLMConfig

replace_message(index, message)[source]¶

Replace a message in the prompt template.

Parameters:
  • index (int)

  • message (str | langchain_core.messages.BaseMessage)

Return type:

AugLLMConfig

classmethod set_default_structured_output_version(data)[source]¶

Set default structured output version to v2 (tools) when model is provided but version is not.

Parameters:

data (dict[str, Any])

use_prompt_template(name)[source]¶

Switch to using a specific named template.

Parameters:

name (str) – Name of the template to activate

Returns:

Self for method chaining

Raises:

ValueError – If template name not found

Return type:

AugLLMConfig
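Taken together, list_prompt_templates, use_prompt_template, and remove_prompt_template imply a registry of named templates with one active entry. A self-contained sketch of that pattern (TemplateRegistry is hypothetical, not a haive class):

```python
class TemplateRegistry:
    """Toy registry of named templates with one active entry."""

    def __init__(self):
        self._templates = {}
        self.active = None

    def add(self, name, template):
        self._templates[name] = template
        return self  # method chaining, as in AugLLMConfig

    def use(self, name):
        """Activate a named template; raise ValueError if it doesn't exist."""
        if name not in self._templates:
            raise ValueError(f"Template {name!r} not found")
        self.active = name
        return self

    def names(self):
        return list(self._templates)

reg = (
    TemplateRegistry()
    .add("review", "Review: {input}")
    .add("summary", "Summarize: {input}")
)
reg.use("summary")
# reg.active == "summary"; reg.names() == ["review", "summary"]
```

As with use_prompt_template, activating an unknown name fails loudly rather than silently falling back.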

classmethod validate_prompt_template(v)[source]¶

Validate and reconstruct prompt template from dict data.

Return type:

Any

classmethod validate_schemas(v)[source]¶

Validate and auto-name schemas.

Return type:

Any

classmethod validate_structured_output_model(v)[source]¶

Validate structured output model and default to tools-based validation.

Return type:

Any

classmethod validate_tools(v)[source]¶

Validate and auto-name tools.

Return type:

Any

with_format_instructions(model, as_tool=False, var_name='format_instructions')[source]¶

Add format instructions without setting up structured output or parser.

Parameters:
  • model (type[pydantic.BaseModel])

  • as_tool (bool)

  • var_name (str)

Return type:

AugLLMConfig

with_pydantic_tools(tool_models, include_instructions=True, force_use=False)[source]¶

Configure with Pydantic tools output parsing.

Parameters:
  • tool_models (list[type[pydantic.BaseModel]])

  • include_instructions (bool)

  • force_use (bool)

Return type:

AugLLMConfig

with_structured_output(model, include_instructions=True, version='v2')[source]¶

Configure with Pydantic structured output.

Parameters:
  • model (type[pydantic.BaseModel])

  • include_instructions (bool)

  • version (str)

Return type:

AugLLMConfig
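The two versions differ in where the schema is enforced: v1 asks the model for text (guided by format instructions) and validates it with a parser after the fact, while v2 binds the schema as a tool so the provider's tool-calling layer returns structured arguments directly. A framework-free sketch of the two paths, using plain dicts in place of Pydantic models:

```python
import json

def parse_v1(model_text, required_fields):
    """v1: parse the model's raw text output, then validate required fields."""
    data = json.loads(model_text)
    missing = [f for f in required_fields if f not in data]
    if missing:
        raise ValueError(f"Missing fields: {missing}")
    return data

def parse_v2(tool_call_args, required_fields):
    """v2: the tool-calling layer already produced structured arguments."""
    return {f: tool_call_args[f] for f in required_fields}

# v1: schema enforced after the fact, on free-form text.
review = parse_v1('{"title": "Inception", "rating": 9}', ["title", "rating"])

# v2: the provider enforced the schema up front via the bound tool.
review2 = parse_v2({"title": "Inception", "rating": 9}, ["title", "rating"])
# review == review2
```

This is why v2 is the default when a structured_output_model is provided without an explicit version: it avoids a parsing step that can fail on malformed text.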

with_tools(tools, force_use=False, specific_tool=None)[source]¶

Configure with specified tools.

Parameters:
  • tools

  • force_use (bool)

  • specific_tool

Return type:

AugLLMConfig

haive.core.engine.aug_llm.config.debug_print(*args, **kwargs)[source]¶

Print debug output only if DEBUG_OUTPUT is enabled.

Return type:

None
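debug_print follows the common environment-gated logging pattern. A minimal sketch, assuming DEBUG_OUTPUT is read from the environment (the actual lookup mechanism in haive may differ):

```python
import os

def debug_enabled():
    """Check the DEBUG_OUTPUT environment flag (assumed lookup mechanism)."""
    return os.environ.get("DEBUG_OUTPUT", "").lower() in ("1", "true")

def debug_print(*args, **kwargs):
    """Print only when DEBUG_OUTPUT is enabled; otherwise do nothing."""
    if debug_enabled():
        print(*args, **kwargs)
```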