haive.core.engine.aug_llm.config
AugLLM configuration system for enhanced LLM chains.
This module provides a comprehensive configuration system for creating and managing enhanced LLM chains within the Haive framework. The AugLLMConfig class serves as a central configuration point that integrates prompts, tools, output parsers, and structured output models with extensive validation and debugging capabilities.
Key features:

- Flexible prompt template creation and management with support for few-shot learning
- Comprehensive tool integration with automatic discovery and configuration
- Structured output handling via two approaches (v1: parser-based, v2: tool-based)
- Rich debugging and validation to ensure proper configuration
- Pre/post processing hooks for customizing input and output
- Support for both synchronous and asynchronous execution
The configuration system is designed to be highly customizable while providing sensible defaults and automatic detection of configuration requirements.
Classes

AugLLMConfig – Configuration for creating enhanced LLM chains with flexible message handling.

Functions

Print debug output only if DEBUG_OUTPUT is enabled.

Module Contents
- class haive.core.engine.aug_llm.config.AugLLMConfig[source]

Bases: *_get_augllm_base_classes()

Configuration for creating enhanced LLM chains with flexible message handling.
AugLLMConfig provides a structured way to configure and create LLM chains with prompts, tools, output parsers, and structured output models with comprehensive validation and automatic updates. It serves as the central configuration class for language model interactions in the Haive framework.
This class integrates several key functionalities:

1. Prompt template management with support for few-shot learning
2. Tool integration and discovery with automatic routing
3. Structured output handling (both parser-based and tool-based approaches)
4. Message handling for chat-based LLMs
5. Pre/post processing hooks for customization
The configuration system is designed to be highly flexible while enforcing consistent patterns and proper validation, making it easier to create reliable language model interactions.
- engine_type
The type of engine (always LLM).
- prompt_template
Template for structuring prompts.
- Type:
Optional[BasePromptTemplate]
- tools
Tools that can be bound to the LLM.
- Type:
Sequence[Union[Type[BaseTool], Type[BaseModel], Callable, StructuredTool, BaseModel]]
- structured_output_model
Pydantic model for structured outputs.
- Type:
Optional[Type[BaseModel]]
- structured_output_version
Version of structured output handling (v1: parser-based, v2: tool-based).
- Type:
Optional[StructuredOutputVersion]
- preprocess
Function to preprocess input before it is sent to the LLM.
- Type:
Optional[Callable]
- postprocess
Function to postprocess output from the LLM.
- Type:
Optional[Callable]
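For instance, the hooks might normalize input and clip output. A minimal sketch, assuming single-argument hook signatures (this reference only states that preprocess runs on the input and postprocess on the output):

>>> from haive.core.engine.aug_llm.config import AugLLMConfig
>>>
>>> # Hypothetical hooks; the exact call signatures are an assumption.
>>> def strip_input(text):
...     """Normalize whitespace before the prompt is formatted."""
...     return text.strip()
>>>
>>> def clip_output(result):
...     """Truncate overly long string outputs after the LLM responds."""
...     return result[:2000] if isinstance(result, str) else result
>>>
>>> config = AugLLMConfig(
...     name="summarizer",
...     system_message="Summarize the user's text.",
...     preprocess=strip_input,
...     postprocess=clip_output,
... )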
Examples
>>> from haive.core.engine.aug_llm.config import AugLLMConfig
>>> from haive.core.models.llm.base import AzureLLMConfig
>>> from pydantic import BaseModel, Field
>>>
>>> # Define a structured output model
>>> class MovieReview(BaseModel):
...     title: str = Field(description="Title of the movie")
...     rating: int = Field(description="Rating from 1-10")
...     review: str = Field(description="Detailed review of the movie")
>>>
>>> # Create a basic configuration
>>> config = AugLLMConfig(
...     name="movie_reviewer",
...     llm_config=AzureLLMConfig(model="gpt-4"),
...     system_message="You are a professional movie critic.",
...     structured_output_model=MovieReview,
...     temperature=0.7
... )
>>>
>>> # Create a runnable from the configuration
>>> reviewer = config.create_runnable()
>>>
>>> # Use the runnable
>>> result = reviewer.invoke("Review the movie 'Inception'")
- add_format_instructions(model=None, as_tools=False, var_name='format_instructions')[source]
Add format instructions to partial_variables without changing the structured output configuration.
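A sketch, assuming the rendered instructions become available to the prompt under var_name (the default partial variable name, per the description above):

>>> from pydantic import BaseModel, Field
>>>
>>> class Answer(BaseModel):
...     text: str = Field(description="The answer text")
>>>
>>> config = AugLLMConfig(name="qa")
>>> # Injects Answer's format instructions as the {format_instructions}
>>> # partial variable; structured output itself stays unconfigured.
>>> config.add_format_instructions(model=Answer)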
- add_human_message(content)[source]
Add a human message to the prompt template.
- Parameters:
content (str)
- add_optional_variable(var_name)[source]
Add an optional variable to the prompt template.
- Parameters:
var_name (str)
- add_prompt_template(prompt_template)[source]
Add a prompt template to the configuration.
- Parameters:
prompt_template (langchain_core.prompts.BasePromptTemplate)
- add_system_message(content)[source]
Add or update the system message in the prompt template.
- Parameters:
content (str)
- add_tool(tool, name=None, route=None)[source]
Add a single tool with an optional name and route.
- add_tool_with_route(tool, route, name=None, metadata=None)[source]
Add a tool with an explicit route and metadata.
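A sketch using a LangChain tool (the exact routing semantics, e.g. that route names a downstream destination, are an assumption):

>>> from langchain_core.tools import tool
>>>
>>> @tool
... def search_web(query: str) -> str:
...     """Search the web for a query."""
...     return f"results for {query}"
>>>
>>> config = AugLLMConfig(name="router_agent")
>>> config.add_tool_with_route(
...     search_web,
...     route="search",
...     name="web_search",
...     metadata={"category": "retrieval"},
... )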
- apply_runnable_config(runnable_config=None)[source]
Extract parameters from runnable_config relevant to this engine.
- comprehensive_validation_and_setup()[source]
Comprehensive validation and setup after initialization.
- Return type:
Self
- create_runnable(runnable_config=None)[source]
Create a runnable LLM chain based on this configuration.
- Parameters:
runnable_config (langchain_core.runnables.RunnableConfig | None)
- Return type:
langchain_core.runnables.Runnable
- create_tool_from_config(config, name=None, route=None, **kwargs)[source]
Create a tool from another config object.
- debug_tool_configuration()[source]
Print detailed debug information about tool configuration.
- classmethod default_schemas_to_tools(data)[source]
Default schemas to tools if schemas isn't provided but tools has values.
- classmethod ensure_structured_output_as_tool(data)[source]
Ensure structured output model is properly configured for both v1 and v2.
- classmethod from_few_shot(examples, example_prompt, prefix, suffix, input_variables, llm_config=None, **kwargs)[source]
Create with few-shot examples.
- classmethod from_few_shot_chat(examples, example_prompt, system_message=None, llm_config=None, **kwargs)[source]
Create with few-shot examples for chat templates.
- Parameters:
example_prompt (langchain_core.prompts.ChatPromptTemplate)
system_message (str | None)
llm_config (haive.core.models.llm.base.LLMConfig | None)
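For instance (a sketch; the shape of examples as a list of dicts matching the example_prompt variables is an assumption consistent with LangChain's few-shot conventions):

>>> from langchain_core.prompts import ChatPromptTemplate
>>>
>>> example_prompt = ChatPromptTemplate.from_messages([
...     ("human", "{question}"),
...     ("ai", "{answer}"),
... ])
>>> examples = [
...     {"question": "2 + 2?", "answer": "4"},
...     {"question": "3 * 3?", "answer": "9"},
... ]
>>> config = AugLLMConfig.from_few_shot_chat(
...     examples=examples,
...     example_prompt=example_prompt,
...     system_message="You are a terse calculator.",
... )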
- classmethod from_format_instructions(model, system_message=None, llm_config=None, as_tool=False, var_name='format_instructions', **kwargs)[source]
Create config with format instructions but without structured output.
- Parameters:
model (type[pydantic.BaseModel])
system_message (str | None)
llm_config (haive.core.models.llm.base.LLMConfig | None)
as_tool (bool)
var_name (str)
- classmethod from_llm_config(llm_config, **kwargs)[source]
Create from an existing LLMConfig.
- Parameters:
llm_config (haive.core.models.llm.base.LLMConfig)
- classmethod from_prompt(prompt, llm_config=None, **kwargs)[source]
Create from a prompt template.
- Parameters:
prompt (langchain_core.prompts.BasePromptTemplate)
llm_config (haive.core.models.llm.base.LLMConfig | None)
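A minimal sketch (passing name through **kwargs is an assumption based on the constructor example above):

>>> from langchain_core.prompts import PromptTemplate
>>>
>>> prompt = PromptTemplate.from_template("Translate to French: {text}")
>>> config = AugLLMConfig.from_prompt(prompt, name="translator")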
- classmethod from_pydantic_tools(tool_models, system_message=None, llm_config=None, include_instructions=True, force_tool_use=False, **kwargs)[source]
Create with Pydantic tool models.
- classmethod from_structured_output_v1(model, system_message=None, llm_config=None, include_instructions=True, **kwargs)[source]
Create with v1 structured output using traditional parsing.
- Parameters:
model (type[pydantic.BaseModel])
system_message (str | None)
llm_config (haive.core.models.llm.base.LLMConfig | None)
include_instructions (bool)
- classmethod from_structured_output_v2(model, system_message=None, llm_config=None, include_instructions=False, output_field_name=None, **kwargs)[source]
Create with v2 structured output using the tool-based approach.
- Parameters:
model (type[pydantic.BaseModel])
system_message (str | None)
llm_config (haive.core.models.llm.base.LLMConfig | None)
include_instructions (bool)
output_field_name (str | None)
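The two factories differ only in how structured output is enforced; a sketch contrasting them:

>>> from pydantic import BaseModel, Field
>>>
>>> class Verdict(BaseModel):
...     label: str = Field(description="accept or reject")
...     reason: str = Field(description="Short justification")
>>>
>>> # v1: parser-based -- format instructions plus an output parser.
>>> config_v1 = AugLLMConfig.from_structured_output_v1(
...     Verdict, system_message="Judge the claim."
... )
>>> # v2: tool-based -- the model is bound as a tool the LLM must call.
>>> config_v2 = AugLLMConfig.from_structured_output_v2(
...     Verdict, system_message="Judge the claim."
... )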
- classmethod from_system_and_few_shot(system_message, examples, example_prompt, prefix, suffix, input_variables, llm_config=None, **kwargs)[source]
Create with system message and few-shot examples.
- classmethod from_system_prompt(system_prompt, llm_config=None, **kwargs)[source]
Create from a system prompt string.
- Parameters:
system_prompt (str)
llm_config (haive.core.models.llm.base.LLMConfig | None)
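For example:

>>> config = AugLLMConfig.from_system_prompt(
...     "You are a concise technical assistant."
... )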
- classmethod from_tools(tools, system_message=None, llm_config=None, use_tool_for_format_instructions=None, force_tool_use=False, **kwargs)[source]
Create with specified tools.
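A sketch with a single LangChain tool (that force_tool_use requires the LLM to emit a tool call is an assumption based on the parameter name):

>>> from langchain_core.tools import tool
>>>
>>> @tool
... def get_weather(city: str) -> str:
...     """Return the weather for a city."""
...     return f"Sunny in {city}"
>>>
>>> config = AugLLMConfig.from_tools(
...     tools=[get_weather],
...     system_message="You are a helpful weather assistant.",
...     force_tool_use=True,
... )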
- get_active_template()[source]
Get the name of the currently active template.
- Return type:
str | None
- get_format_instructions(model=None, as_tools=False)[source]
Get format instructions for a model without changing the config.
- remove_message(index)[source]
Remove a message from the prompt template.
- Parameters:
index (int)
- remove_prompt_template(name=None)[source]
Remove a template or disable the active one.
- Parameters:
name (str | None) – Template name to remove. If None, disables the active template.
- Returns:
Self for method chaining
- Return type:
Self
- remove_tool(tool)[source]
Remove a tool and update all related configurations.
- Parameters:
tool (Any)
- replace_message(index, message)[source]
Replace a message in the prompt template.
- classmethod set_default_structured_output_version(data)[source]
Set default structured output version to v2 (tools) when model is provided but version is not.
- use_prompt_template(name)[source]
Switch to using a specific named template.
- Parameters:
name (str) – Name of the template to activate
- Returns:
Self for method chaining
- Raises:
ValueError – If the template name is not found
- Return type:
Self
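A sketch of the documented error behavior (how named templates are registered is not covered in this reference):

>>> config = AugLLMConfig(name="multi_prompt")
>>> try:
...     config.use_prompt_template("detailed_review")
... except ValueError:
...     print("template 'detailed_review' is not registered")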
- classmethod validate_prompt_template(v)[source]
Validate and reconstruct prompt template from dict data.
- Return type:
Any
- classmethod validate_structured_output_model(v)[source]
Validate structured output model and default to tools-based validation.
- Return type:
Any
- with_format_instructions(model, as_tool=False, var_name='format_instructions')[source]
Add format instructions without setting up structured output or a parser.
- with_pydantic_tools(tool_models, include_instructions=True, force_use=False)[source]
Configure output parsing with Pydantic tool models.
- with_structured_output(model, include_instructions=True, version='v2')[source]
Configure with Pydantic structured output.
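Since these helpers return Self for method chaining (documented above for remove_prompt_template and use_prompt_template; assumed here for add_system_message and with_structured_output), configurations can be built fluently. A sketch:

>>> from pydantic import BaseModel, Field
>>>
>>> class Summary(BaseModel):
...     headline: str = Field(description="One-line summary")
>>>
>>> config = (
...     AugLLMConfig(name="summarizer")
...     .add_system_message("Summarize the user's text.")
...     .with_structured_output(Summary, version="v2")
... )
>>> runnable = config.create_runnable()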