haive.core.tools.interrupt_tool_wrapper¶
Human-in-the-Loop Tool Wrapper for LangGraph Agents.
This module defines a utility function add_human_in_the_loop that allows LangChain tools to be wrapped with interrupt-based human review via LangGraph. This enables human approval, editing, or feedback substitution before a tool is executed.
- Typical usage:
    from haive.core.tools.interrupt_tool_wrapper import add_human_in_the_loop

    @tool
    def search_docs(query: str) -> str:
        return f"Results for: {query}"

    safe_tool = add_human_in_the_loop(search_docs)
    result = safe_tool.invoke({"query": "pydantic base models"})
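Because the wrapper pauses execution with LangGraph's interrupt mechanism, the wrapped tool is normally run inside a graph compiled with a checkpointer rather than called in isolation. A minimal sketch of that wiring, assuming the LangGraph prebuilt create_react_agent and an in-memory checkpointer (the model identifier and thread_id are illustrative, not part of this module):

    from langgraph.checkpoint.memory import MemorySaver
    from langgraph.prebuilt import create_react_agent

    # A checkpointer is required so the paused tool call can be resumed later.
    checkpointer = MemorySaver()
    agent = create_react_agent(
        "openai:gpt-4o-mini",   # illustrative model id; any LangChain chat model works
        tools=[safe_tool],      # the wrapped tool from the snippet above
        checkpointer=checkpointer,
    )

    config = {"configurable": {"thread_id": "review-demo"}}
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "Find docs on pydantic base models"}]},
        config,
    )
    # Execution is now paused; in recent LangGraph versions the pending review
    # request is surfaced under result["__interrupt__"].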
Functions¶
add_human_in_the_loop(tool, *, interrupt_config=None) | Wrap a LangChain tool with human-in-the-loop interrupt logic for approval, editing, or feedback.
Module Contents¶
- haive.core.tools.interrupt_tool_wrapper.add_human_in_the_loop(tool, *, interrupt_config=None)[source]¶
Wrap a LangChain tool with human-in-the-loop interrupt logic for approval, editing, or feedback.
This function wraps an existing LangChain tool (or plain callable) with LangGraph’s interrupt system, allowing a human to review each call before it is executed.
- Parameters:
tool (Callable | BaseTool) – The LangChain tool (or callable function) to wrap with human-in-the-loop support. If a plain callable is passed, it will be converted into a LangChain BaseTool.
interrupt_config (HumanInterruptConfig, optional) –
Configuration dict defining which types of human input are allowed. If not provided, the default enables all three options (a restricted, approval-only configuration is sketched below, after this parameter list):
{ "allow_accept": True, "allow_edit": True, "allow_respond": True, }
- Returns:
A LangChain-compatible tool that prompts a human to approve, edit, or respond to each call before invoking the original tool logic.
- Return type:
BaseTool
- Raises:
ValueError – If the human interrupt returns an unsupported response type.
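For example, passing a restricted configuration limits the reviewer to approving or rejecting the call. A minimal sketch, assuming a plain dict with the documented keys is accepted wherever a HumanInterruptConfig is expected (search_docs is the tool from the usage snippet above):

    # Approval-only review: the human may accept the call, but may not edit its
    # arguments or substitute their own response.  The keys mirror the default
    # shown in the parameter description above.
    approval_only = {
        "allow_accept": True,
        "allow_edit": False,
        "allow_respond": False,
    }

    guarded_tool = add_human_in_the_loop(search_docs, interrupt_config=approval_only)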
Example
>>> @tool
... def get_user_profile(user_id: str) -> str:
...     return f"Profile for {user_id}"
>>> reviewed_tool = add_human_in_the_loop(get_user_profile)
>>> reviewed_tool.invoke({"user_id": "123"})  # Will prompt human before invoking
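When execution pauses, the human reply is supplied by resuming the run. The payload shapes below follow the common LangGraph human-interrupt convention for the three allowed response types (accept, edit, response) and reuse the agent/config pattern from the earlier sketch, here assuming an agent was built around reviewed_tool; the exact shape expected by this wrapper may differ, and any unsupported type raises the ValueError documented above.

    from langgraph.types import Command

    # Approve the tool call exactly as proposed.
    agent.invoke(Command(resume=[{"type": "accept"}]), config)

    # Or edit the arguments before the original tool runs.
    agent.invoke(
        Command(resume=[{"type": "edit", "args": {"args": {"user_id": "456"}}}]),
        config,
    )

    # Or skip the tool entirely and substitute a human-written result.
    agent.invoke(
        Command(resume=[{"type": "response", "args": "No profile lookup is needed."}]),
        config,
    )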