HAP Models

The models module provides the core data structures for HAP workflows.

HAPContext

The HAPContext class is the central state container that flows through HAP execution.

class haive.hap.models.context.HAPContext(*, engine=None, engines=<factory>, execution_path=<factory>, agent_metadata=<factory>, graph_context=<factory>, legacy_inputs=<factory>, legacy_outputs=<factory>, legacy_state=<factory>, legacy_meta=<factory>)[source]

Bases: StateSchema

HAP execution context, inheriting from Haive's StateSchema.

Parameters:
agent_metadata: Dict[str, Any]
engine: TEngine | None
engines: Dict[str, Engine]
execution_path: List[str]
graph_context: Dict[str, Any]
property inputs: Dict[str, Any]

Backward compatibility for inputs.

legacy_inputs: Dict[str, Any]
legacy_meta: Dict[str, Any]
legacy_outputs: Dict[str, Any]
legacy_state: Dict[str, Any]
property meta: Dict[str, Any]

Backward compatibility for meta.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to Pydantic's ConfigDict.

property outputs: Dict[str, Any]

Backward compatibility for outputs.

property state: Dict[str, Any]

Backward compatibility for state.

Key Features

  • StateSchema Inheritance: Properly integrates with Haive’s state management

  • Execution Tracking: Records the path through the graph

  • Metadata Storage: Keeps agent-specific information

  • Backward Compatibility: Supports legacy properties from earlier versions

Usage Example

from haive.hap.models.context import HAPContext

# Create context
context = HAPContext()

# Track execution
context.execution_path.append("analyzer")
context.execution_path.append("summarizer")

# Store metadata
context.agent_metadata["analyzer"] = {
    "duration": 1.5,
    "tokens_used": 150,
    "tool_calls": ["word_counter"]
}

# Use backward compatibility
context.inputs["text"] = "Document to process"
context.outputs["summary"] = "Processed summary"

# Serialize/deserialize
data = context.model_dump()
restored = HAPContext.model_validate(data)

HAPGraph

The HAPGraph class manages the workflow structure.

class haive.hap.models.graph.HAPGraph(*, nodes=<factory>, entry_node='')[source]

Bases: BaseModel

HAP graph with agent orchestration capabilities.

Parameters:
  • nodes (Dict[str, HAPNode])

  • entry_node (str)

add_agent_node(node_id, agent, next_nodes=None)[source]

Add an agent as a node to the graph.

Parameters:
  • node_id (str)

  • agent (Agent)

  • next_nodes (List[str])

Return type:

HAPNode

add_entrypoint_node(node_id, entrypoint, next_nodes=None)[source]

Add a node by entrypoint string.

Parameters:
  • node_id (str)

  • entrypoint (str)

  • next_nodes (List[str])

Return type:

HAPNode

async execute(initial_context)[source]

Execute the entire graph.

Parameters:

initial_context (Dict[str, Any])

Return type:

HAPContext

topological_order()[source]

Get topological ordering of nodes.

Return type:

List[str]

entry_node: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to Pydantic's ConfigDict.

nodes: Dict[str, HAPNode]

Graph Building

from haive.hap.models.graph import HAPGraph

graph = HAPGraph()

# Add nodes with agents
graph.add_agent_node("start", agent1, next_nodes=["middle"])
graph.add_agent_node("middle", agent2, next_nodes=["end"])
graph.add_agent_node("end", agent3)

# Or use entrypoints
graph.add_entrypoint_node(
    "processor",
    "mymodule.agents:ProcessorAgent",
    next_nodes=["validator"]
)

# Set entry point
graph.entry_node = "start"

# Get execution order
order = graph.topological_order()  # ["start", "middle", "end"]
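
Graph Execution

execute() runs the whole graph: it takes a plain dict as the initial context and returns the final HAPContext. A minimal sketch, assuming the agents above are defined and the graph is wired as shown:

import asyncio

async def main():
    # The dict becomes the initial context; the final HAPContext is returned
    final_context = await graph.execute({"text": "Document to process"})
    print(final_context.execution_path)  # node ids visited, in order

asyncio.run(main())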

HAPNode

Individual nodes in the graph.

class haive.hap.models.graph.HAPNode(*, id, agent_entrypoint, agent_instance=None, next_nodes=<factory>)[source]

Bases: BaseModel

HAP node that can contain an agent.

Parameters:
  • id (str)

  • agent_entrypoint (str)

  • agent_instance (Agent | None)

  • next_nodes (List[str])

async execute(context)[source]

Execute this node’s agent.

Parameters:

context (HAPContext)

Return type:

HAPContext

load_agent()[source]

Load agent from entrypoint if not already loaded.

Return type:

Agent

agent_entrypoint: str
agent_instance: Agent | None
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to Pydantic's ConfigDict.

next_nodes: List[str]

Node Types

A node resolves its agent in one of two ways:

  1. Agent Instance: a pre-built agent object attached via agent_instance

  2. Agent Entrypoint: a string like "module:ClassName", loaded on demand via load_agent()

from haive.hap.models.graph import HAPNode

# Node with agent instance (agent_entrypoint is still required by the model;
# "mymodule.agents:WorkerAgent" is a hypothetical entrypoint)
node1 = HAPNode(
    id="worker",
    agent_entrypoint="mymodule.agents:WorkerAgent",
    agent_instance=my_agent,
    next_nodes=["reviewer"]
)

# Node with entrypoint
node2 = HAPNode(
    id="reviewer",
    agent_entrypoint="haive.agents.simple:SimpleAgent"
)

# Load agent when needed (load_agent() is synchronous, unlike execute())
agent = node2.load_agent()
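
Nodes can also be run on their own: execute() is async, takes an HAPContext, and returns the updated context. A minimal sketch, assuming node2 resolves to a real agent:

import asyncio
from haive.hap.models.context import HAPContext

async def run_node():
    context = HAPContext()
    # Executes the node's agent and returns the updated context
    return await node2.execute(context)

result = asyncio.run(run_node())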

Backward Compatibility

For backward compatibility with earlier versions, the following aliases are provided:

from haive.hap.models import (
    HAPContext,  # Canonical name (unchanged)
    AgentGraph,  # Alias for HAPGraph
    AgentNode,   # Alias for HAPNode
)
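
If the aliases are plain assignments (an assumption, not confirmed here), the old and new names refer to the same classes:

from haive.hap.models import AgentGraph, AgentNode
from haive.hap.models.graph import HAPGraph, HAPNode

assert AgentGraph is HAPGraph  # same class under the old name
assert AgentNode is HAPNode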

Property Mappings

HAPContext maintains these backward-compatible properties:

Old Property   New Field        Usage
-------------  ---------------  --------------------
inputs         legacy_inputs    Input data storage
outputs        legacy_outputs   Output data storage
state          legacy_state     State information
meta           legacy_meta      Metadata storage
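
Each property reads and writes its legacy_* field, so both spellings see the same data. A short sketch, assuming the properties return the backing dicts:

from haive.hap.models.context import HAPContext

context = HAPContext()
context.inputs["query"] = "hello"
assert context.legacy_inputs["query"] == "hello"  # same underlying dict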

Model Relationships

HAPGraph
├── nodes: Dict[str, HAPNode]
└── entry_node: str

HAPNode
├── id: str
├── agent_instance: Optional[Agent]
├── agent_entrypoint: str
└── next_nodes: List[str]

HAPContext (extends StateSchema)
├── execution_path: List[str]
├── agent_metadata: Dict[str, Any]
├── graph_context: Dict[str, Any]
└── legacy fields (backward compatibility)

Best Practices

  1. Use Type Hints: Define clear types for all fields

  2. Validate Early: Use Pydantic validation

  3. Track Metadata: Store useful debugging info

  4. Handle None: Check optional fields such as agent_instance before use (see the sketch after this list)

  5. Serialize Safely: Use model_dump/model_validate
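
For item 4, a typical None check on HAPNode, assuming load_agent() is the documented synchronous loader:

from haive.hap.models.graph import HAPNode

def resolve_agent(node: HAPNode):
    # Prefer an already-attached instance; otherwise load from the entrypoint
    if node.agent_instance is not None:
        return node.agent_instance
    return node.load_agent()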

Common Patterns

Sequential Workflow

graph = HAPGraph()
for i, agent in enumerate(agents):
    next_nodes = [f"step_{i+1}"] if i < len(agents)-1 else []
    graph.add_agent_node(f"step_{i}", agent, next_nodes)
graph.entry_node = "step_0"

Branching Workflow

graph = HAPGraph()
graph.add_agent_node("classifier", classifier, ["type_a", "type_b"])
graph.add_agent_node("type_a", handler_a)
graph.add_agent_node("type_b", handler_b)
graph.entry_node = "classifier"

Parallel Execution

graph = HAPGraph()
graph.add_agent_node("splitter", splitter, ["worker1", "worker2", "worker3"])
graph.add_agent_node("worker1", w1, ["joiner"])
graph.add_agent_node("worker2", w2, ["joiner"])
graph.add_agent_node("worker3", w3, ["joiner"])
graph.add_agent_node("joiner", joiner)
graph.entry_node = "splitter"