HAP Models¶
The models module provides the core data structures for HAP workflows.
HAPContext¶
The HAPContext class is the central state container that flows through HAP execution.
- class haive.hap.models.context.HAPContext(*, engine=None, engines=<factory>, execution_path=<factory>, agent_metadata=<factory>, graph_context=<factory>, legacy_inputs=<factory>, legacy_outputs=<factory>, legacy_state=<factory>, legacy_meta=<factory>)[source]¶
Bases: StateSchema
HAP execution context inheriting from the real Haive StateSchema.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; a dictionary conforming to pydantic's ConfigDict.
Key Features¶
StateSchema Inheritance: Properly integrates with Haive’s state management
Execution Tracking: Records the path through the graph
Metadata Storage: Keeps agent-specific information
Backward Compatibility: Supports legacy properties from earlier versions
Usage Example¶
from haive.hap.models.context import HAPContext
# Create context
context = HAPContext()
# Track execution
context.execution_path.append("analyzer")
context.execution_path.append("summarizer")
# Store metadata
context.agent_metadata["analyzer"] = {
    "duration": 1.5,
    "tokens_used": 150,
    "tool_calls": ["word_counter"],
}
# Use backward compatibility
context.inputs["text"] = "Document to process"
context.outputs["summary"] = "Processed summary"
# Serialize/deserialize
data = context.model_dump()
restored = HAPContext.model_validate(data)
HAPGraph¶
The HAPGraph class manages the workflow structure.
- class haive.hap.models.graph.HAPGraph(*, nodes=<factory>, entry_node='')[source]¶
Bases: BaseModel
HAP graph with agent orchestration capabilities.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; a dictionary conforming to pydantic's ConfigDict.
Graph Building¶
from haive.hap.models.graph import HAPGraph
graph = HAPGraph()
# Add nodes with agents
graph.add_agent_node("start", agent1, next_nodes=["middle"])
graph.add_agent_node("middle", agent2, next_nodes=["end"])
graph.add_agent_node("end", agent3)
# Or use entrypoints
graph.add_entrypoint_node(
    "processor",
    "mymodule.agents:ProcessorAgent",
    next_nodes=["validator"],
)
# Set entry point
graph.entry_node = "start"
# Get execution order
order = graph.topological_order() # ["start", "middle", "end"]
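Putting the pieces together, the following is an illustrative driver, not HAP's actual runner: it assumes a purely sequential graph and that each node's execute() records its progress on the shared context.
import asyncio
from haive.hap.models.context import HAPContext

async def run_sequential(graph):
    context = HAPContext()
    # Visit nodes in dependency order; fine for linear graphs like this one.
    for node_id in graph.topological_order():
        await graph.nodes[node_id].execute(context)
    return context

context = asyncio.run(run_sequential(graph))
print(context.execution_path)  # assumes execute() appends to the path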
HAPNode¶
Individual nodes in the graph.
- class haive.hap.models.graph.HAPNode(*, id, agent_entrypoint, agent_instance=None, next_nodes=<factory>)[source]¶
Bases: BaseModel
HAP node that can contain an agent.
- async execute(context)[source]¶
Execute this node’s agent.
- Parameters:
context (HAPContext)
- Return type:
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; a dictionary conforming to pydantic's ConfigDict.
Node Types¶
Nodes can contain either:
Agent Instance: Direct agent object
Agent Entrypoint: A string like "module:ClassName"
from haive.hap.models.graph import HAPNode
# Node with agent instance
node1 = HAPNode(
    id="worker",
    agent_instance=my_agent,
    next_nodes=["reviewer"],
)
# Node with entrypoint
node2 = HAPNode(
    id="reviewer",
    agent_entrypoint="haive.agents.simple:SimpleAgent",
)
# Load agent when needed
agent = await node2.load_agent()
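load_agent() is shown only by name above. As intuition, a "module:ClassName" entrypoint is typically resolved with importlib along these lines (a sketch, not the actual implementation, which may add caching or async construction):
import importlib

def resolve_entrypoint(entrypoint: str):
    # "haive.agents.simple:SimpleAgent" -> module path + class name
    module_path, _, class_name = entrypoint.partition(":")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

AgentClass = resolve_entrypoint("haive.agents.simple:SimpleAgent")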
Backward Compatibility¶
For backward compatibility with earlier versions, the following aliases are provided:
from haive.hap.models import (
HAPContext, # Same as HAPContext
AgentGraph, # Alias for HAPGraph
AgentNode # Alias for HAPNode
)
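Because these names are documented as aliases, old and new spellings can be mixed freely. Assuming they bind the very same classes (not subclasses), the following holds:
from haive.hap.models import AgentGraph, AgentNode, HAPGraph, HAPNode

assert AgentGraph is HAPGraph  # assumes plain aliases
assert AgentNode is HAPNode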
Property Mappings¶
HAPContext maintains these backward-compatible properties:
| Old Property | New Field | Usage |
|---|---|---|
| inputs | legacy_inputs | Input data storage |
| outputs | legacy_outputs | Output data storage |
| state | legacy_state | State information |
| meta | legacy_meta | Metadata storage |
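Concretely, old-style property access keeps working against the renamed fields. The property names here follow the table above, which is inferred from the constructor signature:
context = HAPContext()
context.inputs["text"] = "hello"
assert context.legacy_inputs["text"] == "hello"  # same underlying storage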
Model Relationships¶
HAPGraph
├── nodes: Dict[str, HAPNode]
├── entry_node: str
└── metadata: Dict[str, Any]
HAPNode
├── id: str
├── agent_instance: Optional[Agent]
├── agent_entrypoint: Optional[str]
└── next_nodes: List[str]
HAPContext (extends StateSchema)
├── execution_path: List[str]
├── agent_metadata: Dict[str, Any]
├── graph_context: Dict[str, Any]
└── legacy fields (backward compatibility)
Best Practices¶
Use Type Hints: Define clear types for all fields
Validate Early: Use Pydantic validation
Track Metadata: Store useful debugging info
Handle None: Check optional fields (see the sketch after this list)
Serialize Safely: Use model_dump/model_validate
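A minimal sketch combining the last two practices, assuming load_agent() returns the resolved agent (the exact contract isn't shown above):
import asyncio
from haive.hap.models.graph import HAPGraph

async def ensure_agent(graph: HAPGraph, node_id: str):
    node = graph.nodes.get(node_id)  # may be None -- handle missing nodes
    if node is not None and node.agent_instance is None:
        # Assumption: load_agent() returns the agent to cache on the node.
        node.agent_instance = await node.load_agent()
    return node

# Serialize safely with the standard Pydantic round-trip
payload = graph.model_dump()
restored = HAPGraph.model_validate(payload)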
Common Patterns¶
Sequential Workflow¶
graph = HAPGraph()
for i, agent in enumerate(agents):
    next_nodes = [f"step_{i+1}"] if i < len(agents) - 1 else []
    graph.add_agent_node(f"step_{i}", agent, next_nodes)
graph.entry_node = "step_0"
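With three agents this yields step_0 → step_1 → step_2, and topological_order() returns the same sequence.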
Branching Workflow¶
graph = HAPGraph()
graph.add_agent_node("classifier", classifier, ["type_a", "type_b"])
graph.add_agent_node("type_a", handler_a)
graph.add_agent_node("type_b", handler_b)
graph.entry_node = "classifier"
Parallel Execution¶
graph = HAPGraph()
graph.add_agent_node("splitter", splitter, ["worker1", "worker2", "worker3"])
graph.add_agent_node("worker1", w1, ["joiner"])
graph.add_agent_node("worker2", w2, ["joiner"])
graph.add_agent_node("worker3", w3, ["joiner"])
graph.add_agent_node("joiner", joiner)
graph.entry_node = "splitter"