Understanding HAP
Tutorial 1: Learn the fundamentals of the Haive Agent Protocol (HAP).
Note
Beta Tutorial: This tutorial is currently in beta. Examples and the API may change.
Learning Objectives
By the end of this tutorial, you will understand:
- What HAP is and how it relates to MCP
- Core HAP concepts: Agents, Graphs, Nodes, Runtime
- The difference between HAP models and protocol layers
- HAP's role in the Haive ecosystem
What is HAP?
HAP (Haive Agent Protocol) is "MCP for Agents" - a framework for orchestrating AI agents in complex workflows.
Key Comparison:
| Feature   | MCP (Model Context Protocol)     | HAP (Haive Agent Protocol)   |
|-----------|----------------------------------|------------------------------|
| Purpose   | Expose tools, resources, prompts | Orchestrate agents, workflows |
| Protocol  | JSON-RPC 2.0                     | JSON-RPC 2.0 (HAP extension) |
| State     | Stateless tool calls             | Stateful agent workflows     |
| Execution | Single tool invocations          | Multi-agent graph execution  |
| Use Case  | Tool integration                 | Workflow orchestration       |
HAP enables:
- Agent Composition: Combine specialized agents into workflows
- Graph Execution: Define execution order and dependencies
- State Management: Maintain context across agent interactions
- Protocol Integration: Future distributed agent systems
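To ground these ideas before touching the HAP API, here is a minimal, framework-free sketch of agent composition as a graph: each node holds a callable "agent" and the ids of its successors, and execution threads shared state from node to node. All names here are illustrative, not part of HAP.

```python
# Minimal illustration of agent composition as a graph (not the HAP API):
# each node holds a callable "agent" plus the ids of the nodes that follow it.
def run_graph(nodes, entry, payload):
    """Walk the graph from the entry node, threading state through each agent."""
    path, state = [], dict(payload)
    current = entry
    while current is not None:
        node = nodes[current]
        state = node["agent"](state)       # each agent transforms the shared state
        path.append(current)
        nxt = node.get("next_nodes", [])
        current = nxt[0] if nxt else None  # linear walk, for simplicity
    return path, state

nodes = {
    "analyze":   {"agent": lambda s: {**s, "analysis": f"analyzed {s['input']}"},
                  "next_nodes": ["summarize"]},
    "summarize": {"agent": lambda s: {**s, "summary": s["analysis"].upper()}},
}

path, state = run_graph(nodes, "analyze", {"input": "sales data"})
# path records the execution order; state carries every intermediate output
```

HAP's real `HAPGraph`/`HAPRuntime` add dynamic loading, metadata, and error handling on top of this basic shape.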
Core Concepts
1. Agents
Agents are the processing units in HAP:
from haive.agents.simple.agent import SimpleAgent
from haive.core.engine.aug_llm import AugLLMConfig
# Create a specialized agent
agent = SimpleAgent(
    name="data_analyst",
    engine=AugLLMConfig(
        temperature=0.3,
        system_message="You are a data analyst specializing in sales metrics."
    )
)
2. Graphs
Graphs define the workflow structure:
from haive.hap.models import HAPGraph
# Create workflow graph
graph = HAPGraph()
graph.add_agent_node("analyze", analyzer_agent, next_nodes=["summarize"])
graph.add_agent_node("summarize", summarizer_agent)
graph.entry_node = "analyze"
3. Nodes
Nodes are individual steps in the workflow:
from haive.hap.models import HAPNode
# Node with an agent instance
node = HAPNode(
    id="processor",
    agent_instance=processor_agent,
    next_nodes=["validator"]
)

# Node with an agent entrypoint (loaded dynamically)
node = HAPNode(
    id="validator",
    agent_entrypoint="haive.agents.simple:SimpleAgent"
)
4. Runtime
Runtime executes the graph:
from haive.hap.server.runtime import HAPRuntime
# Create runtime engine
runtime = HAPRuntime(graph)
# Execute workflow
result = await runtime.run({
    "input": "Sales data: Q1=$100k, Q2=$120k, Q3=$140k, Q4=$160k"
})
5. Context
Context flows through the execution:
from haive.hap.models import HAPContext
# Execution context
context = HAPContext()
context.execution_path = ["analyze", "summarize"]
context.agent_metadata = {
    "analyze": {"duration": 2.1, "tokens": 150},
    "summarize": {"duration": 1.3, "tokens": 75}
}
HAP Architecture
HAP has three layers:
1. Models Layer (`haive.hap.models`):
   - `HAPGraph`: Workflow structure
   - `HAPNode`: Individual workflow steps
   - `HAPContext`: Execution state and tracking
2. Server Layer (`haive.hap.server`):
   - `HAPRuntime`: Graph execution engine
   - Agent loading and management
   - Error handling and recovery
3. Protocol Layer (`haive.hap.hap`):
   - JSON-RPC 2.0 communication
   - Resource and authentication providers
   - Progress tracking and logging
Visual Architecture:
┌──────────────────────────────────────┐
│         Protocol Layer (HAP)         │
│    JSON-RPC 2.0, Auth, Resources     │
└──────────────────────────────────────┘
                   │
┌──────────────────────────────────────┐
│             Server Layer             │
│      HAPRuntime, Agent Loading       │
└──────────────────────────────────────┘
                   │
┌──────────────────────────────────────┐
│             Models Layer             │
│    HAPGraph, HAPNode, HAPContext     │
└──────────────────────────────────────┘
                   │
┌──────────────────────────────────────┐
│             Haive Agents             │
│    SimpleAgent, ReactAgent, etc.     │
└──────────────────────────────────────┘
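The protocol layer speaks standard JSON-RPC 2.0. HAP's exact method names are not documented here, so the messages below are a hypothetical illustration of the envelope only (`hap/run` and its params are assumptions, not the real API):

```python
import json

# Hypothetical HAP request; "hap/run" and the params shape are illustrative only.
request = {
    "jsonrpc": "2.0",   # fixed version string required by the JSON-RPC 2.0 spec
    "id": 1,            # client-chosen id, echoed back in the matching response
    "method": "hap/run",
    "params": {"input": {"topic": "AI in Healthcare"}},
}

# The paired response carries either "result" or "error", never both.
response = {
    "jsonrpc": "2.0",
    "id": 1,            # matches the request id so clients can correlate replies
    "result": {"execution_path": ["research", "write"]},
}

wire = json.dumps(request)  # what actually travels over the transport
assert json.loads(wire)["jsonrpc"] == "2.0"
```

Because the envelope is plain JSON-RPC, any MCP-capable client stack can be extended to talk to a HAP server.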
Simple Example
Let's build a complete HAP workflow step by step:
Step 1: Import Components
import asyncio
from haive.hap.models import HAPGraph
from haive.hap.server.runtime import HAPRuntime
from haive.agents.simple.agent import SimpleAgent
from haive.core.engine.aug_llm import AugLLMConfig
Step 2: Create Agents
# Specialized agents
researcher = SimpleAgent(
    name="researcher",
    engine=AugLLMConfig(
        temperature=0.7,
        system_message="Research topics and gather information."
    )
)

writer = SimpleAgent(
    name="writer",
    engine=AugLLMConfig(
        temperature=0.8,
        system_message="Write clear, engaging content based on research."
    )
)
Step 3: Build Workflow Graph
# Create workflow: Research → Write
graph = HAPGraph()
graph.add_agent_node("research", researcher, next_nodes=["write"])
graph.add_agent_node("write", writer)
graph.entry_node = "research"
Step 4: Execute Workflow
async def main():
    # Create the runtime
    runtime = HAPRuntime(graph)

    # Execute the workflow
    result = await runtime.run({
        "topic": "Benefits of AI in Healthcare",
        "task": "Research and write a brief article"
    })

    print("Workflow complete!")
    print(f"Path: {' → '.join(result.execution_path)}")
    print(f"Article: {result.outputs}")

asyncio.run(main())
Complete Working Example:
"""Complete HAP workflow example."""
import asyncio
from haive.hap.models import HAPGraph
from haive.hap.server.runtime import HAPRuntime
from haive.agents.simple.agent import SimpleAgent
from haive.core.engine.aug_llm import AugLLMConfig
async def understanding_hap_example():
    """Demonstrate core HAP concepts."""
    print("HAP Tutorial 1: Understanding HAP")

    # 1. Create specialized agents
    print("\nStep 1: Creating specialized agents")
    researcher = SimpleAgent(
        name="researcher",
        engine=AugLLMConfig(
            temperature=0.7,
            system_message="You research topics thoroughly and provide key facts."
        )
    )
    writer = SimpleAgent(
        name="writer",
        engine=AugLLMConfig(
            temperature=0.8,
            system_message="You write clear, engaging summaries based on research."
        )
    )
    print(f"  Created researcher: {researcher.name}")
    print(f"  Created writer: {writer.name}")

    # 2. Build the workflow graph
    print("\nStep 2: Building workflow graph")
    graph = HAPGraph()
    graph.add_agent_node("research", researcher, next_nodes=["write"])
    graph.add_agent_node("write", writer)
    graph.entry_node = "research"
    print(f"  Graph nodes: {list(graph.nodes.keys())}")
    print(f"  Entry node: {graph.entry_node}")
    print(f"  Execution order: {graph.topological_order()}")

    # 3. Execute the workflow
    print("\nStep 3: Executing workflow")
    runtime = HAPRuntime(graph)
    result = await runtime.run({
        "topic": "Artificial Intelligence in Education",
        "requirements": "Focus on benefits and challenges"
    })

    # 4. Display results
    print("\nWorkflow Results:")
    print(f"  Execution Path: {' → '.join(result.execution_path)}")
    print(f"  Total Steps: {len(result.execution_path)}")

    # Show agent metadata
    print("\nAgent Metadata:")
    for node_id in result.execution_path:
        metadata = result.agent_metadata.get(node_id, {})
        print(f"  {node_id}: {metadata}")

    print("\nFinal Output:")
    print(f"  {result.outputs}")

    print("\nHAP Tutorial 1 complete!")
    return result

if __name__ == "__main__":
    asyncio.run(understanding_hap_example())
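The `graph.topological_order()` call in the example sorts nodes so that every node runs only after its predecessors. HAP's implementation is not shown here, but the standard technique for this, Kahn's algorithm, can be sketched in a few lines (the `edges` dict shape is an assumption for illustration):

```python
from collections import deque

def topological_order(edges):
    """Kahn's algorithm. `edges` maps each node id to its list of successors."""
    # Count incoming edges for every node, including pure sinks.
    indegree = {n: 0 for n in edges}
    for succs in edges.values():
        for s in succs:
            indegree[s] = indegree.get(s, 0) + 1
    # Nodes with no prerequisites are ready to run first.
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in edges.get(n, []):
            indegree[s] -= 1            # one prerequisite of s is now satisfied
            if indegree[s] == 0:
                queue.append(s)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle")
    return order

assert topological_order({"research": ["write"], "write": []}) == ["research", "write"]
```

The cycle check matters in practice: a workflow graph whose nodes depend on each other can never be scheduled, and failing fast beats hanging at runtime.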
HAP vs. Traditional Approaches
Traditional Sequential Code:
# Traditional approach - rigid, hard to modify
def traditional_workflow(topic):
    research_result = research_function(topic)
    writing_result = write_function(research_result)
    return writing_result
HAP Approach:
# HAP approach - flexible, composable, observable
graph = HAPGraph()
graph.add_agent_node("research", researcher, ["write"])
graph.add_agent_node("write", writer)
runtime = HAPRuntime(graph)
result = await runtime.run({"topic": topic})
# Rich metadata, execution tracking, error handling built-in
HAP Benefits:
- Composability: Mix and match agents
- Observability: Track execution paths and metadata
- Flexibility: Dynamic agent loading, conditional routing
- Scalability: Async execution, parallel processing
- Maintainability: Clear separation of concerns
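The scalability point rests on asyncio: independent branches of a graph can run concurrently rather than one after another. A framework-free sketch with stand-in coroutines (real HAP nodes would call an LLM where `fake_agent` sleeps):

```python
import asyncio

async def fake_agent(name, delay):
    """Stand-in for an agent step; a real node would await an LLM call here."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def run_parallel_branches():
    # Two branches with no dependency between them can execute concurrently,
    # so total wall time is roughly max(delays), not their sum.
    return await asyncio.gather(
        fake_agent("summarize", 0.01),
        fake_agent("translate", 0.01),
    )

results = asyncio.run(run_parallel_branches())
# gather preserves argument order regardless of completion order
```

This is the same reason HAPRuntime's `run` is awaited in the examples above: the runtime is built on the event loop.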
Key Takeaways
What you've learned:
- HAP Purpose: Agent workflow orchestration (vs. MCP tool integration)
- Core Components: Agents → Nodes → Graphs → Runtime → Results
- Architecture: Models, Server, Protocol layers
- Benefits: Composability, observability, flexibility
Key Concepts:
- Agents: Processing units with LLM engines
- Graphs: Workflow structure with nodes and edges
- Runtime: Execution engine with error handling
- Context: State that flows through execution
Best Practices:
- Create specialized agents for specific tasks
- Use descriptive names for nodes and agents
- Leverage execution metadata for debugging
- Build incrementally from simple to complex workflows
Next Steps
Now that you understand HAP fundamentals:
- Next Tutorial: Tutorial 2: First Workflow - Coming Soon - Build your first working workflow
- Reference: Study the HAP Models documentation for detailed API info
- Examples: Explore examples/01_basic_concepts/ for more code samples
Practice Exercise:
Create a simple 2-agent workflow:
- Agent 1: Analyzes a given text
- Agent 2: Summarizes the analysis
- Connect them in a graph and execute
Ready to build your first workflow? Continue to Tutorial 2: First Workflow - Coming Soon!