Haive Core - AI Agent Framework Foundation¶
Haive Core
Build Production-Ready AI Agent Systems
The foundational framework for creating sophisticated AI agents with state management, graph-based workflows, tool integration, and advanced orchestration capabilities.
Latest Release: v0.1.0
What's New:
MetaStateSchema - Revolutionary agent composition with type-safe state management
Dynamic Graphs - Runtime graph modification and node composition
Tool Orchestration - Seamless tool discovery, registration, and validation
Performance - 2x faster state transitions with optimized reducers
Developer Experience - Enhanced debugging tools and error messages
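State reducers are the mechanism behind those faster transitions: when a node returns a partial update, a reducer decides how each key is merged into the accumulated state. The sketch below is framework-agnostic and illustrative only (`merge_state` and the reducer table are not part of the haive API):

```python
from operator import add
from typing import Callable

# A reducer merges a node's partial update into the accumulated state.
# Each key gets its own merge strategy: messages append, counters add,
# and any key without a reducer is simply overwritten.
def merge_state(state: dict, update: dict,
                reducers: dict[str, Callable]) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)
        else:
            merged[key] = value
    return merged

reducers = {"messages": add, "steps": lambda a, b: a + b}
state = {"messages": ["hi"], "steps": 1}
state = merge_state(state, {"messages": ["hello"], "steps": 1}, reducers)
# state["messages"] == ["hi", "hello"]; state["steps"] == 2
```

Keeping merge logic in per-key reducers (rather than in each node) is what lets independent nodes write to shared state without clobbering each other.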
Documentation Hub¶
Tutorials
Key Concepts to Master:
Engines - The computational heart of agents (LLMs, retrievers, vector stores)
Schemas - Type-safe state management with Pydantic validation
Graphs - Workflow orchestration with nodes, edges, and conditional routing
Tools - External capabilities and function calling
Core Architecture
- Engine Architecture
- Schema System
- Graph System - Visual AI Workflow Orchestration
- The Visual Revolution
- Beyond Linear Execution
- Core Architecture
- Revolutionary Features
- Advanced Patterns
- Real-World Examples
- Advanced Orchestration
- Performance Optimization
- Integration Examples
- Best Practices
- API Reference
- Enterprise Features
- See Also
Implementation
Architecture Highlights:
Modular Design - Compose complex systems from simple, reusable components
Type Safety - Full Pydantic integration for runtime validation
Async-First - Built on asyncio for high-performance concurrent operations
Extensible - Plugin architecture for custom engines and tools
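The async-first design means independent engine calls (an LLM request and a retriever lookup, say) can be awaited concurrently instead of serially. A minimal illustration using plain `asyncio`, with stand-in coroutines rather than real haive engines:

```python
import asyncio

# Stand-in for an engine call (LLM, retriever, etc.) with simulated latency.
async def call_engine(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # simulates a network round-trip
    return f"{name}: done"

async def main() -> list[str]:
    # gather() runs both calls concurrently, so total wall time is
    # roughly max(delays) rather than their sum.
    return await asyncio.gather(
        call_engine("llm", 0.01),
        call_engine("retriever", 0.02),
    )

results = asyncio.run(main())
# results == ["llm: done", "retriever: done"]
```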
Core Systems
Supporting Systems
Guides
Project Information
Community & Support
Install via pip or poetry
Source code and issues
Community discussions
License
MIT License - Free for commercial and personal use
Core Capabilities¶
Augmented LLM Engine
Multi-provider support (OpenAI, Anthropic, Azure)
Structured output with Pydantic models
Token management and cost tracking
Streaming and async execution
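Structured output works by validating the model's raw response against a Pydantic schema. The engine wiring is haive-specific, but the schema side is plain Pydantic; the `Answer` model and the inlined JSON below are illustrative (in practice the JSON would come back from the provider):

```python
from pydantic import BaseModel, Field

class Answer(BaseModel):
    """Schema the LLM's output is validated against."""
    summary: str = Field(description="One-sentence answer")
    confidence: float = Field(ge=0.0, le=1.0)  # rejected if out of range

# Raw JSON as a provider might return it, inlined here for the example.
raw = '{"summary": "Qubits exploit superposition.", "confidence": 0.9}'
answer = Answer.model_validate_json(raw)
# answer.confidence == 0.9; malformed or out-of-range output raises
# a ValidationError instead of silently propagating.
```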
Retriever & Vector Stores
ChromaDB, FAISS, Pinecone integrations
Embedding model flexibility
Hybrid search capabilities
Document processing pipelines
Type-Safe Schemas
Pydantic-based validation
Automatic serialization/deserialization
State composition and inheritance
Custom field validators
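These features compose in ordinary Pydantic code. A small sketch of a state schema with a custom field validator and a serialization round-trip (`AgentState` is illustrative, not a haive class):

```python
from pydantic import BaseModel, field_validator

class AgentState(BaseModel):
    """Illustrative state schema with validation."""
    messages: list[str] = []
    max_turns: int = 10

    @field_validator("max_turns")
    @classmethod
    def check_positive(cls, v: int) -> int:
        # Custom validator: reject non-positive turn limits at construction.
        if v <= 0:
            raise ValueError("max_turns must be positive")
        return v

state = AgentState(messages=["hi"], max_turns=5)
# Serialization/deserialization round-trips come for free:
restored = AgentState.model_validate_json(state.model_dump_json())
```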
MetaStateSchema
Agent state embedding
Execution tracking
Recompilation management
Graph context preservation
StateGraph Architecture
Node-based computation
Conditional branching
Parallel execution paths
Dynamic graph modification
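Conditional branching boils down to a router: a function that inspects the current state and names the next node. The loop below sketches that idea framework-agnostically, with illustrative node and edge names rather than the haive graph API:

```python
# A router inspects state and returns the name of the next node to run.
def route(state: dict) -> str:
    return "generate" if state.get("analysis") == "completed" else "analyze"

def analyze(state: dict) -> dict:
    return {**state, "analysis": "completed"}

def generate(state: dict) -> dict:
    return {**state, "response": "generated"}

nodes = {"analyze": analyze, "generate": generate}

state: dict = {}
current = "analyze"
for _ in range(10):  # bounded execution guards against routing loops
    state = nodes[current](state)
    if "response" in state:
        break  # terminal condition reached
    current = route(state)
# state == {"analysis": "completed", "response": "generated"}
```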
Advanced Features
Checkpointing and recovery
Graph visualization
Performance profiling
Error handling and retry
Built-in Tools
File operations
Web scraping
API integrations
Database connectors
Tool Management
Automatic discovery
Runtime registration
Validation framework
Human-in-the-loop support
Quick Examples¶
Setting up an Augmented LLM Engine:
from haive.core.engine.aug_llm import AugLLMConfig
from haive.core.schema.prebuilt.messages_state import MessagesState
# Configure the engine with Azure OpenAI
config = AugLLMConfig(
    model="gpt-4",
    temperature=0.7,
    max_tokens=2000,
    system_message="You are a helpful AI assistant.",
    provider="azure",  # or "openai", "anthropic"
    api_base="https://your-resource.openai.azure.com/"
)
# Initialize state management
state = MessagesState()
state.add_user_message("Explain quantum computing")
# The engine is now ready for use in agents
Building a multi-step workflow:
from haive.core.graph.state_graph import BaseGraph
from haive.core.graph.node import create_node
from haive.core.schema.prebuilt import MessagesState
# Create a workflow graph
graph = BaseGraph(state_schema=MessagesState)
# Define processing nodes
async def analyze_node(state: MessagesState):
    # Process and analyze input
    return {"analysis": "completed"}

async def generate_node(state: MessagesState):
    # Generate response based on analysis
    return {"response": "generated"}
# Build the graph
graph.add_node("analyze", analyze_node)
graph.add_node("generate", generate_node)
graph.add_edge("analyze", "generate")
graph.set_entry_point("analyze")
# Compile and execute
workflow = graph.compile()
result = await workflow.ainvoke(state)
Creating and registering custom tools:
from langchain_core.tools import tool
from haive.core.registry import get_registry
from typing import Annotated
@tool
def calculate_compound_interest(
    principal: Annotated[float, "Initial amount"],
    rate: Annotated[float, "Annual interest rate (as decimal)"],
    time: Annotated[int, "Time period in years"],
) -> float:
    """Calculate compound interest with annual compounding."""
    amount = principal * (1 + rate) ** time
    return round(amount - principal, 2)
# Register the tool globally
registry = get_registry()
registry.register_tool("compound_interest", calculate_compound_interest)
# Tools are now available to all agents
from haive.core.tools import get_available_tools
tools = get_available_tools()
Setting up a RAG system with vector stores:
from haive.core.models.vectorstore import VectorStoreConfig
from haive.core.models.embeddings import HuggingFaceEmbeddingConfig
from haive.core.engine.document import DocumentProcessor
# Configure embeddings
embedding_config = HuggingFaceEmbeddingConfig(
    model_name="sentence-transformers/all-mpnet-base-v2",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True}
)
# Setup vector store
vector_config = VectorStoreConfig(
    provider="Chroma",
    embedding_config=embedding_config,
    collection_name="knowledge_base",
    persist_directory="./chroma_db"
)
# Initialize and populate
vector_store = vector_config.create()
# Process documents
processor = DocumentProcessor(chunk_size=1000, chunk_overlap=200)
documents = processor.process_files(["./docs/*.pdf"])
vector_store.add_documents(documents)
# Ready for retrieval
results = vector_store.similarity_search("What is haive?", k=5)
Architecture Overview¶
graph TB
  subgraph "Haive Core Architecture"
    A[Application Layer]
    A --> B[Engine Layer]
    A --> C[Schema Layer]
    A --> D[Graph Layer]
    B --> B1[AugLLM<br/>Engine]
    B --> B2[Retriever<br/>Engine]
    B --> B3[VectorStore<br/>Engine]
    C --> C1[State<br/>Schemas]
    C --> C2[Message<br/>Formats]
    C --> C3[Validation<br/>Rules]
    D --> D1[Nodes &<br/>Edges]
    D --> D2[Execution<br/>Runtime]
    D --> D3[State<br/>Management]
    E[Tool Registry]
    F[Persistence Layer]
    B1 & B2 & B3 --> E
    D --> F
  end
  style A fill:#8b5cf6,color:#fff
  style B fill:#6d28d9,color:#fff
  style C fill:#6d28d9,color:#fff
  style D fill:#6d28d9,color:#fff
Performance & Scalability¶
Async/await throughout
Optimized state transitions
Efficient memory usage
Connection pooling
Distributed execution
Horizontal scaling
Queue-based processing
Load balancing
Automatic retries
Error recovery
State persistence
Health monitoring
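Automatic retries are typically implemented as exponential backoff around a node or engine call. A minimal async sketch of that pattern, where `flaky()` stands in for a transiently failing call (the helper name is illustrative, not a haive API):

```python
import asyncio

# Retry an async callable with exponential backoff, re-raising only
# after the final attempt fails.
async def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

async def flaky() -> str:
    # Fails twice, then succeeds - simulates a transient error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(with_retries(flaky))
# result == "ok" after two failed attempts
```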
Getting Help¶
Comprehensive guides and API reference
Join our Discord for discussions
Report bugs or request features