Haive Core - AI Agent Framework Foundation

🚀 Haive Core

Build Production-Ready AI Agent Systems

The foundational framework for creating sophisticated AI agents with state management, graph-based workflows, tool integration, and advanced orchestration capabilities.

🎉 Latest Release: v0.1.0

What's New:

  • MetaStateSchema - Revolutionary agent composition with type-safe state management

  • Dynamic Graphs - Runtime graph modification and node composition

  • Tool Orchestration - Seamless tool discovery, registration, and validation

  • Performance - 2x faster state transitions with optimized reducers

  • Developer Experience - Enhanced debugging tools and error messages

Documentation Hub

Key Concepts to Master:

  • Engines - The computational heart of agents (LLMs, retrievers, vector stores)

  • Schemas - Type-safe state management with Pydantic validation

  • Graphs - Workflow orchestration with nodes, edges, and conditional routing

  • Tools - External capabilities and function calling

Architecture Highlights:

  • Modular Design - Compose complex systems from simple, reusable components

  • Type Safety - Full Pydantic integration for runtime validation

  • Async-First - Built on asyncio for high-performance concurrent operations

  • Extensible - Plugin architecture for custom engines and tools
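The async-first design means independent engine calls can run concurrently rather than one after another. A minimal sketch of the pattern using plain asyncio (no haive-core APIs; the engine call is simulated):

```python
import asyncio

async def call_engine(name: str, delay: float) -> str:
    # Stand-in for an async engine call (e.g. an LLM or retriever request)
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # The three "engine calls" run concurrently, not sequentially;
    # gather preserves argument order in its result list
    return await asyncio.gather(
        call_engine("llm", 0.01),
        call_engine("retriever", 0.01),
        call_engine("vectorstore", 0.01),
    )

results = asyncio.run(main())
```

With sequential awaits the total latency would be the sum of the three delays; with `gather` it is roughly the maximum.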

🔧 Supporting Systems

Project Information

Community & Support

📦 PyPI Package

Install via pip or poetry

💻 GitHub

Source code and issues

💬 Discord

Community discussions

License

MIT License - Free for commercial and personal use

Core Capabilities

🎮 Engine System

Augmented LLM Engine

  • Multi-provider support (OpenAI, Anthropic, Azure)

  • Structured output with Pydantic models

  • Token management and cost tracking

  • Streaming and async execution

Retriever & Vector Stores

  • ChromaDB, FAISS, Pinecone integrations

  • Embedding model flexibility

  • Hybrid search capabilities

  • Document processing pipelines
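Hybrid search fuses a keyword ranking with a vector-similarity ranking. One common fusion technique is Reciprocal Rank Fusion (RRF); the sketch below illustrates the idea in plain Python and is not necessarily what haive-core uses internally:

```python
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly in multiple lists rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # BM25-style ranking
vector_hits = ["doc1", "doc9", "doc3"]    # embedding-similarity ranking
merged = rrf_merge([keyword_hits, vector_hits])
```

Here `doc1` wins because it ranks well in both lists, even though neither list puts it unambiguously first.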

📋 State Management

Type-Safe Schemas

  • Pydantic-based validation

  • Automatic serialization/deserialization

  • State composition and inheritance

  • Custom field validators
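The pattern behind these features is standard Pydantic: declare fields with constraints, attach validators, and get serialization for free. A generic sketch (illustrative class, not an actual haive-core schema):

```python
from pydantic import BaseModel, Field, field_validator

class AgentState(BaseModel):
    """Illustrative state schema with validation and serialization."""
    step_count: int = Field(default=0, ge=0)  # constrained: never negative
    messages: list[str] = Field(default_factory=list)

    @field_validator("messages")
    @classmethod
    def no_empty_messages(cls, v: list[str]) -> list[str]:
        # Custom field validator: reject blank entries at construction time
        if any(not m.strip() for m in v):
            raise ValueError("messages must be non-empty strings")
        return v

state = AgentState(messages=["hello"])
payload = state.model_dump_json()                # automatic serialization
restored = AgentState.model_validate_json(payload)  # ...and deserialization
```

Invalid data (e.g. `step_count=-1` or a blank message) raises a `ValidationError` immediately, which is what makes graph state transitions type-safe at runtime.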

MetaStateSchema

  • Agent state embedding

  • Execution tracking

  • Recompilation management

  • Graph context preservation
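As a rough illustration of the embedding idea, a meta-level state can wrap per-agent sub-states alongside an execution log. All names below are hypothetical and do not reflect the real MetaStateSchema API:

```python
from pydantic import BaseModel, Field

class SubAgentState(BaseModel):
    # Hypothetical per-agent state embedded in the meta state
    name: str
    completed: bool = False

class MetaState(BaseModel):
    # Hypothetical sketch; field names are illustrative only
    agents: dict[str, SubAgentState] = Field(default_factory=dict)
    execution_log: list[str] = Field(default_factory=list)

    def record(self, agent: str, event: str) -> None:
        # Execution tracking: append a structured log entry
        self.execution_log.append(f"{agent}: {event}")

meta = MetaState()
meta.agents["researcher"] = SubAgentState(name="researcher")
meta.record("researcher", "started")
```

Because each sub-state is itself a validated model, composing agents this way keeps the combined state type-safe end to end.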

🔄 Graph Workflows

StateGraph Architecture

  • Node-based computation

  • Conditional branching

  • Parallel execution paths

  • Dynamic graph modification

Advanced Features

  • Checkpointing and recovery

  • Graph visualization

  • Performance profiling

  • Error handling and retry
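The usual shape of "error handling and retry" for async node execution is exponential backoff with jitter. A self-contained sketch of the pattern (plain asyncio, not haive-core's built-in retry):

```python
import asyncio
import random

async def with_retry(fn, attempts: int = 3, base_delay: float = 0.5):
    """Retry an async callable with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            # Backoff doubles each attempt; jitter avoids thundering herds
            await asyncio.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds -- simulates a transient engine error
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(with_retry(flaky, attempts=5, base_delay=0.01))
```

Combined with checkpointing, a retried node can resume from the last persisted state rather than re-running the whole graph.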

🔧 Tool Ecosystem

Built-in Tools

  • File operations

  • Web scraping

  • API integrations

  • Database connectors

Tool Management

  • Automatic discovery

  • Runtime registration

  • Validation framework

  • Human-in-the-loop support
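Conceptually, runtime registration plus validation is a guarded name-to-callable mapping. A toy sketch of that shape (not haive-core's actual registry class):

```python
class ToolRegistry:
    """Toy registry showing registration + validation of tools."""

    def __init__(self) -> None:
        self._tools: dict[str, object] = {}

    def register(self, name: str, fn) -> None:
        # Validation: tools must be callable and documented before agents can use them
        if not callable(fn):
            raise TypeError(f"{name!r} is not callable")
        if not fn.__doc__:
            raise ValueError(f"{name!r} must have a docstring")
        self._tools[name] = fn

    def get(self, name: str):
        return self._tools[name]

    def available(self) -> list[str]:
        return sorted(self._tools)

registry = ToolRegistry()

def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

registry.register("word_count", word_count)
```

Rejecting undocumented tools at registration time matters because the docstring typically becomes the description the LLM sees when deciding which tool to call.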

Quick Examples

Setting up an Augmented LLM Engine:

from haive.core.engine.aug_llm import AugLLMConfig
from haive.core.schema.prebuilt.messages_state import MessagesState

# Configure the engine with Azure OpenAI
config = AugLLMConfig(
    model="gpt-4",
    temperature=0.7,
    max_tokens=2000,
    system_message="You are a helpful AI assistant.",
    provider="azure",  # or "openai", "anthropic"
    api_base="https://your-resource.openai.azure.com/"
)

# Initialize state management
state = MessagesState()
state.add_user_message("Explain quantum computing")

# The engine is now ready for use in agents

Building a multi-step workflow:

from haive.core.graph.state_graph import BaseGraph
from haive.core.graph.node import create_node
from haive.core.schema.prebuilt import MessagesState

# Create a workflow graph
graph = BaseGraph(state_schema=MessagesState)

# Define processing nodes
async def analyze_node(state: MessagesState):
    # Process and analyze input
    return {"analysis": "completed"}

async def generate_node(state: MessagesState):
    # Generate response based on analysis
    return {"response": "generated"}

# Build the graph
graph.add_node("analyze", analyze_node)
graph.add_node("generate", generate_node)
graph.add_edge("analyze", "generate")
graph.set_entry_point("analyze")

# Compile and execute (ainvoke must be awaited inside an async function / event loop)
workflow = graph.compile()
result = await workflow.ainvoke(state)

Creating and registering custom tools:

from langchain_core.tools import tool
from haive.core.registry import get_registry
from typing import Annotated

@tool
def calculate_compound_interest(
    principal: Annotated[float, "Initial amount"],
    rate: Annotated[float, "Annual interest rate (as decimal)"],
    time: Annotated[int, "Time period in years"]
) -> float:
    """Calculate compound interest with annual compounding."""
    amount = principal * (1 + rate) ** time
    return round(amount - principal, 2)

# Register the tool globally
registry = get_registry()
registry.register_tool("compound_interest", calculate_compound_interest)

# Tools are now available to all agents
from haive.core.tools import get_available_tools
tools = get_available_tools()

Setting up a RAG system with vector stores:

from haive.core.models.vectorstore import VectorStoreConfig
from haive.core.models.embeddings import HuggingFaceEmbeddingConfig
from haive.core.engine.document import DocumentProcessor

# Configure embeddings
embedding_config = HuggingFaceEmbeddingConfig(
    model_name="sentence-transformers/all-mpnet-base-v2",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True}
)

# Setup vector store
vector_config = VectorStoreConfig(
    provider="Chroma",
    embedding_config=embedding_config,
    collection_name="knowledge_base",
    persist_directory="./chroma_db"
)

# Initialize and populate
vector_store = vector_config.create()

# Process documents
processor = DocumentProcessor(chunk_size=1000, chunk_overlap=200)
documents = processor.process_files(["./docs/*.pdf"])
vector_store.add_documents(documents)

# Ready for retrieval
results = vector_store.similarity_search("What is haive?", k=5)

Architecture Overview

graph TB
   subgraph "Haive Core Architecture"
      A[Application Layer]
      A --> B[Engine Layer]
      A --> C[Schema Layer]
      A --> D[Graph Layer]

      B --> B1[AugLLM<br/>Engine]
      B --> B2[Retriever<br/>Engine]
      B --> B3[VectorStore<br/>Engine]

      C --> C1[State<br/>Schemas]
      C --> C2[Message<br/>Formats]
      C --> C3[Validation<br/>Rules]

      D --> D1[Nodes &<br/>Edges]
      D --> D2[Execution<br/>Runtime]
      D --> D3[State<br/>Management]

      E[Tool Registry]
      F[Persistence Layer]

      B1 & B2 & B3 --> E
      D --> F
   end

   style A fill:#8b5cf6,color:#fff
   style B fill:#6d28d9,color:#fff
   style C fill:#6d28d9,color:#fff
   style D fill:#6d28d9,color:#fff

Performance & Scalability

⚡ Fast
  • Async/await throughout

  • Optimized state transitions

  • Efficient memory usage

  • Connection pooling

📈 Scalable
  • Distributed execution

  • Horizontal scaling

  • Queue-based processing

  • Load balancing

💪 Reliable
  • Automatic retries

  • Error recovery

  • State persistence

  • Health monitoring

Getting Help

📖 Documentation

Comprehensive guides and API reference

💬 Community

Join our Discord for discussions

๐Ÿ› Issues

Report bugs or request features

Search & Navigation

๐Ÿ” Search Documentation

Search Page

📑 Full Index

Index

📦 Module Index

Module Index


Note

Documentation Version: This documentation is automatically generated from the latest source code and is synchronized with version 0.1.0 of haive-core. All code examples are tested in our CI pipeline.

See also

Related Packages in the Haive Ecosystem: