agents.rag.chain_collection

Complete Collection of RAG Agents using ChainAgent.

This module provides a comprehensive collection of Retrieval-Augmented Generation (RAG) agents implemented using the ChainAgent framework. Each agent represents a different RAG strategy or pattern, optimized for specific use cases.

Examples

>>> from haive.agents.rag.chain_collection import RAGChainCollection
>>> from langchain_core.documents import Document
>>> from haive.core.models.llm.base import AzureLLMConfig
>>>
>>> docs = [Document(page_content="AI is transforming industries...")]
>>> llm_config = AzureLLMConfig(deployment_name="gpt-4")
>>> collection = RAGChainCollection()
>>> agent = collection.create_simple_rag(docs, llm_config)

Typical usage:
  • Create documents for retrieval

  • Choose appropriate RAG strategy

  • Configure LLM and retrieval settings

  • Build agent using collection methods

  • Execute queries through agent interface

Available RAG Strategies:
  • Simple RAG: Basic retrieve-and-generate pattern

  • HyDE RAG: Hypothetical document generation for enhanced retrieval

  • Fusion RAG: Multi-query retrieval with reciprocal rank fusion

  • Step-Back RAG: Abstract reasoning before specific answers

  • Speculative RAG: Hypothesis generation and verification

  • Memory-Aware RAG: Conversation context integration

  • FLARE RAG: Forward-looking active retrieval with refinement

Classes

RAGChainCollection

Collection of all RAG agents as ChainAgents.

Functions

create_rag_chain(rag_type, documents[, llm_config])

Create any RAG chain by type.

create_rag_pipeline(rag_types, documents[, ...])

Create a pipeline of multiple RAG approaches.

Module Contents

class agents.rag.chain_collection.RAGChainCollection

Collection of all RAG agents as ChainAgents.

This class provides static factory methods for creating different types of RAG agents using the ChainAgent framework. Each method builds a complete RAG workflow with appropriate retrieval and generation steps.

Examples

>>> collection = RAGChainCollection()
>>> agent = collection.create_simple_rag(documents, llm_config)
>>> response = agent.invoke({"query": "What is machine learning?"})

static create_flare_rag(documents, llm_config)

Create a FLARE RAG agent that interleaves generation with forward-looking active retrieval, refining low-confidence output with freshly retrieved evidence.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
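
The FLARE control flow can be sketched independently of the framework: draft the answer a sentence at a time, and when confidence in a draft is low, retrieve evidence and redraft before committing. Every callable below (`draft_next`, `confidence`, `retrieve`, `regenerate`) is a hypothetical stand-in for an LLM or retriever call, not a haive API:

```python
def flare_generate(query, documents, draft_next, confidence, retrieve, regenerate):
    """Forward-looking active retrieval (toy sketch).

    Draft the answer one sentence at a time; a low-confidence draft
    triggers retrieval and a redraft before the sentence is committed.
    """
    answer = []
    while True:
        sentence = draft_next(query, answer)
        if sentence is None:                      # generation finished
            break
        if confidence(sentence) < 0.5:            # low confidence -> look ahead
            evidence = retrieve(sentence, documents)
            sentence = regenerate(sentence, evidence)
        answer.append(sentence)
    return " ".join(answer)

# Deterministic stand-ins so the control flow is visible:
drafts = ["Photosynthesis uses sunlight.", "It happens in the mitochondria?"]
draft_next = lambda q, ans: drafts[len(ans)] if len(ans) < len(drafts) else None
confidence = lambda s: 0.2 if s.endswith("?") else 0.9   # '?' marks an unsure draft
retrieve = lambda s, docs: docs
regenerate = lambda s, evidence: evidence[0]

docs = ["It happens in the chloroplasts."]
answer = flare_generate("Where does photosynthesis occur?", docs,
                        draft_next, confidence, retrieve, regenerate)
```

The unsure second sentence is replaced by the retrieved passage, so the final answer mentions chloroplasts rather than the hallucinated mitochondria.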

static create_fusion_rag(documents, llm_config)

Create a Fusion RAG agent that retrieves with multiple query variants and merges the results via reciprocal rank fusion.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
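
The reciprocal rank fusion step itself is framework-agnostic and worth seeing in isolation. A minimal sketch (the function name is illustrative; `k=60` is the constant conventionally used with RRF):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked result lists into one.

    Each ranking is a list of document ids, best first. A document's
    fused score is the sum of 1 / (k + rank) over every list it
    appears in; the constant k dampens the influence of top ranks.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three query variants produced overlapping result lists:
fused = reciprocal_rank_fusion([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
```

Here "d2" wins because it sits at or near the top of all three lists, even though "d1" leads one of them; that consensus effect is the point of fusion.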

static create_hyde_rag(documents, llm_config)

Create a HyDE RAG agent that generates a hypothetical answer document first and retrieves against it instead of the raw query.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
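
The HyDE idea can be sketched without the framework: embed a hypothetical answer rather than the raw query, on the premise that answer-shaped text sits closer to relevant documents in embedding space. `generate` and `embed` below are toy stand-ins (token sets and overlap counting instead of a real LLM and real embeddings), not haive APIs:

```python
def hyde_retrieve(query, documents, generate, embed, top_k=2):
    """HyDE: retrieve with a *hypothetical answer*, not the query itself.

    `generate` drafts an answer without retrieval; `embed` maps text to
    a comparable representation. Here "embeddings" are token sets and
    similarity is token overlap, purely for illustration.
    """
    hypothetical = generate(query)          # draft an answer first
    query_vec = embed(hypothetical)         # embed the draft, not the query
    scored = sorted(
        documents,
        key=lambda doc: len(query_vec & embed(doc)),
        reverse=True,
    )
    return scored[:top_k]

# Toy stand-ins: "generation" appends typical answer vocabulary,
# "embedding" lowercases and tokenizes.
generate = lambda q: q + " neural networks learn representations from data"
embed = lambda text: set(text.lower().split())

docs = [
    "Neural networks learn layered representations from data.",
    "The stock market closed higher on Friday.",
]
best = hyde_retrieve("How do neural networks work?", docs, generate, embed, top_k=1)
```

The hypothetical answer shares far more vocabulary with the on-topic document than the bare question does, which is what makes the retrieval step more selective.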

static create_memory_aware_rag(documents, llm_config)

Create a Memory-Aware RAG agent that integrates conversation memory into retrieval and generation.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
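
The essential move in memory-aware RAG is conditioning retrieval on the running conversation rather than the latest turn alone. A minimal sketch with stand-in `retrieve` and `generate` callables (substring matching and first-passage echo, not real haive components):

```python
def memory_aware_rag(query, documents, memory, retrieve, generate):
    """Memory-aware pattern (toy sketch).

    The retrieval query is built from the whole conversation so that
    follow-up questions ("what about its typing?") still hit the right
    documents, and the new exchange is written back to memory.
    """
    contextual_query = " ".join(memory + [query])   # naive history folding
    context = retrieve(contextual_query, documents)
    answer = generate(query, context)
    memory.extend([query, answer])
    return answer

# Stand-ins: substring-match retrieval, first-passage "generation".
retrieve = lambda q, docs: [d for d in docs if any(w in d.lower() for w in q.lower().split())]
generate = lambda q, ctx: ctx[0] if ctx else "no context found"

memory = ["tell me about python"]
docs = ["python is a programming language", "snakes are reptiles"]
answer = memory_aware_rag("what about its typing?", docs, memory, retrieve, generate)
```

The follow-up question alone matches nothing, but folded together with the remembered first turn it retrieves the Python passage.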

static create_simple_rag(documents, llm_config)

Create a simple RAG agent with basic retrieve-and-generate pattern.

This is the most straightforward RAG implementation: retrieve relevant documents based on the query, then generate an answer using those documents as context.

Parameters:
  • documents (List[Document]) – Documents to use for retrieval.

  • llm_config (LLMConfig) – LLM configuration for generation.

Returns:

A configured simple RAG agent.

Return type:

ChainAgent

Examples

>>> from langchain_core.documents import Document
>>> docs = [Document(page_content="AI helps solve problems...")]
>>> agent = RAGChainCollection.create_simple_rag(docs, llm_config)
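
The retrieve-and-generate shape of simple RAG can be sketched with two stand-in callables (keyword-overlap retrieval and an identity "LLM"); only the two-step structure, not the stand-ins, reflects the actual agent:

```python
def simple_rag(query, documents, retrieve, generate):
    """Basic retrieve-and-generate (toy sketch).

    `retrieve` and `generate` stand in for a vector-store lookup and an
    LLM call; the point is the two-step shape of the pattern.
    """
    context = retrieve(query, documents)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

# Stand-ins: keyword-overlap retrieval, identity "generation".
def retrieve(query, documents):
    words = set(query.lower().split())
    return max(documents, key=lambda d: len(words & set(d.lower().split())))

docs = ["Machine learning fits models to data.", "Paris is the capital of France."]
answer = simple_rag("what is machine learning", docs, retrieve, lambda p: p)
```
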
static create_speculative_rag(documents, llm_config)

Create a Speculative RAG agent that drafts candidate hypotheses and verifies them against retrieved evidence.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
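
The speculative pattern reverses the usual order: draft several candidate answers up front, then keep the one best supported by the evidence. A toy sketch with stand-in drafting and verification callables (a fixed candidate pair and token-overlap scoring, not haive APIs):

```python
def speculative_rag(query, documents, draft_answers, support):
    """Speculative pattern (toy sketch).

    Draft candidate answers first, then keep the candidate with the
    strongest support in the retrieved documents.
    """
    candidates = draft_answers(query)
    return max(candidates, key=lambda c: support(c, documents))

# Stand-ins: a fixed pair of drafts, token-overlap "verification".
draft_answers = lambda q: ["The sky is green.", "The sky is blue."]
support = lambda cand, docs: sum(
    len(set(cand.lower().split()) & set(d.lower().split())) for d in docs
)

docs = ["rayleigh scattering makes the sky appear blue."]
best = speculative_rag("Why is the sky blue?", docs, draft_answers, support)
```
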

static create_step_back_rag(documents, llm_config)

Create a Step-Back RAG agent that reasons about a more abstract question before producing the specific answer.

Parameters:
  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig)

Return type:

haive.agents.chain.ChainAgent
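
The step-back move is to derive a more general question first, retrieve for both questions, and then answer the original one against the combined context. A sketch with stand-in `abstract`, `retrieve`, and `generate` callables (naive keyword matching, not haive APIs):

```python
def step_back_rag(query, documents, abstract, retrieve, generate):
    """Step-back pattern (toy sketch).

    Derive a "step-back" question, retrieve context for both the
    abstract and the original question, then answer the original
    question against the merged context.
    """
    step_back_q = abstract(query)
    context = list(dict.fromkeys(                 # merge, preserving order
        retrieve(step_back_q, documents) + retrieve(query, documents)
    ))
    return generate(query, context)

# Stand-ins: abstraction prepends "general principles of", retrieval is
# a naive keyword match, generation just returns (question, context).
abstract = lambda q: "general principles of " + q
retrieve = lambda q, docs: [d for d in docs if any(w in d for w in q.split())]
generate = lambda q, ctx: (q, ctx)

docs = ["general principles of optics", "refraction of light in water"]
question, context = step_back_rag("refraction", docs, abstract, retrieve, generate)
```

The direct query alone would only have matched the second document; the step-back question additionally pulls in the general background passage.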

agents.rag.chain_collection.create_rag_chain(rag_type, documents, llm_config=None, **kwargs)

Create any RAG chain by type.

Parameters:
  • rag_type (str)

  • documents (list[langchain_core.documents.Document])

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

Return type:

haive.agents.chain.ChainAgent
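
Under the hood this is a dispatch-by-name factory. A sketch of that pattern with stub factories; the actual `rag_type` strings accepted by `create_rag_chain` are not documented here, so the keys "simple" and "hyde" below are assumptions:

```python
def make_rag_chain_factory(factories):
    """Dispatch-by-name pattern behind create_rag_chain (sketch).

    `factories` maps a rag_type string to a callable that builds the
    agent; unknown names fail fast, listing the valid options.
    """
    def create_rag_chain(rag_type, documents, llm_config=None, **kwargs):
        try:
            factory = factories[rag_type]
        except KeyError:
            valid = ", ".join(sorted(factories))
            raise ValueError(
                f"Unknown rag_type {rag_type!r}; expected one of: {valid}"
            ) from None
        return factory(documents, llm_config, **kwargs)
    return create_rag_chain

# Stub factories standing in for the RAGChainCollection static methods.
create = make_rag_chain_factory({
    "simple": lambda docs, cfg, **kw: ("simple-agent", docs, cfg),
    "hyde": lambda docs, cfg, **kw: ("hyde-agent", docs, cfg),
})
agent = create("hyde", ["doc"], llm_config=None)
```

Failing fast with the list of valid names keeps a typo in `rag_type` from surfacing later as a confusing attribute error deep inside agent construction.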

agents.rag.chain_collection.create_rag_pipeline(rag_types, documents, combination_strategy='sequential', llm_config=None)

Create a pipeline of multiple RAG approaches.

Parameters:
  • rag_types (list[str])

  • documents (list[langchain_core.documents.Document])

  • combination_strategy (str)

  • llm_config (haive.core.models.llm.base.LLMConfig | None)

Return type:

haive.agents.chain.ChainAgent
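
One plausible reading of `combination_strategy='sequential'`, and only a reading since the combination semantics are not specified in this reference, is that each stage's answer becomes the next stage's query. A sketch with stub agents so the data flow is visible:

```python
def run_sequential_pipeline(agents, query):
    """Sequential combination (sketch, assumed semantics).

    Each agent maps {"query": ...} to {"answer": ...}; the answer is
    fed forward as the next agent's query.
    """
    for agent in agents:
        query = agent({"query": query})["answer"]   # feed answer forward
    return query

# Stub agents that tag the query so the composition order is visible.
stage = lambda name: (lambda state: {"answer": f"{name}({state['query']})"})
result = run_sequential_pipeline([stage("hyde"), stage("fusion")], "q")
```
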