haive.core.engine.retriever.providers.LlamaIndexRetrieverConfig

LlamaIndex Retriever implementation for the Haive framework.

This module provides a configuration class for the LlamaIndex retriever, which integrates LlamaIndex’s retrieval capabilities with LangChain. LlamaIndex is a data framework for LLM applications with sophisticated indexing and retrieval mechanisms.

The LlamaIndexRetriever works by:

1. Using LlamaIndex’s retrieval engines
2. Supporting various index types (vector, keyword, graph, etc.)
3. Enabling sophisticated query processing
4. Providing LlamaIndex-specific optimizations

This retriever is particularly useful when:

- Integrating LlamaIndex with LangChain workflows
- Needing LlamaIndex’s advanced indexing capabilities
- Leveraging LlamaIndex’s query engines
- Building complex retrieval pipelines
- Using LlamaIndex’s data connectors

The implementation integrates LlamaIndex retrievers with LangChain while providing a consistent Haive configuration interface.
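The adapter pattern behind this integration can be sketched as follows. This is an illustrative, self-contained sketch only, not the actual Haive implementation: `StubLlamaIndexRetriever` and `LlamaIndexToLangChainAdapter` are stand-in names, and the toy word-overlap scoring stands in for a real LlamaIndex index such as `VectorStoreIndex` with `as_retriever()`.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Minimal stand-in for langchain_core.documents.Document."""
    page_content: str


class StubLlamaIndexRetriever:
    """Stand-in for a LlamaIndex retriever (e.g. index.as_retriever()).

    Uses toy word-overlap scoring in place of real vector similarity.
    """

    def __init__(self, nodes: list[str], similarity_top_k: int = 2):
        self.nodes = nodes
        self.similarity_top_k = similarity_top_k

    def retrieve(self, query: str) -> list[str]:
        # Score each node by the number of words it shares with the query.
        q = set(query.lower().split())
        scored = sorted(self.nodes, key=lambda n: -len(q & set(n.lower().split())))
        return scored[: self.similarity_top_k]


class LlamaIndexToLangChainAdapter:
    """Exposes a LlamaIndex-style retriever behind LangChain's
    get_relevant_documents() interface."""

    def __init__(self, li_retriever: StubLlamaIndexRetriever):
        self.li_retriever = li_retriever

    def get_relevant_documents(self, query: str) -> list[Document]:
        # Convert retrieved node text into LangChain-style Documents.
        return [Document(page_content=t) for t in self.li_retriever.retrieve(query)]


nodes = [
    "vector stores enable semantic search",
    "graph indexes capture relationships",
]
adapter = LlamaIndexToLangChainAdapter(
    StubLlamaIndexRetriever(nodes, similarity_top_k=1)
)
docs = adapter.get_relevant_documents("semantic search")
print(docs[0].page_content)  # → vector stores enable semantic search
```

A real `instantiate()` call performs the same wrapping against an actual LlamaIndex index, either rebuilt from `documents` or loaded from `index_path`.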

Classes

LlamaIndexRetrieverConfig

Configuration for LlamaIndex retriever in the Haive framework.

Module Contents

class haive.core.engine.retriever.providers.LlamaIndexRetrieverConfig.LlamaIndexRetrieverConfig[source]

Bases: haive.core.engine.retriever.retriever.BaseRetrieverConfig

Configuration for LlamaIndex retriever in the Haive framework.

This retriever integrates LlamaIndex’s retrieval capabilities with LangChain, enabling the use of LlamaIndex’s sophisticated indexing and query mechanisms.

retriever_type

The type of retriever (always LLAMA_INDEX).

Type:

RetrieverType

index_path

Path to a persisted LlamaIndex index.

Type:

Optional[str]

documents

Documents to index (if not loading from path).

Type:

List[Document]

k

Number of documents to retrieve.

Type:

int

index_type

Type of LlamaIndex index to create.

Type:

str

similarity_top_k

Top-k for similarity search.

Type:

int

Examples

>>> from haive.core.engine.retriever import LlamaIndexRetrieverConfig
>>> from langchain_core.documents import Document
>>>
>>> # Create documents
>>> docs = [
...     Document(page_content="LlamaIndex provides data framework for LLMs"),
...     Document(page_content="Vector stores enable semantic search"),
...     Document(page_content="Graph indexes capture relationships")
... ]
>>>
>>> # Create the LlamaIndex retriever config
>>> config = LlamaIndexRetrieverConfig(
...     name="llamaindex_retriever",
...     documents=docs,
...     k=5,
...     index_type="vector",
...     similarity_top_k=10
... )
>>>
>>> # Instantiate and use the retriever
>>> retriever = config.instantiate()
>>> results = retriever.get_relevant_documents("semantic search with vectors")
>>>
>>> # Example with graph index
>>> graph_config = LlamaIndexRetrieverConfig(
...     name="llamaindex_graph_retriever",
...     documents=docs,
...     index_type="knowledge_graph",
...     k=3
... )
get_input_fields()[source]

Return input field definitions for LlamaIndex retriever.

Return type:

dict[str, tuple[type, Any]]

get_output_fields()[source]

Return output field definitions for LlamaIndex retriever.

Return type:

dict[str, tuple[type, Any]]
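Both methods return Pydantic-style field definitions: a dict mapping field names to `(type, default)` tuples, where `...` marks a required field. A hypothetical illustration of the shape only; the field names `query` and `documents` are assumptions, not confirmed by this reference:

```python
from typing import Any

# Hypothetical field definitions in the dict[str, tuple[type, Any]] shape
# returned by get_input_fields()/get_output_fields(). The actual field
# names used by Haive may differ.
input_fields: dict[str, tuple[type, Any]] = {
    "query": (str, ...),  # Ellipsis means the field is required
}
output_fields: dict[str, tuple[type, Any]] = {
    "documents": (list, []),  # retrieved documents, default empty list
}

required = [name for name, (_, default) in input_fields.items() if default is ...]
print(required)  # → ['query']
```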

instantiate()[source]

Create a LlamaIndex retriever from this configuration.

Returns:

Instantiated retriever ready for LlamaIndex-powered search.

Return type:

LlamaIndexRetriever

Raises: