haive.core.engine.retriever.providers.ContextualCompressionRetrieverConfig¶
Contextual Compression Retriever implementation for the Haive framework.
This module provides a configuration class for the Contextual Compression retriever, which compresses retrieved documents to extract only the most relevant information relative to the query, improving both relevance and efficiency.
The ContextualCompressionRetriever works by:

1. Using a base retriever to get initial document candidates
2. Applying a compressor (LLM or extractive) to compress each document
3. Extracting only the parts of documents that are relevant to the query
4. Returning compressed, more focused document content
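Because the implementation wraps LangChain's contextual compression pipeline (see below), the steps above correspond to a construction roughly like the following. This is a minimal sketch, assuming llm and base_retriever are already-constructed LangChain objects; exact module paths may differ between LangChain versions:

>>> from langchain.retrievers import ContextualCompressionRetriever
>>> from langchain.retrievers.document_compressors import LLMChainExtractor
>>>
>>> compressor = LLMChainExtractor.from_llm(llm)  # LLM-based extractive compressor (step 2)
>>> retriever = ContextualCompressionRetriever(
...     base_compressor=compressor,    # compresses each candidate against the query (steps 2-3)
...     base_retriever=base_retriever, # supplies the initial candidates (step 1)
... )
>>> docs = retriever.get_relevant_documents("query")  # compressed, query-focused documents (step 4)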
This retriever is particularly useful when:

- Documents are long and contain irrelevant sections
- You need to reduce token usage in downstream processing
- You want to improve precision by filtering out noise
- You are building systems with strict context length limits
The implementation integrates with LangChain's ContextualCompressionRetriever while providing a consistent Haive configuration interface with flexible compression options.
Classes¶
ContextualCompressionRetrieverConfig | Configuration for Contextual Compression retriever in the Haive framework.
Module Contents¶
- class haive.core.engine.retriever.providers.ContextualCompressionRetrieverConfig.ContextualCompressionRetrieverConfig[source]¶
Bases:
haive.core.engine.retriever.retriever.BaseRetrieverConfig
Configuration for Contextual Compression retriever in the Haive framework.
This retriever compresses retrieved documents to extract only the most relevant information relative to the query, improving both relevance and efficiency.
- retriever_type¶
The type of retriever (always CONTEXTUAL_COMPRESSION).
- Type:
- base_retriever¶
The underlying retriever to get initial candidates.
- Type:
- compressor_type¶
Type of compressor to use ("llm_chain_extract", "llm_chain_filter").
- Type:
str
- llm_config¶
LLM configuration for compression (required for LLM compressors).
- Type:
Optional[AugLLMConfig]
Examples
>>> from haive.core.engine.retriever import ContextualCompressionRetrieverConfig
>>> from haive.core.engine.retriever.providers.VectorStoreRetrieverConfig import VectorStoreRetrieverConfig
>>> from haive.core.engine.aug_llm import AugLLMConfig
>>>
>>> # Create base retriever and LLM config
>>> base_config = VectorStoreRetrieverConfig(name="base", vectorstore_config=vs_config)
>>> llm_config = AugLLMConfig(model_name="gpt-3.5-turbo", provider="openai")
>>>
>>> # Create contextual compression retriever
>>> config = ContextualCompressionRetrieverConfig(
...     name="compression_retriever",
...     base_retriever=base_config,
...     compressor_type="llm_chain_extract",
...     llm_config=llm_config
... )
>>>
>>> # Instantiate and use the retriever
>>> retriever = config.instantiate()
>>> docs = retriever.get_relevant_documents("machine learning algorithms")
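The "llm_chain_filter" option presumably maps to LangChain's LLMChainFilter, which keeps or drops whole documents rather than extracting passages from them. A hypothetical variant of the configuration above, reusing the same base_config and llm_config:

>>> filter_config = ContextualCompressionRetrieverConfig(
...     name="filter_retriever",
...     base_retriever=base_config,
...     compressor_type="llm_chain_filter",  # filter whole documents instead of extracting
...     llm_config=llm_config
... )
>>> filter_retriever = filter_config.instantiate()
>>> docs = filter_retriever.get_relevant_documents("machine learning algorithms")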
- instantiate()[source]¶
Create a Contextual Compression retriever from this configuration.
- Returns:
Instantiated retriever ready for compression retrieval.
- Return type:
ContextualCompressionRetriever
- Raises:
ImportError - If required packages are not available.
ValueError - If configuration is invalid.
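Since instantiate() can raise either of the exceptions above, callers may want to handle them explicitly. A minimal sketch of such handling (the error messages are illustrative, not part of the Haive API):

>>> try:
...     retriever = config.instantiate()
... except ImportError as err:
...     print(f"Missing dependency, install the required LangChain packages: {err}")
... except ValueError as err:
...     print(f"Invalid configuration, e.g. llm_config missing for an LLM compressor: {err}")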