agents.memory_v2.simple_memory_agent

SimpleMemoryAgent with token-aware memory management and summarization.

This agent follows V3 enhanced patterns with automatic summarization when approaching token limits, similar to LangMem’s approach.

Classes

SimpleMemoryAgent

Memory agent with token tracking and automatic summarization.

TokenAwareMemoryConfig

Configuration for token-aware memory management.

Module Contents

class agents.memory_v2.simple_memory_agent.SimpleMemoryAgent

Bases: haive.agents.simple.enhanced_agent_v3.EnhancedSimpleAgent

Memory agent with token tracking and automatic summarization.

This agent follows V3 enhanced patterns and implements LangMem-style memory management with:

  • Automatic token tracking for all operations

  • Progressive summarization when approaching limits

  • Running summary maintenance

  • Memory rewriting for compression

  • Smart retrieval with token awareness

The agent monitors token usage and automatically triggers summarization or memory rewriting to stay within context limits while preserving important information.

Examples

Basic usage:

from agents.memory_v2.simple_memory_agent import SimpleMemoryAgent, TokenAwareMemoryConfig

agent = SimpleMemoryAgent(
    name="assistant_memory",
    memory_config=TokenAwareMemoryConfig(
        max_context_tokens=4000,
        summarization_strategy="progressive"
    )
)

# Store memories
agent.run("Remember that I prefer coffee over tea")
agent.run("My favorite coffee is Ethiopian single origin")

# Retrieve with token awareness
response = agent.run("What beverages do I like?")

With custom thresholds:

config = TokenAwareMemoryConfig(
    max_context_tokens=8000,
    warning_threshold=0.6,
    critical_threshold=0.8,
    preserve_recent_memories=20
)

agent = SimpleMemoryAgent(
    name="long_term_memory",
    memory_config=config,
    debug_mode=True
)

build_graph()

Build memory graph with pre-hook system and token-aware branching.

The graph implements a pre-hook pattern (a wiring sketch follows this entry):

  1. Pre-hook node (checks tokens, decides routing)

  2. Branching based on pre-hook decisions

  3. Memory processing (store/retrieve/search)

  4. Summarization (when triggered by pre-hook)

  5. Running summary updates

Flow:

START -> pre_hook -> {process_memory, summarize_critical, summarize_warning}
      -> [optional: update_summary] -> END

Return type:

haive.core.graph.state_graph.base_graph2.BaseGraph
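
The haive BaseGraph construction is not shown in this reference, so the sketch below approximates the same topology with plain LangGraph StateGraph wiring. The node and callback names come from this class; sketch_build_graph and state_schema are hypothetical, and the static edges stand in for routing that the real nodes may perform via Command returns.

from langgraph.graph import StateGraph, START, END


def sketch_build_graph(agent, state_schema):
    """Approximate the pre-hook topology with a plain LangGraph StateGraph."""
    builder = StateGraph(state_schema)

    builder.add_node("pre_hook", agent.pre_hook_node)
    builder.add_node("process_memory", agent.process_memory_node)
    builder.add_node("summarize_critical", agent.summarize_critical_node)
    builder.add_node("summarize_warning", agent.summarize_warning_node)
    builder.add_node("update_summary", agent.update_summary_node)

    builder.add_edge(START, "pre_hook")
    # Branch on the pre-hook's routing decision.
    builder.add_conditional_edges(
        "pre_hook",
        agent.route_from_pre_hook,
        ["process_memory", "summarize_critical", "summarize_warning"],
    )
    # In the real graph the summarization nodes may reach update_summary via
    # Command(goto=...); static edges are used here for readability.
    builder.add_edge("process_memory", END)
    builder.add_edge("summarize_critical", "update_summary")
    builder.add_edge("summarize_warning", "update_summary")
    builder.add_edge("update_summary", END)

    return builder.compile()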

check_tokens_node(state)

Check token usage and determine if action needed.

Parameters:

state (agents.memory_v2.memory_state_original.MemoryState)

Return type:

dict[str, Any]
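
The tokenizer behind this check is not documented here. The sketch below is a rough illustration that estimates tokens from character counts; the state keys (messages, max_context_tokens) and returned keys (total_tokens, token_status) are assumptions for illustration only.

from typing import Any


def sketch_check_tokens(state: dict) -> dict[str, Any]:
    # Rough heuristic: ~4 characters per token; the real node uses an actual tokenizer.
    text = " ".join(str(getattr(m, "content", m)) for m in state.get("messages", []))
    approx_tokens = len(text) // 4
    limit = state.get("max_context_tokens", 4000)
    ratio = approx_tokens / limit if limit else 0.0
    status = "critical" if ratio >= 0.8 else "warning" if ratio >= 0.6 else "ok"
    return {"total_tokens": approx_tokens, "token_status": status}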

consolidate_memories_node(state)

Consolidate related memories to reduce count.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

create_summary_node(state)

Create initial running summary.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

emergency_compress_node(state)

Emergency compression when critically over limits.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

extract_entities_node(state)

Extract entities from content using LLM.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

extract_relationships_node(state)

Extract relationships from content using LLM.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

get_memory_status()

Get comprehensive memory and token status.

Return type:

dict[str, Any]
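
A hypothetical usage sketch; the dictionary keys shown are illustrative assumptions, not a documented schema.

status = agent.get_memory_status()
# Assumed keys, for illustration only.
print(f"tokens: {status.get('total_tokens')} / {status.get('max_context_tokens')}")
print(f"memories stored: {status.get('memory_count')}")
print(f"running summary present: {bool(status.get('running_summary'))}")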

pre_hook_node(state)

Pre-hook node that analyzes state and decides routing.

This is the core of the pre-hook system (a sketch of the decision logic follows this entry). It:

  1. Analyzes current token usage

  2. Examines incoming messages

  3. Decides the appropriate route

  4. Prepares any necessary data for downstream nodes

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens) – Current memory state with token tracking

Returns:

Command to update state with routing decisions

Return type:

langgraph.types.Command
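
A minimal sketch of the decision logic, assuming the warning/critical ratios from TokenAwareMemoryConfig and a hypothetical pre_hook_route state key that route_from_pre_hook reads on the conditional edge; the real node also examines incoming messages before deciding.

from langgraph.types import Command


def sketch_pre_hook(state: dict) -> Command:
    used = state.get("total_tokens", 0)
    limit = state.get("max_context_tokens", 4000)
    ratio = used / limit if limit else 0.0

    if ratio >= 0.8:      # assumed critical_threshold
        route = "summarize_critical"
    elif ratio >= 0.6:    # assumed warning_threshold
        route = "summarize_warning"
    else:
        route = "process_memory"

    # Write the decision into state; no goto here, routing happens on the conditional edge.
    return Command(update={"pre_hook_route": route, "token_ratio": ratio})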

process_memory_node(state)

Process memory operations (store/retrieve/search).

This is the main node that handles all memory operations based on the user’s input, using the appropriate memory tools.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

rewrite_memories_node(state)

Rewrite memories for maximum compression.

Parameters:

state (agents.memory_v2.memory_state_original.MemoryState)

Return type:

dict[str, Any]

route_by_token_status(state)

Route based on token usage status.

Parameters:

state (dict[str, Any])

Return type:

str

route_from_pre_hook(state)

Route based on pre-hook analysis.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens) – State with pre-hook analysis results

Returns:

Route name for conditional edge routing

Return type:

str
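
Under the same assumption that the pre-hook stores its decision in a hypothetical pre_hook_route key, the routing callback reduces to a lookup with a safe default.

def sketch_route_from_pre_hook(state: dict) -> str:
    # Fall back to normal memory processing if the pre-hook left no decision.
    return state.get("pre_hook_route", "process_memory")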

setup_agent()

Setup memory agent with token tracking and tools.

Return type:

None

summarize_critical_node(state)

Critical summarization when approaching token limits.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

summarize_memories_node(state)

Summarize memories to reduce token usage.

Parameters:

state (agents.memory_v2.memory_state_original.MemoryState)

Return type:

dict[str, Any]

summarize_warning_node(state)

Warning-level summarization for memory consolidation.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

transform_to_graph_node(state)

Transform memories and messages into a knowledge graph.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

update_graph_node(state)

Update existing knowledge graph with new content.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

update_running_summary_node(state)

Update the running summary with new memories.

Parameters:

state (agents.memory_v2.memory_state_original.MemoryState)

Return type:

dict[str, Any]

update_summary_node(state)

Update existing running summary.

Parameters:

state (agents.memory_v2.memory_state_with_tokens.MemoryStateWithTokens)

Return type:

langgraph.types.Command

class agents.memory_v2.simple_memory_agent.TokenAwareMemoryConfig(/, **data)

Bases: agents.memory_v2.memory_tools.MemoryConfig

Configuration for token-aware memory management.

Extends base MemoryConfig with token tracking and summarization settings.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)
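
A sketch of the fields exercised in the examples above, written as a standalone Pydantic model; the field names come from those examples, but the defaults and descriptions are assumptions, and the real class additionally inherits settings from MemoryConfig.

from pydantic import BaseModel, Field


class TokenAwareMemoryConfigSketch(BaseModel):
    max_context_tokens: int = Field(4000, description="Context budget in tokens")
    warning_threshold: float = Field(0.6, description="Usage ratio that triggers warning-level summarization")
    critical_threshold: float = Field(0.8, description="Usage ratio that triggers critical summarization")
    preserve_recent_memories: int = Field(10, description="Most recent memories kept verbatim during compression")
    summarization_strategy: str = Field("progressive", description="How summaries are built, e.g. 'progressive'")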