agents.memory_v2.graph_memory_agent¶
Graph Memory Agent with advanced knowledge graph capabilities.
This module provides a sophisticated graph-based memory system that combines multiple approaches for storing and retrieving structured knowledge:
LLMGraphTransformer: Intelligent entity and relationship extraction from text
Text-to-Neo4j (TNT): Direct storage of graph structures in Neo4j database
Graph RAG: Intelligent querying using graph traversal and vector similarity
Memory Consolidation: Automatic organization of related memories into concepts
Multi-modal Search: Combining graph structure with vector embeddings
The GraphMemoryAgent is designed for applications requiring sophisticated knowledge representation, relationship discovery, and contextual memory retrieval. It excels at maintaining complex interconnected information while providing fast, relevant access to stored knowledge.
- Key Features:
Entity Extraction: Automatic identification of people, organizations, concepts
Relationship Mapping: Discovery and storage of semantic relationships
Graph Constraints: Performance-optimized Neo4j constraints and indexes
Vector Integration: Semantic similarity search on graph nodes
Memory Consolidation: Intelligent clustering of related information
Multi-user Support: Isolated memory spaces for different users
Flexible Querying: Natural language and Cypher query support
- Architecture:
The agent operates in multiple configurable modes:
EXTRACT_ONLY: Extract entities/relationships without storage
STORE_ONLY: Store pre-extracted graph data directly
EXTRACT_AND_STORE: Full pipeline from text to graph storage
QUERY_ONLY: Search existing graph without modifications
FULL: All capabilities including consolidation and analytics
Examples
Basic usage with automatic extraction and storage:
config = GraphMemoryConfig(
    neo4j_uri="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
    user_id="alice",
    mode=GraphMemoryMode.FULL
)
agent = GraphMemoryAgent(config)

# Store complex information with automatic entity extraction
result = await agent.run(
    "John Smith is the CEO of TechCorp in San Francisco. "
    "He previously worked at DataCorp for 5 years and "
    "specializes in machine learning applications."
)

# Query with natural language
query_result = await agent.query_graph(
    "Who are the executives in San Francisco?"
)
Advanced graph exploration:
# Get entity-centered subgraph
subgraph = await agent.get_memory_subgraph(
    "John Smith",
    max_depth=2,
    relationship_types=["WORKS_FOR", "LOCATED_IN"]
)

# Find similar memories using vector search
similar = await agent.search_similar_memories(
    "technology executives",
    node_type="Person",
    k=5
)
Memory consolidation and analytics:
# Consolidate related memories into higher-level concepts
consolidation = await agent.consolidate_memories(
    time_window="7 days",
    min_connections=3
)

# Use as a tool in other agents
memory_tool = GraphMemoryAgent.as_tool(config)
result = await memory_tool(
    "Remember: Sarah leads the AI research team",
    operation="full"
)
See also
GraphMemoryConfig: Configuration options and defaults
GraphMemoryMode: Available operation modes
haive.agents.rag.db_rag.graph_db: Graph database RAG integration
langchain_experimental.graph_transformers: LLM graph transformation
Note
This agent requires a running Neo4j instance and appropriate LLM configuration. For production use, ensure proper authentication, backup, and monitoring of the Neo4j database. Vector embeddings require OpenAI API access by default.
Classes¶
GraphMemoryAgent – Agent that manages memory using a knowledge graph.
GraphMemoryConfig – Comprehensive configuration for GraphMemoryAgent with database and extraction settings.
GraphMemoryMode – Operation modes for GraphMemoryAgent defining different processing workflows.
Functions¶
example_graph_memory – Comprehensive example demonstrating GraphMemoryAgent capabilities.
Module Contents¶
- class agents.memory_v2.graph_memory_agent.GraphMemoryAgent(config)¶
Agent that manages memory using a knowledge graph.
This agent provides:
- Entity and relationship extraction from text
- Direct storage to Neo4j (TNT - Text to Neo4j)
- Graph-based retrieval using Cypher queries
- Vector similarity search on graph nodes
- Complex graph traversal for memory retrieval
Initialize GraphMemoryAgent with comprehensive graph memory capabilities.
Sets up all necessary components for graph-based memory operations including Neo4j database connections, entity extraction models, RAG components, and vector indexes. The initialization process creates database constraints, validates configurations, and prepares all subsystems.
- Parameters:
config (GraphMemoryConfig) – GraphMemoryConfig containing database connection details, extraction preferences, operation mode, and performance settings. All required components are validated during initialization.
- Raises:
ConnectionError – If Neo4j database connection fails or authentication is invalid. Check database status and credentials.
ImportError – If required optional dependencies are missing for specific features (e.g., graph transformers, RAG components).
ValueError – If configuration contains invalid settings or conflicting options (e.g., invalid node types, malformed relationship patterns).
Examples
Basic initialization with defaults:
config = GraphMemoryConfig(
    neo4j_uri="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
    user_id="user123"
)
try:
    agent = GraphMemoryAgent(config)
    print("Agent initialized successfully")
except ConnectionError as e:
    print(f"Database connection failed: {e}")
Initialize with custom domain knowledge:
# Research paper domain
config = GraphMemoryConfig(
    allowed_nodes=["Author", "Paper", "Conference", "Topic"],
    allowed_relationships=[
        ("Author", "WROTE", "Paper"),
        ("Paper", "PRESENTED_AT", "Conference"),
        ("Paper", "ABOUT", "Topic")
    ],
    extract_properties=True,
    node_properties=["year", "citations", "h_index"],
    mode=GraphMemoryMode.FULL
)
agent = GraphMemoryAgent(config)
Performance-optimized initialization:
config = GraphMemoryConfig(
    mode=GraphMemoryMode.EXTRACT_AND_STORE,  # Skip query components
    enable_vector_index=False,  # Skip vector embeddings
    llm_config=AugLLMConfig(
        model="gpt-3.5-turbo",  # Faster model
        temperature=0.0  # Deterministic
    )
)
agent = GraphMemoryAgent(config)
Note
The initialization process creates database indexes and constraints automatically. For production deployments, ensure the Neo4j user has sufficient privileges for schema modifications. Vector indexing requires OpenAI API access and will be disabled gracefully if unavailable.
- classmethod as_tool(config)¶
Convert GraphMemoryAgent to a LangChain tool for integration with other agents.
Creates a tool interface that allows other Haive agents to use graph memory capabilities through the standard tool calling mechanism. This enables sophisticated multi-agent workflows where specialized agents can leverage shared graph-based knowledge storage and retrieval.
- Parameters:
config (GraphMemoryConfig) – GraphMemoryConfig instance with database connection and processing settings. The same configuration will be used for all tool invocations.
- Returns:
- Configured tool that can be added to agent tool lists.
The tool accepts text input and operation type, returning JSON-formatted results from graph memory processing.
- Return type:
LangChain Tool
- Tool Interface:
Name: “graph_memory_tool”
Description: “Process text with graph memory. Operations: extract, store, query, full.”
- Input Schema:
text (str): Text content to process
operation (str): Operation type (extract/store/query/full)
Output: JSON string containing processing results
Examples
Create tool for ReactAgent:
# Configure graph memory
config = GraphMemoryConfig(
    neo4j_uri="bolt://localhost:7687",
    neo4j_username="neo4j",
    neo4j_password="password",
    user_id="research_agent"
)

# Create memory tool
memory_tool = GraphMemoryAgent.as_tool(config)

# Add to agent's toolkit
agent = ReactAgent(
    name="research_assistant",
    engine=llm_config,
    tools=[memory_tool, other_tools...]
)

# Agent can now use graph memory
result = await agent.arun(
    "Remember: Dr. Smith works at MIT and studies quantum computing. "
    "Then find all researchers connected to quantum computing."
)
Multi-agent knowledge sharing:
# Shared memory configuration
shared_config = GraphMemoryConfig(
    user_id="team_shared",
    mode=GraphMemoryMode.FULL
)
memory_tool = GraphMemoryAgent.as_tool(shared_config)

# Multiple agents with shared memory
data_agent = ReactAgent(
    name="data_collector",
    tools=[memory_tool, data_tools...]
)
analysis_agent = ReactAgent(
    name="data_analyzer",
    tools=[memory_tool, analysis_tools...]
)

# Agents can share knowledge through graph memory
await data_agent.arun("Store research findings in memory")
await analysis_agent.arun("Analyze stored research data")
Domain-specific memory tool:
# Research domain configuration
research_config = GraphMemoryConfig(
    allowed_nodes=["Researcher", "Paper", "Institution", "Topic"],
    allowed_relationships=[
        ("Researcher", "AUTHORED", "Paper"),
        ("Paper", "ABOUT", "Topic"),
        ("Researcher", "AFFILIATED_WITH", "Institution")
    ],
    extract_properties=True
)
research_memory = GraphMemoryAgent.as_tool(research_config)

# Specialized research agent
research_agent = ReactAgent(
    name="research_manager",
    tools=[research_memory],
    system_message="You manage research knowledge using graph memory."
)
- Tool Operation Types:
extract: Extract entities/relationships without storage
store: Store pre-extracted graph data directly
query: Query existing graph knowledge
full: Complete processing including extraction, storage, and querying
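A brief sketch of calling the tool directly with an explicit operation and decoding its JSON output; the structure of the decoded payload is an assumption, not documented here:
import json

# Extract-only call: analyze text without persisting it
raw = await memory_tool(
    "Maria Lopez founded GreenGrid Energy in Austin",
    operation="extract"
)
payload = json.loads(raw)  # tool output is a JSON string
print(payload)  # exact fields depend on the operation performed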
Note
The tool maintains its own GraphMemoryAgent instance, so multiple tool calls share the same configuration and database connection. For production deployments, consider the resource implications of multiple agents accessing the same Neo4j database concurrently.
- async consolidate_memories(time_window='1 day', min_connections=2)¶
Consolidate related memories into higher-level concepts.
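A minimal usage sketch; the shape of the returned value is not documented here, so the keys below are illustrative assumptions:
# Sketch: consolidate memories from the past week
consolidation = await agent.consolidate_memories(
    time_window="7 days",
    min_connections=3
)
# Hypothetical result fields -- inspect the actual return value in your setup
print(f"Concepts created: {consolidation.get('concepts_created', 'n/a')}")
print(f"Memories linked: {consolidation.get('memories_linked', 'n/a')}")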
- async extract_graph_from_text(text, metadata=None)¶
Extract entities and relationships from unstructured text using LLM analysis.
Processes natural language text to identify entities (people, organizations, concepts, etc.) and their semantic relationships. Uses advanced graph transformation techniques to convert unstructured information into structured knowledge graph format.
- Parameters:
text (str) – Input text to analyze and extract from. Can be any natural language content including documents, conversations, articles, or structured data descriptions.
metadata (dict[str, Any] | None) – Optional dictionary containing additional context information such as source, timestamp, importance, or domain-specific tags. This metadata is preserved and associated with extracted entities.
- Returns:
- List of structured graph documents containing:
nodes: Extracted entities with types and properties
relationships: Semantic connections between entities
source: Original document reference with metadata
Each GraphDocument represents a coherent knowledge structure.
- Return type:
List[GraphDocument]
- Raises:
ValueError – If input text is empty or contains only whitespace.
LLMError – If language model processing fails or returns invalid results.
ConfigurationError – If required extraction components are not properly configured or missing required settings.
Examples
Extract entities from business information:
text = ("John Smith is the CEO of TechCorp, a software company based in " "San Francisco. The company specializes in AI solutions and was " "founded in 2020. John previously worked at DataCorp for 8 years.") graph_docs = await agent.extract_graph_from_text( text, metadata={ "source": "company_database", "confidence": 0.9, "verified": True } ) # Expected entities: John Smith (Person), TechCorp (Organization), # San Francisco (Location), AI solutions (Concept) # Expected relationships: John WORKS_FOR TechCorp, # TechCorp LOCATED_IN San Francisco for doc in graph_docs: print(f"Extracted {len(doc.nodes)} entities") print(f"Found {len(doc.relationships)} relationships") for node in doc.nodes: print(f"Entity: {node.id} ({node.type})") if node.properties: print(f" Properties: {node.properties}")
Extract from research content:
research_text = (
    "The study by Dr. Sarah Chen at MIT demonstrates that transformer "
    "models can achieve 95% accuracy on sentiment analysis tasks. "
    "This research builds on earlier work by the Stanford NLP team."
)
graph_docs = await agent.extract_graph_from_text(
    research_text,
    metadata={
        "domain": "research",
        "publication_year": 2024,
        "field": "natural_language_processing"
    }
)

# Expected: Dr. Sarah Chen (Person), MIT (Organization),
# transformer models (Concept), sentiment analysis (Concept)
Handle extraction with custom settings:
# Agent configured for specific domain
config = GraphMemoryConfig(
    allowed_nodes=["Researcher", "Institution", "Technology"],
    extract_properties=True,
    node_properties=["expertise", "ranking", "impact"]
)
agent = GraphMemoryAgent(config)
graph_docs = await agent.extract_graph_from_text(
    "Dr. Alice Wang leads the quantum computing research at IBM"
)
Note
The extraction quality depends on the configured allowed_nodes and allowed_relationships. For domain-specific applications, customize these settings to match your knowledge domain. The method automatically falls back to LangChain’s LLMGraphTransformer if Haive’s enhanced GraphTransformer is unavailable.
- async get_memory_subgraph(entity_name, max_depth=2, relationship_types=None)¶
Get a subgraph centered around an entity.
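A short sketch of retrieving and inspecting a subgraph; the structure of the returned value is not specified above, so the keys used below are assumptions:
# Sketch: explore the neighborhood around a stored entity
subgraph = await agent.get_memory_subgraph(
    "John Smith",
    max_depth=2,
    relationship_types=["WORKS_FOR", "LOCATED_IN"]
)
# 'nodes' and 'relationships' keys are assumed, not documented
print(f"Nodes in subgraph: {len(subgraph.get('nodes', []))}")
print(f"Relationships: {len(subgraph.get('relationships', []))}")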
- async query_graph(query, query_type='natural', include_context=True)¶
Query the knowledge graph using natural language or direct Cypher queries.
Provides flexible query interface supporting both human-friendly natural language questions and precise Cypher database queries. Integrates graph structure with RAG capabilities to deliver comprehensive, contextual answers.
- Parameters:
query (str) – The question or query to execute. For natural language queries, use conversational questions like “Who works at TechCorp?” or “What connections exist between AI and healthcare?”. For Cypher queries, use valid Neo4j Cypher syntax.
query_type (str) – Query processing mode:
- "natural": Process as natural language using Graph RAG and LLM
- "cypher": Execute directly as Cypher query against Neo4j
include_context (bool) – Whether to include additional contextual information in results such as related entities, recent memories, and graph neighborhood data for enhanced understanding.
- Returns:
- Comprehensive query results containing:
result/results: Main query answers or data
cypher_statement: Generated or provided Cypher query
context: Additional contextual information (if include_context=True)
intermediate_steps: Query processing steps (for debugging)
execution_time_ms: Query execution time
error: Error message if query fails
fallback_used: Whether fallback processing was required
- Return type:
Dict[str, Any]
- Raises:
QuerySyntaxError – If Cypher query contains syntax errors.
AuthenticationError – If database access is denied.
TimeoutError – If query execution exceeds configured timeout.
ValueError – If query_type is not “natural” or “cypher”.
Examples
Natural language queries:
# Find people and their roles
result = await agent.query_graph(
    "Who are the executives at technology companies?",
    query_type="natural",
    include_context=True
)
print(f"Answer: {result.get('result', 'No answer found')}")
print(f"Generated Cypher: {result.get('cypher_statement')}")

if result.get('context'):
    recent_nodes = result['context'].get('recent_nodes', [])
    print(f"Related context: {len(recent_nodes)} recent entities")
Relationship discovery:
result = await agent.query_graph( "How is John Smith connected to machine learning?" ) # Graph RAG finds connection paths and provides explanation if 'result' in result: print(f"Connection analysis: {result['result']}")
Direct Cypher queries:
# Precise database queries for specific data
cypher_query = (
    "MATCH (p:Person)-[r:WORKS_FOR]->(o:Organization) "
    "WHERE o.name CONTAINS 'Tech' "
    "RETURN p.name, o.name, r.role "
    "LIMIT 10"
)
result = await agent.query_graph(
    cypher_query,
    query_type="cypher",
    include_context=False  # Skip context for performance
)

if 'results' in result:
    for row in result['results']:
        print(f"{row[0]} works at {row[1]} as {row[2]}")
Complex analytical queries:
# Find influential entities
result = await agent.query_graph(
    "Which organizations have the most connections to AI research?"
)

# Explore entity relationships
result = await agent.query_graph(
    "What are all the ways that startups and venture capital are connected?"
)
Error handling and fallbacks:
result = await agent.query_graph( "Complex ambiguous question about entities" ) if result.get('error'): print(f"Query failed: {result['error']}") if result.get('fallback_used'): print("Fallback processing was attempted") else: print(f"Successful result: {result['result']}")
Note
Natural language queries leverage the Graph RAG agent when available, falling back to the Cypher chain for broader compatibility. The system automatically handles user isolation by filtering results to the current user’s memory space. For production use, monitor query performance and consider caching for frequently accessed patterns.
- async run(input_text, mode=None, auto_store=True)¶
Main entry point for comprehensive graph memory processing.
Orchestrates the complete graph memory workflow from text input to knowledge storage and retrieval. This method provides a unified interface for all graph memory operations, automatically selecting appropriate processing steps based on the specified mode.
- Parameters:
input_text (str) – Raw text input to process. Can be any natural language content including documents, conversations, reports, or structured data descriptions. The text will be analyzed for entities and relationships according to the configured extraction settings.
mode (GraphMemoryMode | None) – Operation mode override for this specific processing run. If None, uses the agent’s default mode from configuration. See GraphMemoryMode for available options and their behaviors.
auto_store (bool) – Whether to automatically store extracted graph structures in the Neo4j database. When False, extraction is performed but results are only returned without persistence (useful for analysis).
- Returns:
- Comprehensive processing results containing:
input: Original input text for reference
mode: Processing mode used (actual mode after any overrides)
timestamp: Processing timestamp in ISO format
extracted_graph: Entity and relationship extraction statistics
storage: Storage operation results (if auto_store=True)
query_result: Query results for relevant memories (in FULL mode)
processing_time_ms: Total processing time
warnings: Any warnings encountered during processing
entity_summary: Summary of extracted entity types and counts
- Return type:
Dict[str, Any]
- Raises:
ProcessingError – If text processing or entity extraction fails.
StorageError – If database storage operations fail (when auto_store=True).
ConfigurationError – If agent mode or settings are invalid for operation.
TimeoutError – If processing exceeds configured timeout limits.
Examples
Basic memory storage:
result = await agent.run( "Dr. Alice Chen joined Stanford's AI Lab as a research scientist. " "She previously worked on neural networks at Google for 3 years." ) print(f"Processing mode: {result['mode']}") print(f"Entities extracted: {result['extracted_graph']['total_nodes']}") print(f"Relationships found: {result['extracted_graph']['total_relationships']}") if result.get('storage'): print(f"Storage successful: {result['storage']['success']}")
Analysis without storage:
# Extract and analyze without permanent storage
result = await agent.run(
    "Complex business relationship description...",
    mode=GraphMemoryMode.EXTRACT_ONLY,
    auto_store=False
)

# Examine extraction results
extraction = result['extracted_graph']
print(f"Would create {extraction['total_nodes']} entities")
print(f"Would create {extraction['total_relationships']} relationships")
Full processing with query:
# Process and immediately query for related information
result = await agent.run(
    "New information about TechCorp's expansion to Europe",
    mode=GraphMemoryMode.FULL
)

# Includes automatic query for related memories
if result.get('query_result'):
    print(f"Related information: {result['query_result']}")
Performance monitoring:
import time

start_time = time.time()
result = await agent.run(large_text_document)
total_time = time.time() - start_time

print(f"Total processing: {total_time:.2f}s")
print(f"Agent processing: {result.get('processing_time_ms', 0):.1f}ms")

if result.get('warnings'):
    for warning in result['warnings']:
        print(f"Warning: {warning}")
Batch processing pattern:
texts = [
    "First document with entities...",
    "Second document with relationships...",
    "Third document with events..."
]

results = []
for text in texts:
    result = await agent.run(text)
    results.append(result)

    # Monitor progress
    entities = result['extracted_graph']['total_nodes']
    print(f"Processed: {entities} entities")

# Aggregate statistics
total_entities = sum(
    r['extracted_graph']['total_nodes'] for r in results
)
print(f"Total entities across all documents: {total_entities}")
Note
The processing workflow adapts based on the specified mode. FULL mode provides the most comprehensive processing including automatic querying for related memories, while other modes focus on specific operations for performance optimization. Monitor processing times for large documents and consider chunking for very long texts.
- async search_similar_memories(query, node_type=None, k=5)¶
Search for similar memories using vector similarity.
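A brief usage sketch; the structure of returned items is not documented here, so they are printed generically rather than accessed by assumed fields:
# Sketch: vector similarity search restricted to Person nodes
similar = await agent.search_similar_memories(
    "technology executives",
    node_type="Person",
    k=5
)
for item in similar:
    print(item)  # item structure depends on the vector index configuration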
- async store_graph_documents(graph_documents, merge_nodes=True)¶
Store extracted graph documents in Neo4j database with intelligent merging.
Implements the Text-to-Neo4j (TNT) pattern for direct storage of structured knowledge graphs. Handles entity deduplication, relationship creation, and user isolation while maintaining data integrity and performance optimization.
- Parameters:
graph_documents (list[langchain_neo4j.graphs.graph_document.GraphDocument]) – List of GraphDocument objects containing nodes and relationships to store. Each document represents a coherent knowledge structure extracted from source text.
merge_nodes (bool) – Whether to merge with existing entities or create new ones. When True, entities with matching names/types are consolidated. When False, all entities are created as new nodes (may cause duplicates).
- Returns:
- Comprehensive storage statistics containing:
nodes_created: Number of new entity nodes stored
relationships_created: Number of new relationships established
errors: List of any errors encountered during storage
success: Boolean indicating whether operation completed successfully
storage_time_ms: Time taken for storage operation
merge_conflicts: Number of merge conflicts resolved
- Return type:
Dict[str, Any]
- Raises:
DatabaseError – If Neo4j database connection fails or transaction errors occur.
ValidationError – If graph documents contain invalid node types or relationships not allowed by current configuration.
AuthorizationError – If database user lacks required write permissions.
Examples
Store extracted entities with merging:
# Extract from multiple sources
doc1 = await agent.extract_graph_from_text(
    "John Smith works at TechCorp in San Francisco"
)
doc2 = await agent.extract_graph_from_text(
    "TechCorp's CEO John Smith announced new AI initiatives"
)

# Store with intelligent merging
result = await agent.store_graph_documents(
    doc1 + doc2,
    merge_nodes=True  # Consolidate duplicate entities
)

print(f"Stored {result['nodes_created']} unique entities")
print(f"Created {result['relationships_created']} relationships")

if result['errors']:
    print(f"Encountered {len(result['errors'])} errors")
Store without merging for temporal analysis:
# Preserve all instances for timeline analysis
result = await agent.store_graph_documents(
    graph_docs,
    merge_nodes=False  # Keep all entity instances
)

# Useful for tracking entity evolution over time
print(f"Stored {result['nodes_created']} entity instances")
Batch storage with error handling:
try:
    result = await agent.store_graph_documents(large_document_set)
    if result['success']:
        print("Storage completed successfully")
    else:
        print(f"Partial storage: {len(result['errors'])} errors")
        for error in result['errors']:
            print(f"Error: {error}")
except DatabaseError as e:
    print(f"Database operation failed: {e}")
Performance monitoring:
import time

start_time = time.time()
result = await agent.store_graph_documents(graph_docs)
storage_time = time.time() - start_time

nodes_per_second = result['nodes_created'] / storage_time
print(f"Storage rate: {nodes_per_second:.1f} nodes/second")
print(f"Database time: {result.get('storage_time_ms', 0):.1f}ms")
Note
The storage operation automatically adds user_id tags for multi-user isolation and timestamps for temporal analysis. When merge_nodes=True, the system intelligently consolidates entities with matching identifiers while preserving unique properties and relationships. For high-volume storage operations, consider batch processing and monitor database performance metrics.
- class agents.memory_v2.graph_memory_agent.GraphMemoryConfig¶
Comprehensive configuration for GraphMemoryAgent with database and extraction settings.
This configuration class provides fine-grained control over all aspects of the graph memory system, from database connections to entity extraction preferences. The configuration supports multi-user environments, custom knowledge domains, and performance optimization.
- neo4j_uri¶
Neo4j database connection URI. Supports bolt://, bolt+s://, and neo4j:// protocols. Default connects to local instance.
- neo4j_username¶
Database username for authentication. Default is ‘neo4j’.
- neo4j_password¶
Database password for authentication. Required for connection.
- database_name¶
Specific Neo4j database name. Use ‘neo4j’ for default database.
- allowed_nodes¶
List of node types that can be extracted from text. Controls the vocabulary of entity types recognized by the system.
- allowed_relationships¶
List of valid relationship triplets (source, relation, target). Defines the relationship schema for structured knowledge extraction.
- extract_properties¶
Whether to extract and store node/relationship properties beyond just names and types. Enables richer entity descriptions.
- node_properties¶
List of property names to extract for entities (e.g., ‘role’, ‘description’, ‘importance’). Only used when extract_properties=True.
- relationship_properties¶
List of property names for relationships (e.g., ‘since’, ‘strength’, ‘context’). Provides temporal and contextual information.
- user_id¶
Unique identifier for memory isolation in multi-user environments. All stored entities and relationships are tagged with this identifier.
- mode¶
Operation mode determining which components are active. See GraphMemoryMode for available options.
- llm_config¶
Configuration for the language model used in entity extraction and query processing. Affects extraction quality and cost.
- enable_vector_index¶
Whether to create vector embeddings for semantic search. Requires additional computational resources but enables similarity search.
- embedding_model¶
Model provider for vector embeddings. Currently supports ‘openai’.
Examples
Basic configuration for local development:
config = GraphMemoryConfig( neo4j_uri="bolt://localhost:7687", neo4j_username="neo4j", neo4j_password="password", user_id="developer", mode=GraphMemoryMode.FULL )
Production configuration with custom schema:
config = GraphMemoryConfig( neo4j_uri="bolt+s://production.neo4j.com:7687", neo4j_username="prod_user", neo4j_password=os.getenv("NEO4J_PASSWORD"), database_name="knowledge_base", user_id="production_system", # Custom domain-specific entities allowed_nodes=[ "Researcher", "Paper", "Institution", "Dataset", "Algorithm", "Experiment", "Finding" ], allowed_relationships=[ ("Researcher", "AUTHORED", "Paper"), ("Paper", "CITES", "Paper"), ("Researcher", "AFFILIATED_WITH", "Institution"), ("Experiment", "USES", "Dataset"), ("Paper", "PROPOSES", "Algorithm") ], # Enhanced property extraction extract_properties=True, node_properties=[ "publication_year", "impact_factor", "field", "methodology", "significance" ], relationship_properties=[ "citation_count", "collaboration_strength", "temporal_proximity", "research_area" ], # Performance optimization llm_config=AugLLMConfig( model="gpt-4", temperature=0.1, # Deterministic extraction max_tokens=2000 ), enable_vector_index=True )
Memory-optimized configuration:
config = GraphMemoryConfig(
    # Use faster extraction with basic schema
    allowed_nodes=["Person", "Organization", "Location"],
    extract_properties=False,  # Skip detailed properties
    enable_vector_index=False,  # Skip vector embeddings
    llm_config=AugLLMConfig(
        model="gpt-3.5-turbo",  # Faster, cheaper model
        temperature=0.0,
        max_tokens=1000
    )
)
Note
The allowed_relationships format uses triplets to define valid relationship patterns. This helps ensure extracted relationships follow expected schemas and improves query predictability. Vector indexing significantly improves semantic search but requires additional storage and computational resources.
- class agents.memory_v2.graph_memory_agent.GraphMemoryMode¶
Operation modes for GraphMemoryAgent defining different processing workflows.
These modes control which components of the graph memory pipeline are active, allowing for flexible deployment patterns and performance optimization based on specific use cases.
- EXTRACT_ONLY¶
Extract entities and relationships from text without storing in the database. Useful for analysis, validation, or external storage.
- STORE_ONLY¶
Store pre-processed graph documents directly in Neo4j without extraction. Suitable when entities/relationships are already identified.
- EXTRACT_AND_STORE¶
Complete pipeline from raw text to graph storage. Recommended for most applications requiring automatic knowledge extraction.
- QUERY_ONLY¶
Search and retrieve from existing graph without modifications. Ideal for read-only applications or when extraction is handled separately.
- FULL¶
All capabilities including extraction, storage, querying, and consolidation. Provides complete graph memory functionality.
Examples
Mode selection based on use case:
# For analysis without permanent storage
config = GraphMemoryConfig(mode=GraphMemoryMode.EXTRACT_ONLY)
agent = GraphMemoryAgent(config)
result = await agent.run("Analyze this text for entities")
entities = result["extracted_graph"]

# For high-performance querying
config = GraphMemoryConfig(mode=GraphMemoryMode.QUERY_ONLY)
agent = GraphMemoryAgent(config)
result = await agent.query_graph("Find all connections to AI")

# For production knowledge base
config = GraphMemoryConfig(mode=GraphMemoryMode.FULL)
agent = GraphMemoryAgent(config)
await agent.run("Store and process this information...")
Note
The FULL mode provides the most comprehensive functionality but requires more computational resources. For production deployments, consider the performance implications of each mode based on your specific requirements.
- async agents.memory_v2.graph_memory_agent.example_graph_memory()¶
Comprehensive example demonstrating GraphMemoryAgent capabilities.
This example showcases the full range of graph memory functionality including entity extraction, knowledge storage, querying, and advanced operations like memory consolidation and subgraph exploration.
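To try the example locally, the coroutine can be driven with asyncio, assuming a reachable Neo4j instance and valid LLM credentials are configured in your environment:
import asyncio

from agents.memory_v2.graph_memory_agent import example_graph_memory

# Runs the demonstration end to end; requires Neo4j and LLM access
asyncio.run(example_graph_memory())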