agents.memory_v2.memory_tools

Memory tools for modular memory operations.

Provides separate, composable tools for memory operations that follow Haive patterns. The tools are designed to be used by memory agents and can be tested and composed independently, as sketched below.
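
A minimal composed flow (a sketch, assuming the default namespace and configuration) might look like:

from agents.memory_v2.memory_tools import store_memory, retrieve_memory

# Store a fact, then retrieve it with a semantic query
store_memory("I prefer coffee over tea", memory_type="semantic", importance="medium")
memories = retrieve_memory("coffee preferences", limit=3)
for memory in memories:
    print(memory["content"], memory["similarity_score"])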

Classes

MemoryConfig

Configuration for memory operations.

MemoryMetadata

Metadata model for stored memory entries.

Functions

classify_memory(content[, context, config])

Classify memory type and extract metadata.

get_memory_stats([namespace, config])

Get comprehensive statistics about stored memories.

retrieve_memory(query[, memory_type, ...])

Retrieve memories based on query and filters.

search_memory([query, filters, sort_by, sort_order, ...])

Search memories with flexible filtering and sorting options.

store_memory(content[, memory_type, importance, tags, ...])

Store a memory with classification and metadata.

Module Contents

class agents.memory_v2.memory_tools.MemoryConfig(/, **data)

Bases: pydantic.BaseModel

Configuration for memory operations.

Provides centralized configuration for memory storage, retrieval, and classification operations with proper validation.

Parameters:

data (Any)

storage_backend

Backend for memory storage (json_file, sqlite, neo4j, vector_db)

storage_path

Path for file-based storage backends

max_memories

Maximum number of memories to store (-1 for unlimited)

memory_ttl

Time-to-live for memories in seconds (-1 for permanent)

enable_embedding

Whether to generate embeddings for similarity search

embedding_model

Model to use for embeddings

similarity_threshold

Minimum similarity score for retrieval

classification_enabled

Whether to automatically classify memories

auto_cleanup

Whether to automatically clean up old/low-importance memories

cache_size

Size of in-memory cache for frequently accessed memories

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_storage_path(v, info)

Validate storage path based on backend.
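
Example: a minimal construction sketch, assuming the field names match the attributes listed above (values are illustrative, not documented defaults):

config = MemoryConfig(
    storage_backend="json_file",     # json_file, sqlite, neo4j, or vector_db
    storage_path="./memories.json",  # illustrative path for a file-based backend
    max_memories=-1,                 # -1 for unlimited
    memory_ttl=-1,                   # -1 for permanent
    enable_embedding=True,
    similarity_threshold=0.75,       # illustrative minimum similarity score
)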

class agents.memory_v2.memory_tools.MemoryMetadata(/, **data)

Bases: pydantic.BaseModel

Metadata model for stored memory entries.

Parameters:

data (Any)

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

agents.memory_v2.memory_tools.classify_memory(content, context=None, config=None)

Classify memory type and extract metadata.

Analyzes memory content to automatically determine the memory type and importance level and to extract relevant metadata such as entities and tags.

Parameters:
  • content (str) – Memory content to classify

  • context (str | None) – Optional context for better classification

  • config (dict[str, Any] | None) – Optional configuration override

Returns:

Dictionary with classification results and extracted metadata

Return type:

dict[str, Any]

Examples

Basic classification:

result = classify_memory("I met Alice at the conference")
print(result["memory_type"])  # "episodic"
print(result["entities"])     # ["Alice"]

With context:

result = classify_memory(
    "The project deadline is next Friday",
    context="work meeting discussion"
)
agents.memory_v2.memory_tools.get_memory_stats(namespace='default', config=None)

Get comprehensive statistics about stored memories.

Provides detailed analytics about memory storage, including counts by type and importance, performance metrics, and usage patterns.

Parameters:
  • namespace (str) – Namespace to analyze

  • config (dict[str, Any] | None) – Optional configuration override

Returns:

Dictionary with comprehensive memory statistics

Return type:

dict[str, Any]

Examples

Get basic stats:

stats = get_memory_stats()
print(f"Total memories: {stats['total_memories']}")
print(f"Memory types: {stats['memory_types']}")
agents.memory_v2.memory_tools.retrieve_memory(query, memory_type=None, importance_filter=None, limit=5, namespace='default', config=None)

Retrieve memories based on query and filters.

Searches stored memories using content similarity and metadata filters and returns the best matches sorted by relevance score.

Parameters:
  • query (str) – Search query to find relevant memories

  • memory_type (str | None) – Filter by specific memory type

  • importance_filter (list[str] | None) – Filter by importance levels

  • limit (int) – Maximum number of memories to return

  • namespace (str) – Namespace to search in

  • config (dict[str, Any] | None) – Optional configuration override

Returns:

List of memory dictionaries with content, metadata, and similarity scores

Return type:

list[dict[str, Any]]

Examples

Basic retrieval:

memories = retrieve_memory("coffee preferences")
for memory in memories:
    print(f"Content: {memory['content']}")
    print(f"Score: {memory['similarity_score']}")

Filtered retrieval:

memories = retrieve_memory(
    "research work",
    memory_type="episodic",
    importance_filter=["high", "critical"]
)
agents.memory_v2.memory_tools.search_memory(query=None, filters=None, sort_by='timestamp', sort_order='desc', limit=10, namespace='default', config=None)

Search memories with flexible filtering and sorting options.

Provides advanced search capabilities with multiple filter options and sorting criteria for comprehensive memory exploration.

Parameters:
  • query (str | None) – Optional text query for content search

  • filters (dict[str, Any] | None) – Dictionary of filters to apply

  • sort_by (str) – Field to sort by (timestamp, importance, retrieval_count)

  • sort_order (str) – Sort order (asc, desc)

  • limit (int) – Maximum number of results

  • namespace (str) – Namespace to search in

  • config (dict[str, Any] | None) – Optional configuration override

Returns:

List of memory dictionaries matching the search criteria

Return type:

list[dict[str, Any]]

Examples

Search by filters:

memories = search_memory(
    filters={
        "memory_type": "semantic",
        "importance": ["high", "critical"],
        "tags": ["work", "projects"]
    }
)

Search with query and sorting:

memories = search_memory(
    query="coffee",
    sort_by="retrieval_count",
    sort_order="desc"
)
agents.memory_v2.memory_tools.store_memory(content, memory_type='semantic', importance='medium', tags=None, context_id=None, namespace='default', config=None)

Store a memory with classification and metadata.

Stores a memory entry with automatic classification, metadata extraction, and optional embedding generation for similarity search.

Parameters:
  • content (str) – The memory content to store

  • memory_type (str) – Type of memory (semantic, episodic, procedural)

  • importance (str) – Importance level (critical, high, medium, low, transient)

  • tags (list[str] | None) – Optional list of tags for categorization

  • context_id (str | None) – Optional ID to group related memories

  • namespace (str) – Namespace for memory organization

  • config (dict[str, Any] | None) – Optional configuration override

Returns:

String indicating success and memory ID

Return type:

str

Examples

Basic usage:

result = store_memory("I prefer coffee over tea", "semantic", "medium")
# Returns: "Memory stored successfully with ID: uuid-string"

With tags and context:

result = store_memory(
    "Alice introduced herself as a researcher",
    memory_type="episodic",
    tags=["people", "introductions"],
    context_id="meeting_2024_01_15"
)
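
Each tool also accepts a config override; a sketch, assuming the override dictionary mirrors the MemoryConfig field names documented above:

result = store_memory(
    "Quarterly report submitted to the review board",
    importance="high",
    config={
        "storage_backend": "sqlite",      # assumed key, mirroring MemoryConfig.storage_backend
        "storage_path": "./memories.db",  # illustrative path
    },
)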