agents.memory_v2.token_tracker

Token tracking component for memory operations.

Monitors token usage across memory operations and triggers summarization or rewriting when approaching context limits.

Classes

TokenThresholds

Token usage thresholds for different alert levels.

TokenTracker

Track token usage across memory operations with intelligent monitoring.

TokenUsageEntry

Single token usage entry for tracking.

Module Contents

class agents.memory_v2.token_tracker.TokenThresholds(/, **data)

Bases: pydantic.BaseModel

Token usage thresholds for different alert levels.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)

get_status(usage_ratio)

Get status based on usage ratio.

Parameters:

usage_ratio (float)

Return type:

str
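
Example (a minimal sketch; constructing the model with default values is an assumption, since the threshold field names are not listed in this reference):

   from agents.memory_v2.token_tracker import TokenThresholds

   # Assumption: TokenThresholds is constructible with defaults; its
   # threshold field names are not documented here.
   thresholds = TokenThresholds()

   # get_status maps a usage ratio (0.0 to 1.0) to a status string.
   for ratio in (0.25, 0.75, 0.95):
       print(ratio, thresholds.get_status(ratio))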

class agents.memory_v2.token_tracker.TokenTracker(/, **data)

Bases: pydantic.BaseModel

Track token usage across memory operations with intelligent monitoring.

Features:

  • Real-time token tracking by operation

  • Threshold monitoring with alerts

  • Usage pattern analysis

  • Recommendations for optimization

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)
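
Example (an end-to-end sketch exercising the accessors documented below; default construction and the operation names are assumptions, while the method calls follow the documented signatures):

   from agents.memory_v2.token_tracker import TokenTracker

   # Assumption: TokenTracker is constructible with defaults; the field
   # holding the context limit is not documented in this reference.
   tracker = TokenTracker()

   # Record usage per operation, then read the monitoring accessors.
   tracker.track("summarize", tokens=1200)
   tracker.track("retrieve", tokens=300)

   print(tracker.get_usage_ratio())       # float, 0.0 to 1.0
   print(tracker.get_status())            # status string from thresholds
   print(tracker.get_remaining_tokens())  # tokens left before the limit
   print(tracker.get_recommendations())   # list of recommendation strings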

can_fit_operation(estimated_tokens)

Check if an operation can fit within remaining tokens.

Parameters:

estimated_tokens (int) – Estimated tokens for the operation

Returns:

True if operation can fit, False otherwise

Return type:

bool
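
Example (a sketch of gating an expensive memory operation on the remaining budget; the default construction, the placeholder content, and the operation name are assumptions):

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid
   document_text = "..."     # placeholder memory payload

   # Check the budget before committing to the operation.
   estimate = tracker.estimate_tokens_for_content(document_text)
   if tracker.can_fit_operation(estimate):
       tracker.track("ingest_document", tokens=estimate)
   else:
       # Out of headroom: see suggest_compression_targets() below.
       print(tracker.suggest_compression_targets())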

estimate_tokens_for_content(content)

Estimate tokens for given content.

Simple estimation: roughly 4 characters per token. A production implementation would use a proper tokenizer.

Parameters:

content (str) – Text content to estimate

Returns:

Estimated token count

Return type:

int
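
Example (the heuristic is documented as roughly 4 characters per token, so the result should be near len(content) / 4; the exact rounding behavior is an assumption):

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid

   content = "Summarize the last ten conversation turns."
   # At ~4 characters per token, this 42-character string should
   # estimate to roughly 10 tokens, give or take rounding.
   print(len(content), tracker.estimate_tokens_for_content(content))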

get_recommendations()

Get recommendations based on usage patterns.

Returns:

List of recommendation strings

Return type:

list[str]

get_remaining_tokens()

Get number of tokens remaining before limit.

Return type:

int

get_status()

Get current status based on thresholds.

Return type:

str

get_usage_ratio()

Get current usage ratio (0.0 to 1.0).

Return type:

float

get_usage_summary()

Get comprehensive usage summary.

Returns:

Dictionary with usage statistics and analysis

Return type:

dict[str, Any]
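
Example (the summary keys are not documented in this reference, so the sketch simply dumps whatever the dictionary contains):

   import json

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid
   tracker.track("summarize", tokens=1200)

   # The summary keys are not documented here; default=str guards
   # against values (e.g. timestamps) that are not JSON-serializable.
   print(json.dumps(tracker.get_usage_summary(), indent=2, default=str))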

reset_tokens(keep_history=True)

Reset token counts while optionally keeping history.

Parameters:

keep_history (bool) – Whether to keep usage history

Return type:

None
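
Example (a sketch of starting a fresh context window while keeping the usage history for later analysis):

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid
   tracker.track("summarize", tokens=1200)

   # Reset token counts but keep the history for analysis.
   tracker.reset_tokens(keep_history=True)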

suggest_compression_targets(target_reduction=0.3)

Suggest operations to target for compression.

Parameters:

target_reduction (float) – Target reduction ratio (0.0 to 1.0)

Returns:

List of (operation, potential_savings) tuples

Return type:

list[tuple[str, int]]
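
Example (a sketch using the documented signature; target_reduction=0.3 asks which operations to compress to recover about 30% of usage):

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid
   tracker.track("summarize", tokens=1200)
   tracker.track("retrieve", tokens=300)

   # Each suggestion pairs an operation with its potential savings.
   for operation, potential_savings in tracker.suggest_compression_targets(
       target_reduction=0.3
   ):
       print(f"{operation}: ~{potential_savings} tokens recoverable")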

track(operation, tokens, metadata=None)

Track tokens for an operation.

Parameters:
  • operation (str) – Name of the operation

  • tokens (int) – Number of tokens used

  • metadata (dict[str, Any] | None) – Optional metadata about the operation

Return type:

None
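
Example (metadata is free-form per the signature; the keys shown are illustrative, not a required schema):

   from agents.memory_v2.token_tracker import TokenTracker

   tracker = TokenTracker()  # assumption: defaults are valid

   # Metadata travels with the usage entry; these keys are made up.
   tracker.track(
       "summarize",
       tokens=850,
       metadata={"source": "conversation", "turns": 12},
   )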

class agents.memory_v2.token_tracker.TokenUsageEntry(/, **data)

Bases: pydantic.BaseModel

Single token usage entry for tracking.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Parameters:

data (Any)