agents.research.open_perplexity.models

Models for the open_perplexity research agent.

This module defines data models used for representing, tracking, and evaluating research sources, findings, and summaries. It includes enumerations for categorizing data source types, content reliability, freshness, and research depth.

Classes

ContentFreshness

Enumeration of content freshness levels.

ContentReliability

Enumeration of content reliability levels.

DataSourceConfig

Configuration for a data source.

DataSourceType

Enumeration of data source types.

ResearchDepth

Enumeration of research depth levels.

ResearchFinding

Model for a specific research finding.

ResearchSource

Model for tracking and evaluating research sources.

ResearchSummary

Summary of research findings and assessment.

Module Contents

class agents.research.open_perplexity.models.ContentFreshness

Bases: str, enum.Enum

Enumeration of content freshness levels.

Categorizes how recent or up-to-date the information content is.

VERY_RECENT

Content from the last few days

RECENT

Content from the last few weeks

SOMEWHAT_RECENT

Content from the last few months

OUTDATED

Content from years ago

UNKNOWN

Content with unknown or unclear publication date
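The enums in this module mix str into Enum, so members behave as plain strings. A minimal sketch of the pattern, using the documented member names; the member values shown are assumptions, since the reference does not list them:

```python
from enum import Enum

class ContentFreshness(str, Enum):
    # Member names follow the reference; the string values are assumed.
    VERY_RECENT = "very_recent"          # content from the last few days
    RECENT = "recent"                    # content from the last few weeks
    SOMEWHAT_RECENT = "somewhat_recent"  # content from the last few months
    OUTDATED = "outdated"                # content from years ago
    UNKNOWN = "unknown"                  # publication date unknown or unclear

# Because the enum also subclasses str, members compare equal to plain
# strings, which keeps JSON serialization and lookups simple.
freshness = ContentFreshness("recent")
print(freshness == "recent")                 # True
print(freshness is ContentFreshness.RECENT)  # True
```

The same str/Enum mixin applies to ContentReliability, DataSourceType, and ResearchDepth below.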

class agents.research.open_perplexity.models.ContentReliability

Bases: str, enum.Enum

Enumeration of content reliability levels.

Categorizes the trustworthiness and reliability of information sources.

HIGH

Highly reliable sources (peer-reviewed, authoritative)

MEDIUM

Moderately reliable sources (reputable but not authoritative)

LOW

Low reliability sources (potentially biased or unverified)

UNKNOWN

Sources with unknown or unclear reliability

class agents.research.open_perplexity.models.DataSourceConfig(/, **data)

Bases: pydantic.BaseModel

Configuration for a data source.

Specifies parameters for interacting with a particular data source, including API keys and search parameters.

Parameters:

data (Any)

name

Name of the data source

source_type

Type of data source

enabled

Whether this source is enabled

priority

Priority (1-10, higher = more important)

api_key

API key for the data source, if required

max_results

Maximum number of results to return

search_params

Custom search parameters

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_priority(v)

Ensure priority is between 1 and 10.

Parameters:

v – The priority value to validate

Returns:

The validated priority value, clamped between 1 and 10

Return type:

int
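Note that validate_priority clamps out-of-range values rather than raising. A plain-Python sketch of that documented behavior (not the actual pydantic validator implementation):

```python
def clamp_priority(v: int) -> int:
    # Mirrors the documented behavior: values outside 1-10 are clamped
    # to the nearest bound instead of being rejected.
    return max(1, min(10, v))

print(clamp_priority(15))  # 10
print(clamp_priority(0))   # 1
print(clamp_priority(7))   # 7
```

The confidence and relevance-score validators on ResearchFinding and ResearchSource below apply the same clamp-instead-of-raise pattern over 0.0-1.0.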

class agents.research.open_perplexity.models.DataSourceType

Bases: str, enum.Enum

Enumeration of data source types.

Categorizes the different types of sources where research information can be found.

WEB

General web content

GITHUB

Code repositories and issues from GitHub

ACADEMIC

Academic papers and research publications

NEWS

News articles and press releases

SOCIAL_MEDIA

Content from social media platforms

DOCUMENTS

Uploaded or local documents

API

Data retrieved from APIs

OTHER

Any other source type not covered above

class agents.research.open_perplexity.models.ResearchDepth

Bases: str, enum.Enum

Enumeration of research depth levels.

Categorizes the comprehensiveness and thoroughness of the research.

SUPERFICIAL

Basic overview with minimal sources

INTERMEDIATE

Moderate depth with several sources

DEEP

In-depth research with many high-quality sources

COMPREHENSIVE

Exhaustive research with extensive sources

class agents.research.open_perplexity.models.ResearchFinding(/, **data)

Bases: pydantic.BaseModel

Model for a specific research finding.

Represents an individual insight or finding from the research, including supporting sources and confidence assessment.

Parameters:

data (Any)

finding

The actual finding or insight

confidence

Confidence level in this finding (0.0 - 1.0)

sources

Sources supporting this finding

explanation

Explanation of the finding's significance

related_findings

Related findings

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_confidence(v)

Ensure confidence is between 0 and 1.

Parameters:

v – The confidence value to validate

Returns:

The validated confidence value, clamped between 0.0 and 1.0

Return type:

float

class agents.research.open_perplexity.models.ResearchSource(/, **data)

Bases: pydantic.BaseModel

Model for tracking and evaluating research sources.

Represents a source of information used in research, including metadata about its reliability, relevance, and content.

Parameters:

data (Any)

url

URL of the source

title

Title of the source

source_type

Type of data source

content_snippet

Snippet of relevant content

reliability

Assessed reliability of the source

freshness

Content freshness/recency

relevance_score

Relevance score from 0.0 to 1.0

citation

Formatted citation for the source

access_timestamp

When the source was accessed

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_relevance_score(v)

Ensure relevance score is between 0 and 1.

Parameters:

v – The relevance score to validate

Returns:

The validated relevance score, clamped between 0.0 and 1.0

Return type:

float

class agents.research.open_perplexity.models.ResearchSummary(/, **data)

Bases: pydantic.BaseModel

Summary of research findings and assessment.

Provides an overall summary of the research, including key findings, assessment of source quality, and confidence evaluation.

Parameters:

data (Any)

topic

Research topic

question

Specific research question

key_findings

Key findings from research

sources_count

Total number of sources consulted

high_reliability_sources

Number of high reliability sources

recent_sources

Number of recent sources

research_depth

Overall research depth

contradictions

Contradictory findings identified

confidence_score

Overall confidence score

limitations

Research limitations

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

assess_depth()

Assess research depth based on source counts and diversity.

Evaluates the depth of research based on the number of sources and the proportion of high reliability sources.

Returns:

The assessed research depth level

Return type:

ResearchDepth
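The reference states only that assess_depth weighs the number of sources and the proportion of high-reliability sources; the exact thresholds are not documented. A standalone sketch under assumed thresholds (the cutoff numbers below are illustrative, not the module's actual logic):

```python
def assess_depth(sources_count: int, high_reliability_sources: int) -> str:
    # Illustrative heuristic only: thresholds are assumptions, since the
    # reference documents the inputs but not the actual cutoffs.
    if sources_count == 0:
        return "superficial"
    high_ratio = high_reliability_sources / sources_count
    if sources_count >= 15 and high_ratio >= 0.5:
        return "comprehensive"   # exhaustive research, extensive sources
    if sources_count >= 8 and high_ratio >= 0.3:
        return "deep"            # many high-quality sources
    if sources_count >= 3:
        return "intermediate"    # several sources, moderate depth
    return "superficial"         # minimal sources

print(assess_depth(20, 12))  # comprehensive
print(assess_depth(2, 0))    # superficial
```

The real method returns a ResearchDepth member rather than a bare string; strings are used here to keep the sketch self-contained.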

classmethod validate_confidence_score(v)

Ensure confidence score is between 0 and 1.

Parameters:

v – The confidence score to validate

Returns:

The validated confidence score, clamped between 0.0 and 1.0

Return type:

float