haive.core.utils.debugkit.analysis.static¶
Static analysis orchestrator for integrating multiple Python analysis tools.
This module provides a unified interface for running and coordinating multiple static analysis tools including type checkers, linters, complexity analyzers, and code quality tools. It orchestrates tools like mypy, pyright, radon, vulture, and many others from the development toolchain.
The orchestrator handles tool execution, result parsing, and provides unified reporting across all analysis tools.
Classes¶
- AnalysisFinding – A single finding from static analysis.
- AnalysisResult – Results from running a static analysis tool.
- AnalysisType – Types of static analysis.
- MypyAnalyzer – Mypy static type checker analyzer.
- PyflakesAnalyzer – Pyflakes code quality analyzer.
- RadonAnalyzer – Radon complexity analyzer.
- Severity – Analysis finding severity levels.
- StaticAnalysisOrchestrator – Orchestrator for running multiple static analysis tools.
- ToolAnalyzer – Base class for individual tool analyzers.
- VultureAnalyzer – Vulture dead code analyzer.
Module Contents¶
- class haive.core.utils.debugkit.analysis.static.AnalysisFinding[source]¶
A single finding from static analysis.
Represents an issue, suggestion, or metric found by a static analysis tool.
- tool_name¶
Name of the tool that generated this finding
- analysis_type¶
Type of analysis that found this issue
- severity¶
Severity level of the finding
- message¶
Human-readable description of the finding
- file_path¶
Path to the file containing the issue
- line_number¶
Line number where the issue was found
- column_number¶
Column number where the issue was found
- rule_id¶
Tool-specific rule or check identifier
- suggestion¶
Suggested fix for the issue
- context¶
Additional context about the finding
Examples
Create a finding from tool output:
finding = AnalysisFinding(
    tool_name="mypy",
    analysis_type=AnalysisType.TYPE_CHECKING,
    severity=Severity.HIGH,
    message="Argument has incompatible type",
    file_path="my_module.py",
    line_number=42,
    rule_id="arg-type",
)
- class haive.core.utils.debugkit.analysis.static.AnalysisResult[source]¶
Results from running a static analysis tool.
Contains all findings and metadata from running a single analysis tool on a file or project.
- tool_name¶
Name of the analysis tool
- analysis_type¶
Type of analysis performed
- success¶
Whether the tool ran successfully
- execution_time¶
Time taken to run the analysis (seconds)
- findings¶
List of findings discovered
- metrics¶
Numerical metrics collected by the tool
- suggestions¶
High-level suggestions from the tool
- raw_output¶
Raw output from the tool for debugging
- command_used¶
Command line used to run the tool
- exit_code¶
Exit code from the tool execution
- error_message¶
Error message if tool failed
Examples
Process analysis results:
result = orchestrator.run_tool("mypy", file_path)
if result.success:
    print(f"Found {len(result.findings)} issues")
    for finding in result.findings:
        if finding.severity == Severity.HIGH:
            print(f"High severity: {finding.message}")
else:
    print(f"Tool failed: {result.error_message}")
- get_critical_findings()[source]¶
Get only critical and high severity findings.
- Returns:
Critical and high severity findings
- Return type:
List[AnalysisFinding]
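The filtering this method performs can be sketched with minimal stand-ins; the `Severity` enum values and `Finding` dataclass below are illustrative, not the module's actual definitions:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    INFO = "info"
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class Finding:
    message: str
    severity: Severity

def get_critical_findings(findings):
    # Keep only the findings a developer should look at first.
    return [f for f in findings if f.severity in (Severity.HIGH, Severity.CRITICAL)]

findings = [
    Finding("unused import", Severity.LOW),
    Finding("incompatible argument type", Severity.HIGH),
    Finding("possible SQL injection", Severity.CRITICAL),
]
critical = get_critical_findings(findings)  # two findings survive
```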
- class haive.core.utils.debugkit.analysis.static.AnalysisType[source]¶
Types of static analysis.
- TYPE_CHECKING¶
Static type analysis (mypy, pyright)
- COMPLEXITY¶
Code complexity analysis (radon, xenon, mccabe)
- QUALITY¶
Code quality analysis (pyflakes, vulture)
- STYLE¶
Code style analysis (pycodestyle, autopep8)
- SECURITY¶
Security analysis (bandit, safety)
- PERFORMANCE¶
Performance analysis (py-spy, scalene)
- MODERNIZATION¶
Code modernization (pyupgrade, flynt)
- DEAD_CODE¶
Dead code detection (vulture, dead)
- METRICS¶
Code metrics collection (radon, wily)
- class haive.core.utils.debugkit.analysis.static.MypyAnalyzer[source]¶
Bases:
ToolAnalyzer
Mypy static type checker analyzer.
Initialize mypy analyzer.
- class haive.core.utils.debugkit.analysis.static.PyflakesAnalyzer[source]¶
Bases:
ToolAnalyzer
Pyflakes code quality analyzer.
Initialize pyflakes analyzer.
- class haive.core.utils.debugkit.analysis.static.RadonAnalyzer[source]¶
Bases:
ToolAnalyzer
Radon complexity analyzer.
Initialize radon analyzer.
- class haive.core.utils.debugkit.analysis.static.Severity[source]¶
Analysis finding severity levels.
- INFO¶
Informational finding
- LOW¶
Low severity issue
- MEDIUM¶
Medium severity issue
- HIGH¶
High severity issue
- CRITICAL¶
Critical issue requiring immediate attention
- class haive.core.utils.debugkit.analysis.static.StaticAnalysisOrchestrator(max_workers=4, timeout=60, custom_analyzers=None)[source]¶
Orchestrator for running multiple static analysis tools.
This class coordinates the execution of multiple static analysis tools, manages their results, and provides unified reporting capabilities. It supports both individual tool execution and batch analysis across multiple tools.
- available_tools¶
Dictionary of available tool analyzers
- default_tool_set¶
Default set of tools to run
- max_workers¶
Maximum number of concurrent tool executions
- timeout¶
Default timeout for tool execution
Examples
Basic orchestration:
orchestrator = StaticAnalysisOrchestrator()

# Run specific tools
results = orchestrator.analyze_file(
    Path("my_module.py"),
    tools=["mypy", "radon", "vulture"],
)

# Generate unified report
report = orchestrator.generate_report(results)
print(report)
Batch analysis:
# Analyze entire project
project_results = orchestrator.analyze_project(
    Path("./src"),
    tools=["mypy", "pyflakes", "radon"],
    parallel=True,
)

# Get summary statistics
summary = orchestrator.get_project_summary(project_results)
Initialize the static analysis orchestrator.
- Parameters:
max_workers (int) – Maximum number of concurrent tool executions
timeout (int) – Default timeout for tool execution in seconds
custom_analyzers (dict[str, ToolAnalyzer] | None) – Custom tool analyzers to include
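The concurrent tool execution that `max_workers` bounds can be sketched with a thread pool. This is a minimal, self-contained illustration of the pattern, not the orchestrator's actual internals; `run_tools_in_parallel` and its `run_tool` callback are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_tools_in_parallel(tools, run_tool, max_workers=4):
    # Submit each tool and collect results keyed by tool name,
    # mirroring the Dict[str, AnalysisResult] shape documented here.
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_tool, name): name for name in tools}
        for future in as_completed(futures):
            results[futures[future]] = future.result()
    return results
```

A failed tool surfaces as an exception from `future.result()`; a real orchestrator would catch it and record an `AnalysisResult` with `success=False` instead of propagating.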
- analyze_file(file_path, tools=None, parallel=True, **kwargs)[source]¶
Analyze a single file with specified tools.
- Parameters:
file_path (pathlib.Path) – Path to the file to analyze
tools (list[str] | None) – List of tool names to run (None for default set)
parallel (bool) – Whether to run tools in parallel
**kwargs – Additional arguments passed to tools
- Returns:
Results keyed by tool name
- Return type:
Dict[str, AnalysisResult]
- Raises:
FileNotFoundError – If the file doesn’t exist
ValueError – If unknown tools are specified
Examples
Analyze with specific tools:
results = orchestrator.analyze_file(
    Path("complex_module.py"),
    tools=["mypy", "radon"],
    strict=True,  # Passed to mypy
)
for tool_name, result in results.items():
    print(f"{tool_name}: {len(result.findings)} findings")
- analyze_project(project_path, tools=None, file_patterns=None, exclude_patterns=None, parallel=True, max_files=None)[source]¶
Analyze an entire project with specified tools.
- Parameters:
project_path (pathlib.Path) – Path to the project directory
tools (list[str] | None) – List of tool names to run (None for default set)
file_patterns (list[str] | None) – Glob patterns for files to include
exclude_patterns (list[str] | None) – Glob patterns for files to exclude
parallel (bool) – Whether to analyze files in parallel
max_files (int | None) – Maximum number of files to analyze
- Returns:
Results nested by file path and tool name
- Return type:
Dict[str, Dict[str, AnalysisResult]]
Examples
Analyze Python project:
results = orchestrator.analyze_project(
    Path("./my_project"),
    tools=["mypy", "pyflakes", "radon"],
    file_patterns=["**/*.py"],
    exclude_patterns=["**/test_*.py", "**/__pycache__/**"],
)

# Count total issues
total_issues = sum(
    len(tool_result.findings)
    for file_results in results.values()
    for tool_result in file_results.values()
)
- generate_report(results, format='markdown')[source]¶
Generate a unified report from analysis results.
- Parameters:
results (dict[str, AnalysisResult] | dict[str, dict[str, AnalysisResult]]) – Analysis results from analyze_file or analyze_project
format (str) – Report format (“markdown”, “json”, “text”)
- Returns:
Formatted report
- Return type:
str
Examples
Generate markdown report:
results = orchestrator.analyze_file(Path("module.py"))
report = orchestrator.generate_report(results, format="markdown")
with open("analysis_report.md", "w") as f:
    f.write(report)
- class haive.core.utils.debugkit.analysis.static.ToolAnalyzer(tool_name, analysis_type)[source]¶
Base class for individual tool analyzers.
Each static analysis tool has its own analyzer that knows how to execute the tool, parse its output, and convert results to the unified AnalysisResult format.
- tool_name¶
Name of the analysis tool
- analysis_type¶
Type of analysis this tool performs
- command_template¶
Template for the command line execution
- available¶
Whether the tool is available on the system
Initialize the tool analyzer.
- Parameters:
tool_name (str) – Name of the analysis tool
analysis_type (AnalysisType) – Type of analysis this tool performs
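The execute-parse-convert contract described above can be sketched as a minimal analyzer. Everything below is an illustrative stand-in, not the module's real classes: `Finding`, `Result`, and `PyCompileAnalyzer` are hypothetical, and the stdlib's `py_compile` plays the role of an external tool so the sketch runs without extra dependencies:

```python
import subprocess
import sys
import time
from dataclasses import dataclass, field
from pathlib import Path

# Minimal stand-ins for the documented result dataclasses (illustrative only).
@dataclass
class Finding:
    tool_name: str
    message: str
    file_path: str

@dataclass
class Result:
    tool_name: str
    success: bool
    execution_time: float
    findings: list = field(default_factory=list)
    raw_output: str = ""

class PyCompileAnalyzer:
    """Sketch of a ToolAnalyzer-style wrapper around `python -m py_compile`."""

    tool_name = "py_compile"

    def analyze_file(self, file_path: Path) -> Result:
        start = time.perf_counter()
        proc = subprocess.run(
            [sys.executable, "-m", "py_compile", str(file_path)],
            capture_output=True,
            text=True,
        )
        elapsed = time.perf_counter() - start
        findings = []
        if proc.returncode != 0:
            # py_compile reports syntax errors on stderr.
            findings.append(Finding(self.tool_name, proc.stderr.strip(), str(file_path)))
        return Result(self.tool_name, proc.returncode == 0, elapsed, findings, proc.stderr)
```

A real subclass would additionally parse line and column numbers out of the tool's output and map them onto the unified `AnalysisFinding` fields.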
- analyze_file(file_path, **kwargs)[source]¶
Analyze a single file with this tool.
- Parameters:
file_path (pathlib.Path) – Path to the file to analyze
**kwargs – Additional tool-specific arguments
- Returns:
Analysis results from the tool
- Return type:
AnalysisResult
- class haive.core.utils.debugkit.analysis.static.VultureAnalyzer[source]¶
Bases:
ToolAnalyzer
Vulture dead code analyzer.
Initialize vulture analyzer.