dataflow.llms.api¶
API endpoints for LLM model information and availability.
This module provides FastAPI endpoints for accessing and managing LLM model data stored in Supabase. It bridges the client application and the database while adding server-side logic.
Functions¶
get_capabilities() | Get all possible LLM capabilities.
get_model_by_id(model_id) | Get a specific model by ID with all related information.
get_models(provider=None) | Get all models from the database.
get_modes() | Get all possible LLM operation modes.
get_providers() | Get all providers from the database.
read_model(model_id) | Get a specific model by ID.
read_models(provider=None, capability=None) | Get all models with optional filtering.
read_providers() | Get all LLM providers.
recommended_models(task=None, vision=False, function_calling=False, audio=False, web_search=False) | Get recommended models based on capabilities and task requirements.
Module Contents¶
- async dataflow.llms.api.get_capabilities()¶
Get all possible LLM capabilities.
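A minimal sketch of what this endpoint might return. The actual capability set lives in the database; the enum values below are assumptions taken from the flags accepted by recommended_models, not the real schema.

```python
import asyncio
from enum import Enum

class Capability(str, Enum):
    # Hypothetical values; the real list is stored in Supabase.
    VISION = "vision"
    FUNCTION_CALLING = "function_calling"
    AUDIO = "audio"
    WEB_SEARCH = "web_search"

async def get_capabilities() -> list[str]:
    # The real endpoint queries the database; here we return the enum values.
    return [c.value for c in Capability]

caps = asyncio.run(get_capabilities())
print(caps)  # prints ['vision', 'function_calling', 'audio', 'web_search']
```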
- dataflow.llms.api.get_model_by_id(model_id)¶
Get a specific model by ID with all related information.
- dataflow.llms.api.get_models(provider=None)¶
Get all models from the database.
- async dataflow.llms.api.get_modes()¶
Get all possible LLM operation modes.
- dataflow.llms.api.get_providers()¶
Get all providers from the database.
- Return type:
list[haive.dataflow.llms.api.llms.models.Provider]
- async dataflow.llms.api.read_model(model_id)¶
Get a specific model by ID.
- Parameters:
model_id (str)
- async dataflow.llms.api.read_models(provider=None, capability=None)¶
Get all models with optional filtering.
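The filtering logic can be sketched as follows. This is an illustration only: an in-memory list stands in for the Supabase table, and the model IDs and field names are invented for the example.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Model:
    # Illustrative fields; not the real database schema.
    id: str
    provider: str
    capabilities: list[str] = field(default_factory=list)

# In-memory stand-in for the Supabase models table.
MODELS = [
    Model("gpt-4o", "openai", ["vision", "function_calling"]),
    Model("claude-3-opus", "anthropic", ["vision"]),
    Model("text-embedding-3-small", "openai", []),
]

async def read_models(provider=None, capability=None):
    # Apply each filter only when the caller supplies it.
    models = MODELS
    if provider is not None:
        models = [m for m in models if m.provider == provider]
    if capability is not None:
        models = [m for m in models if capability in m.capabilities]
    return models

result = asyncio.run(read_models(provider="openai", capability="vision"))
print([m.id for m in result])  # prints ['gpt-4o']
```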
- async dataflow.llms.api.read_providers()¶
Get all LLM providers.
- async dataflow.llms.api.recommended_models(task=None, vision=False, function_calling=False, audio=False, web_search=False)¶
Get recommended models based on capabilities and task requirements.
- Parameters:
task (str | None) – The type of task (chat, completion, embedding, etc.)
vision (bool | None) – Whether vision capabilities are required
function_calling (bool | None) – Whether function calling is required
audio (bool | None) – Whether audio processing is required
web_search (bool | None) – Whether web search is required
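One plausible way to combine the task and capability flags is to treat each flag as a required capability and keep only models that satisfy all of them. The catalog below is a hypothetical stand-in for the database; the real recommendation logic may differ.

```python
import asyncio

# Hypothetical catalog standing in for the Supabase models table.
CATALOG = [
    {"id": "gpt-4o", "tasks": ["chat", "completion"],
     "capabilities": {"vision", "function_calling"}},
    {"id": "whisper-1", "tasks": ["transcription"],
     "capabilities": {"audio"}},
    {"id": "text-embedding-3-small", "tasks": ["embedding"],
     "capabilities": set()},
]

async def recommended_models(task=None, vision=False, function_calling=False,
                             audio=False, web_search=False):
    # Collect the capabilities the caller marked as required.
    flags = {"vision": vision, "function_calling": function_calling,
             "audio": audio, "web_search": web_search}
    required = {name for name, needed in flags.items() if needed}
    # Keep models matching the task (if given) whose capabilities
    # include every required one.
    return [m["id"] for m in CATALOG
            if (task is None or task in m["tasks"])
            and required <= m["capabilities"]]

recs = asyncio.run(recommended_models(task="chat", vision=True))
print(recs)  # prints ['gpt-4o']
```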