haive.core.engine.embedding.providers.OllamaEmbeddingConfig
Ollama embedding configuration.
Classes
OllamaEmbeddingConfig – Configuration for Ollama embeddings.
Module Contents
- class haive.core.engine.embedding.providers.OllamaEmbeddingConfig.OllamaEmbeddingConfig[source]
Bases: haive.core.engine.embedding.base.BaseEmbeddingConfig
Configuration for Ollama embeddings.
This configuration provides access to locally hosted Ollama embedding models including nomic-embed-text, mxbai-embed-large, and other supported models.
Examples
Basic usage:
config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="nomic-embed-text",
    base_url="http://localhost:11434",
)
embeddings = config.instantiate()
With custom headers:
config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="mxbai-embed-large",
    base_url="http://localhost:11434",
    headers={"Authorization": "Bearer token"},
)
With custom options:
config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="nomic-embed-text",
    base_url="http://localhost:11434",
    model_options={"temperature": 0.1},
)
- embedding_type
Always EmbeddingType.OLLAMA
- model
Ollama model name (e.g., "nomic-embed-text")
- base_url
Ollama server URL
- headers
Optional HTTP headers for requests
- model_options
Optional model-specific options
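A minimal sketch that exercises all of the fields documented above in one configuration. The import path follows this page; treating headers and model_options as plain dicts, and the specific header and option values shown, are assumptions for illustration only:

# Hedged sketch: sets every documented field of OllamaEmbeddingConfig.
# The header and model_options values here are illustrative assumptions.
from haive.core.engine.embedding.providers.OllamaEmbeddingConfig import (
    OllamaEmbeddingConfig,
)

config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="nomic-embed-text",           # Ollama model name
    base_url="http://localhost:11434",  # Ollama server URL
    headers={"X-Request-Source": "docs-example"},  # optional HTTP headers
    model_options={"num_ctx": 2048},    # optional model-specific options
)
print(config.model, config.base_url)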
- instantiate()[source]
Create an Ollama embeddings instance.
- Returns:
OllamaEmbeddings instance configured with the provided parameters
- Raises:
ImportError – If langchain-ollama is not installed
ValueError – If configuration is invalid
- Return type:
Any
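A short sketch of calling instantiate() and using the result. It assumes the returned OllamaEmbeddings object follows the standard LangChain embeddings interface (embed_query / embed_documents) and that an Ollama server is running locally with the model already pulled:

# Hedged sketch: instantiate the provider and embed a query.
config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="nomic-embed-text",
    base_url="http://localhost:11434",
)

try:
    embeddings = config.instantiate()
except ImportError:
    # Raised per the docs above when the provider package is missing.
    raise SystemExit("Install the provider first: pip install langchain-ollama")

vector = embeddings.embed_query("What is retrieval-augmented generation?")
print(len(vector))  # embedding dimensionality, e.g. 768 for nomic-embed-text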
- test_connection()[source]
Test connection to Ollama server.
- Returns:
True if connection is successful, False otherwise
- Return type:
bool
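A hedged sketch of guarding instantiation with test_connection(), relying only on the documented bool return (True on success, False otherwise):

# Hedged sketch: check server reachability before instantiating.
config = OllamaEmbeddingConfig(
    name="ollama_embeddings",
    model="nomic-embed-text",
    base_url="http://localhost:11434",
)

if config.test_connection():
    embeddings = config.instantiate()
else:
    print(f"Ollama server not reachable at {config.base_url}; "
          "start it with `ollama serve` and pull the model first.")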