EmbeddingRequest

Represents a request to generate embeddings. This structure encapsulates the text(s) to be embedded and any parameters that might influence the embedding process, such as model selection hints or user context.

texts
The text content to be embedded. Can be a single string or an array of strings for batch processing.
// Single text
const requestOne: EmbeddingRequest = { texts: "Hello, world!" };
// Batch of texts
const requestBatch: EmbeddingRequest = { texts: ["First document.", "Second document."] };
model (optional)
The explicit ID of the embedding model to use. If not provided, the EmbeddingManager will select a model based on its configured strategy (e.g., default model, dynamic selection).
Example: "text-embedding-3-small"
provider (optional)
The explicit ID of the LLM provider to use. This is typically used in conjunction with modelId. If modelId is provided and has a configured provider, this field may be used for validation or override where the architecture supports it; in general, the model's configured provider is preferred.
Example: "openai"
user (optional)
Identifier for the user making the request. This can be used for logging, auditing, or when the underlying LLM provider requires user-specific API keys or applies per-user rate limits.
Example: "user-12345"
collection (optional)
Identifier for a data collection or namespace. This can be used by dynamic model selection strategies (e.g., 'dynamic_collection_preference') to choose a model best suited to the content of a specific collection.
Example: "financial_reports_q3_2024"
custom (optional)
Custom parameters to pass through to the embedding generation process. These could include provider-specific options or hints for the EmbeddingManager; the exact interpretation of these parameters is implementation-dependent.
Example: { "priority": "high", "target_latency_ms": 500 }
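For reference, the shape implied by the fields above can be sketched as a TypeScript interface. This is assembled from this page and is not necessarily the library's exact declaration; in particular, the type of custom is not specified here and is shown as Record<string, unknown>:

// Sketch of EmbeddingRequest as implied by the documented fields.
// The library's actual declaration may differ.
interface EmbeddingRequest {
  /** Text(s) to embed: one string, or an array for batch processing. */
  texts: string | string[];
  /** Explicit embedding model ID, e.g. "text-embedding-3-small". */
  model?: string;
  /** Explicit LLM provider ID, e.g. "openai". */
  provider?: string;
  /** User identifier for logging, auditing, or per-user rate limits. */
  user?: string;
  /** Collection/namespace hint for dynamic model selection. */
  collection?: string;
  /** Pass-through parameters; interpretation is implementation-dependent. */
  custom?: Record<string, unknown>;
}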