Interface: EmbeddingRequest
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:33
Represents a request to generate embeddings. This structure encapsulates the text(s) to be embedded and any parameters that might influence the embedding process, such as model selection hints or user context.
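To show how these properties fit together, here is a minimal sketch of a fully populated request. The import path is an assumption based on the "Defined in" location above; only texts is required, and the values are the illustrative ones used in the property examples below.

// Sketch only: the import path is assumed from the file location above.
import type { EmbeddingRequest } from "./rag/IEmbeddingManager";

// A fully populated request. Only `texts` is required; every other field is optional.
const request: EmbeddingRequest = {
  texts: ["First document.", "Second document."],
  modelId: "text-embedding-3-small",         // pin a specific embedding model
  providerId: "openai",                      // usually inferred from the model's configuration
  userId: "user-12345",                      // for logging, auditing, or per-user rate limits
  collectionId: "financial_reports_q3_2024", // hint for dynamic, collection-aware model selection
  customParameters: { priority: "high", target_latency_ms: 500 }, // implementation-dependent pass-through
};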
Properties
collectionId?
optional collectionId: string
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:86
Optional: Identifier for a data collection or namespace. This can be used by dynamic model selection strategies (e.g., 'dynamic_collection_preference') to choose a model best suited for the content of a specific collection.
Example
"financial_reports_q3_2024"
customParameters?
optional customParameters: Record<string, any>
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:96
Optional: Custom parameters to pass through to the embedding generation process. This could include provider-specific options or hints for the EmbeddingManager. The exact interpretation of these parameters is implementation-dependent.
Example
{ "priority": "high", "target_latency_ms": 500 }
modelId?
optional modelId: string
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:54
Optional: The explicit ID of the embedding model to use. If not provided, the EmbeddingManager will select a model based on its configured strategy (e.g., default model, dynamic selection).
Example
"text-embedding-3-small"
providerId?
optional providerId: string
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:65
Optional: The explicit ID of the LLM provider to use. This is typically used in conjunction with modelId. If modelId is provided and already has a configured provider, this field may be used for validation or, where the architecture supports it, to override that provider. In general, the model's configured provider takes precedence.
Example
"openai"
texts
texts: string | string[]
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:44
The text content to be embedded. Can be a single string or an array of strings for batch processing.
Example
// Single text
const requestOne: EmbeddingRequest = { texts: "Hello, world!" };
// Batch of texts
const requestBatch: EmbeddingRequest = { texts: ["First document.", "Second document."] };
userId?
optional userId: string
Defined in: packages/agentos/src/rag/IEmbeddingManager.ts:75
Optional: Identifier for the user making the request. This can be used for logging, auditing, or if the underlying LLM provider requires user-specific API keys or applies user-based rate limits.
Example
"user-12345"