Interface: IPromptEngine
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:442
Core interface for the PromptEngine, responsible for intelligent and adaptive prompt construction based on rich contextual information and persona definitions.
The PromptEngine serves as the central orchestrator for AgentOS's sophisticated prompting system, capable of dynamically selecting contextual elements, managing token budgets, integrating multi-modal content, and optimizing prompts for different AI models and interaction patterns.
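For orientation, here is a condensed sketch of the interface, assembled from the method documentation that follows. The referenced types (PromptComponents, ModelTargetInfo, PromptExecutionContext, PromptEngineResult) are defined elsewhere in packages/agentos/src/core/llm and are not reproduced here.

```typescript
// Condensed sketch of IPromptEngine, assembled from the method
// signatures documented below.
interface IPromptEngine {
  // Builds an adaptive, model-ready prompt from base components,
  // target-model info, and optional runtime context/template.
  constructPrompt(
    baseComponents: Readonly<PromptComponents>,
    modelTargetInfo: Readonly<ModelTargetInfo>,
    executionContext?: Readonly<PromptExecutionContext>,
    templateName?: string
  ): Promise<PromptEngineResult>;

  // Estimates tokens for a string, optionally model-aware.
  estimateTokenCount(content: string, modelId?: string): Promise<number>;

  // Clears internal caches, optionally matching a pattern.
  clearCache(selectivePattern?: string): Promise<void>;
}
```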
Methods
clearCache()
clearCache(selectivePattern?): Promise<void>
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:584
Clears internal caches used by the PromptEngine (e.g., for prompt construction results or token counts). This can be used to free memory or to force re-computation for debugging or after configuration changes.
Parameters
selectivePattern?
string
Optional. A pattern or key to clear only specific cache entries (e.g., "modelId:gpt-4o*"). If omitted, the entire cache is cleared. The exact format of the pattern is implementation-dependent.
Returns
Promise<void>
A promise that resolves when the cache clearing operation is complete.
Async
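A brief usage sketch; `engine` stands in for an IPromptEngine instance obtained elsewhere, and the exact pattern semantics are implementation-dependent:

```typescript
// Illustrative usage; the pattern format shown here is the example
// from the documentation and may differ per implementation.
declare const engine: IPromptEngine;

await engine.clearCache("modelId:gpt-4o*"); // clear only matching entries
await engine.clearCache();                  // clear the entire cache
```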
constructPrompt()
constructPrompt(baseComponents, modelTargetInfo, executionContext?, templateName?): Promise<PromptEngineResult>
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:489
The primary method for constructing an adaptive and contextually relevant prompt. This orchestrates the entire pipeline: contextual element evaluation and selection, augmentation of base components, token budget management (including truncation and summarization), and final formatting using a model-appropriate template.
Parameters
baseComponents
Readonly<PromptComponents>
The core, static components of the prompt, such as system instructions, conversation history, and current user input. These are read-only to prevent unintended modification by the method.
modelTargetInfo
Readonly<ModelTargetInfo>
Detailed information about the target AI model, including its ID, provider, capabilities, token limits, and expected prompt format. This is crucial for tailoring the prompt effectively.
executionContext?
Readonly<PromptExecutionContext>
Optional. The rich runtime context, including active persona, working memory, user state, and task details. This drives the dynamic selection and application of contextual prompt elements.
templateName?
string
Optional. The explicit name of a prompt template to use. If not provided, the engine selects a default template based on modelTargetInfo.promptFormatType or the defaultTemplateName from its configuration.
Returns
Promise<PromptEngineResult>
A promise resolving to a PromptEngineResult object containing the final formatted prompt, along with metadata about its construction: token counts, any issues encountered, and modifications made.
Async
Throws
If a non-recoverable error occurs during any stage of prompt construction (e.g., template not found, critical component missing, tokenization failure).
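A hedged usage sketch follows; the inputs are assumed to exist already, and the template name "chat-default" is hypothetical (see the actual type definitions for exact field shapes):

```typescript
// Illustrative sketch; the declared inputs are assumed to be built
// elsewhere, and "chat-default" is a hypothetical template name.
declare const engine: IPromptEngine;
declare const baseComponents: Readonly<PromptComponents>;
declare const modelTargetInfo: Readonly<ModelTargetInfo>;
declare const executionContext: Readonly<PromptExecutionContext>;

try {
  const result: PromptEngineResult = await engine.constructPrompt(
    baseComponents,     // system instructions, history, current input
    modelTargetInfo,    // model ID, token limits, prompt format
    executionContext,   // optional runtime context (persona, memory, task)
    "chat-default"      // optional explicit template name (hypothetical)
  );
  // `result` carries the final formatted prompt plus construction
  // metadata (token counts, issues, modifications) -- see PromptEngineResult.
} catch (err) {
  // Non-recoverable failures (template not found, missing critical
  // component, tokenization failure) surface here as thrown errors.
}
```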
estimateTokenCount()
estimateTokenCount(content, modelId?): Promise<number>
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:528
Estimates the token count for a given piece of text, optionally using a specific model ID to inform a more precise estimation if available (e.g., using a model-specific tokenizer). This is used internally for token budgeting and can also be exposed as a utility.
Parameters
content
string
The text content for which to estimate token count.
modelId?
string
Optional. The ID of the target model. If provided, the engine may attempt a more precise tokenization based on this model's characteristics.
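For instance (illustrative only; `engine` stands in for an IPromptEngine instance):

```typescript
// Illustrative usage: a model-aware estimate when modelId is given,
// and a generic estimate otherwise.
declare const engine: IPromptEngine;

const draft = "Summarize the quarterly report in three bullet points.";
const modelAware = await engine.estimateTokenCount(draft, "gpt-4o"); // model-specific tokenizer if available
const generic = await engine.estimateTokenCount(draft);              // generic estimation
```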