Interface: PromptEngineConfig
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:274
Configuration options for the PromptEngine's behavior, optimization strategies, and integration with other services like IUtilityAI.
Properties
availableTemplates
availableTemplates: Record<string, PromptTemplateFunction>
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:278
A record of available prompt template functions, keyed by template name.
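The real signature of PromptTemplateFunction is defined elsewhere in IPromptEngine.ts; as a rough sketch, assuming a template function maps a variables object to a prompt string (template names and the signature below are illustrative assumptions):

```typescript
// Assumed signature -- the actual PromptTemplateFunction type in
// IPromptEngine.ts may take richer inputs than a flat string record.
type PromptTemplateFunction = (vars: Record<string, string>) => string;

// A minimal availableTemplates record; the template names are placeholders.
const availableTemplates: Record<string, PromptTemplateFunction> = {
  chat_default: (vars) => `You are ${vars.personaName}. Answer concisely.`,
  rag_answer: (vars) =>
    `Context:\n${vars.context}\n\nQuestion: ${vars.question}`,
};

console.log(availableTemplates.chat_default({ personaName: "Atlas" }));
// → You are Atlas. Answer concisely.
```

A PromptEngine configured with this record could then point defaultTemplateName at "chat_default".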
contextManagement
contextManagement: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:294
Configuration for managing retrieved context (e.g., from RAG).
maxRAGContextTokens
maxRAGContextTokens: number
minContextRelevanceThreshold?
optional minContextRelevanceThreshold: number
preserveSourceAttributionInSummary
preserveSourceAttributionInSummary: boolean
summarizationQualityTier
summarizationQualityTier: "balanced" | "fast" | "high_quality"
contextualElementSelection
contextualElementSelection: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:301
Configuration for selecting and applying contextual elements.
conflictResolutionStrategy
conflictResolutionStrategy: "skip_conflicting" | "merge_compatible" | "error_on_conflict"
defaultMaxElementsPerType
defaultMaxElementsPerType: number
maxElementsPerType
maxElementsPerType: Partial<Record<ContextualElementType, number>>
Maximum number of elements to apply per type.
priorityResolutionStrategy
priorityResolutionStrategy: "highest_first" | "weighted_random" | "persona_preference"
debugging?
optional debugging: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:316
Debugging and logging settings.
includeDebugMetadataInResult?
optional includeDebugMetadataInResult: boolean
logConstructionSteps?
optional logConstructionSteps: boolean
logSelectedContextualElements?
optional logSelectedContextualElements: boolean
defaultTemplateName
defaultTemplateName: string
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:276
Default template name (from availableTemplates) to use if none is specified or inferable.
historyManagement
historyManagement: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:287
Configuration for managing conversation history within prompts.
defaultMaxMessages
defaultMaxMessages: number
maxTokensForHistory
maxTokensForHistory: number
preserveImportantMessages
preserveImportantMessages: boolean
summarizationTriggerRatio
summarizationTriggerRatio: number
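A historyManagement sketch, with one plausible reading (an assumption, not confirmed by this page) that summarizationTriggerRatio is the fraction of maxTokensForHistory at which summarization kicks in:

```typescript
const historyManagement = {
  defaultMaxMessages: 40,
  maxTokensForHistory: 4000,
  preserveImportantMessages: true,
  summarizationTriggerRatio: 0.8, // assumption: summarize at 80% of the budget
};

// Under that reading, summarization would start at roughly this many tokens:
const triggerTokens =
  historyManagement.maxTokensForHistory * historyManagement.summarizationTriggerRatio;
console.log(triggerTokens); // → 3200
```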
performance
performance: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:309
Performance optimization settings.
cacheTimeoutSeconds
cacheTimeoutSeconds: number
enableCaching
enableCaching: boolean
maxCacheSizeBytes?
optional maxCacheSizeBytes: number
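A minimal sketch of the performance block (named performanceConfig below to avoid shadowing the runtime performance global); values are illustrative:

```typescript
// Corresponds to the `performance` field of PromptEngineConfig.
const performanceConfig = {
  enableCaching: true,
  cacheTimeoutSeconds: 300,           // evict cached entries after 5 minutes
  maxCacheSizeBytes: 8 * 1024 * 1024, // optional: cap the cache at 8 MiB
};
```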
tokenCounting
tokenCounting: object
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:280
Configuration for token counting strategies.
estimationModel?
optional estimationModel: "gpt-3.5-turbo" | "gpt-4" | "claude-3" | "generic"
strategy
strategy: "estimated" | "precise" | "hybrid"
toolSchemaManifest?
optional toolSchemaManifest: Record<string, { disabledToolIds?: string[]; enabledToolIds?: string[]; modelOverrides?: Record<string, string[]> }>
Defined in: packages/agentos/src/core/llm/IPromptEngine.ts:345
Optional tool schema registration manifest enabling per-persona and per-model enable/disable semantics.
Structure:
{
  [personaId: string]: {
    enabledToolIds?: string[];
    disabledToolIds?: string[];
    modelOverrides?: {
      [modelId: string]: string[];
    };
  };
}
Resolution order when filtering tools for prompt construction:
- If personaId is present in the manifest:
  a. If modelOverrides[modelId] exists, the allowed set is that array (disabledToolIds still removes entries).
  b. Otherwise, the allowed base is enabledToolIds (if defined), else all runtime tools.
  c. Remove any disabledToolIds from the allowed set.
- If personaId is absent, all runtime tools are allowed (no filtering).
Note: Unknown tool IDs in the manifest are ignored gracefully.
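The resolution order above can be sketched as a small standalone function. resolveAllowedTools is a hypothetical helper written for illustration, not part of the package API; it only mirrors the documented rules:

```typescript
type ToolManifestEntry = {
  enabledToolIds?: string[];
  disabledToolIds?: string[];
  modelOverrides?: Record<string, string[]>;
};

function resolveAllowedTools(
  manifest: Record<string, ToolManifestEntry>,
  personaId: string,
  modelId: string,
  runtimeToolIds: string[],
): string[] {
  const entry = manifest[personaId];
  if (!entry) return runtimeToolIds; // persona absent => no filtering

  // a. model override wins; b. else enabledToolIds; c. else all runtime tools
  const base =
    entry.modelOverrides?.[modelId] ?? entry.enabledToolIds ?? runtimeToolIds;

  // disabledToolIds always removes; unknown IDs are dropped by the
  // runtime-membership check (the "ignored gracefully" behavior).
  const disabled = new Set(entry.disabledToolIds ?? []);
  return base.filter((id) => runtimeToolIds.includes(id) && !disabled.has(id));
}

const manifest = {
  researcher: {
    enabledToolIds: ["web_search", "calculator"],
    disabledToolIds: ["calculator"],
    modelOverrides: { "gpt-4": ["web_search", "code_run"] },
  },
};

// The gpt-4 override replaces the enabled list; disabledToolIds removes
// nothing extra here since "calculator" is not in the override.
console.log(
  resolveAllowedTools(manifest, "researcher", "gpt-4", [
    "web_search",
    "code_run",
    "calculator",
  ]),
); // → [ "web_search", "code_run" ]
```

For a model without an override (say "claude-3"), the same persona would fall back to enabledToolIds and then drop "calculator", leaving only "web_search".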