Interface PromptEngineConfig

Configuration options for the PromptEngine's behavior, optimization strategies, and integration with other services such as IUtilityAI.

interface PromptEngineConfig {
    defaultTemplateName: string;
    availableTemplates: Record<string, PromptTemplateFunction>;
    tokenCounting: {
        strategy: "estimated" | "precise" | "hybrid";
        estimationModel?: "gpt-3.5-turbo" | "gpt-4" | "claude-3" | "generic";
    };
    historyManagement: {
        defaultMaxMessages: number;
        maxTokensForHistory: number;
        summarizationTriggerRatio: number;
        preserveImportantMessages: boolean;
    };
    contextManagement: {
        maxRAGContextTokens: number;
        summarizationQualityTier: "balanced" | "fast" | "high_quality";
        preserveSourceAttributionInSummary: boolean;
        minContextRelevanceThreshold?: number;
    };
    contextualElementSelection: {
        maxElementsPerType: Partial<Record<ContextualElementType, number>>;
        defaultMaxElementsPerType: number;
        priorityResolutionStrategy: "highest_first" | "weighted_random" | "persona_preference";
        conflictResolutionStrategy: "skip_conflicting" | "merge_compatible" | "error_on_conflict";
    };
    performance: {
        enableCaching: boolean;
        cacheTimeoutSeconds: number;
        maxCacheSizeBytes?: number;
    };
    debugging?: {
        logConstructionSteps?: boolean;
        includeDebugMetadataInResult?: boolean;
        logSelectedContextualElements?: boolean;
    };
    toolSchemaManifest?: Record<string, {
        enabledToolIds?: string[];
        disabledToolIds?: string[];
        modelOverrides?: Record<string, string[]>;
    }>;
}
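
The interface can be illustrated with a minimal configuration sketch. The type aliases below are stand-ins for the library's actual `PromptTemplateFunction` and `ContextualElementType` exports, and the element type names and numeric values are hypothetical placeholders, not recommended defaults:

```typescript
// Stand-in aliases for illustration; the real PromptTemplateFunction and
// ContextualElementType are exported by the library, and the element type
// names below are hypothetical.
type PromptTemplateFunction = (vars: Record<string, string>) => string;
type ContextualElementType = "instruction" | "example" | "constraint";

const templates: Record<string, PromptTemplateFunction> = {
  chat: (vars) => `System: ${vars.system}\nUser: ${vars.user}`,
};

const promptEngineConfig = {
  defaultTemplateName: "chat",
  availableTemplates: templates,
  tokenCounting: { strategy: "estimated", estimationModel: "generic" },
  historyManagement: {
    defaultMaxMessages: 20,
    maxTokensForHistory: 2000,
    summarizationTriggerRatio: 0.8, // summarize at 80% of the budget
    preserveImportantMessages: true,
  },
  contextManagement: {
    maxRAGContextTokens: 1500,
    summarizationQualityTier: "balanced",
    preserveSourceAttributionInSummary: true,
    minContextRelevanceThreshold: 0.4,
  },
  contextualElementSelection: {
    maxElementsPerType: { example: 3 } as Partial<Record<ContextualElementType, number>>,
    defaultMaxElementsPerType: 5,
    priorityResolutionStrategy: "highest_first",
    conflictResolutionStrategy: "skip_conflicting",
  },
  performance: { enableCaching: true, cacheTimeoutSeconds: 300 },
};
```
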

Properties

defaultTemplateName: string

Default template name (from availableTemplates) to use if none is specified or inferable.

availableTemplates: Record<string, PromptTemplateFunction>

A record of available prompt template functions, keyed by template name.

tokenCounting: {
    strategy: "estimated" | "precise" | "hybrid";
    estimationModel?: "gpt-3.5-turbo" | "gpt-4" | "claude-3" | "generic";
}

Configuration for token counting strategies.

Type declaration

  • strategy: "estimated" | "precise" | "hybrid"
  • Optional estimationModel?: "gpt-3.5-turbo" | "gpt-4" | "claude-3" | "generic"
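
To illustrate the difference between strategies: an "estimated" counter could use a cheap character-count heuristic (roughly four characters per token for English text), while "precise" would run an actual tokenizer for the target model. The heuristic below is an assumption for illustration, not the engine's actual formula:

```typescript
// Rough character-based token estimate, a common heuristic of ~4 characters
// per token for English text. A "precise" strategy would invoke a real
// tokenizer for the target model instead.
function estimateTokens(text: string, charsPerToken = 4): number {
  return Math.ceil(text.length / charsPerToken);
}
```
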
historyManagement: {
    defaultMaxMessages: number;
    maxTokensForHistory: number;
    summarizationTriggerRatio: number;
    preserveImportantMessages: boolean;
}

Configuration for managing conversation history within prompts.

Type declaration

  • defaultMaxMessages: number
  • maxTokensForHistory: number
  • summarizationTriggerRatio: number
  • preserveImportantMessages: boolean
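
summarizationTriggerRatio appears to express the fraction of maxTokensForHistory at which summarization should kick in; under that assumed interpretation, the trigger check reduces to:

```typescript
// Summarization fires once history usage crosses the configured fraction of
// the token budget (an assumed interpretation of summarizationTriggerRatio).
function shouldSummarizeHistory(
  historyTokens: number,
  maxTokensForHistory: number,
  summarizationTriggerRatio: number,
): boolean {
  return historyTokens >= maxTokensForHistory * summarizationTriggerRatio;
}
```
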
contextManagement: {
    maxRAGContextTokens: number;
    summarizationQualityTier: "balanced" | "fast" | "high_quality";
    preserveSourceAttributionInSummary: boolean;
    minContextRelevanceThreshold?: number;
}

Configuration for managing retrieved context (e.g., from RAG).

Type declaration

  • maxRAGContextTokens: number
  • summarizationQualityTier: "balanced" | "fast" | "high_quality"
  • preserveSourceAttributionInSummary: boolean
  • Optional minContextRelevanceThreshold?: number
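
A hypothetical sketch of how minContextRelevanceThreshold and maxRAGContextTokens might combine when packing retrieved chunks into the prompt; the RetrievedChunk shape is invented for illustration:

```typescript
// Invented shape for a retrieved RAG chunk.
interface RetrievedChunk {
  text: string;
  relevance: number; // similarity score, higher is better
  tokens: number;    // token cost of including this chunk
}

// Drop chunks below the relevance threshold, then pack the rest (highest
// relevance first) into the token budget.
function selectContext(
  chunks: RetrievedChunk[],
  maxTokens: number,
  minRelevance = 0,
): RetrievedChunk[] {
  const selected: RetrievedChunk[] = [];
  let used = 0;
  for (const c of [...chunks].sort((a, b) => b.relevance - a.relevance)) {
    if (c.relevance < minRelevance) break; // sorted, so the rest fail too
    if (used + c.tokens > maxTokens) continue;
    selected.push(c);
    used += c.tokens;
  }
  return selected;
}
```
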
contextualElementSelection: {
    maxElementsPerType: Partial<Record<ContextualElementType, number>>;
    defaultMaxElementsPerType: number;
    priorityResolutionStrategy: "highest_first" | "weighted_random" | "persona_preference";
    conflictResolutionStrategy: "skip_conflicting" | "merge_compatible" | "error_on_conflict";
}

Configuration for selecting and applying contextual elements.

Type declaration

  • maxElementsPerType: Partial<Record<ContextualElementType, number>>

    Max number of elements to apply per type.

  • defaultMaxElementsPerType: number
  • priorityResolutionStrategy: "highest_first" | "weighted_random" | "persona_preference"
  • conflictResolutionStrategy: "skip_conflicting" | "merge_compatible" | "error_on_conflict"
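
The "highest_first" strategy combined with per-type caps could look like the following sketch; the ContextualElement shape is assumed for illustration:

```typescript
// Assumed shape of a contextual element for this sketch.
interface ContextualElement {
  id: string;
  type: string; // stand-in for ContextualElementType
  priority: number;
}

// "highest_first": sort by descending priority, then take elements until each
// type's cap (maxElementsPerType[type], falling back to
// defaultMaxElementsPerType) is reached.
function selectElements(
  elements: ContextualElement[],
  maxPerType: Partial<Record<string, number>>,
  defaultMax: number,
): ContextualElement[] {
  const counts: Record<string, number> = {};
  const out: ContextualElement[] = [];
  for (const el of [...elements].sort((a, b) => b.priority - a.priority)) {
    const cap = maxPerType[el.type] ?? defaultMax;
    if ((counts[el.type] ?? 0) < cap) {
      counts[el.type] = (counts[el.type] ?? 0) + 1;
      out.push(el);
    }
  }
  return out;
}
```
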
performance: {
    enableCaching: boolean;
    cacheTimeoutSeconds: number;
    maxCacheSizeBytes?: number;
}

Performance optimization settings.

Type declaration

  • enableCaching: boolean
  • cacheTimeoutSeconds: number
  • Optional maxCacheSizeBytes?: number
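
cacheTimeoutSeconds is presumably a time-to-live on cached construction results; a freshness check under that assumption:

```typescript
interface CacheEntry<T> {
  value: T;
  storedAtMs: number;
}

// An entry is fresh while its age is under cacheTimeoutSeconds (assumed TTL
// semantics; eviction under maxCacheSizeBytes would be handled separately).
function isFresh<T>(entry: CacheEntry<T>, cacheTimeoutSeconds: number, nowMs: number): boolean {
  return nowMs - entry.storedAtMs < cacheTimeoutSeconds * 1000;
}
```
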
debugging?: {
    logConstructionSteps?: boolean;
    includeDebugMetadataInResult?: boolean;
    logSelectedContextualElements?: boolean;
}

Debugging and logging settings.

Type declaration

  • Optional logConstructionSteps?: boolean
  • Optional includeDebugMetadataInResult?: boolean
  • Optional logSelectedContextualElements?: boolean
toolSchemaManifest?: Record<string, {
    enabledToolIds?: string[];
    disabledToolIds?: string[];
    modelOverrides?: Record<string, string[]>;
}>

Optional tool schema registration manifest providing per-persona and per-model enable/disable semantics. Structure:

  • key: personaId (string)
  • value: an object with the following fields:
    • Optional enabledToolIds?: string[]: Tools explicitly allowed for this persona (intersected with the runtime tool list).
    • Optional disabledToolIds?: string[]: Tools explicitly disallowed (takes precedence over enabledToolIds).
    • Optional modelOverrides?: Record<string, string[]>: Per-model fine-grained overrides; the exact list of tool IDs allowed for that model while the persona is active.

Resolution order when filtering tools for prompt construction:

  1. If personaId is present in the manifest:
     a. If modelOverrides[modelId] exists, the allowed set is that array (disabledToolIds still removes entries).
     b. Otherwise, the allowed base is enabledToolIds if defined, else all runtime tools.
     c. Remove any disabledToolIds from the allowed set.
  2. If personaId is absent, all runtime tools are used (no filtering).

Note: Unknown tool IDs in the manifest are ignored gracefully.
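
The resolution order above can be sketched as a small filter. PersonaToolRules mirrors the manifest's value type; the tool and persona IDs in the usage below are hypothetical:

```typescript
interface PersonaToolRules {
  enabledToolIds?: string[];
  disabledToolIds?: string[];
  modelOverrides?: Record<string, string[]>;
}

// Sketch of the documented resolution order: a model override beats
// enabledToolIds; disabledToolIds always removes; an absent persona means no
// filtering. Unknown manifest IDs fall out naturally via the intersection
// with runtimeTools.
function resolveTools(
  runtimeTools: string[],
  manifest: Record<string, PersonaToolRules>,
  personaId?: string,
  modelId?: string,
): string[] {
  const rules = personaId ? manifest[personaId] : undefined;
  if (!rules) return runtimeTools; // step 2: no persona entry, no filtering

  // Steps 1a/1b: pick the allowed base set.
  const override = modelId ? rules.modelOverrides?.[modelId] : undefined;
  const base = override ?? rules.enabledToolIds ?? runtimeTools;

  // Intersect with runtime tools, then (step 1c) remove disabled IDs.
  const disabled = new Set(rules.disabledToolIds ?? []);
  return runtimeTools.filter((t) => base.includes(t) && !disabled.has(t));
}
```
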

Type declaration

  • Optional enabledToolIds?: string[]
  • Optional disabledToolIds?: string[]
  • Optional modelOverrides?: Record<string, string[]>