Interface IPromptEngineUtilityAI

Interface for utility AI services that assist the PromptEngine with complex content-processing tasks such as summarization and relevance analysis, tailored to prompt-construction needs. It is a focused interface used internally by the PromptEngine.

interface IPromptEngineUtilityAI {
    summarizeConversationHistory(messages, targetTokenCount, modelInfo, preserveImportantMessages?): Promise<{
        summaryMessages: ConversationMessage[];
        originalTokenCount: number;
        finalTokenCount: number;
        messagesSummarized: number;
    }>;
    summarizeRAGContext(context, targetTokenCount, modelInfo, preserveSourceAttribution?): Promise<{
        summary: string;
        originalTokenCount: number;
        finalTokenCount: number;
        preservedSources?: string[];
    }>;
    analyzeContentRelevance?(content, executionContext, modelInfo): Promise<{
        relevanceScore: number;
        importanceScore: number;
        keywords?: string[];
        topics?: string[];
    }>;
}
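
The analyzeContentRelevance method is declared with a trailing `?`, so implementations may omit it and callers must guard before invoking it. A minimal caller-side sketch of that guard, using hypothetical stand-in types (the real ConversationMessage, ModelTargetInfo, and PromptExecutionContext definitions live elsewhere in the codebase):

```typescript
// Hypothetical, narrowed stand-in for the utility AI interface: only the
// optional method matters for this pattern.
type RelevanceResult = { relevanceScore: number; importanceScore: number };

interface UtilityAILike {
  analyzeContentRelevance?(content: string): Promise<RelevanceResult>;
}

// Optional chaining calls the method only when the provider supplies it;
// otherwise fall back to a neutral score.
async function scoreOrDefault(ai: UtilityAILike, content: string): Promise<number> {
  const result = await ai.analyzeContentRelevance?.(content);
  return result?.relevanceScore ?? 0.5;
}
```

This keeps the PromptEngine usable with minimal providers that implement only the two required summarization methods.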

Methods

  • summarizeConversationHistory: Summarizes conversation history to fit within token constraints, attempting to preserve key information.

    Parameters

    • messages: readonly ConversationMessage[]

      The array of conversation messages to summarize.

    • targetTokenCount: number

      The desired maximum token count for the summary.

    • modelInfo: Readonly<ModelTargetInfo>

      Information about the model for which the summary is being prepared.

    • Optional preserveImportantMessages: boolean

      If true, the summarizer attempts to identify and keep important messages verbatim.

    Returns Promise<{
        summaryMessages: ConversationMessage[];
        originalTokenCount: number;
        finalTokenCount: number;
        messagesSummarized: number;
    }>

    A summary, which may be a single system message or a condensed list of messages, plus metadata about the summarization.
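
To make the contract concrete, here is a deliberately naive, LLM-free sketch of summarizeConversationHistory's shape: it keeps the most recent messages that fit the budget and collapses older ones into a single system note. The ConversationMessage shape and the ~4-characters-per-token estimate are assumptions, and modelInfo and preserveImportantMessages are omitted for brevity; a real implementation would call a model and use the target model's tokenizer:

```typescript
// Hypothetical minimal message shape; the real type is defined elsewhere.
interface ConversationMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough token estimate (~4 characters per token); a real implementation
// would use the model's tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

async function summarizeConversationHistory(
  messages: readonly ConversationMessage[],
  targetTokenCount: number,
) {
  const originalTokenCount = messages.reduce(
    (n, m) => n + estimateTokens(m.content), 0);
  const kept: ConversationMessage[] = [];
  let used = 0;
  // Walk backwards so the most recent turns survive truncation.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > targetTokenCount) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  const messagesSummarized = messages.length - kept.length;
  // Collapse dropped turns into a single system note, matching the
  // "single system message or condensed list" return described above.
  const summaryMessages: ConversationMessage[] = messagesSummarized > 0
    ? [{ role: "system", content: `[${messagesSummarized} earlier message(s) condensed]` }, ...kept]
    : kept;
  return {
    summaryMessages,
    originalTokenCount,
    finalTokenCount: summaryMessages.reduce(
      (n, m) => n + estimateTokens(m.content), 0),
    messagesSummarized,
  };
}
```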

  • summarizeRAGContext: Summarizes retrieved RAG context to fit token limits, preserving source attribution where possible.

    Parameters

    • context: string | readonly {
          source: string;
          content: string;
          relevance?: number;
      }[]

      The RAG context to summarize.

    • targetTokenCount: number

      The desired maximum token count for the summarized context.

    • modelInfo: Readonly<ModelTargetInfo>

      Information about the model.

    • Optional preserveSourceAttribution: boolean

      If true, the summarizer attempts to retain source information in the summary.

    Returns Promise<{
        summary: string;
        originalTokenCount: number;
        finalTokenCount: number;
        preservedSources?: string[];
    }>

    The summarized text and metadata.
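
A naive sketch of summarizeRAGContext's contract, using the chunk shape from the signature: sort chunks by relevance, concatenate those that fit the budget, and optionally record which sources made the cut. The 4-characters-per-token estimate is an assumption, and modelInfo is omitted for brevity; a real implementation would summarize with a model rather than drop chunks:

```typescript
// Chunk shape as declared in the interface's union type.
interface RagChunk { source: string; content: string; relevance?: number; }

const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

async function summarizeRAGContext(
  context: string | readonly RagChunk[],
  targetTokenCount: number,
  preserveSourceAttribution?: boolean,
) {
  // Normalize the plain-string form into a single anonymous chunk;
  // otherwise try the most relevant chunks first.
  const chunks: RagChunk[] = typeof context === "string"
    ? [{ source: "unknown", content: context }]
    : [...context].sort((a, b) => (b.relevance ?? 0) - (a.relevance ?? 0));
  const originalTokenCount = chunks.reduce(
    (n, c) => n + estimateTokens(c.content), 0);
  const parts: string[] = [];
  const sources: string[] = [];
  let used = 0;
  for (const chunk of chunks) {
    const cost = estimateTokens(chunk.content);
    if (used + cost > targetTokenCount) continue; // skip chunks that do not fit
    parts.push(chunk.content);
    sources.push(chunk.source);
    used += cost;
  }
  return {
    summary: parts.join("\n"),
    originalTokenCount,
    finalTokenCount: used,
    preservedSources: preserveSourceAttribution ? sources : undefined,
  };
}
```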

  • analyzeContentRelevance (optional): Analyzes a piece of content for its relevance and importance within the current execution context. This can be used to prioritize which content to include or how to emphasize it.

    Parameters

    • content: string

      The text content to analyze.

    • executionContext: Readonly<PromptExecutionContext>

      The current execution context.

    • modelInfo: Readonly<ModelTargetInfo>

      Information about the model.

    Returns Promise<{
        relevanceScore: number;
        importanceScore: number;
        keywords?: string[];
        topics?: string[];
    }>

    Relevance and importance scores, plus any extracted keywords and topics.
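
An illustrative, heuristic-only sketch of what analyzeContentRelevance might return: relevance measured as keyword overlap with the execution context's task, importance as a crude length signal. The PromptExecutionContext shape (a `taskDescription` field) and the scoring heuristics are assumptions; a real provider would delegate both scores to a model:

```typescript
// Hypothetical minimal execution-context shape for this sketch.
interface PromptExecutionContext { taskDescription: string; }

// Lowercase word split, keeping only words longer than three characters.
const tokenize = (text: string): string[] =>
  text.toLowerCase().split(/\W+/).filter((w) => w.length > 3);

async function analyzeContentRelevance(
  content: string,
  executionContext: Readonly<PromptExecutionContext>,
) {
  const contentWords = new Set(tokenize(content));
  const taskWords = tokenize(executionContext.taskDescription);
  const overlap = taskWords.filter((w) => contentWords.has(w)).length;
  return {
    // Fraction of task keywords that appear in the content.
    relevanceScore: taskWords.length ? overlap / taskWords.length : 0,
    // Crude importance proxy: longer content caps out at 1.
    importanceScore: Math.min(1, content.length / 1000),
    keywords: [...contentWords].slice(0, 10),
  };
}
```

Both scores land in [0, 1], which is a convenient convention for the PromptEngine to rank candidate content against a cutoff.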