PromptEngineResult

Comprehensive result object returned by prompt construction, containing the formatted prompt, metadata, issues encountered, and optimization information. Its fields are:

- The final formatted prompt, ready for LLM consumption.
- Optional formatted tool schemas compatible with the target model's API, if tools are used. The structure of the any[] depends on ModelTargetInfo.toolSupport.format.
- An optional estimated token count of the constructed prompt, taken before precise provider counting.
- An optional precise token count, if available from a tokenizer or after construction.
- Optional issues (errors, warnings, info) encountered during prompt construction. Each issue may carry details?: unknown, suggestion?: string, and component?: string.
- An optional flag indicating whether content was truncated or summarized to fit token limits.
- Optional details about modifications made during construction, e.g., which components were truncated, summarized, removed, or added, and what they originally contained.
- Optional performance metrics and metadata related to the prompt construction process, such as RAG and cache activity.
- An optional cache key, if the result was retrieved from or stored in a cache.
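Since this describes a TypeScript interface, a sketch can make the shape concrete. The following is a minimal, hypothetical reconstruction; every identifier except PromptEngineResult, details, suggestion, component, and ModelTargetInfo.toolSupport.format is an assumed name inferred from the descriptions above, not confirmed API.

```typescript
/** Severity levels for construction issues (assumed values). */
type IssueSeverity = "error" | "warning" | "info";

/** A single issue raised during prompt construction. */
interface PromptEngineIssue {
  severity: IssueSeverity;
  message: string;
  details?: unknown; // arbitrary structured context
  suggestion?: string; // suggested remediation
  component?: string; // which prompt component raised the issue
}

/** Hypothetical record of changes made to fit token limits. */
interface ModificationDetails {
  originalTokens?: number; // token count before modification (assumed name)
  truncatedComponents?: string[]; // components cut to fit (assumed name)
  summarizedComponents?: string[]; // components condensed (assumed name)
  removedComponents?: string[]; // components dropped entirely (assumed name)
  addedComponents?: string[]; // components injected (assumed name)
}

/**
 * Result of prompt construction: the formatted prompt plus metadata,
 * issues encountered, and optimization information.
 */
interface PromptEngineResult {
  /** The final formatted prompt, ready for LLM consumption. */
  prompt: string;
  /** Tool schemas formatted for the target model's API; the structure
   *  depends on ModelTargetInfo.toolSupport.format. */
  formattedTools?: any[];
  /** Rough token estimate taken before precise provider counting. */
  estimatedTokens?: number;
  /** Precise token count, when a tokenizer is available. */
  tokenCount?: number;
  /** Issues (errors, warnings, info) encountered during construction. */
  issues?: PromptEngineIssue[];
  /** True if content was truncated or summarized to fit token limits. */
  truncated?: boolean;
  /** What was truncated, summarized, removed, or added. */
  modificationDetails?: ModificationDetails;
  /** Performance metrics and metadata (e.g., RAG and cache activity). */
  metrics?: Record<string, unknown>;
  /** Cache key, if the result was read from or written to a cache. */
  cacheKey?: string;
}
```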
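Under the same assumptions, a caller might consume the result like this, preferring the precise token count when one is present and surfacing any truncation or issues:

```typescript
function logResult(result: PromptEngineResult): void {
  // Prefer the precise count when a tokenizer supplied one.
  const tokens = result.tokenCount ?? result.estimatedTokens;
  console.log(`Prompt built (${tokens ?? "unknown"} tokens).`);

  if (result.truncated) {
    console.warn("Content was truncated or summarized to fit token limits.");
  }
  for (const issue of result.issues ?? []) {
    console.warn(`[${issue.severity}] ${issue.message}`, issue.suggestion ?? "");
  }
}
```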