Class: RetrievalAugmentor

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:60

Orchestrates the RAG pipeline including ingestion, retrieval, and document management.

Implements

IRetrievalAugmentor

Constructors

Constructor

new RetrievalAugmentor(): RetrievalAugmentor

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:88

Constructs a RetrievalAugmentor instance. It is not operational until initialize is successfully called.

Returns

RetrievalAugmentor

Properties

augmenterId

readonly augmenterId: string

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:61

Implementation of

IRetrievalAugmentor.augmenterId

Methods

checkHealth()

checkHealth(): Promise<{ details?: Record<string, unknown>; isHealthy: boolean; }>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:1308

Returns

Promise<{ details?: Record<string, unknown>; isHealthy: boolean; }>

Inherit Doc

Implementation of

IRetrievalAugmentor.checkHealth
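
The documented result shape lends itself to a simple readiness gate. A minimal sketch of consuming it; the `describeHealth` helper below is illustrative and not part of the library:

```typescript
// Documented result shape of checkHealth()
type HealthReport = { details?: Record<string, unknown>; isHealthy: boolean };

// Summarize a health report, surfacing diagnostic detail keys when unhealthy.
function describeHealth(report: HealthReport): string {
  if (report.isHealthy) return "healthy";
  const keys = report.details ? Object.keys(report.details).join(", ") : "none";
  return `unhealthy (details: ${keys})`;
}
```

A caller might poll `checkHealth()` and feed the result to a gate like this before routing retrieval traffic.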


deleteDocuments()

deleteDocuments(documentIds, dataSourceId?, options?): Promise<{ errors?: object[]; failureCount: number; successCount: number; }>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:1166

Parameters

documentIds

string[]

dataSourceId?

string

options?

ignoreNotFound?

boolean

Returns

Promise<{ errors?: object[]; failureCount: number; successCount: number; }>

Inherit Doc

Implementation of

IRetrievalAugmentor.deleteDocuments
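
Because the result reports per-batch counts rather than throwing on partial failure, callers typically inspect `failureCount` themselves. A minimal sketch of consuming the documented shape; both helpers are illustrative, not part of the library:

```typescript
// Documented result shape of deleteDocuments()
type DeleteResult = { errors?: object[]; failureCount: number; successCount: number };

// Treat a batch as fully applied only when nothing failed.
function deletionSucceeded(result: DeleteResult): boolean {
  return result.failureCount === 0;
}

function summarizeDeletion(result: DeleteResult): string {
  const errs = result.errors?.length ?? 0;
  return `${result.successCount} deleted, ${result.failureCount} failed, ${errs} error record(s)`;
}
```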


ingestDocuments()

ingestDocuments(documents, options?): Promise<RagIngestionResult>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:311

Parameters

documents

RagDocumentInput | RagDocumentInput[]

options?

RagIngestionOptions

Returns

Promise<RagIngestionResult>

Inherit Doc

Implementation of

IRetrievalAugmentor.ingestDocuments
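
Since `documents` accepts either a single `RagDocumentInput` or an array, code wrapping this method often normalizes first. A sketch of that normalization; `RagDocumentInput`'s actual fields are not listed on this page, so the two fields below are placeholders for illustration only:

```typescript
// Placeholder stand-in for RagDocumentInput; real fields are defined by the library.
type RagDocumentInput = { id: string; content: string };

// Normalize the single-or-array union into an array before batching.
function toBatch(documents: RagDocumentInput | RagDocumentInput[]): RagDocumentInput[] {
  return Array.isArray(documents) ? documents : [documents];
}
```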


initialize()

initialize(config, embeddingManager, vectorStoreManager): Promise<void>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:95

Parameters

config

RetrievalAugmentorServiceConfig

embeddingManager

IEmbeddingManager

vectorStoreManager

IVectorStoreManager

Returns

Promise<void>

Inherit Doc

Implementation of

IRetrievalAugmentor.initialize
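
Per the constructor notes, the instance is not operational until `initialize` resolves successfully. A sketch of the guard pattern a wrapper might use to mirror that contract; `InitGate` is hypothetical and not part of AgentOS:

```typescript
// Mirrors the documented contract: operations must be rejected until
// initialization has completed successfully.
class InitGate {
  private ready = false;

  markInitialized(): void {
    this.ready = true;
  }

  assertReady(operation: string): void {
    if (!this.ready) {
      throw new Error(`${operation} called before initialize() completed`);
    }
  }
}
```

A wrapper would call `markInitialized()` after `await augmentor.initialize(...)` resolves, and `assertReady("retrieveContext")` before delegating each call.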


registerRerankerProvider()

registerRerankerProvider(provider): void

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:221

Register a reranker provider with the RerankerService.

Call this after initialization to add reranker providers (e.g., CohereReranker, LocalCrossEncoderReranker) that will be available for reranking operations.

Parameters

provider

IRerankerProvider

A reranker provider instance implementing IRerankerProvider

Returns

void

Throws

If RerankerService is not configured

Example

```typescript
import { CohereReranker, LocalCrossEncoderReranker } from '@framers/agentos/rag/reranking';

// After initialization
augmentor.registerRerankerProvider(new CohereReranker({
  providerId: 'cohere',
  apiKey: process.env.COHERE_API_KEY!
}));

augmentor.registerRerankerProvider(new LocalCrossEncoderReranker({
  providerId: 'local',
  defaultModelId: 'cross-encoder/ms-marco-MiniLM-L-6-v2'
}));
```

retrieveContext()

retrieveContext(queryText, options?): Promise<RagRetrievalResult>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:748

Parameters

queryText

string

options?

RagRetrievalOptions

Returns

Promise<RagRetrievalResult>

Inherit Doc

Implementation of

IRetrievalAugmentor.retrieveContext
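
The setHydeLlmCaller notes below mention that HyDE can be enabled per request via options.hyde.enabled. A sketch of building such options; RagRetrievalOptions has more fields than shown on this page, so the narrowed type and the `withHyde` helper here are illustrative only:

```typescript
// Narrowed illustration of RagRetrievalOptions; only the documented
// hyde.enabled field is modeled.
type RetrievalOptions = { hyde?: { enabled: boolean } };

// Return a copy of the options with HyDE switched on for this request.
function withHyde(options: RetrievalOptions = {}): RetrievalOptions {
  return { ...options, hyde: { ...options.hyde, enabled: true } };
}
```

The result would then be passed as the second argument: `augmentor.retrieveContext(query, withHyde(baseOptions))`.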


setHydeLlmCaller()

setHydeLlmCaller(llmCaller): void

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:265

Register an LLM caller for HyDE hypothesis generation.

HyDE (Hypothetical Document Embedding) improves retrieval quality by generating a hypothetical answer first, then embedding that answer instead of the raw query. The hypothesis is semantically closer to the stored documents, yielding better vector similarity matches.

The caller must be set before HyDE-enabled retrieval can be used. Once set, HyDE can be activated per-request via options.hyde.enabled on retrieveContext, or it can be activated globally by passing a default HyDE config.

Parameters

llmCaller

HydeLlmCaller

An async function that takes (systemPrompt, userPrompt) and returns the LLM completion text. The system prompt contains instructions for hypothesis generation; the user prompt is the query.

Returns

void

Example

```typescript
augmentor.setHydeLlmCaller(async (systemPrompt, userPrompt) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userPrompt },
    ],
    max_tokens: 200,
  });
  return response.choices[0].message.content ?? '';
});
```

shutdown()

shutdown(): Promise<void>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:1337

Returns

Promise<void>

Inherit Doc

Implementation of

IRetrievalAugmentor.shutdown


updateDocuments()

updateDocuments(documents, options?): Promise<RagIngestionResult>

Defined in: packages/agentos/src/rag/RetrievalAugmentor.ts:1283

Parameters

documents

RagDocumentInput | RagDocumentInput[]

options?

RagIngestionOptions

Returns

Promise<RagIngestionResult>

Inherit Doc

Implementation of

IRetrievalAugmentor.updateDocuments