
TASK Quarterly

An Analysis of Retrieval-Augmented Generation: A Systematic Review Addressing Architectures, Components, and Evaluation

Abstract

Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating external retrieval
mechanisms to improve factuality and currency. This systematic literature review characterizes current RAG architectures, components, and evaluation practices in peer-reviewed studies published between 2021 and 2025 and indexed in IEEE
Xplore, Scopus, and Web of Science. Conducted in accordance with the PRISMA guidelines, this review analyzes
41 studies that met the predefined inclusion criteria. Most research addresses Question Answering (QA) and dialogue
systems, employing diverse encoders and retrieval optimization methods. Key findings reveal a strong trend toward integrating OpenAI’s GPT models, alongside growing adoption of open-source alternatives. Persistent challenges include
hallucination control, computational efficiency, and inconsistent evaluation metrics. Despite the potential of RAG, the
evidence base is limited by a focus on English-language, high-resource domains. Furthermore, reproducibility is constrained by heterogeneous evaluation standards and a lack of open-access code or datasets. This review maps the RAG
research landscape and identifies gaps in standardization, scalability, and application to low-resource languages. The
protocol was not prospectively registered, and no funding was received for this work.
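The retrieve-then-generate pattern the abstract describes can be sketched minimally as follows. The toy corpus, word-overlap scoring, and prompt template are illustrative assumptions standing in for the dense/sparse retrievers and LLM backends covered by the review, not the methods of any surveyed system.

```python
# Minimal sketch of the RAG pattern: retrieve external passages relevant
# to a query, then prepend them to the prompt sent to the generator.
# All names and data here are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (a stand-in
    for the learned retrievers discussed in the review)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Augment the query with retrieved context before generation."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG couples a retriever with a generator to ground outputs in evidence.",
    "Hallucination control remains an open challenge for LLMs.",
    "Dense encoders map queries and passages into a shared vector space.",
]
query = "How does RAG ground LLM outputs?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a full system the `retrieve` step would query a vector or keyword index and the assembled prompt would be passed to an LLM; only that grounding step distinguishes RAG from plain generation.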

Keywords:

Large language models, Retrieval-augmented generation, Systematic literature review

Details

Issue
Vol. 29 No. 3 (2025)
Section
Review
Published
2026-03-26
DOI:
https://doi.org/10.34808/tq2025/29.3/a
License:

Copyright (c) 2026 TASK Quarterly

This work is licensed under a Creative Commons Attribution 4.0 International License.
