Retrieval-augmented generation (RAG) improves the quality of large language model (LLM) responses by retrieving relevant information and adding it to the prompt before submission.
LLMs answer questions about everything from baseball to bass guitars. That range comes from pretraining on millions of diverse documents. However, a generalist LLM's shallow understanding of many topics limits its business value for domain-specific tasks. Developers often mitigate this limitation by giving the model additional context through retrieval-augmented generation—better known as RAG.
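To make the idea concrete, here is a minimal sketch of the RAG pattern: retrieve the documents most relevant to the query, then prepend them to the prompt. It assumes a toy in-memory corpus and simple word-overlap scoring; a real system would use embeddings, a vector store, and an actual LLM call.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy corpus spanning two domains, echoing the baseball/bass example.
corpus = [
    "The P-Bass uses a split single-coil pickup.",
    "Babe Ruth hit 714 career home runs.",
    "Flatwound strings give a bass a warmer tone.",
]
print(build_prompt("Which strings give a bass a warm tone?", corpus))
```

The prompt that reaches the LLM now carries the domain-specific facts it was never trained to know in depth, which is the whole point of the retrieval step.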