Augmentation and Prompting
After retrieval, you must combine the retrieved context with the user's question into a single prompt for the LLM. Prompt design is critical to RAG quality.
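The augmentation step itself is just string assembly; a minimal sketch (the `docs` list stands in for whatever your retriever returned, and all names here are illustrative):

```python
# Hypothetical retriever output: a list of chunk strings.
docs = [
    "RAG combines retrieval with generation.",
    "Retrieved chunks are injected into the prompt.",
]
question = "What is RAG?"

# Join the chunks with separators so the model can tell them apart.
context = "\n\n".join(docs)

prompt = f"""Use the following context to answer the question at the end.

Context: {context}

Question: {question}

Answer:"""

print(prompt)
```

The assembled string is then sent to the LLM as the full prompt.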
Basic RAG Prompt Template
from langchain.prompts import PromptTemplate
template = """Use the following context to answer the question at the end.
Context: {context}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
Adding Instructions
Tell the model to say "I don't know" if the context does not contain the answer. This reduces hallucinations.
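Downstream code can also check for the fallback phrase to flag questions the context could not answer; a small sketch, assuming the exact phrase from the template is used (the function name is made up):

```python
# The fallback phrase must match what the prompt template asks for.
FALLBACK = "I don't have enough information."

def is_grounded(answer: str) -> bool:
    # Treat any answer containing the fallback phrase as "not answered from context".
    return FALLBACK.lower() not in answer.lower()

print(is_grounded("RAG stands for Retrieval-Augmented Generation."))  # True
print(is_grounded("Sorry, I don't have enough information."))         # False
```

This lets an application log unanswered questions or fall back to another retrieval strategy.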
template = """You are a helpful assistant. Use only the provided context to answer. If the answer is not in the context, say 'I don't have enough information.'
Context: {context}
Question: {question}
Answer:"""
Including Citations
Ask the model to cite the source document or chunk.
template = """Answer the question based on the context. After your answer, cite the source as [Source: filename].
Context: {context}
Question: {question}
Answer:"""
LangChain RetrievalQA Chain
LangChain provides a ready-to-use chain that handles retrieval + prompting.
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)
answer = qa_chain.run("What is RAG?")  # newer LangChain versions use qa_chain.invoke(...) instead
Two Minute Drill
- Augment the prompt with retrieved context.
- Tell the LLM to say "I don't know" if the context lacks the answer.
- You can request citations from the model.
- LangChain's `RetrievalQA` chain simplifies implementation.
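If you use the citation format shown earlier, the source tag can be recovered from the model's answer with a regular expression; an illustrative sketch (the `[Source: ...]` format is the one from the template, the answer text is made up):

```python
import re

def extract_sources(answer: str) -> list[str]:
    # Pull every "[Source: filename]" tag out of the model's answer.
    return re.findall(r"\[Source:\s*([^\]]+)\]", answer)

answer = "RAG augments prompts with retrieved context. [Source: rag_intro.pdf]"
print(extract_sources(answer))  # ['rag_intro.pdf']
```

Parsed sources can then be shown to the user as clickable references or used to verify that the answer is grounded in the retrieved documents.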
Need more clarification?
Drop us an email at career@quipoinfotech.com
