Implementing Hybrid Search (Vector + Full-Text Search) for RAG
Hybrid Search is a combination of vector (dense) and full-text (sparse/BM25) search with subsequent result merging. Practice shows that hybrid search consistently outperforms either method alone on most corporate datasets. Dense search is good for semantically close queries, BM25 is good for exact terms, numbers, abbreviations.
Why Dense Search Alone Is Not Enough
Dense embedding averages semantics—both strength and weakness. The query "contract №DA-2023-451" will have high cosine similarity to contracts in general, but not the specific document by number. BM25 will find the exact match of the string "DA-2023-451" instantly.
Dense search performs poorly for:
- Exact numbers (contract, article, serial number)
- Abbreviations and specific acronyms
- Rare technical terms
- Exact quote searches
BM25 performs poorly for:
- Paraphrased queries (synonyms)
- Semantically similar concepts with different words
- Cross-language queries
- Vague descriptions ("something about payment after delivery")
Result Merging Algorithms
Reciprocal Rank Fusion (RRF) — the most robust method:
from collections import defaultdict
def reciprocal_rank_fusion(
dense_results: list[tuple], # [(doc_id, score), ...]
sparse_results: list[tuple],
k: int = 60 # RRF constant (usually 60)
) -> list[tuple]:
"""
RRF score = sum(1 / (k + rank_i)) across all lists
k=60 is standard value (Cormack et al., 2009)
"""
scores = defaultdict(float)
for rank, (doc_id, _) in enumerate(dense_results, 1):
scores[doc_id] += 1 / (k + rank)
for rank, (doc_id, _) in enumerate(sparse_results, 1):
scores[doc_id] += 1 / (k + rank)
return sorted(scores.items(), key=lambda x: -x[1])
Relative Score Fusion (RSF) — normalized combination:
def relative_score_fusion(
dense_results: list[tuple],
sparse_results: list[tuple],
alpha: float = 0.5 # Dense weight
) -> list[tuple]:
"""Normalizes scores to [0,1] and weights them"""
scores = defaultdict(float)
# Dense normalization
if dense_results:
max_d = max(s for _, s in dense_results)
min_d = min(s for _, s in dense_results)
for doc_id, score in dense_results:
norm = (score - min_d) / (max_d - min_d + 1e-8)
scores[doc_id] += alpha * norm
# Sparse normalization
if sparse_results:
max_s = max(s for _, s in sparse_results)
min_s = min(s for _, s in sparse_results)
for doc_id, score in sparse_results:
norm = (score - min_s) / (max_s - min_s + 1e-8)
scores[doc_id] += (1 - alpha) * norm
return sorted(scores.items(), key=lambda x: -x[1])
SPLADE: Advanced Sparse Encoder
SPLADE (Sparse Lexical and Expansion Model) generates sparse vectors with lexical expansion—the model learns to "expand" queries with synonyms and related terms:
from fastembed import SparseTextEmbedding
sparse_model = SparseTextEmbedding(
model_name="prithivida/Splade_PP_en_v1"
)
def encode_sparse(text: str) -> dict:
"""Returns sparse vector {token_id: weight}"""
output = list(sparse_model.embed([text]))[0]
return {
"indices": output.indices.tolist(),
"values": output.values.tolist(),
}
SPLADE outperforms BM25 on most BEIR benchmarks. For Russian, we recommend naver/efficient-splade-VI-BT-large-query or multilingual variants.
Implementation with Qdrant (Practical Example)
from qdrant_client import QdrantClient
from qdrant_client.models import (
SparseVector, Prefetch, FusionQuery, Fusion,
NamedVector, NamedSparseVector
)
from fastembed import TextEmbedding, SparseTextEmbedding
dense_model = TextEmbedding("BAAI/bge-m3") # Multilingual dense
sparse_model = SparseTextEmbedding("prithivida/Splade_PP_en_v1")
client = QdrantClient(url="http://localhost:6333")
def hybrid_search(query: str, top_k: int = 5) -> list[dict]:
# Dense embedding
dense_vec = list(dense_model.embed([query]))[0].tolist()
# Sparse embedding
sparse_output = list(sparse_model.embed([query]))[0]
sparse_vec = SparseVector(
indices=sparse_output.indices.tolist(),
values=sparse_output.values.tolist()
)
results = client.query_points(
collection_name="hybrid_docs",
prefetch=[
Prefetch(query=dense_vec, using="dense", limit=50),
Prefetch(query=sparse_vec, using="sparse", limit=50),
],
query=FusionQuery(fusion=Fusion.RRF),
limit=top_k,
with_payload=True,
)
return [
{"text": r.payload["text"], "source": r.payload["source"], "score": r.score}
for r in results.points
]
Practical Case: Impact of Alpha on Retrieval Quality
Dataset: 12,000 documents of corporate knowledge base (contracts, regulations, FAQ).
Test set: 400 queries of different types.
| Configuration | MRR@5 | NDCG@5 | Exact Terms Recall |
|---|---|---|---|
| Dense only (BGE-M3) | 0.74 | 0.71 | 0.58 |
| BM25 only | 0.67 | 0.63 | 0.91 |
| Hybrid RRF (k=60) | 0.83 | 0.81 | 0.84 |
| Hybrid RSF (α=0.6) | 0.81 | 0.79 | 0.81 |
| Dense + Reranker | 0.80 | 0.77 | 0.61 |
| Hybrid + Reranker | 0.89 | 0.87 | 0.86 |
Hybrid RRF without reranker already beats dense+reranker. The combination of hybrid+reranker gives the best result.
Optimal k for RRF
k=60 is empirically robust. Too small k (10–20) gives large weight to top positions. Too large (100+) levels out position differences. On real data: check k∈{20, 40, 60, 80} on validation set.
Implementation Timeline
- Sparse encoder setup + SPLADE: 2–3 days
- Integrating hybrid search into existing RAG: 3–5 days
- Tuning optimal alpha/k on dataset: 2–3 days
- Total: 1–2 weeks







