Analyzing and Eliminating Keyword Cannibalization
Cannibalization occurs when multiple pages on one site compete for the same search query. Google sees two relevant pages, splits ranking signals between them, and often ranks both lower than a single consolidated page would rank. The problem is worst on large sites: e-commerce with hundreds of categories, corporate portals with long edit histories, blogs with accumulated archives.
Identifying Cannibalization
Via Google Search Console:
Performance report → Pages gives clicks and impressions per URL; Queries gives them per query. The two UI exports cannot be joined, though, because neither contains the query-to-page pairing. Pull both dimensions in a single request through the Search Analytics API instead, then look for queries whose impressions are spread across 2+ URLs:
import pandas as pd
from googleapiclient.discovery import build

# `creds` is an authorized credentials object with the
# webmasters.readonly scope, obtained via your usual OAuth or
# service-account flow.
service = build('searchconsole', 'v1', credentials=creds)

response = service.searchanalytics().query(
    siteUrl='https://example.com',
    body={
        'startDate': '2024-01-01',
        'endDate': '2024-03-31',
        'dimensions': ['query', 'page'],
        'rowLimit': 25000  # API maximum per request
    }
).execute()

rows = response.get('rows', [])
df = pd.DataFrame([{
    'query': r['keys'][0],
    'page': r['keys'][1],
    'clicks': r['clicks'],
    'impressions': r['impressions'],
    'position': r['position']
} for r in rows])

# Queries where 2+ distinct pages collect meaningful impressions
cannibal = df.groupby('query').filter(
    lambda g: g['page'].nunique() > 1 and g['impressions'].sum() > 100
)
cannibal_sorted = cannibal.sort_values(['query', 'impressions'], ascending=[True, False])
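The sorted frame groups each query's competing URLs together, strongest page first; exported, it becomes the cannibalization map referenced in the Timeline section.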
Via Screaming Frog + Ahrefs:
Ahrefs: Site Explorer → Pages → Best by Links → export, to see which of the competing URLs carries more link equity. Then crawl the site in Screaming Frog, which extracts <title> and <h1> out of the box, and look for pages with identical or similar titles, e.g. with the sketch below.
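A minimal sketch for the title check, assuming a Screaming Frog internal-HTML export with Address and Title 1 columns; the filename and column names are assumptions to adjust to your crawl file:
from difflib import SequenceMatcher
from itertools import combinations

import pandas as pd

# 'internal_html.csv' and the column names are assumptions; adjust to your export.
crawl = pd.read_csv('internal_html.csv')
pages = crawl[['Address', 'Title 1']].dropna().to_dict('records')

# Pairwise comparison is O(n^2); fine for a few thousand URLs.
for a, b in combinations(pages, 2):
    ratio = SequenceMatcher(None, a['Title 1'].lower(), b['Title 1'].lower()).ratio()
    if ratio > 0.9:  # near-identical titles flag likely cannibals
        print(f"{ratio:.2f}  {a['Address']}  <->  {b['Address']}")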
Classification of Cases
Type 1: Explicit duplicate. Two pages with identical content and same keywords. Solution: canonical or 301 redirect.
Type 2: Semantic overlap. Category page and blog post compete for informational query. Solution: content rework, intent differentiation.
Type 3: Historical garbage. Old pagination pages, tag archives, outdated landings. Solution: noindex or 410.
Type 4: Cannibalization by design. A landing page and a blog post were both planned for the same query, a content strategy error. Solution: merge them or reorient one toward a different intent.
Technical Fixes
301 redirect (Apache/Nginx):
# nginx.conf
server {
    location = /old-page/ {
        return 301 /canonical-page/;
    }
}
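The Apache equivalent, via mod_alias in the vhost config or .htaccess:
# .htaccess
Redirect 301 /old-page/ /canonical-page/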
Canonical tag:
When pages should remain accessible but weight needs consolidating:
<!-- On /product-category/?sort=price -->
<link rel="canonical" href="https://example.com/product-category/" />
(Avoid canonicalizing paginated pages to page 1: Google recommends self-referencing canonicals within a paginated series, so parameter variants like sorting are the safer use case.)
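The same signal can also be sent as an HTTP header, which Google supports and which is the only option for non-HTML resources such as PDFs:
Link: <https://example.com/product-category/>; rel="canonical"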
Noindex for low-value pages:
<meta name="robots" content="noindex, follow" />
Or via HTTP header:
X-Robots-Tag: noindex, follow
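In nginx, for example, the header can be attached to the URL patterns in question (the /tag/ pattern below is illustrative, matching the tag archives from Type 3):
location ~ ^/tag/ {
    add_header X-Robots-Tag "noindex, follow";
}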
Page merging: when two strong pages with different link profiles cannibalize each other, simply redirecting one away discards its content value. Better: merge the content into one comprehensive page, 301 the weaker URL to it, and repoint internal links at the survivor.
Internal Linking Audit
Audit anchors via Screaming Frog: Configuration → Custom → Extraction with an XPath such as //a[contains(., 'keyword')]/@href to pull every link whose anchor text contains the target keyword.
Rule: a given commercial anchor should link to only one page across the entire site; the sketch below flags violations.
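A quick check of that rule against a Screaming Frog All Inlinks bulk export; the filename, the Anchor/Destination column names, and the keyword list are assumptions to adapt:
import pandas as pd

links = pd.read_csv('all_inlinks.csv')  # Bulk Export -> Links -> All Inlinks

# Keyword list is illustrative; substitute your commercial anchors.
commercial = links[links['Anchor'].str.contains('buy|price|order', case=False, na=False)]

# An anchor pointing at 2+ distinct URLs breaks the one-anchor-one-page rule.
conflicts = (commercial.groupby('Anchor')['Destination']
             .nunique()
             .loc[lambda s: s > 1])
print(conflicts)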
Tracking After Fixes
Google typically updates positions 2–6 weeks after the changes are recrawled and reindexed. Speed this up via:
Google Search Console → URL Inspection → Request Indexing
For batch reindexing there is the Indexing API, though Google officially supports it only for JobPosting and BroadcastEvent pages; treat it as best-effort for regular URLs:
from googleapiclient.discovery import build

# `creds` is a service-account credentials object with the
# https://www.googleapis.com/auth/indexing scope.
service = build('indexing', 'v3', credentials=creds)

batch = service.new_batch_http_request()
urls_to_notify = ['https://example.com/canonical-page/']
for url in urls_to_notify:
    batch.add(service.urlNotifications().publish(
        body={'url': url, 'type': 'URL_UPDATED'}
    ))
batch.execute()  # default quota is around 200 publish requests per day
Timeline
A site audit of up to 1,000 pages takes 3–5 business days: crawling, GSC export, analysis, and the cannibalization map. Technical fixes (canonical, redirects, noindex) take 2–3 days. Content fixes (rewriting, merging) take 5+ days depending on volume.