Implementing Automatic SEO Text Generation for Product Cards (AI)
A product card without unique text is either content duplicated from the supplier's price list or a 30-word template phrase. Neither ranks well in search results. With catalogs of thousands of products, writing texts by hand is not cost-effective; this is an automation task.
SEO text generation through a language model solves this with proper architecture: prepared prompts, quality control on output, caching, and review before publishing.
Input and Output
Input is structured product data: name, category, attributes, brand, tags. Output is an SEO description of 150–600 words with target keywords woven naturally into the text.
Example input data:
{
  "id": "SKU-4821",
  "name": "Nike Air Max 270 Sneakers",
  "category": "Men's Shoes / Sneakers",
  "brand": "Nike",
  "attributes": {
    "material": "mesh + synthetic",
    "sole": "Air Max unit",
    "colors": ["black/white", "navy/grey"],
    "sizes": "40–46",
    "weight": "310g"
  },
  "tags": ["running", "everyday", "cushioning"],
  "targetKeywords": ["nike air max 270 buy", "nike air max 270 sneakers"]
}
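In TypeScript terms, this payload maps to a small interface. The sketch below is an assumption about the shape, with field names taken directly from the JSON above; the `Product` type is what the later code examples reference:

```typescript
// Sketch of the product shape used throughout this article.
// Field names mirror the JSON payload above; adjust to your schema.
interface Product {
  id: string;
  name: string;
  category: string;
  brand: string;
  attributes: Record<string, unknown>;
  tags: string[];
  targetKeywords: string[];
}

// The sample payload from above, typed:
const sample: Product = {
  id: "SKU-4821",
  name: "Nike Air Max 270 Sneakers",
  category: "Men's Shoes / Sneakers",
  brand: "Nike",
  attributes: { material: "mesh + synthetic", sole: "Air Max unit", weight: "310g" },
  tags: ["running", "everyday", "cushioning"],
  targetKeywords: ["nike air max 270 buy", "nike air max 270 sneakers"],
};
```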
Prompt Engineering for Product Texts
A prompt is not "write a product description." A good prompt defines structure, tone, length, keyword requirements, and restrictions:
function buildProductSeoPrompt(product: Product, keywords: string[]): string {
  return `
Write a product description for an e-commerce catalog in English.
Product: ${product.name}
Category: ${product.category}
Brand: ${product.brand}
Key attributes: ${JSON.stringify(product.attributes)}
Tags: ${product.tags.join(", ")}
Requirements:
- Length: 200–400 words
- Include these keywords naturally (not forced): ${keywords.join(", ")}
- Structure: opening benefit statement → key features (3–5 points) → use cases → closing
- Tone: informative, no hype, no superlatives like "best" or "unique"
- Do NOT use: "this product", "we present to you", bullet points
- Do NOT start with the product name
- Write for a person comparing options, not someone who already decided
Output: plain text, no markdown, no headings.
`.trim();
}
Batch Processing with Queues
Generating texts synchronously is impractical: a single model request takes 3–10 seconds, and there can be thousands of products. The correct scheme is a task queue:
// jobs/generate-seo-text.ts
import { Queue, Worker } from "bullmq";
import { openai } from "../lib/openai";
import { db } from "../lib/db";

const seoQueue = new Queue("seo-generation", {
  connection: { host: "localhost", port: 6379 },
});

// Adding tasks to the queue
export async function queueProductsForGeneration(productIds: string[]) {
  const jobs = productIds.map((id) => ({
    name: "generate",
    data: { productId: id },
    opts: {
      attempts: 3,
      backoff: { type: "exponential", delay: 5000 },
      removeOnComplete: 100,
    },
  }));
  await seoQueue.addBulk(jobs);
}

// Worker
const worker = new Worker(
  "seo-generation",
  async (job) => {
    const product = await db.products.findById(job.data.productId);
    if (!product) return;

    const keywords = await getTargetKeywords(product);
    const prompt = buildProductSeoPrompt(product, keywords);

    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // cheaper for bulk generation
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
      max_tokens: 600,
    });

    const text = completion.choices[0].message.content?.trim();
    if (!text) throw new Error("Empty response");

    // Save as draft, don't publish automatically
    await db.productSeoTexts.upsert({
      productId: product.id,
      text,
      status: "draft",
      model: "gpt-4o-mini",
      generatedAt: new Date(),
    });
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 5, // 5 parallel API requests
  }
);
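The worker above calls `getTargetKeywords`, which the article does not show. A minimal sketch, assuming keywords are stored on the product with a name-derived fallback (the fallback pattern and the local type are assumptions, not the article's method; in production this would likely be async and query a keyword table, which is why the worker awaits it):

```typescript
// Minimal local shape; in the full code this is the Product type.
type KeywordSource = { name: string; targetKeywords?: string[] };

// Sketch: prefer keywords stored on the product; otherwise derive
// simple purchase-intent queries from the product name.
function getTargetKeywords(product: KeywordSource): string[] {
  if (product.targetKeywords && product.targetKeywords.length > 0) {
    return product.targetKeywords;
  }
  const base = product.name.toLowerCase();
  return [`${base} buy`, base];
}
```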
Quality Control
Automatically generated text must be validated before saving. Minimum checks:
interface ValidationResult {
  passed: boolean;
  issues: string[];
}

function validateSeoText(text: string, product: Product): ValidationResult {
  const issues: string[] = [];
  const wordCount = text.split(/\s+/).filter(Boolean).length;

  // Length: the prompt asks for 200–400 words; 150 is the hard floor
  if (wordCount < 150) {
    issues.push(`Too short: ${wordCount} words`);
  }

  // Check for target keywords
  const missingKeywords = product.targetKeywords.filter(
    (kw) => !text.toLowerCase().includes(kw.toLowerCase())
  );
  if (missingKeywords.length > 0) {
    issues.push(`Missing keywords: ${missingKeywords.join(", ")}`);
  }

  // Stop phrases the prompt explicitly forbids
  const stopPhrases = [
    "this product",
    "we present to you",
    "unique",
    "best in its class",
  ];
  for (const phrase of stopPhrases) {
    if (text.toLowerCase().includes(phrase)) {
      issues.push(`Contains stop phrase: "${phrase}"`);
    }
  }

  // Keyword spam: escape regex metacharacters before counting occurrences
  const escapeRegExp = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  for (const kw of product.targetKeywords) {
    const kwCount =
      (text.toLowerCase().match(new RegExp(escapeRegExp(kw.toLowerCase()), "g")) || []).length;
    const density = kwCount / wordCount;
    if (density > 0.03) {
      issues.push(`Keyword density too high for "${kw}": ${(density * 100).toFixed(1)}%`);
    }
  }

  return { passed: issues.length === 0, issues };
}
Texts that fail validation are flagged as needs_review and placed in a separate queue for regeneration with a refined prompt.
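This routing decision can be captured as a small pure function. A sketch under assumptions: the outcome names and the three-attempt cap are illustrative, not from the article, which only specifies that failing texts go to needs_review:

```typescript
// Sketch: decide what to do with a generated text after validation.
// Failing texts are retried with a refined prompt up to maxAttempts
// times, then parked for human review. The cap of 3 is an assumption.
type Outcome = "save_draft" | "requeue" | "needs_review";

function routeGeneratedText(
  passed: boolean,
  attempt: number,
  maxAttempts = 3
): Outcome {
  if (passed) return "save_draft";
  return attempt < maxAttempts ? "requeue" : "needs_review";
}
```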
Review Interface
An editor sees a list of drafts with the ability to approve, reject, or edit:
GET /admin/seo-texts?status=draft&page=1
→ list of cards with text, buttons: Publish / Regenerate / Edit
POST /admin/seo-texts/:id/approve
→ changes status to published, updates product card
POST /admin/seo-texts/:id/regenerate
→ re-queues the task with an incremented attempt counter
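Behind these endpoints sits a small state machine. A sketch of the transitions, assuming the status names used in this article plus a hypothetical "queued" state for texts waiting on regeneration:

```typescript
// Sketch of the status transitions behind the review endpoints.
// "draft" and "needs_review" come from the article; "queued" is a
// hypothetical state for texts sent back for regeneration.
type Status = "draft" | "needs_review" | "published" | "queued";

const TRANSITIONS: Record<string, Status> = {
  "draft:approve": "published",
  "draft:regenerate": "queued",
  "needs_review:regenerate": "queued",
};

function nextStatus(current: Status, action: "approve" | "regenerate"): Status {
  const next = TRANSITIONS[`${current}:${action}`];
  if (!next) throw new Error(`Invalid action "${action}" for status "${current}"`);
  return next;
}
```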
Regeneration with feedback: the worker reads the rejection reason and adds it to the prompt:
if (job.data.rejectionReason) {
  prompt += `\n\nPrevious attempt was rejected. Reason: ${job.data.rejectionReason}. Fix this in the new version.`;
}
Cost and Scale
GPT-4o-mini costs $0.15 per million input tokens and $0.60 per million output tokens. One product text is approximately 300–500 tokens in and 400–500 tokens out, which works out to about $0.0004 per text; 10,000 cards cost around $4. This makes bulk generation economically justified even with frequent catalog updates.
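The arithmetic is easy to keep next to the pipeline as a back-of-envelope helper, using the GPT-4o-mini rates quoted above:

```typescript
// Cost estimate at $0.15 / 1M input tokens, $0.60 / 1M output tokens.
function estimateCostUsd(inputTokens: number, outputTokens: number): number {
  const INPUT_RATE = 0.15 / 1_000_000;
  const OUTPUT_RATE = 0.6 / 1_000_000;
  return inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE;
}

// One text at the upper bound: ~500 tokens in, ~500 tokens out.
const perText = estimateCostUsd(500, 500); // ≈ $0.000375
const catalog = perText * 10_000;          // ≈ $3.75 for 10,000 cards
```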
For large catalogs, set up a trigger on product attribute updates: if the name or key characteristics change, the text is marked stale and queued for regeneration.
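The staleness check itself can be a simple field comparison. A sketch, assuming the field list matches whatever the prompt builder reads (the list below is an assumption; extend it to your actual prompt inputs):

```typescript
// Sketch: a text is stale if any field that feeds the prompt changed.
// PROMPT_FIELDS is an assumption; keep it in sync with the prompt builder.
const PROMPT_FIELDS = ["name", "category", "brand", "attributes", "tags"] as const;

function isSeoTextStale(
  oldProduct: Record<string, unknown>,
  newProduct: Record<string, unknown>
): boolean {
  // JSON comparison is crude but adequate for flat attribute objects.
  return PROMPT_FIELDS.some(
    (f) => JSON.stringify(oldProduct[f]) !== JSON.stringify(newProduct[f])
  );
}
```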