AI Chatbot Integration (ChatGPT/Claude)
AI chatbot integration is not simply "a proxy to the OpenAI API". The task includes conversation context management, streaming responses for a responsive UX, system prompts, edge-case handling, and token cost control.
Provider Selection
| Provider | Models | Context | Strengths |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o1 | 128K | Wide ecosystem, function calling |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | 200K | Long context, instruction precision |
| Google | Gemini 1.5 Pro/Flash | 1M | Largest context window |
| Mistral | Mistral Large, Mistral 7B | 32K | Self-hosted option |
For a typical website chatbot (support, FAQ, consultant), GPT-4o-mini or Claude 3.5 Haiku is sufficient in quality and significantly cheaper than the flagship models.
Server Proxy: Why It's Needed
API keys never go to the browser. A server endpoint is necessary for:
- Authorization (only logged-in users)
- Rate limiting (no more than X messages per day)
- Conversation logging
- Adding the system prompt (not visible to the user)
- Cost control
// app/api/chat/route.js (Next.js Route Handler)
// getSession, redis, and today() are project-specific helpers:
// auth session lookup, a Redis client, and a YYYY-MM-DD date string.
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const SYSTEM_PROMPT = `You are a support assistant for "Tech Pro" e-commerce store.
Answer only questions about products, shipping, and returns.
If the question is off-topic, politely redirect to an operator.
Respond in the same language as the user.`;

export async function POST(request) {
  const session = await getSession(request);
  if (!session) return Response.json({ error: 'Unauthorized' }, { status: 401 });

  const { messages } = await request.json();

  // Rate limiting: count messages per user per day
  const key = `chat:${session.userId}:${today()}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 60 * 60 * 24); // let the daily counter expire
  if (count > 50) return Response.json({ error: 'Limit reached' }, { status: 429 });

  // Limit history to the last 10 messages
  const recentMessages = messages.slice(-10);

  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    stream: true,
    messages: [
      { role: 'system', content: SYSTEM_PROMPT },
      ...recentMessages,
    ],
    max_tokens: 500,
    temperature: 0.3, // lower = more predictable responses
  });

  // toReadableStream() emits one JSON-serialized chunk per line
  return new Response(stream.toReadableStream());
}
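Cost control from the list above is easiest to enforce when every request's token usage is logged. A minimal sketch, assuming the usage object returned by the Chat Completions API (for streams it can be requested with `stream_options: { include_usage: true }`) and illustrative gpt-4o-mini prices — check the current price list before relying on these numbers:
// Rough per-request cost estimate for gpt-4o-mini.
// PRICE_PER_1M values are illustrative (USD per 1M tokens at the time of writing).
const PRICE_PER_1M = { input: 0.15, output: 0.60 };

function estimateCost(usage) {
  // usage: { prompt_tokens, completion_tokens } from the Chat Completions response
  const inputCost = (usage.prompt_tokens / 1_000_000) * PRICE_PER_1M.input;
  const outputCost = (usage.completion_tokens / 1_000_000) * PRICE_PER_1M.output;
  return inputCost + outputCost;
}

// e.g. log per user to track spend and spot abuse:
// console.log(`user=${session.userId} cost=$${estimateCost(usage).toFixed(5)}`);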
Streaming Responses on Client
Streaming is critical for UX: the user sees the response as it is generated instead of waiting 3–5 seconds for the full completion:
async function sendMessage(userMessage) {
  setMessages(prev => [...prev, { role: 'user', content: userMessage }]);
  setIsStreaming(true);

  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [...messages, { role: 'user', content: userMessage }] }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let assistantMessage = '';

  // Add an empty assistant message that will be filled in as chunks arrive
  setMessages(prev => [...prev, { role: 'assistant', content: '' }]);

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    // The server proxies stream.toReadableStream(): one JSON-serialized chunk per line.
    // Buffer partial lines, since a network read can end mid-line.
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop();

    for (const line of lines) {
      if (!line.trim()) continue;
      const json = JSON.parse(line);
      const delta = json.choices[0]?.delta?.content || '';
      assistantMessage += delta;

      // Update the last (assistant) message in place
      setMessages(prev => [
        ...prev.slice(0, -1),
        { role: 'assistant', content: assistantMessage },
      ]);
    }
  }

  setIsStreaming(false);
}
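The handler above assumes the usual React state around it. A minimal sketch of that wiring (component and variable names are illustrative); sendMessage is defined inside the component so it closes over messages, setMessages, and setIsStreaming:
import { useState } from 'react';

function ChatWidget() {
  // Conversation history and "assistant is typing" flag used by sendMessage above
  const [messages, setMessages] = useState([]);
  const [isStreaming, setIsStreaming] = useState(false);

  // async function sendMessage(userMessage) { ... } — see the previous listing

  // Render: message list, input field, and a typing indicator while isStreaming
  return null; // markup omitted
}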
Conversation Context Management
Models have a finite context window, and every message in the history costs tokens. For long dialogs you need a trimming strategy:
Sliding window — just the last N messages:
const contextMessages = messages.slice(-10);
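Counting messages is a rough proxy. A token-budget variant of the sliding window keeps as many recent messages as fit; the sketch below uses a crude ~4-characters-per-token estimate (a tokenizer library such as tiktoken gives exact counts):
// Keep the most recent messages that fit into an approximate token budget.
// estimateTokens is a heuristic (~4 characters per token), not an exact count.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function trimToBudget(messages, maxTokens = 3000) {
  const kept = [];
  let total = 0;
  // Walk backwards from the newest message
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (total + cost > maxTokens) break;
    kept.unshift(messages[i]);
    total += cost;
  }
  return kept;
}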
Summarization — compress the old part of the dialog:
async function compressHistory(messages) {
  // Nothing to compress for short dialogs
  if (messages.length <= 10) return messages;

  const toCompress = messages.slice(0, -6); // everything except the last 6 messages
  const recent = messages.slice(-6);

  const summary = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content: `Summarize this conversation briefly:\n${toCompress.map(m => `${m.role}: ${m.content}`).join('\n')}`,
      },
    ],
    max_tokens: 200,
  });

  return [
    { role: 'system', content: `Previous conversation summary: ${summary.choices[0].message.content}` },
    ...recent,
  ];
}
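In practice the summary is worth caching per conversation (for example next to the conversation record in the database) and refreshing only every few messages, so you are not paying for an extra summarization call on every turn.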
Functions and Tools (Function Calling)
The chatbot can call functions — check order status, search products, book a consultation:
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_order_status',
      description: 'Get order status by order number',
      parameters: {
        type: 'object',
        properties: {
          order_number: { type: 'string', description: 'Order number' },
        },
        required: ['order_number'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'search_products',
      description: 'Search products by query',
      parameters: {
        type: 'object',
        properties: {
          query: { type: 'string' },
          max_price: { type: 'number' },
        },
        required: ['query'],
      },
    },
  },
];
// Handle function calls (executeFunction is the application's own dispatcher, sketched below)
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
  tools,
  tool_choice: 'auto',
});

const message = response.choices[0].message;

if (message.tool_calls) {
  // Execute every requested tool and collect the results
  const toolResults = await Promise.all(
    message.tool_calls.map(async (call) => {
      const result = await executeFunction(call.function.name, JSON.parse(call.function.arguments));
      return {
        role: 'tool',
        tool_call_id: call.id,
        content: JSON.stringify(result),
      };
    })
  );

  // Send the assistant message and tool results back for the final answer
  const finalResponse = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [...messages, message, ...toolResults],
  });
}
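executeFunction in the listing above is not an SDK call but the application's own dispatcher that maps tool names to backend logic. A minimal sketch, assuming hypothetical db.orders and db.products lookups in your backend:
// Hypothetical dispatcher: maps tool names requested by the model to real backend calls.
// db.orders.findByNumber and db.products.search are placeholder project functions.
async function executeFunction(name, args) {
  switch (name) {
    case 'get_order_status': {
      const order = await db.orders.findByNumber(args.order_number);
      return order
        ? { status: order.status, eta: order.estimatedDelivery }
        : { error: 'Order not found' };
    }
    case 'search_products':
      return db.products.search(args.query, { maxPrice: args.max_price });
    default:
      return { error: `Unknown function: ${name}` };
  }
}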
Timeline
- Basic chatbot with system prompt and streaming — 2–3 days
- With function calling (orders, search, booking) — plus 2–3 days
- Widget with conversation history, lead capture, handoff to operator — 5–7 days
- Multilingual bot with intent routing — plus 2–3 days