AI Prompt Guidelines
How the app uses Gemini, response tags, and prompt conventions.
Overview
AI responses use HTML comments to embed structured data. The default model is gemini-2.0-flash (configurable via app_config). Integration lives in src/lib/gemini.ts — streamChatWithGemini() and chatWithGemini().
AI response tags
The AI embeds these tags in HTML comments within its response:
| Tag | Purpose | Example |
|---|---|---|
| ANSWER | Save an answer | ANSWER:system_type=alarm |
| CHIPS | Show clickable chip buttons | CHIPS:["Yes","No"] |
| QTY_SELECTOR | Show a quantity selector UI | QTY_SELECTOR:camera |
| ESTIMATE_READY | Estimate is ready to generate | — |
| ESTIMATE_CONFIRMED | User confirmed the estimate | — |
| ESTIMATE_RANGE | Show a price range | ESTIMATE_RANGE:1500-3000 |
answer-extractor.ts parses these tags from the streamed response.
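A minimal sketch of how such tags could be pulled out of a response. The regex, type, and function names below are illustrative only, not the actual answer-extractor.ts implementation:

```typescript
// Hypothetical tag extractor: finds tags embedded in HTML comments,
// collects them, and strips them from the user-visible text.
type ExtractedTag = { tag: string; payload: string | null };

const TAG_PATTERN =
  /<!--\s*(ANSWER|CHIPS|QTY_SELECTOR|ESTIMATE_READY|ESTIMATE_CONFIRMED|ESTIMATE_RANGE)(?::([\s\S]*?))?\s*-->/g;

function extractTags(response: string): { visibleText: string; tags: ExtractedTag[] } {
  const tags: ExtractedTag[] = [];
  const visibleText = response.replace(TAG_PATTERN, (_match, tag, payload) => {
    tags.push({ tag, payload: payload?.trim() ?? null });
    return ""; // remove the comment from what the user sees
  });
  return { visibleText: visibleText.trim(), tags };
}
```

In a streaming setup, the same pattern can be re-run on the accumulated buffer after each chunk, since a comment may arrive split across chunks.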
Gemini integration
Config is read via getConfigValue(bot, key). Streaming routes set maxDuration = 60 so Vercel allows up to 60 seconds per request. Never call eval() on user or AI content; the expression engine uses a safe recursive descent parser instead.
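To illustrate the eval()-free approach, here is a minimal recursive descent evaluator for simple numeric and comparison expressions. The grammar and names are invented for this sketch and are far simpler than the real expression engine:

```typescript
// Hypothetical safe evaluator: tokenize, then parse with one function per
// precedence level (comparison < additive < multiplicative < primary).
type Ctx = Record<string, number>;

function evaluate(expr: string, ctx: Ctx): number | boolean {
  const tokens = expr.match(/\d+(?:\.\d+)?|[A-Za-z_]\w*|>=|<=|==|[-+*/()><]/g) ?? [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function primary(): number {
    const t = next();
    if (t === "(") { const v = additive(); next(); /* consume ")" */ return v; }
    if (/^\d/.test(t)) return parseFloat(t);
    return ctx[t] ?? 0; // unknown identifiers default to 0
  }
  function multiplicative(): number {
    let v = primary();
    while (peek() === "*" || peek() === "/") v = next() === "*" ? v * primary() : v / primary();
    return v;
  }
  function additive(): number {
    let v = multiplicative();
    while (peek() === "+" || peek() === "-") v = next() === "+" ? v + multiplicative() : v - multiplicative();
    return v;
  }
  function comparison(): number | boolean {
    const left = additive();
    const op = peek();
    if (op === ">" || op === "<" || op === ">=" || op === "<=" || op === "==") {
      next();
      const right = additive();
      switch (op) {
        case ">": return left > right;
        case "<": return left < right;
        case ">=": return left >= right;
        case "<=": return left <= right;
        default: return left === right;
      }
    }
    return left;
  }
  return comparison();
}
```

Because the parser only ever produces numbers and booleans from a fixed grammar, untrusted input cannot trigger arbitrary code execution the way eval() can.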
Estimate flow
The estimate chatbot uses a deterministic state machine: load session → save user message → KB search → pre-resolution → build prompt with step info and answers → stream Gemini → extract tags → save answers → evaluate transitions for next step. See src/lib/estimate/prompt-builder.ts and answer-extractor.ts.
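The save-answers and evaluate-transitions steps can be sketched as a pure function of the session state. The step names, transition table, and function names below are invented for illustration and do not reflect the real src/lib/estimate code:

```typescript
type Session = { step: string; answers: Record<string, string> };

// Hypothetical transition table: currentStep -> (answers) -> nextStep.
const transitions: Record<string, (a: Record<string, string>) => string> = {
  ask_system_type: (a) => (a.system_type ? "ask_camera_count" : "ask_system_type"),
  ask_camera_count: (a) => (a.camera_count ? "estimate_ready" : "ask_camera_count"),
  estimate_ready: () => "estimate_ready",
};

// One turn of the state machine: merge in answers extracted from ANSWER tags
// (payloads like "system_type=alarm"), then evaluate the transition for the
// current step. Deterministic: the same inputs always yield the same next step.
function advance(session: Session, answerPayloads: string[]): Session {
  const answers = { ...session.answers };
  for (const p of answerPayloads) {
    const [key, ...rest] = p.split("=");
    if (key && rest.length) answers[key] = rest.join("=");
  }
  const step = transitions[session.step]?.(answers) ?? session.step;
  return { step, answers };
}
```

Keeping the transition logic out of the model's hands means the LLM only fills in answers; which question comes next is decided by code, so the flow is reproducible and testable.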