AI Prompt Guidelines

How the app uses Gemini, response tags, and prompt conventions.

Overview

AI responses use HTML comments to embed structured data. The default model is gemini-2.0-flash (configurable via app_config). Integration lives in src/lib/gemini.ts: streamChatWithGemini() and chatWithGemini().
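For example, a single streamed reply can interleave user-visible prose with machine-readable comments (the reply text here is invented for illustration):

```html
Got it, an alarm system it is.
<!-- ANSWER:system_type=alarm -->
Would you like professional monitoring?
<!-- CHIPS:["Yes","No"] -->
```

The chat UI renders the prose and strips the comments, while the extractor acts on the tags.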

AI response tags

The AI embeds these tags in HTML comments within its response:

| Tag | Purpose | Example |
| --- | --- | --- |
| ANSWER | Save an answer | ANSWER:system_type=alarm |
| CHIPS | Show clickable chip buttons | CHIPS:["Yes","No"] |
| QTY_SELECTOR | Show quantity selector UI | QTY_SELECTOR:camera |
| ESTIMATE_READY | Estimate ready to generate | (none) |
| ESTIMATE_CONFIRMED | User confirmed estimate | (none) |
| ESTIMATE_RANGE | Show price range | ESTIMATE_RANGE:1500-3000 |

answer-extractor.ts parses these tags from the streamed response.
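The exact API of answer-extractor.ts isn't reproduced here; a minimal sketch of the idea, pulling TAG or TAG:payload pairs out of HTML comments with a regex, might look like this (extractTags is a hypothetical name):

```typescript
// Minimal sketch of tag extraction from an AI response. The real
// answer-extractor.ts may differ; this just shows the comment-parsing idea.
type ParsedTag = { tag: string; payload: string };

function extractTags(response: string): ParsedTag[] {
  const tags: ParsedTag[] = [];
  // Match <!-- TAG --> or <!-- TAG:payload -->
  const re = /<!--\s*([A-Z_]+)(?::(.*?))?\s*-->/g;
  for (const m of response.matchAll(re)) {
    tags.push({ tag: m[1], payload: (m[2] ?? "").trim() });
  }
  return tags;
}

const sample =
  'Sounds good. <!-- ANSWER:system_type=alarm --> <!-- CHIPS:["Yes","No"] -->';
// extractTags(sample) →
//   [{ tag: "ANSWER", payload: "system_type=alarm" },
//    { tag: "CHIPS",  payload: '["Yes","No"]' }]
```

In streaming mode the same regex can be re-run on the accumulated buffer after each chunk, since a tag only matches once its closing --> has arrived.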

Gemini integration

Config is read via getConfigValue(bot, key). Streaming routes export maxDuration = 60 to stay within Vercel's function timeout. Never call eval() on user or AI content; the expression engine uses a safe recursive-descent parser.
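To illustrate why eval() is unnecessary, here is a minimal recursive-descent evaluator for arithmetic expressions (illustrative only; the app's expression engine supports more than this sketch):

```typescript
// Safe recursive-descent evaluator: numbers, + - * / and parentheses.
// No eval(), so untrusted user/AI strings can never execute code.
function evaluate(expr: string): number {
  let pos = 0;
  const peek = () => expr[pos];
  const skipWs = () => { while (expr[pos] === " ") pos++; };

  function parseExpr(): number {   // expr := term (('+'|'-') term)*
    let value = parseTerm();
    skipWs();
    while (peek() === "+" || peek() === "-") {
      const op = expr[pos++];
      const rhs = parseTerm();
      value = op === "+" ? value + rhs : value - rhs;
      skipWs();
    }
    return value;
  }

  function parseTerm(): number {   // term := factor (('*'|'/') factor)*
    let value = parseFactor();
    skipWs();
    while (peek() === "*" || peek() === "/") {
      const op = expr[pos++];
      const rhs = parseFactor();
      value = op === "*" ? value * rhs : value / rhs;
      skipWs();
    }
    return value;
  }

  function parseFactor(): number { // factor := number | '(' expr ')'
    skipWs();
    if (peek() === "(") {
      pos++;                       // consume '('
      const value = parseExpr();
      skipWs();
      if (expr[pos++] !== ")") throw new Error("expected ')'");
      return value;
    }
    const start = pos;
    while (pos < expr.length && /[0-9.]/.test(expr[pos])) pos++;
    if (start === pos) throw new Error(`unexpected '${peek()}'`);
    return parseFloat(expr.slice(start, pos));
  }

  const result = parseExpr();
  skipWs();
  if (pos !== expr.length) throw new Error("trailing input");
  return result;
}

// evaluate("2 + 3 * (4 - 1)") → 11
```

Because every token is matched explicitly, malformed or malicious input can only produce a parse error, never side effects.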

Estimate flow

The estimate chatbot uses a deterministic state machine: load session → save user message → KB search → pre-resolution → build prompt with step info and answers → stream Gemini → extract tags → save answers → evaluate transitions for next step. See src/lib/estimate/prompt-builder.ts and answer-extractor.ts.
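The "evaluate transitions" step at the end of the pipeline is what makes the flow deterministic: each step has an ordered list of guarded transitions, and the first one whose condition matches wins. A minimal sketch (step names and conditions invented for illustration; the real flow config is app-defined):

```typescript
// Hypothetical transition table for the estimate state machine.
type Answers = Record<string, string>;
type Transition = { to: string; when: (a: Answers) => boolean };

const transitions: Record<string, Transition[]> = {
  system_type: [
    { to: "camera_qty", when: (a) => a.system_type === "camera" },
    { to: "alarm_zones", when: (a) => a.system_type === "alarm" },
  ],
  camera_qty: [{ to: "estimate_ready", when: () => true }],
  alarm_zones: [{ to: "estimate_ready", when: () => true }],
};

function nextStep(current: string, answers: Answers): string {
  for (const t of transitions[current] ?? []) {
    if (t.when(answers)) return t.to; // first matching transition wins
  }
  return current;                     // no match: stay on the current step
}

// nextStep("system_type", { system_type: "alarm" }) → "alarm_zones"
```

Keeping transitions as data rather than prompt instructions means the LLM only fills in answers; it never decides which step comes next.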