Event Information
WHEN
ON DEMAND
Join Bruce Kratz, VP of Development, and Matt Brady, Software Developer, for an interactive look at writing smarter AI prompts. They’ll show you how to craft and refine prompts that generate accurate, creative outputs for race descriptions, sponsor blurbs, and more.
Summary of Webinar
Overview
Bruce and Matt walk through how AI—and specifically prompt engineering—is improving day-to-day work at RunSignup. They demystify jargon (LLMs, tokens, context windows), show how tools like Cursor, ChatGPT/Claude/Gemini, and Vercel v0 fit together, and share concrete workflows for getting reliable, business-ready results while avoiding hallucinations.
Key Concepts (Plain-English)
- LLM (Large Language Model): The “brain” (e.g., GPT, Claude, Gemini) trained on huge datasets to generate answers. 
- GPT = Generative Pre-trained Transformer: Generative (creates), Pre-trained (learns before you use it), Transformer (the architecture). 
- Parameters: Rough “brain capacity.” More parameters don’t automatically mean smarter, but they let the model learn more patterns. 
- Tokens: The unit models read/write (sub-words/characters). You pay for input + output tokens; models cap how many fit per request (see the quick illustration after this list). 
- Context / Context Window: The model’s “working memory” (your prompt + chat history + relevant files). Bigger window → more info it can “hold.” 
- RAG (Retrieval-Augmented Generation): Fetch relevant documents/code/specs → add to prompt → generate a better answer. 
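To see tokens in action, here is a quick illustration assuming the tiktoken library (OpenAI’s open-source tokenizer); the sample text is a placeholder:

```python
# A quick look at tokenization, assuming the tiktoken library (OpenAI's
# open-source tokenizer). Models read, write, and bill by tokens, not words.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models
text = "Write a race description for the Riverside 5K."
tokens = enc.encode(text)
print(len(tokens))         # number of input tokens this text would cost
print(enc.decode(tokens))  # decoding round-trips back to the original text
```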
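To make the RAG idea concrete, here is a minimal sketch assuming the OpenAI Python SDK; `search_docs()` is a hypothetical retrieval helper (keyword search, a vector database, or a codebase index would all fit):

```python
# A minimal RAG sketch, assuming the OpenAI Python SDK. search_docs() is a
# hypothetical retrieval helper; swap in real search over your docs/specs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_docs(question: str) -> list[str]:
    # Placeholder retrieval step; replace with real keyword/vector search.
    return ["(paste the relevant spec or doc excerpt here)"]

def answer_with_rag(question: str) -> str:
    # 1) Retrieve relevant material; 2) add it to the prompt; 3) generate.
    context = "\n\n".join(search_docs(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": ("Answer only from the provided context. "
                         "If the context is insufficient, say you don't know.")},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```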
Tools & How They’re Used
Cursor (primary dev tool)
- What it is: A coding editor with an AI context manager (not an LLM itself). 
- Automatic context: Open files, cursor location, nearby code, recent edits, chat history. 
- Manual context: Your prompt, attached files/snippets, indexed codebase, team rules/conventions. 
- Under the hood: Sends a curated, rule-aware prompt to an LLM (e.g., Claude, GPT). 
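Conceptually, that assembly step is just careful prompt composition. Below is a hypothetical sketch (not Cursor’s actual code) of how the automatic and manual context above might be combined before the model is called:

```python
# Conceptual sketch only (not Cursor's actual implementation): how an AI
# context manager might combine automatic and manual context into one prompt.
def build_prompt(user_request: str, team_rules: str, open_files: str,
                 recent_edits: str, chat_history: str, attachments: str) -> str:
    parts = [
        f"Team rules/conventions:\n{team_rules}",
        f"Open files and nearby code:\n{open_files}",
        f"Recent edits:\n{recent_edits}",
        f"Chat history:\n{chat_history}",
        f"Attached files/snippets:\n{attachments}",
        f"Request:\n{user_request}",
    ]
    # Everything assembled here must fit within the model's context window.
    return "\n\n".join(parts)
```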
Vercel v0
- No/low-code UI builder: Prompt on the left → working UI on the right. 
- Great for scaffolding utilities, admin tools, and experiments. 
General chat LLMs (ChatGPT/Claude/Gemini)
- Use cases: Research, plan, explain, draft, summarize, outline tests, and suggest fixes. 
- Pro tip: Start new chats for new topics to keep context clean. 
Practical Prompt Engineering (What actually works)
- Keep context clean
  - Start a new chat for each new task or topic.
  - Limit open files (in dev tools) to only what’s relevant.
- Give examples & guardrails
  - Paste short, correct examples of the pattern you want.
  - State role/format: “You are a senior engineer. Ask clarifying questions. Do not guess.”
- Prefer multi-step, iterative prompting over giant one-shot prompts
  - Plan together → confirm assumptions → then act.
  - Ask the model to outline changes before editing code (see the sketch after this list).
- Allow an “out” to reduce hallucinations
  - Say: “If unclear, ask questions or say you don’t know.”
- RAG your prompts
  - Attach specs, API docs, code indexes, quick references.
  - Don’t just say “use the API”; paste the endpoint fragment you want.
- Review + test like normal
  - AI accelerates; it doesn’t replace code review, tests, or staging checks.
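To make the plan-then-act pattern concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the task are placeholders:

```python
# A minimal sketch of "plan first, then act," assuming the OpenAI Python SDK.
# The system message supplies the role and guardrails, including an explicit
# "out" so the model asks instead of guessing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # assumption: any capable chat model works here

messages = [
    {"role": "system",
     "content": ("You are a senior engineer. Ask clarifying questions. "
                 "If uncertain, say so. Do not guess.")},
    {"role": "user",
     "content": ("Outline the changes needed to add CSV export to the "
                 "reports page. List affected files and steps; no code yet.")},
]

# Step 1: get a plan and review it the way you'd review a design doc.
plan = client.chat.completions.create(model=MODEL, messages=messages)
plan_text = plan.choices[0].message.content
print(plan_text)

# Step 2: only after approving the plan, ask for the actual change,
# keeping the plan in the conversation so it stays in the context window.
messages.append({"role": "assistant", "content": plan_text})
messages.append({"role": "user",
                 "content": "The plan looks right. Generate the diff for step 1 only."})
edit = client.chat.completions.create(model=MODEL, messages=messages)
print(edit.choices[0].message.content)
```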
 
Coding Demos & Lessons
1) Reusing UI flows (speed boost)
- Task: Add existing “add sub-account” flow to a new self-serve portal. 
- Approach: Detailed prompt + concrete file/selector references + “do not deviate.” 
- Result: Hours instead of a week. AI copied patterns, dev reviewed/refined. 
2) Production bug investigation
- Task: CSV field mapping not aligning. 
- Prompt: “Investigate step-by-step. Do not guess. Think out loud.” 
- AI output: Problem analysis, proposed fix (file + line), test steps, sample CSV header. 
- Result: Faster root-cause + targeted patch; still went through human review and tests. 
3) API utility with v0 (watch for hallucinations)
- Issue: AI invented a non-existent endpoint (“race_questions”). 
- Fix: Provide the exact RunSignup API spec (e.g., race_participants with include_questions=…) in the prompt.
- Pattern: When AI strays, tighten context & ask it to list planned changes before acting. 
Safety & Quality Controls (RunSignup practices)
- AI can propose changes; developers decide what to implement. 
- All code still goes through PR, code review (2 approvals), and testing. 
- Use AI for repetitive patterns and research; keep critical judgment with the team. 
Handling Skepticism (“Isn’t it hallucinating?”)
- Yes, it can hallucinate, but less so with:
  - Clean, scoped context
  - Attached specs/examples
  - Guardrails (“don’t guess; ask questions”)
  - Stepwise planning before edits
- It’s not magic; it’s a power tool. Operators matter. 
Quick Tips & Prompts You Can Steal
- Role & guardrails: 
 “You are a senior {role}. Ask clarifying questions. If uncertain, say so. Provide a step-by-step plan before making changes.”
- Analyze before act: 
 “Analyze this error and propose fixes. Do not modify code yet. Output: root cause, affected files/lines, minimal diff.”
- API grounding: 
 “Here is the API spec excerpt we must use. Only use these endpoints/params. If something’s missing, ask.”
- Refactor safely: 
 “List all files you would change and why. Wait for approval before generating diffs.”
Real-World Wins
- Faster scaffolding & migrations (hours vs. days) 
- Quicker debugging via structured analysis & targeted diffs 
- Better documentation/testing (AI drafts test plans and sample data) 
- Email/ops summaries: e.g., auto-summarize incident mail threads into timelines (Gemini in Gmail) 
Key Takeaways
- Context is king. Clean chat history, attach the right docs, give examples. 
- Guide, don’t abdicate. Ask for plans; approve before change. 
- RAG everything important. Paste specs/snippets; avoid “make it work” vagueness. 
- Iterate conversationally. Multi-step prompting beats monolithic prompts. 
- Humans still ship it. AI accelerates; reviews and tests keep quality high. 
