Event Information
WHEN
ON DEMAND
Join RunSignup developers Joel Peterson and Jonathan Farrell for this on-demand webinar focused on Security and AI Access. They share insights on how RunSignup is approaching security, permissions, and the responsible use of AI across our platform.
Summary of Webinar
1. Quick Recap: What Is Vibe Coding?
Vibe coding = you act more like a product manager than a coder:
You describe:
What you want the app to feel like
What it should do
The AI:
Writes the code
Handles most syntax and wiring
Upside:
Very fast, very empowering, great for prototypes and small utilities.
Downside / Risk:
“The person building the app often has no idea what’s really happening under the hood.”
AI is trained on our code, and our code includes bad code.
So it can generate:
Code that works functionally
But is fundamentally insecure
So we need a “vibe check” under the hood, especially once we go beyond toy demos.
2. When Does Security Actually Become a Concern?
They frame it as three zones:
2.1 Safe Zone – Low Risk
Things that are usually fine without deep security concerns:
Fully local, self-contained pages/apps
Static prototypes: no external data, no user login, no file uploads
“Floor plan sketched on a napkin” → nothing is really connected yet
In this zone, you’re mostly safe to just play.
2.2 Tipping Point – Medium Risk
Security becomes a concern as soon as your app talks to anything else:
Signs you’ve crossed the line:
Ingesting external data
Calling APIs (RunSignup or others)
Accepting user input: comments, search boxes, text fields
Accepting file uploads
API integrations and secrets
Using API keys
Storing secrets somewhere
If you put your API key in front-end code, that key is public.
Databases
Connecting to a DB (even a small one)
Storing or retrieving any user data
Once you do this, you’re open to:
SQL injection and other injection attacks
XSS
Abuse of exposed keys/endpoints
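A minimal sketch of the injection risk just mentioned, assuming a Node/TypeScript server and the `pg` Postgres driver (the table and column names here are illustrative, not from the webinar demo): never concatenate user input into SQL; let the driver handle it as a parameter.

```typescript
// Illustrative only: a hypothetical comments table and a pg connection pool.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the PG* environment variables

// UNSAFE: string concatenation lets a crafted input rewrite the query.
// const rows = await pool.query(`SELECT * FROM comments WHERE author = '${userInput}'`);

// Safer: a parameterized query, so the driver escapes the value for you.
export async function findComments(userInput: string) {
  const result = await pool.query(
    "SELECT id, body FROM comments WHERE author = $1",
    [userInput]
  );
  return result.rows;
}
```

The same habit applies to anything user-supplied that ends up in an API call or rendered on a page.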
2.3 Red Zone – High Risk
Now you’re in “serious responsibility” territory:
Authentication & Identity
Logins, tokens, “prove you are who you say you are”
If that’s compromised, someone can impersonate your users.
Deployment to the public internet
Your app becomes a public endpoint.
Questions you must ask:
Am I exposing internal APIs unintentionally?
Am I exposing any credentials or private data?
Is data from a protected API now being re-shared publicly?
They emphasize:
Most big breaches come from human mistakes (weak passwords, phishing, pasted secrets), not Hollywood-style hacking.
3. Principle of Least Privilege: Choosing the Right Access Level
For RunSignup specifically, they break access into three levels:
3.1 No Credentials (Public APIs)
Use this whenever you can.
Many RunSignup endpoints are public:
Example:
GET races (race search/list)
These return only public data:
Races visible on RunSignup
No draft/private races
No API keys needed, no secrets required.
Good for:
Public race search tools
Calendars / “find a race near me”
Front-end heavy demos
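A minimal sketch of the no-credentials pattern: the public race list can be fetched with no key or secret at all. The exact query parameters are assumptions here; check the RunSignup REST API documentation for the full list.

```typescript
// Public data only: no api_key or api_secret anywhere in this call.
export async function listPublicRaces() {
  const response = await fetch(
    "https://runsignup.com/Rest/races?format=json&results_per_page=25"
  );
  if (!response.ok) {
    throw new Error(`RunSignup request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.races; // only races already visible publicly on RunSignup
}
```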
3.2 API Keys (Scoped Access)
Use when you need access to your own race’s internal data:
Participant lists
Results management
Other race-private data
API keys:
Are configured on:
Race dashboards
Ticket events
Partner/timer dashboards (with expected scope)
Should be treated as sensitive:
Never hard-code in front-end JS
Never check into Git
Best practice:
Store them as environment variables (server-side) and read them in your server routes / API handlers.
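A short sketch of that best practice, assuming a Node/Next.js-style server environment: the key and secret are read from environment variables in server code only, never written into front-end JavaScript.

```typescript
// Bad (front-end): const apiKey = "abc123"; // visible to anyone via View Source

// Good (server-side only): read credentials from the environment at runtime.
const apiKey = process.env.RUNSIGNUP_API_KEY;
const apiSecret = process.env.RUNSIGNUP_API_SECRET;

if (!apiKey || !apiSecret) {
  throw new Error("Missing RunSignup API credentials in environment variables");
}
```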
3.3 OAuth2 (User-by-user Auth)
They only mention this briefly:
For apps where any RunSignup user can log in and authorize your app
The user logs into RunSignup → grants your app access
More complex; not the main focus of this talk
4. Demo: Safe Use of RunSignup APIs with Vibe Coding
They walk through two concrete patterns:
4.1 Public Race Search (No Credentials)
Uses the “Get Races” API (public endpoint).
Built via Vercel v0 (AI-based app builder).
Input:
ZIP code
Radius (miles)
Output:
List of races around that point
Key point:
No API keys / secrets involved, because all this data is already public on RunSignup.
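A sketch of the public race search from the demo: ZIP code and radius go in as query parameters, and no credentials are involved. The parameter names (zipcode, radius, results_per_page) are assumptions based on the demo's inputs, so verify them against the RunSignup REST API docs.

```typescript
// Public race search: user inputs become query parameters, nothing secret needed.
export async function searchRacesNearZip(zip: string, radiusMiles: number) {
  const params = new URLSearchParams({
    format: "json",
    zipcode: zip,
    radius: String(radiusMiles),
    results_per_page: "25",
  });
  const response = await fetch(`https://runsignup.com/Rest/races?${params}`);
  if (!response.ok) {
    throw new Error(`Race search failed: ${response.status}`);
  }
  const data = await response.json();
  return data.races ?? [];
}
```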
4.2 Participant List App (Requires API Key & Secret)
Use case:
Race director wants a simple app that lists participants for their race.
Steps:
Generate API key & secret
Go to race dashboard → Access / Secure Access & Info / Sharing → API Keys section.
This key works for that race’s scope.
Prompt in v0 (Vercel)
They explicitly instruct the AI:
Treat the API key + secret as sensitive
Use environment variables for them
Follow security best practices
Environment variables
RUNSIGNUP_API_KEY / RUNSIGNUP_API_SECRET (or similar)
Stored in:
v0 environment during building
Vercel project → Settings → Environment Variables after deploy
Not accessible in the browser; only in server-side execution.
Result
A live app that lists participants for a specific race/event.
Uses a server-side API route which reads env vars and fetches from RunSignup.
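A sketch of what that server-side route could look like, assuming a Next.js App Router project (e.g. app/api/participants/route.ts). The env var names follow the talk; the participants endpoint path and its query parameters are assumptions here, so confirm them against the RunSignup REST API documentation.

```typescript
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  // Credentials are read from the server's environment, never sent to the browser.
  const apiKey = process.env.RUNSIGNUP_API_KEY;
  const apiSecret = process.env.RUNSIGNUP_API_SECRET;
  if (!apiKey || !apiSecret) {
    return NextResponse.json({ error: "Server is missing API credentials" }, { status: 500 });
  }

  const raceId = new URL(request.url).searchParams.get("race_id");
  if (!raceId) {
    return NextResponse.json({ error: "race_id is required" }, { status: 400 });
  }

  // The key and secret are attached here, on the server; the browser only
  // ever talks to this route.
  const params = new URLSearchParams({
    format: "json",
    api_key: apiKey,
    api_secret: apiSecret,
  });
  const upstream = await fetch(
    `https://runsignup.com/Rest/race/${raceId}/participants?${params}`
  );
  if (!upstream.ok) {
    return NextResponse.json({ error: "RunSignup request failed" }, { status: 502 });
  }
  return NextResponse.json(await upstream.json());
}
```

In Next.js, environment variables without the NEXT_PUBLIC_ prefix are only available to server code, which matches the "not accessible in the browser" point above.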
Jonathan’s cautionary story:
He once let AI integrate with a stock-quote API.
AI asked for the API key → he pasted it.
AI then hard-coded the key in a front-end JavaScript file.
Fine for a toy prototype, but:
If deployed, anyone using “View Source” could see and copy that key.
Key lesson:
Environment variables or other server-side storage are critical.
Secrets must live behind the server, never shipped to the browser.
5. Don’t Reinvent the Wheel: Timer Utilities
They point folks to existing Vibe-coded tools RunSignup already hosts:
Timer Utilities page (RunSignup Timer Utilities)
Many small apps already built, e.g.:
Results display tools (search by name/bib, show results for a specific race)
Example they demo:
A results-search app for the Scott Coffee race:
Choose race & event
Search name or bib
See results
Because results are public, these utilities don't require API keys or secrets.
If you want a new utility:
Use Contact Us on that page
Select “Timer Utilities” and submit requests/ideas
6. Final Security Habits: “Trust, But Verify”
They pull it together with a car analogy:
Cars are awesome and fast, but you still wear a seatbelt, follow speed limits, and change the oil.
AI / Vibe coding is the same.
You get speed and power — but you also need guardrails.
6.1 Three Simple Rules to Always Follow
Never let AI hard-code secrets
API keys, secrets, tokens:
Use environment variables, config files, or secret stores.
Ask AI explicitly:
“Use environment variables for API key and secret.”
Double-check that it didn't "sneak them" into front-end code.
Don’t just run the code — read it
Skim for:
Keys, passwords, tokens in plain text
Suspicious dependencies or imports
Unvalidated user input being sent directly to APIs/DBs
If you’re not comfortable:
Ask someone more technical to take a look
Or reach out to RunSignup if it’s related to their APIs
Ask the AI to audit itself
Paste the generated code back in and prompt, e.g.:
“Act as a security researcher. Find any potential vulnerabilities, hard-coded secrets, or bad practices in this code and show me how to fix them.”
Or:
“Review this for XSS, injection risks, secret leakage, and unsafe dependencies.”
It often finds and fixes its own mistakes surprisingly well.
7. Key Takeaways (for your notes)
Vibe coding is powerful and worth using — just don’t treat it as magic.
Security concerns start the moment your app:
Talks to external APIs
Accepts user input
Stores or exposes private data
With RunSignup:
Prefer public endpoints when possible.
Use API keys for your own race data.
Protect keys with environment variables, not front-end JS.
Before deploying anything public:
Confirm what data you’re exposing.
Ensure nothing private (participants, secrets, internal endpoints) is accidentally public.
Your role is no longer just “prompt engineer” — you’re also a code auditor.
