How RunSignup Develops with AI Internally

Event Information

When: On Demand

Go behind the scenes to see how RunSignup’s development team uses AI to build, test, and improve our technology. From code generation to internal support tools, you’ll learn how AI helps us innovate faster while maintaining reliability.

View Slides

Webinar Summary

AI is now a force multiplier for our team: faster iteration and more features, with human-in-the-loop reviews and strong engineering standards keeping quality high.

Key Takeaways

  • AI is a force multiplier, not a replacement: devs spend less time on boilerplate and more on design, UX, security, and performance.

  • Context + rules = quality. Index the codebase, inject standards, and capture corrections as reusable “Memories.”

  • Human-in-the-loop remains essential. Every change still goes through code review and testing.

  • Real impact today: working prototypes in ~1 hour instead of days; in a recent month, ~34% of code commits were AI-assisted (~44% in the main RunSignup repo).

Tooling & Guardrails

  • Primary tool: Cursor (AI IDE)

    • Local code indexing (no intellectual property is used to train external models)

    • Enforced engineering standards (security, DB patterns, naming, docs); an example rules file follows this list

    • Memories capture corrections (e.g., follow latest DB pattern; accuracy over speed; no arbitrary line limits)

  • Earlier experiment: GitHub Copilot (useful autocomplete; limited workflow change)

  • Low-code exploration: V0 for fast API-driven utilities/prototypes
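
As a concrete illustration of seeding rules, below is a hypothetical excerpt of a .cursorrules project file, the plain-text rules file Cursor reads into each session. Every rule shown is an invented example modeled on the Memories above, not RunSignup's actual standards.

    # .cursorrules (hypothetical excerpt)
    You are a senior engineer on this codebase. Prioritize accuracy over speed.
    - Follow the latest DB access pattern; use parameterized queries only.
    - Match existing naming conventions; add docblocks to new functions.
    - Do not invent helpers or endpoints; cite the files a change is based on.
    - No arbitrary line limits; flag security-relevant changes for human review.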

Live Demo Highlights

1) Bug Investigation (Time Zone import issue)

  • Prompt the agent to analyze the issue; within minutes it finds the relevant files, proposes root causes and debug hooks, and lays out a fix plan, with no code changes yet. A sample prompt for this kind of pass is shown below.
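
A hypothetical analysis-only prompt (wording is illustrative, not taken from the demo):

    Investigate the time zone issue in the import flow. Find the relevant
    files, list likely root causes, and propose debug hooks and a fix plan.
    Do not change any code yet.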

2) Feature Build (Restore Deleted Time Slots)

  • One prompt → plan (5–6 steps) → generates DB method, controller action, helper, templates, and admin link (~350 LOC); a rough sketch of the generated shape follows this list.

  • Dev runs it, feeds back any error logs; agent iterates.

  • Final: line-by-line human review, tests, PR, code review, deploy.
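
The demo's actual code isn't published; the minimal Python sketch below shows only the shape of the DB piece, assuming a hypothetical time_slots table with a soft-delete flag (RunSignup's real stack and schema may differ).

    import sqlite3

    def restore_deleted_time_slots(conn: sqlite3.Connection, event_id: int) -> int:
        """Clear the soft-delete flag on an event's time slots.

        Hypothetical schema: a time_slots table with event_id and a deleted flag.
        """
        cur = conn.execute(
            "UPDATE time_slots SET deleted = 0 WHERE event_id = ? AND deleted = 1",
            (event_id,),
        )
        conn.commit()
        # rowcount drives the admin confirmation ("N time slots restored")
        return cur.rowcount

The agent's value in the demo is drafting this layer plus the controller action, helper, templates, and admin link in one pass; the line-by-line human review above is what makes it shippable.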

Workflow (Before vs After)

  • Before: Hours/days tracing code paths, scaffolding layers; slower onboarding.

  • After: Agent maps code + drafts implementation fast; devs review/refine, focus on UX, edge cases, security; faster demos → faster feedback.

Quality & Security

  • Code never used to train external models; indexing only for local context.

  • Standards-first prompts and Memories minimize drift/hallucinations.

  • Reviews/tests unchanged: two human reviewers, CI, performance & security checks.

Impact & Metrics (Representative)

  • Daily AI-assisted commit share is often 30–50% (one way to track such a share is sketched after this list).

  • Recent month snapshot: ~34% overall, ~44% in the main RunSignup repo.

  • Speed: Prototype in ~1 hour vs. 1–2 days (or a week for new devs).

  • Onboarding: Faster ramp by querying the agent and reviewing working examples.
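
The webinar doesn't say how these shares are computed; one way to produce a comparable number, sketched below, assumes a commit-message convention such as a hypothetical AI-Assisted trailer on assisted commits.

    import subprocess

    def count_commits(since: str, *extra: str) -> int:
        """Count commits on HEAD since `since`, with optional rev-list filters."""
        out = subprocess.run(
            ["git", "rev-list", "--count", f"--since={since}", *extra, "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.strip())

    def ai_assisted_share(since: str = "30 days ago") -> float:
        """Share of commits carrying the hypothetical AI-Assisted trailer."""
        total = count_commits(since)
        assisted = count_commits(since, "--grep=^AI-Assisted:")
        return assisted / total if total else 0.0

    if __name__ == "__main__":
        print(f"AI-assisted commit share: {ai_assisted_share():.0%}")

Run it daily for the trend line, or per repository for the per-repo split.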

Practical Tips (Replicable)

  • Seed AI with canonical files and written standards.

  • Build a Memories library from every correction.

  • Ask for a plan first, then code.

  • Be explicit: role, accuracy over speed, which files/patterns to follow (an example prompt follows this list).

  • Keep human control: code review, tests, perf/security validation.
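
Putting these tips together, a request might follow this hypothetical template; the wording and placeholder file names are illustrative, not from the webinar.

    You are a senior engineer on this repo. Accuracy matters more than speed.
    Task: <feature or bug>.
    Follow the DB pattern in <canonical DB file> and the controller style in
    <canonical controller file>.
    Give me a numbered plan first; wait for approval before writing code.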

Risks & Mitigations

  • Hallucinations / old patterns: Reduce via standards, canonical examples, Memories.

  • Overconfidence in outputs: Maintain strict reviews/tests.

  • Context drift: Scope prompts and point to specific files.

What’s Next

  • Larger context windows, smarter agents, richer IDE planning.

  • Better analytics to target rule/memory gaps.

  • Wider enablement: curated rules per domain (endurance, tickets, fundraising); deeper test generation.
