Event Information
WHEN
ON DEMAND
At RunSignup, we see AI as the future. To understand what’s next, join RunSignup CEO and Founder Bob Bickel for an exploration of the origins and evolution of AI, and why it’s becoming an essential tool for endurance events. In this webinar, Bob will answer:
- What actually is AI?
- How has AI evolved in its short life?
- How is AI impacting the endurance industry?
Summary of Webinar
Overview
This webinar gives a practical, narrative walkthrough of modern AI: how it took off, why progress is accelerating, what major technical ideas explain today’s capabilities, and what the next phase looks like (agents, tooling standards, and AI-native interfaces). It then pivots to RunSignup’s AI roadmap—how internal development has already changed, why open standards (OpenAPI, OAuth2, MCP) matter, and how AI-powered chat and agent workflows can reduce workload for race teams while improving participant experience.
Part 1: A Brief History of Modern AI (2009 → Today)
The “Deep Learning” Kickoff
The modern wave is framed as beginning around 2009–2012, driven by:
- Large labeled datasets (humans labeling images)
- Competitions that rewarded better models
- Breakthrough neural-net approaches (e.g., AlexNet, built at the University of Toronto)
- GPUs enabling parallel math at scale (NVIDIA chips became crucial)
2012–2022: The Machine Learning Era
- AI grew rapidly in practical applications (vision, tagging, classification)
- RunSignup example: automatic bib tagging for photos (introduced in 2016)
  - Built on ML that identifies bib numbers in photos
  - Enables searchable event photo libraries at scale
Large Language Models (LLMs) and the Transformer Era
- OpenAI founded (late 2015)
- Google’s Transformer paper (2017) is highlighted as a key step in enabling today’s language models:
  - Better understanding of context and relationships between words
2022: The Public “Wow” Moment
- Midjourney (July 2022): early public image generation (imperfect but exciting)
- ChatGPT (Nov 2022): mainstream public adoption; people realized it could handle broad tasks and natural language surprisingly well
Acceleration & Adoption
- NVIDIA revenue growth is used as a proxy for acceleration in compute demand
- ChatGPT’s climb to one of the most visited sites is used as a proxy for mainstream behavioral change
Part 2: How AI Works (Simple Mental Model)
Neural Networks as “Brain-Inspired”
AI models are described as being built from “neurons”:
- Inputs → transformation → activation → output
- Many neurons connected form a neural network
- The analogy: learning is like how a child learns patterns and cause/effect over time (inputs → understanding → behavior)
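The inputs → transformation → activation chain above can be sketched in a few lines of Python. This is a toy illustration of the mental model, not how production models work; the weights and bias here are arbitrary numbers chosen for the example:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a nonlinear activation."""
    # Transformation: weighted sum of inputs plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation: squash the result into (0, 1) with a sigmoid
    return 1 / (1 + math.exp(-z))

# A "network" is just many of these neurons wired together in layers.
output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.535
```

During training, the weights are adjusted until the network's outputs match known answers, which is the "child learning patterns over time" part of the analogy.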
Part 3: AI Today (Why It’s Moving So Fast)
Massive Investment
- Big tech companies (Meta, Microsoft, Google) are spending enormous amounts on AI infrastructure
- Models are improving via frequent releases, with major jumps coming faster than traditional software cycles
New Techniques Driving Capability
The webinar flags two important trends:
- Reasoning / chain-of-thought style approaches (multi-step problem solving)
- Mixture of experts (routing tasks through specialized sub-models)
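The mixture-of-experts idea can be illustrated with a toy top-1 router. The expert names and keyword lists below are invented for illustration; real systems learn the routing inside the model rather than matching keywords:

```python
def route(prompt, experts):
    """Pick the expert whose keywords best match the prompt (top-1 routing)."""
    scores = {name: sum(kw in prompt.lower() for kw in kws)
              for name, kws in experts.items()}
    return max(scores, key=scores.get)

# Hypothetical specialized sub-models, keyed by simple trigger keywords
experts = {
    "code": ["python", "bug", "function"],
    "math": ["sum", "equation", "integral"],
    "chat": ["hello", "thanks", "how are"],
}

print(route("Why does my Python function raise a bug?", experts))  # → code
```

The payoff of the real technique is efficiency: only the selected sub-model's parameters do work for a given token, so capacity can grow without every request paying the full compute cost.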
Compute is the Engine
The talk emphasizes the scale:
- Trillions of “tokens” processed monthly (tokens ≈ pieces of words)
- Token throughput is growing extremely quickly
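To give "tokens ≈ pieces of words" a rough feel: a common rule of thumb for English text is about 4 characters per token. Real tokenizers (e.g., BPE) are learned from data, so this heuristic is only an estimate:

```python
def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate using the common ~4-characters-per-token heuristic."""
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Join us for the 10K on Saturday morning!"))  # → 10
```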
Feature expansion in major LLM tools
Capabilities that have emerged rapidly (especially over ~12 months):
- Memory/personalization
- Agents (multi-step workflows)
- Better reasoning for complex tasks
- Spreadsheet/data analysis
- Web-enabled retrieval (search + answer)
- Multimodal interaction (text + images + voice)
Part 4: The Future (Agents + Reliability + Speed)
Key Thesis: “It’s going to keep accelerating”
The webinar references an AI forecasting paper (“AI 2027”) as a framework for how:
- Agents become more reliable
- Capability scales dramatically
- “Human-speed” becomes the wrong comparison unit
Practical takeaway
Even if exact timelines vary, the direction is clear:
- Faster models
- More reliable automation
- More interfaces shifting from websites/menus → conversational/agent-driven workflows
Part 5: What This Means for Events (Why RunSignup Cares)
Human context
- Technology change is happening faster than people can comfortably adapt
- But: events and community may become more important, not less, as humans seek connection in an AI-heavy world
Part 6: RunSignup’s AI Journey So Far (Internal)
1) AI-Assisted Development
- Enabled tools like GitHub Copilot, then more advanced coding assistants (e.g., Cursor)
- The key shift described:
  - Developers write less code from scratch
  - Developers spend more time reviewing, refining, and validating AI-generated code
- Reported impact: meaningful productivity improvement once workflows and safeguards were added
2) Making RunSignup AI-Friendly: OpenAPI + OAuth2
- RunSignup’s API foundation is positioned as a major advantage
- The work:
  - Wrap mature APIs in OpenAPI so AI systems can “understand” endpoints/structures more easily
  - Use OAuth2 to manage secure access for private data and user-authenticated workflows
- Clarification emphasized in Q&A:
  - Some data is public (e.g., races, public results)
  - Anything sensitive/private requires authentication (participants, internal race data)
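As a minimal sketch of the OAuth2 piece, here is what a standard client-credentials token request looks like (RFC 6749 §4.4). The URL and credential values are hypothetical placeholders, not RunSignup's actual endpoints:

```python
import urllib.parse

def build_token_request(client_id, client_secret, token_url):
    """Build a standard OAuth2 client-credentials token request."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return {
        "url": token_url,
        "method": "POST",
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": body,
    }

req = build_token_request("my-app", "s3cret", "https://auth.example.com/oauth/token")
# The access token in the response would then be sent on each API call as:
#   Authorization: Bearer <token>
```

Because this flow is a published standard, an AI agent (or any third-party tool) can obtain scoped, revocable access to private data without ever handling a user's password.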
Part 7: MCP and the Next UI Shift
MCP (Model Context Protocol) framing
MCP is described as a standard that helps LLMs:
- Discover tools
- Call transactional systems (like RunSignup)
- Get real-time data instead of guessing from training text
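Under the hood, MCP messages are JSON-RPC 2.0: a client discovers what a server offers with `tools/list`, then invokes a tool with `tools/call`. A sketch of those two messages, where the tool name `search_races` and its arguments are hypothetical:

```python
import json

def jsonrpc(method, params, msg_id):
    """Wrap a method call in the JSON-RPC 2.0 envelope MCP messages use."""
    return {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}

# 1) Ask the server which tools it exposes
discover = jsonrpc("tools/list", {}, msg_id=1)

# 2) Call a (hypothetical) race-search tool with structured arguments
call = jsonrpc("tools/call",
               {"name": "search_races",
                "arguments": {"city": "Philadelphia", "distance": "5K"}},
               msg_id=2)

print(json.dumps(call, indent=2))
```

The point is that the LLM receives structured, real-time results back from the tool call, rather than reconstructing answers from whatever appeared in its training data.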
Why it matters for RunSignup
- LLMs won’t just “scrape websites”
- They can query structured data directly and respond in natural language
- This is positioned as foundational for:
  - Better event search
  - Better self-service support
  - More automation for race teams
Part 8: RunSignup AI Chatbot (Near-Term Product)
Core goal
Reduce repetitive support burden for event teams:
- Packet pickup
- Parking
- Start times
- Policies (refunds/transfers)
- Confirmation issues
- General event questions
Key ideas
Combine:
- Website content (especially Website Builder V2 content)
- Dynamic data (public data now; authenticated data later)
Provide tools for:
- Viewing chat history
- Improving responses
- Managing FAQ-style answers
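The "combine static content with dynamic data" idea can be sketched as a two-tier lookup: answer from FAQ-style content when it matches, otherwise fall back to a live-data query. The FAQ entries and the lookup function below are invented for illustration:

```python
def answer(question, faq, live_lookup):
    """Answer from FAQ-style content first; fall back to dynamic data."""
    q = question.lower()
    for keywords, response in faq:
        if all(kw in q for kw in keywords):
            return response
    return live_lookup(question)

# Hypothetical FAQ entries a race team might manage
faq = [
    (("packet", "pickup"), "Packet pickup is Friday 4-7pm at the race expo."),
    (("refund",), "Transfers are allowed until race week; see the refund policy."),
]

def live_lookup(question):
    # Stand-in for a real-time query (e.g., registration status via the API)
    return "Let me check the live event data for that..."

print(answer("Where is packet pickup?", faq, live_lookup))
```

A production chatbot would replace the keyword match with retrieval over website content and the fallback with authenticated API calls, but the routing shape is the same.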
Longer-term vision
Move from “Q&A chatbot” → “action-taking agent”:
- Transfers
- Deferrals
- Event switches
- Participant-specific support (with authentication)
Part 9: Vibe Coding + Ecosystem Questions
A recurring theme:
- AI tools make it easy to build custom interfaces quickly (especially for public data and lightweight experiences)
Where RunSignup fits even if the UI gets reinvented:
- The platform remains the secure transactional backbone:
  - Data integrity
  - Payments
  - Permissions
  - Operational workflows
“You can build a custom UI, but you still need a trustworthy system underneath it.”
Q&A Themes Captured
“Will third parties be able to build tools?”
Yes—MCP + OAuth2 + OpenAPI are framed as enabling:
- Timers building custom results/kiosk experiences
- Partners building specialized race/team dashboards
- A broader ecosystem than today’s API partner set
“Energy impacts?”
Acknowledged as real and significant. The framing:
- AI adoption is unlikely to slow due to investment and utility
- The focus should be on using AI responsibly and for real benefit
“Where is the event industry?”
The speaker’s view:
RunSignup is positioned to move faster than competitors due to:
- Mature API infrastructure
- Early OpenAPI + MCP work
- Deep domain expertise
- Large library of educational/product content that models can reference
Best Takeaways
- AI progress isn’t linear; it’s accelerating (capability + compute + adoption).
- The interface trend is shifting: menus/websites → conversational + agent workflows.
- RunSignup’s approach: keep the transactional backbone strong, and add AI-native layers that reduce workload and improve user experience.
- Short-term win: support deflection + better answers.
- Long-term win: authenticated agents that can do things, not just answer questions.
