RunSignup is increasing the pace of AI adoption within our development team. We expect to grow from a pace of 2,000 releases per year to 4,000 releases per year by the end of 2026.
This is great news for our customers, as we will be able to deliver more functionality that helps events grow and provide great experiences for their customers.
This blog will dig into some of the background on what is happening with AI, and how we are adopting the technology in our development team.
Claude Opus 4.6 and Claude Code in early February
Anthropic released a major step-function improvement in AI coding ability in early February 2026. It became clear that AI can write code better than humans in many cases, and will certainly be much better than humans as new releases roll out this year.
All of our developers were already using AI. Most had transitioned to Cursor as their IDE, which provides built-in AI capabilities that work with any of the back-end models. The rest had stuck with Copilot in VS Code. We were seeing maybe 10-20% productivity improvements from help with problem solving and code completion.
Several of our developers had started experimenting with letting AI write the code. They saw a huge improvement with the new release. They could describe the functionality they wanted (in pretty technical terms), and Cursor with Claude underneath would create a Plan. The developer could edit the Plan, and then the AI would write the code. Increasingly, they found they could “One Shot” this process – meaning that Cursor wrote exactly the right plan and then wrote code that, after review, could go into production without modification.
AI Days Development Team Meeting
In March, much of the development team got together in Moorestown. We spent a day sharing presentations on what some of the early adopters had been doing as well as some of the overall strategy of what we wanted to do. The next day we paired up so that every developer took a fairly simple task and used the new method of having AI write the Plan and the Code.
The result of that meeting is that everyone is now trained on how to write code in this new environment – let AI write it! This is obviously not the approach for everything; there is still a need for human interaction and steering of the AI, and of course our human code review process.
Doubling Developer Productivity – Training AI with Skills
While AI can write more and better code (24 hours a day!), it still needs direction from a human. Claude has introduced the concept of “Skills” that help humans give more automated and consistent direction to the AI.
One of the actions we took from the meeting was an effort to create over 50 Skills and set them up to be shared across the various development environments we use – Cursor, Copilot, Claude Code – and the different back-end models like Claude Opus and Sonnet, OpenAI Codex, Gemini, etc. We are well on our way, with over a dozen developed so far.
Skills provide more detailed background on an area of our system. For example, a skill for Email or Websites allows for better creation of new email or website components. Another example: we developed a skill for writing Refund code. We used it to add some administrator refund capabilities, and were able to crank out the Membership Refund capability in a day. We have other skills that help with some of our automated test suites.
Building out Skills, and continually improving them, will allow AI to write better and better code.
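As a concrete illustration, a Claude Code skill is typically just a SKILL.md file: a short frontmatter block tells the model when the skill applies, and the body carries the detailed guidance. The sketch below is hypothetical – the rules and wording are invented for illustration, not taken from our actual Refund skill:

```markdown
---
name: refund-code
description: Conventions for writing refund-processing code. Use when adding or changing any refund feature.
---

# Writing Refund Code

- Route all money movement through the shared refund service; never call the
  payment gateway directly from feature code.
- Every refund must write an audit record before the gateway call is made.
- Cover every new refund path with the automated refund test suite.
```

Because the skill travels with the repository, every developer (and every agent) gets the same direction without re-explaining it in each prompt.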
Feature Pipeline
If developers are able to produce twice as many feature releases, it will put strain on the overall pipeline – from the input side of deciding which features to do and how, to the code review process, to the release process. Bottlenecks will develop and create backlogs.
To solve that problem, we are introducing AI Agents into our pipeline:

We have prototypes of each of these AI agents already being used in a limited fashion. These will be improved and put into the process by the summer, which will help us alleviate bottlenecks. Several of these are interesting to dig into a bit more.
Evaluate Value AI Agent – Allison and Bob typically do this step as humans. They spent a few hours with Allison’s AI coworker to develop this Skill for a stand-alone Agent. Allison’s agent already had a lot of context on what RunSignup does and how we work and evaluate things, as it has been helping her write blogs and internal documents. We also fed it some of our blogs (like the year-end series of recaps and plans) and then talked through our methods for evaluating business value. We had also instituted a new template that our Account Managers, Support and Sales people use when they hear a request from a customer. The Evaluation Agent proved to be very good at scoring the business value. Right now the agent just leaves a comment in GitHub with its opinion of the business value. As we gain trust, this can be used to auto-pick feature requests and start them through the pipeline.
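To make the hand-off concrete, here is a minimal sketch of how an agent’s verdict can be posted as a GitHub issue comment via the standard REST endpoint. The function names and score format are our invention for illustration, and the hard part – actually producing the score – is elided:

```python
import json
import urllib.request


def build_value_comment(score: int, rationale: str) -> dict:
    """Format a business-value score as a GitHub issue-comment payload."""
    body = (
        f"**Business value score: {score}/10**\n\n"
        f"{rationale}\n\n"
        "_Posted by the Evaluate Value agent (advisory only)._"
    )
    return {"body": body}


def post_issue_comment(token: str, repo: str, issue: int, payload: dict) -> None:
    """POST the comment to GitHub's issue-comments endpoint (works for PRs too)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues/{issue}/comments",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

Keeping the agent’s output as an advisory comment, rather than a state change, is what lets us build trust gradually before letting it drive the pipeline.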
Create Plan and Write Code – The key here is getting AI to write good plans and execute those plans to produce quality code with minimal need for human changes. Skills and training make this possible. As an example, we have a well-developed design system that defines all of the UI components used in our platform. If AI knows this system well, it will use the right components and not create new or strange UI experiences for users with a new feature. Another example is our skill for writing SQL code.
AI Code Review – We are sticklers for code reviews. This flows down from our CTO, Stephen, and it is a major reason we have had only 6 minutes of downtime since 2015. We are using our new AI infrastructure (AWS Bedrock and AgentCore) to create a knowledge base of past code reviews, weighting the ones done by Stephen more heavily when training our code review agent. While we are still iterating, the early results are quite remarkable. The agent is able to make comments in the GitHub pull request and suggest improvements and corrections.
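A rough sketch of the retrieval side of such an agent might look like the following. The `retrieve` call is the standard Bedrock Agent Runtime API; everything else – the metadata field, the reviewer name, and the simple multiplicative boost – is a hypothetical illustration of weighting one reviewer more heavily, not our actual implementation:

```python
def retrieve_past_reviews(kb_id: str, diff_summary: str, top_k: int = 8) -> list:
    """Fetch the most relevant past code-review comments from a Bedrock knowledge base."""
    import boto3  # AWS SDK; needs credentials when actually run

    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": diff_summary},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": top_k}
        },
    )
    return resp["retrievalResults"]


def boost_reviewer(results: list, reviewer: str = "stephen", boost: float = 1.5) -> list:
    """Re-rank retrieved reviews so a trusted reviewer's comments weigh more heavily."""

    def weighted_score(r: dict) -> float:
        weight = boost if r.get("metadata", {}).get("reviewer") == reviewer else 1.0
        return r["score"] * weight

    return sorted(results, key=weighted_score, reverse=True)
```

The re-ranked snippets then go into the review agent’s context, so its pull-request comments echo the standards set in past reviews rather than generic style advice.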

Aggressive Patience and Agent Stress
One of our core principles is to practice Aggressive Patience. In the case of AI, we are aggressively adopting it in our development team, but doing so in a patient manner. We are not using AI to auto-release code; humans review every single line of code AI produces and manually test it. But we are cognizant that AI Agents will do more and more of the work, and we are striving to take advantage of that.
The other thing we discuss is Agent Stress. A developer can now fire up multiple agents working on multiple projects simultaneously. This can cause us humans to feel stress that we are not watching over our agents and keeping them busy enough. We want to make sure that the AI is serving us humans, and to keep our sanity and reasoning.
Summary
AI is changing our jobs fast. And with estimates that it will double in capabilities every 7 months, we expect the rate of change to continue to accelerate. We are trying to ensure that RunSignup is leveraging the power of AI to benefit our customers – and we will keep pushing hard.
