
How to Evaluate Your CTF Program

Rolling out an in-house CTF (Capture the Flag) program can feel like reinventing the wheel: designing custom challenges is time-intensive, running the platform demands ongoing resources, and unseen organisational biases can skew the learning experience. When budgets tighten, these internal programs, which are expensive to develop and difficult to quantify, are often the first to be eliminated.

CTFs: Fun to build, awkward to justify.

Let's Be Honest

CTFs are one of the most effective ways to build real-world cybersecurity skills. By simulating live-attack scenarios in a gamified environment, they drive engagement, foster problem-solving under pressure, and encourage collaboration across teams. Participants retain knowledge more effectively when they “learn by doing,” and leaders gain clear visibility into skill gaps and individual progress.

But those benefits only count if you can demonstrate them. Instead of flying blind, you need a lean, impact-focused evaluation framework that highlights both successes and areas for improvement. Track the essentials (attendance, completion rates, time-to-solve) and gather rapid feedback without turning your team into data scientists.

Of course, not every organisation can build and maintain its own CTF ecosystem. That’s why many are turning to solutions like Reflare CTF, where ready-made, bias-free challenge libraries and built-in analytics let you demonstrate real value from day one. In this article, we’ll show you how to measure what matters and keep your CTF program, and your cybersecurity budget, secure.

Start With What You Can Actually Track

The Numbers That Matter

Attendance and Dropout Rates: Track who starts, who finishes, and when people leave. Understanding departure patterns helps identify potential program issues, but you'll need to ask people directly why they left to know for sure.

Challenge Completion Rates: Examine the percentage of challenges that people actually complete versus those they abandon. Break this down by challenge category to see where participants struggle or excel.

Time-to-Solve Metrics: Track how long participants take to solve each challenge. Consider tracking time-to-first-hint and time-to-solution separately.

Score Progression: Track how people improve over time. Create learning curves for individual participants and the group as a whole.
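
If your CTF platform can export an attempt log, a short script is enough to turn it into these numbers. The sketch below is a minimal example, assuming a hypothetical CSV export with one row per attempt and columns named participant, challenge, category, started_at, and solved_at; your platform's fields and timestamp format will differ.

```python
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

def load_attempts(path):
    """Read an attempt log: one row per participant/challenge attempt."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def completion_rate_by_category(attempts):
    """Share of attempts in each category that ended in a solve."""
    started, solved = defaultdict(int), defaultdict(int)
    for row in attempts:
        started[row["category"]] += 1
        if row["solved_at"]:  # an empty value means the attempt was abandoned
            solved[row["category"]] += 1
    return {cat: solved[cat] / started[cat] for cat in started}

def median_time_to_solve(attempts):
    """Median minutes from opening a challenge to solving it."""
    fmt = "%Y-%m-%d %H:%M"  # assumed timestamp format
    minutes = [
        (datetime.strptime(r["solved_at"], fmt)
         - datetime.strptime(r["started_at"], fmt)).total_seconds() / 60
        for r in attempts if r["solved_at"]
    ]
    return median(minutes) if minutes else None

attempts = load_attempts("ctf_attempts.csv")  # hypothetical export file
print(completion_rate_by_category(attempts))
print(median_time_to_solve(attempts))
```

Attendance and dropout fall out of the same file: anyone who appears in earlier weeks but not in recent ones is a candidate for an exit conversation.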

Quick Feedback That Works

Weekly Check-ins: Ask three questions after each session: "What worked?", "What didn't work?", and "What should we do differently next time?". Five minutes, done.
Keep a running log of feedback themes. When the same suggestion comes up multiple times, it's worth implementing.

Exit Interviews: When someone drops out, ask why. Structure these around specific questions: What was the last challenge you completed? What stopped you from continuing? What would have made you stay?

Peer Feedback: Ask participants to rate challenges and each other's explanations.
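
Spotting repeated suggestions in the running log from weekly check-ins and exit interviews is easy to automate. A minimal sketch, assuming the log is a plain text file with one comment per line (a made-up format):

```python
from collections import Counter

def top_feedback_themes(log_path, min_count=2, top_n=10):
    """Count normalised feedback lines and return the most frequent ones."""
    with open(log_path, encoding="utf-8") as f:
        themes = [line.strip().lower() for line in f if line.strip()]
    return [
        (theme, n)
        for theme, n in Counter(themes).most_common(top_n)
        if n >= min_count
    ]

for theme, n in top_feedback_themes("feedback_log.txt"):  # hypothetical file name
    print(f"{n}x  {theme}")
```

Exact-match counting is crude; it works best if you record themes in a consistent shorthand rather than free text.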

Measuring Skill Development

Progress Tracking: Monitor how participants advance through difficulty levels. Document their progression from basic concepts to advanced techniques. Can they identify and exploit vulnerabilities they couldn't handle before?

Knowledge Transfer: Monitor whether participants can apply techniques from one challenge category to another.

Learning Velocity: Measure how quickly people adapt to new challenge types. Do they need fewer hints over time?
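
A rough proxy for learning velocity is the trend in hint usage across sessions. The sketch below fits a least-squares slope to per-session hint counts; the numbers are illustrative, and the data structure is an assumption rather than anything a specific platform exports.

```python
def hint_trend(hints_per_session):
    """Least-squares slope of hints used across sessions.
    A negative slope suggests the participant needs fewer hints over time."""
    n = len(hints_per_session)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(hints_per_session) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(hints_per_session))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical per-session hint counts for one participant
print(hint_trend([5, 4, 4, 2, 1]))  # negative slope: fewer hints needed over time
```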

CTF Competition Performance

Individual Progress

Score Improvement: Track individual scores across multiple CTF sessions. Look for improvement patterns and identify where people plateau or struggle.

Challenge Category Mastery: Monitor which categories participants excel in and which they avoid.

Hint Usage: Track how often people require hints and whether this frequency decreases over time.
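
Plateaus are easy to miss when you only glance at the latest scoreboard. The check below is a simple sketch that compares a participant's most recent sessions against the ones before them; it assumes you can list their scores in session order, and the thresholds are arbitrary starting points.

```python
def is_plateaued(scores, window=3, min_gain=0.05):
    """Flag a participant whose last `window` sessions improved by less than
    `min_gain` (5%) over the `window` sessions before them."""
    if len(scores) < 2 * window:
        return False  # not enough history to judge
    recent = sum(scores[-window:]) / window
    earlier = sum(scores[-2 * window:-window]) / window
    return earlier > 0 and (recent - earlier) / earlier < min_gain

# Hypothetical per-session scores for two participants
print(is_plateaued([400, 405, 410, 412, 414, 415, 416]))  # True: progress has stalled
print(is_plateaued([200, 260, 330, 410, 500, 600, 710]))  # False: still climbing
```

Pair a plateau flag with the category breakdown above to see whether someone has stalled everywhere or only in one area.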

Team Dynamics

Collaboration Patterns: In team-based CTFs (particularly on-site), observe how participants work together. Are they sharing knowledge effectively? Do they divide work appropriately?

Knowledge Sharing: Track instances where participants assist each other in solving challenges.

Leadership Development: Observe when participants begin mentoring others or assuming leadership roles during on-site team challenges.

Red Flags to Watch For

When Your Program Isn't Meeting Its Goals

The Same People Always Win: Consider whether your program is designed for skill development or competition. If you're trying to teach and the same people consistently dominate, you might need to adjust your approach.

No Collaboration: If people work in isolation and your program is designed to encourage teamwork, you have a problem. However, if you're running individual competitions, this might be exactly what you want.

Difficulty Misalignment: Listen to feedback about challenge difficulty and compare it to your program's intended difficulty level.

Frustration Patterns: Watch for signs that participants are getting frustrated, such as people giving up quickly, negative comments about the program, or a drop in activity after particular challenges.
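
Frustration usually shows up in the data before it shows up in conversation. The sketch below reuses the hypothetical attempt-log format from earlier to flag challenges where most attempts end in abandonment; those are the first candidates for a difficulty review.

```python
from collections import defaultdict

def abandonment_hotspots(attempts, threshold=0.6, min_attempts=5):
    """Challenges where at least `threshold` of attempts were abandoned,
    ignoring challenges with too few attempts to judge."""
    started, abandoned = defaultdict(int), defaultdict(int)
    for row in attempts:
        started[row["challenge"]] += 1
        if not row["solved_at"]:
            abandoned[row["challenge"]] += 1
    rates = {
        c: abandoned[c] / started[c]
        for c in started
        if started[c] >= min_attempts
    }
    return sorted(
        ((rate, c) for c, rate in rates.items() if rate >= threshold),
        reverse=True,
    )

# Example rows in the same hypothetical format as the earlier attempt log
attempts = [
    {"challenge": "web-01", "solved_at": ""},
    {"challenge": "web-01", "solved_at": ""},
    {"challenge": "web-01", "solved_at": "2024-05-01 10:30"},
    {"challenge": "web-01", "solved_at": ""},
    {"challenge": "web-01", "solved_at": ""},
    {"challenge": "crypto-02", "solved_at": "2024-05-01 11:00"},
]
print(abandonment_hotspots(attempts))  # [(0.8, 'web-01')]
```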

Simple Evaluation Framework

Monthly Review Process

  1. Numbers Check: Review active participants, completion rates, and feedback scores
  2. Content Review: Identify challenges that consistently cause problems
  3. Participant Check: Touch base with regular participants and recent dropouts
  4. Adjustment Planning: Pick 1-2 things to change for next month

Focus on trends rather than individual data points.
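
One way to keep the "trends, not data points" rule honest is to compare each month against a short moving average instead of only the previous month. A minimal sketch (the metric values are illustrative):

```python
def trend_direction(monthly_values, window=3, tolerance=0.02):
    """Compare the latest month against the average of the preceding `window`
    months, treating moves smaller than `tolerance` (2%) as noise."""
    if len(monthly_values) <= window:
        return "not enough history"
    baseline = sum(monthly_values[-window - 1:-1]) / window
    change = (monthly_values[-1] - baseline) / baseline
    if change > tolerance:
        return "improving"
    if change < -tolerance:
        return "declining"
    return "flat"

# Hypothetical monthly completion rates
print(trend_direction([0.55, 0.58, 0.61, 0.60, 0.52]))  # declining
```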

Quarterly Assessment

  1. Skill Progression: Re-test participants on practical skills
  2. Program Feedback: Deeper dive into what's working and what isn't
  3. Content Updates: Refresh challenges based on feedback
  4. Success Stories: Document concrete examples of participant improvement

Use quarterly assessments to make bigger changes. Monthly reviews handle small adjustments; quarterly reviews address structural issues.

Making It Sustainable

Don't Overcomplicate It

Pick 3-5 metrics that actually matter to your goals and stick with them. You can always add more later, but starting with too many measurements will burn you out.

Use What You Have

Most organisations already have tools that can help. Your learning management system probably tracks completion rates. Your calendar system shows attendance.

Don't buy new tools until you've exhausted what you already have.

Automate the Boring Stuff

Set up automatic reports for attendance and completion rates. Utilise Google Forms or similar tools for collecting quick feedback. The less manual work you do, the more likely you'll actually keep doing it.
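
As a concrete example, the sketch below writes one summary row per run from the same hypothetical attempt-log export used earlier; scheduled via cron (or any task scheduler), it keeps attendance and completion numbers current with no manual effort. The file names and column layout are assumptions.

```python
# weekly_report.py -- a minimal sketch; schedule it so the numbers update themselves,
# e.g. a crontab entry like: 0 9 * * 1 python3 weekly_report.py
import csv
from datetime import date

def weekly_summary(attempts_path, out_path):
    started, solved, participants = 0, 0, set()
    with open(attempts_path, newline="") as f:
        for row in csv.DictReader(f):
            started += 1
            participants.add(row["participant"])
            if row["solved_at"]:
                solved += 1
    with open(out_path, "a", newline="") as f:
        csv.writer(f).writerow([
            date.today().isoformat(),       # report date
            len(participants),              # active participants seen in the log
            started,                        # attempts started
            f"{solved / started:.0%}" if started else "n/a",  # completion rate
        ])

if __name__ == "__main__":
    weekly_summary("ctf_attempts.csv", "weekly_summary.csv")  # hypothetical paths
```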

Common Pitfalls to Avoid

Over-measuring: Don't track everything. Pick metrics that inform decisions.

Ignoring Context: A 50% completion rate means different things in different programs.

Analysis Paralysis: Don't wait for perfect data. Start with what you have and improve over time.

Comparing Programs: Your beginner program shouldn't have the same metrics as an expert competition.

Bottom Line

You don’t need a data science lab to prove your CTF’s worth, just a handful of meaningful metrics that show learners are growing, staying engaged, and mastering the skills you set out to teach. Focus on simple, repeatable measures and let the insights drive continuous improvement.

Whether you build your own challenges or partner with a purpose-built platform, the principle is the same: keep evaluation lean, use every data point to refine the experience, and tell a clear story of impact. When you can articulate your program’s success in five minutes or less, highlighting skill gains, problem-solving growth, and positive participant feedback, you’ll secure both buy-in and budget for the long haul.

Ultimately, a streamlined evaluation framework does more than justify spend; it fuels a cycle of better challenges, stronger skills, and a more resilient security culture.
