
Mapping MITRE ATT&CK to Detection Engineering Sprints

Turning ATT&CK from a static coverage spreadsheet into a living detection hypothesis engine.

July 12, 2025
7 min read
Threat Engineering Unit

Problem: Static Coverage Decay

Many security teams use the MITRE ATT&CK framework to track their detection coverage, often by coloring in a matrix. While this can be a useful starting point, it creates a false sense of confidence. The reality is that these static coverage snapshots become stale within weeks. The threat landscape is constantly evolving, with new techniques emerging and existing ones being modified. At the same time, the organization's own environment is drifting, with new systems and applications being deployed. This combination of threat evolution and environment drift means that a static coverage matrix is a poor measure of an organization's true defensive posture. A more dynamic approach is detailed in our [Detection Engineering Playbook](/resources/blog/detection-engineering-playbook).

Hypothesis-Driven Alignment

A more effective approach is to use the ATT&CK framework as a living detection hypothesis engine. For each prioritized technique, the team should spawn a detection hypothesis. This hypothesis should include a clear set of success criteria, such as the required signal source, the expected false positive rate, and the target latency for the detection. This hypothesis-driven approach ensures that the detection engineering process is focused, measurable, and aligned with the organization's risk priorities.
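A hypothesis record like this can be captured as a small data structure. The sketch below is one possible shape, not a prescribed schema; the field names and the example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectionHypothesis:
    technique_id: str          # ATT&CK technique, e.g. "T1055"
    signal_source: str         # required telemetry, e.g. a Sysmon event class
    max_fp_rate: float         # acceptable false-positive rate (0..1)
    max_latency_seconds: int   # target time from event to alert

    def meets_criteria(self, observed_fp_rate: float, observed_latency: int) -> bool:
        """A hypothesis is validated only if both success criteria hold."""
        return (observed_fp_rate <= self.max_fp_rate
                and observed_latency <= self.max_latency_seconds)

# Example: a process-injection hypothesis with a 2% FP budget and 5-minute latency target.
h = DetectionHypothesis("T1055", "sysmon:event_id=8", 0.02, 300)
```

Making the success criteria fields of the record, rather than prose in a ticket, means validation at the end of the sprint is a mechanical check instead of a judgment call.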

Selection Criteria

With hundreds of techniques in the ATT&CK framework, prioritization is key. The selection of which techniques to focus on should be based on a combination of factors. The frequency of the technique in the organization's local threat model is a primary consideration. How much the technique accelerates privilege escalation is another important factor. Finally, the team should consider the existing visibility gap for the technique. By focusing on techniques that are relevant, impactful, and for which there is a clear gap in coverage, the team can ensure that its efforts have the greatest possible impact on risk reduction. This is a core activity of our [Purple Team](/resources/blog/purple-team-collaborative-uplift) exercises.
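One way to make these factors operational is a weighted score per technique. The weights and field names below are illustrative assumptions; each input is assumed to be normalized to the 0..1 range by whatever local process produces it.

```python
def priority_score(technique: dict,
                   w_freq: float = 0.4,
                   w_impact: float = 0.3,
                   w_gap: float = 0.3) -> float:
    """Combine local frequency, privilege-escalation impact, and visibility gap.

    Weights are assumptions to be tuned against the local threat model.
    """
    return (w_freq * technique["local_frequency"]
            + w_impact * technique["privilege_impact"]
            + w_gap * technique["visibility_gap"])

# Hypothetical inputs: values would come from threat intel and coverage review.
techniques = [
    {"id": "T1059", "local_frequency": 0.9, "privilege_impact": 0.4, "visibility_gap": 0.2},
    {"id": "T1055", "local_frequency": 0.5, "privilege_impact": 0.9, "visibility_gap": 0.8},
]
ranked = sorted(techniques, key=priority_score, reverse=True)
```

Note that a frequent but well-covered technique can rank below a rarer one with a large visibility gap, which is exactly the behavior a gap-driven program wants.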

Detection Sprint Structure

To maintain momentum and ensure a steady stream of new detections, the team should adopt a structured sprint process. A two-week iteration is a good starting point. Day one should be focused on hypothesis refinement. Days two through six should be dedicated to telemetry enrichment and the drafting of the analytic. Days seven through nine should be used for validation and false positive tuning. The sprint should conclude on day ten with a retrospective and the promotion of the new detection to production. This structured approach ensures that the team is continuously delivering value and that the detection engineering process is both efficient and effective.
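The two-week cadence above can be encoded so that tooling (ticket templates, standup bots) always agrees on which phase a sprint day belongs to. This is a minimal sketch of that mapping, assuming a ten-working-day iteration as described.

```python
def sprint_phase(day: int) -> str:
    """Map a working day (1-10) of the detection sprint to its phase."""
    if day == 1:
        return "hypothesis refinement"
    if 2 <= day <= 6:
        return "telemetry enrichment and analytic drafting"
    if 7 <= day <= 9:
        return "validation and false-positive tuning"
    if day == 10:
        return "retrospective and promotion to production"
    raise ValueError("day must be between 1 and 10")
```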

Scoring & Retirement

Not all detections are created equal. To manage the detection portfolio effectively, each detection should be scored based on its fidelity, its maintainability cost, and its contribution to adversary friction. This scoring system provides a basis for prioritizing which detections to improve and which to retire. Low-impact, high-noise analytics should be retired quickly to reduce the burden on the SOC and improve the overall signal-to-noise ratio of the detection system.
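A portfolio score along these lines can be a simple weighted combination in which maintainability cost counts against the detection. The weights and the retirement threshold below are illustrative assumptions, not recommended values.

```python
RETIRE_BELOW = 0.25  # assumed threshold; tune against SOC alert-handling capacity

def detection_score(fidelity: float, maintenance_cost: float,
                    adversary_friction: float,
                    weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Score a detection on 0..1 inputs; maintenance cost is a penalty."""
    w_fid, w_cost, w_friction = weights
    return w_fid * fidelity - w_cost * maintenance_cost + w_friction * adversary_friction

def should_retire(det: dict) -> bool:
    """Flag low-impact, high-noise analytics for retirement."""
    return detection_score(det["fidelity"], det["cost"], det["friction"]) < RETIRE_BELOW

# A noisy, low-fidelity analytic scores negatively and is flagged; a solid one is kept.
noisy = {"fidelity": 0.2, "cost": 0.8, "friction": 0.1}
solid = {"fidelity": 0.9, "cost": 0.2, "friction": 0.7}
```

Running this scoring pass at the end of each sprint turns retirement from an occasional cleanup into a routine portfolio decision.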

Automation & Regression

To guard against the silent drift of detection coverage, it is essential to automate regression testing. This involves replaying atomic or custom simulated events for each technique as part of every CI/CD cycle. This ensures that any changes to the environment or the detection logic itself do not inadvertently break existing detections. This automated approach to regression testing is far more effective than manual testing and is essential for maintaining a high level of detection fidelity over time.
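A regression harness of this kind can be sketched as a loop over a replay corpus: for each technique, a canned simulated event is evaluated against its analytic, and any analytic that no longer fires is reported. The matcher, the corpus shape, and the field names here are hypothetical stand-ins for a real detection engine and simulation framework.

```python
def evaluate(analytic: dict, event: dict) -> bool:
    """Minimal matcher: every field in the analytic's pattern must equal the event's.

    A real engine (Sigma, EQL, a SIEM query language) would replace this.
    """
    return all(event.get(key) == value for key, value in analytic["pattern"].items())

def run_regression(corpus: list) -> list:
    """Replay each simulated event; return the techniques whose detections regressed."""
    return [case["technique"] for case in corpus
            if not evaluate(case["analytic"], case["event"])]

# Hypothetical corpus entry: one simulated event per prioritized technique.
corpus = [
    {"technique": "T1059.001",
     "analytic": {"pattern": {"process": "powershell.exe", "encoded": True}},
     "event": {"process": "powershell.exe", "encoded": True, "user": "svc_backup"}},
]
```

Wiring `run_regression` into the CI/CD pipeline and failing the build on a non-empty result is what catches a detection broken by an unrelated logging or schema change before it reaches production.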

Metrics

To measure the effectiveness of the ATT&CK mapping program, the team should track a set of key metrics. These should include the delta in technique coverage per quarter, the median cycle time for a detection hypothesis, the ratio of new detections to retired detections, and the compression in lateral movement dwell time. These metrics provide a clear, data-driven view of the program's performance and its impact on the organization's ability to detect and respond to threats.
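Most of these metrics roll up naturally from per-sprint records. The sketch below assumes a simple record shape per sprint; the field names are illustrative, and dwell-time compression is omitted because it typically comes from incident data rather than sprint records.

```python
import statistics

def program_metrics(sprints: list) -> dict:
    """Roll up coverage delta, median hypothesis cycle time, and new:retired ratio."""
    new = sum(s["detections_added"] for s in sprints)
    retired = sum(s["detections_retired"] for s in sprints)
    cycle_days = [d for s in sprints for d in s["hypothesis_cycle_days"]]
    return {
        "coverage_delta": sprints[-1]["techniques_covered"] - sprints[0]["techniques_covered"],
        "median_cycle_days": statistics.median(cycle_days),
        "new_to_retired_ratio": new / retired if retired else float("inf"),
    }

# Hypothetical quarter: two sprints' worth of records.
sprints = [
    {"techniques_covered": 40, "detections_added": 3, "detections_retired": 1,
     "hypothesis_cycle_days": [8, 10, 12]},
    {"techniques_covered": 47, "detections_added": 4, "detections_retired": 2,
     "hypothesis_cycle_days": [7, 9]},
]
```

Tracking the ratio of new to retired detections alongside coverage delta keeps the program honest: coverage that only ever grows usually means noisy analytics are accumulating rather than being pruned.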

Sources & Further Reading

MITRE ATT&CK (enterprise matrix).

MITRE CTI blog (technique evolution updates).

CISA Known Exploited Vulnerabilities Catalog (technique prioritization input).

Key Takeaways

Dynamic hypothesis loop keeps coverage relevant and measurable.

Retiring noisy analytics is as critical as adding new ones.