Intelligence-Driven Detection Engineering: From Threat Intel to Detection-as-Code (with the Pyramid of Pain & DML)

In cybersecurity, one of the most important questions organizations ask is often the wrong one. It’s not “Do we have Threat Intelligence?” — but rather “How effectively can we operationalize that intelligence into detection and response?”
This is the core challenge of Intelligence-Driven Detection Engineering.
For years, many Security Operations Centers (SOCs) have relied heavily on Indicators of Compromise (IOCs) such as hashes, IP addresses, and domains. While these indicators can catch yesterday’s attacks, they are often short-lived and trivial for adversaries to change. A hash can be regenerated by simply recompiling or changing a single character, a domain can be replaced using Domain Generation Algorithms (DGAs), and an IP can be discarded in seconds using Fast Flux DNS. As a result, SOCs stuck at this level of detection are constantly playing catch-up.
To move beyond this reactive state, two frameworks have reshaped how defenders think about detection maturity:
The Pyramid of Pain

Developed by David Bianco in 2013, the Pyramid of Pain illustrates how different types of indicators affect adversaries. At the bottom are atomic IOCs like hashes and IPs — easy to swap out and causing minimal disruption. As we move up the pyramid, however, detections target tools, techniques, and behaviors. At this level, forcing an adversary to change tactics or tools introduces real “pain” because it requires time, resources, and expertise to adapt.
You can explore the original post here: The Pyramid of Pain.
The Detection Maturity Level (DML) Model

Proposed by Ryan Stillions in 2014, the DML model provides a roadmap to measure the maturity of detection capabilities.
- At the lower levels (DML-0/DML-1), detection depends on atomic IOCs.
- Mid-levels (DML-2 to DML-5) involve spotting host or network artifacts, tools, procedures, or attack techniques.
- Higher levels (DML-6 to DML-8) shift detection to tactics, strategies, and ultimately adversary objectives — answering not just “what happened?” but “who is behind it and why?”
For a deeper dive, check the original article here: The DML Model.
In this article, we’ll connect these two frameworks and show how SOCs can:
- Translate Threat Intelligence into actionable detections that climb higher up the Pyramid of Pain.
- Use the DML model as a roadmap for detection maturity, evolving from chasing indicators to understanding adversary goals.
- Walk through a practical case, Sandworm’s 2022 attack on Ukraine’s power grid, and map it into actionable detections using the DML model.
- Apply practical methods like Detection as Code, Purple Teaming, and adversary emulation to build resilient detections that survive even in the face of 0-days.
Because ultimately, SOC success isn’t defined by how many alerts you generate — but by how deeply you understand adversaries, anticipate their behaviors, and design detections that truly matter.

But don’t get this wrong: yes, we block IPs when needed. But the real game is moving past the small battles and focusing on the bigger war — understanding the motives behind the attacks.
The Problem with IOC-Driven Detection
For many SOCs, the default approach to detection is collecting and deploying feeds of Indicators of Compromise (IOCs): hashes of malware samples, suspicious IP addresses, newly registered domains, or email subject lines linked to phishing campaigns.
At first glance, this looks effective. You’re feeding your SIEM with fresh data, creating alerts, and responding quickly. But there’s a catch:
- IOCs are short-lived. A single hash is valid only for one file sample; a trivial recompilation by the adversary changes it instantly. Domains and IP addresses can be cycled or abandoned in minutes thanks to automation.
- They are easy to evade. Attackers know defenders rely on IOC feeds, so they deliberately design infrastructure and tooling that can rotate fast. Threat groups operating ransomware-as-a-service, for example, frequently regenerate hashes and domains, ensuring yesterday’s detection is obsolete today.
- They keep defenders reactive. Instead of anticipating or disrupting adversary behaviors, the SOC ends up in an endless loop of “update feeds → chase alerts → repeat.”
This is exactly what the Pyramid of Pain highlights. At its base, you find these atomic indicators — hashes, IPs, domains. Detecting them adds almost no cost to the attacker. They simply pivot to fresh ones. From a defender’s perspective, that means wasted cycles, high alert fatigue, and a false sense of progress.
The Detection Maturity Level (DML) model aligns with this view: at
DML-0/DML-1, you’re detecting nothing more than these easily replaced artifacts. While better than having no detections at all, it’s a fragile approach that cannot stand against determined adversaries.
In practice, many SOCs plateau here. They deploy commercial feeds, rely on blocklists, and measure “coverage” by counting how many IOCs they ingest. But this doesn’t equate to resilience. It’s like locking your doors while leaving the windows wide open: you’re safe only until the adversary chooses to bypass the obvious path.
Moving beyond this stage requires a shift: from detecting things adversaries use once, to detecting how adversaries operate consistently. And that’s where climbing the Pyramid of Pain and advancing through the DML model becomes essential.
Moving Up the Pyramid & DML
If IOC-driven detection keeps us trapped at the bottom, the way forward is to climb higher — up the Pyramid of Pain and along the DML roadmap.
At these mid-levels, we stop chasing what adversaries throw away quickly and start detecting what adversaries actually rely on.
Detecting Host & Network Artifacts (DML-2)
Instead of matching a single hash, we focus on the artifacts an attacker leaves behind on hosts and networks.
- Example: A webshell isn’t just a file with a hash — it’s a tool with distinct capabilities (command execution, file upload, persistence, as well as network traffic noise). Detecting its behavior is far harder for the adversary to evade.
- Another example: a malware loader leaves artifacts by adding a program to a startup folder or referencing it with a Registry run key under
HKCU\Software\Microsoft\Windows\CurrentVersion\Run
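A detection for this artifact can be written in the same Sigma format used later in this article. The sketch below is illustrative only: the field names assume a Sysmon-style registry_set log source, and the filter is a placeholder to keep installer noise down.

```yaml
title: Program Referenced From HKCU Run Key
status: experimental
description: Illustrative sketch - flags new autorun values written under the user Run key
logsource:
  product: windows
  category: registry_set
detection:
  selection:
    TargetObject|contains: '\Software\Microsoft\Windows\CurrentVersion\Run'
  filter_installers:
    Image|startswith: 'C:\Program Files'
  condition: selection and not filter_installers
falsepositives:
  - Legitimate software registering itself at login
level: medium
```

Because the rule targets the persistence artifact rather than any one loader, it keeps firing even when the malware binary (and its hash) changes.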
Detecting Tools (DML-3)
Instead of focusing on atomic indicators, detection at this stage aims at the tools adversaries use.
- Example: Detecting the presence of Cobalt Strike beacons, BloodHound for AD enumeration, Mimikatz usage, or custom RATs — not by hash, but by command-line syntax, mutex names, LDAP queries sent through the network, or network beaconing patterns.
- This is more painful for attackers: swapping tools requires new training, infrastructure, and in some cases, money.
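One of those network beaconing patterns can be approximated in a few lines: automated C2 check-ins tend to have very regular inter-connection timing, while human-driven traffic does not. Below is a minimal, stdlib-only Python sketch; the threshold values in the comments are illustrative assumptions, not tuned detections.

```python
from statistics import mean, stdev

def beacon_score(timestamps):
    """Score how 'beacon-like' a series of connection timestamps (in
    seconds) is. Returns the coefficient of variation of the
    inter-arrival times: values near 0 suggest machine-regular
    check-ins; large values suggest irregular, human-like traffic."""
    if len(timestamps) < 3:
        return None  # not enough samples to judge
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(deltas)
    if avg == 0:
        return 0.0
    return stdev(deltas) / avg

# A host checking in roughly every 60 seconds scores near zero,
# while bursty, irregular traffic scores far higher.
regular = [0, 60, 121, 180, 241, 300]
irregular = [0, 5, 140, 160, 600, 610]
print(beacon_score(regular))    # near zero: beacon-like
print(beacon_score(irregular))  # well above 1: irregular
```

Real implementations (e.g., in NDR products) add sleep-jitter tolerance and data-size analysis, but the underlying idea is the same: the behavior, not the IP, is the signal.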

Detecting Procedures & Techniques (DML-4 / DML-5)
As we move higher, detection shifts from individual tools to procedures (DML-4) and techniques (DML-5).
- Detecting rundll32 being abused to execute DLL payloads (MITRE ATT&CK T1218) is a technique-level detection.
- Detecting a sequence such as: phishing email → malicious macro → credential dumping → lateral movement is procedural detection.
At this stage, even if the adversary swaps malware families or changes infrastructure, the behavioral pattern remains. That’s why detecting techniques and procedures causes much more pain: it forces adversaries to redesign playbooks, not just recompile binaries.
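A procedural detection like the chain above can be sketched as an ordered-sequence correlation. This is a simplified illustration: the event names are hypothetical, and a production rule would also handle sequence restarts and per-host grouping.

```python
from datetime import datetime, timedelta

# Hypothetical event stream: (timestamp, event_type) tuples, already
# filtered to a single host. Names are illustrative, not a SIEM schema.
CHAIN = ["phishing_attachment_opened", "office_spawned_script",
         "credential_dump", "lateral_movement"]

def matches_chain(events, chain=CHAIN, window=timedelta(hours=2)):
    """Return True if the events contain the chain's steps in order,
    all within the time window. The pattern survives even if every
    individual tool in the chain is swapped out."""
    step = 0
    first_ts = None
    for ts, etype in sorted(events):
        if etype == chain[step]:
            first_ts = first_ts or ts
            if ts - first_ts > window:
                return False  # simplification: no restart logic
            step += 1
            if step == len(chain):
                return True
    return False

t0 = datetime(2022, 10, 1, 9, 0)
hit = [(t0, "phishing_attachment_opened"),
       (t0 + timedelta(minutes=5), "office_spawned_script"),
       (t0 + timedelta(minutes=40), "credential_dump"),
       (t0 + timedelta(minutes=90), "lateral_movement")]
print(matches_chain(hit))  # all four steps occur in order within 2h
```

In practice this logic lives in a SIEM correlation search rather than standalone code, but the shape is identical: match the sequence, not the samples.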

From Techniques to Tactics (DML-6)
Finally, at the upper technical levels, detection reaches tactics themselves: the why behind the activity.
- Example: Detecting attempts at persistence, regardless of method (registry autorun, scheduled tasks, services).
- Example: Identifying lateral movement attempts, whether by PsExec, WMI, or remote services.
At this point, adversaries face significant friction. No matter how they shuffle infrastructure, swap malware, or change initial access vectors, their fundamental objectives betray them.
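The tactic-level view can be sketched by collapsing method-specific events into one Persistence alert, so the detection survives technique swaps. The event categories below are hypothetical placeholders; the MITRE ATT&CK technique IDs are real.

```python
# Map many persistence *methods* onto the single Persistence *tactic*.
# Category names are illustrative; technique IDs are real ATT&CK IDs.
PERSISTENCE_METHODS = {
    "registry_run_key_set": "T1547.001",   # Run keys / startup folder
    "startup_folder_write": "T1547.001",
    "scheduled_task_created": "T1053.005", # Scheduled Task
    "service_installed": "T1543.003",      # Windows Service
}

def persistence_alerts(events):
    """Collapse method-specific events into tactic-level alerts, so the
    alert fires no matter which persistence mechanism was chosen."""
    return [
        {"tactic": "Persistence",
         "technique": PERSISTENCE_METHODS[e["category"]],
         "host": e["host"]}
        for e in events if e["category"] in PERSISTENCE_METHODS
    ]

evts = [{"host": "ws01", "category": "scheduled_task_created"},
        {"host": "ws01", "category": "dns_query"}]
print(persistence_alerts(evts))
# [{'tactic': 'Persistence', 'technique': 'T1053.005', 'host': 'ws01'}]
```

The attacker can switch from a Run key to a service to a scheduled task; the tactic-level alert is indifferent, which is exactly the friction described above.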
The key insight: every step up the pyramid and DML makes detection harder to evade, but also more valuable to defenders. While IOCs vanish, behaviors persist. While infrastructure rotates, objectives stay constant. SOCs that recognize this shift stop running in circles and start building detections that truly matter.
Case Study: Sandworm’s 2022 Attack on Ukraine’s Power Grid
Campaign Overview
In late 2022, the Russian group Sandworm (Unit 74455) executed a disruptive cyberattack against a Ukrainian critical infrastructure provider. The attack unfolded in two coordinated waves:
- Operational Technology (OT) Disruption:
Sandworm infiltrated a SCADA environment that controlled substations. Instead of deploying custom malware like in previous campaigns (e.g., Industroyer), they abused a legitimate MicroSCADA binary (scilc.exe) to execute unauthorized SCIL commands. This caused remote terminal units (RTUs) to open circuit breakers and trigger a blackout.
- Information Technology (IT) Wiping:
In parallel, Sandworm deployed a new variant of CaddyWiper across the victim’s IT network using Group Policy Objects (GPOs) and scheduled tasks. This wiped Windows systems, deleted forensic artifacts, and slowed recovery efforts.
This dual attack showcased Sandworm’s evolution: leveraging Living-off-the-Land (LotL) techniques in OT, and destructive wipers in IT, to maximize both operational disruption and investigative delay.

Why This Case Matters for Detection Engineering
For defenders, this campaign highlights that:
- IOCs (IPs, hashes) are useful for quick blocking, but they expire fast.
- Artifacts (scripts, binaries, configs) expose how attackers abuse existing tools.
- Tactics & Techniques (MITRE ATT&CK) show what the attackers are doing, regardless of the exact tools.
- Intent (tactics-level detection) reveals why they are doing it — in this case, to cut power and blind responders.
By structuring detection across these layers, we can move up the Detection Maturity Level (DML) model, achieving more resilient and proactive defense.
Mapping to Detection Maturity Levels (DML)
| DML Level | Sandworm Campaign Examples | Detection Approach |
| --- | --- | --- |
| DML-1 (Atomic IOCs) | IPs (e.g., 190.2.145[.]24), hashes of Neo-REGEORG or CaddyWiper | IOC blocklists in firewalls, AV, EDR |
| DML-2 (Host & Network Artifacts) | a.iso (mounted image), cloud-online (malicious systemd service), malicious PHP webshells | File integrity monitoring, YARA for suspicious filenames, autorun policy alerts |
| DML-3 (Tools) | Abuse of scilc.exe with -do, use of wscript.exe to run lun.vbs, deployment of n.bat | YARA/Sigma rules targeting scilc.exe execution patterns |
| DML-4 (Procedures) | Attack chain: lun.vbs → n.bat → scilc.exe -do s1.txt | Correlation searches linking VBS → BAT → SCIL execution within a timeframe |
| DML-5 (Techniques) | MITRE ATT&CK: T0807 (CLI Execution), T0855 (Unauthorized Command Message), T0809 (Data Destruction) | Technique-level rules: abnormal SCIL command patterns, generic detection of wiping via scheduled GPO tasks |
| DML-6 (Tactics/Intent) | Combined OT disruption (breaker opening) + IT wiping (CaddyWiper) = deliberate attempt to sabotage grid & slow recovery | Correlation across domains: OT log anomalies + IT wipe detections in the same incident window trigger a high-severity alert |
Detection Examples
YARA RULE — DML-2 (Host & Network Artifacts):
- This rule looks for specific strings and filenames that indicate the presence of a Systemd configuration file. This file is an artifact left on the system for persistence, and it is directly associated with the GOGETTER malware.
```yara
rule M_Hunting_GOGETTER_SystemdConfiguration_1
{
    meta:
        author = "Mandiant"
        description = "Searching for Systemd Unit Configuration Files but with some known filenames observed with GOGETTER"
        disclaimer = "This rule is for hunting purposes only and has not been tested to run in a production environment."
    strings:
        $a1 = "[Install]" ascii fullword
        $a2 = "[Service]" ascii fullword
        $a3 = "[Unit]" ascii fullword
        $v1 = "Description=" ascii
        $v2 = "ExecStart=" ascii
        $v3 = "Restart=" ascii
        $v4 = "RestartSec=" ascii
        $v5 = "WantedBy=" ascii
        $f1 = "fail2ban-settings" ascii fullword
        $f2 = "system-sockets" ascii fullword
        $f3 = "oratredb" ascii fullword
        $f4 = "cloud-online" ascii fullword
    condition:
        filesize < 1MB and (3 of ($a*)) and (3 of ($v*)) and (1 of ($f*))
}
```
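To make the condition concrete, here is a stdlib-only Python approximation of how that rule evaluates against a file's bytes. It is simplified: it ignores YARA's fullword word-boundary semantics, so real YARA output may differ on edge cases.

```python
# String sets mirror the rule's $a*, $v*, and $f* groups.
A = [b"[Install]", b"[Service]", b"[Unit]"]
V = [b"Description=", b"ExecStart=", b"Restart=", b"RestartSec=", b"WantedBy="]
F = [b"fail2ban-settings", b"system-sockets", b"oratredb", b"cloud-online"]

def matches(data: bytes) -> bool:
    """Approximate the rule's condition: filesize under 1MB, all three
    unit-file section headers, at least 3 of the option keys, and at
    least one known GOGETTER filename."""
    if len(data) >= 1024 * 1024:
        return False
    return (sum(s in data for s in A) >= 3
            and sum(s in data for s in V) >= 3
            and any(s in data for s in F))

unit_file = (b"[Unit]\nDescription=cloud-online sync\n"
             b"[Service]\nExecStart=/usr/bin/cloud-online\nRestart=always\n"
             b"[Install]\nWantedBy=multi-user.target\n")
print(matches(unit_file))  # True: 3 sections, 4 option keys, known name
```

Note how the rule needs the combination of generic systemd structure plus a campaign-specific filename: either half alone would be far too noisy or far too brittle.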
YARA Rule — DML-3 (Tool-level detection)
This rule looks for specific strings that identify a particular tool (scilc.exe). It's a signature for the tool itself.
```yara
rule M_Methodology_MicroSCADA_SCILC_Strings
{
    meta:
        author = "Mandiant"
        date = "2023-02-13"
        description = "Searching for files containing strings associated with the MicroSCADA Supervisory Control Implementation Language (SCIL) scilc.exe binary."
        disclaimer = "This rule is for hunting purposes only and has not been tested to run in a production environment."
    strings:
        $s1 = "scilc.exe" ascii wide
        $s2 = "Scilc.exe" ascii wide
        $s3 = "SCILC.exe" ascii wide
        $s4 = "SCILC.EXE" ascii wide
    condition:
        filesize < 1MB and any of them
}
```
DML-4 (Procedures — Specific Malicious Behaviors)
This rule is designed to catch a specific procedure — using a VBScript to launch a batch file. It’s a defined sequence of steps to achieve a malicious outcome.
```yara
rule M_Hunting_VBS_Batch_Launcher_Strings
{
    meta:
        author = "Mandiant"
        date = "2023-02-13"
        description = "Searching for VBS files used to launch a batch script."
        disclaimer = "This rule is for hunting purposes only and has not been tested to run in a production environment."
    strings:
        $s1 = "CreateObject(\"WScript.Shell\")" ascii
        $s2 = "WshShell.Run chr(34) &" ascii
        $s3 = "& Chr(34), 0" ascii
        $s4 = "Set WshShell = Nothing" ascii
        $s5 = ".bat" ascii
    condition:
        filesize < 400 and all of them
}
```
DML-5 (Techniques — Execution Style, Behavioral Detection)
This rule is written to detect a specific technique — using a command interpreter for execution. It’s a higher-level detection because it focuses on a repeatable attacker behavior (like T1059 in the MITRE ATT&CK framework) rather than just a specific file.
```yaml
title: MicroSCADA SCILC Command Execution
description: Identification of Events or Host Commands that are related to the MicroSCADA SCILC programming language and specifically command execution
author: Mandiant
date: 2023/02/27
logsource:
  product: windows
  service: security
detection:
  selection:
    NewProcessName|endswith:
      - '\scilc.exe'
    CommandLine|contains:
      - '-do'
  condition: selection
falsepositives:
  - Red Team
level: high
tags:
  - attack.execution
  - attack.t1059
```
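The selection logic of that Sigma rule can be replicated in a few lines of Python for unit testing, a common Detection-as-Code pattern (in production, a Sigma backend would compile the rule into a SIEM query instead of hand-rolled code like this).

```python
def sigma_selection_matches(event: dict) -> bool:
    """Hand-rolled equivalent of the rule's selection: NewProcessName
    ends with '\\scilc.exe' AND CommandLine contains '-do'."""
    return (event.get("NewProcessName", "").lower().endswith("\\scilc.exe")
            and "-do" in event.get("CommandLine", ""))

# Event shaped like a Windows Security process-creation log entry;
# the path mirrors the command line observed in the campaign.
evt = {"NewProcessName": r"C:\sc\prog\exec\scilc.exe",
       "CommandLine": r"scilc.exe -do pack\scil\s1.txt"}
print(sigma_selection_matches(evt))  # True
```

Keeping a test like this next to the rule lets a CI pipeline catch regressions whenever the rule's fields or modifiers are edited.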
Lessons Learned
- LotL in OT is dangerous: Attackers no longer need custom malware if they can repurpose vendor binaries.
- Wipers remain part of GRU playbook: CaddyWiper disrupted IT to delay incident response.
- Defenders need layered detection: From IOCs → artifacts → TTPs → intent.
- DML helps structure progress: Each level builds resilience, reducing attacker advantage.
Conclusion:
The Sandworm 2022 Ukraine power grid attack shows how adversaries are evolving toward stealthy LotL in OT and destructive actions in IT. By mapping this campaign into the DML model, defenders can design detection engineering strategies that evolve from IOC-based alerts into high-fidelity intent-level detections. YARA and Sigma rules, combined with MITRE ATT&CK mappings, provide the foundation for Detection-as-Code, enabling both Blue and Purple Teams to test, validate, and strengthen security operations against state-sponsored threats.
Reference to the full details about the attack
Detection as Code
Detection as Code (DaC) transforms traditional static alerting into a software development lifecycle for detection. Rules (SIGMA, YARA, or custom scripts) and response playbooks are managed programmatically, allowing for:
- Version control: Every change is tracked and can be rolled back.
- Automated testing: Detections can be validated against sample artifacts or adversary emulation frameworks.
- Collaboration: Teams can review and improve detections collectively, similar to software code reviews.

For the Sandworm SCILC campaign, we can apply DaC principles:
- YARA Rules: Stored in Git and versioned; e.g., M_Hunting_MicroSCADA_SCILC_Program_Execution_Strings detects execution of scilc.exe -do.
- SIGMA Rules: Implemented for Windows event logs to identify process launches of SCILC binaries (M_YARAL_Methodology_ProcessExec_SCILC_Do_1).
- Automation: Upon rule update, CI/CD pipelines lint the detection code, run tests against historical logs or sample artifacts, and deploy rules to endpoint monitoring systems.
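Such a pipeline might look like the following GitHub Actions sketch. The job names, repository layout, and paths are hypothetical; sigma-cli and the yara/yarac binaries are real tools, but verify the exact commands against their current documentation before adopting this.

```yaml
# Illustrative CI workflow for a detection-rules repository
# (file layout and step details are assumptions, not a known setup).
name: detection-as-code
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint Sigma rules
        run: pip install sigma-cli && sigma check rules/sigma/
      - name: Compile YARA rules (syntax gate)
        run: sudo apt-get install -y yara && yarac rules/yara/*.yar /dev/null
      - name: Run YARA against sample artifacts
        run: yara -r rules/yara/*.yar tests/samples/
```

The point is the lifecycle, not the specific CI product: every rule change is linted, tested against known artifacts, and only then promoted to production sensors.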
Purple Teaming and Feedback Loops

Purple Teaming bridges offensive and defensive efforts. For example:
- Offensive emulation: Run Sandworm SCILC command sequences in a controlled environment, such as scilc.exe -do pack\scil\s1.txt, and VBS batch launches (lun.vbs).
- Defensive validation: Confirm that SIGMA and YARA rules fire alerts correctly and assess coverage gaps.
- Feedback loop: Modify rules based on evasive techniques used during emulation to reduce false negatives or false positives.
Example: Testing the YARA rule M_Hunting_VBS_Batch_Launcher_Strings can confirm that VBS scripts used to launch batch files are detected, allowing analysts to adjust detection thresholds or conditions.
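That test can itself be automated: generate a benign sample exhibiting the launcher behavior, then assert the rule's string and size conditions against it before any live emulation. A stdlib-only sketch (the sample VBS is harmless and hand-written for this purpose):

```python
# Strings required by M_Hunting_VBS_Batch_Launcher_Strings, as plain text.
REQUIRED = ['CreateObject("WScript.Shell")', 'WshShell.Run chr(34) &',
            '& Chr(34), 0', 'Set WshShell = Nothing', '.bat']

# Benign sample mimicking the launcher's structure (runs nothing here).
SAMPLE_VBS = (
    'Set WshShell = CreateObject("WScript.Shell")\n'
    'WshShell.Run chr(34) & "n.bat" & Chr(34), 0\n'
    'Set WshShell = Nothing\n'
)

def rule_would_fire(text: str) -> bool:
    # Mirrors the YARA condition: under 400 bytes and all strings present.
    return len(text.encode()) < 400 and all(s in text for s in REQUIRED)

print(rule_would_fire(SAMPLE_VBS))  # True
```

If a purple-team run later evades the rule (say, by swapping chr(34) for literal quotes), the evasive sample is added to this harness and the rule tightened, closing the feedback loop described above.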
Adversary Emulation

Frameworks such as Atomic Red Team, Caldera, and VECTR enable precise simulation of adversary tactics, techniques, and procedures (TTPs).
For the Sandworm campaign:
- Initial Access (T0847): Emulate removable media insertion with ISO images.
- Execution (T0807, T0871, T0853): Execute SCILC commands via scilc.exe -do and VBS scripts.
- Impact / Manipulation of Control (T0831, T0855): Emulate unauthorized SCIL commands to verify detection of command execution targeting RTUs.
Rules such as M_Methodology_MicroSCADA_SCILC_Strings and M_Methodology_MicroSCADA_Path_Strings can be validated in real time to ensure that artifacts, paths, and command executions are detected. This provides proof-of-effectiveness for the intelligence-driven detections.
This approach closes the loop: intelligence feeds detection engineering, which is continuously validated through emulation and purple team exercises, resulting in adaptive and resilient detection coverage.
Conclusion
Intelligence-driven detection represents the future of Security Operations Centers (SOC). By combining structured threat intelligence, adversary emulation, and automated detection engineering, SOC teams can move beyond reactive alerts to proactive defense.
The integration of the Pyramid of Pain, Purple Teaming, and Detection as Code creates a framework for resilient and adaptive detections. Analysts are not only able to detect today’s malware and adversary campaigns, such as Sandworm’s SCILC operations, but also design systems capable of anticipating and mitigating tomorrow’s zero-day attacks.
This approach ensures that detection is continuous, testable, and evolving, shifting the SOC from a reactive posture to a strategic, intelligence-driven defense capability.
Intelligence-Driven Detection Engineering: From Threat Intel to Detection-as-Code (with the Pyramid… was originally published in Detect FYI on Medium, where people are continuing the conversation by highlighting and responding to this story.