The Life-Dinner Principle in Detection

“The rabbit runs faster than the fox, because the rabbit is running for his life while the fox is only running for his dinner.” — paraphrase cited “after Aesop” by Richard Dawkins & John Krebs, “Arms Races Between and Within Species” (1979).

Cybersecurity has its own folk saying. You have heard it at conferences, on panels, in vendor decks, and in LinkedIn posts from CISOs who are “humbled” to announce something:

“The attacker only has to be right once. The defender has to be right every time.”

It sounds true. Sometimes it is true.

But as a general statement about cybersecurity, it is wrong.

Not grammatically wrong. Not useless in every context. There are attack classes where it describes the problem well enough. But it has been treated as a foundational law of security, and that is where the break occurs. It has become the sentence we use to justify endless alert volume, endless budget requests, endless analyst burnout, and endless fear.

For many important defender-attacker dynamics, the sentence is not just incomplete. It is backwards.

Dawkins and Krebs were interested in evolutionary arms races: how predators and prey escalate against one another, and why those contests do not simply spiral until one side wins completely.

A simple model says that if either side gets even a small advantage, the race should tip decisively. The fox gets faster, catches more hares, and the hares vanish. Or the hares get faster, escape consistently, and the foxes starve.

But that is not what happens. Foxes and hares have shared landscapes for millions of years. The fox still hunts. The hare is still in the meadow. Something else is going on.

Dawkins and Krebs point to an asymmetry in per-encounter stakes. The fox is running for dinner. If it loses, it misses a meal. That matters, but it is usually recoverable. It can hunt again.

The hare is running for its life. If it loses, the game is over. No second try. No recovery. No descendants.

The same race has different consequences depending on which animal you are.

That difference changes the selection pressure. Over many generations, the hare is pushed harder toward escape than the fox is pushed toward pursuit. The hare becomes, in a sense, over-invested in survival. And that over-investment is not waste. It is part of what keeps the system stable.

Cybersecurity usually talks as though the arms race is symmetric. The attacker gets infinite cheap shots. The defender is doomed to fail eventually. The attacker only needs one success. The defender needs perfection.

But real security programs do not operate in a symmetric world.

Some attacks are high-stakes for defenders. A missed detection can mean ransomware, exfiltration, regulatory exposure, customer churn, or an existential hit to the business. In those cases, yes, the defender is running for its life. Over-investment is rational.

Other attacks are the opposite. Commodity scanning, phishing spray, low-grade malware noise, and botnet traffic can cost attackers almost nothing to generate. If the defender spends real analyst time on every one of those events, the defender loses economically even when nothing bad happens.

That is the distinction we usually fail to make.

Some detections are life races.

Some detections are dinner races.

A mature security program needs to know which is which.

The sentence we keep paying for

“The attacker only has to be right once” is not just a cliché. It is a load-bearing cliché.

Vendor marketing depends on it. A dashboard showing “X threats blocked today” only works if you believe every single threat is equally meaningful. MSSP sales pitches depend on it too. “We cover everything” sounds impressive only if “everything” deserves the same treatment.

Conference panels love it because it produces solemn nodding. It sounds humble. It sounds serious. It makes the speaker look battle-tested.

It also lets everyone avoid harder questions.

If every alert might be the one, then every alert deserves attention.

If every attacker attempt has the same strategic weight, then prioritization becomes morally suspect.

If the defender must be right every time, then exhaustion starts to look like professionalism.

That is a comfortable worldview for vendors and a miserable one for analysts.

It is also not how asymmetry works.

The cyber arms race is not one race. It is thousands of races, each with different stakes, different costs, and different consequences. In some of them, the defender really is the rabbit. In others, the defender is the fox burning calories on a chase that was never worth taking.

We need to be able to tell those apart.

Where the rabbit lives

A rabbit-side detection is one where the defender’s per-encounter loss is severe enough to justify unusually deep investment.

These are the cases where missing the signal is expensive, damaging, or irreversible. The defender may face regulatory penalties, business disruption, loss of intellectual property, legal exposure, or long-term reputational damage.

Examples:

  • Targeted exfiltration of crown-jewel data
  • Compromise of executive or board-level communications
  • Ransomware in a business-critical environment
  • APT persistence in regulated infrastructure
  • Privileged access abuse in cloud or identity systems
  • Insider theft of material nonpublic information
  • Activity suggesting a rare or expensive exploit chain

In these cases, over-investment is not paranoia. It is appropriate.

If a CFO mailbox shows anomalous access from an unusual geography, that is not the same operational object as a random inbound scan against a public web server. One may represent material business risk. The other may be normal internet weather.

The problem is that many SOCs treat both as versions of the same thing: an alert.

They are not the same thing.

Rabbit-side detections deserve depth. They deserve careful investigation, higher tolerance for false positives, better context, longer dwell time, and more experienced analysts. They deserve escalation paths that do not depend on whether Tier 1 happens to be having a good shift.

In those cases, the organization is running for its life, or close enough that it should behave as if it is.

Where we are the fox

A fox-side detection is one where the attacker’s cost is close to zero, but the defender’s cost is real.

These are the alerts that drain security teams because they scale cheaply for the attacker and expensively for the defender.

Examples include:

  • Spray phishing
  • Commodity botnet traffic
  • Opportunistic vulnerability scanning
  • Low-tier malware callbacks
  • Repeated credential stuffing from known infrastructure
  • Generic exploit attempts against non-vulnerable systems
  • Internet background noise hitting exposed services

For the attacker, these events are cheap. Sometimes they are effectively free. A scanner runs. A botnet sprays. A phishing kit sends messages. The attacker does not care about any individual failure.

But if the defender pays analyst time per event, the economics are awful.

That is how organizations end up with burned-out analysts investigating garbage. Not because the analysts are bad. Not because the SOC is lazy. Because the system has confused cheap attacker noise with expensive defender risk.

Fox-side detections should not receive the same treatment as rabbit-side detections.

They should be aggregated, suppressed, enriched automatically, rate-limited, summarized, or converted into engineering work. The goal is not heroic investigation. The goal is to make the defender’s marginal cost approach zero.

If the attacker has automated the noise, the defender has to automate the response.

Otherwise, the defender is the fox sprinting after every rabbit in the field until it collapses.

The life-dinner score

Consider the simple solution of an additional metadata field:

```yaml
detection:
  name: T1078.004 - Cloud Account Anomalous Login (CFO mailbox)
  life_dinner: rabbit
  asymmetry_justification: |
    Crown-jewel SaaS login compromise. Defender per-encounter loss
    is regulatory disclosure, customer churn, and acquisition
    sensitivity (~8-figure exposure). Attacker per-encounter loss
    is one credential and one IP. Asymmetry: several orders of
    magnitude in defender's favour.
  treatment: Deep investigation per fire. No auto-close.
---
detection:
  name: Generic Inbound HTTP Vulnerability Scan
  life_dinner: fox
  asymmetry_justification: |
    Opportunistic. Attacker cost ~zero (open-source scanner from
    free-tier VPS). Defender investigation cost nontrivial if
    treated manually. Asymmetry: against defender.
  treatment: Aggregate, suppress to weekly summary, no analyst time per fire.
```

That one field forces the conversation security teams should already be having.

What are the stakes? Who pays per encounter? What does a miss cost? What does a false positive cost? Is this an investigation problem or an automation problem?

A detection should not ship unless someone can explain where it sits on the life-dinner line.
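One way to enforce that rule is a shipping gate in detection CI. A minimal sketch, assuming detections have already been parsed into dicts with the field names from the YAML example; the ten-word floor and the helper name are arbitrary choices for illustration.

```python
VALID_CLASSES = {"rabbit", "fox", "mixed"}

def validate_detection(det: dict) -> list[str]:
    """Return the reasons a detection may not ship; empty list means it can."""
    errors = []
    if det.get("life_dinner") not in VALID_CLASSES:
        errors.append("life_dinner must be one of: "
                      + ", ".join(sorted(VALID_CLASSES)))
    justification = det.get("asymmetry_justification", "")
    if len(justification.split()) < 10:  # arbitrary floor: force a real answer
        errors.append("asymmetry_justification too thin to review")
    return errors

ok = {"name": "CFO mailbox anomalous login",
      "life_dinner": "rabbit",
      "asymmetry_justification": ("Crown-jewel SaaS compromise; defender loss "
                                  "is regulatory and reputational, attacker "
                                  "loss is one credential and one IP.")}
bad = {"name": "Generic HTTP scan"}  # no classification, no justification

print(validate_detection(ok))   # []
print(validate_detection(bad))  # two errors: missing class, missing reasoning
```

Wire a check like this into the pipeline that deploys rules, and "where does this sit on the life-dinner line?" stops being optional.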

That classification should affect everything downstream:

  • Queue design
  • Triage expectations
  • Escalation paths
  • False positive tolerance
  • False negative tolerance
  • Metrics
  • Staffing
  • Vendor evaluation
  • Automation strategy

A rabbit-side queue should be built for depth. The analysts working it need patience, context, and investigative skill. A single alert may deserve hours or days.

A fox-side queue should be built for throughput. The people working it should think like engineers. Their job is not to lovingly investigate every piece of internet trash. Their job is to make sure nobody has to.

Most SOCs combine those two jobs and then wonder why both are done badly.

They hire investigators and bury them in noise, or they hire “throughput operators” and ask them to reason about subtle intrusions.

Those are different jobs. They require different temperaments, different tooling, and different success metrics.

Where this analogy breaks

The fox-and-rabbit analogy is useful, but it is still an analogy. It breaks in a few important places.

First, there is the rare-enemy problem.

Dawkins and Krebs noted that selection pressure changes when encounters are rare. If a hare almost never meets a fox, then fox-driven selection pressure weakens. The cyber equivalent is straightforward: if an organization is rarely targeted by sophisticated actors, it may not experience the kind of pressure that justifies a large rabbit-side investment in every domain.

That does not mean the organization needs no security. It means the life-dinner classification has to include encounter probability, not just impact.
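Folding encounter probability into the classification is just expected-value arithmetic. A sketch with illustrative numbers; nothing here is calibrated data, and the fox-side triage figures are invented to show where the real cost lands.

```python
def expected_annual_loss(encounters_per_year: float,
                         miss_probability: float,
                         loss_per_miss: float) -> float:
    """Expected defender loss from misses: how often the race happens,
    how often we lose it, and what losing costs. All inputs are estimates."""
    return encounters_per_year * miss_probability * loss_per_miss

# Rabbit-side: rare but existential. One targeted exfil attempt per decade,
# 30% chance of missing it, eight-figure loss if missed.
apt_exfil = expected_annual_loss(0.1, 0.3, 50_000_000)

# Fox-side: constant noise whose *misses* cost roughly nothing...
scan_noise = expected_annual_loss(500_000, 0.0, 0.0)

# ...but whose manual triage cost is real and sits on the defender's ledger:
# events/year * fraction escalated to a human * cost per manual triage.
triage_cost = 500_000 * 0.01 * 25

print(f"APT exfil expected miss loss: ${apt_exfil:,.0f}/yr")
print(f"Scan noise expected miss loss: ${scan_noise:,.0f}/yr")
print(f"Scan noise manual triage cost: ${triage_cost:,.0f}/yr")
```

The rare race dominates on miss cost; the common race costs nothing to miss and plenty to hand-triage. That is the whole argument in three multiplications.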

Second, breaches create externalities.

When a fox catches a rabbit, the cost is mostly borne by the rabbit. When a company loses sensitive data, the cost spreads to customers, employees, partners, and sometimes whole ecosystems. The defender’s balance sheet may understate the actual harm.

So the model is useful, but it can underprice the social cost of failure.

Third, the asymmetry can change mid-engagement.

A tool that is too expensive to burn in peacetime may be worth burning during a geopolitical crisis. An attacker who usually behaves like a fox may suddenly behave like a rabbit if the operation is strategically important enough.

That means classification is not a one-time exercise. It is part of threat modeling.

The meadow is still here

The cybersecurity industry has spent years warning that the meadow is about to collapse. It has not.

That does not mean defenders are winning everywhere. It does not mean the work is easy. It does not mean attackers are harmless. But it does mean the simple doom model is wrong.

The attacker does not always win. The defender does not need to treat every encounter as existential. The real work is figuring out which races matter.

If we took the life-dinner principle seriously, security programs would change in practical ways:

  • Every detection would have a life-dinner classification: rabbit, fox, or mixed
  • Rabbit-side alerts would go to a depth-oriented investigation track
  • Fox-side alerts would go to an automation and aggregation track
  • Hiring would distinguish between investigators and automation engineers
  • Metrics would differ by queue
  • Classifications would be reviewed as threat conditions changed
  • Vendors would be judged by whether they understand the difference

And maybe CISOs could stop repeating the same old sentence.

“The attacker only has to be right once” is not useless. It just needs a footnote:

Sometimes. In symmetric contests. Which many of the important ones are not.

Dawkins and Krebs gave us a better way to think about this in 1979.

The fox runs for dinner.

The rabbit runs for life.

A good security program knows which one it is before the race starts.

If you enjoyed the article, feel free to connect with me!
www.linkedin.com/in/koifman-daniel
https://x.com/KoifSec
https://koifsec.me
https://bsky.app/profile/koifsec.bsky.social

The Life-Dinner Principle in Detection was originally published in Detect FYI on Medium, where people are continuing the conversation by highlighting and responding to this story.
