Objectifying Cyber Intel Indicators

I’ve had the fortune of visiting a good number of SOCs (including building some) and meeting with many leaders in the SOC/IR space over the years, and the better teams will tell you that you simply cannot look at every single alert that fires. For an Intel-driven IR program, this means you cannot simply dump all indicators into production; I’ve seen this fail for both immature programs (overwhelmed with alerts) and mature programs (too much data for the hardware to handle, too many alerts, not enough context). Simply put, I believe there must be a better way to prioritize indicators and define proper thresholds than what most organizations are doing today. Below are some concepts I haven’t heard discussed before, but that I’ve spent time working on with others and believe will help adjust the playing field down the road. I don’t think they are perfect, but they are perhaps a step in the right direction. As always, I’d love to hear feedback and thoughts on Twitter @SeanAMason, so please reach out.


Current State: Confidence & Impact

Confidence & Impact in CRITs

Confidence & Impact ratings are generally what I’ve seen used to rank atomic indicators, but this method does not scale and is overly subjective; it is based on little data or fact. How do you define Impact, especially when you are referring to an indicator of a possible breach? Confidence may be a bit easier to define given the source, but having only two subjective levers to categorize indicators leaves minimal options between them and presents a rather basic approach to something much more complex. It is also difficult to draw a line: do you draw it at Medium/Medium? What do you do with a Low/High indicator?

| Confidence | Impact | Weighting |
| --- | --- | --- |
| High | High | High/High |
| High | Medium | High/Medium |
| High | Low | High/Low |
| Medium | High | Medium/High |
| Medium | Medium | Medium/Medium |
| Medium | Low | Medium/Low |
| Low | High | Low/High |
| Low | Medium | Low/Medium |
| Low | Low | Low/Low |
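To make the limitation concrete, here is a throwaway sketch showing that two three-level levers yield only nine coarse buckets, with no principled way to order, say, Low/High against High/Low:

```python
from itertools import product

# The current-state model: two subjective, three-level levers yield
# only nine coarse buckets, and nothing in the model tells you how
# to rank e.g. "Low/High" against "High/Low".
LEVELS = ["High", "Medium", "Low"]
buckets = [f"{c}/{i}" for c, i in product(LEVELS, LEVELS)]
print(len(buckets))  # -> 9
```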

Six Objective Areas

With the current state as our stage, let’s discuss six areas that allow for more objectivity in this space and can be driven by data and facts.

1. Threat Actor (TA): There are a plethora of security professionals (especially vendors) discussing “Advanced Persistent Threats” without understanding the origin of the term, what it means, or whether these threats are even targeting the networks they are entrusted to protect. That said, not all threat actors are created equal, and treating them as such when you are defending a network is one of the quickest ways to become overwhelmed and ineffective in combating threats. At last count, most organizations in the know were tracking 40+ threat actors, encompassing both Nation State APT and Organized Cyber Crime. Trying to focus on each and every one of these will quickly consume all of your resources and will not allow for proper prioritization. Learn and understand the threat actors you should be most concerned about and prioritize those ahead of the others. For example, if you are a Defense Contractor and APT is your main concern, those groups should be weighted higher, and higher still if you understand which particular APT groups are after your IP (e.g. are you really concerned with APT1, or with one or more of the other Chinese APT groups?). Focusing your efforts on APT groups not interested in your IP, or CyberCrime groups looking for Credit Card data, while interesting, may be a huge waste of time and resources for a Defense Contractor. Even if you are an immature organization and lack the visibility and granularity of the largest companies or vendors providing this information, don’t hesitate to begin grouping threats at a high level based on motivation; you may be surprised at the patterns you start to notice over time and who is actually targeting you.

Example for a company making medical devices:

| [Fake] Threat Actor Name | Target | Incidents Identified | Objective Weighting |
| --- | --- | --- | --- |
| Unicorn Spider | Medical Device IP | 42 | 10 |
| Mutant Turtles | Medical Device IP | 18 | 10 |
| Soccer Ball | PHI data | 2 | 6 |
| XFit One | Credit Card Data | 0 | 1 |
| Roger Rabbit | Hacktivism | 0 | 1 |

2. Source (S): If you are tracking your incidents and alerts and mapping them back to the source of the intel, you can begin to paint a clearer picture of which people or organizations are providing you with the highest-fidelity information. It’s amazing how many Intel vendors have seemingly sprung up overnight, and this is also a useful method to understand whether you are getting what you are paying for. Weight the sources with the best information above the others. As a note, your best source of Intel is most likely going to be your internal team, comprised of the indicators won on the IR battlefield, so don’t overlook those.


| Source | Incidents Identified | False Positive Alert % | Objective Weighting |
| --- | --- | --- | --- |
| Internal | 42 | 24% | 10 |
| Vendor A | 18 | 67% | 8 |
| Government | 2 | 89% | 4 |
| Vendor B | 0 | 88% | 2 |
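As one hypothetical way to operationalize this, a sketch of a function that turns tracked source stats into a 1-10 weight. The thresholds below are invented for illustration, not taken from the table, and should be tuned against your own incident data:

```python
# Hypothetical sketch: derive a 1-10 source weight from tracked stats.
# The thresholds here are illustrative assumptions, not the article's.

def source_weight(incidents_identified: int, false_positive_pct: float) -> int:
    """Reward sources that find incidents; penalize noisy ones."""
    weight = min(10, 2 + incidents_identified // 5)  # more incidents -> higher
    if false_positive_pct > 0.85:                    # heavily noisy feed
        weight = max(1, weight - 4)
    elif false_positive_pct > 0.5:                   # moderately noisy feed
        weight = max(1, weight - 2)
    return weight

print(source_weight(42, 0.24))  # e.g. an internal team
print(source_weight(0, 0.88))   # e.g. a noisy vendor feed
```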

3. Kill Chain Phase (KC): A lot has been said about the Kill Chain, and much more about how hard (or impossible) it is to use in production. I disagree. Using the Kill Chain to help weight your indicators is a great use case; generally speaking, I’m more concerned with something alerting at KC7 (Actions on Objectives) than at KC1 (Recon). This is not to discount the Recon phase, as I think there is much to learn from it, but not every organization has the resources to review Recon alerts, and this weighting helps prioritize alerts so more time goes to issues of concern higher in the Kill Chain.


| Kill Chain Phase | Objective Weighting |
| --- | --- |
| KC7 – Actions on Objectives | 10 |
| KC6 – Command & Control | 9 |
| KC5 – Installation | 8 |
| KC4 – Exploitation | 7 |
| KC3 – Delivery | 6 |
| KC2 – Weaponization | 2 |
| KC1 – Reconnaissance | 5 |

4. Indicator Date (ID): Aging of indicators is a fundamental problem that has been talked about for years, and unfortunately I’ve not seen a silver-bullet solution for it. Using age as a small piece of a larger equation helps address the issue. I would proceed with caution and not attempt to overweight this attribute, as I’ve seen indicators that are years old reused in an attack (e.g. domains).


| Indicator Age | Objective Weighting |
| --- | --- |
| 0-3 months | 10 |
| 3-6 months | 8 |
| 6-12 months | 6 |
| 12-24 months | 4 |
| 24+ months | 2 |
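The age table above translates directly into a small lookup function. A minimal sketch (the whole-month age calculation is an approximation that ignores day-of-month):

```python
from datetime import date

# Sketch of the age-to-weight table above. Buckets are (upper bound
# in months, weight); anything 24 months or older falls through to 2.
AGE_BUCKETS = [(3, 10), (6, 8), (12, 6), (24, 4)]

def indicator_date_weight(first_seen: date, today: date) -> int:
    # approximate whole-month age; day-of-month is ignored
    months = (today.year - first_seen.year) * 12 + (today.month - first_seen.month)
    for max_months, weight in AGE_BUCKETS:
        if months < max_months:
            return weight
    return 2  # 24+ months

print(indicator_date_weight(date(2014, 1, 15), date(2014, 3, 1)))  # -> 10
```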

5. Performance (P): Many times you will have an indicator that has proven itself time and again; in other cases you will have indicators that have never fired, never led to an incident, and/or have only generated False Positives. There are multiple ways to build out this table, but an example is below.

| Incidents Identified | False Positive Ratio | Objective Weighting |
| --- | --- | --- |
| 2+ | – | 10 |
| 1 | – | 9 |
| 0 | – | 5 |
| 0 | 10-50% | 7 |
| 0 | 90%+ | 2 |
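As I read the table above, the weighting logic might be sketched like this (the cut-offs mirror the example table and should be adjusted to your own data):

```python
def performance_weight(incidents: int, fp_ratio: float) -> int:
    """Proven indicators score highest; noisy, never-hit indicators lowest."""
    if incidents >= 2:
        return 10
    if incidents == 1:
        return 9
    # never identified an incident: judge by false-positive ratio
    if fp_ratio >= 0.9:   # fires constantly, always wrong
        return 2
    if fp_ratio >= 0.1:   # fires with moderate noise
        return 7
    return 5              # has simply never fired

print(performance_weight(2, 0.0))   # -> 10
print(performance_weight(0, 0.95))  # -> 2
```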

6. Pyramid of Pain (PoP): For those not familiar with @DavidJBianco‘s Pyramid of Pain, check out his blog post. Essentially the PoP is based on the reality that “Not all indicators are created equal, and some of them are far more valuable than others.” While you may not leverage all indicator types in certain tools, for those that accept multiple types, this will help level the playing field.

| Pyramid Level | Objective Weighting |
| --- | --- |
| Network/Host Artifacts | 8 |
| Domain Names | 5 |
| IP Addresses | 4 |
| Hash Values | 1 |

Scoring Formula

Once you have these six areas defined and weighted appropriately, you can begin working with the formula below to evaluate your indicators more objectively. For example, if you are just starting out and have not had a lot of incidents, perhaps you will weight Sources (S) higher than Performance (P). If you’re unsure about your internal Intel capabilities and your mapping back to Threat Actors (TA), perhaps you will weight those lower than Sources (S). My suggestion would be to weight your formula multiple ways and check it against your existing intel and previous incidents to confirm it aligns with where you believe it should, before blindly doing an overhaul. Ideally you should find the right balance for your organization and keep it evergreen based on the maturation of your program and the data your organization produces and evaluates.

As an example, your formula may look like this:
ObjRat = .25TA + .25S + .2KC + .05ID + .15P + .1PoP

As an example using some indicators, you may come up with something like the table below. Notice how you have painted a more holistic picture of each indicator, which you can now rely on based on objectivity and data. You can also draw a numerical line in the sand based on an objective indicator weighting.

| Indicator | Threat Actor | Source | Kill Chain | Indicator Date | Performance | Pyramid of Pain | Formula | Objective Rating |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| badguy.com | Unicorn Spider | Internal | KC7 | 0-3 months | 2+ Incidents | Domain Name | ObjRat = .25×10 + .25×10 + .2×10 + .05×10 + .15×10 + .1×5 | 9.5 |
| 6b4475ce9f9c5c4b9d2e7edc8bdbf849 | Mutant Turtles | Vendor A | KC4 | 12-24 months | 0 Incidents | Hash | ObjRat = .25×10 + .25×8 + .2×7 + .05×4 + .15×5 + .1×1 | 6.95 |
| – | XFit One | Government | KC6 | 6-12 months | 2+ Incidents | IP Address | ObjRat = .25×1 + .25×4 + .2×9 + .05×6 + .15×10 + .1×4 | 5.25 |
| sdra64.exe | Soccer Ball | Vendor B | KC5 | 24+ months | 90% FP Ratio | Host Artifact | ObjRat = .25×6 + .25×2 + .2×8 + .05×2 + .15×2 + .1×8 | 4.8 |
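The scoring itself is trivial to automate. A minimal sketch using the example coefficients above:

```python
# Sketch of the ObjRat formula with the example coefficients from the post.
WEIGHTS = {"TA": 0.25, "S": 0.25, "KC": 0.20, "ID": 0.05, "P": 0.15, "PoP": 0.10}

def obj_rating(scores: dict) -> float:
    """Weighted sum of the six 1-10 objective-area scores."""
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

# badguy.com from the example table:
badguy = {"TA": 10, "S": 10, "KC": 10, "ID": 10, "P": 10, "PoP": 5}
print(round(obj_rating(badguy), 2))  # -> 9.5
```

With every indicator scored this way, drawing the numerical line in the sand becomes a one-line filter over your indicator set.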

Adding in Dollars and Cents

Moving away from the objective aspects for a moment, let’s bring in some ideas on how to appropriately set that line. Some organizations track the percentage of alerts they are able to evaluate per month and report the volume/% not being reviewed up to senior leadership, helping drive awareness (and hopefully funding) and allowing for risk-driven decisions given finite resources. I’ve always liked knowing the cost to run the SOC per minute, and ultimately the cost per alert, and have personally used the following math to help illustrate gaps and explain the situation in factual dollars and cents.

For example, let’s assume you have a $3M SOC budget and are generating 100k alerts per year.

First, we find the Cost to run the SOC per Minute is $5.71. (525,600 minutes/yr)
$5.71 = $3,000,000 / 525,600

If it takes you an average of 8 minutes to resolve an alert, your cost per alert is $45.68.
$45.68 = $5.71 * 8

Dividing that into your $3M budget, your team can handle 65,674 alerts per year or ~66% of what is being generated. Keep in mind, this doesn’t include overhead such as training, vacation, meetings, etc…
65,674 = $3,000,000 / $45.68

If you’re generating 100k alerts, you can see you have a large resource gap and need to adjust thresholds, and more importantly expectations, accordingly. The 34,326 alerts you cannot review represent roughly $1.57M of unfunded workload.
100,000 - 65,674 = 34,326 * $45.68 = $1,568,011
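The math above is easy to wrap into a reusable sketch. Note this version keeps unrounded intermediates, so the totals differ slightly from the hand-rounded figures above ($45.66 vs $45.68 per alert, 65,700 vs 65,674 alerts):

```python
# The back-of-the-envelope SOC math above as a reusable sketch.
# Unrounded intermediates, so results differ slightly from the
# hand-rounded figures in the text.
MINUTES_PER_YEAR = 525_600

def soc_capacity(budget: float, minutes_per_alert: float, alerts_per_year: int):
    cost_per_minute = budget / MINUTES_PER_YEAR
    cost_per_alert = cost_per_minute * minutes_per_alert
    alerts_handled = round(budget / cost_per_alert)      # capacity at this budget
    gap = max(0, alerts_per_year - alerts_handled)       # alerts left unreviewed
    return cost_per_alert, alerts_handled, gap, gap * cost_per_alert

cpa, handled, gap, gap_cost = soc_capacity(3_000_000, 8, 100_000)
print(f"${cpa:.2f}/alert, {handled} handled, {gap} unreviewed (${gap_cost:,.0f})")
```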


Data is King and Context is his Queen. Armed with methods to objectify your indicators and an understanding of your resource limitations for evaluating alerts, you can now set the bar to target what your level of resources is capable of. Perhaps more importantly, you can now have data-driven conversations with leadership around tightening, or opening, the aperture to detect attacks. Trust me: empowering your organizational leadership to make risk-based decisions mapped back to funding, armed with hard facts as opposed to conjecture, is a very enlightening conversation to have. I’d love to hear feedback on Twitter, so please reach out @SeanAMason.

4 thoughts on “Objectifying Cyber Intel Indicators”

  1. Good article.
    The cost per alert is a very good concept and does well in driving the argument that analysts can only clear a finite number of alerts. Like all good metrics, this number could drive other strategies such as consolidation of alerts and rotation of indicator packages through the toolsets.

  2. Some thoughts on “Objective Areas”:

    1. Threat Actor – in some cases, those “tracking” threat groups (really, clusters of indicators) tend to see various “groups” go after different verticals. This doesn’t seem to cross macro-level domains; i.e., a group that is after manufacturing plans doesn’t appear to then switch to PCI data, nor vice versa. However, the same nominal “group” has been observed going after auto manufacturing, pharma, etc.

    2. Source – great idea, a very “Edward Deming” approach.

    3. KC – agreed, with the caveat that the model doesn’t account for the vast amount of “intel” that can be gleaned from phase 7. It’s as if the Kill Chain was developed from a malware RE & network-centric approach; from a host-based or holistic (incorporating network and host) perspective, the KC model seems to tip a lot to the right.

    4. ID – applies to SOME indicators, from a network-centric perspective. If you’re external to endpoints, then indicators in the lower half of David Bianco’s “Pyramid of Pain” have a “shelf life”; that is, they age out. As pointed out in this item, they can “return to life”. However, if your perspective is that of the endpoint (ie, host-based analysis), then the indicators at all levels of the pyramid have less of a “shelf life” and more of an “order of volatility”.

    6. PoP: TTPs => 12

    Overall, while I agree with the need for “objective” areas, my concern is that this isn’t necessarily something that will be reached due to a lack of transparency and rigor within the “community”. It’s one thing to use declarative language or a weighted value, but at some point, where does that value originate? What is the data behind it? Without a complete picture…without access to actual compromised hosts…can one ever really be “objective”, or do we have to accept a certain amount of assumption and speculation?

  3. Sean,

    You said on Twitter that I forgot to address #5…I didn’t. Rather, I was saddened by my thoughts on this.

    As a community, this isn’t something that we do…maintain enough information about an indicator or artifact to rate its performance.

    Have you ever had a conversation with someone about DF case notes? In a group, some will say that they keep them…but when you ask to see them, that small percentage that admits to keeping case notes gets even smaller. “Oh, I keep them on scraps of paper…”…I’ve been told that by about a dozen different analysts over the years.

    When an analyst does share their case notes, you see that they’re all over the place…there’s no real focus on the goals of the investigation.

    My point is that keeping a record of the performance of an indicator is exceedingly difficult when (a) analysts don’t generally keep records, and (b) because many analysts aren’t taking a focused approach to their analysis, they don’t have the time remaining to maintain those records.

    Jamie Levy shared an indicator with me that helps identify the use of gsecdump to dump LSA secrets on a system. I wrote a RegRipper plugin, and I use it on just about all of my analysis engagements. The false positive rate has been extremely low, and the indicator has been shown to be valid in the absence of other supporting indicators. The performance of this indicator is such that it is an eye-opener and can not only provide insight into what the adversary did on the box (Actions on Objective), but it can also change the direction of an investigation. But getting to this point required sharing, and continual attention and vigilance toward that indicator.

    I still agree that we need an objective “Deming-style” approach to the solution, but that requires a level of rigor that really isn’t pervasive throughout the “community”.

