I’ve had the good fortune of visiting a good number of SOCs (including building some) and meeting with many leaders in the SOC/IR space over the years, and the better teams will tell you that you simply cannot look at every single alert that fires. For an intel-driven IR program, this means you cannot simply dump all indicators into production; I’ve seen this fail for both immature programs (overwhelmed with alerts) and mature programs (too much data for the hardware to handle, too many alerts, not enough context). Simply put, I believe there must be a better way to prioritize indicators and define proper thresholds than what most organizations are doing today. Below are some concepts I haven’t heard discussed before, but they are ideas I’ve spent time working on with others and believe will help adjust the playing field down the road. I don’t think they are perfect, but they are perhaps a step in the right direction. As always, I’d love to hear feedback and thoughts on Twitter @SeanAMason, so please reach out.
Current State: Confidence & Impact
Confidence & Impact ratings are what I’ve generally seen used to rank atomic indicators, but this method does not scale and is overly subjective: it is based on little data or fact. How do you define Impact, especially when you are referring to an indicator of a possible breach? Confidence may be a bit easier to define given the source, but having only two subjective levers to categorize indicators leaves minimal options between them and presents a rather basic approach to something much more complex. It is also difficult to draw a line: do you draw it at Medium/Medium? What do you do with a Low/High indicator?
Six Objective Areas
With the current state as our backdrop, let’s discuss six areas that allow for more objectivity in this space and can be driven by data and facts.
1. Threat Actor (TA): There is a plethora of security professionals (especially vendors) discussing “Advanced Persistent Threats” without understanding the origin of the term, what it means, or whether these threats are even targeting the networks they are entrusted to protect. That said, not all threat actors are created equal, and treating them as such while defending a network is one of the quickest ways to become overwhelmed and ineffective in combating threats. At last count, most organizations in the know are tracking 40+ threat actors, encompassing both nation-state APT and organized cyber crime. Trying to focus on each and every one of these will quickly consume all of your resources and will not allow for proper prioritization. Learn and understand the threat actors you should be most concerned about and prioritize those ahead of the others. For example, if you are a defense contractor and APT is your main concern, those groups should be weighted higher, and higher still if you understand the particular APT groups interested in your IP (e.g., are you really concerned with APT1, or with one or more of the other Chinese APT groups?). Focusing your efforts on APT groups not interested in your IP, or cyber crime groups looking for credit card data, while interesting, may be a huge waste of time and resources for a defense contractor. Even if you are an immature organization and lack the visibility and granularity of the largest companies or vendors providing this information, don’t hesitate to begin grouping threats at a high level based on motivation; you may be surprised at the patterns you start to notice over time and who is actually targeting you.
Example for a company making medical devices:
| [Fake] Threat Actor Name | Target | Incidents Identified | Objective Weighting |
|---|---|---|---|
| Unicorn Spider | Medical Device IP | 42 | 10 |
| Mutant Turtles | Medical Device IP | 18 | 10 |
| Soccer Ball | PHI data | 2 | 6 |
| XFit One | Credit Card Data | 0 | 1 |
2. Source (S): If you are tracking your incidents and alerts and mapping them back to the source of the intel, you can begin to paint a clearer picture of which people or organizations are providing you with the highest-fidelity information. It’s amazing how many intel vendors have seemingly sprung up overnight, and this is also a useful method to understand whether you are getting what you are paying for. Weight the sources with the best information above others. As a note, your best source of intel is most likely going to be your internal team, comprised of the indicators won on the IR battlefield, so don’t overlook those.
| Source | Incidents Identified | False Positive Alert % | Objective Weighting |
|---|---|---|---|
3. Kill Chain Phase (KC): A lot has been said about the Kill Chain, and much of it about how hard (or impossible) it is to use in production; I disagree. Using the Kill Chain to help weight your indicators is a great use case, and generally speaking, I’m more concerned with something alerting at KC7 (Actions on Objectives) than at KC1 (Recon). This is not to discount the Recon phase, as I think there is much to learn from it, but not every organization has the resources to review Recon alerts, and this weighting helps focus more time on issues of concern higher in the Kill Chain.
| Kill Chain Phase | Objective Weighting |
|---|---|
| KC7 – Actions on Objectives | 10 |
| KC6 – Command & Control | 9 |
| KC5 – Installation | 8 |
| KC4 – Exploitation | 7 |
| KC3 – Delivery | 6 |
| KC2 – Weaponization | 2 |
| KC1 – Reconnaissance | 5 |
4. Indicator Date (ID): Aging of indicators is a fundamental problem that has been talked about for years. Unfortunately, I’ve not seen a silver-bullet solution for this issue; using age as a small piece of a larger equation helps to address it. I would proceed with caution with this attribute and not attempt to overweight it, as I’ve seen indicators that are years old reused in an attack (e.g. domains).
| Indicator Age | Objective Weighting |
|---|---|
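As a sketch of how indicator age might translate into a weighting, the bucket cut-offs and weights below are illustrative assumptions (loosely mirroring the age ranges used in the example table later in the post), not prescribed values; tune them against your own incident data:

```python
from datetime import date

# Hypothetical age buckets: (max age in days, weighting).
# These cut-offs and weights are assumptions for illustration only.
AGE_WEIGHTS = [
    (180, 10),   # 0-6 months
    (365, 6),    # 6-12 months
    (730, 4),    # 12-24 months
]
DEFAULT_WEIGHT = 2  # 24+ months; kept non-zero because old domains get reused

def indicator_date_weight(first_seen: date, today: date) -> int:
    """Map an indicator's age to an objective weighting bucket."""
    age_days = (today - first_seen).days
    for max_days, weight in AGE_WEIGHTS:
        if age_days <= max_days:
            return weight
    return DEFAULT_WEIGHT
```

The key design point is the floor: even the oldest bucket retains a small weight rather than zero, reflecting the caution above about years-old indicators resurfacing.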
5. Performance (P): Many times you will have an indicator that has proven itself time and again; in other cases you will have indicators that have never fired, never led to an incident, and/or have generated only false positives. There are multiple ways to build out this table, but an example is below.
| Incidents Identified | False Positive Ratio | Objective Weighting |
|---|---|---|
6. Pyramid of Pain (PoP): For those who are not familiar with @DavidJBianco’s Pyramid of Pain, check out his blog post here. Essentially, the PoP is based on the reality that “Not all indicators are created equal, and some of them are far more valuable than others.” While you may not leverage all indicator types in certain tools, for those that accept multiple types, this weighting helps level the playing field.
| Pyramid Level | Objective Weighting |
|---|---|
Once you have these six areas defined and weighted appropriately, you can work with the formula below to arrive at a more objective measurement for evaluating your indicators. For example, if you are just starting out and have not had many incidents, perhaps you will weight Source (S) higher than Performance (P). If you’re unsure about your internal intel capabilities and mapping back to Threat Actors (TA), perhaps you will weight those lower than Source (S). My suggestion would be to weight your formula multiple ways and check the results against your existing intel and previous incidents to confirm it aligns with where you believe it should, before blindly doing an overhaul. Ideally you should find the right balance for your organization and keep it evergreen as your program matures and as your organization produces and evaluates more data.
As an example, your formula may look like this (the weights here match the worked examples in the table below):

ObjRat = 0.25(TA) + 0.25(S) + 0.20(KC) + 0.05(ID) + 0.15(P) + 0.10(PoP)
As an example using some indicators, you may come up with something like the table below. Notice how you have painted a more holistic picture of each indicator, which you can now rely on based on objectivity and data. You can also draw a numerical line in the sand based on the objective indicator weighting.
| Indicator | Threat Actor | Source | Kill Chain | Indicator Date | Performance | Pyramid of Pain | Formula | Objective Rating |
|---|---|---|---|---|---|---|---|---|
| badguy.com | Unicorn Spider | Internal | KC7 | 12-24 months | 2+ Incidents | Domain Name | ObjRat = .25×10 + .25×10 + .2×10 + .05×10 + .15×10 + .1×5 | 9.5 |
| 6b4475ce9f9c5c4b9d2e7edc8bdbf849 | Mutant Turtles | Vendor A | KC4 | 12-24 months | 0 Incidents | Hash | ObjRat = .25×10 + .25×8 + .2×7 + .05×4 + .15×5 + .1×1 | 6.95 |
| 188.8.131.52 | XFit One | Government | KC6 | 6-12 months | 2+ Incidents | IP Address | ObjRat = .25×1 + .25×4 + .2×9 + .05×6 + .15×10 + .1×4 | 5.25 |
| sdra64.exe | Soccer Ball | Vendor B | KC5 | 24+ months | 90% FP Ratio | Host Artifact | ObjRat = .25×6 + .25×2 + .2×8 + .05×2 + .15×2 + .1×8 | 4.8 |
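The rating computation above is just a weighted sum, and can be sketched in a few lines of Python. The area weights and per-indicator scores below are taken directly from the example rows; only the function and variable names are mine:

```python
# Area weights from the example formula: TA and S at .25, KC at .20,
# ID at .05, P at .15, PoP at .10 (they sum to 1.0).
AREA_WEIGHTS = {
    "TA": 0.25,   # Threat Actor
    "S": 0.25,    # Source
    "KC": 0.20,   # Kill Chain phase
    "ID": 0.05,   # Indicator Date (age)
    "P": 0.15,    # Performance
    "PoP": 0.10,  # Pyramid of Pain level
}

def objective_rating(scores: dict) -> float:
    """Weighted sum of the six area scores (each on a 1-10 scale)."""
    return round(sum(AREA_WEIGHTS[area] * scores[area] for area in AREA_WEIGHTS), 2)

# badguy.com from the example table:
print(objective_rating({"TA": 10, "S": 10, "KC": 10, "ID": 10, "P": 10, "PoP": 5}))
# 9.5
```

Because the weights sum to 1.0, the result stays on the same 1–10 scale as the inputs, which makes it easy to draw the numerical line in the sand mentioned above (e.g. “production-deploy everything above 6.0”).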
Adding in Dollars and Cents
Moving away from the objective aspects for a moment, let’s bring in some ideas on how to appropriately set that line. Some organizations track the percentage of alerts they are able to evaluate per month and report the volume/percentage not being reviewed up to senior leadership, helping drive awareness (and hopefully funding) and allowing for risk-driven decisions in the face of finite resourcing. I’ve always liked knowing the cost to run the SOC per minute, and ultimately the cost per alert, and have personally used the following math to help illustrate gaps and explain the situation in factual dollars and cents.
For example, let’s assume you have $3M SOC budget and are generating 100k alerts per year.
First, we find the cost to run the SOC per minute is $5.71 ($3,000,000 ÷ 525,600 minutes per year).
If it takes you an average of 8 minutes to resolve an alert, your cost per alert is $45.68 ($5.71 × 8).
Dividing that into your $3M budget, your team can handle 65,674 alerts per year, or roughly 66% of what is being generated. Keep in mind, this doesn’t include overhead such as training, vacation, meetings, etc.
If you’re generating 100k alerts, you can see you have a large resource gap and need to adjust thresholds, and more importantly expectations, accordingly.
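The cost math above can be sketched as a short script (the input figures are the example’s assumptions; substitute your own budget and alert volume):

```python
# Example assumptions from the walkthrough above.
budget = 3_000_000            # annual SOC budget, dollars
minutes_per_year = 525_600
avg_minutes_per_alert = 8
alerts_generated = 100_000

cost_per_minute = round(budget / minutes_per_year, 2)                # $5.71
cost_per_alert = round(cost_per_minute * avg_minutes_per_alert, 2)   # $45.68
alerts_handled = int(budget / cost_per_alert)                        # 65,674
coverage = alerts_handled / alerts_generated                         # ~0.66

print(f"${cost_per_alert}/alert; can handle {alerts_handled:,} "
      f"of {alerts_generated:,} alerts ({coverage:.0%})")
```

Note this intentionally mirrors the rounding in the prose (cost per minute rounded to cents before multiplying), and, as stated above, ignores overhead like training, vacation, and meetings.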
Data is King and Context is his Queen. Armed with methods to objectify your indicators and an understanding of your resource limitations for evaluating alerts, you can now set the bar at what your level of resources is capable of handling. Perhaps more importantly, you can now have data-driven conversations with leadership around tightening, or opening, the aperture to detect attacks. Trust me: empowering your organizational leadership to make risk-based decisions mapped back to funding, armed with hard facts as opposed to conjecture, is a very enlightening conversation to have. I’d love to hear feedback on Twitter, so please reach out @SeanAMason.