Creating detection analytics is only part of the equation in detection engineering. Once you have analytics in place, the next challenge is to interpret their results accurately, distinguishing genuine threats from benign anomalies. In this post, we’ll go through how to evaluate results, identify malicious activity amid outliers, and navigate common sources of false positives.
Understanding Outliers and Context
Every detection analytic produces a range of events, and some of these will stand out as outliers. An outlier is an event that doesn’t match the normal patterns of activity in your environment. It could indicate malicious behavior, or it could simply be unusual but legitimate activity.
To assess outliers, start by asking key questions:
- How frequently does this activity occur?
Rare events, especially on critical systems, often warrant closer scrutiny; a quick prevalence check, like the sketch after this list, can put a number on how rare a hit really is.
- Is the activity typical across multiple systems, or isolated?
A widespread event may point to a scheduled update, while isolated activity could signal targeted, unauthorized actions.
- Does the event align with known adversary behaviors?
Cross-referencing with cyber threat intelligence can help pinpoint tactics adversaries typically use, such as unexpected file execution or network access.
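To make the frequency and prevalence questions concrete, here is a minimal sketch in Python. It assumes your analytic output can be exported as a list of event records; the field names (host, process) are placeholders for whatever your SIEM or data pipeline actually emits.

```python
from collections import Counter, defaultdict

# Hypothetical analytic output: each hit is a dict with "host" and "process"
# keys. Field names will differ depending on your SIEM or data pipeline.
hits = [
    {"host": "ws-014", "process": "schtasks.exe"},
    {"host": "ws-014", "process": "schtasks.exe"},
    {"host": "srv-db1", "process": "schtasks.exe"},
    {"host": "ws-207", "process": "powershell.exe"},
]

def summarize_prevalence(hits):
    """Count how often each process appears and on how many distinct hosts."""
    frequency = Counter(h["process"] for h in hits)
    hosts_per_process = defaultdict(set)
    for h in hits:
        hosts_per_process[h["process"]].add(h["host"])
    for process, count in frequency.most_common():
        print(f"{process}: {count} hits across {len(hosts_per_process[process])} host(s)")

summarize_prevalence(hits)
```

A hit that appears once on a single critical server reads very differently from one that shows up on most of your fleet within the same hour, which usually points to something scheduled.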
Using Cyber Threat Intelligence to Verify Hits
Cyber threat intelligence (CTI) is a valuable tool for interpreting results, giving you insight into which events align with known attack patterns. CTI provides context for what each detection might mean and offers examples of similar past activities. For instance, if you detect a scheduled task named “UpdateChrome” executing from an unusual directory, CTI might confirm this as a common adversarial tactic to impersonate legitimate updates.
Look for indicators in your CTI feed that match characteristics in your analytic output. Known tactics, unusual file paths, or specific command-line arguments can all be strong indicators. However, be cautious of confirmation bias: CTI should inform your interpretation, not lead you to premature conclusions. An event that matches a known tactic still needs further validation.
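As a rough illustration of this matching, the sketch below checks a hit for a task name that mimics a legitimate updater and an execution path rarely used by real installers. The patterns and field names are hypothetical stand-ins for indicators from your own CTI feed, and a match is context for further validation, not a verdict.

```python
import re

# Hypothetical CTI-derived indicators; replace with patterns from your own feed.
SUSPICIOUS_TASK_NAMES = [re.compile(r"update\w*chrome", re.IGNORECASE)]
UNUSUAL_EXEC_DIRS = [r"c:\users\public", r"c:\windows\temp"]

def cti_context(hit):
    """Return CTI observations for a hit; matches inform interpretation, they don't confirm it."""
    notes = []
    task = hit.get("task_name", "")
    path = hit.get("image_path", "").lower()
    if any(pattern.search(task) for pattern in SUSPICIOUS_TASK_NAMES):
        notes.append("task name mimics a legitimate updater")
    if any(path.startswith(directory) for directory in UNUSUAL_EXEC_DIRS):
        notes.append("executes from a directory rarely used by legitimate updaters")
    return notes

hit = {"task_name": "UpdateChrome", "image_path": r"C:\Users\Public\update.exe"}
print(cti_context(hit))  # two matches here, but the event still needs validation
```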
Dealing with Common False Positives
False positives are an inevitable part of detection, and understanding their sources helps you reduce them. Common sources include:
- Administrator Actions
Admins often perform activities that can mimic adversarial behavior, such as running scripts, creating scheduled tasks, or moving laterally within the network. Regularly consulting with admin teams can clarify legitimate events and prevent unnecessary investigations.
- Security Software
Security tools, especially those with endpoint monitoring, often generate actions that look like malicious activity. Tools designed to inspect files, kill processes, or monitor networks might resemble adversary behavior under certain conditions.
- System Misconfigurations
Misconfigured systems or software behaving unexpectedly can generate alerts similar to those you’d expect from malicious actions. Periodic system audits help identify and correct these misconfigurations, preventing them from recurring as false positives.
- Unusual User Activity
End users frequently create unusual activity that isn’t malicious. Power users, developers, or network testers may engage in behaviors resembling adversarial tactics. Tracking baseline behavior helps identify unusual actions that may still be legitimate. A triage filter like the one sketched after this list can help deprioritize hits that trace back to these documented benign sources.
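The allowlists and field names below are hypothetical; entries should come from conversations with your admin teams and security-tool owners and be reviewed regularly, since broad suppressions can hide real intrusions. Note that the filter only deprioritizes hits rather than silently dropping them.

```python
# Hypothetical allowlists; populate them from documented admin accounts and
# security tooling in your environment, and keep entries narrow.
KNOWN_ADMIN_ACCOUNTS = {"svc_patchmgmt", "admin.jsmith"}
KNOWN_SECURITY_TOOLS = {r"c:\program files\edr\scanner.exe"}

def is_documented_benign(hit):
    """Flag hits that match documented benign sources so they can be deprioritized."""
    user = hit.get("user", "").lower()
    image = hit.get("image_path", "").lower()
    if user in KNOWN_ADMIN_ACCOUNTS and hit.get("change_ticket"):
        return True  # admin action tied to an approved change record
    if image in KNOWN_SECURITY_TOOLS:
        return True  # known endpoint security tooling behavior
    return False

hits = [
    {"user": "admin.jsmith", "image_path": r"C:\Windows\System32\schtasks.exe",
     "change_ticket": "CHG-1042"},
    {"user": "jdoe", "image_path": r"C:\Users\Public\update.exe", "change_ticket": None},
]
for hit in hits:
    print(hit["user"], "-> documented benign" if is_documented_benign(hit) else "-> investigate")
```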
Evaluating Results with a Contextual Approach
Once you’ve considered the sources of false positives and aligned with CTI, evaluate the context surrounding each hit:
- Compare Event Details
Look at the specifics of each event: the process name, command line arguments, user associated with the action, and the system on which it occurred. Consistencies or anomalies here provide insight into whether the activity could be malicious.
- Examine Related Events
Many incidents involve multiple connected events. If you identify one suspicious process, check for additional events that relate to it, such as network connections initiated by that process or further file manipulations. Malicious behavior often unfolds in sequences.
- Document Findings
Track each evaluated hit and record key details. Include the initial analytic that flagged it, any CTI consulted, false positive sources, and the steps taken to assess its legitimacy. Documenting this information builds a reference for future investigations and helps refine analytics over time; one way to structure such a record is sketched below.
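To make the documentation step concrete, here is one possible shape for an evaluation record, kept as plain JSON so it can live alongside your analytics. The fields and values are illustrative rather than a prescribed schema.

```python
import json
from datetime import datetime, timezone

def record_finding(hit, analytic_id, cti_refs, fp_sources_checked, verdict, notes):
    """Build a structured record of how a hit was evaluated, for reuse in later investigations."""
    return {
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "analytic_id": analytic_id,          # which analytic flagged the event
        "hit": hit,                          # the raw event details
        "cti_references": cti_refs,          # reports or indicators consulted
        "false_positive_sources_checked": fp_sources_checked,
        "verdict": verdict,                  # e.g. "benign", "suspicious", "escalated"
        "notes": notes,
    }

finding = record_finding(
    hit={"host": "ws-014", "task_name": "UpdateChrome",
         "image_path": r"C:\Users\Public\update.exe"},
    analytic_id="scheduled-task-unusual-path",
    cti_refs=["vendor report on fake-updater scheduled tasks"],
    fp_sources_checked=["admin activity", "security tooling"],
    verdict="escalated",
    notes="No change ticket; related outbound connection from the same process.",
)
print(json.dumps(finding, indent=2))
```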
Conclusion
Interpreting detection results requires more than simply looking at alerts. By evaluating outliers, applying CTI, recognizing false positives, and examining context, you build a clear understanding of which events are genuine threats. In the next post, we’ll cover how to document investigations effectively and connect events to expand the picture of adversarial activity in your environment.