This is the final installment in a three-part series about the important elements of a threat-centric security capability.
In the first part of the series, I focused on the people and skills needed to effectively detect and respond to threats. In the second part, I focused on the processes that need to be in place in order to respond proactively to threats within the environment.
This final installment focuses on the tools security operations center analysts need to do their job effectively.
The first essential tool is host detection. Typically, this is a kernel-level agent that resides on each host. User-level agents are not effective: an attacker can see the service running on the host and disable it quite easily without detection.
The kernel-level agent provides telemetry back to the security operations center, giving visibility on host behavior. At a minimum, this kernel-level agent will provide monitoring of:
- Host process creation
- Host network activity
- Host process/thread behavior (registry access, file access, file creation/modification, directory access)
- Host memory analysis (i.e. stack, heap)
- Host behavior correlation
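To make the telemetry concrete, here is a minimal sketch of the kind of process-creation event a kernel-level agent might emit back to the security operations center. The field names and values are illustrative only, not any vendor's actual schema.

```python
import json
import time

# Hypothetical telemetry record a kernel-level agent might emit to the SOC.
# Field names are illustrative, not any vendor's actual schema.
def make_process_event(host, pid, ppid, image, cmdline):
    return {
        "type": "process_create",
        "host": host,
        "pid": pid,
        "ppid": ppid,          # parent PID, useful for process-tree analysis
        "image": image,
        "cmdline": cmdline,
        "timestamp": time.time(),
    }

event = make_process_event("win10-finance-07", 4312, 812,
                           r"C:\Windows\System32\cmd.exe",
                           "cmd.exe /c whoami")
print(json.dumps(event, indent=2))
```

Capturing the parent PID alongside each process lets analysts reconstruct process trees, which is often how host behavior correlation starts.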
Other features to look for are application whitelisting and application blacklisting. Application whitelisting should be enabled in detection mode, as this feature tends to generate many false positives. It's a best practice to have a security operations center analyst investigate each alert to determine whether it is indeed an indicator of attack or indicator of compromise.
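The detection-mode approach can be sketched as follows: instead of blocking an unlisted application, the agent raises an alert for an analyst to triage. The allowlist entries and paths below are hypothetical.

```python
# Minimal sketch of application whitelisting in detection (alert-only) mode.
# The allowlist entries and process paths are hypothetical examples.
ALLOWLIST = {
    r"C:\Windows\System32\svchost.exe",
    r"C:\Windows\explorer.exe",
}

def check_process(image_path):
    """Return an alert dict instead of blocking, so a SOC analyst
    can triage potential false positives."""
    if image_path in ALLOWLIST:
        return None
    return {"severity": "medium",
            "reason": "process not on allowlist",
            "image": image_path}

alert = check_process(r"C:\Users\jdoe\AppData\Local\Temp\updater.exe")
```

Running in detection mode first lets the team tune the allowlist before switching to enforcement.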
In addition, future endpoint technology should be able to model the ideal state of the host on which it is deployed and create an alert when behavior is seen that falls out of that ideal steady state condition.
For example, a Windows Active Directory domain-joined host exhibits certain behavior upon startup, logon, logoff and normal operation in terms of network connections, services started, event log entries and files accessed. This can be “learned” by the kernel-level agent, and alerts can be generated when certain behavior falls out of line with normal operation.
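A toy version of that learn-then-alert loop: record the behaviors observed during a training window, then flag anything that falls outside the steady state. Real agents model far richer features; the (process, port) pairs here are purely illustrative.

```python
# Sketch: learn a baseline of observed host behaviors during a training
# window, then flag anything outside that steady state. The (process, port)
# pairs are illustrative stand-ins for richer behavioral features.
def learn_baseline(observations):
    return set(observations)

def detect_anomalies(baseline, current):
    return [obs for obs in current if obs not in baseline]

baseline = learn_baseline([
    ("lsass.exe", 445), ("svchost.exe", 135), ("dns_client", 53),
])
anomalies = detect_anomalies(baseline, [
    ("svchost.exe", 135), ("powershell.exe", 4444),
])
# anomalies → [("powershell.exe", 4444)]
```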
At the network level, it's important to have both in-line protection and network intrusion detection/monitoring via a switch SPAN port or network tap, if possible. In public cloud environments, you may not have access to SPAN ports or network taps; in that case, in-line protection is essential. The in-line protection technology is better known as a next-generation firewall, or NGFW. At a minimum, the NGFW feature set should include:
- Application visibility and control
- User visibility and control; integration with IAM
- APT prevention
- Passive DNS
- Data filtering
- Policy control
- PFS-SSL offloading/decryption/inspection
- Exploit protection
- SaaS enforcement
- Logging and reporting
- IPv6 support
- Next gen networking support (i.e. NSX)
Whereas prior-generation firewalls typically inspected Layer 3/4 traffic, the next-generation firewall is also responsible for Layer 7 (application layer) inspection of north-south (ingress-egress) network traffic as well as east-west traffic (between Layer 3 VLANs).
It not only acts as an attack prevention device, but also provides network information that helps the analyst correlate changes in network traffic with the host detection discussed earlier.
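A vendor-neutral way to picture a Layer 7 policy rule: it matches on the identified application and the user's IAM group rather than on port numbers alone. The rule fields and values below are hypothetical, not any product's configuration syntax.

```python
# Illustrative (vendor-neutral) representation of an NGFW Layer 7 policy
# rule: it matches on application identity and user group, not just ports.
rule = {
    "name": "allow-finance-saas",
    "source_zone": "trust",
    "dest_zone": "untrust",
    "application": "salesforce",   # identified app, not merely port 443
    "user_group": "finance",       # pulled from IAM integration
    "action": "allow",
    "decrypt": True,               # PFS-SSL decryption for inspection
    "log": True,                   # feed logging/reporting and the SIEM
}

def matches(rule, session):
    """Return True when a session's identified app and user group
    satisfy the rule's match criteria."""
    return (session["application"] == rule["application"]
            and session["user_group"] == rule["user_group"])
```

The point is that "application visibility and control" means policy decisions keyed on what the traffic actually is, with decryption enabled so Layer 7 inspection can see inside TLS.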
Network intrusion detection systems try to identify malicious action such as denial of service attacks, port scans and attempts to break into computers by monitoring network traffic. The network intrusion detection system technology selected should have the following features:
- Ability to capture packets from the network interfaces.
- An event engine to assemble captured packets into events that describe the actions performed.
- A policy script interpreter that takes action when it detects suspicious or dangerous activity and discards events not defined in the policy scripts.
- The ability to run in high-speed (>10 Gbps) environments and capture traffic without dropping packets or slowing it down.
- Next-gen (contextual) signature detection.
- Pre-written policy scripts which can be used right out of the box to detect the most well-known attacks.
- Ability to customize policy scripts specific to your environment.
- Ability to model network behavior, in order to detect changes to known network traffic.
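The policy-script idea can be illustrated with a toy detection: flag any source IP that touches many distinct destination ports in a window, a common port-scan heuristic. The threshold of 20 ports is an illustrative tuning knob, not a recommended value.

```python
from collections import defaultdict

# Toy policy-script logic: flag a source IP that touches many distinct
# destination ports (a common port-scan heuristic). The threshold is an
# illustrative tuning knob, not a recommended production value.
PORT_THRESHOLD = 20

def detect_port_scans(events):
    """events: iterable of (src_ip, dst_ip, dst_port) connection attempts."""
    ports_by_src = defaultdict(set)
    for src, _dst, port in events:
        ports_by_src[src].add(port)
    return [src for src, ports in ports_by_src.items()
            if len(ports) > PORT_THRESHOLD]

events = [("10.0.0.5", "10.0.0.9", p) for p in range(25)]  # 25 distinct ports
events += [("10.0.0.7", "10.0.0.9", 443)] * 50             # normal traffic
print(detect_port_scans(events))  # → ['10.0.0.5']
```

Pre-written scripts ship with detections like this out of the box; the ability to customize them is what lets the tool reflect your specific environment.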
If a network intrusion detection system can be deployed (public cloud environments can make some of these features difficult to implement), it serves as an additional data point for analysts in the security operations center.
Finally, an analytics engine is needed that can take information from the host- and network-level tools and correlate it. This is known as security information and event management, or SIEM.
When a network is breached, the time between when the attack occurs and when the operations center responds can make the difference between protecting the organization’s most vital data and a successful data exfiltration (and becoming the lead story on the evening news).
SIEM software, when correctly configured and monitored, can play a significant role in identifying breaches as they’re happening. It is important that the requirements for the SIEM be discussed and agreed upon before deployment, and that the software be correctly sized for the environment in question.
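The kind of correlation a SIEM rule performs can be sketched as a join: match a host process-creation alert with outbound network traffic from the same host within a short time window. The event shapes, hosts, and window size here are hypothetical.

```python
# Sketch of a SIEM-style correlation rule: join a host process alert with
# outbound network traffic from the same host within a time window.
# Event shapes, hostnames, and the window size are hypothetical.
WINDOW_SECONDS = 60

def correlate(host_events, net_events):
    hits = []
    for h in host_events:
        for n in net_events:
            if (h["host"] == n["host"]
                    and abs(h["ts"] - n["ts"]) <= WINDOW_SECONDS):
                hits.append({"host": h["host"],
                             "process": h["process"],
                             "dest": n["dest"]})
    return hits

host_events = [{"host": "web-01", "process": "powershell.exe", "ts": 1000}]
net_events = [{"host": "web-01", "dest": "203.0.113.50:4444", "ts": 1030}]
hits = correlate(host_events, net_events)
```

Neither event is conclusive alone; together, a scripting engine launching shortly before an outbound connection to an unusual destination is exactly the pattern an analyst wants surfaced.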
Companies often overspend on their SIEM implementation because they fail to fully understand which problems they need to solve. At a minimum, a SIEM should be able to do the following:
- Integrate traditional log sources with other event sources (e.g., host and network detection tools, NGFW)
- Include capabilities to support a security operations center
- Scale to large implementations
- Import and export content (rules, reports, trends)
- Include multi-value lists (active lists, watch lists) with expiration times on lists (expire after X number of minutes/hours) and event on expiration for state table usage
- Create custom log source feeds
- Aggregate and filter at the collector level (with selectable fields and summarization of fields)
- Reuse and move objects
- Summarize tables
- Provide health status monitoring
- Provide redundancy
- Scale at the correlation engine level
- Integrate with a ticketing/workflow system
- Integrate with an existing configuration management database to pull asset tag information
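The multi-value list requirement above can be sketched as a small state table: entries expire after a TTL, and the expiration itself can fire a follow-up event. This is a simplified illustration of the concept, not any SIEM's actual API.

```python
import time

# Sketch of a SIEM "active list" with per-entry expiration: entries expire
# after a TTL, and expiry itself can fire a follow-up event (the state-table
# usage described above). Simplified illustration, not any SIEM's real API.
class ActiveList:
    def __init__(self, ttl_seconds, on_expire=None):
        self.ttl = ttl_seconds
        self.on_expire = on_expire
        self.entries = {}          # key -> time the entry was added

    def add(self, key, now=None):
        self.entries[key] = time.time() if now is None else now

    def contains(self, key, now=None):
        self.sweep(now)
        return key in self.entries

    def sweep(self, now=None):
        now = time.time() if now is None else now
        for key, added in list(self.entries.items()):
            if now - added > self.ttl:
                del self.entries[key]
                if self.on_expire:
                    self.on_expire(key)  # event on expiration

expired = []
watch = ActiveList(ttl_seconds=300, on_expire=expired.append)
watch.add("10.0.0.5", now=0)
watch.sweep(now=301)   # the entry ages out and the expiry callback fires
```

A rule might add a source IP to such a list on a failed-login burst, then escalate only if a second suspicious event arrives before the entry expires.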
One approach gaining adoption is cloud-based SIEM as a service. If you're considering a cloud-based SIEM, understand that log data can contain personally identifiable information or protected health information.
For example, the SIEM could alert on a file transfer and collect the data from the transfer in a log file. That log file could contain a social security number or a patient’s private data. A separate privacy agreement with the cloud provider may be needed to ensure the data is handled appropriately.
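One common mitigation is to scrub sensitive patterns from log lines before they leave your environment. The sketch below redacts SSN-like strings with a single regex; a real deployment needs a much fuller PII/PHI policy, and the log line shown is invented.

```python
import re

# Sketch: scrub SSN-like patterns from log lines before shipping them to a
# cloud-hosted SIEM. A real deployment needs a far broader PII/PHI policy;
# this single regex (and the sample log line) only illustrate the idea.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(line):
    return SSN_PATTERN.sub("[REDACTED-SSN]", line)

log_line = "file transfer by jdoe: record contains 123-45-6789"
print(redact(log_line))
# → "file transfer by jdoe: record contains [REDACTED-SSN]"
```

Redaction at the collection point complements, rather than replaces, a privacy agreement with the provider.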
Host-based detection tool(s), network-based detection/prevention tool(s) and analytics are the “triple-stack” toolset needed to support the operations center analysts. There are other tools that complement the triple stack, but these three focus areas represent the main ones, at least from a cost perspective.
Do you have questions about threat-centric security capabilities, or want to learn more about the tools your security operations center should adopt to be threat-centric? Visit Rackspace to find out more about our managed security services and the ways we help businesses stay secure.