“The greatest risk with false positives is that the tool generates so many alerts that [it] becomes seen as a noise generator, and any true issues are ignored due to fatigue on the part of those responsible for managing the tools,” Cotter says. “We frequently see this issue in tools that are not properly operationalized, such as when tools are installed and deployed using default settings and profiles.”
A common example is file integrity monitoring software, which alerts administrators whenever files on a monitored system are altered, since such changes can indicate malware or intruder activity. “Using default settings, a simple patch will generate a very large number of file changes; when aggregated across a mid-sized enterprise, this could easily generate many tens of thousands of alerts,” Cotter says.
Any meaningful alerts could easily get lost in that flood of information, Cotter says, and dismissed by administrators as related to the updates. “In order to address that issue, a thorough process needs to be in place to test updates and ‘fingerprint’ their changes, so that those specific alerts can be filtered and/or dismissed, leaving a clear set of actionable alerts for administrators to follow up on,” he says.
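The fingerprinting process Cotter describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes the patch has been applied in a test environment first, the post-patch hashes of the changed files recorded, and each incoming file-change alert compared against that allowlist. The paths, hash values, and alert fields here are all hypothetical.

```python
import hashlib

def filter_alerts(alerts, fingerprint):
    """Suppress file-change alerts whose new content hash matches the
    fingerprint recorded when the update was tested; everything else
    remains actionable for an administrator to review."""
    actionable = []
    for alert in alerts:
        if fingerprint.get(alert["path"]) == alert["new_hash"]:
            continue  # change matches the vetted update; filter it out
        actionable.append(alert)
    return actionable

# Hypothetical fingerprint captured in a test environment after patching.
fingerprint = {
    "/usr/bin/example-daemon": hashlib.sha256(b"patched-binary-v2").hexdigest(),
}

# Hypothetical alerts from the file integrity monitor: one expected
# patch change, one unexplained change to a sensitive file.
alerts = [
    {"path": "/usr/bin/example-daemon",
     "new_hash": hashlib.sha256(b"patched-binary-v2").hexdigest()},
    {"path": "/etc/passwd",
     "new_hash": hashlib.sha256(b"unexpected-content").hexdigest()},
]

remaining = filter_alerts(alerts, fingerprint)
```

After filtering, only the unexplained `/etc/passwd` change survives, which is the “clear set of actionable alerts” the process is meant to produce; a production deployment would also need to handle deleted files, permission changes, and updates arriving at different times across the fleet.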
Defining, refining, implementing, and executing that process adds to the overall effort needed to operate the tool, but it can drastically reduce the longer-term cost of ownership while improving the system's signal-to-noise ratio and usability, Cotter says.
“Many other security tools can have a similar problem with excessive alerting, and are frequently ignored due to the low signal-to-noise ratio,” Cotter says. “Examples include intrusion detection systems, Web application firewalls and other systems that are monitoring Internet-accessible endpoints.”
Addressing the issue of false positives should start with a thorough understanding of what a given tool is intended to address, as well as how it functions.
“When implementing the tool, ensure that the implementers fully understand the intent of the tool deployment, rather than making assumptions about ‘normal’ use cases, or simply installing a tool with default settings,” Cotter says.
From a process and education standpoint, any security tool implementation will impact existing policies and procedures, including incident response and any operational procedures for systems that the tool impacts, Cotter says. “This impact should be reviewed and validated, and policy and procedure documentation should be updated in tandem with the tool deployment in order to ensure that operational activities are minimally impacted by the change,” he says.
The most important thing security practitioners should do is understand that not every detection is malicious in nature, Pingree says. “There are a variety of ways to categorize incidents in order to identify a false positive,” he says.
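One common way to categorize a detection, consistent with Pingree's point that not every detection is malicious, is to ask two questions: did the detection logic fire correctly, and was the underlying activity actually malicious? The category names and the two-question scheme below are illustrative assumptions, not a scheme attributed to Pingree.

```python
from enum import Enum

class TriageOutcome(Enum):
    TRUE_POSITIVE = "true positive"                # correct detection of malicious activity
    BENIGN_TRUE_POSITIVE = "benign true positive"  # correct detection of authorized activity
    FALSE_POSITIVE = "false positive"              # the detection logic misfired

def triage(detection_correct: bool, activity_malicious: bool) -> TriageOutcome:
    """Hypothetical two-question triage of a single detection."""
    if not detection_correct:
        return TriageOutcome.FALSE_POSITIVE
    if activity_malicious:
        return TriageOutcome.TRUE_POSITIVE
    return TriageOutcome.BENIGN_TRUE_POSITIVE
```

For example, a file integrity alert triggered by an approved patch would triage as a benign true positive: the tool detected a real change, but one that requires no incident response.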