Trying to filter out phishing emails is tough work, even for organizations trying to find a better way through automation, according to a new study from security software company GreatHorn.
The company makes software that seeks out phishing attempts and can autonomously block them, but even its customers don’t switch on all the features, according to GreatHorn’s study of how customers dealt with just over half a million spear phishing attempts.
The most common autonomous action, taken against a third of suspicious emails, was to alert an admin when a policy was violated and let them decide what to do. Organizations also choose this option to create a record of potential threats, the company says. Another 6% of emails trigger alerts to the recipients so they can be on the lookout for similar attempts.
The platform also enables enforcement of policies, including moving suspicious emails from the inbox to trash (used on 2% of suspicious emails). They can also be quarantined (used on 1%) or moved to a specified folder for examination (used on 7% of suspicious emails examined in the study).
The software can also flag suspicious emails but leave them in the inbox. Adding a label to notify recipients that emails may represent a threat was used for 6% of them, and floating a warning banner inside a suspicious email was used for 4%.
But taking no action was the option applied to 41% of suspicious messages. That doesn’t mean these possible threats are being ignored, however, says GreatHorn CEO Kevin O’Brien. Rather than triggering an autonomous action, certain types of low-level threats are simply monitored. This gives security pros the chance to investigate and adjust their email authentication rules as they see fit. That helps in “defining a security incident-response plan based on the data that is being provided,” he says.
The vast majority of spear phishing attempts (490,557 of just over 500,000 analyzed) change the display name to someone the recipient knows, but leave behind other clues (such as domain names that don’t match) that perhaps the malicious emails are phony, according to the study.
About 45,000 of the attempts used direct spoofs by altering the From, Return-Path and other fields to make it seem as if the message was sent from within the recipient’s domain – in other words, it looks like it came from a fellow employee.
A small group, 2,334, used domain names that spoofed or closely resembled the organization’s legitimate one, again to make the message appear to come from an insider.
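The display-name trick described above can be caught by comparing the friendly name on an email against the address it actually came from. The following is a minimal sketch of that idea, not GreatHorn’s implementation; the `KNOWN_CONTACTS` mapping and the lookalike address are hypothetical examples.

```python
from email.utils import parseaddr

# Hypothetical contact list: display names the recipient knows,
# mapped to the address that contact normally sends from.
KNOWN_CONTACTS = {"Pat Smith": "pat.smith@example.com"}

def display_name_mismatch(from_header: str) -> bool:
    """Flag an email whose display name matches a known contact
    but whose sending domain does not match that contact's domain."""
    name, addr = parseaddr(from_header)
    expected = KNOWN_CONTACTS.get(name)
    if expected is None:
        return False  # display name doesn't impersonate a known contact
    # Compare only the domain portion of the two addresses.
    return addr.split("@")[-1].lower() != expected.split("@")[-1].lower()

print(display_name_mismatch("Pat Smith <pat.smith@examp1e-mail.net>"))  # → True
print(display_name_mismatch("Pat Smith <pat.smith@example.com>"))       # → False
```

A real filter would also normalize Unicode lookalike characters and check the Reply-To header, but the domain comparison alone catches the pattern the study describes: a familiar name over an unfamiliar domain.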
Taking the standards-based approach
There are email standards that try to block phishing attempts, but they are hard to configure, O’Brien says. The standards are Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting and Conformance (DMARC), which resolves whether SPF and DKIM results agree about which mail server was used.
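The core of DMARC is an alignment check: the message passes if either the SPF-authenticated domain or the DKIM signing domain lines up with the domain in the visible From: header. A simplified sketch of relaxed alignment, per RFC 7489, might look like this (the domain names are illustrative):

```python
from typing import Optional

def dmarc_passes(from_domain: str,
                 spf_domain: Optional[str],
                 dkim_domain: Optional[str]) -> bool:
    """Sketch of a relaxed-alignment DMARC check: pass if either the
    SPF-authenticated domain or the DKIM signing domain aligns with
    the domain shown in the From: header."""
    def aligned(auth_domain: Optional[str]) -> bool:
        if auth_domain is None:
            return False  # that mechanism didn't authenticate a domain
        a, f = auth_domain.lower(), from_domain.lower()
        # Relaxed alignment: exact match, or one is a subdomain of the other.
        return a == f or a.endswith("." + f) or f.endswith("." + a)
    return aligned(spf_domain) or aligned(dkim_domain)

print(dmarc_passes("example.com", "mail.example.com", None))  # → True
print(dmarc_passes("example.com", "attacker.net", None))      # → False
```

This is why the direct spoofs described earlier fail under a properly configured DMARC policy: a message claiming to be from inside the recipient’s domain can’t produce an aligned SPF or DKIM result from an outside server.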