How AI is stopping criminal hacking in real time

John Brandon

“An AI has supervised learning capabilities using neural networks for entity and pattern recognition for intrusion detection systems and event forensics applications,” says Testoni. “They can classify entities and events to reduce mean time to identification of problems, and analyze the behavior behind the attacks. For example, what does the attacker want, how will it affect my organization, what aspects of my business are most at risk, and what is the impact of the attack itself?”
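
To make the idea concrete, the kind of supervised event classification Testoni describes can be sketched in a few lines of Python. The feature names, the synthetic data, and the use of scikit-learn’s MLPClassifier below are illustrative assumptions, not a description of SAP NS2’s actual tooling.

# Minimal sketch: supervised classification of network events for an IDS.
# The features, labels, and synthetic data are hypothetical; a real system
# would train on labeled flow or log data from past incidents.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-event features: [bytes_sent, failed_logins, distinct_ports, off_hours]
benign = rng.normal(loc=[2_000, 0.2, 3, 0.1], scale=[800, 0.5, 2, 0.3], size=(500, 4))
malicious = rng.normal(loc=[9_000, 4.0, 25, 0.8], scale=[3_000, 2.0, 10, 0.3], size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = suspicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small feed-forward neural network classifies each event.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))

# Flag new events for analyst triage, shrinking mean time to identification.
new_events = np.array([[12_000, 6, 40, 1.0], [1_500, 0, 2, 0.0]])
print("predicted labels:", clf.predict(new_events))

The value in production comes from training on real labeled incident data rather than synthetic samples, with flagged events routed into an analyst’s triage queue.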

Another area of focus: having an AI inspect all network traffic. Today, it can be difficult to block a harmful email or attachment because no rule for that data exists yet, or the harmful agent has not yet been detected. Forensic security tends to examine the damage after it takes place. However, as Nathan Wenzler, the chief security strategist at AsTech Consulting, explained, an AI can ingest the data, look for patterns, and block network traffic in real time.
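
Again, strictly as a rough illustration: the ingest-score-block loop Wenzler describes might look something like the following, where the IsolationForest anomaly model, the flow fields, and the block_ip helper are hypothetical stand-ins rather than any vendor’s real pipeline.

# Rough sketch of an ingest -> score -> block loop for network traffic.
# IsolationForest, the flow fields, and block_ip() are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train an anomaly model on features extracted from known-good traffic
# (hypothetical features: bytes, packets, duration, distinct destination ports).
baseline = np.random.default_rng(1).normal(
    loc=[4_000, 30, 2.0, 3], scale=[1_500, 10, 1.0, 2], size=(2_000, 4)
)
model = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

def block_ip(ip: str) -> None:
    # Placeholder: a real system would push a firewall rule or reset the connection.
    print(f"BLOCK {ip}")

def inspect(flow: dict) -> None:
    """Score one flow record as it arrives; block the source if it looks anomalous."""
    features = [[flow["bytes"], flow["packets"], flow["duration"], flow["dst_ports"]]]
    if model.predict(features)[0] == -1:  # -1 means anomalous in scikit-learn's API
        block_ip(flow["src_ip"])

# Example: one ordinary flow and one that bursts traffic to many ports.
inspect({"src_ip": "10.0.0.5", "bytes": 4_200, "packets": 28, "duration": 1.8, "dst_ports": 2})
inspect({"src_ip": "10.0.0.9", "bytes": 90_000, "packets": 900, "duration": 0.4, "dst_ports": 150})

A rules engine needs a signature for each new threat; an anomaly model instead flags traffic that simply doesn’t match the learned baseline, which is exactly the “no rule exists yet” gap described above.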

Fred Wilmot, the interim CEO/CTO of threat detection company PacketSled, made an interesting point about all of these AI advancements. In the coming months and years, security professionals will rely more on machine learning, and their role may shift toward that of AI engineers who build the learning models. For now, AI is still not mature enough, especially for the fraud detection and mitigation that takes place in the financial sector.

The dark side of using AI to fight hacking

Avetisov did mention one dark side. While security professionals can rely on AI to help block malware attacks and other intrusions, hackers are leaning on AI as well, using machine learning to find weak endpoints in a counter-offensive of their own.

“Hackers are just as sophisticated as the communities that develop capability to defend themselves against hackers,” says SAP NS2’s Testoni. “They are using the same techniques, such as intelligent phishing, analyzing behavior of potential targets to determine what type of attack to use, and ‘smart malware’ that knows when it is being watched so it can hide.”

“We've seen more and more attacks over the years take on morphing characteristics, making them harder to predict and defend against,” says Wenzler from AsTech Consulting. “Now, leveraging more machine learning concepts, hackers can build malware that can learn about a target's network and change up its attack methodology on the fly.”

Neill Feather, the president of website security company SiteLock, did note that the AI programming someone might use for criminal hacking is more complex and carries higher costs. Even so, the incentive will remain as long as unethical AI leads to more breaches.

In the end, the cyber war will continue -- quite possibly between the AI bots.
