AI email security: Understanding the human behind the keyboard
27 October 2020

At the heart of any email attack is the goal of moving the recipient to engage: whether that’s clicking a link, filling in a form, or opening an attachment. And with over nine in ten cyber-attacks starting with an email, this attack vector continues to prove successful, despite organizations’ best efforts to safeguard their workforce by deploying email gateways and training employees to spot phishing attempts.

Email attackers have seen such success because they understand their victims. They know that, ultimately, human beings are creatures of habit, prone to error, and susceptible to their emotions. Years of experience have allowed attackers to fine-tune their emails, making them more plausible and more provocative. Automated tools are now increasing the speed and scale at which criminals can buy new domains and send emails en masse. This makes it even easier to ‘A/B test’ attack methods: abandoning those that don’t see high success rates and capitalizing on those that do.

We can classify phishing attempts into five broad categories, each aiming to trigger a different emotional reaction and elicit a response.

  • Fear: “We have detected a virus on your device, log in to your McAfee account.”
  • Curiosity: “You have 3 new voicemails, click here.”
  • Generosity: “COVID-19 has greatly impacted homelessness in your area. Donate now.”
  • Greed: “Only 23 iPhones left to give away, act now!”
  • Concern: “Coronavirus outbreak in your area: Find out more.”

It’s worth noting that today’s increasingly dynamic workforces, isolated in their homes and hungry for new information, are more susceptible to these techniques.

Turning to tech

As email attacks continue to trick employees and find success, many organizations have realized that the built-in security tools that come with their email provider aren’t enough to defend against today’s attacks. Additional email gateways are successful in catching spam and other low-hanging fruit, but fail to stop advanced attacks – particularly those leveraging novel malware, new domains, or advanced techniques. These advanced attacks are also the most damaging to businesses.

This failure stems from an inherent weakness in the legacy approach taken by traditional security tools: they compare inbound mail against lists of ‘known bad’ IPs, domains, and file hashes, treating senders and recipients simply as data points and ignoring the nuances of the human beings behind the keyboards.
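
To see why this falls short, consider a minimal sketch of the kind of check a legacy gateway performs. The list names and fields below are purely hypothetical, not any vendor’s real code, but the logic is representative: anything not already catalogued as bad is allowed by default.

    # Hypothetical sketch of a legacy 'known bad' lookup (illustrative only).
    KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.42"}
    KNOWN_BAD_DOMAINS = {"paypa1-secure.example", "0ffice365-login.example"}
    KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # previously catalogued malware

    def legacy_gateway_verdict(sender_ip: str, sender_domain: str, attachment_hashes: list[str]) -> str:
        """Block only when an attribute matches a known-bad list; trust everything else."""
        if sender_ip in KNOWN_BAD_IPS:
            return "block"
        if sender_domain in KNOWN_BAD_DOMAINS:
            return "block"
        if any(h in KNOWN_BAD_HASHES for h in attachment_hashes):
            return "block"
        return "allow"  # a brand-new domain or never-before-seen payload passes by default

A freshly registered domain, a clean sender IP, and a never-before-seen payload match none of these lists, so the email sails through, however suspicious it might look to someone who knows the people involved.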

Looking at these metrics in isolation fails to take into account the full context that can only be gained by understanding the people behind email interactions: where they usually log in from, who they communicate with, how they write, and what types of attachments they send and receive. It is this rich, personal context that exposes seemingly benign emails as unmistakably malicious, even when other data fails to reveal the danger.

Misunderstanding the human

Frustrated by the ineffectiveness of traditional tools, many organizations believe the solution is comprehensive employee training that minimizes the chances of staff engaging with malicious emails. In effect, companies attempt to train their employees to spot what their technology fails to detect.

Treating humans as the last line of defense is dangerous, and this approach overlooks the fact that today’s sophisticated fakes can appear indistinguishable from legitimate emails. It’s only when you really break an email down, beyond the text, beyond the personal name, beyond the domain and email address (in the case of compromised trusted senders), that you can distinguish real from fake.

The large data breaches of recent years have given attackers greater access than ever to corporate email accounts and stolen credentials, and supply chain attacks are becoming increasingly common as a result. When attackers take over a trusted account or an existing email thread, how can an employee be expected to notice a subtle change in wording or an unfamiliar type of attachment? However rigorous the internal training program, and regardless of how vigilant employees are, we are now at the point where humans cannot spot these very subtle indicators. And one click is all it takes.

Understanding the human

Email security has, for a long time, remained an unsolved piece of the complex cyber security puzzle. The failure of both traditional tools and employee training has prompted organizations to take a radically different approach. Thousands of businesses across the world, in both the public and private sectors, now use artificial intelligence that understands the human behind the keyboard and forms a nuanced, continually evolving understanding of email interactions across the business.

By learning what a human does, who they interact with, how they write, and the substance of a typical conversation between any two or more people, AI begins to understand the habits of employees, and over time it builds a comprehensive picture of their normal patterns of behavior. Most importantly, AI is self-learning, continuously revising its understanding of ‘normal’ so that when employees’ habits change, so does the AI’s understanding.

This enables the technology to detect behavioral anomalies that fall outside of an employee’s ‘pattern of life’, or the pattern of life for the organization as a whole.
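
As a purely illustrative sketch of this idea, and not a description of any vendor’s actual implementation, a ‘pattern of life’ could be modelled as a per-sender profile of previously observed behavior, with each new email scored by how far it falls outside that profile. The field names, features, and scoring below are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class SenderProfile:
        """Hypothetical 'pattern of life' learned for one correspondent."""
        usual_send_hours: set[int] = field(default_factory=set)       # hours of day seen before
        known_recipients: set[str] = field(default_factory=set)       # people they normally write to
        seen_attachment_types: set[str] = field(default_factory=set)  # e.g. {"pdf", "xlsx"}

        def update(self, hour: int, recipients: set[str], attachment_types: set[str]) -> None:
            """Self-learning step: fold each observed email back into the baseline."""
            self.usual_send_hours.add(hour)
            self.known_recipients |= recipients
            self.seen_attachment_types |= attachment_types

        def anomaly_score(self, hour: int, recipients: set[str], attachment_types: set[str]) -> float:
            """0.0 means entirely familiar; 1.0 means entirely outside the pattern of life."""
            checks = [
                hour in self.usual_send_hours,
                recipients <= self.known_recipients,            # subset: all recipients seen before
                attachment_types <= self.seen_attachment_types,
            ]
            return 1.0 - sum(checks) / len(checks)

Under this toy model, an email that arrives at an unusual hour, addresses people the sender has never written to, and carries an unfamiliar attachment type scores highly and can be held back, even though nothing about it appears on any ‘known bad’ list.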

This fundamentally new approach to email security enables the system to recognize the subtle indicators of a threat and make accurate decisions to stop or allow emails to pass through, even if a threat has never been seen before.

Sitting behind email gateways, this self-learning technology achieves extremely high catch rates. It has caught countless malicious emails that other tools missed, from impersonations of senior financial personnel to ‘fearware’ that played on the workforce’s uncertainties during the pandemic.

Attackers are continuing to innovate, and automation has led to a new wave of email threats. 88% of security leaders now believe that cyber-attacks powered by offensive AI are inevitable. The email threat landscape is changing rapidly, and we can expect hoax emails to become both more frequent and more convincing. Now is a crucial moment for organizations to prepare for this eventuality by adopting AI in their email defenses.

Original article from: Darktrace
