Can AI be Used to Mitigate Human Error?

Month after month, the cybersecurity industry struggles with new hacking techniques, rapidly evolving threats, and the challenge of communicating the severity of these issues to organizations. One problem that can’t necessarily be solved by cybersecurity professionals alone is the broad category of ‘human error’. In his IT Toolbox article, Josh Horwitz (COO of Enzoic) discusses whether Artificial Intelligence could be a source of solutions for the human errors we make.

What is meant by human error? What mistakes are users making, and what is the impact on the greater security of their networks or companies?

Gartner, Inc., estimates that “up to 95% of cloud breaches occur due to human errors,” and it expects the trend to continue. Many of these errors are configuration mistakes, whether introduced through contracted installation work or through internal server setup errors.

Similarly, a Tessian report indicated that an estimated 43% of employees in the US and UK have made mistakes resulting in cybersecurity repercussions. When asked what types of mistakes were made, one-quarter of respondents admitted to clicking on links in a phishing email at work. The top reasons cited for clicking through were ‘distraction’ and that the ‘email looked legitimate’.

A Proofpoint survey also reported that 55% of information and security officers in the UK think that lack of cybersecurity awareness is the biggest risk for their business, regardless of what cybersecurity solutions are in place.

It’s clear that even if company security is tight, human error creates the opportunity for many vulnerabilities. Can AI be used to mitigate this problem?

According to KPMG research, 80% of executives at large organizations said AI technology was an important part of their coronavirus response. In 2019, Gartner found that interest in AI technologies had grown 270% over the previous four years, and the coronavirus pandemic only served to accelerate adoption.

AI is being used in varying ways across business, government, and tech sectors. When it comes to cybersecurity, there are several uses.

Phishing Trips

It’s a common misstep to fall victim to a phishing scheme, and our confidence in our ability to spot a scam is often misplaced. Most people claim to be able to distinguish between a regular email and a targeted phishing message (almost 80% of respondents in one study). Yet 50% of those same respondents willingly admitted to clicking on a link from an unknown sender while at work.

Horwitz points out that there are “typically some red flags that signal nefarious activity.” One example is something a distracted employee wouldn’t think to look for: an unrecognized domain name in the URL, or extra characters and letters that don’t belong in a normal email address. A mature AI tool could automatically identify these, along with other common phishing markers, and, as Horwitz suggests, “either alert the user to proceed with caution or flag the communication directly to IT to determine next steps.”
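As a rough illustration of the kind of red flags such a tool might score, here is a minimal sketch in Python. The trusted-domain list and the individual heuristics are assumptions for the example; a real classifier would combine many more signals and learn weights from data rather than use hard-coded rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization trusts.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

def phishing_red_flags(url: str) -> list[str]:
    """Return simple red flags found in a URL.

    Illustrative heuristics only -- production systems use far
    richer features (sender history, content, reputation feeds).
    """
    flags = []
    host = urlparse(url).hostname or ""
    if host not in TRUSTED_DOMAINS:
        flags.append("unrecognized domain")
        # Look-alike tricks, e.g. 'examp1e.com' swapping letters for digits.
        if re.search(r"\d", host):
            flags.append("digits in hostname")
    if host.count("-") > 2:
        flags.append("many hyphens in hostname")
    if len(host.split(".")) > 3:
        flags.append("deeply nested subdomains")
    return flags
```

A tool built on signals like these could suppress the warning when the list comes back empty and escalate to IT when several flags fire at once.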

Skirting the Security

Another common source of human error is users seeking workarounds of company policies that exist to strengthen internal security. Employees can feel hampered by corporate policies and bypass steps that take time, usually out of a desire for productivity or convenience. As understandable as that is, unauthorized use of devices and applications can quickly result in cyberattacks or credential theft.

AI might be an applicable solution here. Imagine a bot that recognizes when an unauthorized application is introduced and pops up to remind employees of the security concerns. Often, even small reminders and deterrents have beneficial effects.

Wi-Fright

One human error issue that has increased in frequency since the pandemic is unrestrained browsing on public networks. Accessing corporate resources over public Wi-Fi, like the improper storage of documents and files, can become a problematic practice.

The Password Dilemma 

According to LogMeIn, an astonishing 91% of people acknowledge the security risk of password reuse, but two-thirds of users admit to doing it anyway. As these behaviors are well known, all a hacker needs to do is find a set of credentials that have been exposed in a prior breach and use them to infiltrate corporate accounts. Creating weak or easily guessable passwords is another common user mistake. It’s time we graduate from ‘administrator123’ and similar patterns.

While it may be too soon to say if AI will evolve to address password security or offer a viable alternative, Horwitz points out that there is at least a good stopgap for this wide-scale problem. There are now services that make screening credentials efficient and unobtrusive, and that ensure immediate action can be taken if a password is exposed in a breach.
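The core of such screening is a hash lookup against a corpus of known-exposed passwords. The tiny breach set below is a stand-in invented for the example; real services query billions of entries, often through a k-anonymity API so the plaintext password never leaves the client.

```python
import hashlib

# Stand-in for a breach corpus: SHA-1 hashes of known-exposed passwords.
BREACHED_SHA1 = {
    hashlib.sha1(pw.encode()).hexdigest().upper()
    for pw in ("administrator123", "password", "qwerty")
}

def is_compromised(password: str) -> bool:
    """Check a candidate password against the local breach set.

    Only the hash is compared, so the corpus never needs to hold
    plaintext passwords.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest in BREACHED_SHA1
```

Wired into account creation or password-change flows, a check like this can reject ‘administrator123’ the moment a user types it, rather than after a breach.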

Organizations can’t afford to ignore the impact of human error. There is a demand and an opportunity for AI to combat the issue and for organizations to embrace it. Cybersecurity solutions of many types are poised to remain a top area of investment, both in terms of time and resources.