The insider threat: Making user errors a thing of the past
It’s a feeling that most of us have experienced at some point: the heart-stopping, head-in-hands moment when you realise that you have copied the wrong person into a sensitive email or attached the wrong file and shared confidential data with unauthorised recipients. Once that data is out there, there is often little you can do except politely request that the recipient delete the email and hope that they comply. In today’s world of rigorous data protection legislation, this is simply not acceptable.
Sometimes the fallout from an email fail is limited to the sender’s embarrassment – I’m sure everyone has a cautionary tale about an email error they once made – but a data breach is still a breach, however accidental. If a customer’s personally identifiable information (PII) has been exposed, the consequences can be far-reaching and highly damaging to the organisation: large fines, reputational damage and a loss of customer trust that can seriously impact the bottom line.
In fact, although news headlines frequently focus on malicious attempts by cybercriminals to penetrate networks and steal data, the latest Ponemon Institute Data Breach Report shows that one of the biggest risks to an organisation’s security is its own employees making mistakes: over a quarter (27%) of breaches were caused by human error.
Tackling the insider threat
Employees have trusted status with access to confidential information, and they are equipped with all the channels they need to share that information: email acts as a conduit for the vast quantity of unstructured data that flows through every business. With so much data in the hands of so many users, all of it shareable at the click of a button, perhaps it isn’t surprising that 86% of breaches in 2017 were inadvertent and unintentional – the result of user error. Left unmanaged, the insider threat represents a major and unacceptable risk to organisations, yet businesses typically focus the lion’s share of their security efforts on combating external data threats.
So, in a world where sharing data is central to how we work, what are the common user ‘fails’ and how can we manage this internal threat to reduce the risk of data breaches?
The work environment is continuously changing. Previously, data security might have been viewed as a back-office, operational activity built around firewalls, anti-virus, encryption and endpoint security. Now, the advent of cloud computing, the increase in cyber-attacks targeting individual employees and the sheer amount of unstructured data in organisations has created major vulnerabilities that cannot be prevented without engaging directly with users. In fact, the only truly constant element in the equation is people and I firmly believe that the most successful long-term approach is to build a user-centric security solution around their behaviour and needs. That’s the core principle of our ethos here at Egress.
Building a user-centric solution doesn’t mean that we reject technology – quite the opposite – it means that we use technology, in the shape of AI and machine learning, to create a safety net that analyses historical user behaviour to identify when anomalies occur so that the user avoids inadvertent mistakes and makes the right security choices around data sharing.
Fail #1 - Autocomplete anomalies
Take, for example, the common email fail that occurs when the user types the first name of a recipient into their email client and the autocomplete feature fills in the address of someone else with the same first name. Suddenly, Rachel, their friend from yoga class, is about to receive confidential information intended for Rachel, Chair of the Board. By analysing the user’s past interactions with email recipients – both individuals and commonly used groups – machine learning spots that the information included in the email is not typically shared with that recipient, or that this person has never previously been included in emails to that group of people. An alert then highlights the possible error to the user before they hit the ‘send’ button.
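To make the idea concrete, here is a minimal, purely hypothetical sketch in Python – not the actual Egress implementation, and with invented class and method names – that flags any recipient who has never previously appeared alongside the rest of the recipient list:

```python
# Hypothetical sketch only: flag recipients who have never been emailed
# together with the other people on the current recipient list.
from collections import defaultdict

class RecipientAnomalyChecker:
    def __init__(self):
        # Maps each recipient to the set of co-recipients seen with them.
        self.co_recipients = defaultdict(set)

    def record_sent_email(self, recipients):
        """Learn from an email the user has already sent."""
        for r in recipients:
            self.co_recipients[r].update(set(recipients) - {r})

    def flag_anomalies(self, recipients):
        """Return recipients with no shared history with the others."""
        flagged = []
        for r in recipients:
            others = set(recipients) - {r}
            if others and not (others & self.co_recipients[r]):
                flagged.append(r)
        return flagged
```

A real system would weigh far richer signals (content sensitivity, frequency, recency), but even this toy version would flag the yoga-class Rachel the first time she appears alongside the board’s usual distribution list.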
Fail #2 - Mis-typed addresses
When under pressure it can be all too easy to mis-type the name or the domain of an email address. Most of the time this will simply result in a bounce-back, but sophisticated phishing tactics now see perpetrators setting up alias accounts using ‘mis-typed’ addresses and using them to gain footholds in victims’ networks when unsuspecting employees continue the conversation with someone they think is a trusted co-worker. A machine-learning-based system spots the inconsistent address and alerts the user before they reply, encouraging them to double-check the authenticity of the sender.
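One simple way to illustrate look-alike detection – again a hypothetical sketch, not the product’s actual logic – is to compare an incoming sender address against the user’s known contacts using a string-similarity ratio, and warn on a near-miss:

```python
# Hypothetical sketch only: warn when a sender address closely resembles,
# but does not exactly match, a known trusted contact.
import difflib

def looks_like_spoof(sender, known_contacts, threshold=0.9):
    """Return the contact the sender may be impersonating, or None."""
    sender = sender.lower()
    if sender in known_contacts:
        return None  # exact match: a genuinely known address
    for contact in known_contacts:
        similarity = difflib.SequenceMatcher(None, sender, contact).ratio()
        if similarity >= threshold:
            return contact  # near-miss: prompt the user to double-check
    return None
```

For example, `alice@examp1e.com` (digit one for the letter ‘l’) scores above 0.9 against `alice@example.com` and would trigger a warning, while an unrelated address would not. The 0.9 threshold is an arbitrary illustrative choice.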
Fail #3 - Email copy control
There have been several recent examples where the Information Commissioner’s Office (ICO) has levied heavy fines on organisations whose users have accidentally exposed protected data by placing recipients in the ‘To’ or ‘Cc’ field instead of ‘Bcc’. In July, the Independent Inquiry into Child Sexual Abuse was fined £200,000 after an email was sent to 90 inquiry participants using the ‘To’ field, thereby revealing their identities. This was a very clear case of human error: the sender had originally used the ‘Bcc’ field correctly, but upon noticing an error in the content of the first email, issued an update using the ‘To’ field. A simple warning flag prompting the sender to check whether they should be using the ‘Bcc’ field for this email would have prevented a serious error that led to considerable distress for those affected.
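This kind of safeguard can be as simple as a pre-send check. The sketch below is a hypothetical illustration (the function name and the threshold of five visible recipients are invented for this example), not how any particular product implements it:

```python
# Hypothetical sketch only: warn before sending when many recipients are
# mutually visible in the 'To' or 'Cc' fields and 'Bcc' may be intended.
def bcc_warning(to, cc, visible_limit=5):
    """Return a warning string if recipients should likely be Bcc'd."""
    visible = list(to) + list(cc)
    if len(visible) > visible_limit:
        return (f"{len(visible)} recipients can see each other's "
                "addresses. Did you mean to use Bcc?")
    return None  # small recipient list: no prompt needed
```

In the ICO case above, a check like this would have prompted the sender the moment 90 addresses appeared in the ‘To’ field.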
Mitigating human error through user-centric technology
Humans are unpredictable in nature. Most people don’t intentionally make mistakes, but when we’re under pressure, tired or simply distracted, errors start to occur. When creating a solution to support the user, it’s essential to respect the fact that, most of the time, people get it right and only intervene when a possible problem is detected. If the security solution is cumbersome and time-consuming, the very humans it’s designed to assist will try to find a way to avoid using it! It needs to work in tandem with the user, putting them at the heart of the process to ensure that they see its value as a safety net that prevents unintentional errors – effectively making them smarter and more security-conscious employees.
The fantastic thing about machine learning is that it gets better over time. The more email user data the system analyses, the more intelligent and unobtrusive its actions become, creating a seamless experience that effectively addresses a huge data security risk and protects the company’s reputation.
Human error won’t disappear anytime soon, but if we can put users at the centre of a solution that works with their behaviours and learns over time, we can start to make those cautionary email fail tales a thing of the past.
Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.