
Pandora's inbox: Are employees really to blame for cyber and data breaches?

Cyber security is a regulatory imperative, but it has been argued that breaches are inevitable whatever a firm's defences because employees keep letting threats in and data out. They click phishing links with the eagerness of a Russian roulette addict, share restricted access codes and post information about colleagues on social media, while their fat-finger mistakes send sensitive material who knows where.

Superficially, there is substantial evidence to support this view. The 2022 Data Breach Investigations Report (DBIR) from the telecoms company Verizon found that 82% of data breaches involved a human element, but that figure covered a wide range of incident types. Some, such as emailing sensitive information to the wrong recipient (which the DBIR found was three times more prevalent in financial services than in the public sector or healthcare), were self-evident human errors, but with others it was less clear-cut.

"There is commonly a human element to the incidents we respond to, but a better description of most is hybrid because there has also been a systems failure," said Del Heppenstall, a cyber-security partner at KPMG. "For example, a simple phishing attack relies on social engineering and human susceptibility to get someone in an organisation to supply sensitive information, but a flawed cyber-security system allowed the threat actor's message through to that person in the first place."

Data control

Similarly, where a threat actor uses genuine credentials stolen by hacking to trick an employee into uploading malware or making a fraudulent payment in a business email compromise (BEC) attack, it is hard to argue that the human clicking the link is solely responsible. There are also data control aspects to consider when protecting against BEC attacks if a firm outsources its email system to the cloud, as many do.

"Threat actors can often get into an email system on the cloud because there's no multi-factor authentication to verify who's entering the system," Heppenstall said.

"Attackers can use an old email or try different credentials to get inside. If an attack succeeds, it's not so much due to one person's human failings but a failure to have a sufficiently secure system or keep access permissions up-to-date by removing old ones."

Firms are so dependent on a handful of cloud service providers (CSPs) for services affecting their operational resilience that lawmakers are bringing CSPs within the financial regulatory perimeter. In the EU, this will happen through the Digital Operational Resilience Act (DORA). The UK's Financial Services and Markets Bill proposes broadly similar measures and last month the Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) issued joint discussion paper DP22/3 on how their oversight of CSPs might work.

Tougher UK operational resilience requirements under PRA PS6/21 and FCA PS21/3, which added chapter SYSC 15A to the FCA handbook, came into force this March. In PS21/3 the FCA said it considered cyber resilience "complementary to operational resilience". Other regulations relevant to cyber security include Principles 3 and 11, plus SYSC 3.1.1 and 3.2.6. Firms also have "integrity and confidentiality" security obligations under the EU and UK versions of the General Data Protection Regulation. Nevertheless, cyber-incident reports to the FCA rose by 52% last year.

Plugging gaps

"As cyber security gets more mature and starts plugging gaps, criminals are increasingly seeing phishing and other social engineering attacks that leverage employee behaviour as the easiest way to breach an organisation's defences," said Felix Newman, assistant director of digital forensics at Deloitte.

The sheer volume of emails people receive at work each day and firms' growing use of outsourcing have helped hackers and criminals. Legitimate work-related emails from outsourcers often arrive from the outsourcer's own domain, so employees become attuned to seeing and trusting messages that do not bear their employer's email address. That means malicious content is less likely to raise suspicions, but technological advances are helping to combat this and other cyber threats.

"Some more mature organisations are using machine learning to analyse what normal looks like so they can quickly identify the abnormal, which could be a threat," Heppenstall said. "For instance, behavioural analytics can identify patterns in someone's standard working day: when they have lunch, how they type on a keyboard, even how they hold a phone."

The FCA has already spotted the potential of behavioural analytics in another field. Its PS21/19 added a consumer's behavioural characteristics, as identified by analytics, to the list of things that can show "inherence" for strong customer authentication under electronic payments rules.

Appropriate level

Another protective measure is more procedural than technical. Firms should keep access permissions under review to ensure that they are up-to-date and that each employee's permissions are set at an appropriate level. Access rights can drift upwards, for example to provide holiday cover or for work on a project, and end up higher than necessary. At the most privileged levels, where cyber defences and programmes for recovery from ransomware and other attacks are most stringent, that drift creates unnecessary risk and expense.
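A hedged sketch of such a review is shown below: each employee's current entitlements are compared with a baseline for their role, and anything beyond the baseline is flagged for recertification or removal. The roles, entitlements and data structures are hypothetical; a real review would draw on an identity and access management system.

```python
# Hypothetical role baselines: the entitlements each role should normally have
role_baseline = {
    "analyst": {"crm_read", "reporting"},
    "team_lead": {"crm_read", "crm_write", "reporting", "approvals"},
}

# Hypothetical snapshot of what each employee can actually access today
current_access = {
    "c.patel": {"role": "analyst", "entitlements": {"crm_read", "reporting"}},
    # Picked up "approvals" to cover a colleague's holiday and never lost it
    "d.okoro": {"role": "analyst", "entitlements": {"crm_read", "reporting", "approvals"}},
}

for user, record in current_access.items():
    drift = record["entitlements"] - role_baseline[record["role"]]
    if drift:
        print(f"{user} ({record['role']}): excess entitlements to recertify or remove: {sorted(drift)}")
```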

"The fewer people have access, the shorter the tail of risk," Heppenstall said. "Some used to argue limiting access was inconvenient but now there's recognition that cyber security is the cost of doing business in a digital economy."

As for employees, regular training and education about cyber risks is important, although both Heppenstall and Newman said keeping this fresh can be a challenge. Some firms use test phishing attacks, linking the results — who clicked the message, who reported it — to annual appraisals. New work patterns are creating new problems, however.

"The fracturing of the day's work-private life divide and remote working play a very large part in current risks," Newman said. "People slot personal tasks between work ones and use work laptops for them. They're working longer into the evening so are more tired; they may log in again after a glass of wine. People are trained to apply safeguards in the office but at home they feel safe and secure, so potential risks slip their mind."

Not all risks emanating from staff are due to ignorance, mistake or negligence, however. The 2022 Fraudscape report from the fraud prevention service Cifas said remote and hybrid working have made it easier for employees to steal data, and that departing staff working out their notice remotely were a particular concern. It also reported a rise in "insider threat as a service", where individuals obtain roles in order to exploit them and their access for criminal ends, and in people advertising insider knowledge for sale on web forums.

"Reducing risk from leavers depends on the circumstances of departure," Heppenstall said. "With a dismissal, immediately removing remote access and USB port access rights is sensible, but with resignations, people often take data before handing in their notice. Where someone's role and access justify it, do a retrospective check to see what they've accessed recently and identify any abnormal behaviour: unusual log-in times, more use of USBs or photocopiers. That gives you a chance to raise the issue while they're still at the firm.

This article was originally published on the Thomson Reuters Regulatory Intelligence website.
