Creating a cyber-aware culture

How can AI play an effective part in cyber-defence strategies and where can it present challenges to the user? New Electronics investigates.

Earlier this month, Microsoft's Exchange email software was hacked, with tens of thousands of organisations in the US and beyond potentially affected.

In fact, shortly after the news broke, the European Banking Authority said that its email servers had been compromised by the attack and that personal data may have been accessed from its servers. As a consequence it was obliged to pull its entire email system offline while it assessed the damage.

Although Microsoft Exchange servers are widely used for email by major businesses and governments, few organisations have so far admitted to being hit by the attack.

According to Microsoft, the cyber-attack appears to have exploited a vulnerability in its Exchange email system that allowed attackers to masquerade as someone with legitimate access. As a result, the hacker(s) were able to take control of the email server remotely and steal data from the network.

Another week, yet another series of attacks.

Singapore Airlines also recently reported a data breach affecting 580,000 frequent-flyer customers, which appears to have stemmed from a cyber-attack on SITA, the air transport communications and IT vendor that serves roughly 90% of the world’s airlines.

“Many organisations don’t see the full picture of what their third-party vendors do with their critical data and systems. For example, if a vendor uses a shared account to access your corporate network, your organisation won’t be able to determine which of their employees has made a given change in the system,” said Florian Thurmann, Technical Director, EMEA, Synopsys Software Integrity Group. “This lack of visibility, control, and security insight leaves a critical blind spot. Every organisation has the responsibility to ensure their software supply chain vendors meet your cybersecurity policy requirements.”

As these examples demonstrate, the cybersecurity landscape continues to evolve at pace, as criminals become ever more sophisticated.

As a consequence, digital security tools need to continuously be improved in order to mitigate the risks as much as possible.

“Over the past twelve months we’ve seen more opportunities for hackers to strike, for example, using email phishing scams such as purporting to be authentic PPE providers, or from HMRC to dupe unsuspecting victims. More recently we have seen how phishers are now using the vaccine rollout to trick people into paying for fake vaccines,” said Oliver Paterson, Product Expert, VIPRE Security Awareness Training and SafeSend.

At the same time Artificial Intelligence (AI) and Machine Learning have been heralded as innovative technologies that could help thwart evolving exploits and are seen as a key part of any cyber security arsenal.

“However, AI is not necessarily the right tool for every job,” warns Paterson. “Humans are still able to perform intricate decision making far better than machines, especially when it comes to determining what data is safe to send outside of the organisation. As such, relying on AI for this decision making can cause issues, or worse, lead to leaked data if the AI is not mature enough to fully grasp what is sensitive and what is not.”

So where can AI play an effective part in a cyber-defence strategy, and where can it present challenges to the user?

Spotting similarities

One of the primary challenges for AI is mitigating the risk of accidental insider breaches: spotting the similarities between documents, or knowing whether it is OK to send a particular document to a specific person. “Company templates such as invoices appear to be very similar each time they are sent, with minor differences but typically, Machine Learning and AI fail to pick up on this,” said Paterson. “The technology will register the document as it usually would, despite there being very few differences in the numbers or words used, and would typically allow the user to send the attachment. Whereas in this example, a human would know which invoice or sales quote should be sent to which customer or prospect.”
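A toy example illustrates the problem Paterson describes. The snippet below is purely illustrative (it is not any vendor's algorithm, and the invoice text is invented): two invoices for different customers share almost all of their wording, so a naive document-similarity measure scores them as near-duplicates and cannot tell which one is safe to send to whom.

```python
# Illustration only: why template documents defeat naive similarity checks.
# Two invoices differ only in customer name, invoice number and amount,
# yet their word-level similarity remains high.

def jaccard_similarity(a: str, b: str) -> float:
    """Similarity of two documents as the overlap of their word sets."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

invoice_for_acme = """
Invoice 1041 Bill to: Acme Ltd
Widget assembly service
Amount due: 4,200 GBP Payment terms: 30 days
"""

invoice_for_globex = """
Invoice 1042 Bill to: Globex Ltd
Widget assembly service
Amount due: 9,800 GBP Payment terms: 30 days
"""

score = jaccard_similarity(invoice_for_acme, invoice_for_globex)
# The score is high even though the difference (who is being billed what)
# is precisely what decides whether the email is safe to send.
print(f"similarity: {score:.2f}")
```

A human reader spots the changed customer name instantly; a similarity score treats it as noise.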

Deploying AI for this purpose in a large corporation would likely only stop a small proportion of emails from being sent. But even when the AI does flag an issue, it alerts the administration team rather than the user.

“This is because if the AI believes that the email shouldn’t be sent, it doesn’t want the user to override it and send the email anyway. This can therefore become an additional burden for the admin team and cause frustration for the user at the same time,” Paterson suggests.

Data storage

AI can also be very data-intensive when used for this defence strategy. According to Paterson, “This is due to the fact that in this setup, every email must be sent to an external system, off-site, to be analysed. Especially for industries that deal with highly sensitive information, the fact that their data is going somewhere else to be scanned will be an obvious concern.

“Moreover, with Machine Learning, the technology has to keep a part of this sensitive information in order to learn rules from it and use it again and again, to make an accurate decision the next time. Given the Machine Learning nature of these types of solutions, they cannot work straight off the shelf, but have a learning phase that lasts a few months, and therefore cannot provide instant security controls.”

Understandably, a lot of companies, especially at enterprise level, are not comfortable with their sensitive data being sent elsewhere. The last thing they want is for it to be stored off-site, even if only for analysis. AI, therefore, can add an unnecessary and unwanted element of risk to sensitive material.

The role of AI in cybersecurity

Whatever the concerns or drawbacks, AI does have a critical role to play in many elements of a business’s cyber-defence strategy. Antivirus technology, for example, operates a strict ‘yes or no’ policy as to whether a file is potentially malicious. It is not subjective: measured against a strict set of parameters, something is either considered a threat or it is not.

“In those cases, the AI can quickly determine whether it’s going to crash the device, lock the machine, take down the network and as such, it is either removed or allowed,” says Paterson. “It is important to note that VIPRE uses AI and ML as key components in its email and endpoint security services, for example as part of its email security attachment sandboxing solution, where an email attachment is opened and tested by AI in an isolated environment, away from a customer’s network.”
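The strict allow-or-block logic behind this kind of sandboxing can be sketched as follows. This is a hypothetical illustration of the principle, not VIPRE's implementation; the behaviour names are invented for the example. The attachment is detonated in isolation, its behaviour is observed, and the verdict is binary.

```python
# Hypothetical sketch of binary sandbox verdict logic (illustration only,
# not any vendor's product): any observed suspicious behaviour blocks the
# file; otherwise it is allowed through to the recipient.

SUSPICIOUS_BEHAVIOURS = {
    "writes_to_system_dir",
    "spawns_shell",
    "contacts_unknown_host",
    "encrypts_user_files",
}

def sandbox_verdict(observed_behaviours: set) -> str:
    """Strict yes/no policy: no subjective judgement, no user override."""
    if observed_behaviours & SUSPICIOUS_BEHAVIOURS:
        return "BLOCK"
    return "ALLOW"

print(sandbox_verdict({"reads_document", "spawns_shell"}))  # BLOCK
print(sandbox_verdict({"reads_document"}))                  # ALLOW
```

Because the decision is objective and rule-bound, this is exactly the sort of task AI and automation handle well, in contrast to the subjective "should this invoice go to this customer?" judgement discussed earlier.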

So while AI might not be an ideal method for preventing accidental data leakage through email, Paterson does suggest that it has an important part to play in specific areas such as virus detection, sandboxing and threat analysis.

Conclusion

Cyber-attacks have wide-ranging implications, whether reputational damage, a compliance breach or the associated financial cost – all of which can be devastating. A cyber-aware culture with continuous training is essential, as is the right technology to combat the threat.

“Providing a technology that alerts users when they are potentially about to make a mistake – for example, by sending an email to the wrong person or sharing sensitive data about the company, its customers or staff – not only minimises errors, it helps to create a better culture,” says Paterson. “Mistakes are easily made in a fast-paced, pressured working environment – especially with the increase in home working not providing the immediate peer review that many are used to.

“But rather than leaving this responsibility solely to AI, this type of technology needs to be combined with trained human insight, so that users are able to make more informed decisions about the nature and legitimacy of their email before acting on it.

“Ultimately,” according to Paterson, “it’s about supporting organisations to mitigate against this high-risk element of business, and reinforcing compliance credentials through a cyber-aware culture.”
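The kind of pre-send prompt Paterson describes, warning a user before an email goes to the wrong person, can be sketched in a few lines. This is a simplified, hypothetical illustration (the function and data are invented, and real products use far richer signals): it simply flags recipients whose domain the sender has never corresponded with, which also catches look-alike domains used in phishing.

```python
# Hypothetical sketch of a pre-send recipient check (illustration only):
# flag any recipient whose domain is not among those the sender has
# previously emailed, then let the *user* make the final call.

def unfamiliar_recipients(recipients, known_domains):
    """Return addresses whose domain the sender has never emailed before."""
    return [
        addr for addr in recipients
        if addr.split("@")[-1].lower() not in known_domains
    ]

known = {"example.com", "supplier.co.uk"}
outgoing = ["alice@example.com", "bob@examp1e.com"]  # note the look-alike domain

flagged = unfamiliar_recipients(outgoing, known)
print(flagged)  # ['bob@examp1e.com']
```

Crucially, in this model the alert goes to the user at the moment of sending, supporting an informed human decision rather than silently handing the judgement to an algorithm or an admin queue.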