
How Can You Protect Yourself From a $25 Million Robbery?

February 29, 2024

A Hong Kong-based multinational company has lost $25 million in a deepfake scam after an employee received a video call from fraudsters impersonating the company’s chief financial officer and other employees.

Despite initial suspicions raised by an email requesting a secret transaction, the victim was persuaded to transfer a staggering HK$200 million ($25 million). This persuasion occurred after the employee became convinced of the authenticity of the participants during the group video call.

Hong Kong police have taken action by making six arrests in connection with a series of deepfake scams emerging in recent months.

“Caution and corroboration are the two ways to avoid being defrauded by deepfakes,” said Paul M. Barrett, deputy director and senior research scholar at the NYU Stern Center for Business and Human Rights.

Though Barrett believes that “humane employers” would display patience and not penalise the employee, “a less humane employer might fire the poor person.” The fate of the employee in this case, or in others like it, remains in the hands of the employer.

The emergence of deepfake technology has sparked both fascination and concern in the cybersecurity realm.

Leveraging machine learning techniques such as generative adversarial networks (GANs), a type of neural network architecture used in artificial intelligence, deepfakes manipulate audio and video content to create convincing but entirely fabricated media.

Why do HR departments need to learn to detect Artificial Intelligence fraud?

One tactic used to combat CFO fraud is dual authorisation: requiring two individuals to independently verify any major transaction. Applying the same control to AI-enabled attacks reduces the risk of fraud, no matter how convincing the material is.
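The dual-authorisation control described above can be sketched in a few lines. This is a minimal illustration, not a production payment system; the `Transaction` class and employee IDs are hypothetical, and the point is simply that funds are released only after two distinct people sign off:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    """A payment request that must be approved by two distinct people."""
    amount: float
    payee: str
    approvals: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        # Approvers are stored in a set, so the same person
        # approving twice counts only once.
        self.approvals.add(employee_id)

    def is_authorised(self) -> bool:
        # Release funds only after two *different* individuals sign off.
        return len(self.approvals) >= 2

tx = Transaction(amount=25_000_000, payee="Unknown Vendor")
tx.approve("alice")
tx.approve("alice")            # duplicate approval is ignored
assert not tx.is_authorised()  # one person alone cannot authorise
tx.approve("bob")              # a second, independent person
assert tx.is_authorised()
```

Even a perfectly convincing deepfake of one executive cannot satisfy this check, because the second approver verifies the request through their own channels.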

“Where financial risk is involved, techniques used to tackle other forms of deception can often be applicable in the deepfake realm. CFO fraud, which involves fake c-level executives authorising fake money transfers, is a good example of this,” explains Chris Boyd, Staff Research Engineer at Tenable.

Scammers have been leveraging generative artificial intelligence and deepfake technologies to create convincing personas for some time.

It was only a matter of time before the use of deepfake technology moved to more targeted attacks like the one experienced by the multinational company in Hong Kong. Online tools and tutorials assist scammers in mapping likenesses onto their webcams, blurring the line between reality and deception.

“Awareness and vigilance are our best defenses against these manipulations. Something out of the ordinary always needs to be questioned to ensure we don’t fall victim to the tangled web of AI-enhanced deception,” explains Boyd.

The recent advancements in generative AI behind deepfakes are convincing, which makes fabricated content hard to detect. And as the technology has been democratised, access is now ubiquitous: platforms guide users through a series of prompts to generate almost whatever their heart desires.

Experts said it is at this stage of discovering and adopting these systems that corrective action must sometimes be taken to ensure that what happens with deepfakes and AI is kept above board.

“Detecting fraudulent content is then placed squarely back on the consumer to evaluate whether the overall message of the generated artifact is true and not misleading in any way,” said Aaron Bugal, Field CTO APJ, Sophos.

“This unregulated usage of a deepfake platform could see sensitive information submitted to it which breaks any privacy and confidentiality of that information being solely yours anymore – it’s in the public domain – at minimum it could be used to mimic you and your business,” Bugal added. 

The Hong Kong case, in particular, underscores the growing threat posed by synthetic media manipulation. The incident serves as a wake-up call, experts say, for companies worldwide to bolster their defenses against such sophisticated scams.

“To protect themselves from falling victim to similar schemes, companies must remain vigilant and adopt proactive measures,” explains Ezzeldin Hussein, Regional Senior Director, Sales Engineering – META at SentinelOne. Hussein explains that one crucial aspect is “awareness and detection.”

How can employees detect deepfakes?

Understanding the risk of deepfakes at a leadership level can bolster efforts to educate employees on how the technology could be used in advanced phishing against them. HR departments must therefore provide guidance on how to spot a deepfake by looking for not-quite-right distortions in images and sound.

Hussein explains that visual cues such as discrepancies in lip-syncing, unnatural facial expressions, or an inability to execute complex movements in real time can all raise suspicion about an unusual video.

Other indispensable strategies include implementing multi-factor authentication and using secure channels for sensitive communications. For instance, secondary confirmation via email, SMS, or authenticator apps can add layers of verification.
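The authenticator apps mentioned above typically implement the standard time-based one-time password (TOTP) algorithm from RFC 6238. As a rough sketch of how that second factor works under the hood, the following uses only the Python standard library; it is an illustration of the algorithm, not a drop-in security library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset
    # given by the low nibble of the last byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890",
# time T=59 seconds, 8 digits -> "94287082".
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8) == "94287082"
```

Because the code changes every 30 seconds and is derived from a shared secret, a scammer on a deepfaked video call cannot produce a valid one, which is what makes this a useful out-of-band check.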

“With the security awareness process also reinforcing that an individual’s digital footprint on social media could be used as training data for deepfake productions,” explained Bugal. That is why, he explains, it is vital to understand where an individual uploads their images.

Some companies use encrypted messaging platforms such as Signal for critical discussions and financial transactions to add an extra layer of security. Furthermore, experts advise that incorporating the latest security features into video conferencing software is essential.

These features may include algorithms designed to detect anomalies characteristic of deepfake manipulation. Additionally, adopting practices like requesting participants to perform specific actions, such as writing a word on a piece of paper or executing unique gestures, can help confirm authenticity during video calls.

“Ultimately, combating deepfake scams requires a combination of technological solutions, user awareness, and robust verification protocols,” added Hussein.

By staying informed, remaining vigilant, and implementing preventive measures, companies can significantly reduce their vulnerability to such malicious activities, safeguarding their assets and reputation in an increasingly digital world.

“If you’re unsure about the genuine nature of a voice call, having pre-established passphrases can help because a faker almost certainly will not know the code… asking employees who work in finance or HR and administration generally to reduce their public facing footprint can work wonders,” concluded Boyd.
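A pre-established passphrase like the one Boyd describes should never be stored or compared in plain text. A minimal sketch, assuming a hypothetical code word agreed in person, is to keep only a salted hash and compare it in constant time:

```python
import hashlib
import hmac
import os

def hash_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Store only a salted, slow hash, never the passphrase itself.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

# "blue heron at dawn" is a made-up example code word agreed offline.
salt = os.urandom(16)
stored = hash_passphrase("blue heron at dawn", salt)

def verify(candidate: str) -> bool:
    # compare_digest avoids timing side channels when checking the code word.
    return hmac.compare_digest(stored, hash_passphrase(candidate, salt))

assert verify("blue heron at dawn")
assert not verify("whatever a deepfake caller guesses")
```

The key property is exactly the one Boyd points to: the passphrase lives outside the call, so even a flawless voice or video clone cannot produce it.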

Author: The HR Observer