In early 2024, Arup Group Limited, a British multinational professional services firm headquartered in London, lost $25 million to a deepfake video call in which fraudsters presented synthetic impersonations of the company’s CFO and other employees. The attackers used deepfake technology to fabricate convincing likenesses and voices of the executives, deceiving the company’s Hong Kong-based finance worker into executing 15 consecutive transactions.
In larger organisations, disinformation security is usually overseen directly by a CISO, but that oversight is in vain if the organisation lacks the technical capability to counter the threats.
In start-ups and fast-growth companies, especially those built around digital platforms, media, cybersecurity or public communications, the entire weight of cybersecurity often falls on the shoulders of the Chief Technology Officer or Head of IT. Preventing AI-generated deepfakes, misinformation attacks on the brand (often mounted by close competitors), supply chain fraud, fabricated invoices, social engineering and every other form of illicit manipulation is the direct responsibility of the technology leader.
CNN Architectures
Real-time Deepfake Detection
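Most real-time detectors run a convolutional classifier over sampled video frames and flag a stream when enough frames score as synthetic. Below is a minimal PyTorch sketch of that idea; the architecture, input size and decision threshold are illustrative assumptions, not a production detector:

    import torch
    import torch.nn as nn

    class FrameClassifier(nn.Module):
        """Tiny CNN that scores a 128x128 RGB face crop as real (0) or fake (1)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))  # raw logit per frame

    def is_stream_fake(model, frames, threshold=0.7):
        """Flag a call if the mean fake probability over sampled frames exceeds the threshold."""
        with torch.no_grad():
            probs = torch.sigmoid(model(frames))           # frames: (N, 3, 128, 128)
        return probs.mean().item() > threshold

Production detectors use much deeper backbones trained on large forgery corpora, but the pipeline shape is the same: crop the face, score each frame, aggregate over time.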
Content provenance takes the opposite approach, and the premise is simple: instead of detecting fake content after it spreads, verify authenticity at the source.
Since blockchain is decentralised and immutable, it enables:
- a permanent, tamper-evident record of when and by whom content was created
- verification of authenticity without relying on a single central authority
- an auditable trail of every subsequent modification
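To make the immutability point concrete, here is a minimal Python sketch of the hash-chaining that underpins any blockchain ledger: each record commits to the hash of its predecessor, so altering one entry invalidates everything after it. This illustrates the principle only; it is not a production ledger.

    import hashlib
    import json

    def make_record(content_id, creator, prev_hash):
        """Create a ledger entry that commits to the previous entry's hash."""
        record = {"content_id": content_id, "creator": creator, "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        return record

    def verify_chain(chain):
        """Recompute every hash; any tampering breaks the chain from that point on."""
        for i, record in enumerate(chain):
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if record["hash"] != expected:
                return False
            if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = [make_record("video-001", "newsroom-a", prev_hash="genesis")]
    chain.append(make_record("video-002", "newsroom-a", prev_hash=chain[-1]["hash"]))
    chain[0]["creator"] = "attacker"   # tamper with history
    print(verify_chain(chain))         # False: the forgery is detectable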
In February 2021, Microsoft, Adobe, BBC, Intel and Truepic introduced C2PA (Coalition for Content Provenance and Authenticity). Its purpose was to address the spread of disinformation and online content fraud by developing technical standards for certifying the source and history of media content.
C2PA essentially creates tamper-proof digital signatures for media files, allowing anyone to verify:
- who created the content, and with what device or software
- when and where it was captured
- what edits were made to it afterwards
For a creator, it is a 3-step process:
1. Create the content in a C2PA-enabled tool or capture device.
2. Sign it with a cryptographic credential that records its origin and edit history.
3. Publish it with the attached manifest so anyone can verify its provenance.
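A real C2PA manifest is considerably richer than this, but the cryptographic core of step 2 can be sketched in a few lines with the Python cryptography library: hash the media file, sign the digest, and let anyone holding the public key confirm the file is untouched. A conceptual sketch only, not the actual C2PA manifest format; the file name is illustrative.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Step 2: the creator signs a digest of the media file with their private key.
    private_key = Ed25519PrivateKey.generate()
    media_bytes = open("press_photo.jpg", "rb").read()   # illustrative file
    digest = hashlib.sha256(media_bytes).digest()
    signature = private_key.sign(digest)

    # Step 3 (verifier side): anyone with the public key can check integrity.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, digest)   # raises if file or signature was altered
        print("Authentic: file matches the creator's signature.")
    except InvalidSignature:
        print("Tampered or unsigned content.")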
Arguably, the most important use case of C2PA and similar frameworks is protecting intellectual property, such as proprietary code.
Darktrace uses NLP and behavioural AI to analyse email metadata, content and sender patterns and protect against phishing, spear phishing and CEO fraud.
Forging such an email may seem easy; however, if a message mimics an executive’s writing style but originates from an unusual location or IP address, the AI immediately flags or quarantines it.
AI models learn normal communication patterns (who employees talk to, writing style, response time). So when an email deviates from expected behaviour, such as a CEO “urgently” requesting a wire transfer, AI flags it as suspicious.
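As a rough sketch of the idea (not Darktrace’s actual model), a baseline of per-sender behaviour can be compared against each incoming message, with simple deviations accumulating into a risk score. The fields and thresholds below are illustrative:

    # Illustrative baseline learned from historical mail flow for one executive.
    BASELINE = {
        "usual_ips": {"203.0.113.10", "203.0.113.11"},
        "usual_hours": range(7, 19),        # sends between 07:00 and 18:59
        "wire_requests_per_month": 0,       # never requests transfers by email
    }

    def risk_score(email):
        """Sum simple deviations from the sender's learned behaviour."""
        score = 0
        if email["source_ip"] not in BASELINE["usual_ips"]:
            score += 2                      # unfamiliar origin
        if email["hour_sent"] not in BASELINE["usual_hours"]:
            score += 1                      # unusual time of day
        if email["requests_wire_transfer"] and BASELINE["wire_requests_per_month"] == 0:
            score += 3                      # out-of-character financial request
        if email["marked_urgent"]:
            score += 1                      # urgency is a classic pressure tactic
        return score

    suspicious = {"source_ip": "198.51.100.7", "hour_sent": 3,
                  "requests_wire_transfer": True, "marked_urgent": True}
    if risk_score(suspicious) >= 4:
        print("Quarantine and alert: possible CEO fraud.")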
Had Arup’s technology leadership implemented such a solution, it would likely have raised an early warning by flagging the communication, making it less likely that an already sceptical employee would fall victim to the scam.
Go to your dashboard and check active users. How do you know that a logged-in user is really an employee and not a threat actor? Even with MFA in place, you still cannot be absolutely sure who exactly walks through your databases, can you?
This is where Vectra AI comes in handy.
Vectra AI is an anomaly detection system designed to spot suspicious login attempts or abnormal data access in real-time, preventing compromised credentials from being exploited in fraud schemes.
It monitors employee behaviour across networks, endpoints and cloud apps and learns what normal activity looks like. So if an employee suddenly logs in from an unknown device, downloads unusual files or attempts unauthorised access, the AI triggers an alert.
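A stripped-down version of that learning loop (a generic sketch, not Vectra’s implementation) can be expressed as a per-user baseline plus a z-score on data volume:

    from statistics import mean, stdev

    # Illustrative per-user history gathered by the monitoring layer.
    known_devices = {"laptop-eu-0042"}
    daily_download_mb = [120, 95, 140, 110, 130, 105, 125]   # past week

    def login_alerts(device_id, downloaded_mb):
        """Return alerts for unseen devices and statistically unusual data pulls."""
        alerts = []
        if device_id not in known_devices:
            alerts.append(f"unknown device: {device_id}")
        mu, sigma = mean(daily_download_mb), stdev(daily_download_mb)
        if sigma > 0 and (downloaded_mb - mu) / sigma > 3:   # > 3 standard deviations
            alerts.append(f"abnormal download volume: {downloaded_mb} MB")
        return alerts

    print(login_alerts("personal-phone-x", 2400))
    # ['unknown device: personal-phone-x', 'abnormal download volume: 2400 MB']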
AI-driven voice authentication is another tool that could have prevented Arup’s scam: it analyses vocal patterns, tone and biometric markers to detect synthetic voices.
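As an illustration of the signal-processing side (a toy sketch, not any vendor’s detector), spectral features such as MFCCs can be extracted with librosa and fed to an ordinary classifier trained on labelled real and synthetic recordings. The file paths and training corpus here are assumptions:

    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def voice_features(path):
        """Summarise a recording as its mean MFCC vector, a common spoofing-detection baseline."""
        audio, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)   # (20, n_frames)
        return mfcc.mean(axis=1)

    # Assumed corpus of labelled recordings: 1 = synthetic, 0 = genuine.
    X = np.array([voice_features(p) for p in ["real_01.wav", "real_02.wav",
                                              "fake_01.wav", "fake_02.wav"]])
    y = np.array([0, 0, 1, 1])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    prob_fake = clf.predict_proba(voice_features("incoming_call.wav").reshape(1, -1))[0, 1]
    if prob_fake > 0.8:
        print("Possible synthetic voice: require out-of-band confirmation before any transfer.")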
In 2019, a UK-based energy company was targeted by a deepfake audio scam in which attackers impersonated the voice of the parent company’s CEO over the phone and requested an urgent wire transfer of €220,000. According to Rüdiger Kirsch of Euler Hermes Group SA, the firm’s insurer, the deceived CEO not only recognised the subtle German accent in his boss’s voice but also claimed it carried the man’s “melody”.
We chose these two cases because they point to a critical flaw in the security of multinational companies, one that has been heavily exploited.
The Cross-Race Effect (CRE), also known as Own-Race Bias, is a well-documented cognitive bias: people are better at recognising faces of their own racial or ethnic group and struggle with those of other groups. This could explain why the Arup employee (a Chinese national) failed to detect the AI-generated Western faces: they may have lacked familiarity with the subtle differences between Caucasian faces.
For voice recognition, the equivalent concept is difficulty in detecting small accent variations in unfamiliar languages. The UK-based energy company’s executive (an Englishman) failed to detect an AI-generated German accent, likely due to a perceptual phenomenon where non-native listeners perceive foreign accents as “blurry” versions of their own language. In other words, people tend to “map” unfamiliar sounds onto their closest native equivalents, making it harder to detect subtle accent discrepancies.
AI-driven tools, rigorously trained on large datasets, do not succumb to either of these phenomena, making them our best defence against these types of deepfake frauds.
But what to do if you are dealing with an insider or someone who has access to your systems?
In 2023, two former Tesla employees leaked over 100GB of confidential data containing customer complaints, production flaws and HR records. They exported the data from internal systems and shared it with journalists.
This is another case where tools such as Darktrace, Microsoft Purview Insider Risk Management, Forcepoint Insider Threat and Splunk UEBA could have prevented the leak, had they been implemented. They are far better than manual review at spotting unusual data access, modification or movement, and at identifying suspicious communication patterns and behavioural biometrics.
For example, AI can track who accesses which files and systems. So if a marketing employee suddenly downloads thousands of confidential R&D files, AI detects it as a risk. In the same fashion, AI detects how employees type, click and navigate systems. Therefore, if an account behaves differently (eg, unusual typing speed, access locations), it may indicate a compromised insider.
Let’s say that one of our finance employees suddenly changes supplier payment details to divert money or fabricate an invoice. Because the AI has learned the normal behaviour of employees (eg, who accesses what, when and how), any action that unexpectedly modifies financial data, legal documents or code repositories raises an alert.
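In code, a minimal version of that role-based check looks like the sketch below; the roles, actions and thresholds are illustrative, not taken from any specific product:

    # Illustrative per-role baselines for sensitive actions per day.
    ROLE_BASELINES = {
        "marketing": {"rnd_files_accessed": 0, "payment_records_modified": 0},
        "finance":   {"rnd_files_accessed": 0, "payment_records_modified": 3},
    }

    def insider_alerts(role, activity):
        """Flag any sensitive action that clearly exceeds the role's learned baseline."""
        alerts = []
        for action, observed in activity.items():
            allowed = ROLE_BASELINES[role].get(action, 0)
            if observed > allowed * 10 + 5:   # generous margin before alerting
                alerts.append(f"{role}: {action} = {observed} (baseline {allowed})")
        return alerts

    # A marketing account suddenly pulling thousands of R&D files.
    print(insider_alerts("marketing", {"rnd_files_accessed": 4200}))
    # A finance account rewriting supplier payment details en masse.
    print(insider_alerts("finance", {"payment_records_modified": 57}))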
However, real-time detection isn’t enough. You must automate response actions to contain insider threats before damage occurs. For example:
- automatically suspending an account or revoking access tokens when high-risk behaviour is detected
- isolating a potentially compromised device from the network
- escalating the incident to the security team with the full activity context attached
The tools frequently used to automate responses to these types of threats are Microsoft Sentinel and CrowdStrike Falcon. Sentinel can revoke a user’s access while the incident is investigated; Falcon can identify potentially compromised devices and trigger automated containment, either through the console or an API call.
Note that Microsoft Sentinel should be integrated with Microsoft Purview Insider Risk Management for optimal protection.
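The orchestration glue is typically a short playbook run by the automation layer. The sketch below shows the shape of such a playbook using placeholder endpoints and tokens; these are not Sentinel’s or Falcon’s real APIs, which you would call through their respective SDKs in practice:

    import requests

    # Placeholder values: substitute your identity provider's and EDR's real endpoints.
    IAM_API = "https://iam.example.internal/api/v1"
    EDR_API = "https://edr.example.internal/api/v1"
    HEADERS = {"Authorization": "Bearer <service-token>"}

    def contain_insider_threat(user_id, device_id, incident_id):
        """Playbook: cut access first, isolate the device, then notify for human review."""
        # 1. Revoke the user's sessions and suspend the account pending investigation.
        requests.post(f"{IAM_API}/users/{user_id}/suspend", headers=HEADERS, timeout=10)
        # 2. Network-isolate the possibly compromised endpoint.
        requests.post(f"{EDR_API}/devices/{device_id}/isolate", headers=HEADERS, timeout=10)
        # 3. Hand the incident to an analyst with full context attached.
        requests.post(f"{IAM_API}/incidents/{incident_id}/escalate",
                      json={"reason": "automated containment executed"},
                      headers=HEADERS, timeout=10)

    # Triggered by a high-severity alert from the detection layer.
    contain_insider_threat(user_id="jdoe", device_id="laptop-eu-0042", incident_id="INC-1042")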
Trust as a vital asset must be reinforced through continuous monitoring and rapid response because, in the digital age, trust is not a given—it’s engineered.
That’s why the tech leader’s role is evolving into the realm of defining organisational trust strategies. They are now directly responsible for building tech-driven infrastructures that prevent risk and improve the detection of fraudulent behaviour.
No pressure, but keep in mind that employees and customers are more likely to have confidence in the company when they know a comprehensive tech-driven trust strategy is actively in place to protect them.
So here is a simple action plan to fulfil your role in disinformation security:
1. Map the disinformation threats your organisation is most exposed to: deepfakes, CEO fraud, brand misinformation, insider leaks.
2. Deploy AI-driven detection across email, identity, voice and insider activity.
3. Adopt content provenance standards such as C2PA for the media your organisation publishes.
4. Automate response actions so that threats are contained before damage is done.
5. Train employees in personal cybersecurity hygiene, and make the training continuous.
The final step is critical because, without proper personal cybersecurity hygiene, your efforts will never be truly effective—AI or no AI. Think about how often you’ve seen someone leave a device unattended or unknowingly expose sensitive information by accessing systems in public. That’s a clear example of a lack of cybersecurity awareness. Are your employees any different?