Tech Leaders’ Role in Disinformation Security: Technologies That Discern Trust and Prevent Fraud

Igor K
February 12, 2025

In early 2024, Arup Group Limited, a British multinational professional services firm headquartered in London, lost $25 million to a deepfake video call in which fraudsters presented synthetic impersonations of the company’s CFO and other employees. The attackers used deepfake technology to fabricate convincing likenesses and voices of the executives, deceiving a finance employee in the company’s Hong Kong office into executing 15 consecutive transactions.

In larger organisations, disinformation security is usually overseen directly by a CISO, but that oversight counts for little if the organisation lacks the technical capabilities to counter the threats.

In start-ups and fast-growth companies, especially those dealing with digital platforms, media, cybersecurity or public communications, the entire weight of cybersecurity often falls on the shoulders of a Chief Technology Officer or Head of IT. Preventing AI-generated deepfakes, misinformation attacks on brands (sometimes orchestrated by close competitors), supply chain fraud, fabricated invoices, social engineering and every other form of illicit manipulation is the direct responsibility of the technology leader.

Technology Leaders’ Responsibilities in Disinformation Security

[Mind map: core responsibilities of tech leaders in disinformation security]
  • Technology Strategy and Infrastructure
    • Overseeing the development and implementation of technological solutions that can detect and mitigate disinformation (eg, AI-driven content moderation, automated fact-checking and bot-detection algorithms).
  • Platform Integrity and Content Moderation
    • Developing policies and tools to identify and remove disinformation. 
    • Working with data scientists and AI teams to refine algorithms that flag misleading content.
  • Cybersecurity and Threat Intelligence
    • Collaborating with security teams to implement defences against disinformation campaigns.
  • Incident Response and Crisis Management
    • Working with PR, security and legal teams to implement rapid response strategies in case of a major disinformation attack.
  • Collaboration with CISO and Compliance Teams
    • Ensuring that technological frameworks align with regulatory requirements on disinformation, such as the EU’s Digital Services Act (DSA) or the EU AI Act.
  • Emerging Tech and AI Risks
    • Evaluating and implementing defences against AI-driven misinformation campaigns (eg, tools for detecting manipulated content and watermarking authentic media).

The Tech Stack for Disinformation Defense

AI-Powered Detection and Content Verification

Tools for Content Verification

Google Fact Check Explorer

  • Search tool for investigating the validity of statements by entering keywords or phrases.
  • Uses indexed fact checks (by reputable websites).
  • Offers an in-depth approach to analysing topics (and images).
  • Allows users to see the context and timeline of an image.
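Google also exposes its index of fact checks programmatically through the Fact Check Tools API. As a rough sketch (assuming the `claims:search` endpoint and an API key you provision yourself), a query URL can be assembled like this:

```python
from urllib.parse import urlencode

FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query: str, api_key: str, language: str = "en") -> str:
    """Build a request URL for the claims:search method of the
    Google Fact Check Tools API (key and language are caller-supplied)."""
    params = urlencode({"query": query, "languageCode": language, "key": api_key})
    return f"{FACT_CHECK_ENDPOINT}?{params}"
```

Fetching the resulting URL returns a JSON list of matching claims and their published fact checks; treat the exact response shape as something to confirm against the current API documentation.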

Parafact

  • Real-time accuracy assessments for both human and AI-generated content.
  • Enables copy/paste of text to receive fact-checking results within seconds.
  • Provides AI-powered citations and reliable sources.
  • Offers a developer-friendly API.

Originality.AI

  • A suite of tools, including AI detection, plagiarism checking and fact-checking.
  • Provides real-time automated fact-checking.
  • Mostly used for detecting AI-generated content.
  • Shows the sources it uses.
  • >70% accuracy in fact-checking.
  • >90% accuracy in spam scoring.

ClaimBuster

  • An automated web-based fact-checking tool that uses NLP and supervised learning.
  • Monitors live streams, websites and social media to catch factual claims, detect matches with a curated repository of fact-checks and deliver the matches instantly to viewers.
  • Able to scan large amounts of text and identify statements that require fact-checking.
  • Ranks claims by checkworthiness and suggests highly ranked new claims to fact-checkers.
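ClaimBuster’s real ranking comes from supervised NLP models, but the idea of “check-worthiness” can be illustrated with a deliberately simple heuristic: sentences containing numbers, statistics or comparative language are more likely to carry verifiable factual claims. The scoring weights below are illustrative assumptions, not ClaimBuster’s actual model:

```python
import re

def checkworthiness(sentence: str) -> float:
    """Toy heuristic: score a sentence by surface signals of factual claims."""
    score = 0.0
    if re.search(r"\d", sentence):
        score += 0.5                      # contains a number
    if re.search(r"%|percent|million|billion", sentence, re.I):
        score += 0.3                      # contains a statistic
    if re.search(r"\b(most|more|less|never|always|highest|lowest)\b", sentence, re.I):
        score += 0.2                      # comparative/superlative language
    return min(score, 1.0)

def rank_claims(sentences):
    """Return sentences sorted by descending check-worthiness."""
    return sorted(sentences, key=checkworthiness, reverse=True)
```

A production system would replace the regexes with a trained classifier, but the ranking interface stays the same.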

Methods and Architectures for Detecting Deepfake Images and Videos

CNN Architectures

  • eg, EfficientNet.
  • Foundation for many deepfake detection systems.
  • Has high accuracy with fewer parameters.
  • Optimal for real-time applications.

MesoNet

  • A CNN-based model that focuses on the mesoscopic features of images.
  • Has an average detection rate of 98% (when trained on fake videos from the internet).

Convolutional LSTM

  • Combines a convolutional layer for extraction and an LSTM layer for sequence analysis.
  • Has 97% accuracy by analysing temporal inconsistencies between frames.
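The temporal-inconsistency idea can be shown without a neural network: if we treat each frame as a feature vector (in a real system, CNN embeddings; here, plain lists of floats), a deepfake splice tends to produce an abnormally large jump between consecutive frames. The z-score threshold below is an illustrative assumption:

```python
from statistics import mean, pstdev

def temporal_inconsistency(frame_features, z_threshold=1.5):
    """Flag frame transitions whose change deviates strongly from the
    clip's average change -- a crude stand-in for the temporal analysis
    an LSTM performs on learned CNN features.

    frame_features: list of per-frame feature vectors (lists of floats).
    Returns indices of suspicious transitions.
    """
    # L2 distance between consecutive frames
    deltas = [
        sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
        for f1, f2 in zip(frame_features, frame_features[1:])
    ]
    mu, sigma = mean(deltas), pstdev(deltas)
    if sigma < 1e-9:          # perfectly smooth clip: nothing to flag
        return []
    return [i for i, d in enumerate(deltas) if (d - mu) / sigma > z_threshold]
```

A smoothly varying clip yields no flags, while a single inserted frame lights up the transitions into and out of it.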

Real-time Deepfake Detection

Blockchain and Distributed Trust Networks

The premise here is simple: instead of detecting fake content after it spreads, verify authenticity at the source. 

Since blockchain ledgers are decentralised and immutable, they allow a content item’s origin and signature to be registered once and verified by anyone afterwards, without relying on a central authority.

[Diagram: how CTOs can integrate blockchain-based trust networks, step by step]
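To see why immutability gives you tamper evidence, here is a minimal hash chain in pure Python. This is an illustration of the principle, not a real blockchain client: each published record commits to the hash of the previous one, so altering any past record breaks verification of everything after it.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    """Stable SHA-256 digest of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Minimal append-only hash chain for content provenance."""

    def __init__(self):
        self.entries = []

    def publish(self, content: bytes, author: str) -> dict:
        entry = {
            "author": author,
            "content_hash": hashlib.sha256(content).hexdigest(),
            "prev": _digest(self.entries[-1]) if self.entries else "genesis",
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-check every link; any tampered record breaks the chain."""
        for i, entry in enumerate(self.entries):
            expected = _digest(self.entries[i - 1]) if i else "genesis"
            if entry["prev"] != expected:
                return False
        return True
```

A distributed network adds consensus and replication on top of this structure, but the tamper-evidence property is already visible here.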

Digital Watermarking and Media Provenance Solutions

In February 2021, Microsoft, Adobe, BBC, Intel and Truepic introduced C2PA (Coalition for Content Provenance and Authenticity). Its purpose was to address the spread of disinformation and online content fraud by developing technical standards for certifying the source and history of media content. 

C2PA essentially creates tamper-proof digital signatures for media files, allowing anyone to verify:

  • Who created it
  • When it was created
  • If it has been modified

For a creator, it is a 3-step process:

  1. Embedding metadata at creation
  2. Logging edits and changes
  3. Verifying content on a blockchain or cloud-based service
[Diagram: the content provenance process, step by step]
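The three steps above can be sketched in miniature. Real C2PA manifests use certificate-based signatures; the HMAC below is a simplified stand-in that still demonstrates the core property: any edit to the media bytes or the metadata invalidates the signature.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stands in for a creator's private signing key

def sign_asset(media: bytes, metadata: dict, key: bytes = SIGNING_KEY) -> dict:
    """Bind metadata to the media bytes with a keyed signature
    (a simplified stand-in for C2PA's certificate-based manifests)."""
    claim = dict(metadata, content_hash=hashlib.sha256(media).hexdigest())
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_asset(media: bytes, claim: dict, key: bytes = SIGNING_KEY) -> bool:
    """Re-derive the signature; any edit to pixels or metadata fails."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    if unsigned.get("content_hash") != hashlib.sha256(media).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])
```

In the real standard, the verification key is public and the signing key never leaves the creator’s device, which is what makes third-party verification possible.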

Arguably, the most important use case of C2PA and similar frameworks is protecting intellectual property, such as proprietary code. 

Real-Time Threat Intelligence and Behavioural Analysis

Darktrace Antigena Email

Darktrace uses NLP and behavioural AI to analyse email metadata, content and sender patterns and protect against phishing, spear phishing and CEO fraud.

Such an email may seem easy to forge; however, if it mimics an executive’s writing style but originates from an unusual location or IP address, the AI immediately quarantines or flags it.

AI models learn normal communication patterns (who employees talk to, writing style, response time). So when an email deviates from expected behaviour, such as a CEO “urgently” requesting a wire transfer, AI flags it as suspicious. 
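The deviation checks described above can be sketched as a rule over a learned sender profile. This is an illustration of the behavioural logic, not Darktrace’s actual model; the profile fields and keyword list are assumptions for the example:

```python
def email_risk(profile: dict, email: dict) -> list:
    """Compare an incoming email against the sender's learned profile
    and collect deviation flags."""
    flags = []
    if email["source_ip"] not in profile["usual_ips"]:
        flags.append("unusual origin")
    if email["hour"] not in profile["active_hours"]:
        flags.append("unusual send time")
    urgent = {"urgent", "immediately", "wire", "transfer"}
    if urgent & set(email["body"].lower().split()):
        flags.append("pressure language")
    return flags
```

In a production tool the profile is learned continuously from traffic rather than configured by hand, and a scoring model weighs the flags instead of simply listing them.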

Had Arup’s technology leadership implemented such a solution, it would likely have raised an early warning by flagging the communication, making it less likely that an already sceptical employee would fall victim to the scam.

Vectra AI

Go to your dashboard and check active users. How do you know that a logged-in user is really an employee and not a threat actor? Even with MFA in place, you still cannot be absolutely sure who is actually moving through your databases, can you?

This is where Vectra AI comes in handy. 

Vectra AI is an anomaly detection system designed to spot suspicious login attempts or abnormal data access in real-time, preventing compromised credentials from being exploited in fraud schemes.

It monitors employee behaviour across networks, endpoints and cloud apps and learns. So if an employee suddenly logs in from an unknown device, downloads unusual files or attempts unauthorised access, AI triggers an alert. 

Pindrop’s AI-Powered Voice Security

This is another tool that could have prevented Arup’s scam. It analyses vocal patterns, tone and biometric markers to detect synthetic voices.

In 2019, a UK-based energy company was targeted by a deepfake audio scam in which attackers impersonated the parent company’s CEO’s voice over the phone and requested an urgent wire transfer of €220,000. According to Rüdiger Kirsch of Euler Hermes Group SA, the firm’s insurer, the victim recognised not only the subtle German accent in his boss’s voice but also what he described as the man’s “melody”.

The Critical Flaw in Security of Multinational Organisations

We cited these two cases because they point to a critical, heavily exploited flaw in the security of multinational companies.

The Cross-Race Effect (CRE), also known as Own-Race Bias, is a well-documented cognitive bias: people are better at recognising faces of their own racial or ethnic group but struggle with those of other groups. This could explain why the Arup employee (a Chinese national) failed to detect the AI-generated Western faces: they may have lacked familiarity with the subtle differences in Caucasian facial features.

For voice recognition, the equivalent concept is difficulty in detecting small accent variations in unfamiliar languages. The UK-based energy company’s executive (an Englishman) failed to detect an AI-generated German accent, likely due to a perceptual phenomenon where non-native listeners perceive foreign accents as “blurry” versions of their own language. In other words, people tend to “map” unfamiliar sounds onto their closest native equivalents, making it harder to detect subtle accent discrepancies.

AI-driven tools, rigorously trained on large datasets, do not succumb to either of these phenomena, making them our best defence against these types of deepfake frauds. 

But what to do if you are dealing with an insider or someone who has access to your systems?

Insider Threat Detection

In 2023, two Tesla employees leaked over 100GB of confidential data containing customer complaints, production flaws and HR records. They exported data from internal systems and shared it with journalists.

This is another case where tools such as Darktrace, Microsoft Purview Insider Risk Management, Forcepoint Insider Threat and Splunk UEBA could have prevented the leak, had they been implemented. Such tools are far better than manual oversight at spotting unusual data modifications, access or movements, as well as at identifying suspicious communication patterns and behavioural biometrics.

For example, AI can track who accesses which files and systems. So if a marketing employee suddenly downloads thousands of confidential R&D files, AI detects it as a risk. In the same fashion, AI detects how employees type, click and navigate systems. Therefore, if an account behaves differently (eg, unusual typing speed, access locations), it may indicate a compromised insider. 
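The file-access tracking described above boils down to peer-group outlier detection. As a toy version (a z-score over daily access counts, with an assumed threshold; real UEBA products use far richer models), an employee who suddenly pulls thousands of files stands out immediately:

```python
from statistics import mean, pstdev

def access_outliers(daily_counts: dict, z_threshold: float = 2.0) -> list:
    """Flag users whose file-access volume sits far above their
    peer group's norm (a toy version of UEBA peer analysis).

    daily_counts: mapping of user -> number of files accessed today.
    """
    counts = list(daily_counts.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:            # everyone behaves identically
        return []
    return [user for user, c in daily_counts.items()
            if (c - mu) / sigma > z_threshold]
```

Segmenting the peer group by role (marketing vs. R&D) sharpens the signal further, since “normal” volume differs by department.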

Let’s say that one of our finance employees suddenly changes supplier payment details to either divert funds or fabricate an invoice. Because the AI has learned employees’ normal behaviour (eg, who accesses what, when and how), any unexpected modification of financial data, legal documents or code repositories would raise an alert.

Automated Response Actions to Contain Insider Threats

However, real-time detection isn’t enough. You must automate response actions to contain insider threats before damage occurs. For example:

  • Auto-block employees from transferring files to personal emails.
  • Lock accounts if AI detects login attempts from an unusual location.
  • Alert security teams when sensitive data is accessed abnormally.
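The response rules above amount to a mapping from detection type to containment action. A minimal dispatcher might look like this (the alert names and actions are illustrative, not any vendor’s schema); the important design choice is that unknown alert types escalate to a human rather than being silently dropped:

```python
RESPONSE_RULES = {
    "exfil_to_personal_email": "block_transfer",
    "login_unusual_location": "lock_account",
    "abnormal_sensitive_access": "alert_security_team",
}

def respond(alert_type: str, user: str) -> dict:
    """Map a detection to a containment action; unknown alert types
    fall back to human triage rather than silent inaction."""
    action = RESPONSE_RULES.get(alert_type, "escalate_to_analyst")
    return {"user": user, "action": action}
```

In practice, each action string would invoke the corresponding SOAR playbook or API call in your stack.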

The tools frequently used for response automation in these types of threats are Microsoft Sentinel and CrowdStrike Falcon. Sentinel can revoke a user’s access while the incident is investigated. Falcon, on the other hand, can identify potentially compromised devices and trigger automated containment processes either through the console or API call.

Note that Microsoft Sentinel should be integrated with Microsoft Purview Insider Risk Management for optimal protection.

3 Important Considerations

  1. Watch for scalability issues since AI models require vast training datasets.
  2. Managing false positives raises ethical dilemmas and an increased risk of over-moderation and censorship.
  3. Balance cost vs. ROI.

Conclusion

Trust as a vital asset must be reinforced through continuous monitoring and rapid response because, in the digital age, trust is not a given—it’s engineered.

That’s why the tech leader’s role is evolving into defining organisational trust strategies. They are now directly responsible for building tech-driven infrastructures that mitigate risks and improve the detection of fraudulent behaviour.

No pressure, but keep in mind that employees and customers are more likely to have confidence in the company when they know a comprehensive tech-driven trust strategy is actively in place to protect them. 

So here is a simple action plan to fulfil your role in disinformation security:

  1. Assess organisational vulnerabilities to disinformation
  2. Build a relevant security framework
  3. Invest in AI-powered detection tools
  4. Implement behavioural analytics
  5. Educate employees on risks

The final step is critical because, without proper personal cybersecurity hygiene, your efforts will never be truly effective—AI or no AI. Think about how often you’ve seen someone leave a device unattended or unknowingly expose sensitive information by accessing systems in public. That’s a clear example of a lack of cybersecurity awareness. Are your employees any different?
