What are deepfakes?

Deepfakes have been around for almost a decade, but only started to gain traction in 2017. They are artificial media that use genuine source data and a form of artificial intelligence known as deep learning to manipulate videos, images or audio, making it difficult to distinguish between real and fabricated content. Deepfakes are frequently used in social engineering attacks to breach an organisation’s network and compromise internal data.

There are two main deepfake techniques. The first is Generative Adversarial Networks (GANs), which pit two neural networks against each other: one network (the generator) creates the deepfake, while the other (the discriminator) tries to identify it as fake. This ongoing contest refines both the forger and the detector, as sketched in the example below. The second technique uses autoencoders, neural networks that learn to compress data (such as video or audio) into a smaller representation and then recreate it. Deepfakes are generated by altering or swapping that compressed representation before it is decoded.
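To make the adversarial idea concrete, here is a minimal, illustrative sketch of a single GAN training step in Python using PyTorch. The network sizes, the toy random data and the choice of PyTorch are assumptions for illustration only; real deepfake pipelines use far larger convolutional models trained on curated face or voice datasets.

```python
# Minimal sketch of the GAN idea behind many deepfakes (illustrative only).
# The generator learns to produce fake samples; the discriminator learns to
# tell real from fake. Sizes and data are toy placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw score: higher = "looks real"
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # 1) Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator: try to make the discriminator call the fakes real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Example usage with random "real" data standing in for a face dataset:
train_step(torch.rand(32, image_dim) * 2 - 1)
```

The autoencoder approach follows the same encode-then-decode pattern, with the manipulation applied to the compressed representation before it is decoded back into video or audio.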

Worryingly, deepfakes are becoming increasingly common because the source code and software needed to create them are now readily available to the public.

Deepfake statistics

A consumer survey by iProov shows that awareness of deepfakes is growing:

  • In 2019, only 13% of consumers said they knew what a deepfake is, compared with 29% in 2022.
  • 57% of people believe they could spot a deepfake but, unless the deepfake is poorly constructed, this is unlikely to be true. Reliably verifying a deepfake requires deep learning and computer vision technologies that analyse properties such as how light reflects on real skin versus synthetic skin or imagery.
  • 80% surveyed said they are more likely to use services that take measures against deepfakes, meaning that online service providers must implement technology that can defend against deepfakes to maintain customer trust.

How are deepfakes used?

One of the most concerning applications of deepfakes is identity theft, particularly through audio manipulation. Criminals can use deepfakes to impersonate someone’s voice, tricking security systems and gaining access to sensitive information.

For example, if you call your bank’s helpdesk for assistance, the system uses voice recognition to verify your identity; however, an attacker armed with a deepfake of your voice could call and bypass this security layer, the cloned voice effectively serving as your credentials. Financial institutions that use ‘voice as password’ features are especially vulnerable to these attacks.

Deepfakes have opened up a new dimension for cyberattacks, ranging from sophisticated spear phishing to the manipulation of biometric security systems. Spear phishing is expected to evolve, with deepfakes enabling near-perfect impersonation of trusted figures, a significant advance on the usual methods of replicating writing style or mimicking email design. Previously, phishing emails were often easy to spot because of their grammatical errors, but cybercriminals are now using AI tools like ChatGPT to craft better-written, grammatically correct emails in multiple languages that are harder for spam filters and readers to catch.

Infamous deepfake attacks

Deepfakes can be used to manipulate public opinion and influence people’s actions. A now-debunked deepfake video of Ukrainian President Zelenskyy appearing to surrender, intended to undermine Ukrainian morale, highlights the worrying potential for deepfakes to disrupt political processes and cause real-world harm.

In another high-profile attack, threat actors used deepfake technology to manipulate videos and simulate Mark Zuckerberg’s voice to say, “Whoever controls the data controls the truth,” causing backlash in the media and throughout Facebook.

Another infamous deepfake attack was a fraud incident that affected a bank in Hong Kong in 2020. The bank manager received a call from a voice he recognised: a director at a company he had spoken with before. The director said his company was about to make an acquisition and needed the bank manager to authorise transfers. The manager also received what appeared to be legitimate emails from the director and a lawyer he worked with. Everything looked and sounded real, so the bank manager carried out $35 million worth of transfers, but the call was a deepfake: the fraudsters had used deep-voice technology to clone the director’s speech and dupe the bank manager. Investigators were able to trace around $400,000 of the stolen funds and identified that around 17 individuals were involved in the scheme.

Emerging deepfake detection methods

Machine learning, while being the source of the problem, may also be part of the solution. Deepfake detection algorithms are already being developed, using telltale features that generative models often fail to reproduce convincingly, such as subtle facial movements and the way light reflects off real skin.
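As a hedged illustration of how such a detector might be framed, the sketch below feeds hand-crafted, frame-level cues into a simple binary classifier using scikit-learn. The feature names, the random stand-in data and the choice of classifier are hypothetical placeholders rather than a description of any production detector.

```python
# Illustrative sketch of a feature-based deepfake detector (not production code).
# Assumes frame-level features such as blink rate and lighting consistency have
# already been extracted by a separate computer-vision pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder feature matrix: [blink_rate, lighting_consistency, boundary_sharpness]
X = rng.normal(size=(500, 3))
y = rng.integers(0, 2, size=500)  # 1 = deepfake, 0 = genuine (random labels here)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With real labelled footage, this report would indicate how well the chosen
# cues separate genuine video from generated video.
print(classification_report(y_test, clf.predict(X_test)))
```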

Governments and international organisations must collaborate to establish regulations and standards that protect against the misuse of deepfakes. Proposed legislation such as the US’s DEEPFAKES Accountability Act is a step in the right direction, seeking to make it illegal to create or distribute deepfakes without consent or proper labelling.

As part of the updated Criminal Justice Bill, which continues its passage through Parliament, the UK government is creating a range of new criminal offences to punish those who take or record intimate images without consent, or install equipment to enable someone to do so. These changes in the Criminal Justice Bill will build on the existing ‘upskirting’ offence, making it a criminal offence to:

  • intentionally take or record an intimate image or film without consent or a reasonable belief in consent
  • take or record an intimate image or film without consent and with intent to cause alarm, distress or humiliation, or for the purpose of sexual gratification

How to protect your organisation from deepfakes

Employers should not depend on, or expect, a legislative solution to the malicious use of deepfakes. Security teams need to be educated and trained in how to detect when an attacker is using a deepfake to impersonate an employee, vendor, partner or customer, and in how to protect their data. They need to learn how to verify the source and authenticity of any suspicious communication or request.

People remain an organisation’s weakest link, and cybercriminals exploit the human element. By implementing strict security protocols and policies, and deploying technical controls such as strong authentication, multi-factor authentication (MFA) and rigorous verification processes for sensitive transactions and information, you can reduce the likelihood of a successful attack.

Invest in tools and technologies that can help detect and prevent deepfakes. There are various solutions available, such as digital watermarking, blockchain or AI-based analysis, that can help identify and flag deepfake content. Some of these tools can also help trace the origin and source of the deepfake, which can also assist in legal prosecution.
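As a simplified illustration of the provenance idea behind watermarking and blockchain-based approaches, the sketch below checks a media file’s hash against a trusted registry before the content is trusted. The filename and registry are hypothetical, and real schemes embed robust watermarks or signed provenance metadata rather than relying on plain file hashes.

```python
# Simplified sketch of provenance checking: compare a file's hash against a
# trusted registry of known-authentic content. Real watermarking and
# blockchain-based provenance schemes are far more sophisticated; this only
# illustrates the underlying "verify before you trust" idea.
import hashlib
from pathlib import Path

# Hypothetical registry mapping filenames to SHA-256 hashes of the originals.
TRUSTED_HASHES = {
    "ceo_statement.mp4": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_unmodified(path: Path) -> bool:
    """Return True if the file's hash matches the registered original."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return TRUSTED_HASHES.get(path.name) == digest

# Example usage (assumes the file exists locally):
# print(is_unmodified(Path("ceo_statement.mp4")))
```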

SIEM is key

Securus has already covered the need for SIEM (Security Information and Event Management) in cyberattack prevention, ensuring that your organisation can recognise and address potential security threats and vulnerabilities before they can disrupt business operations.

SIEM tools continuously monitor network traffic, system logs and other sources for suspicious activities or anomalies that may indicate a security breach or intrusion attempt. By correlating events across different sources, SIEM can detect complex attack patterns that might otherwise go unnoticed.
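To illustrate what that correlation looks like in principle, here is a toy sketch (not a real SIEM rule) that flags a high-value transfer request arriving shortly after a voice-authenticated helpdesk call for the same account, the kind of deepfake-enabled fraud pattern described earlier. The event fields, thresholds and log sources are all assumptions for illustration.

```python
# Toy illustration of SIEM-style event correlation (not a real SIEM rule).
# Flags cases where a voice-verified helpdesk call is followed shortly by a
# high-value transfer request for the same account — a cross-source pattern
# that a single log viewed in isolation would not reveal.
from datetime import datetime, timedelta

events = [  # hypothetical, normalised events from different log sources
    {"time": datetime(2024, 5, 1, 9, 2),  "source": "helpdesk", "type": "voice_auth_success", "account": "j.smith"},
    {"time": datetime(2024, 5, 1, 9, 10), "source": "banking",  "type": "transfer_request", "account": "j.smith", "amount": 250_000},
]

WINDOW = timedelta(minutes=30)
THRESHOLD = 100_000

def correlate(events):
    """Yield alerts when a large transfer closely follows a voice authentication."""
    auths = [e for e in events if e["type"] == "voice_auth_success"]
    for e in events:
        if e["type"] == "transfer_request" and e.get("amount", 0) >= THRESHOLD:
            for a in auths:
                if a["account"] == e["account"] and timedelta(0) <= e["time"] - a["time"] <= WINDOW:
                    yield f"ALERT: high-value transfer for {e['account']} within {WINDOW} of voice authentication"

for alert in correlate(events):
    print(alert)
```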

How Securus can help

SIEM will alert you to potential attacks; however, you still need a Security Operations Centre (SOC) to action those alerts. At Securus, we help organisations protect their operations and intellectual property from increasingly malicious and complex cyberthreats, such as deepfake attacks. We see the security challenges from the inside, working alongside our customers to provide advanced detection, incident response and recovery against emerging cybersecurity threats.

Our managed cybersecurity services are suited to any size of organisation, safeguarding them from the latest vulnerabilities. They are designed to be proactive, responsive and tailored to the unique needs of each customer. We work closely with industry-leading information security and business partners to ensure that we provide the most advanced security solutions available, such as:

  • Endpoint security
  • Network security
  • Cloud security
  • Identity and Access Management (IAM)
  • Email security
  • Compliance and regulatory solutions

The future for deepfake technology

The cybersecurity field is constantly evolving, and new detection methods are being continually developed to prevent deepfake attacks.

Only by deploying the latest technologies and taking a comprehensive approach, whether internally or by outsourcing your cybersecurity strategy to an expert such as Securus, can you effectively address the challenges posed by deepfakes.

If you’re interested in learning more about how Securus can help you detect and prevent deepfake attacks, call us on 03451 283457.

Get In Touch

From SD-WAN, Anti-Malware and Next Generation Anti-Virus to SASE and Immutable Backup, Securus has a security solution to suit your requirements and budget.

Let’s discuss your latest network security requirements in more detail.