Deepfakes: The urgent threat to truth and trust

Melgorithm

By Mel Migriño

The concept of deepfakes, or hyper-realistic forged videos and images, may seem like a recent phenomenon born from modern AI, but its roots trace back decades. Long before the term “deepfake” was coined, the seeds of this technology were planted in the 1990s with early experiments in computer vision and facial manipulation.

However, the true inflection point arrived with the public release of algorithms and open-source code, which put the power of this complex technology into the hands of anyone with a computer. The journey from niche special effect to mainstream concern is a testament to the rapid and often unpredictable evolution of artificial intelligence.

Deepfake is a recent technique that uses powerful deep learning tools to generate manipulated media (images, videos, and audio) that looks highly realistic to human eyes.

Deepfake images involve facial manipulations, such as identity swapping, where the target person’s face is switched for another character or person in the original image, or the creation of highly realistic-looking faces bearing identities that do not actually exist.

Audio deepfake methods typically use generative adversarial networks (GANs) as the audio-synthesizing component, producing high-fidelity speech from text.

Deepfake videos, which combine both image and sound modalities, require several deep learning methods working together, and the field has achieved significant improvements in creating highly realistic videos with matching speech.

This kind of media creation has adopted a number of deep learning networks, ranging from auto-encoders to GANs, to solve various problems, and a wide variety of GAN-based deepfake algorithms have been proposed to duplicate a person’s facial expressions and motions and swap them onto another person.
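The auto-encoder idea behind classic face swaps can be illustrated with a toy numeric sketch: a shared “encoder” strips away identity and keeps only the expression code, and a per-identity “decoder” re-renders that code as another person. The identity and expression vectors below are invented numbers for illustration only; real systems learn these representations from thousands of images.

```python
# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# face-swap deepfakes. A "face" here is just identity features plus
# expression features, represented as small lists of numbers.
IDENTITY_A = [1.0, 2.0, 3.0]  # hypothetical identity vector for person A
IDENTITY_B = [9.0, 8.0, 7.0]  # hypothetical identity vector for person B

def make_face(identity, expression):
    """Compose a 'face' from an identity and an expression."""
    return [i + e for i, e in zip(identity, expression)]

def encode(face, identity):
    """Shared 'encoder': strip identity, keep the expression code."""
    return [f - i for f, i in zip(face, identity)]

def decode(code, identity):
    """Per-identity 'decoder': re-render the code as that identity."""
    return [c + i for c, i in zip(code, identity)]

smile = [0.5, -0.2, 0.1]                 # person A's expression
face_a = make_face(IDENTITY_A, smile)    # person A smiling
code = encode(face_a, IDENTITY_A)        # expression extracted from A
swapped = decode(code, IDENTITY_B)       # re-rendered as person B
print(swapped)                           # person B wearing A's smile
```

In real deepfake tools the encoder and decoders are deep neural networks trained on many images of each person, and a GAN discriminator is often added to push the decoded faces toward photorealism; the swap itself, however, follows this same encode-then-decode-as-someone-else structure.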

Training deep learning models to create such realistic images and videos with deepfake techniques typically requires a huge amount of picture and video data.

Therefore, the fact that celebrities and politicians typically have a lot of videos and pictures available online makes them popular targets for deepfakes.

What are the common types of deepfakes?

Video Deepfakes: Synthetic videos in which a person’s face is swapped or their body is manipulated to show them saying or doing things they never did.
Audio Deepfakes: Artificial intelligence that mimics a person’s voice, creating convincing audio that can be used to impersonate them.
Image Deepfakes: Altered images designed to deceive digital users, such as fake photographic evidence or a doctored image of a product.

As a digital user, here are some tips on how you can identify deepfakes:

1. Check for Visual Inconsistencies

Look for unnatural facial movements. The person’s face may seem stiff, or the expressions may look strange. The eyes might not blink naturally, or they might blink too much. Facial movement might not match the overall body movement.

Check for weird lighting and shadows. Pay close attention to how light and shadows interact with the person’s face, neck, and the rest of the body.

Check for overly perfect hair and teeth. AI models often struggle with fine details. Look for hair that appears unnaturally smooth or lacks flyaway strands. Teeth might appear too perfect.

2. Listen for Audio Inconsistencies

Audio is often a weak point in deepfakes because it is generated separately from the video. This can show up as poor lip-syncing, when the words don’t perfectly match the movements of the person’s mouth. The voice might also sound slightly off or robotic.

You may hear a voice with a strange cadence, or one that sounds flat and emotionless in a way that does not match the facial expression.

Listen for background noise that doesn’t fit the scene, or strange or out-of-place digital sounds that may have been introduced during the editing process.

3. Confirm the Source and Content

This should answer questions like: Who created the media? Is the information from a trusted source, or from an unknown social media account with a brand-new profile?

Is the information credible? Does the video present a shocking claim that no major news outlet is reporting? Do a quick search online to see if the same story or video is being reported on by multiple, reputable sources.

Are there tools to verify the veracity of the content? Tools like Google Reverse Image Search can help you see if a still from the video has appeared elsewhere online, which might reveal the original context or show that it has been used in other fakes.

You can also use content credential tools to check the history of, and changes made to, the suspected malicious content.
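The idea behind matching a video still to an earlier copy online can be sketched with a perceptual “average hash”: reduce the image to a small grid of brightness values, turn each pixel into a bit depending on whether it is brighter than the image’s own average, and compare hashes by counting differing bits. Real reverse-image services are far more sophisticated; the 4x4 grayscale grids below are invented purely for illustration.

```python
# Toy sketch of perceptual hashing for spotting a reused image.
# An "image" here is a 4x4 grid of grayscale values (0-255).

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # one bit per pixel: 1 if brighter than the image's own average
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same source."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [
    [200, 200,  50,  50],
    [200, 200,  50,  50],
    [ 50,  50, 200, 200],
    [ 50,  50, 200, 200],
]
# the same picture, slightly brightened (e.g. by re-compression or edits)
recompressed = [[min(255, p + 10) for p in row] for row in original]
# a completely different image: a simple brightness gradient
unrelated = [[(r * 4 + c) * 16 for c in range(4)] for r in range(4)]

d_same = hamming(average_hash(original), average_hash(recompressed))
d_diff = hamming(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)
```

Because the hash compares each pixel to the image’s own average, uniform brightness changes leave it untouched, so the re-compressed copy matches its original while the unrelated image does not. This robustness to minor edits is what lets matching tools recover a fake’s original context.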

What lies ahead?

Future deepfakes will be virtually indistinguishable from real media to the human eye, as advances in AI address current weaknesses such as unnatural facial movements, strange lighting, inconsistent hair, and more.

Real-time creation of deepfakes is likely to become common in the near future. This poses a significant risk to security, privacy, and communication, as it could be used for real-time impersonation in business, legal, and personal settings. Furthermore, multi-modal integration will make the authenticity of generated media even harder to verify.

Hence, trust technologies, mass user education, and appropriate legislative measures must all be brought together to help combat this deceptive creation.

Deepfakes force us to confront our own biases, our willingness to believe what we see, and the fragile nature of digital truth. The future may be filled with increasingly perfect fakes, but our greatest challenge will be to maintain an “unshakeable belief in the value of verifiable facts and shared reality” — and that will be the battle of our minds.
