
Deepfakes may now be able to fool detection tools. (MDV Edwards/Shutterstock)

In a nutshell

  • Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns of their source videos, undermining traditional detection methods built around the subtle skin color changes a heartbeat causes.
  • The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
  • To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.

BERLIN — Digital doppelgängers just developed a pulse. Researchers hunting deepfakes had long relied on a seemingly foolproof detection method: AI couldn’t fake the subtle skin color changes caused by your pulse. That certainty has now collapsed, with scientists discovering that modern deepfakes inadvertently preserve the heartbeat patterns of their source videos, making some of our most trusted detection tools suddenly unreliable.

For years, cybersecurity experts thought they had deepfakes all figured out. Since AI wasn’t explicitly programmed to mimic subtle color changes in human skin caused by blood flow, researchers believed fake videos would lack these physiological signals. But according to a new international study published in the journal Frontiers in Imaging, that defense has now crumbled.

Previous research suggested that deepfake creation erased the subtle heartbeat-related signals in videos, making their absence a reliable detection cue. This study challenges that idea, showing that modern deepfake methods no longer remove these signals.

This revelation means many existing deepfake detection tools may be less effective than previously thought. Banking on pulse detection as a shield against deepfakes may no longer be an option.

How Fake Videos Acquired a Pulse

Deepfakes are now capable of carrying the signals associated with a heartbeat. (© Feng Yu – stock.adobe.com)

How can you detect a heartbeat in a video? Our heartbeats cause subtle color variations in our skin as blood flows through vessels near the surface. This phenomenon, invisible to the naked eye, can be detected using a technique called remote photoplethysmography (rPPG).

Think of rPPG as a contactless way to measure someone’s pulse through video alone. Special algorithms analyze the almost imperceptible color changes in facial skin to extract heart rate information. Until now, this technique has been considered one of the most promising methods for identifying deepfakes.

For their study, the researchers created a pipeline specifically designed to extract these physiological signals from videos. They analyzed both genuine videos and deepfakes, focusing on heartbeat-related signals that typically occur between 0.7 Hz and 3 Hz (equivalent to 42-180 beats per minute).
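As a rough illustration of the idea (not the authors' exact pipeline), a minimal rPPG estimator can average the green channel over a skin region in each frame, band-pass the trace to that 0.7-3 Hz pulse band, and read the heart rate off the dominant frequency. The `skin_mask` input and the filter design below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(frames, fps, skin_mask):
    """Toy rPPG estimator. frames: (T, H, W, 3) video array,
    skin_mask: boolean (H, W) facial-skin region (assumed given)."""
    # 1. Average the green channel over the skin region, frame by frame.
    trace = np.array([f[..., 1][skin_mask].mean() for f in frames])

    # 2. Band-pass to the plausible pulse band, 0.7-3 Hz (42-180 bpm).
    b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fps)
    pulse = filtfilt(b, a, trace - trace.mean())

    # 3. Heart rate = dominant frequency within the band.
    power = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return freqs[band][np.argmax(power[band])] * 60.0  # bpm
```

A real pipeline, like the one in the study, adds face alignment and motion compensation before this step; without them, head movement easily swamps the tiny color signal.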

The team recorded videos of twelve diverse participants under controlled conditions with uniform lighting. During each session, participants performed various activities like talking, reading, and interacting with the recording supervisor. Crucially, the researchers also measured the actual heart rates of selected participants using electrocardiogram (ECG) and photoplethysmography (PPG) sensors to establish ground truth data.

Deepfake Evolution

Using these recordings, the researchers generated their own deepfakes using multiple methods. One approach used a face-swapping system that combines two decoders to create realistic deepfake videos, yielding 858 deepfakes alongside 156 original, unedited videos. They created additional deepfakes with DeepFaceLive, an open-source tool that performs real-time face swapping.

The generated deepfakes were impressively high-quality, avoiding common mistakes that plagued earlier generations of fake videos. For additional testing, the team incorporated deepfakes from public datasets, including the Korean DeepFake Detection Dataset (KoDF).

When the researchers analyzed the deepfakes, they discovered that the fake videos contained valid heart rate signals that closely matched those in the original source videos. The correlation was particularly strong in the DeepFaceLive-generated fakes.

This means that deepfake technology is unintentionally preserving physiological signals from source videos, effectively transferring the “heartbeat signature” to the fake videos.

Many researchers had built detection systems on the premise that deepfakes couldn’t replicate these subtle biological signals. Now, those systems may need complete redesigns.

There is one silver lining. While the deepfakes preserved heart rate signals, these signals were generally weaker than in genuine videos. The average signal-to-noise ratio was -1.97 dB for genuine videos compared to -3.35 dB for deepfakes, indicating that the pulse signal in deepfakes, while present, is of lower quality.
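For context, signal-to-noise ratio in this setting typically compares spectral power near the true heart-rate frequency (and its first harmonic) against the remaining power in the pulse band, expressed in decibels. The paper's exact definition is not reproduced here; the sketch below uses one common rPPG convention, with the tolerance band as an assumption:

```python
import numpy as np

def rppg_snr_db(pulse, fps, hr_hz, tol=0.1):
    """SNR in dB: spectral power within +/- tol Hz of the heart-rate
    frequency and its first harmonic, vs. the rest of the 0.7-3 Hz
    band. Tolerance and band edges are illustrative choices."""
    power = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    near_hr = ((np.abs(freqs - hr_hz) <= tol) |
               (np.abs(freqs - 2.0 * hr_hz) <= tol)) & band
    return 10.0 * np.log10(power[near_hr].sum() / power[band & ~near_hr].sum())
```

By a measure of this kind, a deepfake's pulse spectrum is simply noisier: the peak is there, but less of the total power sits on it.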

The Next Frontier in Deepfake Detection

Deepfake technology is only growing more sophisticated. (Tero Vesalainen/Shutterstock)

What does this mean for our ability to identify fake videos? According to the researchers, we need to shift away from simply looking for the presence or absence of heart rate signals. Instead, future detection methods should analyze how these signals are distributed across the face.

It’s not just about whether a pulse exists in the video, but whether that pulse behaves naturally across different facial regions. Genuine videos show specific patterns of blood flow that deepfakes might not perfectly replicate, despite capturing the overall heart rate.
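One way to operationalize that idea, assuming per-region pulse traces have already been extracted (the region set and any decision threshold below are illustrative, not the paper's method), is to score how coherently the pulse behaves across facial regions:

```python
import numpy as np

def regional_pulse_consistency(region_signals):
    """Mean pairwise Pearson correlation between band-passed pulse
    traces from different facial regions (e.g. forehead, cheeks).
    Genuine faces should show coherent, in-phase pulses across
    regions; a low score would flag a suspect video."""
    traces = list(region_signals.values())
    corrs = [np.corrcoef(traces[i], traces[j])[0, 1]
             for i in range(len(traces))
             for j in range(i + 1, len(traces))]
    return float(np.mean(corrs))
```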

Deepfakes have evolved from obvious fakes with visible artifacts to sophisticated manipulations that can fool human observers. Political figures including Barack Obama and Donald Trump have already been targeted, raising concerns about the potential for misinformation campaigns built on fake videos.

The race between deepfake creators and detection technology is on. Deepfakes are becoming more sophisticated than anticipated, and our detection methods must evolve just as quickly.

Paper Summary

Methodology

The researchers developed a pipeline to extract and analyze heart-related signals from videos showing human faces. The pipeline requires videos at least 10 seconds long and incorporates motion compensation techniques to ensure accurate extraction of physiological signals. The process involves aligning each frame with a reference face using facial landmarks, applying a plane-orthogonal-to-skin (POS) transformation over a 10-second window, and filtering the signal to isolate heart rate frequencies (42-180 bpm). The team created a dataset of videos from twelve diverse participants recorded in a controlled studio environment, with some participants also having their ECG and PPG measured for ground-truth comparison. They then created deepfakes using multiple methods: a dual-decoder autoencoder approach generating 858 deepfake videos, DeepFaceLive creating 32 videos, and publicly available deepfakes from existing datasets.
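For readers curious what the POS step looks like, here is a compact sketch following the published plane-orthogonal-to-skin algorithm in broad strokes; the sliding-window handling is simplified relative to the paper's pipeline:

```python
import numpy as np

def pos_pulse(rgb_means, fps, window_s=10.0):
    """Plane-orthogonal-to-skin (POS) pulse extraction, simplified.
    rgb_means: (T, 3) array of mean facial R, G, B values per frame."""
    T, L = len(rgb_means), int(window_s * fps)
    pulse = np.zeros(T)
    for start in range(T - L + 1):
        C = rgb_means[start:start + L]
        Cn = C / C.mean(axis=0)                    # temporal normalization
        s1 = Cn[:, 1] - Cn[:, 2]                   # G - B projection
        s2 = -2 * Cn[:, 0] + Cn[:, 1] + Cn[:, 2]   # -2R + G + B projection
        h = s1 + (s1.std() / s2.std()) * s2        # alpha-tuned combination
        pulse[start:start + L] += h - h.mean()     # overlap-add
    return pulse
```

The projection axes are chosen to be orthogonal to typical skin tone, so intensity changes from lighting largely cancel while the blood-volume signal survives.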

Results

The study found that high-quality deepfakes contain detectable heart rate signals that originate from the source videos used to create them. When comparing the heart rates extracted from genuine videos with those from corresponding deepfakes, there was a significant correlation (average correlation coefficient of 0.57 for one method and 0.82 for DeepFaceLive fakes). The average difference between the detected heart rate and ground truth was minimal: 1.80 bpm for genuine videos, 1.85 bpm for one set of deepfakes, and 3.24 bpm for another. While the deepfakes contained heart rate signals, these signals were generally weaker than in genuine videos, with lower signal-to-noise ratios (-1.97 dB for genuine videos vs. -3.35 dB for deepfakes). This challenges the previous assumption that deepfakes don’t contain physiological signals and suggests that simply detecting the presence of a heart rate is no longer sufficient for identifying deepfakes.
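The reported statistics are standard and straightforward to compute from paired heart-rate estimates; the sketch below uses hypothetical numbers purely to show the mechanics, not the study's data:

```python
import numpy as np

# Hypothetical paired estimates in bpm, purely to show the mechanics.
hr_source = np.array([62.0, 71.5, 80.2, 58.9])    # genuine source videos
hr_deepfake = np.array([61.4, 70.8, 82.1, 60.0])  # corresponding deepfakes

# Pearson correlation between source and deepfake heart rates
# (the paper reports averages of 0.57 and 0.82 by generation method).
r = np.corrcoef(hr_source, hr_deepfake)[0, 1]

# Mean absolute difference, computed the same way as the paper's
# bpm error figures against ground truth.
mad = np.abs(hr_source - hr_deepfake).mean()
print(f"r = {r:.2f}, mean abs. difference = {mad:.2f} bpm")
```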

Limitations

The research noted several limitations in analyzing physiological signals in deepfake videos. Many existing deepfake datasets suffer from compression artifacts, low resolution, inconsistent frame rates, high background noise, and challenging illumination settings, which can degrade the quality of extracted heart rate signals. Public datasets also typically lack reference measurements like ECG or PPG sensor readings, making it difficult to validate the accuracy of extracted signals. The researchers observed that for certain videos, particularly in public datasets, the detected heart rate signals might not be related to the actual heart rate despite appearing physiologically plausible. The study also notes that current detection methods based solely on global heart rate analysis may be insufficient, suggesting a need to shift toward analyzing localized blood flow patterns.

Funding/Disclosures

The research was funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 13N15735 (FakeID) and by Horizon Europe under Grant No. 101121280 (Einstein). All participants provided written consent for the use of their recordings in the experiment and subsequent publication.

Publication Information

The paper titled “High-quality deepfakes have a heart!” was published in Frontiers in Imaging on April 30, 2025. It was authored by Clemens Seibold, Eric L. Wisotzky, Arian Beckmann, Benjamin Kossack, Anna Hilsmann, and Peter Eisert from the Computer Vision & Graphics department at Fraunhofer Heinrich-Hertz-Institute HHI in Berlin, Germany, and the Visual Computing department at Humboldt University in Berlin. The paper was accepted on February 25, 2025, after being received on September 30, 2024.

