
I've been tracking lip-sync errors in political deepfakes, and the gap between the good ones and the bad ones has shrunk from about two seconds to almost nothing in just the last eight months.

What specific new tool or method do you think is closing that gap so fast? It's getting scary good.
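For anyone who wants to track this themselves, here's a minimal sketch of one way to quantify lip-sync error: cross-correlate the audio loudness envelope with a per-frame mouth-opening signal (e.g. a lip-landmark distance from a face tracker) and read the offset off the correlation peak. This is my own toy, not anything described in the thread; the frame rate, the envelope, and the synthetic signals are all assumptions, and a real measurement would use an actual audio track and landmark tracker.

```python
import numpy as np

FPS = 25  # assumed video frame rate

def sync_offset_seconds(audio_env, mouth_open, fps=FPS):
    """Lag (in seconds) that best aligns mouth motion to audio.
    Positive means the mouth trails the audio."""
    # Normalize both signals so the correlation peak reflects shape, not scale.
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    m = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-9)
    corr = np.correlate(m, a, mode="full")       # correlation at every relative shift
    lag_frames = np.argmax(corr) - (len(a) - 1)  # shift where the peak lands
    return lag_frames / fps

# Synthetic demo: the mouth signal is the audio envelope delayed by 5 frames.
rng = np.random.default_rng(0)
audio = np.convolve(rng.random(220), np.ones(8) / 8, mode="valid")[:200]
mouth = np.roll(audio, 5)  # mouth trails audio by 5 frames (0.2 s at 25 fps)
print(sync_offset_seconds(audio, mouth))  # 0.2
```

On real footage the peak is noisier, so you'd typically average the estimate over sliding windows rather than trust a single global correlation.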
3 comments

pat_murray53
I saw a clip last week of a local news anchor from ten years ago that had been perfectly re-dubbed to say something he never did. The scary part was how his smile matched the new audio. It made me remember when you could spot a fake by a stiff jaw, but that tell is gone now. The speed of this change is what keeps me up at night. We're not ready for what comes next.
6
averymartin
That point about the smile matching the audio is key. What people are missing is the new training data. It's not just more videos: they're feeding the AI pure motion-capture data from high-end video game animation. They map a politician's face onto a perfectly synced digital puppet, so the model learns from that flawless movement, not from bad lip flaps. It's borrowing from a solved problem.
1
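The "digital puppet" mapping described above can be illustrated with a standard technique: fit a similarity (Procrustes) transform between the puppet's neutral lip landmarks and the target face's, then carry the puppet's per-frame lip motion over in the target's coordinate frame. This is purely a speculative toy on my part; the landmark coordinates are made up, and real pipelines are far more involved (3D morphable models, blendshapes, neural rendering).

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale/rotation/translation mapping src points (rows) to dst."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)
    # Orthogonal Procrustes: best-fit rotation (reflection correction omitted for brevity).
    u, _, vt = np.linalg.svd(dst_c.T @ src_c)
    rot = u @ vt
    def apply(pts):
        return (pts - src_mean) @ (scale * rot).T + dst_mean
    return apply

# Made-up 2D lip landmarks: puppet neutral pose and target face neutral pose.
puppet_neutral = np.array([[0., 0.], [2., 0.], [1., 1.], [1., -1.]])
target_neutral = puppet_neutral * 0.8 + np.array([10., 5.])  # smaller, shifted face
warp = similarity_transform(puppet_neutral, target_neutral)

# One frame of puppet lip motion (mouth opening): retarget it onto the face.
puppet_frame = puppet_neutral + np.array([[0, 0], [0, 0], [0, .3], [0, -.3]])
print(warp(puppet_frame))
```

The appeal of the approach the commenter describes is exactly this: once the puppet's motion is perfectly synced to the audio, the alignment step is a solved geometry problem rather than a learned lip-sync problem.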
terryscott
10d ago
Honestly that's so true, saw a fake clip of my own mayor last month and the way his eyes crinkled when he "laughed" was just too perfect. Tbh we're already way past being able to trust our own eyes.
3