Deepfakes detected via reverse vocal tract modeling are ‘comically’ inhuman

Scientists have long studied what kinds of sounds a dinosaur might have made, or how a person’s voice would have sounded, by working from skulls and other speech-producing structures and organs. By reversing this process and applying it to deepfakes, scientists can generate models of the voice organs that the speaker in a deepfake audio clip must have. And they are not human, as The Conversation reports.

Research by Logan Blue, a doctoral candidate in computer and information science and engineering, and Patrick Traynor, a professor in the same department at the University of Florida, along with research colleagues, uses the mechanics of speech to reconstruct the whole vocal tract, from the vocal cords to the teeth, from a deepfake audio sample, easily revealing the tell.

Their technique measures acoustic and fluid-dynamic differences between human and synthetically generated voice samples. The resulting models clearly show the difference: the synthetic voice reveals that it was not produced the way a human voice is, and that it did not come from a human body.
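For readers who want to experiment, the snippet below is a minimal sketch, not the researchers’ actual pipeline, of one textbook way to estimate a vocal tract “area function” from audio: per-frame linear predictive coding (LPC) converted to reflection coefficients and then to the cross-sectional areas of a lossless concatenated-tube model. The file name, frame length, LPC order, voiced-frame heuristic, and sign convention are all assumptions made for illustration.

```python
# Sketch: per-frame vocal-tract area estimates from speech audio via LPC.
# Not the Blue/Traynor fluid-dynamics method; a classical tube-model proxy.
import numpy as np
import librosa

def lpc_to_reflection(a):
    """Step-down (backward Levinson) recursion: LPC polynomial -> PARCOR."""
    cur = np.array(a[1:], dtype=float)      # drop the leading 1
    p = len(cur)
    k = np.zeros(p)
    for m in range(p, 0, -1):
        k[m - 1] = cur[m - 1]
        if m > 1:
            km = k[m - 1]
            cur = (cur[:m - 1] - km * cur[:m - 1][::-1]) / (1.0 - km ** 2)
    return np.clip(k, -0.999, 0.999)        # keep tube areas finite

def area_function(frame, order=12, lip_area=1.0):
    """Cross-sectional areas of the equivalent lossless tube for one frame.
    Note: the sign convention for reflection coefficients varies by text."""
    frame = np.ascontiguousarray(frame, dtype=np.float64)
    a = librosa.lpc(frame, order=order)
    k = lpc_to_reflection(a)
    areas = [lip_area]
    for km in k[::-1]:                      # walk from lips toward glottis
        areas.append(areas[-1] * (1.0 + km) / (1.0 - km))
    return np.array(areas[::-1])            # glottis -> lips

# Example: per-frame area estimates for a (hypothetical) audio file.
y, sr = librosa.load("sample.wav", sr=16000)
frames = librosa.util.frame(y, frame_length=512, hop_length=256).T
energies = (frames ** 2).sum(axis=1)
voiced = frames[energies > 0.1 * energies.max()]   # crude voiced-frame pick
areas = np.array([area_function(f) for f in voiced])
print(areas.shape)   # (n_voiced_frames, order + 1)
```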

Representation of the differences between real and artificial vocal tracts. Credit: Logan Blue et al., CC BY-ND

“When extracting vocal tract estimates from deepfake audio, we found that the estimates were often comically incorrect,” the researchers write in The Conversation. “For example, it was common for deepfake audio to result in vocal tracts having the same relative diameter and consistency as a drinking straw, unlike human vocal tracts, which are much wider and more variable in shape.”
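The “drinking straw” observation suggests a naive heuristic: an estimated tract whose cross-sections barely vary along its length is suspect. The statistic and the 0.3 threshold below are illustrative assumptions for this sketch, not values taken from the researchers’ work.

```python
# Illustrative only: flag a suspiciously uniform ("straw-like") tract.
import numpy as np

def looks_straw_like(area_frames, cv_threshold=0.3):
    """area_frames: (n_frames, n_sections) tube areas per voiced frame,
    e.g. the `areas` array from the previous sketch. Returns True when the
    average tract shape is unusually uniform in width."""
    mean_profile = area_frames.mean(axis=0)          # average tract shape
    cv = mean_profile.std() / mean_profile.mean()    # coefficient of variation
    return cv < cv_threshold                          # too uniform -> suspect
```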

Deepfakes often succeed through social engineering rather than by fooling biometric identity checks. Audio is easier to fake than video, but it may now also be easier to detect. Biometric liveness verification could be another tool for detecting attempts; Russian bank Sber is patenting a method of detecting blood flow under a speaker’s skin.

Paravision recently received funding from an anonymous partner in the Five Eyes alliance to detect deepfake videos.

Article topics

biometrics | biometric research | deepfakes | synthetic data | synthetic voice | voice biometrics
