Relying on AI Detectors Raises Censorship Concerns After Real Videos Are Labeled as Fake

The idea of using AI to detect, and perhaps even censor, AI-generated content online has been gaining ground over the past year. However, if recent tests are to be believed, the accuracy of such technology is far from perfect, meaning genuine content could be falsely censored wherever that technology is trusted.

In the fight against deepfake videos, Intel has released a new system named “FakeCatcher” that can allegedly distinguish between genuine and manipulated digital media. The system’s effectiveness was put to the test with a mixture of real and doctored clips of former President Donald Trump and current President Joe Biden.

Intel’s approach reportedly relies on photoplethysmography (PPG), a physiological signal that reveals changes in blood circulation, combined with eye-movement tracking, to identify and expose deepfakes. Ilke Demir, a scientist on the Intel Labs research team, explained that the system judges authenticity against human benchmarks such as subtle changes in a person’s blood flow and the consistency of their eye movements, the BBC reported. These natural human characteristics are detectable in real videos but absent from videos made with AI tools, Demir added.

However, preliminary testing revealed that this technology might not be foolproof. Despite the company’s bold claim of 96% accuracy for FakeCatcher, the test results tell a contrasting story. The system detected lip-synced deepfakes efficiently, missing only one of several instances. Interestingly, the real ordeal emerged…
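To make the PPG idea concrete: a remote-photoplethysmography check can, in principle, average skin-pixel color over a face region frame by frame and look for a periodic component in the human heart-rate band. The sketch below is a toy illustration of that general technique only; the function names, the green-channel choice, and the frequency band are assumptions of this example, not Intel’s FakeCatcher pipeline.

```python
import numpy as np

def rppg_signal(frames, face_box):
    """Average green-channel intensity inside a face region, per frame.

    frames: iterable of HxWx3 uint8 arrays; face_box: (y0, y1, x0, x1).
    A real system would first locate the face; here the box is given.
    """
    y0, y1, x0, x1 = face_box
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency_hz(signal, fps):
    """Strongest frequency component in the signal, ignoring the DC term."""
    sig = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]

def looks_like_real_pulse(signal, fps, lo=0.7, hi=4.0):
    """Heuristic: a genuine face video should show a periodic brightness
    variation in the human heart-rate band (~42-240 bpm)."""
    return lo <= dominant_frequency_hz(signal, fps) <= hi
```

For example, a synthetic 30 fps clip whose face region brightens and dims at 1.2 Hz (a 72 bpm “pulse”) would pass this heuristic, while a clip with no periodic skin-tone variation in that band would not.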