New AMP tool unveiled to help fight deepfake media

A person sits at a computer while someone next to them lies on a bed
(Image credit: Authenticated Media Protection (AMP))

Digital identity security firm ID Crypt Global has unveiled its Authenticated Media Protection (AMP) product, which creates an invisible cryptographic watermark that can be used to validate videos, photos, and other relevant media. 

Crucially, the watermark links the file directly to a verifiable digital identity, in theory making it impossible to re-publish files anonymously.

Modifying or re-publishing a file breaks the watermark, immediately flagging it as untrusted. Users can verify files on the fly using a free browser extension for Chrome or Microsoft Edge.
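ID Crypt Global hasn't published AMP's internals, but the behaviour described above, an identity-bound, tamper-evident mark, maps closely onto a standard digital signature. The Python sketch below is a hypothetical illustration of that underlying principle only, not AMP's actual scheme (a real invisible watermark is embedded in the media itself rather than carried as a detached signature): a file's hash is signed with a key that, in a real deployment, would be bound to a verified identity, and verification fails the moment a single byte of the file changes.

```python
# Hypothetical sketch of identity-bound, tamper-evident media signing
# using the "cryptography" package. Not AMP's actual, unpublished scheme.
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(path: str, identity_key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of a media file with an identity-bound key."""
    digest = hashlib.sha256(Path(path).read_bytes()).digest()
    return identity_key.sign(digest)


def verify_media(path: str, signature: bytes, key: Ed25519PublicKey) -> bool:
    """Return True only if the file is byte-identical to what was signed."""
    digest = hashlib.sha256(Path(path).read_bytes()).digest()
    try:
        key.verify(signature, digest)
        return True   # untouched since publication
    except InvalidSignature:
        return False  # modified or re-published: do not trust


# Usage: any edit to clip.mp4 after signing makes verify_media return False.
identity_key = Ed25519PrivateKey.generate()
signature = sign_media("clip.mp4", identity_key)
print(verify_media("clip.mp4", signature, identity_key.public_key()))
```

In a scheme like this, the trust ultimately comes from the binding between the public key and a verified real-world identity, which is presumably the role played by ID Crypt Global's digital identity platform.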

The growing problem with fake news

Deepfake media is a growing problem. Celebrities and public figures such as Taylor Swift and Tom Cruise have recently been the victims of phoney video and audio material, and the tools used to create these fake clips are becoming more sophisticated every day.

Hundreds of apps capable of manipulating, altering, and producing this material are freely available on the open web. Some can only do limited face swapping, such as overlaying a new face onto a body, but the most sophisticated will produce a whole video complete with a fake voice and synchronized head and lip movements.

“The rapid rise of deepfake media is extremely concerning, and we’re already starting to see the severe consequences that can come as a result of sharing falsified images and videos,” says Lauren Wilson-Smith, CEO of ID Crypt Global. “The good news is we’re seeing a huge level of investment into the detection of fake media, and these technological advancements do, at least, allow us to level the playing field.”

Products like AMP, Intel’s FakeCatcher, and Microsoft’s Video Authenticator are designed to catch such false material at source, which is especially important given the level of misinformation being spread during election cycles.

Many of the world’s major democracies have fallen prey to election interference in one way or another, including fake audio recordings in Slovakia and counterfeit AI avatars in Indonesia. In India, AI versions of dead politicians have even been brought back to life. All of this adds a far more serious dimension to the ongoing war against fake news.

The global issue is so serious that the fraud detection sector has grown 194% over the past decade, from £372 million in 2014 to over £1.1 billion today. Generative AI tools and the expected rise of text-to-video models will only make the situation worse. A recent report from the US National Security Agency predicts that the generative AI market will exceed $100 billion by 2030, growing at more than 35% per year.

The agency splits ‘synthetic media’ into categories. Shallow or cheap fakes involve crudely manipulating media without the use of AI. This includes deliberately slowing down audio to simulate intoxication, or copying and pasting audio, video, or image material to mislead the recipient or the public. Deepfake media proper is multimedia that has been fully or partially created using some form of machine learning. This synthetic material can include compromising videos of politicians or celebrities, highly trained voice cloning, or in some cases completely fake online conference calls.

A woman sits at a computer on a video call, with a face that is not hers reflected back at her

(Image credit: Authenticated Media Protection (AMP))

In a shocking example earlier this year, a clerk at a major Hong Kong finance company was conned out of $25 million in a faked video conference call with what appeared to be the firm’s Chief Financial Officer. The CFO was in fact an AI construct, and the whole meeting was engineered to get the clerk to sign over the money without the usual checks.

In another case, the British CEO of an energy company received a call from what he believed was the German CEO of his parent company, asking for an urgent transfer of $243,000. The voice sounded legitimate but was in fact synthetic, and the UK company lost the full amount to the fraudulent transfer.

The NSA has set out a list of ways that companies and individuals can minimize the risk of falling victim to this kind of crime. First, they should make full use of verification technologies like digital watermarking, and deploy real-time checks in the form of multi-factor authentication, PINs, biometrics, or other steps to confirm that an approach is genuine.
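As a rough illustration of that layered approach, here is a minimal Python sketch that gates a high-stakes action behind several independent checks, so that a convincing deepfake alone is never enough. The function and parameter names are hypothetical placeholders, not any real product’s API.

```python
import hmac


def approve_transfer(watermark_valid: bool,
                     otp_entered: str,
                     otp_expected: str,
                     callback_confirmed: bool) -> bool:
    """Approve a high-value request only if every independent check passes."""
    return all([
        watermark_valid,                                 # media provenance verified
        hmac.compare_digest(otp_entered, otp_expected),  # second factor (PIN/OTP)
        callback_confirmed,                              # call-back on a known-good number
    ])
```

The point of the design is independence: a forged video defeats only the first check, while the PIN and the out-of-band call-back still stand in the way.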

In addition, the agency recommends casting a more critical eye over all communications, especially those involving the transfer of money or sensitive privacy or security details. It also specifically points to the antispoofing.org wiki as a good source of detection tools.

Things may not be quite as bad as the Wild West yet, but it definitely pays to be careful out there. 

Nigel Powell
Tech Journalist

Nigel Powell is an author, columnist, and consultant with over 30 years of experience in the tech industry. He produced the weekly Don't Panic technology column in the Sunday Times newspaper for 16 years and is the author of the Sunday Times book of Computer Answers, published by Harper Collins. He has been a technology pundit on Sky Television's Global Village program and a regular contributor to BBC Radio Five's Men's Hour. He's an expert in all things software, security, privacy, mobile, AI, and tech innovation.