Facebook and Microsoft team up on $10 million Deepfake Detection Challenge to fight the growing threat of deceptive AI-altered videos

The Daily Mail

As concerns grow over the threat of AI-generated ‘deepfake’ videos, some of the top names in tech have revealed a plan to fight fire with fire.

Deepfakes can, quite literally, put words in a person’s mouth; in recent examples, footage of celebrities and politicians has been altered to convincingly show them doing or saying things they never really did.

Facebook, Microsoft, and the Partnership on AI have now teamed up with researchers from a slew of US universities to launch the Deepfake Detection Challenge, which seeks to create a dataset of such videos in order to improve the identification process in the real world.

Facebook is commissioning consenting actors for the effort, and says it has set aside $10 million for related research and prizes.

Deepfakes have rapidly grown more realistic in the short time since they first sprang into existence.

And the tech industry has struggled to keep up.

By partnering with academics and creating a high-quality dataset, Facebook is hoping to harness the tools of the AI industry to get to an effective solution – and quickly.

Facebook has tapped academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany, SUNY for the challenge.

‘Deepfake techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,’ Facebook CTO Mike Schroepfer wrote in a blog post published today.

‘Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes.’

Facebook says it will not include user data from its own site to build the dataset. Instead, it will bring in ‘clearly consenting participants.’

The resulting dataset will be put through tests at the International Conference on Computer Vision in October before it is released later this year at the Conference on Neural Information Processing Systems (NeurIPS) in December.

According to Facebook, it’s important that the dataset is ‘freely available for the community to use.’

Though many have attempted to come up with ways to detect deepfakes in recent months, a single, effective solution has yet to emerge.

‘People have manipulated images for almost as long as photography has existed. But it’s now possible for almost anyone to create and pass off fakes to a mass audience,’ said Antonio Torralba, Professor of Electrical Engineering & Computer Science and Director of the MIT Quest for Intelligence, in a statement on the challenge.

‘The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality.’

WHAT IS A DEEPFAKE? 

Deepfakes are so named because they are made using deep learning, a form of artificial intelligence, to create fake videos of a target individual.

They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person.

The computer program then learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.

With enough video and audio of someone, you can combine a fake video of a person with fake audio and get them to say anything you want.
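The training process described above can be sketched at a very high level. The following is a minimal, illustrative numpy sketch, not any real deepfake pipeline, of the shared-encoder/per-person-decoder idea behind many face-swap systems: one encoder learns a compact ‘expression’ code from anyone’s face, and each person gets their own decoder that redraws that expression as their face. All names, dimensions, and the random toy ‘frames’ are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frames": flattened 8x8 grayscale faces for two people (random stand-in data).
frames_a = rng.normal(size=(100, 64))  # footage of person A
frames_b = rng.normal(size=(100, 64))  # footage of person B

# A shared linear encoder compresses any face into a small "expression" code;
# each person has their own linear decoder that reconstructs *their* face.
encoder = rng.normal(scale=0.1, size=(64, 8))
decoder_a = rng.normal(scale=0.1, size=(8, 64))
decoder_b = rng.normal(scale=0.1, size=(8, 64))

def train_step(frames, encoder, decoder, lr=1e-3):
    """One gradient-descent step minimising mean squared reconstruction error."""
    code = frames @ encoder          # (100, 8) expression codes
    recon = code @ decoder           # (100, 64) reconstructed faces
    err = recon - frames
    grad_dec = code.T @ err / len(frames)
    grad_enc = frames.T @ (err @ decoder.T) / len(frames)
    return encoder - lr * grad_enc, decoder - lr * grad_dec

# Train both decoders against the *same* shared encoder.
for _ in range(200):
    encoder, decoder_a = train_step(frames_a, encoder, decoder_a)
    encoder, decoder_b = train_step(frames_b, encoder, decoder_b)

# The "swap": encode person A's expressions, then decode them as person B's face,
# producing fake frames of B performing A's movements.
fake_b = (frames_a @ encoder) @ decoder_b
print(fake_b.shape)  # one fake frame of B per real frame of A: (100, 64)
```

Real systems use deep convolutional networks and far more data, but the swap step is the same idea: run the target’s decoder on the source’s encoded expressions.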
