Last week, Facebook announced that its researchers are partnering with Michigan State University to develop artificial intelligence (AI) that can reverse engineer deepfake media and identify their origin. Facebook's AI is meant to identify the algorithms and patterns used by the generative AI that created a deepfake image. Eventually, it will also trace similarities in those algorithms and patterns across different deepfake images; when two or more images show very similar patterns, the assumption is that they most likely come from the same creator or group of creators. The AI can also identify "fingerprints" (small, characteristic flaws) in deepfake images and use those fingerprints to group images together. The ability to trace images back to a common source will be especially helpful when deepfakes are created with malicious intent: once a source is identified, action can be taken to remove the images from the Internet and hold the creators accountable.
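The grouping idea can be sketched in a few lines of code. This is a hypothetical illustration, not Facebook's actual method: the `group_by_fingerprint` function, the similarity threshold, and the toy fingerprint vectors below are all assumptions, standing in for whatever representation the real system extracts from an image.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_by_fingerprint(fingerprints, threshold=0.9):
    """Greedily group images whose fingerprints look alike.

    fingerprints: dict mapping image id -> 1-D numpy array.
    Returns a list of groups (lists of image ids) that plausibly
    share the same creator or generative model.
    """
    groups = []
    for image_id, vec in fingerprints.items():
        for group in groups:
            # Compare against the first member as the group's representative.
            rep = fingerprints[group[0]]
            if cosine_similarity(vec, rep) >= threshold:
                group.append(image_id)
                break
        else:
            groups.append([image_id])
    return groups

# Toy example: img_a and img_b have near-identical fingerprints,
# so they end up in the same group; img_c stands alone.
fps = {
    "img_a": np.array([0.90, 0.10, 0.00]),
    "img_b": np.array([0.88, 0.12, 0.01]),
    "img_c": np.array([0.00, 0.20, 0.95]),
}
print(group_by_fingerprint(fps))  # [['img_a', 'img_b'], ['img_c']]
```

A real system would use learned, high-dimensional fingerprints and a more robust clustering method, but the principle is the same: images whose residual patterns are close together are attributed to a common source.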
Facebook currently claims that deepfake media are not a common problem on its platform, but it wants to prevent them from polluting Facebook and the various other platforms it owns. The company also reported that its AI performed well in training, identifying over 100 deepfake images.
The code behind the AI, the test data set, and the trained models have been released to the public to further research on deepfake detection. Unfortunately, with this information public, deepfake creators can try to stay ahead of the algorithm to avoid detection, so Facebook's original AI will need to be updated as deepfake technology advances.