How to Detect Deepfake Satellite Imagery?
Deepfake imagery is becoming a problem for individuals and organizations alike, and satellite imagery is no exception. There is growing concern that deep learning techniques will be used to fabricate satellite imagery for nefarious purposes.
Detecting Deepfake Satellite Imagery
Deepfakes are not just used to poke fun at people or organizations; they are also seen as a potential threat to countries and their security. To counter deepfakes, algorithms have been created to detect where images might have been manipulated.
One technique is the common fake feature network (CFFN), often used along with standard convolutional neural networks (CNNs), which uses pairwise learning to detect discriminative features that suggest an image has been changed or altered. Other techniques include spatial feature extraction using SSTNet, which combines spatial feature extraction with temporal feature extraction across changing imagery.
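To make the pairwise idea concrete, the snippet below is a minimal sketch of contrastive (pairwise) learning for separating real image patches from manipulated ones. The small network, layer sizes, and loss margin are illustrative assumptions for this article, not the published CFFN architecture.

```python
# Minimal sketch of pairwise (contrastive) learning for fake-feature
# detection. The architecture, layer sizes, and margin are illustrative
# assumptions, not the published CFFN design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncoder(nn.Module):
    """Small CNN that maps an image patch to an embedding vector."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull embeddings of same-class pairs (real/real or fake/fake)
    together; push real/fake pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * d.pow(2) +
                      (1 - same_label) * F.relu(margin - d).pow(2))

# Example training step on a batch of 3-channel, 64x64 patch pairs.
encoder = FeatureEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

img_a = torch.randn(8, 3, 64, 64)         # stand-ins for real/fake patches
img_b = torch.randn(8, 3, 64, 64)
same = torch.randint(0, 2, (8,)).float()  # 1 = same class, 0 = real vs fake

loss = contrastive_loss(encoder(img_a), encoder(img_b), same)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained this way, the embeddings tend to cluster genuine imagery apart from manipulated imagery, and a simple classifier on top of them can flag suspect tiles.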
In general, researchers have been countering deepfake neural network models with other neural networks that can, at times, reverse engineer the manipulation, or at least detect shapes and pixels that are altered from one frame to another, or flag alterations that deviate from what is expected. Some techniques even look for changes to noise or other artefacts common in imagery that might be missing or altered in deepfakes. While many of these techniques were developed for image and video content online, researchers have found that they are useful for satellite imagery as well.
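As a simple illustration of the noise-based idea, the sketch below compares the high-frequency noise residual of a suspect image tile against a trusted reference tile; generated imagery often lacks the sensor noise found in genuine captures. The high-pass kernel, the statistics, and the tolerance are illustrative assumptions, not a published detector.

```python
# Minimal sketch of noise-residual analysis. The kernel, statistics, and
# threshold below are illustrative assumptions, not a published detector.
import numpy as np
from scipy.signal import convolve2d

# Simple high-pass kernel that isolates the noise residual of an image.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float) / 8.0

def noise_residual(gray_image):
    """Return the high-frequency residual of a 2-D grayscale array."""
    return convolve2d(gray_image, HIGH_PASS, mode="same", boundary="symm")

def residual_stats(gray_image):
    """Summarise residual energy: mean absolute value and variance."""
    r = noise_residual(gray_image)
    return np.mean(np.abs(r)), np.var(r)

def looks_suspicious(suspect, reference, tolerance=0.5):
    """Flag the suspect tile if its residual statistics differ from a
    trusted reference tile by more than `tolerance` (relative)."""
    s_mean, s_var = residual_stats(suspect)
    r_mean, r_var = residual_stats(reference)
    return (abs(s_mean - r_mean) / (r_mean + 1e-9) > tolerance or
            abs(s_var - r_var) / (r_var + 1e-9) > tolerance)

# Example with synthetic data: a noisy "real" tile versus a smoother,
# generator-like tile whose high-frequency noise has been suppressed.
rng = np.random.default_rng(0)
real_tile = rng.normal(0.5, 0.1, (256, 256))
fake_tile = convolve2d(real_tile, np.ones((5, 5)) / 25.0,
                       mode="same", boundary="symm")  # blurring removes noise
print(looks_suspicious(fake_tile, real_tile))  # likely True
```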
Although techniques to counter deepfakes are now emerging, the co-evolution of deepfake and counter-deepfake algorithms points to an arms race: we are likely to see ever more sophisticated deepfake techniques alongside ever more sophisticated methods to counteract them. Neural network models can be made more sophisticated by adding layers that alter or detect given features, which makes them well suited both to creating deepfakes and to countering them. So long as the incentives for creating harmful or deceptive imagery remain strong, deepfake satellite imagery will likely continue to be used for a variety of reasons.
Deepfake images have become almost ubiquitous online, so much so that we are often unsure whether what we are looking at is real or computer-generated. The same is true of satellite imagery, continuing a long-standing trend of geospatial manipulation. Thankfully, methods are now available to counteract common deepfake neural networks; even so, we are likely seeing only the beginning of deepfake algorithms applied to geospatial data. We should not expect to see the end of deepfake satellite imagery any time soon, even as better countermeasures for detecting it become available.