Warping Reality: Adobe’s Neural Filters Are Ripe for Mayhem
Face-tuning apps have thrived for years in the mobile ecosystem, letting a selfie-obsessed generation make subtle (and sometimes not-so-subtle) changes to their appearance. Some consumers use the tools to get closer to the generic celeb-face look that dominates the influencer world of the Kardashians. Others use them simply to erase a few years from their visage – perhaps turning a selfie into a headshot for professional use. We’ve even seen “fun” apps that artificially age your photo, built by questionable developers with unknown motives.
But whereas face-tuning apps fall under the category of “fun,” there is something legitimizing about the incorporation of similar technologies into Adobe Photoshop – the powerful image-editing software that has been so ubiquitous as to become a verb.
With the newly released version 22, Adobe introduced Neural Filters – a cloud-based beta feature built on machine learning and generative adversarial networks (GANs), exposed through a simple set of switches and sliders.
Whereas convincing image manipulation used to take a highly skilled operator hours of work, now anyone with an Adobe Creative Cloud subscription can make believable alterations to a photo. The implications are frightening. In this age of disinformation, one doesn’t need pristine output to move the masses. Indeed, a low-quality, intentionally slowed-down video of Nancy Pelosi was enough to convince some viewers that she was drunk and slurring her words. And even when viewers know a video is false, the power of visual imagery reinforces confirmation bias.
And thus any image can be easily altered and meme-ified, with the potential to go viral. Critical thinking is suspended. Damage done.
It is one thing to read about a technology, but when I tried it out for myself, I was filled with a sense of dread. Using a low-resolution, Creative Commons portrait of Joe Biden, I reverse-aged him by about 40 years in two minutes.
Adobe’s promotional materials use the example of a baby looking away from the camera. A few tweaks later, she gazes in the same direction as her mother. The presentation of innocuous, anodyne use cases ignores the obvious potential for abuse.
As Washington Post photo editor Olivier Laurent tweeted: “I think it’s time for the developers at Adobe to realize that not all ideas are good ideas, especially at a time when their ‘tools’ are making it easier for some to manipulate photos and for people to question whether what is shown is real or not.”
Pandora’s box is already open, and there is no turning back the availability of technologies like deepfakes, 100% synthetic faces, and other reality-bending tools like Adobe’s Neural Filters. It will only be minutes before the next Macedonian teen generates the fake news – now accompanied by photo-realistic, synthetic images – that sways an election somewhere in the world. Maybe even in your backyard.
Photography continues its slow march away from reality. Even with burst-mode enabled phones, we seem more intent than ever on capturing life as we want to remember it, not as it was.
Correction: A previous version of this article indicated that this change was in Photoshop version 26. The current version is 22. We regret the error.
Today Adobe also talked about upcoming technology to show whether a photo has been manipulated. When it is ready, photojournalists should be REQUIRED to use it on all of their assignments.
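Adobe hasn’t shared implementation details, so purely as a hypothetical sketch of how such verification can work: one common provenance approach is to cryptographically sign a hash of the image at capture time, so that any later pixel edit fails the signature check. The device key and byte strings below are stand-ins, not anything from Adobe’s system:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a key that would be embedded in camera hardware.
camera_key = Ed25519PrivateKey.generate()

def sign_image(image_bytes: bytes) -> bytes:
    """Sign the SHA-256 hash of the image at capture time."""
    return camera_key.sign(hashlib.sha256(image_bytes).digest())

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the pixels are unchanged since signing."""
    try:
        camera_key.public_key().verify(
            signature, hashlib.sha256(image_bytes).digest()
        )
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."  # placeholder image data
sig = sign_image(original)
print(verify_image(original, sig))            # True: untouched
print(verify_image(original + b"edit", sig))  # False: manipulation detected
```

In practice the signing key would live in camera hardware and the signature would travel in the file’s metadata, ideally alongside a record of any edits.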
The scary part is what Adobe itself says: “Our goal is to systematically replace time-intensive steps with smart, automated technology wherever possible. With the addition of these five major new breakthroughs, you can free yourself from the mundane, non-creative tasks and focus on what matters most – your creativity.”
Don’t they realize that these time-intensive steps are what prevent abuse? To be clear, there will always be people willing to put in the time to manipulate an image to their liking, especially when there are economic or political aims. But Adobe is making it easy for anyone with a grudge or an agenda to get the same results with none of the work. I’m a firm believer that learning an intensive trade builds an awareness of the responsibilities that come with those skills.
This just means that even better deepfake generators can be made with a GAN: you train the deepfake generator directly against the deepfake detector, so every improvement in detection becomes a training signal for the generator. There’s no winning.
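To make that loop concrete, here is a minimal, illustrative PyTorch sketch of adversarial training. The tiny models, dimensions, and random stand-in data are toy assumptions for clarity, not any real deepfake system’s architecture:

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 16, 64  # toy sizes, stand-ins for real image data

# Generator: turns random noise into a fake "image" (here just a vector).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, IMG_DIM)
)
# Detector (discriminator): outputs a logit for "this input is real."
detector = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1_000):
    real = torch.randn(32, IMG_DIM)  # placeholder for a batch of real photos
    fake = generator(torch.randn(32, LATENT_DIM))

    # Detector step: learn to label real as 1 and fake as 0.
    d_loss = (loss_fn(detector(real), torch.ones(32, 1))
              + loss_fn(detector(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the detector call fakes real.
    # Every improvement in the detector sharpens this training signal.
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two updates chase each other: a stronger detector produces larger gradients for the generator, which is exactly why a public manipulation detector can be co-opted to train a better forger.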
“You can’t trust the medium, you can only trust the source.” – Galen Rowell
These are the same old problems repackaged. I don’t think we can blame a software company for innovating.
“Photography continues its slow march away from reality. Even with burst-mode enabled phones, we seem more intent than ever on capturing life as we want to remember it, not as it was.”
This just makes me so sad. But I also feel, or at least want to believe, that there will always be a need and a market for reality – for capturing life as we see it – because fundamentally, as humans, we all crave it whether we know it or not.