Last night, I learned that I am going to be a disgusting, decrepit old man. My beard will be gray, my cheeks will puff out, and my nose will get redder and more bulbous. At least, according to FaceApp.
Like many others online, including Gordon Ramsay, I recently downloaded and used FaceApp, which uses artificial intelligence to alter the appearance of someone in a photograph. It can make you look older or younger (at least if you don't have a beard like me), change your hairstyle, or add makeup.
Amid the sudden popularity of FaceApp, some online are raising concerns about the privacy implications of the company’s retention of user data. Wireless Lab, the Russian company that runs FaceApp, retains the ability to use your photos, name, and likeness for any purpose, according to its terms of service. The company is also headquartered in Russia, where tech companies are expected to acquiesce to government demands. Some observers were concerned that the application would upload a user’s entire Camera Roll to the service.
But there’s a bigger question behind the concern: Why would FaceApp want these images to begin with? Even if it weren’t gathering these pictures as part of a state-sponsored surveillance campaign, how could it put them to use?
The practice of altering human faces is well-worn in A.I. research, but it typically boils down to two techniques, as described in an IEEE research paper from earlier this year: either having an algorithm analyze pairs of images of the same person at different ages, or showing an algorithm separate sets of pictures of younger people and older people so it can identify features of aging that are independent of a person's identity. Judging from FaceApp's creations, the app tends to whiten hair, add wrinkles and jowls, and redden skin.
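To make the second technique concrete, here is a deliberately tiny sketch of the idea (not FaceApp's actual method, which is proprietary): compare a group of "young" images with a group of "old" images, extract the average difference between the groups as an aging direction, and apply that direction to a new face. Real systems learn this in a high-dimensional feature space with neural networks; the toy 4×4 grayscale arrays and the `age_face` helper below are illustrative assumptions only.

```python
import numpy as np

# Stand-in "faces": 4x4 grayscale arrays with values in [0, 1].
rng = np.random.default_rng(0)
young_faces = rng.uniform(0.4, 0.6, size=(100, 4, 4))  # smooth midtones
old_faces = young_faces + 0.2                          # lighter: gray hair, pale skin

# The "aging direction" is the average per-pixel difference between the
# two groups -- what changes with age, independent of any one identity.
aging_direction = old_faces.mean(axis=0) - young_faces.mean(axis=0)

def age_face(face, strength=1.0):
    """Apply the learned average aging change to a previously unseen face."""
    return np.clip(face + strength * aging_direction, 0.0, 1.0)

new_face = rng.uniform(0.4, 0.6, size=(4, 4))
aged = age_face(new_face)
print(round(float((aged - new_face).mean()), 2))  # the learned shift, 0.2
```

The point of even this toy version is the data requirement: the aging direction is an average over the whole training set, so it is only as good as the number and variety of faces it was computed from.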
The common factor among all of these applications is data. To train these algorithms, you don't just need one or two examples of a young person and an old person, but thousands. These datasets already exist online. A compilation of facial aging datasets from 2018 shows that hundreds of thousands of images are already available for researchers to use. If you posted pictures of yourself online under certain Creative Commons licenses, you could be part of them.
Earlier this year, an NBC News report detailed how IBM created a facial recognition dataset of more than a million people by scraping publicly available images on Flickr, a popular photo-sharing service.
Having its own facial recognition dataset would allow FaceApp to control the quality of the data that’s used to train its algorithms. Having exclusive access to high-quality photos is a differentiator for many big tech companies — for instance, Facebook and Google lead the field in computer vision and facial recognition in part because of their massive datasets built from user data. Which is to say: All of the photographs you’ve uploaded to these services over the years have been used to train the companies’ technology.
That kind of technological advancement is easy to gloss over, but Facebook clearly leverages user images for facial recognition. In a 2014 research abstract, the company disclosed a facial dataset of more than 4 million images. And again, that was five years ago.
FaceApp and Facebook are clearly two very different beasts. One is an opaque tech company operating under unclear privacy regulations and with little oversight of how it uses its technology, and the other is FaceApp.
The point here is that FaceApp isn't the only tool harvesting your data. If you use Google or Facebook, you're already inside a global panopticon of advertising data and tracking pixels. That doesn't mean you should hand over your data to everyone else, of course.
We should be wary of startups that can scoop up device information and photos and use them in any way they see fit. Your photos could be sold and used to impersonate you, set up fake online accounts, troll people on Twitter, and run a thousand other scams no one has thought of yet.