Last week, a slew of people fell for another viral trend without giving it much thought. The photo editing app FaceApp added a new feature that ages a person’s photo to show what that person would look like when they are old. The app raced to the top of the charts as people decided it would be fun to see what they’ll look like in the future. Soon, though, chaos ensued as misinformation and rumors sprang up around the app, raising concerns about privacy and trust when it comes to facial recognition technology.
A year ago, Microsoft called for public regulation of facial recognition. The company was hoping to ensure that facial recognition technology was used for good. But clearly issues around facial recognition and AI are still prevalent today.
There are police departments in the United States using Amazon’s facial recognition technology, Rekognition, and Amazon has pitched the technology to Immigration and Customs Enforcement (ICE) as well. But facial recognition is a potentially dangerous technology, given its capacity for bias based on race, gender, sexuality and other characteristics.
Even with the potential dangers of facial recognition technology, though, there still isn’t much regulation around it.
Local and state governments may have their own restrictions, but there is no federal regulation for the country as a whole.
“The technology companies, they’re going to continue to build [facial recognition tech] as long as they can see the impact that it’s going to have on their bottom line,” said Sean McGrath, digital privacy expert at ProPrivacy. “And the government agencies are going to continue to consume it as long as they can see the benefits from their perspective. So, where do the checks and balances come in?”
If it’s not Amazon providing these services, it will be another company. And if it’s not another company, these government agencies will just build whatever system they need themselves, McGrath explained. This is why he believes the conversation needs to focus not on who is providing these technologies, but on how we can effectively legislate their ethical use. “If we can’t do that,” McGrath asked, “should we be using [facial recognition tech] at all?”
McGrath also explained that an issue with conventional legislation is that things take so long to get passed that by the time they do, the tech industry has already moved on to the next iteration.
The issues around FaceApp also illustrate a problem with the way we interact with technology. Soon after the app exploded in popularity, a lot of misinformation began to spring up. There were rumors that the app was secretly uploading users’ entire photo galleries, for which there is no evidence. Some concerned Apple users also didn’t understand how the app got access to photos without being explicitly asked to grant access to the photo library. According to TechCrunch, this is not sneaky behavior on FaceApp’s part, but rather a feature of an Apple API introduced in iOS 11. The API allows an app to receive a single, user-selected photo without prompting the user with a system dialog for permission to the entire photo library.
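For readers curious how that works in practice, here is a minimal sketch (not FaceApp’s actual code; the class name is hypothetical) of the iOS 11+ behavior TechCrunch described, using Apple’s UIImagePickerController, which runs out of process and hands the app only the photo the user picks:

```swift
import UIKit

// Minimal sketch, assuming iOS 11+. The system picker runs out of process,
// so the app never needs blanket photo-library permission and no
// authorization dialog is shown to the user.
class SinglePhotoViewController: UIViewController,
                                 UIImagePickerControllerDelegate,
                                 UINavigationControllerDelegate {

    func pickSinglePhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary   // does not trigger a photo-library permission prompt
        picker.delegate = self
        present(picker, animated: true)
    }

    // The delegate callback hands back only the selected image; the rest of
    // the user's photo library remains inaccessible to the app.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let selectedImage = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        // selectedImage can now be edited or uploaded by the app.
        _ = selectedImage
    }
}
```

In other words, the single-photo upload users observed is consistent with standard platform behavior rather than a permissions bypass.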
In addition, the fact that the app is designed by a Russian company led to some concern, likely due to our proximity to the 2020 election and evidence of Russian interference in the 2016 election. TechCrunch has reported that FaceApp’s founder confirmed that storage and cloud processing are performed on AWS and Google Cloud servers located outside of Russia.
“I would say, facial recognition technology in general isn’t very popular right now…So you’ve got Russia involved and now you’ve got facial recognition involved, which isn’t a popular technology, and it’s stirring up a lot of noise,” said Kevin Freiburger, director of identity programs at Valid.
FaceApp’s popularity is a stark reminder that people are pretty much willing to hand over any information to participate in a viral trend. Part of the issue is that app developers aren’t making it clear what people are handing over. Google and Apple both vetted the app for their stores and even linked to its privacy policy, according to a report in The Denver Post.
“They’re coming out with these massive terms of service with very legalized language and content that isn’t going to be understood by a 12-year-old wanting to make their own music video,” said Freiburger. “Parents aren’t even necessarily going to understand it. So I think part of that is on the app developer with these muddy terms of service. And then I think it’s just lack of consciousness by the public, not wanting to know what’s going to happen when they start using these projects. So I do look at it as a privacy issue and lack of discretion by users, or the public.”
McGrath explained that ProPrivacy is pushing for an outright ban on facial recognition technology, especially when it’s being used by government agencies.
“We personally think it poses one of the greatest threats to privacy today, and so [we] can’t really sit idly by while we watch the automation of oppression,” said McGrath. “And I think that we as a society need to take a step back and really think about it. Big tech companies are going to continue to create, the big government sectors are going to continue to buy it, and legislators are going to continue to sit around and scratch their heads and try to understand it. But just because we have access to technology doesn’t mean we have to use it.”
When Microsoft called for facial recognition regulation last year, it released a list of eight questions it wanted government regulation to address:
- “Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
- Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
- What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
- Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
- Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
- Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
- Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
- Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?”