Research and development of facial recognition technology has been advancing for years, but its potential applications for mobile and wearable devices are only beginning to be tapped. Along with this surge in innovation, though, critics of the technology warn of its potential to erode public privacy.

The NameTag app, in beta for Google Glass, is the first real-time facial-recognition search engine. The app takes a photo of a face with the Google Glass camera, sends it to a server where the photo is run against a database of millions of publicly available images, and returns a match to the Glass display with a name, additional photos and social media profile information.

Created by Kevin Alan Tussy, the NameTag database currently holds more than 2.5 million photos obtained through public search data and social network information. Developers are also working to add the capability to scan the NameTag database against images from dating sites such as OkCupid, as well as against the 450,000 entries in the National Sex Offender Registry and other criminal databases where individuals have forfeited their right to privacy. NameTag apps for smartphones, tablets and other wearable devices are also in the works.

This sort of technology, long gestating in research but new and unnerving to the public, was bound to run into some pushback. Since announcing an app capable of instantaneously identifying someone with the snap of a Glass camera, NameTag has been hit with criticism and concerns over its privacy implications. U.S. Sen. Al Franken of Minnesota, the Chairman of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, wrote an open letter to NameTag creator Kevin Alan Tussy voicing his concerns about the technology. Franken stressed the need for best practices governing the use of facial recognition technology, and urged Tussy to delay the app’s launch until a series of Senate meetings could establish them.

In response to Franken’s concerns, Tussy explained that before unlocking the ability to search the NameTag database, users must opt in by creating a profile, linking it to at least one social media site, and accepting an Info Exchange agreement.

“Facial recognition technology is a reality,” said Tussy. “We understand that it carries the potential for the invasion of the privacy that Americans hold so dear. No home addresses or personal phone numbers will be displayed on NameTag. No one under 18 years of age will knowingly be included. Most importantly, anyone that wishes to opt out will be able to do so by visiting the NameTag site and filling out a brief form that will remain completely confidential.”

At the moment, Google’s ban on facial recognition Glassware apps prevents a wide release of NameTag until Google decides to support the technology. Franken called the ban a necessary check, at least for now.

“Google took the right step in banning these apps from Glass,” Franken said. “Right now, there is no law that provides even basic privacy protections around facial recognition technology. As long as that’s the case, it makes sense for an industry leader like Google to hold off like this.”

Franken has commissioned the National Telecommunications and Information Administration to study facial-recognition technology. The researchers are presenting their findings at a series of government meetings in Washington called the Privacy Multistakeholder Process, with the goal of devising a set of best practices to govern the technology.

“For me, the most important thing is control,” Franken said. “Part of our fundamental right to privacy is the ability to control who gets our sensitive information and who it’s shared with. Biometric information like faceprints and fingerprints are extremely sensitive. It’s unique to you, and if a file of this information is ever hacked, you’re in trouble—you can change your password and you can even change your credit card, but you can’t do that with your face or fingers. This technology should not be used on people unless they know about it and agree to it.”

Tuong Nguyen, an analyst at research firm Gartner, agreed that best practices for facial recognition, such as an opt-in model, should be established, but he said the tech companies themselves, not the government, should enact and enforce them.

“The industry itself, the Googles and Apples of the world, must put in place strict rules and regulations,” Nguyen said. “It’s better than a government or legislative body laying down arbitrary rules, letting the free market regulate itself rather than external intervention. To draw a parallel, look at the app store. [The companies] curate it, and say ‘We will not allow certain types of apps to be available’ instead of letting a senator say we’re going to just ban this type of content.”

Nguyen sees head-mounted facial-recognition software like NameTag as a technology in its infancy. Not only is the functionality of wearable devices still basic compared with the state of facial-recognition technology, but consumers haven’t caught up yet either. He said consumers will adapt, though, noting that 10 years ago the over-sharing culture of social networks seemed just as foreign.

“A lot of the issues coming up have to do with the fact that this technology is evolving at such a rapid pace, consumers are having a hard time keeping up with it,” Nguyen said. “Recognition, whether it’s facial, object or motion, has been around for a while and continues to be developed and advanced regardless of whatever fad is going on. Research into computer vision—using algorithms to positively identify objects—continues to move forward. Wearables are not the driving force behind the computer vision movement, but nine to 12 months from now, they could be.”

Facial coding: Sensing emotions from facial expressions
Computer vision technology is by no means exclusive to facial recognition. Emotion-tracking company Affectiva, a startup that grew out of the MIT Media Lab, doesn’t want to know who you are. It wants to know how you feel.

Affdex reads facial expressions captured on a webcam or smartphone camera and measures the emotional connection an individual has with a piece of content. Its original purpose was to measure viewer connection with advertising content or a particular brand, but Affectiva recently released the Affdex mobile SDK, which opens the facial coding technology up to new developer possibilities.

Affdex has a clear opt-in policy and doesn’t record or store the videos or images it analyzes, nor does it track identity. Since releasing the SDK, Affectiva has rejected several developers who planned to use the technology for facial recognition and surveillance. Cofounder and chief science officer Rana el Kaliouby, who created Affdex, set out to build a self-aware computer that could understand a user’s state and adapt to it.

“The face is one of the best channels for communicating emotion, so it made a lot of sense to build a computer that reads facial expressions, maps those to certain emotional states, and then enables the machine to act on it,” el Kaliouby said. “It’s exciting because computers and devices with cameras are becoming really ubiquitous, so that’s an opportunity to capture your emotional state every single time you use your phone.”

The Affdex computer vision algorithm homes in on regions of the face and identifies a particular facial expression, like a smile, from the texture and movement patterns of facial features. The SDK gives developers a way to use Affdex to measure how engaged users are with their apps.
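The expression-to-emotion mapping described above can be illustrated with a toy sketch. Everything here is hypothetical: Affdex’s actual models are proprietary, so the feature names and thresholds below are invented purely to show the general idea of mapping detected facial movements to coarse emotion labels.

```python
# Illustrative sketch only -- not the Affdex API. Feature names
# ("action units") and thresholds are invented for this example.

def classify_expression(action_units):
    """Map facial-movement scores (each in [0, 1]) to a coarse emotion label.

    action_units: dict of hypothetical feature scores, e.g. a strong
    lip-corner pull is the movement underlying a smile.
    """
    if action_units.get("lip_corner_pull", 0) > 0.6:
        return "joy"
    if action_units.get("brow_furrow", 0) > 0.6:
        return "anger"
    if action_units.get("inner_brow_raise", 0) > 0.6:
        return "sadness"
    return "neutral"

# A frame with a pronounced smile movement classifies as joy.
print(classify_expression({"lip_corner_pull": 0.8}))  # joy
```

A real system would compute these feature scores from pixel data with trained classifiers rather than hand-set thresholds; the sketch only captures the final mapping step from expression features to an emotional state.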

“You’re emotion-enabling your app,” el Kaliouby said. “Maybe your app is Angry Birds, and you want to measure how emotionally engaged your users are. Our technology is really about personal analytics more than any kind of surveillance. That’s an important distinction to make.”

El Kaliouby also sees great potential for facial coding technology in wearable devices. When the project was still at MIT, she tested wearable applications before Google Glass even existed.

“We had a three-year-long project with an autism school in Rhode Island, where a prototype mounted camera read the expressions of the person you’re interacting with and gives you a real-time feed of what they’re feeling,” el Kaliouby said. “There are a ton of applications for Google Glass. Some of the applications around autism, around visually impaired people, around education and learning with a full audience in front of you, we definitely see how the technology could make an impact.”