Protecting Your Privacy from Facial Recognition

Demystifying Clearview AI Blog Series (Part 8)

Samuel Brice
9 min read · Dec 20, 2020
Anon (2016)

Spoiler Alert: “Adversarial Glasses” Work 90% of the Time

In 2016 researchers at Carnegie Mellon University demonstrated the use of “adversarial glasses” to successfully dodge facial recognition and, in some cases, entirely impersonate a target individual.


In other words, by wearing a pair of 3D-printed eyeglasses, it's possible to completely evade some face detection models or to successfully break a Face ID-style lock.

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

The term “adversarial” comes from the “generative adversarial networks” (GAN) framework, in which two models are trained simultaneously: a discriminator that learns to tell real training data from generated data, and a generator that learns to produce outputs that mislead the discriminator.
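
For readers curious what that two-model setup looks like in practice, here is a minimal PyTorch sketch; the tiny networks, random “training” images, and hyperparameters below are stand-ins chosen purely for illustration, not anything from the research discussed here.

```python
import torch
import torch.nn as nn

# Toy discriminator: learns to tell real training images from generated fakes.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
# Toy generator: learns to turn random noise into images that mislead the discriminator.
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 28 * 28), nn.Tanh(),
)

d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(32, 1, 28, 28)  # stand-in for a batch of real training data

for step in range(100):
    noise = torch.randn(32, 64)
    fake_images = generator(noise).view(32, 1, 28, 28)

    # 1) Train the discriminator to label real images 1 and generated images 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```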

Such adversarial techniques can be used to produce “adversarial perturbations”: small “tweaks” to an image that tip a predictive model into the wrong prediction. This form of attack is possible due to some intriguing properties of deep neural networks.

As explained in a seminal research paper detailing the findings, deep neural networks are “highly expressive,” and while their expressiveness is why they succeed, it also causes them to learn “uninterpretable solutions” that can have counter-intuitive properties. In other words, we lack the mathematical and linguistic tools to understand precisely when neural networks work. By the same token, we lack an understanding of precisely when they don't.

Explaining and harnessing adversarial examples

The math of it is actually pretty cool and has to do with, among other things, linear behavior in a high-dimensional space.
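
To make that concrete, the “fast gradient sign method” from that paper nudges every pixel a tiny step in whichever direction increases the model's loss. Here is a rough PyTorch sketch of the idea; the untrained stand-in classifier, random image, and label are placeholders, not a real face recognition model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a target classifier (in practice this would be a trained face/ID model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10)).eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # placeholder input image
true_label = torch.tensor([3])                        # placeholder class index

# Loss of the correct prediction, and its gradient with respect to the pixels.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Fast gradient sign method: step each pixel by epsilon in the direction that raises the loss.
epsilon = 0.007
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial_image).argmax(dim=1).item())
```

Because the step is tiny (epsilon here is well under 1% of the pixel range), the perturbed image looks identical to the original to a human, yet it can flip the model's prediction.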

Universal Adversarial Perturbations

Adversarial perturbations involve image-specific, pixel-level changes, often invisible to the human eye, that can effectively manipulate a target deep learning object detection or recognition model. A “universal adversarial perturbation,” by contrast, is the equivalent of a camouflaging filter: a single, image-agnostic perturbation that, when applied to almost any image, reduces the effectiveness of a target deep learning model to some extent (in whatever direction and however small).

Akin to how different types of camouflage are suited to different environments, universal adversarial perturbations have been computed specifically for each of the common deep learning architectures, including the ResNet architecture used in our CCTView demonstration.


Applying a universal perturbation makes it possible to categorically degrade a deep learning neural network's ability to classify most objects it comes across.
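
Applying such a perturbation, once it has been computed, is trivial. The sketch below assumes a perturbation precomputed for the target architecture and saved to disk; the file names and image are hypothetical.

```python
import numpy as np
from PIL import Image

# Hypothetical files: a perturbation precomputed for the target architecture
# (e.g. ResNet) and any photo you want to "camouflage".
perturbation = np.load("universal_perturbation_resnet.npy")  # shape (H, W, 3), small values
photo = np.asarray(
    Image.open("photo.jpg").resize(perturbation.shape[1::-1]), dtype=np.float32
)

# The same image-agnostic perturbation is simply added to the photo and clipped
# back into a valid pixel range; the change is typically invisible to the human eye.
camouflaged = np.clip(photo + perturbation, 0, 255).astype(np.uint8)
Image.fromarray(camouflaged).save("photo_camouflaged.jpg")
```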

All deep learning network architectures are vulnerable to adversarial perturbations, and there is no foolproof defense.

Universal adversarial perturbation

Practical Personal Protection from Facial Recognition

While applying the adversarial techniques described above requires some level of technical understanding, a more practical solution for those merely interested in blanket protection is an “invisible mask.”

Invisible masks use infrared light to disrupt digital cameras' ability to properly capture your face. Infrared (IR) light is picked up by camera sensors but cannot be seen by the naked eye.

For example, with infrared LEDs hidden in a hat, it's possible to obscure your face on camera without affecting your visage in real life. The technique is reported to have a 100% success rate for face detection evasion.


Using infrared light is akin to applying adversarial pixel manipulation before a picture is taken; thus, it's also possible to impersonate different individuals. Because infrared light has well-known effects on camera sensors, such as making green light appear red or red light appear blue, it can generate predictable perturbations in real life.


Although not as practical as using it for blanket invisibility, infrared light can also be used to successfully impersonate specific persons, with success rates ranging from 14% to 70%.

Reflectacles IR Frames and Lenses

Reflectacles Phantom and Ghost.

While not quite the same as the adversarial glasses mentioned above, these “Reflectacles” from the creator of Urban Spectacles use IR-absorbing lenses in combination with IR-reflective frames to shield your face in various surveillance scenarios.

The Phantom model (at left) reflects only infrared light, making the wearer undetectable to the night-vision technology used in most CCTV cameras. The Ghost model (at right) reflects both infrared and visible light back at cameras, which also works to partially blind traditional flash photography in low-light environments.

Ghost light reflection.

Legal Protection from Facial Recognition

With the 2016 General Data Protection Regulation (GDPR), the European Union was the first to enact one of the world's most comprehensive and potent privacy and data protection laws. Superseding the previous Data Protection Directive, the GDPR mandates that businesses design their information systems with privacy in mind. The California Consumer Privacy Act (CCPA), adopted in the US several years after the GDPR's enactment, shares many of its features and has been one of the main avenues for confronting Clearview legally in the US.

The CCPA's biggest weakness is its limited “private right of action”: even if a company is found to have violated the law, an individual generally cannot sue that company directly. Under the CCPA, only the State of California can sue a company for violations. The first California legal case brought against Clearview by an individual therefore came under California's Unfair Competition Law (UCL), citing violations of the CCPA as the underlying “unlawful” activity. The case, Burke v. Clearview, is still in the very early stages of litigation, and it remains to be seen what the courts will decide.

An Illinois resident filed one of the most promising legal cases against Clearview under the Illinois Biometric Information Privacy Act (BIPA). The case was allowed to move forward, and Clearview was also forced by the court to delete all data on Illinois residents.

New York is poised to become the most stringent data privacy and protection jurisdiction in the US with the proposed New York Privacy Act (NYPA), NY Senate Bill 224. Unlike the CCPA, the NYPA would give New Yorkers the right to sue companies directly. Until federal-level regulation is introduced, individuals will have to make do with this patchwork of state-level protections.

Warning: ReID Data is Considered Personal Data

Under the various privacy laws mentioned above, the allowance for de-identified or anonymized data is minimal: not only must data be stripped entirely of its identifying aspects, but its re-identification must not be possible or encouraged. It's a shame this section had to come buried so deep within this series.

To put it a few different ways, you could interpret the laws as follows: ReID data is personal data; re-identification is illegal; deep learning ReID models generate ReID data; therefore, deep learning ReID is illegal.

State of California Department of Justice

As detailed in Part 5, Face Identification and Re-Identification, a deep learning model must be trained to ReID. Training involves the deep learning model (in its own “expressive” way) filtering identifiable data and storing it as a “model state” to be used in production. Effectively, when you train a deep learning re-identification model, even with de-identified data, that data is by definition no longer de-identified, and neither is the model. Conceivably this would also apply to “public data,” but such an argument has yet to be made in a court of law.

Under BIPA, in Mutnick v. Clearview, the courts forced Clearview to delete all so-called “public data” it had collected on Illinois residents. It’s unclear if that data included the ReID model state and vector embeddings generated from training on Illinois residents’ data.

The de-identification requirement of privacy laws is the Achilles heel of deep learning facial recognition, because it makes illegally using personal data prohibitively expensive: purging that data necessarily means retraining the illegally trained model. After all, what good is a ReID model that can't ReID?

Courts haven't quite caught on to this fact yet.

Fatal: ReID Model Data is Considered Personal Data

Under the CCPA and GDPR, Clearview is required to provide you with an inventory of all personal data it has collected on you.

To comply with CCPA, Clearview requests two things:

  1. A clear headshot, and
  2. A government-issued ID.

Using the headshot, Clearview will run its deep learning ReID model to find all data associated with you, returning a summary of the face search results, including an index of where each image was found.

Two such documented examples from authors Anna Merlan at Vice and Thomas Smith of Gado Images can be seen below.

Photos by Anna Merlan and Thomas Smith.

What’s fatally missing from the data returned by Clearview are the vector embeddings and model weights generated using those personal images. As explained in Part 5, a vector embedding is a numerical way of representing and saving an image.

Photos from various sources. See attributions below.

Clearview’s search literally can’t work without using pre-computed vector embeddings, especially at the sub-second speeds they’ve demonstrated.
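
To see why, here is a minimal sketch of how that kind of search typically works. The random “gallery” of embeddings, the example.com URLs, and the placeholder embed() function below are made up, but the mechanics are the same: once embeddings are precomputed, each query is just a batch of vector comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for what gets precomputed at scrape time: one embedding vector per face
# image, stored alongside the URL where that image was found.
gallery_embeddings = rng.normal(size=(100_000, 512)).astype(np.float32)
gallery_urls = [f"https://example.com/photo_{i}.jpg" for i in range(len(gallery_embeddings))]
gallery_norms = np.linalg.norm(gallery_embeddings, axis=1)

def embed(image):
    """Placeholder for a deep learning ReID model that maps a face image to a vector."""
    return rng.normal(size=512).astype(np.float32)

def search(query_image, top_k=10):
    query = embed(query_image)
    # Cosine similarity between the query embedding and every precomputed embedding.
    sims = gallery_embeddings @ query / (gallery_norms * np.linalg.norm(query) + 1e-9)
    best = np.argsort(-sims)[:top_k]
    return [(gallery_urls[i], float(sims[i])) for i in best]

print(search(query_image="headshot.jpg")[:3])
```

Notice that the original photos never appear in the query path: the trained model weights and the stored embeddings are what make the search work.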

On the record, Clearview has stated the following:

  • “Clearview does not maintain any sort of information other than photos.”
  • “To find your information, we cannot search by name or any method other than image.”

Now that you understand how deep learning works, it is clear that they are lying.

As part of the CCPA, you can request that Clearview delete all the information they have on you, supposedly all the information they returned to you. But even without those original images, Clearview retains the ability to re-identify you as if it still had them. And in the same way that you can use a deep learning model to generate a vector embedding from an image, you can use a different type of deep learning model to generate an image from a vector embedding.
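
That reverse direction can be sketched the same way. Assuming a decoder network that has been trained to invert the embedding model (the architecture, sizes, and untrained weights below are made-up placeholders, not Clearview's actual system), recovering an approximate face from a retained embedding is just another forward pass:

```python
import torch
import torch.nn as nn

# Hypothetical decoder: maps a 512-dimensional face embedding back to a 64x64 RGB image.
# In practice such a network would be trained on (embedding, image) pairs produced by
# the original ReID model; here it is untrained and purely illustrative.
decoder = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64),
    nn.Sigmoid(),  # pixel values in [0, 1]
)

stored_embedding = torch.randn(1, 512)  # stand-in for a retained vector embedding
reconstruction = decoder(stored_embedding).view(1, 3, 64, 64)
print(reconstruction.shape)  # an approximate image recovered without the original photo
```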

Without a proper audit of Clearview’s internal systems and internal code, it’s difficult to know the extent to which they’ve retained and are still using ReID data. However, researchers at Facebook have successfully developed a method of “radioactive tagging” that can be used to secretly determine if a model has been trained on a specific set of images. Facebook’s radioactive tagging method is not yet widely available.

References

Generative Adversarial Nets

Using ‘radioactive data’ to detect if a data set was used for training

When DNNs go wrong — adversarial examples and what we can learn from them

Universal adversarial perturbations

BadNets: Identifying vulnerabilities in the machine learning model supply chain

Adversarial patch

Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples

Invisible mask: practical attacks on face recognition with infrared

Breaking Deep Learning with Adversarial examples using Tensorflow

The California Consumer Privacy Act

The General Data Protection Regulation

CCPA in Litigation: 2018 to Present

Adversarial Attacks

I Got My File From Clearview AI, and It Freaked Me Out

Attacking Machine Learning with Adversarial Examples

Image Generator — Drawing Cartoons with Generative Adversarial Networks

Plaintiff in biometric privacy class action asks judge to order Clearview AI to delete data

Biometric privacy lawsuit decisions: Clearview AI loses, Shutterfly and Southwest win, TikTok in trouble

Let the Litigation Begin! California Residents Already Filing Enforcement Actions Under the CCPA

New York Privacy Act Would Be Considerably Tougher Than California’s Bill

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

Burke v. Clearview

Mutnick v. Clearview

What to expect if the New York Privacy Act is enacted, following the privacy regulation boom of GDPR and CCPA

Clearview AI class-action may further test CCPA’s private right of action

The Law of Unintended Consequences: BIPA and the Effects of the Illinois Class Action Epidemic on Employers

CCPA litigation is here: putative class action filed for alleged notice and collection violations

Here’s the File Clearview AI Has Been Keeping on Me, and Probably on You Too

These Glasses Fool Facial Recognition Into Thinking You’re Someone Else

New York’s Privacy Bill Is Even Bolder Than California’s

These infrared-blocking sunglasses can disable facial recognition technology

Fight facial-recognition technology with Phantom glasses

Image sources:
