Facing a Privacy Crisis: The Unregulated Reach of AI Facial Recognition Technologies

Image Credit: Natali_Mis/BigStockPhoto.com

AI facial recognition technology is spreading, but the privacy and discrimination problems it causes receive too little attention. Even as the EU approves the world’s first comprehensive AI regulation [1], much remains to be done to ensure that government institutions and law enforcement use the technology transparently and without discrimination. Perhaps it’s time to decide whether it should be used at all.

Governments have been collecting fingerprints, headshots, and eye scans to identify their citizens since the 1990s [2]. Back then, using biometrics to identify a person required a great deal of manual work. The latest developments in AI, however, offer a way to use this data far more quickly and efficiently. What if your face, spotted by a camera on a street or at a border crossing, could be matched against your online photos, confirming your identity in seconds?

As scary as it sounds, one US-based company has made that vision a reality. Clearview AI has scraped billions of images from publicly available pages [3] on the web (including Facebook, Instagram, LinkedIn, and Venmo) and now offers its database of faces to government and law enforcement agencies worldwide. As is publicly known, the company collected these images without users’ explicit consent and used them to build a commercial product, which it then offered to governments.

At first, the database was available to private companies as well. After legal scrutiny in the US, however, Clearview AI was restricted from selling its software to most private entities [4]. But is that enough to prevent the threats such software creates, such as racial discrimination and abuse of power by law enforcement officials? Probably not.

The CEO of Clearview AI, Hoan Ton-That, boasts that business is going well, with over 1 million searches conducted on its platform [5]. Clearview AI now aims to offer its service to public defenders [6], claiming it would help solve cases where facial recognition is crucial (e.g., identifying witnesses and asking them to testify in court). However, civil liberties activists strongly, and in my opinion reasonably, oppose the technology, arguing that its benefits do not outweigh the threats.

Fortunately, some countries (e.g., Canada, Australia, Britain, France, Italy, Greece) have already declared Clearview’s activities unlawful. Let’s hope that more countries pay attention to the company’s expansion attempts and adopt proper facial recognition legislation soon. After all, American [7] and Swedish [8] police agencies were found to be using the software without proper authorization, violating laws on the processing of personally identifiable information.

Clearview AI’s facial recognition technology was evaluated by the US National Institute of Standards and Technology (NIST). NIST’s positive evaluation serves as a “badge of quality” for meeting benchmarks in accuracy and reliability. However, NIST’s testing employed databases of faces, including those of immigrants, children, and the deceased, collected without their (or their legal representatives’) clear consent [9], paralleling Clearview’s own questionable data practices.

NIST also routinely distributes its datasets for academic and innovation purposes, and those datasets potentially include data on nearly half of American adults [10] (as of 2016). Such a practice compromises privacy and aids the development of AI facial recognition without those citizens’ consent.

All of this raises broader questions about how the law regulates facial recognition technology and protects people’s privacy. There are bias and accuracy problems, too, which may also have legal implications. Let’s explore some of the nuances.

An AI algorithm’s query results depend on the data it was trained on [11]. If that data lacks diversity, or reflects the biases of the people who assembled it, the AI may misidentify people. One Harvard report [12] revealed that facial recognition used in law enforcement is often trained on mugshot databases, in which the Black community is disproportionately represented. Models trained this way are likely to be unreliable, producing biased predictions and discriminatory profiling, with potentially unjust legal consequences for the individuals involved, who may be treated as potential criminals. One research project even claimed that biometrics could “predict” criminality [13]; fortunately, it was rejected as absurd.
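To see how an imbalanced database skews results, here is a minimal sketch in Python (all group sizes and the toy “embeddings” are invented for illustration; real systems use learned face embeddings): a nearest-neighbor matcher enrolled on a gallery that over-represents one group typically misidentifies probes from the under-represented group far more often.

```python
# Minimal sketch, numpy only: a toy nearest-neighbor "face matcher"
# enrolled on an imbalanced gallery. All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # dimensionality of the toy "face embeddings"

def make_faces(n, center):
    # Stand-in for face embeddings: noisy points around a group's center.
    return center + rng.normal(scale=2.0, size=(n, DIM))

# Group A is heavily over-represented in the enrollment database
# (as with mugshot datasets); group B has far fewer enrolled faces.
center_a = rng.normal(size=DIM)
center_b = rng.normal(size=DIM)
gallery = np.vstack([make_faces(900, center_a), make_faces(30, center_b)])
labels = np.array(["A"] * 900 + ["B"] * 30)

def match(probe):
    # Return the group label of the nearest enrolled face.
    dists = np.linalg.norm(gallery - probe, axis=1)
    return labels[np.argmin(dists)]

# Probe with fresh faces from each group and compare error rates.
for name, center in (("A", center_a), ("B", center_b)):
    probes = make_faces(200, center)
    errors = sum(match(p) != name for p in probes)
    print(f"group {name}: {errors / len(probes):.1%} misidentified")
```

The asymmetry in this toy example comes purely from the gallery composition: probes from group A have 900 same-group candidates to match against, while probes from group B have only 30.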

So far, facial recognition tech feels like a global experiment with involuntary participants. AI facial recognition sits in a legal gray zone: no regulations specifically target the use of AI biometrics. While the GDPR regulates biometrics to some extent, it is focused mainly on data privacy, so it cannot adequately control the use of AI itself. Companies continue to develop new AI facial recognition products by scraping facial images from the internet without users’ consent. Can we expect AI facial recognition products, and their creation, ever to be governed by a dedicated law?

Here is some good news. In December 2023, the European Parliament and Council agreed on the EU AI Act [14]. The act restricts the use of AI based on its perceived level of risk. Under it, systems deemed to pose an “unacceptable” risk, such as the “biometric identification and categorization of people,” would be banned in EU products and services.

However, the EU AI Act will not come into effect for another two years, by which time it may well be outdated, given the rapid pace of AI development. Moreover, even under the act, law enforcement will still be allowed to use AI biometrics. Can we be sure that law enforcement will use AI facial recognition in a fair, transparent, non-discriminatory way? And can we be sure there will be no continuous, disproportionate surveillance violating people’s privacy?

The EU is considering banning the use of live AI facial recognition in public places [15], which would be a significant legal step. The US state of Illinois has the Biometric Information Privacy Act (BIPA) [16], which helped ban Clearview AI in the state and is now considered an exemplary law for regulating biometric data use. Other states may develop similar laws, giving citizens a basis to ask Clearview AI to remove their data. Californians are already doing so under local law. Although there have been only about 500 such requests [17], against the potentially millions of photos of Californians in the company’s database, I believe the number is bound to grow as more people become aware of AI facial recognition technologies.

As AI facial recognition technology is integrated into government agencies without clear regulation, where does that leave us, the citizens? We have been handing over our data without realizing it, and, unfortunately, we may be exposed to even greater exploitation. If AI facial recognition becomes the norm in law enforcement, it will pose a constant threat to our privacy. We should demand better laws and proportionate control of AI technology now, or risk losing our privacy to AI and whoever controls it.

 

References

  1. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  2. https://recfaces.com/articles/history-of-biometrics
  3. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
  4. https://edition.cnn.com/2022/05/09/tech/clearview-ai-aclu-settlement/index.html
  5. https://www.bbc.com/news/technology-65057011
  6. https://www.nytimes.com/2022/09/18/technology/facial-recognition-clearview-ai.html
  7. https://www.engadget.com/ice-clearview-ai-facial-recognition-180144627.html
  8. https://edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition-app_en
  9. https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2019/03/19/the-technology-202-government-using-photos-of-visa-applicants-dead-people-to-test-facial-recognition-software/5c8ff2581b326b0f7f38f1c3/
  10. https://www.theatlantic.com/technology/archive/2016/10/half-of-american-adults-are-in-police-facial-recognition-databases/504560/
  11. https://www.forbes.com/sites/ariannajohnson/2023/05/25/racism-and-ai-heres-how-its-been-criticized-for-amplifying-bias/
  12. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
  13. https://www.wired.com/story/algorithm-predicts-criminality-based-face-sparks-furor/
  14. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
  15. https://www.euronews.com/2023/11/16/the-eu-wants-to-make-facial-recognition-history-but-it-must-be-done-for-the-right-reasons
  16. https://www.aclu-il.org/en/campaigns/biometric-information-privacy-act-bipa
  17. https://www.theverge.com/23919134/kashmir-hill-your-face-belongs-to-us-clearview-ai-facial-recognition-privacy-decoder

Author

Goda Sukackaitė is a Privacy Legal Counsel at Surfshark. She is a professional lawyer focusing mainly on privacy, data protection, and technology law. She previously gained experience at law firms, practicing as an assistant attorney and working on various legal projects.
