By February 2026, the operations of Clearview AI have expanded into a scale of surveillance previously reserved for authoritarian state actors. While the company initially marketed itself with a database of three billion images in 2020, federal contract documents from U.S. Customs and Border Protection (CBP) reveal that by 2026, this repository has swollen to an estimated 60 billion facial images. This figure represents not merely a collection of photographs but a detailed biometric catalog of nearly every identifiable human being with a digital footprint. The method behind this accumulation is not a passive index but an aggressive, unauthorized extraction engine that systematically violates the Terms of Service (ToS) of the world’s largest digital platforms.
The core of Clearview’s acquisition strategy relies on automated web crawlers, frequently referred to as “spiders” or “scrapers.” Unlike standard search engine bots, such as Googlebot, which respect the robots.txt protocol to exclude private or sensitive directories, Clearview’s scrapers are designed to ignore these exclusion standards. They traverse the open web and social media ecosystems, specifically targeting platforms rich in personal identifiers: Facebook, LinkedIn, Instagram, X (formerly Twitter), and Venmo. The scrapers do not merely index the location of an image; they download the file, extract associated metadata, and permanently ingest the content into Clearview’s private servers. This distinction is legally important. A search engine points a user to a source; Clearview seizes the source material to construct a derivative biometric product.
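The robots.txt exclusion standard mentioned above can be checked programmatically. The sketch below, using Python's standard library and a hypothetical rule set, shows the check that a compliant crawler performs before fetching, and that a non-compliant scraper simply skips:

```python
from urllib.robotparser import RobotFileParser

# A sketch of the robots.txt check that compliant crawlers perform
# before fetching a URL. The rule set is hypothetical, standing in
# for a platform that excludes its photo directories from indexing.
RULES = """\
User-agent: *
Disallow: /photos/
"""

def may_fetch(user_agent: str, path: str) -> bool:
    rp = RobotFileParser()
    rp.parse(RULES.splitlines())
    return rp.can_fetch(user_agent, path)

# A compliant bot honors the exclusion; a scraper of the kind
# described above never performs this check at all.
compliant_ok = may_fetch("ExampleBot", "/profile/123")   # allowed
excluded = may_fetch("ExampleBot", "/photos/img1.jpg")   # disallowed
```

Ignoring the standard requires no extra engineering; it is the absence of this single check.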
The technical execution of this scraping involves sophisticated evasion techniques. To bypass IP bans and rate limits imposed by social media giants, data extraction operations frequently use residential proxy networks. These networks route traffic through the IP addresses of ordinary home internet users, making the scraping bots appear as millions of distinct, legitimate human visitors. This method allows the company to harvest data at an industrial scale without triggering the automated defense mechanisms designed to stop denial-of-service attacks or mass data exfiltration. By the time a platform identifies and blocks a specific scraping vector, the system has already rotated to a new set of identifiers, ensuring a continuous stream of fresh biometric inputs.
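The rotation pattern described here can be illustrated with a minimal sketch. The addresses below are RFC 5737 documentation addresses, not real exit nodes, and this is a generic illustration of the technique, not the company's implementation:

```python
import itertools

# Simplified sketch of identifier rotation through a proxy pool.
# Real residential proxy networks expose pools of thousands of home
# IP addresses; three placeholder addresses stand in for them here.
PROXY_POOL = ["203.0.113.10", "198.51.100.7", "192.0.2.44"]
_rotation = itertools.cycle(PROXY_POOL)

def next_identity():
    """Route each request through the next residential exit IP, so a
    rate limiter sees many distinct 'visitors' instead of one bot."""
    return {"proxy": next(_rotation)}

# Six requests cycle through the pool twice: once a platform blocks
# one address, the next request already presents a different one.
identities = [next_identity()["proxy"] for _ in range(6)]
```

Blocking any single address in such a scheme removes only a fraction of the traffic, which is why per-IP rate limits fail against it.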
Once an image is ingested, the system executes its most controversial process: immediate biometric vectorization. In the European Union, this step constitutes the primary violation of the General Data Protection Regulation (GDPR). The moment a scraper retrieves a photograph, Clearview’s proprietary algorithms analyze the facial geometry, measuring the distance between the eyes, the angle of the cheekbones, and the contour of the jawline, and convert these physical features into a unique mathematical string known as a “face print” or vector. Under Article 9 of the GDPR, this vector is classified as biometric data, a special category of personal information that requires explicit, affirmative consent for processing. Clearview obtains no such consent. The conversion happens invisibly, transforming a casual social media upload into a permanent, searchable biometric record used by law enforcement and intelligence agencies globally.
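As a rough illustration of what vectorization means, the toy sketch below reduces a face to three hand-picked measurements and compares two "face prints" by cosine similarity. Production systems instead use deep neural network embeddings with hundreds of dimensions, so every name and number here is an illustrative assumption:

```python
import math

# Illustrative only: real faceprints are embeddings produced by deep
# neural networks, not three hand-measured features. The sketch shows
# the principle the text describes: physical measurements become a
# numeric vector, and identity comparison becomes a distance score.
def face_vector(eye_distance, cheekbone_angle, jaw_width):
    v = [eye_distance, cheekbone_angle, jaw_width]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]  # unit-length "face print"

def similarity(a, b):
    """Cosine similarity of unit vectors: 1.0 means identical geometry."""
    return sum(x * y for x, y in zip(a, b))

same = similarity(face_vector(6.2, 41.0, 13.1),
                  face_vector(6.2, 41.0, 13.1))   # identical face
other = similarity(face_vector(6.2, 41.0, 13.1),
                   face_vector(7.0, 38.0, 12.0))  # a different face
```

The legally significant point survives the simplification: the vector is new data derived from the photograph, not the photograph itself.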
The scraping of Venmo is particularly illustrative of the privacy violation’s depth. Venmo, a peer-to-peer payment service, hosts user profiles that frequently include real names, profile pictures, and transaction descriptions. When Clearview scrapes Venmo, it does not just capture a face; it captures a financial social graph. By linking a biometric template to a Venmo profile, the system allows an investigator to identify a suspect and immediately view their public transaction history, associates, and payment notes. This fusion of biometric identity with financial behavior creates a surveillance composite that users never consented to when they signed up for a payment app. The “public” nature of the profile does not legally equate to permission for biometric profiling, a distinction the Dutch Data Protection Authority (DPA) emphasized in its September 2024 ruling.
LinkedIn represents another high-value target for this unauthorized extraction. The platform contains high-resolution, front-facing images explicitly tied to professional identities, employment history, and educational background. For Clearview, LinkedIn provides the “ground truth” data necessary to train and refine its recognition algorithms. The scraping of this platform allows the system to associate a face not just with a name, but with a workplace, a job title, and a professional network. This creates a vulnerability where individuals can be identified and profiled based on their employer, facilitating targeted surveillance of specific industries or organizations. Even with Microsoft (LinkedIn’s parent company) sending cease-and-desist letters as early as 2020, the continued growth of the database suggests that these legal threats have been treated as operational costs rather than binding prohibitions.
The company’s defense relies heavily on a specific interpretation of the First Amendment in the United States, arguing that it has a right to collect public data. Hoan Ton-That, the company’s CEO, has frequently compared his software to Google, stating that he is organizing publicly available information. This argument collapses under scrutiny in European jurisdictions. In the EU, the legality of data processing depends on the purpose. A user uploads a photo to Facebook for the purpose of social interaction with friends. Clearview scrapes that photo for the purpose of biometric identification and police investigation. This “purpose limitation” principle is a cornerstone of the GDPR. The repurposing of social data for biometric surveillance without a valid legal basis renders the entire scraping operation unlawful in the eyes of EU regulators.
The persistence of this data is another serious matter. When a user deletes a photo from Instagram or Facebook, the platform removes it from public view. Yet because Clearview has already downloaded and vectorized the image, the biometric template remains in their database indefinitely. The “right to be forgotten,” a key tenet of EU privacy law, is technologically subverted by this scraping model. Even if the original source is extinguished, the derivative biometric data persists, immortalizing the user’s digital face in a law enforcement lineup. The Dutch DPA’s 2024 fine of €30.5 million was driven in part by this inability, or refusal, to purge data that had been deleted from the source platforms.
The sheer volume of 60 billion images implies a scraping velocity that outpaces the ability of regulators to monitor it. To achieve this number, the system must ingest tens of millions of images daily. This requires a distributed infrastructure capable of parsing the complex, JavaScript-heavy front ends of modern web applications. Social media platforms continuously update their code to prevent scraping, using techniques like DOM obfuscation and CAPTCHA challenges. Clearview’s ability to maintain and grow its index suggests it employs a dedicated engineering team focused solely on breaking these countermeasures. This is an adversarial relationship with the internet ecosystem, where the entity acts as a parasite on the infrastructure of legitimate service providers.
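The velocity claim can be sanity-checked with back-of-the-envelope arithmetic, assuming roughly linear growth from the 3 billion images reported in 2020 to the 60 billion cited for 2026:

```python
# Back-of-the-envelope check on the ingestion rate implied by growth
# from ~3 billion images (2020) to ~60 billion (2026), assuming a
# roughly constant rate over the six years.
total_added = 60e9 - 3e9          # images added over the period
days = 6 * 365                    # elapsed days, ignoring leap days
per_day = total_added / days      # ~26 million images per day
per_second = per_day / 86_400     # ~300 images per second
```

The result, roughly 26 million images per day, is consistent with the "tens of millions daily" figure in the text.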
Moreover, the “public” data argument ignores the reality of modern privacy settings. Many users employ “friends of friends” settings or limited visibility options. Scraping bots can circumvent these limitations by using compromised accounts or “sock puppet” profiles, fake accounts created to gain access to semi-private networks. While Clearview denies using such methods, the opacity of their acquisition channels leaves this possibility open. Independent security researchers have noted that to reach the claimed volume of data, the scrapers must be accessing content that is not strictly on the open web, or they are scraping archives and caches that users believe are long gone.
The integration of this scraped data into a search engine creates a “person-centric” view of the internet. A standard search engine indexes pages; Clearview indexes people. When a client uploads a probe image, the system does not just return a match; it returns a dossier of every location that face has appeared across the web. A single query can link a dating profile, a professional headshot, a candid photo from a friend’s album, and a mugshot. This aggregation destroys the concept of contextual privacy, the idea that information revealed in one context (a dating app) should not necessarily be linked to another (a professional profile). The scraping method is the foundational tool that enables this context collapse.
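The "person-centric" lookup described above can be sketched as a nearest-neighbor search over stored vectors, each tagged with its source URL. The vectors, URLs, and threshold below are invented for illustration, and a real deployment would use approximate nearest-neighbor indexing over billions of entries rather than this linear scan:

```python
import math

# Toy sketch of a person-centric index: each entry pairs a stored
# face vector with the URL where the face was scraped. All values
# are invented for illustration.
INDEX = [
    ([0.90, 0.10, 0.40], "dating-site.example/profile/771"),
    ([0.90, 0.10, 0.50], "linkedin.example/in/jane-doe"),
    ([0.10, 0.80, 0.20], "venmo.example/u/someone-else"),
]

def dossier(probe, threshold=0.15):
    """Return every source URL whose stored vector lies within
    `threshold` of the probe: one query collapses many contexts."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [url for vec, url in INDEX if dist(probe, vec) <= threshold]

# A single probe links the dating profile and the professional
# profile to the same face, while unrelated faces are excluded.
hits = dossier([0.90, 0.10, 0.45])
```

This is the mechanical form of the context collapse the paragraph describes: the query joins sources that the subject never intended to be joined.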
In 2024, the legal friction regarding this scraping reached a breaking point in Europe. The Dutch DPA’s investigation revealed that Clearview had processed the biometric data of Dutch citizens without any legal basis. The regulator noted that the company did not just scrape the data; it failed to inform the subjects that their data was being processed, a violation of the transparency requirements in Article 14 of the GDPR. The company’s argument that it has no physical presence in the EU was rejected; the monitoring of EU citizens’ behavior (via their digital footprint) brings the company under the territorial scope of the regulation. The scraping method itself was deemed an “illegal collection” method, poisoning the fruit of the entire tree.
The refusal to comply with deletion orders further aggravates the illegality. When European citizens exercise their data subject rights to request access or deletion, Clearview has historically required them to provide *more* personal data, specifically a photo of themselves, to prove their identity. Regulators have flagged this as a cynical data harvesting tactic disguised as a compliance measure. By demanding a fresh biometric sample to check for existing samples, the company ensures it captures the very data the subject wishes to protect. This circular logic serves to discourage individuals from exercising their rights while allowing the scraping engine to continue its work unimpeded.
Ultimately, the method of unauthorized scraping is not a technical detail; it is the business model. Without the theft of intellectual property and personal data from social media platforms, Clearview AI would have no product. The company produces no data of its own. It relies entirely on the uncompensated appropriation of the digital lives of billions of people. As the database grows toward the projected 100 billion mark, the friction between this extraction engine and global privacy laws will define the era of biometric regulation. The tools used to scrape this data are becoming more aggressive, and the legal walls erected to stop them are being tested by a company that views fines as mere licensing fees for its illicit operation.
The situation in 2026 shows that even with the fines and the cease-and-desist orders, the scraping continues. The company has bet that the value of its database to government clients, like U.S. CBP, outweighs the cost of regulatory non-compliance in Europe. This creates a bifurcated reality: in the US, the database is a “national security asset”; in the EU, it is a “criminal enterprise” of data theft. The scraping bots, however, do not respect these borders. They crawl the web indiscriminately, pulling European faces into American servers, creating a transnational conflict that remains unresolved even as the server racks fill with new vectors every second.
The core of Clearview AI’s violation of European law lies not in the scraping of images, but in the subsequent mathematical transformation of those images into biometric templates. This process converts a static photograph into a searchable, unique identifier, an action that triggers the strictest prohibitions under the General Data Protection Regulation (GDPR). As of February 2026, the company has amassed a database exceeding 70 billion images, each processed into a biometric vector without the subject’s knowledge or consent. This industrial-scale processing of special category data constitutes a direct and ongoing breach of GDPR Article 9.
The Mathematical Transformation: From Pixel to Vector
To understand the severity of the legal violation, one must examine the technical operation that occurs after Clearview’s scrapers acquire an image. The company does not simply archive the visual file. Instead, its proprietary algorithms analyze the facial geometry within the image, measuring the distance between the eyes, the shape of the nose, the contour of the jawline, and other unique physiological markers. This analysis generates a “faceprint” or “vector”, a string of numbers that represents the individual’s face in a high-dimensional space. Unlike the original photograph, which is a visual representation, the vector is a biometric tool designed exclusively for identification. This vector allows the system to cluster images of the same person across decades, varying angles, and different contexts. Under GDPR Article 4(14), this vector is defined as “biometric data.” It is personal data resulting from specific technical processing relating to the physical characteristics of a natural person. The creation of this vector is the precise moment Clearview crosses the threshold from processing standard personal data to processing “special category” data.
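The clustering behaviour described here, treating vectors within a distance threshold as the same person, can be sketched with a greedy single-pass grouping. The vectors and threshold are illustrative assumptions, not the company's algorithm:

```python
import math

# Hypothetical sketch of identity clustering: vectors closer than a
# threshold are treated as the same person, linking images taken
# years apart in different contexts. Vectors and threshold are
# invented; this is not Clearview's actual algorithm.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(vectors, threshold=0.2):
    """Greedy single-pass grouping: each vector joins the first
    cluster whose representative lies within `threshold`."""
    clusters, reps = [], []
    for i, v in enumerate(vectors):
        for members, rep in zip(clusters, reps):
            if euclidean(v, rep) <= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
            reps.append(v)
    return clusters

# Two photos of one person (slightly different angles) plus a photo
# of someone else: the first two collapse into a single identity.
groups = cluster([[0.50, 0.50], [0.55, 0.48], [0.90, 0.10]])
```

Once clustered, every image in a group becomes evidence about one individual, which is precisely the "unique identification" that Article 4(14) captures.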
GDPR Article 9: The Prohibition on Biometric Processing
GDPR Article 9(1) establishes a general prohibition on the processing of biometric data for the purpose of uniquely identifying a natural person. This ban is absolute unless the data controller can demonstrate that one of the specific exceptions in Article 9(2) applies. The most relevant exception is explicit consent (Article 9(2)(a)). The subject must give a clear, affirmative agreement to have their biometric data processed. Clearview AI has never sought nor obtained consent from the billions of individuals in its database. The company has no direct relationship with the data subjects; it extracts their data from third-party platforms without warning. Clearview has attempted to rely on Article 9(2)(e), which permits processing if the data relates to personal data which are “manifestly made public by the data subject.” The company argues that because users upload photos to public social media profiles, they have made their data public. European regulators have systematically rejected this defense. The French Data Protection Authority (CNIL), the Italian Garante, and the Dutch Data Protection Authority (AP) have all ruled that posting a photograph on social media does not constitute “manifestly making public” one’s biometric template. A user may intend to share a visual image with friends or the public, yet they do not intend to have that image analyzed to create a biometric key for a global police lineup. The act of publishing a photo is not an act of consent for biometric profiling.
The Dutch Data Protection Authority Ruling (September 2024)
In September 2024, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens or AP) issued a landmark ruling that crystallized the illegality of Clearview’s database construction. The AP imposed a fine of €30.5 million on Clearview AI for creating an “illegal database” of faces. The AP’s investigation found that Clearview had seriously violated the GDPR on multiple counts. The primary violation was the creation of the database itself. The authority stated that Clearview was never permitted to build a database of unique biometric codes in the first place. The ruling emphasized that the conversion of facial images into biometric vectors requires a legal basis that Clearview completely lacks. The AP also warned that the use of Clearview’s services by Dutch organizations is illegal. This criminalized the client-side use of the software within the Netherlands, closing the market to Clearview. The regulator noted that Clearview failed to inform people that their photos were being used and failed to provide a way for them to access their data. This ruling was significant because it attacked the existence of the database, not just the company’s failure to respond to access requests. The AP declared the very asset Clearview sells, the graph of biometric vectors, to be contraband under EU law.
The UK Upper Tribunal Decision (October 2025)
While the UK operates under the UK GDPR following Brexit, the legal principles remain parallel to the EU framework. In October 2025, the UK Upper Tribunal handed down a decisive judgment in *Information Commissioner v Clearview AI Inc*. This ruling overturned a previous First-tier Tribunal decision that had questioned the extraterritorial reach of the UK GDPR regarding foreign law enforcement activities. The Upper Tribunal confirmed that Clearview’s processing is “related to the monitoring of the behaviour” of UK residents. By indexing the faces of UK citizens and allowing clients to track their presence across the internet, Clearview engages in behavioral monitoring. This finding brought Clearview firmly back under the jurisdiction of the UK Information Commissioner’s Office (ICO). The tribunal upheld the ICO’s view that Clearview’s processing of biometric data was unlawful. The judgment reinforced the position that a US-based company cannot scrape the faces of UK residents without complying with UK data protection standards. The ruling validated the ICO’s earlier enforcement notice, which ordered Clearview to delete the data of UK residents and cease further processing.
The Persistence of the Illegal Database
Despite these definitive rulings, Clearview AI has refused to purge its database or remove the biometric templates of EU and UK citizens. As of early 2026, the company claims to host over 70 billion images. The vectors derived from these images remain stored on Clearview’s servers, likely in the United States. The company’s non-compliance presents a serious enforcement challenge. Clearview has no physical presence, employees, or assets within the European Union. It has ignored the fines imposed by the CNIL (€20 million), the Italian Garante (€20 million), the Greek DPA (€20 million), and the Dutch AP (€30.5 million). This recalcitrance has created a “zombie” database scenario. The data of European citizens remains active in Clearview’s system, available to clients in the US and other non-EU jurisdictions. While EU law enforcement agencies are barred from using the tool, the biometric templates of EU citizens are still processed every time a US agency runs a search. If a French tourist is photographed in New York, and that photo is run through Clearview, their face is matched against the illegal template created from their social media photos in France.
The Irrevocability of Biometric Templates
The creation of these illegal templates causes irreparable harm. Unlike a password or a credit card number, a face cannot be changed. Once Clearview converts a face into a vector and stores it in its graph, that individual is permanently identifiable by the system. The database operates as a perpetual, automated lineup. Every person in the database is a suspect in every search run by Clearview’s clients. The “probabilistic” nature of the vector matching means that individuals are constantly being mathematically compared to crime scene photos, CCTV footage, and other surveillance imagery. The GDPR Article 9 prohibition exists specifically to prevent this type of mass, uncontrolled biometric categorization. By stripping the biometric data from the context of the original image and storing it as a raw identifier, Clearview has industrialized the violation of privacy. The vector exists independently of the source image; even if the user deletes the original photo from Facebook, the vector remains in Clearview’s index unless the company takes specific action to remove it.
Regulatory Impotence and the “Public” Defense
Clearview continues to assert that its conduct is lawful under US law, citing a First Amendment right to collect public data. It contends that GDPR enforcement cannot reach across the Atlantic to dictate how a US company processes data stored on US servers. This argument ignores the extraterritorial scope of the GDPR (Article 3), which applies to any organization monitoring the behavior of EU subjects. The Dutch AP and the UK Upper Tribunal have both affirmed that Clearview’s activities constitute monitoring. The “public data” defense fails because the processing involves the generation of *new* data (the vector) that was never public. The user made a photo public. The user did not make a mathematical map of their facial geometry public. The gap between the visual image and the biometric template is where the violation occurs. Clearview exploits this gap, claiming they are a “search engine for faces” like Google is for text. Regulators reject this analogy. Google indexes text and metadata found on the page. Clearview performs complex biometric analysis *on* the content to generate new sensitive data. This distinction is important. The processing required to create the template is the Article 9 violation, regardless of where the original image came from.
Conclusion of the Section
The construction of Clearview AI’s biometric database represents a systematic failure to adhere to the fundamental principles of data protection. The company has built a commercial empire on the illegal processing of special category data. By converting billions of facial images into biometric vectors without consent, Clearview has violated GDPR Article 9 on a global scale. The rulings from the Netherlands, France, Italy, and the UK confirm that this processing is unlawful. Yet, the database remains, growing larger each day, a testament to the difficulty of enforcing digital sovereignty against a rogue actor operating from a safe harbor. The biometric templates of millions of Europeans sit in Clearview’s servers, illegal in their creation yet persistent in their existence.
The Dutch Verdict: A €30.5 Million Condemnation
On September 3, 2024, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens or AP) issued a definitive legal judgment against Clearview AI, imposing a fine of €30.5 million ($33.7 million). This ruling stands as one of the most aggressive enforcement actions taken by a European regulator against the American facial recognition firm. Unlike previous sanctions that focused primarily on the absence of a legal basis for processing, the AP’s decision explicitly categorized Clearview’s core asset, its database of billions of facial images, as an “illegal database” under European law. The AP’s investigation concluded that Clearview AI violated the General Data Protection Regulation (GDPR) on multiple counts. The regulator found that the company possessed no legal grounds to collect the data of Dutch citizens. The fine was accompanied by a series of penalty orders designed to compel compliance, specifically demanding the cessation of all violations and the deletion of data belonging to Dutch nationals. Aleid Wolfsen, Chairman of the AP, delivered a scathing assessment of the company’s operations. He stated that facial recognition is a “highly intrusive technology” that cannot be unleashed on the world without restriction. The AP’s ruling dismantled Clearview’s defense that it organizes public information, asserting instead that the conversion of facial images into unique biometric codes constitutes a severe infringement on personal privacy rights that cannot be justified by the company’s commercial interests.
Violation of Article 9: The Biometric Prohibition
The central pillar of the AP’s decision rests on Article 9 of the GDPR, which prohibits the processing of special categories of personal data, including biometric data used for unique identification, unless a specific exception applies. The AP found that Clearview AI automatically converts scraped images into biometric vectors, mathematical representations of facial features, without the explicit consent of the individuals depicted. Clearview attempted to argue that its processing was necessary for the prevention and detection of crime, an exception frequently reserved for government authorities. The Dutch regulator summarily rejected this claim. The AP clarified that Clearview is a private commercial entity, not a competent law enforcement authority. Consequently, it cannot claim exemptions designed for police or judicial bodies. The creation of the biometric template itself was deemed an unlawful act, occurring the moment the software analyzed an image, regardless of whether that image was ever matched to a search query. This distinction is legally important. It establishes that the violation occurs at the point of ingestion and processing, not at the point of sale or use. By scraping the web and generating these codes, Clearview engaged in the mass processing of sensitive biometric data of Dutch citizens without their knowledge or permission, a direct contravention of the strict protections afforded to biometric identifiers in the European Union.
Rejection of the “Public Data” Defense
Clearview AI has long maintained that it has a right to scrape data that is publicly available on the open web, citing the First Amendment of the US Constitution. The Dutch DPA’s ruling explicitly dismissed this argument within the context of EU jurisdiction. The AP asserted that the public availability of a photograph does not constitute a waiver of privacy rights regarding biometric processing. The regulator noted that individuals posting photos on social media or professional networking sites do so for specific, limited purposes, sharing with friends, family, or colleagues. They do not consent to having their faces harvested, converted into mathematical vectors, and added to a global lineup for police identification. The AP ruled that Clearview’s “scraping” practices violated Article 5(1)(a) of the GDPR, which requires data to be processed lawfully, fairly, and in a transparent manner. By operating in the shadows, Clearview deprived Dutch citizens of any opportunity to object to the processing of their data. The investigation highlighted that the company failed to inform the people in its database that their images were being captured and analyzed, a violation of Articles 12 and 14, which mandate that data controllers provide clear information to subjects about how their data is used, even if the data was not obtained directly from them.
Incremental Penalty Payments for Non-Compliance
Recognizing Clearview AI’s history of ignoring European regulatory orders, the Dutch DPA attached coercive measures to its decision. In addition to the lump-sum fine of €30.5 million, the AP imposed “orders subject to a penalty for non-compliance” (dwangsom). These orders mandate that Clearview must stop its violations, requiring it to cease processing the data of Dutch citizens and delete existing records. If Clearview fails to comply, it faces incremental penalties of up to €5.1 million. These penalties are designed to accumulate over time, punishing the company for every week or month it refuses to adhere to the deletion order. This approach distinguishes the Dutch ruling from a simple punitive fine; it is an active, ongoing enforcement action intended to force a change in business practices. The AP’s decision to impose these periodic penalty payments signals a recognition that one-off fines may be treated as the “cost of doing business” by lucrative tech firms. By structuring the penalty this way, the Dutch regulator aims to make continued non-compliance financially unsustainable, although the effectiveness of this strategy depends entirely on the ability to collect the funds from a US-based entity.
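Using the figures reported in the ruling, the company's maximum exposure in the Netherlands can be tallied directly:

```python
# Arithmetic on the figures reported in the ruling: the lump-sum fine
# plus the capped incremental penalties (dwangsom) for non-compliance.
base_fine = 30.5e6    # EUR, September 2024 fine
penalty_cap = 5.1e6   # EUR, maximum additional periodic penalties
max_exposure = base_fine + penalty_cap  # EUR 35.6 million in total
```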
Threat of Personal Liability for Directors
In a significant escalation of regulatory rhetoric, AP Chairman Aleid Wolfsen publicly threatened to hold Clearview AI’s directors personally liable for the company’s GDPR violations. This marks a departure from standard enforcement, which targets the corporate entity. Wolfsen stated, “That liability already exists if directors know that the GDPR is being violated, have the authority to stop that, omit to do so, and in this way consciously accept those violations.” This statement suggests that the Dutch DPA is investigating legal avenues to pierce the corporate veil. If successful, this method could result in personal fines for executives like CEO Hoan Ton-That. The logic is that the violations are not operational oversights but the result of deliberate strategic decisions made by the company’s leadership to ignore European law. The threat of personal liability introduces a new risk calculus for the company’s management. While Clearview as a corporation might shield its assets in the United States, individual directors could face legal complications, travel restrictions, or asset seizures if they enter jurisdictions that enforce Dutch civil or administrative judgments. This move by the AP represents a “nuclear option” in data privacy enforcement, targeting the individuals behind the algorithm.
Jurisdictional Battle and Article 3
Clearview AI’s Chief Legal Officer, Jack Mulcaire, responded to the fine by reiterating the company’s standard defense: Clearview does not have a “place of business” in the Netherlands or the EU and therefore is not subject to the GDPR. The company maintains that it is a US entity operating under US law and that European regulators have no authority over its servers or operations. The Dutch DPA anticipated this argument and grounded its jurisdiction in Article 3 of the GDPR. This article extends the regulation’s territorial scope to organizations outside the EU if they monitor the behavior of individuals within the Union. The AP concluded that Clearview’s services, which allow clients to identify people and potentially track their online presence and associations, constitute “monitoring of behavior.” Moreover, the scraping of Dutch citizens’ photos from Dutch websites or social media accounts associated with the Netherlands establishes a clear link to the territory. The AP ruled that the physical location of the server is irrelevant when the data subjects are protected by EU law. This extraterritorial application of the GDPR is a serious legal battlefield, asserting that digital borders cannot be bypassed simply by hosting illegal data offshore.
Prohibition on Use by Dutch Organizations
The AP’s ruling extended beyond Clearview AI to its potential customers. The regulator issued a stern warning that the use of Clearview AI by Dutch organizations is strictly prohibited. Wolfsen declared, “Clearview breaks the law, and this makes using the services of Clearview illegal.” This warning carries significant weight for Dutch law enforcement and intelligence agencies, as well as private companies. The AP stated that any Dutch organization caught using the service could expect “hefty fines.” This kills Clearview’s market in the Netherlands, regardless of whether the company pays the fine. By targeting the demand side, the AP ensures that even if Clearview ignores the penalty, it cannot legally monetize its database within the Dutch jurisdiction. This aspect of the ruling reinforces the “fruit of the poisonous tree” doctrine in data protection. Because the database was constructed illegally, any intelligence or identification derived from it is also tainted. This creates a legal liability for any Dutch police force that might attempt to use the tool, potentially jeopardizing criminal cases where Clearview evidence is introduced.
The Impossibility of Data Subject Rights
The investigation also highlighted Clearview’s widespread failure to honor the rights of data subjects. Under the GDPR, individuals have the right to access their data (Article 15) and request its deletion (Article 17). The AP found that Clearview places unreasonable blocks in the way of these requests or simply ignores them. For a Dutch citizen to ask Clearview if they are in the database, they are frequently required to provide a photo of themselves, giving Clearview more biometric data to process. The AP deemed this practice unacceptable. Moreover, Clearview’s refusal to appoint a representative in the EU (a violation of Article 27) makes it nearly impossible for citizens to communicate formally with the company. The regulator noted that Clearview did not respond to access requests filed by the complainants who triggered the investigation. This silence is not just poor customer service; it is a violation of a fundamental legal obligation. The inability of citizens to exercise control over their own biometric data was a primary factor in the severity of the fine.
Enforcement Challenges and International Precedent
While the legal judgment is clear, the practical enforcement remains a challenge. Clearview AI has no known assets in the Netherlands to seize. The collection of the €30.5 million fine will likely require international legal cooperation, which is complex given the differences between US and EU privacy laws. Yet, the ruling sets a precedent. It contributes to a growing consensus among European regulators, including those in France, Italy, and Greece, that Clearview’s business model is fundamentally incompatible with the GDPR. The Dutch DPA’s specific focus on personal liability and the illegality of the database itself adds new weapons to the regulatory arsenal. The decision also serves as a template for other jurisdictions. By establishing that the database is illegal contraband, the AP paves the way for other countries to criminalize the possession or use of Clearview’s data, isolating the company from the global market. The “illegal database” designation suggests that the data must be destroyed, not just that the company must pay a fee. This existential threat to Clearview’s inventory, its 50 billion face vectors, is the true core of the conflict.
Conclusion of the Dutch Action
The Dutch DPA’s action in 2024 represents a hardening of the European stance against non-consensual biometric surveillance. The €30.5 million fine, combined with the threat of director liability and the absolute ban on usage, constructs a legal firewall around the Netherlands. While Clearview continues to operate from the safety of the US, the Dutch ruling ensures that its “illegal database” remains a toxic asset within the European Union, subjecting any user to immediate legal peril. The standoff continues, but the legal status of the database in the Netherlands is settled: it is an unlawful construct that must be deleted.
Dutch DPA Pierces the Corporate Veil: The Rise of Executive Liability
In a watershed development for global data privacy enforcement, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens or AP) initiated a formal investigation in late 2024 into the personal liability of Clearview AI’s senior leadership. This marked a decisive escalation from standard corporate fines to targeting the individual assets and liberty of executives. By early 2025, this investigation had evolved into the European Union’s most aggressive attempt to pierce the corporate veil for GDPR violations. The AP’s strategy rests on a legal interpretation that holds directors personally accountable when they possess full knowledge of ongoing legal infractions and the authority to halt them, yet deliberately choose inaction.
The catalyst for this move was Clearview AI’s categorical refusal to comply with the AP’s September 2024 administrative order. That order not only imposed a €30.5 million fine on the corporation but also mandated the immediate cessation of all biometric data processing involving Dutch citizens. When the company ignored the deletion order, the AP triggered an incremental penalty mechanism (dwangsom). This mechanism imposed additional periodic penalty payments totaling up to €5.1 million for continued non-compliance. Clearview’s failure to pay either the primary fine or the incremental penalties by the start of 2025 forced the regulator to abandon traditional enforcement channels in favor of direct executive accountability.
The “Knowledge and Authority” Doctrine
AP Chairman Aleid Wolfsen articulated the legal basis for this escalation in explicit terms, establishing a doctrine that reverberated through corporate boardrooms across the Atlantic. Wolfsen stated that personal liability attaches automatically when a director “knows that the GDPR is being violated, has the authority to stop that, omits to do so, and in this way consciously accepts those violations.” This interpretation removes the protection of the corporate entity when the violation is not a procedural error but a core business strategy directed by leadership.
The investigation specifically targeted CEO Hoan Ton-That, whose public statements confirming the continued expansion of the database were cited as evidence of “conscious acceptance” of illegal activity. Unlike previous GDPR enforcement actions that treated non-compliance as a cost of doing business, the Dutch probe seeks to impose administrative fines directly on the directors’ personal estates. This approach bypasses the jurisdictional deadlock frequently encountered when fining US-based shell companies, as European authorities can potentially freeze personal assets or enforce judgments through international cooperation treaties if the directors travel to jurisdictions with extradition or enforcement agreements.
Criminal Escalation in Austria: The Article 84 Complaint
The pressure on Clearview’s leadership intensified in October 2025 when the privacy enforcement group NOYB (None of Your Business) filed a criminal complaint in Austria against Clearview AI’s management. This legal action utilized Article 84 of the GDPR, which permits Member States to implement criminal penalties for severe data protection infringements. Unlike the civil administrative fines issued by the Dutch AP, the Austrian complaint seeks custodial sentences for the directors responsible for the illegal processing.
The complaint argues that the directors’ refusal to comply with valid deletion orders from French, Italian, Greek, and Dutch authorities constitutes a criminal offense under Austrian law. By continuing to process the biometric data of Austrian citizens despite explicit prohibitions, the directors are accused of intentionally perpetuating a crime. This escalation to criminal law represents a severe threat to the personal liberty of Clearview’s executives, barring them from entering the European Union or any country with an extradition treaty with Austria, lest they face immediate arrest.
Failure of the “No Presence” Defense
Throughout 2024 and 2025, Clearview’s directors maintained the defense that the company has no physical presence, bank accounts, or personnel within the EU, and is therefore untouchable. The Dutch and Austrian actions dismantle this defense by focusing on the effects of the processing and the personal conduct of the directors. The AP’s investigation operates on the principle that the harm (the illegal biometric profiling) occurs within Dutch territory, granting them jurisdiction over the individuals directing that harm, regardless of their physical location.
Table 4.1: Escalation of Personal Liability Measures Against Clearview Directors (2024-2025)

| Jurisdiction | Action Type | Target | Legal Basis | Potential Consequence |
|---|---|---|---|---|
| Netherlands | Administrative Investigation | Board of Directors (incl. CEO) | Dutch General Administrative Law Act; GDPR Art. 83 | Personal fines; Asset seizure |
| Netherlands | Incremental Penalty (Dwangsom) | Corporate & Directors | Non-compliance with Sept 2024 Stop Order | €5.1 Million (Personal liability for non-payment) |
| Austria | Criminal Complaint | Senior Management | GDPR Art. 84; Austrian Data Protection Act § 63 | Imprisonment; Criminal Record |
| EU-Wide | Travel Restrictions | Named Executives | Schengen Information System (SIS) Alerts | Denial of entry; Arrest at border |
The shift toward personal liability signifies a failure of the “fine-and-forget” model of regulation. For years, Clearview AI treated multi-million euro fines as uncollectible debts. The 2025 investigations by the Dutch AP and the criminal filings in Austria demonstrate that European regulators are willing to weaponize the personal risks of executives to force compliance. This strategy places Clearview’s directors in a precarious position: continuing to operate the database in defiance of EU law carries the tangible risk of personal bankruptcy and incarceration, moving the conflict from a corporate legal dispute to a matter of individual survival.
On October 7, 2025, the United Kingdom’s Upper Tribunal (Administrative Appeals Chamber) delivered a judgment that dismantled Clearview AI’s primary defense against European data privacy laws. In the case The Information Commissioner v Clearview AI Incorporated [2025] UKUT 319 (AAC), a panel of three judges (Mrs. Justice Heather Williams, Judge Church, and Judge Butler) ruled decisively in favor of the Information Commissioner’s Office (ICO). This ruling overturned the October 2023 decision by the First-tier Tribunal (FTT), which had previously accepted Clearview’s argument that its services were outside the jurisdiction of UK law because they were used exclusively by foreign law enforcement agencies. The Upper Tribunal’s decision re-established the ICO’s authority to levy the £7.5 million fine and, more importantly, to enforce the order requiring the deletion of all biometric data belonging to UK residents.
The Fallacy of Sovereign Immunity by Proxy
The core of Clearview AI’s legal strategy relied on a concept of “derivative sovereign immunity.” The company argued that because its clients were foreign government agencies, such as police forces in the United States or national security bodies in Latin America, its data processing activities fell outside the “material scope” of the UK GDPR under Article 2(2)(a). This article exempts data processing that concerns national security or defense. Clearview contended that regulating its database would constitute “back door regulation” of foreign states, violating international principles of comity.
The Upper Tribunal rejected this interpretation entirely. The judges found that the First-tier Tribunal had erred in law by conflating the activities of the client (the foreign police force) with the activities of the vendor (Clearview AI). The judgment clarified that Clearview is a private commercial entity incorporated in Delaware, operating for profit. It is not an extension of the United States government, nor does it possess the legal privileges of a sovereign state. The processing of UK residents’ data (scraping, vectorization, and storage) occurs before any foreign law enforcement agency accesses the database. Consequently, the commercial act of constructing the biometric database is distinct from the subsequent use of that database by a client. The Upper Tribunal ruled that private contractors cannot shield themselves from data protection liability simply by selling their illegal products to government entities.
Redefining “Monitoring Behaviour” in the Algorithmic Age
A second, equally significant aspect of the ruling concerned the “territorial scope” of the UK GDPR under Article 3(2)(b). This article asserts jurisdiction over foreign controllers if their processing activities are “related to the monitoring of the behaviour” of individuals within the UK. In the 2023 FTT decision, the tribunal had adopted a narrow, almost archaic definition of “monitoring,” suggesting it required a temporal element (watching someone over time), which, it was argued, Clearview did not do, as it provided a “snapshot” identification service.
The Upper Tribunal dismissed this “anthropomorphised” view of surveillance. The judges accepted the ICO’s submission that “monitoring” in a digital context does not require a human agent to sit and watch a video feed. Instead, the Tribunal established that the creation of a searchable database of 20 billion facial vectors constitutes “monitoring” because it enables the behavioral profiling of individuals. The judgment noted that Clearview’s technology does not just identify a name; it aggregates data from multiple sources (social media, professional profiles, news articles) to create a detailed picture of an individual’s life, associations, and activities. The act of indexing this data with the specific intent of allowing third parties to track or investigate individuals satisfies the definition of “monitoring behaviour.”
The Tribunal also clarified the phrase “related to” in Article 3(2)(b). Even if Clearview argued that the client does the monitoring, Clearview’s processing is inextricably “related to” that purpose. Without the database, the monitoring could not occur. This interpretation closes a significant loophole for data brokers who previously claimed they were passive libraries of information rather than active participants in surveillance.
The Role of Privacy International and Civil Society
The proceedings included an intervention by Privacy International, a civil liberties organization that provided detailed submissions on the technical realities of biometric scraping. Their evidence helped demonstrate that the “passive” collection of data described by Clearview was, in reality, an aggressive and continuous process of “crawling” the web to update behavioral profiles. The Tribunal’s acceptance of these technical arguments marks a shift in judicial understanding of how AI models function. The judgment recognized that the “training” or “vectorization” phase of an AI system is not a neutral administrative task but a form of data processing that can infringe on rights long before a specific search query is run.
Implications for the “Splinternet” Defense
Clearview AI has frequently attempted to evade European regulation by declaring that it has no clients in the EU or UK and therefore does not “target” those markets. This “splinternet” defense, the idea that a company can operate a global dragnet while legally isolating itself from specific jurisdictions, was dismantled by the Upper Tribunal. The ruling confirmed that the physical location of the company (USA) and the location of its clients (outside the UK) are irrelevant if the data subjects are in the UK and the processing involves monitoring their behavior.
This extraterritorial application of the UK GDPR means that the ICO has the power to issue enforcement notices against foreign entities that have no physical presence in the country. The judgment explicitly stated that the “reach” of the GDPR is designed to be global in these specific circumstances to prevent the exact scenario Clearview attempted: operating a data haven where the privacy rights of UK citizens could be violated with impunity from abroad.
Reinstatement of the Enforcement Notice
Following the Upper Tribunal’s decision, the case was remitted to the First-tier Tribunal, with strict instructions to proceed on the basis that the ICO does have jurisdiction. This procedural step reinstates the ICO’s May 2022 Enforcement Notice. This notice demands not only payment of the fine but, more seriously, the deletion of all data belonging to UK residents. Clearview’s inability to filter its database by nationality, a technical limitation it has admitted to in other proceedings, creates a binary choice: either develop a 100% accurate geolocation filter to exclude UK data (which is technically improbable given the nature of the open web) or cease scraping the open web entirely to avoid further liability.
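The scale of that filtering problem can be illustrated with a toy example. The sketch below uses an entirely hypothetical record schema and domains (Clearview’s actual pipeline and data model are not public) to show why a source-domain heuristic, one of the few signals available in scraped data, cannot reliably isolate UK data subjects:

```python
from urllib.parse import urlparse

def looks_uk(source_url: str) -> bool:
    # Heuristic a vendor might try: treat a .uk source domain as UK-related.
    host = urlparse(source_url).hostname or ""
    return host.endswith(".uk")

# Hypothetical scraped records: a photo plus the URL it was taken from.
records = [
    {"photo": "a.jpg", "source": "https://news.example.co.uk/story"},  # UK news site
    {"photo": "b.jpg", "source": "https://social.example.com/u/42"},   # global platform
    {"photo": "c.jpg", "source": "https://blog.example.com/leeds-meetup"},
]

flagged = [r["photo"] for r in records if looks_uk(r["source"])]
print(flagged)  # only 'a.jpg' is caught

# b.jpg and c.jpg are missed even if they depict UK residents: most UK
# subjects appear on .com platforms, so no domain-based filter can approach
# the effectively 100% accuracy a deletion order demands.
```

The same gap applies to IP geolocation or profile-language heuristics: the signal identifies where content is hosted, not the nationality or residence of the person depicted.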
The ruling also exposes Clearview’s directors to potential personal liability under the Data Protection Act 2018, as the “sovereign immunity” shield has been removed. If the company continues to ignore the deletion order, the ICO can pursue contempt of court proceedings or seek international cooperation for asset seizure, unencumbered by the “foreign law enforcement” exemption that had previously stalled the process.
Global Legal Alignment
The UK Upper Tribunal’s decision brings British case law back into alignment with the European Union. Data protection authorities in France (CNIL), Italy (Garante), and Greece (Hellenic DPA) had already rejected Clearview’s jurisdictional arguments. The divergence caused by the UK’s First-tier Tribunal decision in 2023 had created a dangerous precedent, suggesting that the UK might become a “safe harbor” for biometric data brokers. The October 2025 ruling corrects this anomaly, presenting a unified European front (UK and EU) that defines non-consensual biometric scraping as illegal monitoring, regardless of where the server or the client is located.
The judgment serves as a warning to other AI companies training models on scraped data. The Tribunal’s broad reading of “monitoring behaviour” implies that any large-scale ingestion of personal data for the purpose of profiling, whether for facial recognition, credit scoring, or predictive policing, falls under the jurisdiction of UK law if UK residents are included in the dataset. The “publicly available” defense, which relies on the idea that data on the open web is fair game, was implicitly rejected by the confirmation that such processing requires a valid legal basis, which Clearview lacks.
Criminal Complaint Filed by Noyb in Austria Targeting Corporate Executives
On October 28, 2025, the European privacy enforcement organization Noyb (None of Your Business) escalated its legal campaign against Clearview AI by filing a formal criminal complaint with Austrian public prosecutors. This action marks a decisive shift from administrative regulatory procedures to criminal law, directly targeting the personal liberty of the company’s senior management. Unlike previous enforcement actions that resulted in unpaid corporate fines, this complaint seeks to hold individual executives personally liable for persistent violations of the General Data Protection Regulation (GDPR) and Austrian federal law.
Legal Basis: Article 84 GDPR and Section 63 DSG
The complaint is grounded in Article 84 of the GDPR, which requires EU member states to establish “effective, proportionate and dissuasive” penalties for data protection infringements beyond administrative fines. Austria has implemented this provision through Section 63 of its Data Protection Act (Datenschutzgesetz or DSG). Section 63 DSG specifically criminalizes the processing of personal data with the intent to generate profit or cause harm when such processing is known to be unlawful. The provision stipulates a prison sentence of up to one year for individuals found guilty. Noyb’s legal filing argues that Clearview AI’s operations satisfy these criteria entirely:

* **Unlawfulness:** Multiple Data Protection Authorities (DPAs) across Europe, including the Austrian DSB, had already ruled Clearview’s biometric scraping illegal by 2024.
* **Intent to Profit:** The company’s core business model relies exclusively on selling access to this illegally constructed database to law enforcement and private clients.
* **Knowledge:** The company continued its operations in defiance of explicit deletion orders and prohibitory rulings served between 2021 and 2024.

By invoking this criminal statute, Noyb aims to bypass the jurisdictional deadlock that allowed Clearview to ignore administrative fines. While a US-based corporation can refuse to pay a foreign civil penalty, individual executives facing criminal charges risk arrest upon entering the European Union or any nation with an extradition treaty with Austria.
Targeting the Leadership Hierarchy
The criminal complaint names “all responsible managers” of Clearview AI, a strategy designed to pierce the corporate veil. This broad targeting is particularly significant given the leadership restructuring that occurred earlier in 2025. In February 2025, co-founder Hoan Ton-That resigned as CEO, transitioning to a board role, while Hal Lambert and co-founder Richard Schwartz assumed co-CEO positions. Noyb’s filing posits that liability for criminal data violations extends to any decision-maker who authorized the continued illegal processing of Austrian citizens’ biometric data. This includes:

1. **Former CEO Hoan Ton-That:** For establishing the illegal processing architecture and directing operations during the initial period of non-compliance.
2. **Current Co-CEOs Lambert and Schwartz:** For maintaining the illegal database and continuing commercial sales despite knowledge of the binding European prohibition orders.

The complaint alleges that these individuals acted with direct intent (dolus directus) to enrich the company through criminal means. The filing explicitly challenges the executives’ assumption that their geographic location in the United States provides immunity from European criminal law when their digital actions cause direct harm to individuals within Austrian territory.
The “Profit from Crime” Argument
A central pillar of the complaint is the monetization of criminal activity. Austrian criminal law distinguishes between incidental regulatory breaches and systematic exploitation of illegal acts for financial gain. Noyb provided prosecutors with evidence showing that Clearview AI did not “fail to comply” with GDPR; it built its entire revenue stream on the misappropriation of biometric identities. The submission details how the company commodified the faces of millions of Austrians without their knowledge. By selling search access to this data, the executives fenced stolen goods, in this case, stolen biometric identities. This framing moves the infraction from a technical administrative violation to a property and privacy crime, cognizable under the Austrian Criminal Code’s provisions on data abuse.
Procedural Escalation and Potential Consequences
Upon receipt of the complaint, the Vienna Public Prosecutor’s Office (Staatsanwaltschaft Wien) is required to examine the evidence to determine if a formal investigation should be opened. If prosecutors proceed, they hold the authority to:

* **Issue Subpoenas:** Demand internal documents and communications from Clearview AI regarding its knowledge of EU deletion orders.
* **Request International Assistance:** Use Mutual Legal Assistance Treaties (MLATs) to seek evidence from US authorities, though political friction frequently stalls such requests in data privacy cases.
* **Issue Arrest Warrants:** If the accused fail to appear for questioning or trial, Austrian authorities can issue EU-wide European Arrest Warrants.

The immediate practical effect of this complaint is the restriction of movement for Clearview’s leadership. Legal experts note that traveling to any EU member state, or to countries with strong judicial cooperation with Austria, carries the risk of detention and interrogation. This “jail time” threat creates a tangible personal risk for executives that corporate fines failed to produce.
Reaction and Industry Impact
Clearview AI has not issued a specific public rebuttal to the criminal filing as of late 2025, though the company has historically maintained that it is not subject to the GDPR because it has no physical presence in the EU. However, the Austrian legal framework for data crimes applies based on the location of the *victim* (the data subject), not the perpetrator. Privacy advocates view this action as a test case for the “long arm” of European privacy law. If successful, it establishes a precedent that corporate officers of non-EU tech companies can face imprisonment for mass-scale privacy violations. This would fundamentally alter the risk assessment for foreign companies operating in the grey zones of data scraping, signalling that the cost of doing business in Europe illegally may be paid in personal liberty rather than just corporate revenue.
The Jurisdictional Anchor: Article 27 as a Battlefield
GDPR Article 27 serves as the primary jurisdictional anchor for foreign entities that process the data of European residents. It mandates that any organization without a physical establishment in the Union must designate a legal representative if it monitors the behavior of EU subjects. This requirement is not a bureaucratic formality. It creates a necessary legal foothold for accountability and ensures that regulators and citizens have a valid address for the service of legal documents. Clearview AI has treated this obligation with open contempt. The company has systematically refused to appoint a representative in any EU member state. This refusal functions as a calculated legal firewall designed to obstruct enforcement and delay judicial proceedings.
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens or AP) shattered this defense in its definitive ruling released in September 2024. The AP imposed a fine of €30.5 million on Clearview AI. A significant component of this penalty specifically targeted the violation of Article 27. Dutch regulators determined that Clearview’s processing of biometric data constituted “monitoring of behavior” under Article 3(2) of the GDPR. This classification automatically triggers the Article 27 requirement. By failing to appoint a representative, Clearview attempted to render itself legally invisible within the jurisdiction. The AP rejected this “ghost” defense and ruled that the company’s physical absence from the Netherlands did not absolve it of the obligation to answer to Dutch law.
The “Ghost Company” Strategy
Clearview’s refusal to comply with Article 27 is a strategic evasion tactic rather than an oversight. The company maintains that it has no offices, no servers, and no paying customers within the European Union. Its legal team asserts that the GDPR does not apply because the company operates exclusively under United States law. This argument collapses when confronted with the reality of its data collection methods. Clearview expropriates the biometric data of millions of EU residents without their knowledge. The Dutch DPA and other regulators have consistently held that this mass extraction establishes a clear link to the territory. The act of scraping faces from local social media accounts creates a digital presence that triggers compliance obligations.
The absence of a representative creates a procedural deadlock for victims seeking justice. When a data subject in the EU wishes to exercise their rights to erasure or access, they must theoretically contact the controller. Without an EU representative, these requests frequently vanish into a legal void. Clearview ignores them or claims they are unenforceable. This forces regulators to resort to extraordinary measures to deliver legal notices. In Greece, the Hellenic Data Protection Authority was forced to route its penalty notification through diplomatic channels via the Ministry of Foreign Affairs because Clearview offered no valid legal address in Europe. This procedural absurdity highlights the extent of the company’s commitment to evasion.
Dutch Penalty Orders and Non-Compliance
The 2024 Dutch ruling escalated the consequences of this intransigence. Beyond the administrative fine, the AP issued non-compliance penalty orders (dwangsom) specifically attached to the failure to appoint a representative. These orders impose cumulative fines for every week the violation continues. AP Chairman Aleid Wolfsen stated that facial recognition is a “highly intrusive technology” that cannot be unleashed without accountability. The ruling clarified that Clearview’s business model is illegal under Dutch law and that the absence of a representative is an aggravating factor that demonstrates bad faith. The company’s continued refusal to designate a point of contact has resulted in the accumulation of millions of euros in additional penalties beyond the initial fine.
This pattern of defiance is consistent across all major EU jurisdictions. The French regulator CNIL previously fined Clearview €20 million in 2022 and subsequently added €5.2 million in overdue penalties for failing to comply with its orders. The Italian Garante and the Greek DPA also issued €20 million fines that cited Article 27 violations. In every instance, Clearview ignored the order to appoint a representative. The company relies on the practical difficulty of asset seizure across the Atlantic to insulate itself from these judgments. Yet the accumulation of these rulings creates a permanent barrier to the company’s entry into the European market and exposes its directors to increasing personal liability risks.
The “Monitoring Behavior” Legal Test
The legal dispute centers on the definition of “monitoring behavior.” Clearview claims its database is a search engine for public photos. European regulators counter that the creation of a biometric template constitutes behavioral monitoring because it enables the tracking of individuals across time and space. The Dutch DPA’s analysis in 2024 solidified this interpretation. It concluded that transforming a static image into a searchable biometric vector allows for the surveillance of an individual’s movements and associations. This processing falls squarely within the scope of Article 3(2)(b) of the GDPR. Once this threshold is met, the appointment of a representative becomes mandatory under Article 27. Clearview’s failure to do so is not just a procedural error. It is a substantive violation of the law that denies EU residents their fundamental right to legal recourse.
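The distinction regulators draw, between a static photo and a searchable biometric vector, can be sketched with a toy model. The code below is purely illustrative (the `embed` function is a random stand-in, not any real face-recognition model): once every photo is reduced to a unit-length vector, a single matrix multiplication ranks an entire enrolled population against a new probe image, which is what turns a photo archive into a re-identification instrument:

```python
import numpy as np

def embed(image_seed: int, dim: int = 128) -> np.ndarray:
    # Stand-in for a face-embedding model: a real system maps pixels to a
    # fixed-length vector; here we fabricate one deterministically per "photo".
    v = np.random.default_rng(image_seed).normal(size=dim)
    return v / np.linalg.norm(v)

# Toy "database": photo id -> unit-length face vector for 1,000 identities.
database = {pid: embed(pid) for pid in range(1000)}

# A new "probe" photo of the same person as photo 7, with slight noise
# standing in for different lighting, angle, or camera.
rng = np.random.default_rng(42)
probe = database[7] + 0.05 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

# One matrix multiply scores every enrolled identity by cosine similarity.
ids = np.array(list(database.keys()))
matrix = np.stack(list(database.values()))
scores = matrix @ probe
best = ids[np.argmax(scores)]
print(best)  # the probe resolves to identity 7
```

The vectorization step is the pivotal act: the raw photo reveals nothing at scale, but the vector makes every future sighting of the same face linkable to the same record.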
Privacy advocacy group noyb (None of Your Business) has been instrumental in exposing this systematic failure. Their complaints in Austria and other jurisdictions have repeatedly highlighted the Article 27 violation as a primary obstacle to enforcement. Noyb has argued that Clearview’s strategy grants it immunity from civil liability unless regulators take aggressive action. The 2025 criminal complaints filed against Clearview executives in Austria mark a shift in this battle. By targeting the individuals responsible for the policy of non-compliance, advocates seek to bypass the corporate shield and force the appointment of a representative through the threat of personal criminal liability.
Table: Regulatory Actions Citing Article 27 Violations (2021-2025)
| Jurisdiction | Regulator | Date of Ruling | Fine Amount | Article 27 Specifics |
|---|---|---|---|---|
| Netherlands | Autoriteit Persoonsgegevens (AP) | September 2024 | €30.5 Million | Cited failure to appoint a representative as a distinct violation aggravating the fine. Issued incremental penalty payments for continued non-compliance. |
| France | CNIL | October 2022 | €20 Million | Explicitly noted the absence of a representative prevented citizens from exercising their rights. Added €5.2 million in 2023 for failure to comply. |
| Italy | Garante | March 2022 | €20 Million | Ordered immediate appointment of an EU representative to handle data subject requests. Order ignored by Clearview. |
| Greece | Hellenic DPA | July 2022 | €20 Million | Required notification via diplomatic channels due to absence of a representative. Cited the Article 27 breach as a key factor in the maximum fine. |
| Austria | DSB | May 2023 | None (Initially) | Ruled processing illegal and ordered appointment of representative. Noyb later filed criminal complaints in 2025 citing this failure. |
| United Kingdom | ICO | May 2022 | £7.5 Million | Cited failure to have a UK representative (post-Brexit equivalent of Article 27) as part of the enforcement notice. |
The CNIL Standoff: A Case Study in Digital Defiance
The conflict between the Commission nationale de l’informatique et des libertés (CNIL) and Clearview AI represents a defining moment in the enforcement of European data sovereignty. While other regulators issued warnings, the French authority moved to aggressive financial compulsion. This escalation began with a formal notice in December 2021, demanding that Clearview cease the unlawful collection of French citizens’ data and delete existing biometric templates. Clearview AI chose a strategy of total non-engagement, failing to respond to the notice or attend subsequent hearings. This refusal to recognize the authority of the regulator set the stage for a punitive sequence that remains unresolved as of 2026.
The October 2022 Sanction: Maximum Financial Impact
In October 2022, the CNIL’s Restricted Committee imposed a fine of €20 million on Clearview AI. This amount represented the maximum financial penalty available under the GDPR for the specific violations at the time. The regulator identified serious breaches of multiple GDPR articles, specifically Article 6 (lawfulness of processing), Article 12 (rights of the data subject), Article 15 (right of access), Article 17 (right to erasure), and Article 31 (cooperation with the supervisory authority). The CNIL established that Clearview’s scraping of public images to build biometric templates constituted processing without consent or legitimate interest. The regulator rejected Clearview’s argument that public availability of images equated to a waiver of privacy rights. The decision also highlighted the company’s systematic failure to facilitate the exercise of user rights. When French citizens attempted to access their data or request deletion, Clearview frequently ignored them or demanded excessive identification, walling off the data subjects from their own biometric information.
Liquidation of the Penalty Payment (Astreinte)
The initial €20 million fine included a compliance order: Clearview was required to stop processing the data of persons located in France and to delete the data already collected within two months. To enforce this, the CNIL attached a daily penalty payment (*astreinte*) of €100,000 for every day of delay beyond the deadline. By May 2023, Clearview AI had provided no proof of compliance. Consequently, the CNIL’s Restricted Committee convened to liquidate the accrued penalty. On May 10, 2023, the regulator ordered Clearview to pay an additional €5.2 million. This sum was not a new fine for a new violation but a mechanism for enforcing the previous order. The liquidation of the *astreinte* demonstrated the CNIL’s intent to use every legal tool available to compel obedience. Yet the company remained unresponsive, treating the cumulative €25.2 million debt as a theoretical figure rather than a binding obligation.
Jurisdictional Evasion and the “Ghost” Defense
Clearview AI’s legal defense, articulated in various public statements and limited correspondence, relies on a strict interpretation of territorial jurisdiction. The company argues that because it has no physical establishment, employees, or banking assets within the European Union, it falls outside the scope of the GDPR. Jack Mulcaire, Clearview’s Chief Legal Officer, has characterized these European rulings as “unenforceable” and “devoid of due process.” This “ghost” defense presents a serious challenge to the GDPR’s extraterritorial reach. Article 3 of the GDPR explicitly extends jurisdiction to controllers established outside the Union if they monitor the behavior of data subjects within the Union. The CNIL, along with the Dutch, Italian, and Greek authorities, maintains that scraping the faces of European citizens to sell surveillance products constitutes behavioral monitoring. Clearview disputes this interpretation, asserting that it merely organizes public information. This legal impasse has created a situation where a US entity operates in open contempt of EU law, protected by the Atlantic Ocean and the absence of a mutual enforcement treaty for administrative fines.
Operational Reality in 2024 and 2025
Throughout 2024 and 2025, the operational reality for French citizens did not change. Despite the deletion orders, Clearview AI continued to scrape French websites and social media profiles. The database, which contained approximately 20 billion images at the time of the initial CNIL investigation, reportedly swelled to over 50 billion images by late 2025. The company’s “catch-me-if-you-can” approach proved effective in the short term. Without assets in France to seize, the CNIL could not forcibly collect the fines. The inability to execute the financial penalty exposes a structural weakness in the international data protection framework. While the GDPR provides strong legal tools for defining illegality, it lacks a streamlined process for collecting administrative debts from non-cooperative foreign entities. The CNIL has likely referred the debt to French public finance authorities for recovery, yet the prospect of seizing US assets remains low without a shift in US judicial cooperation.
The Broader Enforcement Gap
The CNIL’s experience mirrors that of other European regulators but stands out for its use of the *astreinte* mechanism. The accumulation of daily penalties creates a mounting liability that could theoretically trigger asset seizures if Clearview ever attempts to establish a European presence or transact through European financial institutions. Legal experts suggest that while Clearview feels safe in the US, this debt effectively bars the company from future entry into the European market. This standoff also prompted the escalation seen in late 2025, when the privacy advocacy group NOYB filed criminal complaints in Austria against Clearview’s directors. The shift from administrative fines (which Clearview ignores) to potential criminal liability for executives marks the latest phase of this conflict. The CNIL’s unpaid fines serve as foundational evidence for these criminal proceedings, proving that the company engaged in willful, prolonged non-compliance despite clear judicial orders.
Summary of CNIL Enforcement Actions Against Clearview AI (2021-2025)
| Date | Action Type | Financial Impact | Clearview Response |
|---|---|---|---|
| December 2021 | Formal Notice | None (Warning) | No response; continued processing. |
| October 2022 | Sanction (Fine) | €20,000,000 | Did not attend hearing; refused to pay. |
| December 2022 | Compliance Deadline | Daily Penalty Accrual | Deadline ignored; data not deleted. |
| May 2023 | Liquidation of Penalty | €5,200,000 | Statement calling the order “unenforceable.” |
| 2024-2025 | Debt Collection Phase | Interest & Fees | Continued non-payment; database expansion. |
The CNIL’s actions against Clearview AI illustrate the limits of administrative power in a fragmented global internet. The regulator successfully defined the illegality of the biometric database under French law and imposed the maximum possible financial sanction. Yet the continued existence of the database containing French faces demonstrates that without cross-border enforcement mechanisms, financial penalties alone cannot dismantle a rogue data infrastructure. The €25.2 million debt remains a symbol of the friction between European privacy rights and American surveillance capitalism.
The €40 Million Standoff: Italy and Greece vs. Clearview AI
As of early 2026, Clearview AI remains in a state of open defiance regarding two massive financial penalties imposed by Southern European regulators. In 2022, the Italian Data Protection Authority (Garante per la protezione dei dati personali) and the Hellenic Data Protection Authority (HDPA) levied separate fines of €20 million each against the American facial recognition firm. These penalties, totaling €40 million, stand as unpaid debts four years later. The company continues to operate without an EU establishment, daring European authorities to enforce their rulings across the Atlantic. This standoff exposes a serious limitation in the General Data Protection Regulation (GDPR): the difficulty of extracting payment from a foreign entity that refuses to recognize the jurisdiction of European courts.
The refusal to pay is not an administrative oversight but a calculated legal strategy. Clearview AI maintains that because it has no physical offices, employees, or paying clients within the European Union, it falls outside the territorial scope of the GDPR. Both Italian and Greek regulators rejected this argument, citing Article 3(2) of the regulation, which asserts jurisdiction over any entity that monitors the behavior of data subjects within the Union. By scraping the public images of millions of Italian and Greek citizens to build biometric profiles, Clearview engaged in behavioral monitoring. Yet the absence of assets on European soil has rendered the collection of these fines a logistical impossibility for the time being.
The Italian Ruling: A Categorical Ban on Biometric Scraping
The Italian Garante issued its ruling in March 2022, following a complex investigation triggered by complaints from privacy advocacy groups and individual alerts. The regulator found that Clearview AI possessed a database containing billions of facial images, including a significant number of Italian nationals. The Garante determined that this collection occurred without an appropriate legal basis, violating Article 6 of the GDPR. More seriously, the processing involved biometric data, which is classified as a “special category” under Article 9. Processing such sensitive data generally requires explicit consent from the subject, which Clearview never sought nor obtained.
The Italian investigation revealed that Clearview violated the principles of transparency (Article 5(1)(a)), purpose limitation (Article 5(1)(b)), and storage limitation (Article 5(1)(e)). The company scraped images from social media platforms and websites where Italians posted photos for personal or professional reasons, not expecting them to be fed into a global police lineup. The Garante ordered the company to erase all data relating to individuals in Italy and imposed a ban on any further collection.
Clearview’s response to the Italian order was dismissive. While the company stated it had “blocked” all IP addresses from Italy to prevent access to its service, it refused to delete the biometric templates already generated from Italian images. The company argued that deleting specific data based on nationality was technically unfeasible without collecting more data to identify who was Italian, a circular argument that the Garante found unpersuasive. As of 2026, the €20 million fine sits on the books of the Italian treasury as an uncollected receivable, a testament to the limits of the Garante’s reach.
The Greek Verdict: Homo Digitalis and the Rights of the Subject
In July 2022, the Hellenic Data Protection Authority (HDPA) followed Italy’s lead, imposing its own €20 million fine. This case originated from a complaint filed by the digital rights organization Homo Digitalis on behalf of a complainant who exercised their right of access. The individual asked Clearview AI to disclose what data it held on them. Clearview failed to satisfy this request adequately, violating Article 15 of the GDPR (Right of access by the data subject).
The HDPA investigation confirmed that Clearview’s practices violated the same core articles as found in the Italian case: Article 5 (principles), Article 6 (lawfulness), and Article 9 (biometric data). The Greek regulator was particularly firm on the violation of Article 14, which mandates that data controllers inform subjects when their data is collected from third-party sources. Since Clearview scrapes the open web, it is impossible for the company to notify the billions of people whose faces it ingests. The HDPA ruled that this impossibility does not grant an exemption; rather, it renders the business model incompatible with EU law.
The Greek authority faced immediate procedural blocks. Because Clearview had no representative in the EU (itself a violation of Article 27), the HDPA had to serve the penalty notice through diplomatic channels via the Greek Ministry of Foreign Affairs. This diplomatic route underscores the severity of the friction between the EU’s privacy standards and the US-based company’s operations. Like the Italian fine, the Greek penalty remains unpaid. The ban on processing Greek data is technically in force, but without an audit mechanism inside Clearview’s New York servers, verification relies entirely on the company’s word.
The “No Establishment” Defense and Article 27
A central pillar of Clearview’s defense in both Italy and Greece is the claim that it is not subject to the GDPR because it has no “establishment” in the Union. The company contends that it sells its software exclusively to law enforcement and government agencies in the United States and select other non-EU nations. Therefore, it asserts, it does not “target” EU consumers.
Regulators dismantled this defense by pointing to the “monitoring of behavior” clause in Article 3. The creation of a biometric template, a mathematical map of facial geometry, constitutes the monitoring of a person’s physical characteristics. When that person is in Italy or Greece, the GDPR applies. Moreover, both regulators cited Clearview’s failure to appoint an Article 27 representative. The GDPR requires foreign companies processing EU data to designate a liaison within the Union to handle regulatory inquiries. Clearview’s refusal to appoint such a representative is a deliberate tactic to insulate itself from legal service and liability.
This “strategy of silence” creates a procedural deadlock. European legal systems rely on the ability to serve documents to a recognized entity. By remaining a ghost in the European corporate registry, Clearview forces regulators to rely on international treaties and diplomatic notes, processes that are slow and frequently toothless in civil administrative matters.
Enforcement Mechanisms and the Asset Vacuum
The primary reason these fines remain unpaid in 2026 is the absence of Clearview assets within the jurisdiction of European courts. If Google or Meta were to refuse a GDPR fine, authorities could seize their local bank accounts, raid their local offices, or intercept revenue streams from European advertisers. Clearview has none of these. It has no European bank accounts to freeze and no European revenue to garnish.
For the Italian and Greek authorities to collect the €40 million, they would need to petition a US court to recognize and enforce the foreign judgment. This is a high legal bar. US courts are frequently hesitant to enforce foreign fines that are “penal” in nature, especially when the underlying conduct (scraping public data) might be protected under the First Amendment in the US. Clearview has successfully argued in American litigation that its collection of public photos is a form of protected speech. This divergence in legal philosophy creates a safe harbor for the company, allowing it to ignore European financial penalties with relative impunity.
The following table summarizes the specific infractions identified by the Italian and Greek authorities in their 2022 rulings, which remain the basis for the outstanding debt.
| Authority | Fine Amount | Date of Decision | Key GDPR Violations | Status (2026) |
|---|---|---|---|---|
| Garante (Italy) | €20,000,000 | March 10, 2022 | Art 5 (Principles), Art 6 (Lawfulness), Art 9 (Biometrics), Art 12-14 (Transparency), Art 27 (Representative) | Unpaid; order to delete data ignored. |
| HDPA (Greece) | €20,000,000 | July 13, 2022 | Art 15 (Access Rights), Art 5 (Principles), Art 6 (Lawfulness), Art 9 (Biometrics), Art 27 (Representative) | Unpaid; ban on processing ignored. |
The Precedent of Impunity
The inability of Italy and Greece to collect these fines sets a dangerous precedent. It suggests that the GDPR’s extraterritorial reach is effective only against companies that voluntarily participate in the global market or have physical supply chains. For a purely digital entity like Clearview, which is content to operate solely within the US security apparatus, the “Brussels Effect” is negligible. The fines serve as a reputational stain and a legal barrier to future market entry, but they do not inflict the immediate financial pain intended by the legislators.
This situation also frustrates the victims. The Italian and Greek citizens whose faces remain in Clearview’s database have no practical remedy. Their data protection authorities have fired their heaviest artillery, maximum fines and deletion orders, and the target has simply shrugged. The “right to be forgotten” (Article 17) is nullified when the controller resides in a jurisdiction that prioritizes the commercial exploitation of public data over privacy rights.
By 2025, the discussion among European regulators had shifted from imposing new fines to finding enforcement pathways. Some legal scholars proposed blocking Clearview’s website at the ISP level across Europe, treating the service as illicit contraband similar to illegal gambling sites. Others suggested applying pressure to Clearview’s US investors or clients. Yet, as of this writing, the €40 million debt remains a symbol of the gap between the GDPR’s theoretical power and its practical limitations in a fragmented digital world.
The persistence of these unpaid fines also highlights the role of civil society. Organizations like Homo Digitalis in Greece and Privacy International (which supported the complaints) continue to document the company’s non-compliance. Their work ensures that even if Clearview does not pay, it cannot normalize its operations in Europe. The outstanding warrants for payment act as a permanent “Do Not Enter” sign, preventing the company from ever establishing a legitimate foothold in the EU market without settling a massive bill and fundamentally altering its technology.
The “Index of Everything”: 100 Billion Images as a Metric of Defiance
Clearview AI’s operational strategy since 2022 has centered on a singular, quantifiable ambition: the accumulation of 100 billion facial images. This figure, revealed in a confidential pitch deck to investors and later confirmed by CEO Hoan Ton-That, represents more than a technical milestone. It serves as a declaration of intent to construct a permanent, searchable “index of faces” covering the entire global population. By February 2026, despite a barrage of enforcement orders from European regulators, Clearview AI has not only maintained this trajectory but accelerated it. Contracts signed with U.S. federal agencies in early 2026 indicate the database holds between 60 and 70 billion images, a volume that mathematically ensures the average human being is indexed by approximately 14 distinct biometric vectors.
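The “14 vectors per person” figure is a back-of-envelope average. A minimal sketch of the arithmetic, assuming roughly five billion people worldwide have an identifiable digital footprint (an assumed denominator, not a number from the contract documents):

```python
# Back-of-envelope check of the "14 vectors per person" average.
# Both inputs are estimates used for illustration: 70 billion is
# the upper bound referenced in 2026 federal contracts, and
# 5 billion is an assumed count of people with a digital footprint.
images = 70e9
people_with_footprint = 5e9

vectors_per_person = images / people_with_footprint
print(vectors_per_person)  # → 14.0
```

A smaller database of 60 billion images, or a larger indexed population, would pull the average down, but the order of magnitude is the same: multiple biometric entries per person, not one.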
The expansion continues in direct contravention of the General Data Protection Regulation (GDPR). While European authorities demand data minimization and deletion, Clearview AI pursues data maximization. The company’s infrastructure is engineered to ingest billions of images monthly, scraping social media platforms, public forums, and obscure corners of the open web. This relentless accumulation creates a “perpetual lineup” where every individual, regardless of criminal history or consent, is subject to constant biometric analysis. The gap between the 30 billion images cited by regulators in 2023 and the 70 billion referenced in 2026 federal contracts demonstrates that fines have failed to act as a deterrent. Instead, the company treats regulatory penalties as the cost of doing business while it races to achieve total information dominance.
Dutch DPA Ruling 2024: The Database as Contraband
The collision between Clearview’s expansionist model and European law reached a definitive breaking point with the Dutch Data Protection Authority’s (Autoriteit Persoonsgegevens) ruling in mid-2024. The regulator imposed a fine of €30.5 million, yet the monetary penalty was secondary to the legal characterization of the database itself. Unlike previous rulings that focused on specific processing instances, the Dutch DPA declared the database in its entirety to be “illegal.” The authority established that Clearview AI never possessed a lawful basis to construct the biometric repository in the first place. Consequently, every image held within it, whether of a Dutch citizen or not, is the fruit of a poisoned tree under EU privacy standards.
This ruling attacked the core of Clearview’s asset value. To the Dutch DPA, the database is not a proprietary trade secret but a collection of contraband. The order demanded not just the cessation of processing Dutch data but the dismantling of the system that enables such processing. Clearview’s refusal to comply, citing its lack of physical presence in the EU, forced the regulator to issue penalty payments for non-compliance. The standoff highlights a serious jurisdictional gap: European law views the possession of these 100 billion vectors as a crime, while Clearview’s U.S. operations view them as the product.
Technological Infrastructure of Mass Surveillance
Maintaining a database of this magnitude requires significant engineering resources, which Clearview AI has optimized to reduce overhead while increasing capacity. In communications with investors, Hoan Ton-That noted that the cost to store and search 40 billion images had dropped by 36 percent compared to the cost for 30 billion, attributed to proprietary vector search improvements. This efficiency allows the company to ignore the financial friction that normally limits data hoarding. The system converts raw images into mathematical vectors, unique faceprints, which are then indexed for sub-second retrieval.
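The basic mechanics of faceprint retrieval can be illustrated with a hypothetical sketch. Nothing here reflects Clearview’s actual architecture: production systems use learned embeddings from a face-recognition model and approximate nearest-neighbor indexes rather than a brute-force scan, and the dimensions, index size, and `normalize` helper below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # Scale vectors to unit length so cosine similarity
    # reduces to a plain dot product.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy "index": 100,000 faceprints of 512 dimensions each.
# (Real embeddings come from a trained model, not random noise.)
index = normalize(rng.standard_normal((100_000, 512)))

# Simulate a new photo of the person stored at entry 42 by
# perturbing that entry slightly; a real query would be the
# embedding of a freshly scraped or uploaded image.
query = normalize(index[42] + 0.05 * rng.standard_normal(512))

# One matrix multiply scans the entire index; the highest
# cosine similarity identifies the closest stored faceprint.
scores = index @ query
print(int(np.argmax(scores)))  # → 42
```

At real scale, this exhaustive dot-product scan is replaced by sharded, approximate search structures to keep retrieval sub-second, which is the kind of efficiency gain the investor communications about cheaper vector search allude to.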
The technical architecture renders “deletion” impossible in the way the GDPR envisions. Even if a specific image URL is removed from the source website, Clearview retains the biometric vector and the cached metadata. When a user requests deletion, Clearview frequently requires them to provide a “search photo” to locate their data, a process privacy advocates argue forces the subject to surrender more biometric data to the entity they wish to escape. With the database expanding toward 100 billion entries, the statistical probability of a “clean” deletion diminishes. The scraping bots constantly re-acquire data, creating a pattern where deleted profiles are simply reconstructed from new sources or from older, cached versions found elsewhere on the web.
U.S. Federal Contracts as Expansion Fuel
The divergence between European prohibition and American adoption became clearly visible in February 2026. Just as European tribunals confirmed the extraterritorial reach of the GDPR to stop Clearview, the U.S. Customs and Border Protection (CBP) awarded the company a contract worth nearly a quarter-million dollars for “tactical targeting” and “counter-network analysis.” This contract, along with a $9.2 million deal with Immigration and Customs Enforcement (ICE) signed in 2025, provides the capital necessary to sustain the database’s growth.
These federal agreements explicitly reference the utility of Clearview’s “open-source” repository. The U.S. government subsidizes the very activity that EU regulators have deemed criminal. The revenue from these contracts allows Clearview to pay legal fees and ignore European fines, creating a geopolitical deadlock. The database is no longer just a private commercial venture; it has become a component of the U.S. national security apparatus, insulated from foreign privacy laws by federal procurement requirements.
Comparative Scale and the End of Anonymity
Biometric Database Comparison (2026 Estimates)
| Entity | Estimated Database Size | Source of Data | Legal Status in EU |
|---|---|---|---|
| Clearview AI | 60-70 Billion Images | Non-consensual Web Scraping | Illegal / Banned |
| FBI (NGI) | ~800 Million Images | Criminal / Civil Records | Law Enforcement Cooperation |
| Interpol | <200 Million Images | Police Records | Authorized |
The scale of Clearview’s repository dwarfs legitimate law enforcement databases. The FBI’s Next Generation Identification (NGI) system holds hundreds of millions of photos, primarily mugshots and civil records. Clearview’s 70 billion images represent a dataset nearly 100 times larger, encompassing the innocent, the minor, and the bystander. This volume fundamentally alters the nature of surveillance. In a database of 100 million, a search might return no results. In a database of 100 billion, a search returns not just a match but a timeline of a person’s life: vacation photos, conference attendances, background appearances in strangers’ selfies, and social media profile history.
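The “nearly 100 times larger” claim follows directly from the table’s estimates, as a quick check shows (both inputs are the estimates above, not audited counts):

```python
# Ratio of the 2026 estimates: Clearview's upper bound versus
# the FBI NGI figure from the comparison table.
clearview_images = 70e9
fbi_ngi_images = 800e6

print(clearview_images / fbi_ngi_images)  # → 87.5
```

Using the lower 60 billion estimate still yields a ratio of 75, an order-of-magnitude gap between a civil/criminal records system and an indiscriminate web-scraped index.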
This “Index of Everything” destroys the concept of obscurity. European privacy law relies on the premise that individuals have a right to control their digital identity. Clearview’s expansion proves that in the absence of physical server seizures, digital sovereignty is difficult to enforce against a determined, well-funded adversary operating from a permissive jurisdiction. The 100 billion goal is not a business target; it is a strategy to render privacy laws obsolete by establishing a level of ubiquity that regulation cannot undo.
The Architecture of Jurisdictional Arbitrage
Clearview AI’s refusal to comply with European Union privacy laws is not a matter of negligence; it is a calculated strategy of jurisdictional arbitrage. By maintaining no physical presence (no offices, no servers, no employees, and no bank accounts) within the European Economic Area (EEA), the company has constructed a legal posture that renders standard GDPR enforcement mechanisms useless. While the General Data Protection Regulation (GDPR) asserts extraterritorial jurisdiction under Article 3(2), claiming authority over any entity that monitors the behavior of EU residents, Clearview AI has demonstrated that asserting jurisdiction and enforcing it are two entirely distinct realities.

The company’s operational structure is designed to exploit the gap between digital ubiquity and physical absence. Clearview scrapes the faces of European citizens from the open web, processes their biometric data on American servers, and sells the resulting intelligence primarily to American clients. When European regulators issue fines or deletion orders, Clearview’s legal team responds with a consistent defense: the company has no “establishment” in the EU and therefore falls outside the practical reach of European administrative law. This defense was explicitly articulated by Clearview’s Chief Legal Officer, Jack Mulcaire, following the Dutch Data Protection Authority’s (AP) imposition of a €30.5 million fine in September 2024. Mulcaire stated the decision was “unlawful, devoid of due process and is unenforceable,” citing the company’s lack of a business nexus in the Netherlands.

This “ghost” status allows Clearview to operate as a digital phantom. It extracts value from the European population, their biometric identities, while remaining intangible to the regulators charged with protecting them. The strategy has proven highly effective in shielding the company’s assets.
Despite accumulating over €100 million in administrative penalties from France, Italy, Greece, the United Kingdom, and the Netherlands between 2021 and 2025, there is no public record of Clearview AI paying a single cent into a European treasury.
The “Revenue Rule” as a Transatlantic Shield
The primary legal barrier preventing EU regulators from seizing Clearview’s assets in the United States is a common law doctrine known as the “Revenue Rule.” This legal principle, deeply rooted in Anglo-American jurisprudence, holds that the courts of one sovereign will not enforce the penal or revenue laws of another. In the eyes of a US court, a GDPR fine is classified as a foreign penal judgment or a revenue claim. Unlike private contractual debts or commercial arbitration awards, which are routinely enforced across borders under treaties like the New York Convention, administrative fines imposed by foreign governments are generally treated as unenforceable in the United States.

This doctrine provides Clearview AI with a near-impenetrable shield. For the French CNIL or the Italian Garante to collect their fines, they would need to petition a US court to recognize the debt and authorize the seizure of Clearview’s US assets. Under current legal precedents, a US judge would likely dismiss such a petition, citing the Revenue Rule. This reality creates a safe haven for US-based technology companies that do not have assets in Europe. As long as Clearview avoids opening a European subsidiary or holding funds in European banks, its US treasury remains safe from GDPR penalties.

The implications of this legal deadlock are serious. It creates a two-tier system of compliance. Multinational corporations with physical assets in the EU, such as Google (Alphabet) or Meta, are exposed to asset seizure and must negotiate with regulators. In contrast, “pure” digital entities like Clearview AI can operate with financial impunity, treating GDPR fines as symbolic reprimands rather than existential threats. This undermines the credibility of the European regulatory framework, revealing that the “Brussels Effect”, the ability of the EU to regulate global digital standards, is contingent on the target having something to lose within the bloc.
The Dutch DPA and the “Paper Tiger” Problem
The limitations of this enforcement model were clearly illustrated by the Dutch Data Protection Authority’s investigation in 2024. The AP found that Clearview had built an illegal database containing the faces of Dutch citizens, processed without consent and in violation of Article 9’s prohibition on biometric data processing. The regulator imposed a €30.5 million fine and ordered the company to cease processing. Aleid Wolfsen, chairman of the AP, used unusually strong language, stating, “Clearview breaks the law, and this makes using the services of Clearview illegal.” Yet the regulator admitted the difficulty of enforcement. The fine remains unpaid, and the data remains on Clearview’s servers. The AP’s inability to collect the penalty exposes the “paper tiger” problem inherent in the GDPR’s extraterritorial ambition. While the law claims global reach, its enforcement arm is amputated at the EU border. The Dutch regulator’s subsequent move to investigate the *personal liability* of Clearview’s directors (discussed in Section 4) was a direct response to this corporate evasion. By targeting the individuals behind the corporate veil, the AP attempts to bypass the Revenue Rule, as criminal liability or personal negligence claims might travel across borders more easily than corporate administrative fines, or at least restrict the travel and international operations of the executives involved.
Table: Unpaid Administrative Fines (2021-2025)
The following table details the major financial penalties imposed on Clearview AI by European regulators, all of which remain uncollected due to the company’s lack of physical presence and refusal to recognize jurisdiction.
| Jurisdiction | Regulator | Date Imposed | Fine Amount | Status (2026) | Company Response |
|---|---|---|---|---|---|
| France | CNIL | October 2022 | €20,000,000 | Unpaid | Ignored formal notice; claimed GDPR does not apply. |
| Italy | Garante | March 2022 | €20,000,000 | Unpaid | Asserted no establishment in Italy; ignored deletion orders. |
| Greece | Hellenic DPA | July 2022 | €20,000,000 | Unpaid | Refused to appoint Article 27 representative; ignored fine. |
| United Kingdom | ICO | May 2022 | £7,500,000 | Unpaid / Litigated | Challenged jurisdiction; initially overturned, then reinstated by Upper Tribunal (2025). |
| Netherlands | Autoriteit Persoonsgegevens | September 2024 | €30,500,000 | Unpaid | “Unlawful and unenforceable” statement by CLO. |
| Total | — | — | ~€100M+ | 0% Collected | Systematic Non-Compliance |
Operational Impunity and the Splinternet
Clearview’s successful evasion of these fines has emboldened its operational strategy. Rather than modifying its behavior to comply with EU law, which would require deleting billions of images and implementing strict geofencing, the company has simply accepted that it cannot do business *with* EU entities while continuing to do business *on* EU subjects. The company stopped selling its software to private companies and law enforcement agencies within the EU (largely because those clients would be liable for using an illegal tool), but it did not stop scraping the data of EU residents. This distinction is crucial. The GDPR is designed to protect the *data subject*, not just regulate the *data controller’s* sales. By continuing to hold and process the biometric templates of French, German, and Dutch citizens, Clearview violates their rights daily. Yet, because the company’s revenue stream is derived from US law enforcement and government contracts, the loss of the EU market was a calculated write-off. Clearview operates in a “Splinternet” reality where it adheres to US permissiveness while ignoring European restrictions. The company’s defiance also sets a dangerous precedent for other AI entities. It demonstrates that the cost of GDPR non-compliance for a US-centric AI company is effectively zero, provided they have no physical footprint in Europe. This encourages a “hit-and-run” approach to data collection: scrape the world’s data, store it in a US safe haven, and ignore the summons from foreign regulators.
The Failure of International Cooperation Mechanisms
The inability to enforce these fines also highlights a failure in international legal cooperation regarding data privacy. Article 50 of the GDPR outlines mechanisms for international cooperation, but these are largely information-sharing agreements, not enforcement treaties. There is no “GDPR Extradition Treaty” or “Mutual Legal Assistance Treaty” (MLAT) for administrative data privacy fines between the US and the EU. While the US Federal Trade Commission (FTC) cooperates with EU authorities on some matters, the US government has shown little interest in helping foreign regulators punish American AI companies, especially those like Clearview that are deeply embedded in the US national security apparatus.

Clearview’s contracts with the Department of Homeland Security, the FBI, and local police forces give it a degree of political protection. It is unlikely that the US Department of Justice would entertain a request to assist the Dutch DPA in bankrupting a contractor important to US domestic policing. This geopolitical reality leaves EU regulators with few options. They can issue fines that generate headlines but no revenue. They can issue deletion orders that are ignored. The only remaining escalation is the one taken by the Austrian privacy group Noyb and the Dutch DPA in late 2024 and 2025: shifting the target from the corporate entity to the human beings running it. By filing criminal complaints and investigating personal director liability, regulators are attempting to pierce the corporate veil. If Hoan Ton-That or other executives face the threat of arrest upon entering the Schengen Zone, the calculation changes. Yet as of early 2026, the corporate entity of Clearview AI remains a ghost, proving that in the digital age, physical absence is the most effective defense against regulatory sovereignty.
The Prospect of European Arrest Warrants Against Company Leadership, 2026
The enforcement strategy against Clearview AI underwent a fundamental shift in late 2024 and throughout 2025. European regulators, faced with a US-based entity that persistently ignored administrative fines and deletion orders, moved to pierce the corporate veil. By February 2026, the legal conversation had transitioned from civil penalties to criminal liability, raising the distinct possibility of European Arrest Warrants (EAWs) being issued against the company’s executive leadership, specifically CEO Hoan Ton-That and co-founder Richard Schwartz. This escalation represents a severe test of the GDPR’s extraterritorial reach and signals that data protection authorities are prepared to weaponize criminal law to deter non-compliant foreign actors.
The catalyst for this aggressive tactic emerged in September 2024, when the Dutch Data Protection Authority (Autoriteit Persoonsgegevens or AP) imposed a €30.5 million fine on Clearview AI. While the financial penalty was substantial, the accompanying legal threat was far more significant. AP Chairman Aleid Wolfsen explicitly announced an investigation into the personal liability of the company’s directors. Under Dutch law, directors can be held personally accountable if they possess the authority to stop a violation yet knowingly permit it to continue. Wolfsen stated that such liability exists when executives “consciously accept” illegal conduct. This declaration marked the first time a major EU regulator formally initiated proceedings to hold individual tech executives personally responsible for GDPR violations, moving the conflict into the domain of personal asset seizure and criminal prosecution.
Following the Dutch initiative, the privacy advocacy group Noyb (None of Your Business) escalated the matter further in October 2025 by filing a criminal complaint in Austria. This action invoked Article 84 of the GDPR, which permits member states to implement criminal sanctions for serious data protection breaches. Austria has codified this provision in Section 63 of its Data Protection Act (DSG), which allows for imprisonment in cases of severe, intentional data theft or misuse. Noyb’s complaint argued that Clearview’s continued processing of biometric data, despite explicit bans from Austrian and other EU authorities, constituted a criminal offense. Max Schrems, the chairperson of Noyb, publicly compared the theft of biometric data to physical theft, arguing that cross-border criminal procedures used for stolen property must also apply to the theft of digital identities.
The legal mechanics of a European Arrest Warrant rely on the dual criminality principle. For an EAW to be valid, the offense must be recognized as a crime in both the issuing state and the executing state, or fall under a list of 32 specific categories of serious crime that do not require dual criminality verification. While GDPR violations are primarily administrative matters in most jurisdictions, the unauthorized interception of data and computer-related fraud are criminalized widely across the bloc. If Austrian or Dutch prosecutors successfully charge Clearview executives with these specific criminal offenses, they can issue an EAW. This warrant would be valid across all EU member states, effectively barring Ton-That and Schwartz from entering the European Union. Any attempt to cross an EU border would trigger an immediate arrest and subsequent extradition proceedings to the country that issued the warrant.
This “digital fugitive” status imposes severe operational constraints on Clearview’s leadership. While extradition from the United States for data privacy violations remains legally complex and politically fraught, the existence of an EAW creates a high-risk zone encompassing the entire European continent and potentially other nations with extradition treaties aligned with EU standards. The executives would face the constant risk of detention during international travel, even for transit purposes. This restriction isolates the company’s leadership and damages its ability to court international government clients, as the executives cannot physically attend meetings, conferences, or negotiations within the EU without risking their liberty.
The move toward criminalization addresses the “impunity gap” that allowed Clearview to ignore over €100 million in cumulative administrative fines from France, Italy, Greece, and the UK. By targeting the individuals rather than the bankruptcy-proof corporate shell, European authorities are establishing a new deterrent. The Dutch and Austrian actions demonstrate that the “no physical presence” defense, the argument that a company is untouchable because it has no offices in the EU, fails when prosecutors target the human decision-makers behind the algorithm. As of early 2026, the investigation files remain open, and the threat of arrest warrants serves as the last remaining enforcement tool in a regulatory battle that has exhausted all civil remedies.
The Myth of Deletion: Vector Persistence and the “Shadow” Database
The fundamental architecture of Clearview AI’s platform renders the European Union’s General Data Protection Regulation (GDPR) Article 17, the Right to Erasure, technologically impotent. While the company publicly claims to honor opt-out requests, a forensic examination of its data processing pipeline reveals a system designed for permanent retention. The core problem lies not in the storage of the original JPEG images scraped from the open web, but in the persistence of the proprietary biometric vectors derived from them. When a subject requests deletion, Clearview AI does not scrub the biometric data from its systems; instead, it likely shifts that data into a “suppression” or “blocklist” file. This action, while preventing the face from appearing in search results for law enforcement clients, necessitates the continued storage and processing of the individual’s unique biometric map. To know whom not to identify, the system must permanently know exactly who the subject is.
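The “delete means hide” behavior described above can be sketched in a few lines. This is a minimal, hypothetical model of suppression-list semantics; every class and method name here is my own illustrative assumption, not Clearview’s actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two face vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class FaceIndex:
    def __init__(self):
        self.vectors = {}        # subject_id -> biometric vector (never purged)
        self.suppressed = set()  # "erased" subjects: hidden, not deleted

    def ingest(self, subject_id, vector):
        self.vectors[subject_id] = vector

    def erase(self, subject_id):
        # "Erasure" merely flags the record. The vector must be retained,
        # because hiding future matches requires comparing against it forever.
        self.suppressed.add(subject_id)

    def search(self, probe, threshold=0.9):
        # Every search still processes the suppressed subjects' biometrics.
        return [sid for sid, vec in self.vectors.items()
                if sid not in self.suppressed and cosine(probe, vec) >= threshold]
```

After `erase("alice")`, a `search` no longer returns her, yet `index.vectors["alice"]` still holds her complete biometric map, which is exactly the paradox the regulators identified.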
This technical paradox formed the crux of the Dutch Data Protection Authority’s (Autoriteit Persoonsgegevens or AP) explosive ruling in September 2024. The AP’s investigation found that Clearview’s database, which had swelled to over 30 billion images at the time of the inquiry and reportedly exceeded 100 billion by early 2026, was illegal from its inception. Because the initial scraping occurred without a valid legal basis under GDPR Article 6, any subsequent processing, including processing for the purpose of exclusion, remains fruit of the poisonous tree. The Dutch regulator explicitly rejected the “security” exemption Clearview attempted to claim. By retaining the hash or vector of a dissident, journalist, or ordinary citizen to prevent future matching, Clearview continues to process special category biometric data prohibited under Article 9, maintaining a shadow database of individuals who have tried to escape its reach.
The Vectorization Trap: Why “Delete” Means “Hide”
To understand the persistence of this data, one must examine the conversion process. When Clearview’s scrapers ingest an image from Facebook, Venmo, or a news site, the system immediately runs a neural network to map the facial geometry. This process generates a 512-point (or higher) vector, a string of numbers representing the mathematical relationship between facial features. This vector is the actual asset. It is small, easily indexed, and searchable in milliseconds. Even if Clearview were to delete the source image URL and the thumbnail from its visible index, the vector likely remains in the high-dimensional space used to train and refine its matching algorithms. Removing a vector from a live, optimized index (such as those built on FAISS or similar libraries) is computationally expensive and technically complex, often requiring a full rebuild of the index to truly purge the data traces.
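The gap between marking a record deleted and truly purging it can be illustrated with a flat brute-force store; this is an assumed, simplified structure of my own, not FAISS itself, where graph- and cluster-based indexes make the rebuild far more expensive still:

```python
# Hypothetical sketch contrasting cheap "tombstone" deletion with true
# purging in a flat vector store. All names are illustrative.

class FlatIndex:
    def __init__(self):
        self.store = []          # contiguous (record_id, vector) entries
        self.tombstones = set()  # ids marked deleted but physically present

    def add(self, record_id, vector):
        self.store.append((record_id, vector))

    def tombstone(self, record_id):
        # O(1) "delete": the vector's bytes remain fully intact in the store.
        self.tombstones.add(record_id)

    def purge(self):
        # True erasure: rebuild the entire structure from scratch,
        # omitting tombstoned records. This is the flat-store analogue
        # of the full index rebuild described above.
        self.store = [(rid, vec) for rid, vec in self.store
                      if rid not in self.tombstones]
        self.tombstones.clear()
```

Tombstoning is what makes a record invisible cheaply; only the costly `purge` rewrite actually removes the biometric data, which is why operators of large live indexes have a strong incentive never to perform it.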
Moreover, machine learning models exhibit a phenomenon known as “model inversion” or data leakage, where traces of the training data remain baked into the algorithm’s weights. Unless Clearview performs “machine unlearning”, a sophisticated and rare process of retraining the model from scratch without the specific data points, the biometric essence of the deleted individual remains embedded within the AI’s cognitive structure. There is no evidence to suggest Clearview engages in model retraining for individual deletion requests. Consequently, a European citizen’s face may technically be removed from the search results, yet their biometric identity remains a functional part of the neural network’s logic, helping the system identify others more accurately.
The “Write-Only” Architecture of the Scraped Web
The persistence problem is further reinforced by the disconnect between Clearview’s static database and the live internet. The internet is a living document; users delete posts, change privacy settings, and remove photos daily. Clearview’s scrapers, however, operate largely as a one-way valve. They ingest data but rarely, if ever, re-verify the source to check for deletion. A photo deleted by a user from Instagram in 2021 remains in Clearview’s cold storage in 2026. The company has admitted in various legal filings that it caches images to ensure availability for law enforcement, even if the original source disappears. This creates a “zombie” dataset where digital ghosts of past internet activity are preserved in amber, accessible to police agencies regardless of the user’s current privacy choices.
The Dutch DPA’s 2024 penalty of €30.5 million was driven by this specific failure. The regulator noted that Clearview’s system makes it impossible for individuals to know they are in the database unless they actively inquire, at which point they must surrender more personal data (frequently a government ID and a fresh photo) to prove their identity. This “verification” process feeds the very beast the subject seeks to starve. Security researchers have long warned that this requirement serves as a secondary data collection point, allowing the entity to link a high-confidence, government-verified identity to the scraped social media profiles, thereby increasing the value of the biometric profile rather than destroying it.
Regulatory Standoff and the 2026 Enforcement Vacuum
As of early 2026, the standoff between European regulators and Clearview AI has hardened into a permanent state of non-compliance. Despite the Dutch fine, the Greek fine of €20 million, the Italian fine of €20 million, and the French penalty, Clearview has adopted a strategy of total jurisdictional evasion. By maintaining no physical offices, servers, or staff within the European Union, the company dares regulators to enforce their orders across the Atlantic. The unpaid fines have accumulated to over €100 million, a figure the company dismisses as unenforceable. This absence of physical presence allows Clearview to ignore deletion orders with impunity. The “Right to be Forgotten” exists on paper in Brussels but dissolves upon contact with Clearview’s servers in the United States.
The UK Upper Tribunal’s ruling in October 2025, which confirmed that GDPR applies to Clearview despite its lack of British clients, was a legal victory but a practical stalemate. While it affirmed the Information Commissioner’s Office (ICO) authority to order deletions, the technical execution of those orders remains unverified. Clearview continues to assert that its processing is “exclusively in furtherance of” foreign criminal law enforcement, a defense the Tribunal scrutinized but which the company uses to justify its refusal to purge the biometric maps of UK and EU residents. The company operates as a data haven, storing the biometric identities of the world’s population in a jurisdiction that lacks a federal privacy law comparable to GDPR.
The Blocklist Paradox and Article 9 Violations
The most sophisticated legal defense mounted by Clearview involves the “blocklist” argument. The company asserts that to comply with a deletion request, it must maintain a “hash” of the face to recognize and reject it in future scrapes. Privacy advocates and the Dutch AP argue this is a circular violation. To maintain a blocklist of 450 million Europeans would require Clearview to hold a massive database of European biometric data, the very thing they are forbidden from possessing. The Dutch ruling clarified that because the initial collection was unlawful, Clearview has no legal basis to retain any data, even for the purpose of suppression. The only compliant action is the total destruction of the database and the cessation of scraping European websites, a step Clearview refuses to take as it would degrade the utility of its product for its primary US government clientele.
This creates a scenario where the “opt-out” mechanism is functionally a “confirm-in” mechanism. By submitting a photo for exclusion, a user confirms their identity and provides a high-quality “probe image” that the system can use to validate its existing clusters. In the absence of third-party technical audits, which Clearview has consistently resisted, there is no guarantee that these exclusion requests are not simply tagged with a “do not show” flag while remaining fully searchable by intelligence agencies operating with higher clearance levels or under national security exemptions. The opacity of the system prevents any independent verification of true deletion.
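The circularity of the blocklist argument can be made concrete with a short sketch. Assuming a hypothetical scrape pipeline (all names are my own inventions), honoring an opt-out at scrape time requires both retaining a fingerprint derived from the subject’s biometrics and re-processing their data on every future scrape:

```python
import hashlib

def template_fingerprint(vector):
    # A stable digest of a biometric vector. Even reduced to a hash, it is
    # derived from, and only usable against, the subject's biometric data.
    payload = ",".join(f"{x:.4f}" for x in vector).encode()
    return hashlib.sha256(payload).hexdigest()

class ScrapePipeline:
    def __init__(self):
        self.blocklist = set()  # fingerprints of "erased" subjects
        self.database = []      # ingested biometric vectors

    def register_erasure(self, vector):
        # Complying with the request means *storing* biometric material.
        self.blocklist.add(template_fingerprint(vector))

    def ingest(self, vector):
        # Every future scrape processes the opted-out subject's data again.
        if template_fingerprint(vector) in self.blocklist:
            return False  # suppressed, but only because it is still held
        self.database.append(vector)
        return True
```

Note the further weakness: an exact digest would miss a new photo of the same face, so real suppression demands fuzzy biometric matching against the retained vector itself, which is precisely the Article 9 processing the Dutch AP ruled unlawful.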
Conclusion: The End of Digital Anonymity
The investigation into Clearview AI’s operations from 2017 to 2026 demonstrates that the company has constructed a system immune to the traditional mechanisms of data protection. The database is not a collection of photos; it is a persistent, evolving biometric map of the human race that ignores the concept of deletion. The technical reality of vector embeddings, combined with the strategic retention of “blocklist” data, ensures that once a face is ingested, it is permanent. The fines imposed by the Netherlands, France, Italy, and Greece serve as markers of the violation but have failed to alter the technological trajectory of the company. Clearview AI has successfully decoupled its data retention practices from the legal frameworks designed to constrain them, creating a permanent, searchable archive of human identity that persists in defiance of the right to be erased.
Table 13.1: Technical Barriers to GDPR-Compliant Deletion in Clearview AI Architecture

| Technical Component | Function | Barrier to Erasure | Regulatory Finding (EU/UK) |
|---|---|---|---|
| Biometric Vector | 512+ point mathematical map of facial geometry. | Vectors remain in index/training data even if image URL is removed. | CNIL & Dutch AP rule vectors are personal data; retention is illegal processing. |
| Hash/Blocklist | Cryptographic signature used to prevent re-scraping. | Requires permanent storage of the biometric signature to function. | Dutch AP (2024) rejected “security” exemption; ruled blocklist is illegal biometric processing. |
| Neural Weights | The “learned” patterns within the AI model. | Data leakage means faces are “baked in” to the model’s logic. | No evidence of “machine unlearning” or model retraining upon deletion request. |
| Cold Storage | Backups of scraped web data. | No synchronization with live web; deleted source photos remain in backups. | Violates GDPR principle of accuracy (Article 5) and storage limitation. |
| Probe Image | Photo submitted by user to request opt-out. | Verifies identity and provides high-quality biometric sample. | Regulators warn this creates a secondary, verified biometric database. |