Clearview AI told it broke Australian privacy law, ordered to delete data – TechCrunch

After Canada, Australia has now found that controversial facial recognition company Clearview AI broke national privacy laws by covertly collecting citizens’ facial biometric data and incorporating it into its AI-powered identity matching service – which it sells to law enforcement and others.

In a statement released today, Australia’s Information and Privacy Commissioner, Angelene Falk, said Clearview AI’s facial recognition tool breached the country’s Privacy Act 1988 by:

  • collecting sensitive information about Australians without their consent
  • collecting personal information by unfair means
  • failing to take reasonable steps to notify individuals of the collection of personal information
  • failing to take reasonable steps to ensure that the personal information it disclosed was accurate, having regard to the purpose of the disclosure
  • failing to take reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.

In what appears to be a major victory for privacy, the regulator has ordered Clearview to stop collecting facial biometric data and biometric templates from Australians, and to destroy all existing images and templates it holds.

The Office of the Australian Information Commissioner (OAIC) conducted a joint investigation into Clearview with the UK’s data protection agency, the Information Commissioner’s Office (ICO).

However, the UK regulator has yet to announce any findings.

In a separate statement today – which perhaps reads slightly testily – the ICO said it “is considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws”.

An ICO spokeswoman declined to give more details, such as how long the regulator might spend considering whether to act.

British citizens should hope the regulator doesn’t take as long to ‘consider’ Clearview as it has taken to look into (without acting against) the lawfulness problems of adtech.

Meanwhile, other European regulators have already hit Clearview users with sanctions …

Back halfway around the world, the OAIC is wasting no time in taking action against Clearview – or in mincing words.

In public comments on the OAIC’s decision (pdf) finding Clearview violated Australian law, Falk said: “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair. It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched in Clearview AI’s database.

“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals in the database may also be at risk of misidentification,” she said, adding: “These practices fall well short of Australians’ expectations for the protection of their personal information.”

The OAIC also found that the privacy impacts of Clearview AI’s biometric system were “not necessary, legitimate and proportionate, having regard to any public interest benefits”.

“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” Falk said.

“The indiscriminate scraping of people’s facial images, only a fraction of which would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”

The Australian regulator said that between October 2019 and March 2020, Clearview AI provided tests of its tool to some local police forces – who carried out searches using facial images of individuals located in Australia.

The OAIC added that it is currently finalizing an investigation into the Australian Federal Police’s trial use of the technology, to decide whether the force complied with requirements under the Australian government agencies’ privacy code to assess and mitigate privacy risks. So it remains to be seen whether local law enforcement will face sanction too.

Earlier this year, the Swedish data protection watchdog warned police across the country over what it said was unlawful use of Clearview’s tool – issuing a fine of around €250,000 in that case.

Returning to the OAIC’s decision, the regulator said Clearview defended itself by claiming the information it handles is not personal information – and that, as a US-based company, it does not fall under the jurisdiction of Australia’s Privacy Act. Clearview also told the regulator that it had stopped offering services to Australian law enforcement agencies shortly after the OAIC’s investigation began.

However, Falk rejected Clearview’s arguments, saying she was satisfied that it must comply with Australian law and that the information it handles is personal information covered by the Privacy Act.

She also said the case reinforces the need for Australia to strengthen protections through the ongoing review of the Privacy Act, including by restricting or banning practices such as the scraping of personal information from online platforms. And she added that the case raises further questions about whether online platforms are doing enough to prevent and detect the scraping of personal data.

“Clearview AI’s activities in Australia involve the automated and repetitious collection of sensitive biometric information from Australians on a large scale, for profit. These transactions are fundamental to their commercial enterprise,” said Falk. “The company’s patent application also demonstrates the capability of the technology to be used for other purposes such as dating, retail, dispensing social benefits, and granting or denying access to a facility, venue or device.”

Clearview has been contacted for comment on the OAIC’s decision.

The company confirmed that it will appeal, sending this statement (below), attributed to Mark Love of BAL Lawyers, which is representing Clearview AI:

“Clearview AI has gone to considerable lengths to cooperate with the Office of the Australian Information Commissioner. In doing so, Clearview AI has volunteered considerable information, yet it is apparent to us and to Clearview AI that the Commissioner has not correctly understood how Clearview AI conducts its business. Clearview AI operates legitimately in accordance with the laws of the places in which it does business.

“Clearview AI intends to seek review of the Commissioner’s decision by the (Australian) Administrative Appeals Tribunal. Not only has the Commissioner’s decision missed the mark on the manner of Clearview AI’s operation, the Commissioner lacks jurisdiction.

“To be clear, Clearview AI has not violated any laws or interfered with the privacy of Australians. Clearview AI does not do business in Australia, does not have Australian users.”

The controversial facial recognition company has faced litigation on its home soil in the United States – under Illinois’ biometric privacy law.

Also earlier this year, Minneapolis voted to ban the use of facial recognition software by its police department – effectively barring local law enforcement from using tools like Clearview’s.

The fallout from Clearview AI’s scraping of the public web and social media sites to amass a database of over 3 billion images, in order to sell a global identity-matching service to law enforcement, may have contributed to an announcement made yesterday by Facebook’s parent company, Meta – which said it would delete its own mountain of facial biometric data.

The tech giant cited “growing concerns about the use of this technology as a whole”.

Update: In addition to the above statement, Clearview founder Hoan Ton-That has also put out a personal response (pasted below) to the OAIC’s decision – in which he expresses disappointment and argues that the privacy commissioner’s decision misinterprets the value of his “crime fighting” technology to society.

“I grew up in Australia before moving to San Francisco at the age of 19 to pursue my career and create world-famous crime-fighting facial recognition technology. I am a dual citizen of Australia and the United States, the two countries I care about most. My company and I have acted in the best interests of these two nations and their people by assisting law enforcement in solving heinous crimes against children, the elderly and other victims of unscrupulous acts. We only collect public data from the open Internet and comply with all standards of privacy and law. I respect the time and effort that the Australian officials spent evaluating aspects of the technology I built. But I am disheartened by the misinterpretation of its value to society. I look forward to engaging in a conversation with leaders and lawmakers to discuss privacy concerns in detail, so that the true value of Clearview AI’s technology, which has proven so essential to law enforcement, can continue to make communities safe.”

About Janet Young
