Dallas Cops Installed Unauthorized Face Recognition App on Personal Phones



Photo: Justin Sullivan (Getty Images)

Dallas police officers used unauthorized facial recognition software to run somewhere between 500 and 1,000 searches in an attempt to identify people based on photographs. A Dallas Police spokesperson said the searches were never approved by the department, and that in some cases officers had installed the face recognition software on their personal phones.

The spokesperson, Senior Cpl. Melinda Guterres, said the department first learned of the matter after being contacted by investigative reporters at BuzzFeed News. The face recognition application, known as Clearview AI, was never approved, she said, “for use by any member of the department.”

Department leaders then ordered the software removed from all city-issued devices.

Officers were not banned from possessing the software outright, however. No order was given to delete copies of the app that had been installed on personal phones; officers “were only instructed not to use the app as part of their work,” Guterres said.

Clearview AI did not respond to a request for comment Wednesday asking whether it had denied access to officers whose departments had not consented to their use of the software.

The Dallas Police Department says it never had a contract with Clearview AI. Officers were nevertheless able to download the app by visiting the company’s website. According to BuzzFeed, officers who signed up for a free trial at the time did not need to prove they were authorized to use the software.

What’s more, emails obtained by the news outlet show that Clearview AI CEO Hoan Ton-That did not oppose allowing officers to sign up using personal email accounts.

During an internal review, Dallas officers told senior officials they had learned about Clearview through word of mouth from other officers.

BuzzFeed News first revealed Clearview AI’s use in Dallas on Tuesday as part of a yearlong investigation into the company. The Dallas Police Department is one of 34 agencies that acknowledged employees had used the software without authorization.

Using data provided by a confidential source, reporters determined that nearly 2,000 public agencies had used Clearview AI’s face recognition tools. BuzzFeed said the source was not named for fear of retaliation.

About 260 agencies told reporters that staff had never used the software, though some have since reversed course and acknowledged that Clearview AI was used. Nearly a hundred refused to confirm either way, and more than 1,160 organizations did not respond at all.

BuzzFeed’s data, which spans 2018 through February 2020, also shows that the Dallas Security Department, which oversees security at City Hall, ran somewhere between 11 and 50 searches. A spokesperson said the department had no record of using Clearview AI.

Dallas Mayor Eric Johnson did not immediately respond to an email. A City Council member said they needed time to review the matter before speaking on the record.

Misuse of confidential police databases is not an unknown phenomenon. In 2016, the Associated Press reported that police routinely abused access to law enforcement databases to gather information on “romantic partners, business associates, neighbors, journalists and others” for reasons that have nothing to do with everyday police work.

Between 2013 and 2015, the AP found at least 325 cases in which officers were fired, suspended, or resigned for abusing access to law enforcement databases. In another 250 cases, officers received reprimands or counseling or faced lesser discipline.

Today, face recognition is considered among the most controversial technologies used by police. The American Civil Liberties Union has pressed federal lawmakers to impose a nationwide moratorium on its use, citing multiple studies showing the software is error-prone, particularly when used on people with darker skin.

A 2019 study of 189 facial recognition systems conducted by a branch of the U.S. Department of Commerce, for example, found that people of African and Asian descent were misidentified by the software at rates up to 100 times higher than white people. The tests also showed that women and the elderly were at greater risk of being misidentified.

One system used in Detroit was estimated by the city’s own police chief to be wrong “96 percent of the time.”

Clearview AI, which has reportedly scraped billions of people’s photos from social media without their consent or the consent of the platforms, has consistently claimed that its software is unbiased and, in practice, helps prevent misidentification of people of color.

Ton-That, the CEO, told BuzzFeed that “independent testing” has shown his product is unbiased; however, he dodged repeated requests for more information about the alleged tests. The news outlet was able to run pictures of up to 30 people through a source with access to the system, including several pictures of computer-generated faces. Clearview AI falsely matched two of the fake faces to images of real people: one a woman of color and the other a young girl of color.

In 2019, more than 30 organizations with a combined membership of 15 million people called on U.S. lawmakers to permanently ban the technology, saying no amount of regulation could adequately protect Americans’ civil liberties from its unrestricted use.

Correction: An earlier version of this article erroneously stated that Clearview AI scraped images of billions of people from social media “with” their consent. It did so without their consent. We’re sorry for the error.


