Clearview AI has courted controversy by scraping the web for photos and using facial recognition to give police and others unprecedented power to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful.
It could also make the tool more error-prone, and more dangerous.
Clearview has amassed billions of images scraped from websites including Facebook, Instagram, and Twitter, and uses AI to identify a particular person in an image. Police and government agents use the company's face database to help identify suspects by tying them to their online profiles.
The company's co-founder and chief executive, Hoan Ton-That, told Wired that Clearview has now collected more than 10 billion images from across the web, roughly three times more than previously reported.
Ton-That says a larger pool of photos means users, often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company's tool more accurate.
Clearview has combined aggressive web crawling, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.
Ton-That demonstrated the technology through a smartphone app by taking a photo of a reporter. The app returned dozens of images from numerous US and international websites, each showing the correct person, in photos captured over more than a decade. The appeal of this kind of tool is obvious, but so is its potential for misuse.
Clearview's actions have sparked public outrage and a broader debate over expectations of privacy in the age of smartphones, social media, and AI. Critics say the company violates personal privacy. The ACLU has sued Clearview in Illinois under a law that prohibits the collection of biometric data, and the company faces class action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.
The pushback has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. "People who are worried about this are very vocal, and that's a good thing, because I think over time we'll be able to address more of their concerns," he said.
Some of Clearview's new technologies could spark further controversy. Ton-That says the company is developing new ways for police to find a person with "deblur" and "mask removal" tools. The first takes a blurry image and uses machine learning to sharpen it into what a clear picture might look like; the second tries to reconstruct the covered part of a person's face using a machine learning model that fills in the missing details with a best guess based on statistical patterns found in other images.
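Clearview has not published how either tool works, but the underlying ideas can be illustrated with classical, non-learned analogues. The toy sketch below (plain NumPy; these function names and parameters are illustrative assumptions, not Clearview's implementation) shows unsharp-mask sharpening in place of learned deblurring, and a naive diffusion fill in place of learned inpainting:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Crude 'deblur' analogue: boost edges by subtracting a box-blurred
    copy of the image. A learned deblurring model instead *predicts* the
    sharp image from training data."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur built from shifted copies of the padded image
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

def naive_inpaint(img, mask, iterations=50):
    """Crude 'mask removal' analogue: repeatedly replace masked pixels
    with the mean of their four neighbours, diffusing surrounding values
    inward. A learned model would instead hallucinate plausible detail
    from statistical patterns in its training set."""
    out = img.copy()
    out[mask] = out[~mask].mean()  # initial guess for the hidden region
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbours[mask]  # only masked pixels are updated
    return out
```

Both operations invent information: the filled-in or sharpened pixels are guesses, not evidence, which is the root of the concerns about misidentification.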
These capabilities could make Clearview's technology more attractive, but also more problematic. It is not yet clear how well the new techniques work, but experts say they could increase the risk of misidentifying a person and amplify biases already present in the system.
"I would expect the accuracy to be quite bad, and beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended biases to creep in," says Aleksander Madry, a professor at MIT who specializes in machine learning. Without proper care, for example, the approach may make misidentification more likely for people with certain features.
Even if the technology works as promised, Madry says the ethics of unmasking people are problematic. "Think of people who wore masks to take part in peaceful demonstrations, or blurred their photos to protect their privacy," he said.
Ton-That says internal tests have found that the new tools improve the accuracy of Clearview's results. "Any enhanced images should be noted as such, and extra care should be taken when evaluating results based on an enhanced image," he says.