Facebook is sharing a new and diverse dataset with the wider AI community. In an announcement spotted by VentureBeat, the company says researchers can use the collection, dubbed Casual Conversations, to test their machine learning models for bias. The dataset features 3,011 people across 45,166 videos and, true to its name, shows those individuals giving unscripted answers to questions.
What makes Casual Conversations notable is that it features paid actors whom Facebook asked to self-identify their age and gender. The company also hired trained professionals to label participants' skin tones according to the Fitzpatrick scale, a system developed by a dermatologist to classify human skin color, as well as the ambient lighting in each video. Facebook claims the dataset is the first of its kind.
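To make that concrete, here is a minimal sketch of what a per-video annotation record might look like. The field names and schema are hypothetical, since Facebook hasn't published its exact format in this announcement; the Fitzpatrick scale itself runs from type 1 (lightest) to type 6 (darkest).

```python
from dataclasses import dataclass

@dataclass
class VideoAnnotation:
    """Hypothetical per-video label record for a Casual Conversations-style dataset."""
    video_id: str
    age: int                 # self-identified by the paid participant
    gender: str              # self-identified: e.g. "male", "female", "other"
    fitzpatrick_type: int    # 1-6 on the Fitzpatrick scale, labeled by trained annotators
    low_ambient_light: bool  # whether annotators flagged dim lighting conditions

sample = VideoAnnotation(
    video_id="cc_000001",
    age=34,
    gender="female",
    fitzpatrick_type=5,
    low_ambient_light=False,
)
```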
You don't have to look far for examples of bias in artificial intelligence. One recent study found that facial recognition and analysis programs like Face++ rated Black men's faces as angrier than those of their white counterparts, even when both men were smiling. The same flaws have made their way into consumer-facing AI software: in 2015, Google tweaked its Photos app to stop using the "gorilla" label after the app misidentified software engineer Jacky Alciné's Black friends as "gorillas." Many of those issues can be traced back to the datasets companies use to train their software, and that's where initiatives like this one can help. A recent MIT study of popular machine learning datasets found that about 3.4 percent of the data in those collections was inaccurate or mislabeled.
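As a rough illustration of how a dataset with self-identified demographic labels enables this kind of bias testing, the sketch below compares a model's accuracy across skin-tone groups and reports the gap between the best- and worst-served groups. The model outputs and data here are toy stand-ins, not part of Facebook's release.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute per-group accuracy and the best-vs-worst gap.

    predictions, labels: model outputs and ground truth per example.
    groups: group id per example (e.g. Fitzpatrick types 1-6).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example: a smile detector evaluated across two skin-tone groups.
preds = [1, 1, 0, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 1]
skin  = [1, 1, 1, 1, 6, 6, 6, 6]
per_group, disparity = accuracy_by_group(preds, truth, skin)
print(per_group, disparity)  # a large gap flags group-dependent performance
```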
Although Facebook describes Casual Conversations as a "good, bold first step," it acknowledges that the dataset isn't perfect. For a start, it only includes people from the United States. The company also didn't ask participants where they were originally from, and when it came to gender, it offered only the options "male," "female" and "other." Over the next year, however, it plans to make the dataset more inclusive.