Sun. Nov 28th, 2021

Facebook has denied the allegations. “At the heart of these stories is a premise which is false,” spokesman Kevin McAllister said in an email. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie.”

On the other hand, the company has responded to specific criticisms raised by the documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” it said in a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAllister pointed to Live Audio Rooms, launched this year, as an example of a product rolled out under this process.

If that’s true, it’s a good thing. Similar claims Facebook has made over the years, however, have not always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI can’t fix everything

On Facebook and Instagram, the value of a given post, group, or page is largely determined by how likely you are to look at it, like it, comment on it, or share it. The higher that likelihood, the more the platform will recommend the content to you and feature it in your feed.
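The logic described above can be sketched in a few lines of code. This is a purely illustrative toy, not Facebook’s actual system: the engagement types, their predicted probabilities, and the weights are all invented here to show how ranking by predicted engagement naturally favors whatever content people are most likely to react to.

```python
# Toy sketch of engagement-based feed ranking (illustrative only; the
# keys, weights, and numbers are hypothetical, not Facebook's).
def rank_feed(posts, weights=None):
    """Order posts by a weighted sum of predicted engagement probabilities.

    Each post is a dict holding predicted probabilities (0.0-1.0) that a
    user will view, like, comment on, or share it.
    """
    # Heavier weights on deeper engagement (comments, shares) -- an
    # assumption chosen for illustration.
    weights = weights or {"view": 1.0, "like": 2.0, "comment": 4.0, "share": 8.0}

    def score(post):
        return sum(weights[k] * post[k] for k in weights)

    # Highest predicted engagement first.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "calm_news",    "view": 0.5, "like": 0.1, "comment": 0.02, "share": 0.01},
    {"id": "outrage_bait", "view": 0.6, "like": 0.2, "comment": 0.30, "share": 0.20},
]
ranked = rank_feed(posts)
```

Note that the post predicted to provoke more comments and shares wins the top slot even when the two posts are about equally likely to be viewed, which is the dynamic the next paragraph describes.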

But people’s attention is drawn disproportionately to content that angers or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity pages” get most of their followers through news feed recommendations. Another describes a 2019 experiment in which Facebook researchers created a dummy account named Carol that followed Donald Trump and a few conservative publishers. Within days, the platform was encouraging Carol to join QAnon groups.

Facebook is aware of this dynamic. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to violating the platform’s rules. But rather than rethink the wisdom of optimizing for engagement, Facebook’s answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely regarded as world-class; in a February blog post, Chief Technology Officer Mike Schroepfer claimed that in the last three months of 2020, “97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it.”

The internal documents, however, paint a grimmer picture. A presentation from April 2020 notes that Facebook’s removals were reducing the overall prevalence of graphic violence by only about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by The Wall Street Journal, is even more pessimistic. In it, the company’s researchers estimate that “we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”
