

How the 2019 Christchurch Massacre Changed Facebook Forever

Photo: Sanka Vidanagama (Getty Images)

On March 15, 2019, a heavily armed white supremacist named Brenton Tarrant entered two mosques in Christchurch, New Zealand, and opened fire, killing 51 Muslim worshipers and injuring dozens more. The attack, which unfolded over roughly 20 minutes, was livestreamed on Facebook, and when the company tried to take the video down, more than 1 million copies cropped up in its place.

Although the company was able to quickly delete or automatically block thousands of copies of the horrific video, it was clear Facebook had a serious problem: shootings weren't going anywhere, and neither was livestreaming. In fact, up to that point, Facebook Live had earned a bit of a reputation as a place where you could catch streams of violence, up to and including murder.

Christchurch was different.

An internal document dated June 27, 2019, offering a detailed account of Facebook's response to the Christchurch massacre, describes the steps the company's task force took in the wake of the tragedy to limit users' exposure to violent livestreams. It also reveals how much the company changed about its systems, and how far those systems still had to go.

Read more: Here are all the 'Facebook Papers' we've published so far

The 22-page document was made public as part of a growing trove of internal Facebook research, memos, employee comments, and more captured by Frances Haugen, a former employee of the company who filed a whistleblower complaint against Facebook with the Securities and Exchange Commission. Haugen's legal team has released hundreds of documents to select members of the press, including Gizmodo, with countless more expected in the coming weeks.

Facebook relies heavily on artificial intelligence to moderate its sprawling global platform, in addition to the tens of thousands of human moderators who have historically been subjected to traumatizing content. However, as the Wall Street Journal has reported, those AI systems often fall short. Facebook did not respond to our request for comment.

You could say the company's failures began the moment the shooting started. "We did not proactively identify this video as potentially violating," the document's author wrote, adding that the livestream scored relatively low on the classifiers Facebook's algorithms use to identify graphically violent content. "Also, no user reported this video until it had been on the platform for 29 minutes," they added, noting that even after the original was taken down, roughly 1.5 million copies still had to be dealt with in the first 24 hours.

Further, according to the document, Facebook's systems at the time could only detect this kind of violent Terms of Service breach, at best, "5 minutes after the broadcast" began. Five minutes is far too slow when you're dealing with a mass shooter who starts filming the moment the violence starts, the way Tarrant did. For Facebook to shrink that number, it needed to train its algorithms, and like any algorithm, those needed data to train on. There was just one grim problem: there wasn't much footage of livestreamed shootings to pull that data from.

According to the document, the solution was to assemble what sounds like one of the darkest datasets known to humankind: a collection of police and bodycam footage, "recreational shooting and simulation" videos, and "videos from the military" acquired by the company through partnerships with law enforcement. The result was the creation of a dedicated "First Person Shooter (FPS)" detection effort and improvements to a tool called XrayOC, which, according to the internal documents, let the company flag footage from a livestreamed shooting as obviously violent in about 12 seconds. Twelve seconds isn't perfect, of course, but it's a vast improvement over five minutes.
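To make those latency numbers concrete, here is a minimal, hypothetical sketch of threshold-based flagging of a live broadcast; it is not Facebook's actual XrayOC code, and the threshold, the per-second scoring, and the example score sequences are assumptions for illustration. The idea is simply that a classifier scores each second of incoming video and the stream is flagged for review the first time a score clears the threshold, so a better-trained classifier translates directly into a shorter time-to-flag.

```python
# Hypothetical sketch (not Facebook's real pipeline): threshold-based flagging
# of a live broadcast. A graphic-violence classifier scores each second of
# video; the stream is flagged the first time a score clears the threshold.

THRESHOLD = 0.85  # assumed confidence cutoff for "obviously violent"

def seconds_until_flag(scores_per_second, threshold=THRESHOLD):
    """Return how many seconds elapse before the stream is flagged, or None."""
    for second, score in enumerate(scores_per_second, start=1):
        if score >= threshold:
            return second
    return None

# A weak classifier stays below the threshold for minutes; one retrained on
# more relevant footage (like the first-person-shooter dataset described
# above) crosses it within seconds.
weak_model_scores = [0.2] * 290 + [0.9]       # crosses at ~291 s (~5 minutes)
retrained_scores = [0.3] * 11 + [0.92]        # crosses at 12 s
print(seconds_until_flag(weak_model_scores))  # 291
print(seconds_until_flag(retrained_scores))   # 12
```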

The company also made other practical fixes. Instead of requiring users to jump through multiple hoops to report "violence or terrorism" happening in a stream, Facebook decided it might be better to let users report it with a single click. Once those videos are reported, an internal "terror" tag is added so they can be tracked more easily.
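For illustration only, here is a hypothetical sketch of what a one-click report path with automatic internal tagging might look like; the names ReviewItem and report_stream and the exact tag value are assumptions, not Facebook's real system. The point is that a single user action both files the report and attaches the tracking tag.

```python
# Hypothetical sketch (not Facebook's actual reporting code): one click files
# the report and applies the internal "terror" tag for later tracking.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewItem:
    stream_id: str
    reason: str
    tags: List[str] = field(default_factory=list)

REVIEW_QUEUE: List[ReviewItem] = []

def report_stream(stream_id):
    """Single-click report: no extra menus, tag applied automatically."""
    item = ReviewItem(stream_id, reason="violence_or_terrorism", tags=["terror"])
    REVIEW_QUEUE.append(item)
    return item

report_stream("live_12345")
print(sum(1 for item in REVIEW_QUEUE if "terror" in item.tags))  # 1
```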

Continuing down the list of things Facebook probably should have had in place before a massacre was broadcast on its platform, the company also imposed some restrictions on who is allowed to go live. Prior to Tarrant, the only way you could be banned from livestreaming was to violate certain platform rules while livestreaming. According to the document, an account that had been internally flagged as, say, a potential terrorist "would not be restricted" from livestreaming under that rule. After Christchurch, that changed: the company rolled out a "one-strike" policy barring anyone who posts particularly serious content from using Facebook Live for 30 days. Facebook's "serious" umbrella includes terrorist content, which would have applied to Tarrant.
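As a rough, hypothetical model of that "one-strike" rule (not Facebook's actual enforcement code), the check below blocks Live access for 30 days after any violation in an assumed set of "serious" categories, regardless of whether the violation happened during a livestream.

```python
# Hypothetical sketch: a single "serious" strike within the last 30 days is
# enough to suspend Live access. The category names are assumptions.

from datetime import datetime, timedelta

SERIOUS_VIOLATIONS = {"terrorism", "mass_violence"}
LIVE_SUSPENSION = timedelta(days=30)

def can_go_live(violations, now):
    """violations: list of (category, timestamp) strikes on the account."""
    for category, when in violations:
        if category in SERIOUS_VIOLATIONS and now - when < LIVE_SUSPENSION:
            return False
    return True

now = datetime(2019, 6, 27)
strikes = [("terrorism", datetime(2019, 6, 13))]
print(can_go_live(strikes, now))  # False: serious strike two weeks ago
print(can_go_live([], now))       # True: clean account
```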

Of course, content moderation is messy, imperfect work, in part because it's driven by algorithms that, in Facebook's case, are often just as flawed as the company that made them. Those systems didn't flag the shooting of retired police chief David Dorn when it was caught on Facebook Live last year, nor did they catch a man who livestreamed the shooting of his girlfriend just a few months later. And the hours-long, obvious bomb threat that a far-right extremist livestreamed on the platform this past August, while not as horrifying as either of those examples, was literally a bomb threat, and it was able to stream for hours.

Regarding the bomb threat, a Facebook spokesperson told Gizmodo: "At the time, we contacted law enforcement and removed the suspect's video and profile from Facebook and Instagram. Our teams worked to identify, remove, and block any other instances of the suspect's videos which do not condemn, neutrally discuss the incident or provide neutral news coverage of the issue."

Nevertheless, it's clear the Christchurch disaster had a lasting effect on the company. "Since this event, we have faced international media pressure and the legal and regulatory risks on Facebook have increased significantly," the document reads. That's putting it mildly: thanks to a new Australian law passed hurriedly in the wake of the shooting, Facebook executives could face steep fines (not to mention prison time) if they're caught allowing another livestream of violence like the shooting on their platform.

This story is based on Frances Haugen's disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including Gizmodo, the New York Times, Politico, the Atlantic, Wired, the Verge, CNN, and dozens of other outlets.


