Sunday, October 13, 2013

Who screens complaints about content on Facebook? Not who you would expect


I have wondered about this more than once. After you report racism or hate speech, you duly receive a message saying whether it will be removed or not, along with the suggestion to block the person who posted it so you no longer see their posts. But who reviews those millions of reports, and who sends you such a message? I once received a warning because some j**k thought it was fun to report me.

The remark that ‘the best Jew is a dead Jew’ was not considered hateful, I was told a month ago. A month later I got word that the comment had been removed after all. A new moderator? Or so many complaints that they finally caved? And how is it that such a remark was initially deemed not to violate Facebook’s rules?

The article below answers some of those questions. I do want to point out that this story is based on just a single source. If things really work that way, more people ought to have stories about it; after all, it involves hordes of people who have to plow through millions of reports every day. Strange that you don’t hear more about it...


RP


http://allfacebook.com/facebook-content-screeners_b78279


If you’ve ever wondered how Facebook has the resources to screen all of the content that users report as objectionable, here’s a hint: The contract workers who sift through the mountains of material aren’t exactly paid U.S. minimum wage.

Gawker shared the story of 21-year-old Moroccan man Amine Dekourai, who was paid the princely sum of $1 per hour by an outsourcing firm that screens flagged Facebook content.

Dekourai told Gawker he was hired by California-based outsourcing firm oDesk, which counts Facebook and Google among its clients.

After passing a written test and surviving an interview, he was assigned to an oDesk team of roughly 50 from third-world nations including Turkey, the Philippines, Mexico, and India. Team members worked four-hour shifts from their homes, and they were paid $1 per hour, plus “commissions,” which, the job listing said, should quadruple their base pay.

The team used a Web-based tool to view reported photos, videos, and wall posts, Dekourai told Gawker, and they were tasked with confirming the flag (which resulted in the deletion of the content), unconfirming the flag (allowing the content to remain), or escalating it for Facebook staff to examine.
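Purely as an illustration of the three-way workflow just described, and emphatically not the actual oDesk or Facebook tooling, here is a minimal sketch in Python; all names in it, such as ModerationAction, FlaggedItem, and triage, are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ModerationAction(Enum):
    """The three outcomes the article describes for a flagged item."""
    CONFIRM = auto()    # flag upheld: the content is deleted
    UNCONFIRM = auto()  # flag rejected: the content remains
    ESCALATE = auto()   # unclear case: handed off to Facebook staff

@dataclass
class FlaggedItem:
    item_id: str
    kind: str    # e.g. "photo", "video", "wall post"
    reason: str  # the reporter's stated complaint

def triage(item: FlaggedItem, violates_rules: Optional[bool]) -> ModerationAction:
    """Map a moderator's judgment onto one of the three outcomes.

    violates_rules is True or False when the moderator is certain,
    and None when the case should be escalated to staff.
    """
    if violates_rules is None:
        return ModerationAction.ESCALATE
    return ModerationAction.CONFIRM if violates_rules else ModerationAction.UNCONFIRM

# Example: a clear violation is confirmed; an ambiguous case is escalated.
print(triage(FlaggedItem("42", "wall post", "hate speech"), True))  # ModerationAction.CONFIRM
print(triage(FlaggedItem("43", "photo", "nudity"), None))           # ModerationAction.ESCALATE
```

The point of the sketch is simply that every flagged item ends in exactly one of three states, so a single mistaken “confirm” by a $1-per-hour contractor is enough to delete a user’s content.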

Facebook was never mentioned, Dekourai told Gawker, adding, “It’s humiliating. They are just exploiting the third world.”

Gawker was also able to get in touch with other former moderators, whose descriptions of the job, and of why they left it, included:

Think like that there is a sewer channel, and all of the mess/dirt/ waste/shit of the world flow towards you, and you have to clean it.

You had KKK cropping up everywhere.

Pedophelia, necrophelia, beheadings, suicides, etc. I left (because) I value my mental sanity.

They did mention that the job was not for the light of heart before hiring me. I think it’s ultimately my fault for underestimating just how disturbing it’d be.

Dekourai shared documents with Gawker, including a 17-page manual and a one-page cheat sheet. One amusing guideline: “Versus photos,” or photos comparing two people side by side, are banned, which is ironic considering that Facebook Co-Founder and Chief Executive Officer Mark Zuckerberg’s original website, FaceMash, was created to do exactly that.

A Facebook spokesman confirmed to Gawker that the social network is a client of oDesk, saying:

In an effort to quickly and efficiently process the millions of reports we receive every day, we have found it helpful to contract third parties to provide precursory classification of a small proportion of reported content. These contractors are subject to rigorous quality controls, and we have implemented several layers of safeguards to protect the data of those using our service.

Now we have a better understanding of why Facebook encounters accusations of censorship: The people charged with making decisions about flagged content are underpaid subcontractors. If there were some way to do this in-house at a reasonable cost, perhaps fewer disputes would arise over content pulled erroneously.

Readers: What is your reaction to Facebook’s use of third-party, third-world contract workers at low rates of pay to screen its flagged content?

