Inside how Twitter decides which images to block

After news -- and footage -- of journalist James Foley's death began circulating on social media networks, tech firms immediately had to grapple with the question of how to handle the graphic images. Twitter, arguably the fastest-moving and most public social network, opted to do something that it rarely does. The company scrubbed images of Foley's beheading from its network, and even temporarily blocked the account of journalist Zaid Benjamin, who appeared to be the first to report the news.

When asked why it took the images down, Twitter said it does not share information on specific cases. But it did refer reporters to its policies regarding takedown requests from the family members of deceased users. That indicates that the firm took down the photos and videos of Foley's death not because of a request from the U.S. government -- which reached out to social media sites regarding the video on Tuesday -- but because the family had asked it to, shortly after the news broke.

The policy, which was recently updated, lays out the guidelines Twitter follows to remove imagery of deceased individuals at the request of immediate family members -- a policy enacted after some Twitter users bullied the daughter of comedian Robin Williams off the network by sending her altered images supposedly depicting her father's corpse.

How does Twitter scrub its network of the offensive images, which tend to spread online very quickly? The process is actually low-tech. When a request is submitted, employees on the company's safety and legal teams review each flagged image individually, weighing the public interest and newsworthiness of each post. That, for example, could explain why some images of Foley's death were taken down immediately, while others were left up behind a warning.
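
In practice, that workflow amounts to a queue of reports worked through one item at a time by a person. Here is a minimal sketch of what such a pipeline might look like -- the queue structure, decision labels and field names are hypothetical illustrations, not Twitter's actual tooling:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class Decision(Enum):
    REMOVE = "remove"   # take the image down entirely
    WARN = "warn"       # leave it up, but behind a sensitive-media warning
    KEEP = "keep"       # leave it untouched


@dataclass
class FlaggedImage:
    tweet_id: str
    reported_by: str    # e.g. an immediate family member
    reason: str


def process_queue(queue: List[FlaggedImage],
                  human_review: Callable[[FlaggedImage], Decision]) -> Dict[str, Decision]:
    """Walk the takedown queue one item at a time. Every decision comes from
    a person weighing public interest and newsworthiness against the request,
    not from an automated classifier."""
    return {item.tweet_id: human_review(item) for item in queue}
```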

It may seem incomprehensible to many users that, in an age of algorithms and high technology, Twitter would remove offensive images by hand rather than by automation. Twitter and other companies such as Facebook, Microsoft and Google do have automated technology that lets them identify and flag images based on certain criteria.

But they use that software, called PhotoDNA, only to identify images of child sexual exploitation. The technology assigns a unique signature to each known image, and when firms find a match they are obligated to report it to the National Center for Missing & Exploited Children. (Google, for instance, recently used this technology to tip off law enforcement about a Houston man who had child pornography in his Gmail account.)
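
PhotoDNA itself is proprietary: it computes a "robust hash" designed to survive resizing and re-encoding, so it can't be reproduced here. The sketch below only illustrates the general idea of checking an upload against a database of known-image signatures, using an ordinary cryptographic hash as a stand-in:

```python
import hashlib

# Signatures of images that have already been confirmed and catalogued.
# In the real systems these are PhotoDNA hashes, which tolerate resizing
# and re-encoding; a plain SHA-256 digest is used here purely as a stand-in.
KNOWN_SIGNATURES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def signature(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_image(image_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches a catalogued image."""
    return signature(image_bytes) in KNOWN_SIGNATURES
```

The key difference is that a cryptographic hash like SHA-256 breaks if a single pixel changes, which is exactly why the real systems rely on perceptual hashing instead.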

For the human beings reviewing these images, making judgment calls about which pictures to remove or keep on the site quickly becomes tricky. Del Harvey, Twitter's vice president of trust and safety, noted in a TED talk earlier this year that even messages that look clearly like spam or abuse can turn out to be something different on closer inspection. A message asking people to "share and watch" a clip, for example, could come from a spambot. But it could also come from a protester -- maybe one like those in Ferguson, Mo. -- asking people to spread awareness about a crime.

"We don't want to gamble on potentially silencing that crucial speech by classifying it as spam and suspending it," she said.  "That means we evaluate hundreds of parameters when looking at account behaviors, and even then, we can still get it wrong and have to reevaluate."

(The full talk is 10 minutes long, but lays out the basics of Twitter's philosophy pretty well.)

Even with graphic, disturbing and violent images, Harvey said, there are a lot of gray areas. And, as my Switch colleague Brian Fung pointed out, there is no industry standard. Each company makes its own rules based on how it wants to shape its community. Facebook and Instagram, for example, have rules against nudity in photos. Twitter has no such rules -- a decision it has made as a company to live by its assertion that, in nearly all cases, the "tweets must flow."

While companies continue to debate what they can and should do at the administrative level to stop certain kinds of images, there are some things that individual users can do to take action on their own accounts. On Facebook, for example, you can block users, apps or pages if you don't want to see the content they publish.

Twitter offers users the option to change their media settings themselves. Users who want to see images that could be considered "sensitive," without Twitter's warning message in the way, can opt to do so. Users can also decide to mark their own media as sensitive by default, if they think their pictures could upset others.
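
Those switches live in Twitter's settings pages, but a roughly equivalent flag also exists when posting through the REST API: version 1.1 of the statuses endpoint accepts a possibly_sensitive parameter on tweets that carry media. A hedged sketch, assuming you already hold OAuth credentials and a previously uploaded media ID (the placeholder values below are not real):

```python
from requests_oauthlib import OAuth1Session

# Placeholder credentials -- substitute your own app and account tokens.
twitter = OAuth1Session(
    client_key="CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)

resp = twitter.post(
    "https://api.twitter.com/1.1/statuses/update.json",
    data={
        "status": "Graphic footage from the scene",
        "media_ids": "MEDIA_ID_FROM_EARLIER_UPLOAD",
        # Ask Twitter to put the image behind its sensitive-content warning.
        "possibly_sensitive": "true",
    },
)
print(resp.status_code)
```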
