Three recent New York Times articles illustrate some of the issues facing information providers like Facebook when it comes to dealing with potentially harmful content shared through their services.
The first article recounts the strange tale of attacks on Comet Ping Pong, a pizzeria in Washington, D.C., that has been accused online of being the site of a child prostitution ring. These stories prompted Edgar Welch to drive all the way from North Carolina, assault rifle in hand, to check them out for himself. He was arrested by local police after firing a shot inside the establishment; no one was hurt.
The accusations have been debunked on a number of occasions, but the debunkings seem only to have stimulated further attacks: believers counter that the debunkings themselves are part of an elaborate scheme to cover up the story.
It is unclear why the restaurant has been targeted in this way, though connections between its owner and Hillary Clinton's campaign chairman may be the ultimate reason.
The second article notes that Facebook has quietly developed a censorship tool for use in the event that it is permitted to operate in China. The social media giant's policy is to comply with local regulations regarding removal of illegal materials. Chinese authorities have an established record of censoring the news to achieve political goals.
Such measures are, of course, somewhat embarrassing for Facebook, since comparable government-imposed censorship would be unconstitutional in its home country.
The third article reports impatience among European Commission officials over the removal of hate speech: only about 40% of material identified as hate speech was taken down by Facebook and its ilk within 24 hours, they complain.
Facebook has pledged to comply. It obviously remains ambivalent, however, since hate speech is, for the most part, constitutionally protected in the US.
What are Facebook's obligations when it comes to policing potentially harmful, but not obviously illegal, speech on its site?
Along with Google, Facebook has moved to deny advertising revenue to purveyors of fake news. It has also introduced suicide-prevention tools with which users can flag posts suggesting that someone may be about to harm themselves. Does it also have a responsibility to deploy those capabilities to remove (or otherwise respond to) misinformation that could lead to harm, as recent events have shown is possible?