They built their services for sharing, allowing users to reach others around the world. Now they want people to hold back.
Facebook and other social media companies battled their own services on Friday as they tried to delete copies of a video apparently recorded by the gunman as he killed 49 people and wounded scores of others in an attack on two New Zealand mosques.
The video was livestreamed on the suspect's Facebook account and later reposted on other services.
According to news reports, Facebook took down the livestream of the attack 20 minutes after it was posted and removed the suspect's accounts. But people were able to capture the video and repost it on other sites, including YouTube, Twitter and Reddit.
YouTube tweeted that it was "working to remove any violent footage." A post from one user on Reddit asked others not to "post the videos. If you see the videos, bring it to the moderators' attention."
Criticism of pace
Despite the companies' quick actions, they still came under fire for not being fast enough. Critics said the platforms should have better systems in place to locate and remove such content before it spreads, rather than systems that facilitate its spread once something is posted.
One critic, Tom Watson, a member of the British Parliament and deputy leader of the Labour Party, called for YouTube to stop all new videos from being posted on the site if it could not stop the spread of the New Zealand video.
Resistance to censorship
The companies' race to stamp out the New Zealand video highlighted the dilemma that social media companies have faced, particularly as they have allowed livestreaming.
Built on users' content, Facebook, YouTube and others have long resisted the arduous task of censoring objectionable content.
At hearings in Washington and in media interviews, executives of these firms have said that false information does not, in itself, violate their terms of service.
Instead of removing information deemed fake or objectionable, social media companies have tried to frame the information with fact checking or have demoted the information on their sites, making it harder for people to find.
That is what Facebook appears to be doing with the anti-vaccination content on its site. Earlier this month, Facebook said it would curtail anti-vaccination information on its platforms, including blocking advertising that contains false information about vaccines. It did not say it would remove users expressing anti-vaccination content.
But sometimes the firms do remove accounts. Last year, Facebook, Twitter and others removed Alex Jones, an American commentator who used their platforms to spread conspiracy theories and stir hatred.
More monitors
In the past year, some social media companies have hired more people to monitor content so that problems are flagged faster, rather than waiting for other users or the firms' algorithms to flag objectionable content.
With the New Zealand shooting video, Facebook and other firms appeared to be in lockstep, saying they would remove the content as quickly as they found it.
But there have been more calls for human and technical solutions that can quickly stop the spread of content across the internet.