Filed under AI/Artificial Intelligence, Civil Liberties, Ethical issues.


“You can criticize institutions, religions, and you can engage in robust political conversation,” said Bickert of where Facebook draws the line. “But what you can’t do is cross the line into attacking a person or a group of people based on a particular characteristic.” For Facebook, crafting the policy is “tricky,” especially given that 80% of Facebook’s 1.6 billion users are outside the U.S. and likely have different views on what content might be offensive or threatening.

Source: CNN Money

Date: March 14th, 2016



1) Many companies run a corporate blog, or have a Facebook or other such social media feed. What policies, practices, and procedures should a company have in place to manage “content [that] might be offensive or threatening”?

2) The article notes that some suggest Facebook should automate the process of evaluating “content [that] might be offensive or threatening,” but Facebook says that context is important and that it needs humans to be part of the process.
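As a way into question 2, consider why fully automated filtering is hard. The sketch below is not Facebook’s actual system; it is a hypothetical, naive keyword filter (the watch-list and `flag` function are invented for illustration) that shows how a context-free approach flags an attack and a news report identically.

```python
# Hypothetical watch-list of terms to screen for (illustration only).
OFFENSIVE_TERMS = {"attack", "threat"}

def flag(post: str) -> bool:
    """Flag a post if it contains any watch-listed term, ignoring all context."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & OFFENSIVE_TERMS)

# Both posts trip the same filter, but only the first is arguably an attack:
print(flag("I will attack anyone from that group"))     # True
print(flag("Reporting on yesterday's attack downtown"))  # True (false positive)
print(flag("Robust political conversation is fine"))     # False
```

The false positive in the second example is the point: a keyword match cannot distinguish a threat from reporting about one, which is the kind of contextual judgment Facebook argues still requires human reviewers.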
