Thank God for Filtering Technology

Mike King

Children are vulnerable when participating in online properties. Fortunately, companies like Disney take extraordinary steps to protect children online from unwanted user-generated content such as profanity or bullying language. Filtering technology blocks this unwanted content before it can offend or marginalize their visitors.

There are critical challenges to address when managing online communities for children. Community managers must select technology that is both highly sophisticated and flexible, and they must combine that technology with intelligent, clearly defined rules and human moderation to discern the context of the content shared in their communities. The ultimate objective is to protect children while providing an entertaining, wholesome, branded online community where they can interact with other children.

Disney in recent news

There was recent news of a child named Lilly who tried to express gratitude for the things she values most on the Disney Channel website (a branded online community). The things Lilly was most thankful for: “God, my family, my church and my friends.” It turned out Disney was using its filter to block the word God in its community, which resulted in Lilly’s post being rejected. Most people assume a company filters only to keep out profanity and similarly offensive content, but protecting children online is a far more complex endeavor.

Todd Starnes at Fox News implied in this opinion piece that Disney was trying to make Lilly feel bad about professing her faith in God on its website. While Mr. Starnes is entitled to his opinion, he certainly failed to report the full story.

Consider some of the factors a company like Disney must weigh before deciding what user-generated content is appropriate for its audience.

  • Is the content generally offensive?
  • Is it age-appropriate for the audience (Under 13, Under 17, Adult)?
  • Will this content allow other users to bully the one posting it?
  • Will this content spur a debate that is inappropriate or unrelated to this community?
  • Is the sentiment or context difficult to identify as benign or derogatory?
  • Is the user sharing information that could allow them to be personally identified (address, age, phone number, etc.)?
  • Is the user asking questions that may indicate they are preying on children?

These are just a few of the myriad factors that community managers must consider. It seems apparent that Mr. Starnes was not interested in understanding the complexities of community management for children. The fact is, companies like Disney are going to block some content, but, if you’re being honest about the dangers children face online, you want them to.
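
To make these trade-offs concrete, here is a minimal, hypothetical sketch of how a few of these checks might be encoded as an automated pre-screen. This is not Disney’s implementation or any vendor’s product; the word lists, patterns and the screen_post function are invented for illustration, and a real system would rely on far larger rule sets, machine learning and trained human reviewers.

```python
import re

# Illustrative rule categories only; real systems use professionally
# maintained lists plus machine learning and human review.
PROFANITY = {"darn", "heck"}                       # placeholder examples
PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),              # phone-number-like
    re.compile(r"\b\d+\s+\w+\s+(street|st|ave|road|rd)\b", re.I),  # address-like
]
SENSITIVE_TOPICS = {"god", "religion", "politics"}  # context is hard to judge

def screen_post(text: str) -> str:
    """Return 'block', 'queue', or 'allow' for a child-directed community."""
    words = set(re.findall(r"[a-z']+", text.lower()))

    if words & PROFANITY:
        return "block"                  # clearly unacceptable content
    if any(p.search(text) for p in PII_PATTERNS):
        return "block"                  # personally identifying information
    if words & SENSITIVE_TOPICS:
        return "queue"                  # context unclear: send to a human moderator
    return "allow"

print(screen_post("I am thankful for God, my family, my church and my friends."))
# prints 'queue' -- a human decides, rather than the filter guessing at sentiment
```

Notice that the hardest category is not profanity or personal information but content whose context is unclear, which is exactly why a human moderator has to make the final call.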

An acceptable conversation

God is certainly an acceptable topic of conversation in our society; however, it is extremely difficult to identify the sentiment of a phrase that includes the word God. The word is often used in a profane or offensive way, and its use will frequently spur debate. Most children are not properly equipped for debate, especially when the topic is controversial. Disney properties are designed primarily for children, but adults participate as well. The sad truth is that some adult users in public forums are child predators who engage in grooming behaviors (actions deliberately undertaken to befriend a child and establish an emotional connection, lowering the child's inhibitions in order to sexually abuse the child). Such realities require that community managers take calculated steps to protect their users.

Thank God for filtering technology

Thank God for technology like CleanSpeak. This filtering and moderation software prevents unwanted user-generated content from being posted in online properties. CleanSpeak does more than filter out inappropriate content; it also recognizes questionable content and queues it for review by a human moderator. The intelligence built into CleanSpeak enables human moderators to efficiently monitor an online property for problem users, and its flexibility gives community managers significant control to enforce the policies unique to their communities.
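
As a rough illustration of that filter-plus-moderation-queue pattern, and emphatically not CleanSpeak’s actual API (the Post and Community classes below are invented for this sketch), questionable content can be held in a queue until a human makes the final call:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Post:
    user: str
    text: str

@dataclass
class Community:
    published: list = field(default_factory=list)
    blocked: list = field(default_factory=list)
    review_queue: Queue = field(default_factory=Queue)

    def submit(self, post: Post, decision: str) -> None:
        """Route a post according to the filter's decision."""
        if decision == "block":
            self.blocked.append(post)        # never shown to other children
        elif decision == "queue":
            self.review_queue.put(post)      # held for a human moderator
        else:
            self.published.append(post)      # visible immediately

    def moderate(self, approve) -> None:
        """A human moderator works through the queue, applying judgment."""
        while not self.review_queue.empty():
            post = self.review_queue.get()
            (self.published if approve(post) else self.blocked).append(post)

community = Community()
post = Post("Lilly", "God, my family, my church and my friends")
community.submit(post, "queue")               # the filter flagged it as hard to judge
community.moderate(approve=lambda p: True)    # a human reads it and approves it
print([p.text for p in community.published])  # Lilly's post is published after review
```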

Companies like Disney understand that in addition to filtering, human moderation is a crucial element in preventing unwanted content, bullying, grooming and other anti-social activity. Yes, these companies will block some content you think should be allowed, but, if you’re being honest, you want them to.