Profanity Filtering in Forums

Marshall Bauernfeind


Filtering Forums

Filtering forum posts differs from filtering real-time chat and other user-generated content because forums are focused on specific topics. Implementing a profanity filter not only keeps the content free of profanities, hate speech, and the like; it can also help ensure that conversations stay on topic. This post covers how to best use a profanity filter to aid your moderation processes, limit user frustration, and keep forum content productive and appropriate.

Continue reading


Online Safety in Kids' Hands: NCMEC & Sprint Join Forces

Sean Bryant

Online Safety
With new media and an interactive game on Net Smart Teens, the National Center for Missing & Exploited Children and Sprint are challenging tweens (8-12 year olds) to think about the choices they're making online. The new content added to the free Internet-safety site tackles issues like cyberbullying and online enticement. Its goal is to empower kids to be safer and smarter online.

"Recent studies have found that most children are using the Internet every day by age 8. As they get older the amount of time spent online will only increase," said John Ryan, CEO of NCMEC. "We have to help our kids understand, from a young age, that what they are doing online can have a lasting impact on their lives. Threats from potential predators are real, but kids also have to consider how they will react to cyberbullying and what they are leaving online for people like college admission officers and employers to see. With Sprint's help, we’re asking kids to think, not just about their safety, but about the kind of people they want to be online."

Continue reading


Complex Objects: Filtering Forum Posts

Brian Pontarelli

Complex Object

Imagine you’re building a community forum for your online property that allows users to submit posts with a title, a body, images, and video. Child safety is important to your brand, so you want to disallow forum posts that contain objectionable user-generated content in any part of the post.

Chat filters and moderation software, typically deployed to prevent unwanted content in forum posts, require the client’s application to send each individual piece of content for moderators to determine whether it should be posted, rejected, edited, or resubmitted. It’s easy to see how tedious this process can be when every aspect of a forum submission must be reviewed. The CleanSpeak profanity filter and moderation software has a feature that treats these submissions as one complex object, making it faster and easier for moderators to evaluate and take action on user-generated content.

Complex Objects Solution

  1. All parts of the forum post are submitted as a single complex object to be filtered
  2. Clients receive a single response that contains information about each individual piece of user-generated content
  3. There is no need for multiple filter and moderation requests and responses
  4. Moderation time is reduced and response time is improved
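The steps above can be sketched as a single request payload that bundles every part of the forum post. The field names, part types, and overall shape below are illustrative assumptions for this post, not CleanSpeak's documented API schema:

```python
import json

def build_complex_object(application_id, content_id, title, body,
                         image_urls, video_urls):
    """Assemble a forum post as one "complex object" payload.

    Field names and part types here are hypothetical, chosen to
    illustrate the idea of submitting all parts in one request.
    """
    # Text parts: the post title and body.
    parts = [
        {"name": "title", "type": "text", "content": title},
        {"name": "body", "type": "text", "content": body},
    ]
    # Media parts: one entry per attached image or video URL.
    parts += [{"name": f"image-{i}", "type": "image", "content": url}
              for i, url in enumerate(image_urls)]
    parts += [{"name": f"video-{i}", "type": "video", "content": url}
              for i, url in enumerate(video_urls)]
    return {
        "content": {
            "applicationId": application_id,
            "id": content_id,
            "parts": parts,
        }
    }

payload = build_complex_object(
    "forum-app", "post-42",
    title="Weekend ride photos",
    body="Here are some shots from the trail.",
    image_urls=["https://example.com/a.jpg"],
    video_urls=[],
)
print(json.dumps(payload, indent=2))
```

Because the whole post travels as one object, the client gets back one response describing the filter and moderation result for each part, rather than juggling a separate round trip per piece of content.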

3 Ways to Monetize Your Online Community

Mike Moloughney


One key concern for site owners is how to take their online community from a friendly place where people with a similar interest can talk with one another to a business that can generate income.

To date, the primary method of doing that is through advertising. But not all communities are in a position to charge companies to advertise on their site. Perhaps your community is still very small; or the topic is very focused and finding advertisers has proved difficult. And who really likes those obnoxious banner ads anyway?!

Continue reading


Ask the CEO: Pitfalls of AI User Profiling

Brian Pontarelli

Brian Pontarelli on AI (Artificial Intelligence) Systems

You are successfully growing your online community with awesome user experience and engagement. Would you risk all your hard work using filtering and moderation software that you can’t trust?

Some key considerations when selecting filtering and moderation software that uses Artificial Intelligence:

  • Artificial intelligence systems attempt to use user-generated content over time to infer behavior; they require training by moderators to make these inferences
  • Training the system costs a lot of money
  • The system makes mistakes (for example, concluding that '!' always signals aggression)
  • AI systems aren't smart enough and don't learn well; even if you train them well, you often have to untrain bad associations like the '!' problem
  • Mistakes in training or maintaining the AI system can diminish the user experience
  • A rogue AI profanity filter puts the site owner’s brand at risk

Brian Pontarelli, CEO of Inversoft, discusses the inherent risks with AI-based profanity filters and user profiling in this 2-minute video.

Continue reading