"Recent studies have found that most children are using the Internet every day by age 8. As they get older the amount of time spent online will only increase," said John Ryan, CEO of NCMEC. "We have to help our kids understand, from a young age, that what they are doing online can have a lasting impact on their lives. Threats from potential predators are real, but kids also have to consider how they will react to cyberbullying and what they are leaving online for people like college admission officers and employers to see. With Sprint's help, we’re asking kids to think, not just about their safety, but about the kind of people they want to be online."
Imagine you’re building a community forum for your online property that allows users to submit posts with a title, a body, images, and video. Child safety is important to your brand, so you want to disallow forum posts that contain objectionable user-generated content in any part of the post.
Chat filters and moderation software, typically deployed to prevent unwanted content in forum posts, require the client application to send each piece of content separately so moderators can determine whether it should be posted, rejected, edited, or resubmitted. It’s easy to see how tedious this process becomes when every aspect of a forum submission must be reviewed on its own. The CleanSpeak profanity filter and moderation software has a feature that handles these submissions as one complex object, making it faster and easier for moderators to evaluate and take action on user-generated content.
Complex Objects Solution
- All parts of the forum post are submitted as a single complex object to be filtered
- Clients receive a single response containing information about each individual piece of user-generated content
- There is no need for multiple filter and moderation round trips
- Moderation time is reduced and response times improve
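To make the single-request model concrete, here is a minimal sketch in Python. The payload shape, field names (`parts`, `type`, `content`), and the stub `moderate` function are illustrative assumptions, not CleanSpeak's actual API; consult the CleanSpeak documentation for the real request and response formats.

```python
import json

def build_complex_object(title, body, image_urls, video_urls):
    """Bundle every part of a forum post into one payload.

    Field names here are illustrative, not CleanSpeak's real schema.
    """
    parts = [
        {"name": "title", "type": "text", "content": title},
        {"name": "body", "type": "text", "content": body},
    ]
    parts += [{"name": "image", "type": "image", "content": u} for u in image_urls]
    parts += [{"name": "video", "type": "video", "content": u} for u in video_urls]
    return {"post": {"parts": parts}}

def moderate(payload):
    """Stand-in for a single round trip to a moderation service.

    Returns one response with a verdict for every part of the post,
    mirroring the "single response" behavior described above.
    """
    results = []
    for part in payload["post"]["parts"]:
        # Toy rule for illustration: flag text parts containing a blocked word.
        flagged = part["type"] == "text" and "badword" in part["content"].lower()
        results.append({"name": part["name"],
                        "action": "reject" if flagged else "allow"})
    return {"results": results}

payload = build_complex_object(
    "Weekend ride photos",
    "Great trail! Watch out for the badword in this sentence.",
    ["https://example.com/a.jpg"],
    [],
)
response = moderate(payload)
print(json.dumps(response, indent=2))
```

The point of the shape above is that the client makes one call and gets one response covering the title, body, and every attachment, rather than filtering each piece in a separate round trip.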
One key concern for site owners is how to take their online community from a friendly place where people with a similar interest can talk with one another to a business that can generate income.
To date, the primary method of doing that is through advertising. But not all communities are in a position to charge companies to advertise on their site. Perhaps your community is still very small, or the topic is so focused that finding advertisers has proved difficult. And who really likes those obnoxious banner ads anyway?
Brian Pontarelli on AI (Artificial Intelligence) Systems
You are successfully growing your online community with an awesome user experience and engagement. Would you risk all your hard work by using filtering and moderation software that you can’t trust?
Some key considerations when selecting filtering and moderation software that uses Artificial Intelligence:
- Artificial intelligence systems attempt to infer behavior from user-generated content over time, and they require training by moderators to make these inferences
- Training the system is expensive
- The system makes mistakes (for example, it may learn that '!' always signals aggression)
- AI systems aren't smart enough and don't learn well; even after careful training, you often have to untrain them to correct mistakes like the '!' problem
- Mistakes in training or maintaining the AI system can diminish the user experience
- A rogue artificial intelligence profanity filter puts the site owner’s brand at risk
Brian Pontarelli, CEO of Inversoft, discusses the inherent risks of AI-based profanity filters and user profiling in this 2-minute video.