Imagine you’re building a community forum for your online property that allows users to submit posts with a title, a body, images, and video. Child safety is important to your brand, so you want to disallow forum posts that contain objectionable user-generated content in any part of the post.
Chat filters and moderation software, typically deployed to prevent unwanted content in forum posts, require the client’s application to send each individual piece of content so moderators can determine whether it should be posted, rejected, edited, or resubmitted. It’s easy to see how tedious this process becomes when moderators must review every part of a forum submission separately. The CleanSpeak profanity filter and moderation software has a feature that handles these submissions as one complex object, making it faster and easier for moderators to evaluate and take action on user-generated content.
Complex Objects Solution
- All parts of the forum post will be submitted as a single complex object to be filtered
- Clients will receive a single response (the response contains information about each individual piece of user generated content)
- There is no need for multiple filter and moderation responses
- Moderation time is reduced and response times improve
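The complex-object approach means the client builds a single request containing every part of the post, rather than one request per piece of content. The sketch below illustrates the idea in Python; the payload structure and field names (`content`, `parts`, `name`, `type`) are illustrative assumptions for this example, not CleanSpeak’s documented API schema.

```python
import json


def build_forum_post_payload(title, body, image_urls, video_urls):
    """Bundle every part of a forum post into one complex object.

    The field names used here are hypothetical, chosen only to show
    how a title, body, images, and videos could travel together in
    a single filter/moderation request.
    """
    parts = [
        {"name": "title", "type": "text", "content": title},
        {"name": "body", "type": "text", "content": body},
    ]
    parts += [{"name": "image", "type": "image", "content": url} for url in image_urls]
    parts += [{"name": "video", "type": "video", "content": url} for url in video_urls]
    return {"content": {"parts": parts}}


# One payload covers the whole post; one response comes back with
# a verdict on each individual part.
payload = build_forum_post_payload(
    "Best trail bikes?",
    "Looking for recommendations for a beginner.",
    ["https://example.com/bike.jpg"],
    [],
)
print(json.dumps(payload, indent=2))
```

Because the entire post is evaluated in one round trip, the application only has to handle a single response object instead of coordinating separate filter results for each field.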
One key concern for site owners is how to take their online community from a friendly place where people with a similar interest can talk with one another to a business that can generate income.
To date, the primary method of doing that is through advertising. But not all communities are in a position to charge companies to advertise on their site. Perhaps your community is still very small; or the topic is very focused and finding advertisers has proved difficult. And who really likes those obnoxious banner ads anyway?!
Brian Pontarelli on AI (Artificial Intelligence) Systems
You are successfully growing your online community with awesome user experience and engagement. Would you risk all your hard work using filtering and moderation software that you can’t trust?
Some key considerations when selecting filtering and moderation software that uses Artificial Intelligence:
- Artificial intelligence systems attempt to use user-generated content over time to infer behavior, and they require training by moderators to make these inferences
- Training the system costs a lot of money
- The system makes mistakes (for example, interpreting every '!' as a sign of aggression)
- AI systems aren't smart enough and don't learn well; even well-trained systems often need to be untrained to correct mistakes like the '!' problem
- Mistakes in training or maintaining the AI system can diminish the user experience
- A rogue artificial intelligence profanity filter puts the site owner’s brand at risk
Brian Pontarelli, CEO of Inversoft, discusses the inherent risks with AI-based profanity filters and user profiling in this 2-minute video.
We here at Inversoft take children’s online safety seriously. We continuously work to update and develop new solutions to keep up with our ever-expanding, trend-changing world. Our commitment is to provide the safest and most engaging online experience for every age and demographic, while giving the online property the means to protect its brand and its users at the same time.
Many questions arise about how to educate the younger generation, as well as parents, in best practices for engaging online. With so many avenues and resources, it can be difficult to find the best and most receptive approach.
Inversoft is extremely happy to announce a source of information that will engage, teach, and entertain.
Disney has teamed with Common Sense Media to help kids and tweens understand the importance of online safety while providing families with the ‘tips and tools’ to safely navigate the digital world. The hit series “Dog With A Blog” will be airing an episode directed around online safety.
The episode, “My Parents Posted What?!,” shines a light on the importance of understanding social networks and the repercussions that can occur when practical jokes go south.
The demographic of a virtual world that offers real-time chat between users will determine the user-generated content (UGC) that is allowed. For communities targeted at kids, one way to prevent inappropriate UGC from reaching the younger members of the community is to implement a gated chat system. With gated chat there are typically three options: