One key concern for site owners is how to take their online community from a friendly place where people with a similar interest can talk with one another to a business that can generate income.
To date, the primary method of doing that is through advertising. But not all communities are in a position to charge companies to advertise on their site. Perhaps your community is still very small, or the topic is so focused that finding advertisers has proved difficult. And who really likes those obnoxious banner ads anyway?!
Brian Pontarelli on AI (Artificial Intelligence) Systems
You are successfully growing your online community with an awesome user experience and strong engagement. Would you risk all that hard work by using filtering and moderation software that you can't trust?
Some key considerations when selecting filtering and moderation software that uses artificial intelligence:
- Artificial intelligence systems attempt to use user-generated content over time to infer behavior. They require training by moderators to make these inferences
- Training the system costs a lot of money
- The system makes mistakes (e.g., it may learn that '!' always signals aggression)
- AI systems aren't smart enough and don't learn well; even after careful training, you often have to retrain them to undo bad inferences like the '!' problem
- Mistakes in training or maintaining the AI system can diminish the user experience
- A rogue artificial intelligence profanity filter puts the site owner’s brand at risk
Brian Pontarelli, CEO of Inversoft, discusses the inherent risks with AI-based profanity filters and user profiling in this 2-minute video.
Brian Pontarelli on Artificial Intelligence Systems
You can’t trust an artificial intelligence system to consistently protect your online community from inappropriate content. All artificial intelligence systems suffer from two critical flaws:
- Artificial intelligence systems constantly require costly training and retraining
- Re-training of the artificial intelligence system leads to inconsistent performance
Every such system on the market today suffers from these flaws. In this 2-minute video, Inversoft CEO Brian Pontarelli explains why CleanSpeak is a different and more effective technology.
The sixth in a series of posts about the finer points of profanity filtering...
Embedded words occur when a dictionary word or proper name contains profanity. For example:
- "Don't assume profanity filters are inaccurate" — the word "assume" contains an embedded profanity
- Harry Lipshitz has a hard time creating accounts on web sites because of his surname
- This issue has been documented as the Scunthorpe problem, named after the English town whose name contains an embedded profanity
CleanSpeak's sophisticated profanity filter looks for dictionary words that contain profanity and safely ignores them during the filtering process. Poorly written filters will often get caught up on these simple cases and flag a large number of dictionary words as profanity. CleanSpeak pulls from a large set of dictionary words and proper names in real time, over 140,000 in all, to correctly handle this situation and avoid a potentially large number of false positives without hindering performance.
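To make the embedded-word idea concrete, here is a minimal sketch (not CleanSpeak's actual implementation) of the approach described above: before flagging a match, check whether the match sits inside a longer word that appears in an allow-list of dictionary words and proper names. The word lists here are tiny stand-ins for a real filter's data.

```python
import re

# Tiny stand-in for a real dictionary/proper-name list (~140,000 entries in CleanSpeak)
ALLOWED_WORDS = {"assume", "lipshitz", "scunthorpe"}

# Stand-in profanity list; "ass" is embedded in all three allowed words' examples
BLOCKED_SUBSTRINGS = {"ass"}

def find_profanity(text):
    """Return words that contain a blocked substring, skipping known safe words."""
    hits = []
    for word in re.findall(r"[a-z']+", text.lower()):
        for bad in BLOCKED_SUBSTRINGS:
            # Only flag the word if it is NOT a recognized dictionary word
            if bad in word and word not in ALLOWED_WORDS:
                hits.append(word)
    return hits

print(find_profanity("Don't assume profanity filters are inaccurate"))  # → []
print(find_profanity("what an ass"))                                    # → ['ass']
```

A naive substring filter without the allow-list check would flag "assume" here; the dictionary lookup is what prevents the Scunthorpe-style false positive.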
The fifth in a series of posts about the finer points of profanity filtering...
One of the more sophisticated attacks that users employ against profanity filters involves inserting separators, such as spaces or periods, between the other characters of a word so that the word can still easily be read.
The following examples illustrate how the simple process of inserting additional non-alphabetic characters between the characters of the word does not interfere with the reader's ability to identify the word correctly:
1. s m u r f
2. s....m u r....f
3. I'm going to smash it (false positive!)

It might be difficult to see the profanity in #3, but if you look at the last four letters of "smash it" on their own, you'll see it.