Ask the CEO: Pitfalls of AI User Profiling
- By Brian Pontarelli
- Online Community
- September 10, 2013
Brian Pontarelli on AI (Artificial Intelligence) Systems
You are successfully growing your online community with an awesome user experience and strong engagement. Would you risk all that hard work by using filtering and moderation software you can't trust?
Some key considerations when selecting filtering and moderation software that uses Artificial Intelligence:
- Artificial intelligence systems attempt to infer user behavior from User Generated Content over time. They require training by moderators to make these inferences
- Training the system costs a lot of money
- The system makes mistakes (e.g., it may learn that '!' always signals aggression)
- AI systems aren't smart and don't generalize well. Even after careful training, you often have to untrain them to correct mistakes like the '!' problem
- Mistakes in training or maintaining the AI system can diminish the user experience
- A rogue artificial intelligence profanity filter puts the site owner’s brand at risk
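To make the '!' problem concrete, here is a minimal, hypothetical sketch (not Inversoft's product or any real filter) of a naively trained rule that has latched onto exclamation marks as an aggression signal:

```python
# Hypothetical sketch: a naively "trained" rule that treats '!' as a marker
# of aggression, illustrating the false-positive problem described above.
def looks_aggressive(message: str) -> bool:
    # An over-trained system may latch onto '!' regardless of context.
    return "!" in message

# A genuinely hostile message is flagged...
print(looks_aggressive("Get out of my forum!"))  # True
# ...but so is harmless enthusiasm, which hurts the user experience.
print(looks_aggressive("Congrats on the win!"))  # True
print(looks_aggressive("Thanks for the help."))  # False
```

Untraining the system means undoing exactly this kind of spurious association without breaking the cases it got right, which is where the ongoing cost comes from.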
Brian Pontarelli, CEO of Inversoft, discusses the inherent risks of AI-based profanity filters and user profiling in this 2-minute video.
Further Reading:
Ask the CEO: Pitfalls of AI Based Profanity Filters
How to Build an Online Community for Your Business
Profanity Filter Best Practices: Customize in Real-Time