The YouTube comments section has a well-earned reputation as a dark place, where anonymity encourages mindless, offensive remarks; its infamous comments are some of the filthiest found online. YouTube has introduced new comment moderation tools to combat this problem and tame the trolls.
YouTube rolls out new tools for better comment moderation
YouTube video creators can now pin comments, choose moderators and define blacklisted words or phrases. Additionally, a new beta feature can automatically identify potentially offensive or abusive comments and hold them for review before they become visible to the public.
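Conceptually, this kind of blacklist-plus-review pipeline can be sketched in a few lines. The word list, function names and in-memory queues below are illustrative assumptions, not YouTube's actual implementation:

```python
# Illustrative sketch of a blacklist + hold-for-review pipeline.
# The blacklist and queues are placeholders, not YouTube's real system.

BLACKLIST = {"spamword", "slur"}  # creator-defined words/phrases (hypothetical)

held_for_review = []   # comments awaiting moderator approval
published = []         # comments visible to the public

def submit_comment(text):
    """Hold a comment if it contains a blacklisted word; otherwise publish it."""
    words = set(text.lower().split())
    if words & BLACKLIST:
        held_for_review.append(text)
    else:
        published.append(text)

submit_comment("great video!")                 # published immediately
submit_comment("this is spamword nonsense")    # held for the creator to review
```

A real system would also normalize punctuation and catch multi-word phrases, but the shape is the same: flagged comments wait in a queue until a human approves them.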
The App Store is a developer's best friend, right up until your app is rejected. (Are you suffering from App Store Rejection? You aren't alone; watch this humorous video.)
App Store Guidelines
"We will reject Apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it”. And we think that you will also know it when you cross it."
(App Store Review Guidelines)
Image Moderation Just Got Faster
As applications, websites and online communities continue to expand, user-generated content becomes increasingly difficult to manage. Yet a moderation solution is critical for sites that rely on users to succeed. Companies often focus on filtering chat, URLs and personally identifiable information, but it is important to remember that images can be just as harmful to a brand and its user community.
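The URL and PII filtering mentioned above is often done with simple pattern matching as a first line of defense. A minimal sketch, assuming placeholder regexes (real PII detection is far more involved than two patterns):

```python
import re

# Illustrative patterns only; production PII detection needs much more.
URL_RE = re.compile(r"https?://\S+")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text):
    """Replace URLs and email addresses with neutral placeholders."""
    text = URL_RE.sub("[link removed]", text)
    return EMAIL_RE.sub("[email removed]", text)

print(redact("Contact me at jane@example.com or visit https://spam.example"))
```

Text-based filters like this are exactly why images slip through: a regex can catch a pasted URL, but it says nothing about what is inside an uploaded picture.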
Uncensored images are making their way to children via various platforms due to deficient moderation or lack of moderation altogether. Seven out of ten youths have accidentally come across pornography online.
A healthy, engaged online community is critical to a company's success. This is a hot topic among CMGRs and top industry influencers, and while most people agree on the importance of a branded online community, not all agree on the path to achieving this safe environment.
If you have an active online community, you already know that not every user is a good user. Trolls, bullies and URL spam inherently present problems, and there will be consequences if you simply ignore the issue.
New Zealand recently enacted a bill that makes cyber-bullying illegal and punishable, both for the bully and for the company that hosts the application used for the bullying.
Though a few legislators voted against the bill, the vote was an overwhelming 116 to 5.
Opponents believe that this will impact free speech and that determining if specific user-generated content is in fact cyber-bullying could be difficult or impossible.
From my perspective, this bill doesn't impact free speech. It is already illegal to harass or threaten someone in person, so why should it be any different online?
Furthermore, identifying user-generated content that constitutes cyber-bullying shouldn't be overly difficult. If someone feels cyber-bullied and reports the issue, that should be enough to trigger an investigation. Companies can also use automated solutions like CleanSpeak to receive alerts when conversations appear to contain cyber-bullying. Moderators can then make the final judgment, remove the content from their applications and, if necessary, kick the bully out as well.
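The alert-then-review flow described above keeps a human in the loop: software flags a conversation, but a moderator makes the final call. A minimal sketch, with hypothetical phrases and function names (this is not CleanSpeak's actual API):

```python
# Hypothetical alert-then-review flow; a real product's API will differ.

ALERT_PHRASES = {"kill yourself", "nobody likes you"}  # example phrases only

alerts = []  # (user, message) pairs awaiting a moderator's final judgment

def check_message(user, text):
    """Hold the message and raise an alert instead of auto-removing it."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in ALERT_PHRASES):
        alerts.append((user, text))
        return "held"
    return "published"

def moderator_decision(index, remove):
    """A human makes the final call on an alerted message."""
    user, _text = alerts.pop(index)
    return f"removed message from {user}" if remove else f"published message from {user}"
```

The important design choice is that the filter never deletes anything on its own; it only queues alerts, so the judgment call stays with a person.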