A breakdown of user-generated content (UGC) and moderation considerations
You may have heard of user-generated content (UGC) before, but what does it actually mean? UGC refers to any media produced by users rather than by the company that owns or operates the platform where it is shared, such as YouTube or Facebook. Photos, videos, comments, and product reviews are all examples of UGC.
For many companies, moderating and filtering such content on a site or app is a challenge, especially when the moderation system is easy to circumvent. This post will answer six questions you might have about UGC moderation and filtering to help you address these challenges. These tips will help ensure that your site has high-quality UGC while protecting your brand from liability and spam.
As companies allow more UGC and their online communities grow, users begin to shape the tone and image of the community. This can be both good and bad, depending on the users. Relinquishing control to the users empowers them and drives further engagement. Companies do face a risk, however: users will not always have the company’s best interests in mind, and a few might even abuse the privilege of an open forum by posting inappropriate content. The risk of abuse creates a strong argument for closer monitoring of user-generated content.
This is the point when most companies decide to invest in filtering technology or other solutions to protect their online presence. However, filtering UGC is not as simple as it seems. Users are sophisticated and highly imaginative when it comes to working around automated filters.
Today, most content moderation is reactive in nature. The idea is to go through a site’s or brand’s existing content after it has already been published and make edits or deletions as necessary. With UGC, you must be proactive rather than reactive to ensure your users receive the best experience upfront.
Wait, isn’t looking for profanity or other abusive words easy? Unfortunately, there are several challenges to moderating user-generated content, but the biggest is filter circumvention. This can include:
- deliberate misspellings of banned words
- character substitution and leetspeak (swapping letters for numbers or symbols)
- inserting spaces or punctuation between letters
- Unicode lookalike characters that slip past a plain-text match
- hiding offensive text in images, usernames, or links
If you’re responsible for moderating UGC, you know it can be a challenge to keep up with all of these tactics. Many companies are embracing UGC in their marketing and communications strategies, but they often find themselves struggling to keep on top of all that activity and to manage their moderation efforts effectively.
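To see why a simple keyword blocklist struggles with these tricks, here is a minimal Python sketch. The blocklist, substitution table, and example strings are invented purely for illustration; real systems rely on maintained lexicons and far more sophisticated normalization.

```python
import re
import unicodedata

# Hypothetical blocklist and substitutions, hard-coded only for illustration.
BLOCKLIST = {"spam", "scam"}
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def naive_filter(text: str) -> bool:
    """Flags text only when a blocklisted word appears verbatim."""
    words = re.findall(r"\w+", text.lower())
    return any(word in BLOCKLIST for word in words)

def normalized_filter(text: str) -> bool:
    """Normalizes lookalikes and spacing before matching."""
    # Strip accents and other combining marks, then map digits/symbols back to letters.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(SUBSTITUTIONS)
    # Collapse separators inserted between letters ("s p a m", "s.c.a.m").
    collapsed = re.sub(r"[\W_\s]+", "", text)
    # Substring matching catches more tricks but risks false positives
    # ("scam" inside "scampi"), which is exactly where human review comes in.
    return any(word in collapsed for word in BLOCKLIST)

print(naive_filter("no sp4m here, just s.c.a.m free deals"))       # False -- circumvention slips through
print(normalized_filter("no sp4m here, just s.c.a.m free deals"))  # True  -- normalization catches it
```

Even this small normalization step shows the trade-off: the stricter the matching, the more legitimate content gets caught, which is one reason circumvention and false positives have to be managed together.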
A UGC platform can be anywhere a user can generate content in a public space. Examples include chat rooms, feeds, review sites, and forums, among others. UGC platforms give brands access to large, relevant audiences that can together foster a sense of community. These communities are valuable: an active community generating content can help you build a strong brand.
However, many companies are hesitant to use a UGC platform because of quality control and brand protection concerns.
The reality is that there are opportunities and risks either way, whether you choose to use a UGC platform or not.
The internet is full of content, most of it user-generated. For big platforms like YouTube and Facebook, it’s incredibly important to have a system in place that filters out spam or anything else they wouldn’t want on the site. So how do people react to being moderated? And what happens if they don’t like what they see? We searched the web for feedback from moderators as well as users who have faced bans from platforms for inappropriate content.
Moderated users typically seem to understand the reason for their penalization, even if it annoys them. The actions that get a user sanctioned on a platform usually include profanity, trolling, spamming, or other behavior outside the community guidelines.
When users do not like the content they are seeing, they are more likely to drop off. For this reason it’s important to maintain transparent guidelines around the content that should be shared. Because brands do not want to seem pushy, only 16% of UGC platforms actively highlight how users should share UGC; however, 53% of users say they would prefer clear and specific directions on what to do. Strong guidelines help create a better experience for users and moderators alike.
Content moderators often find themselves caught between a rock and a hard place. Like a judge, the moderator is the gatekeeper responsible for keeping an online community safe. Many content moderators agree that technology alone cannot handle the whole problem of unwanted content, and that people must be involved to make the final decision when something falls into a gray area. This is especially necessary in communities where posts persist for a long time, e.g. forums and comment sections. Content moderators often urge their users to read the terms and conditions, as well as the community guidelines, so they are aware of what is and is not allowed, which helps create a better user experience.
There are two broad ways to moderate UGC: automatically or manually. Automated moderation refers to systems that employ algorithms to flag inappropriate content; manual moderation means humans looking through posts before they appear online.
Automated moderation can help larger teams scale to meet audience demand. But Trust & Safety teams need to be sure that automation is well calibrated before implementing it. Not all models or rules are one-size-fits-all, so teams must test and vet their automated rules before relying on them entirely, and even then, regular review and auditing are recommended to maintain quality.
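As a rough illustration of what that vetting can look like, the sketch below runs a deliberately blunt rule over a small hand-labeled sample and reports precision and recall. The rule and the three-item sample are stand-ins invented for the example; in practice you would evaluate against a much larger, representative set of human judgments.

```python
def precision_recall(rule, labeled_sample):
    """labeled_sample: list of (text, is_violation) pairs judged by humans."""
    tp = fp = fn = 0
    for text, is_violation in labeled_sample:
        flagged = rule(text)
        if flagged and is_violation:
            tp += 1          # correctly caught
        elif flagged and not is_violation:
            fp += 1          # false positive: legitimate content flagged
        elif not flagged and is_violation:
            fn += 1          # false negative: violation missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: an overly blunt rule that flags any all-caps post as a violation.
shouty_rule = lambda text: text.isupper()

sample = [
    ("BUY CHEAP WATCHES NOW", True),   # spam, correctly caught
    ("GOAL!!! WHAT A MATCH", False),   # enthusiastic fan, false positive
    ("you are an idiot", True),        # abuse the rule misses, false negative
]

print(precision_recall(shouty_rule, sample))  # (0.5, 0.5) -- not ready to run unattended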
The best solution for UGC moderation is a mix of automated and manual moderation. Automated moderation can catch the high-volume, obviously offensive items. Manual moderation can handle the items that require judgment, reducing false positives and false negatives. Combining the two is optimal for moderation efficiency, and it is often cheaper than hiring moderators to catch content that is obviously offensive.
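A hybrid setup is often implemented as a routing step: the automated score decides the clear-cut cases, and everything in between goes to a human review queue. The scoring function and thresholds below are purely illustrative assumptions; real thresholds would come out of the calibration and auditing described above.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "approve", or "human_review"
    reason: str

# Illustrative thresholds, not recommendations.
REMOVE_THRESHOLD = 0.95
APPROVE_THRESHOLD = 0.20

def route(post_text: str, score_fn) -> Decision:
    """score_fn returns the automated model's probability that a post violates policy."""
    score = score_fn(post_text)
    if score >= REMOVE_THRESHOLD:
        # Obvious violations: handled automatically at scale.
        return Decision("remove", f"auto: score {score:.2f}")
    if score <= APPROVE_THRESHOLD:
        # Obviously fine: published with no human effort spent.
        return Decision("approve", f"auto: score {score:.2f}")
    # Gray area: queue for a human moderator's judgment.
    return Decision("human_review", f"queued: score {score:.2f}")

# Example with a stand-in scoring function.
fake_score = lambda text: 0.97 if "buy followers" in text.lower() else 0.5
print(route("Buy followers cheap!!!", fake_score))          # removed automatically
print(route("Has anyone tried the new update?", fake_score)) # sent to human review
```

The two thresholds are the knobs that trade moderator workload against risk: widening the gray zone sends more items to humans, narrowing it leans harder on the automation.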
Every platform is different, so there are no hard-and-fast rules that guarantee you’ll be able to remove all profanity. Most platforms allow you to moderate your content and ban users who violate guidelines, and some have built-in moderation systems that can help detect and automatically flag or remove content containing profanity.
Things to consider when deciding how to eliminate profanity and negativity from your platform include:
- whether to rely on automated filtering, manual review, or a combination of the two
- how clearly your community guidelines define what is and is not allowed
- the risk of false positives and of gray-area content that requires human judgment
- how determined users may try to circumvent your filters
- the volume of content your moderation team can realistically review
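One way to keep those decisions consistent is to write them down as an explicit policy that both your automated tooling and your human moderators follow. The category names, severities, and actions below are hypothetical examples, not a standard taxonomy.

```python
# An illustrative moderation policy for a platform that sorts flagged content
# into a few categories. All names and actions here are assumptions.
POLICY = {
    "spam":       {"severity": "low",    "action": "remove"},
    "profanity":  {"severity": "medium", "action": "human_review"},  # context matters
    "harassment": {"severity": "high",   "action": "remove_and_warn"},
    "borderline": {"severity": "medium", "action": "human_review"},
}

def apply_policy(category: str) -> str:
    """Look up the configured action for a flagged category; default to human review."""
    return POLICY.get(category, {}).get("action", "human_review")

print(apply_policy("profanity"))  # human_review
print(apply_policy("unknown"))    # human_review (safe default for anything uncategorized)
```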
There is considerable value in keeping a community safe and productive through UGC moderation. Content moderation can be a tricky task, but with the right tools it can be simplified. UGC moderation can help strengthen your brand, keep your users engaged longer, and grow your community. Many factors come into play when you are deciding the best way to eliminate certain content from your platform. As technology continues to progress, it will be interesting to see how automated moderation evolves and how online communities shape themselves.