Should You Filter and Moderate Usernames?

Policy tips to consider when implementing username moderation

In online communities, public-facing marketing campaigns, or customer service interactions, users often want to create Internet personas that let them express themselves and establish an online social identity. Allowing users to choose unique public display names for those personas encourages repeat interaction and engagement with the community. While it is vital to encourage this, it is equally important to ensure public usernames remain appropriate for your environment.

The first step is to identify your target audience. You will most likely want to block profanity in usernames, and for under-13 communities you may also need to block personally identifiable information (PII) for COPPA compliance. Many brands outside the kid/family space also don't want untoward language in usernames associated with their campaigns. To enforce this, you can implement an automated profanity filter, employ human moderation, or combine both.

This post covers the limitations, overhead, and risks of each approach.

How to Filter and Moderate Usernames

Profanity Filter Challenge

Implementing an automated profanity filter at username creation has several benefits: a filter can block obvious profanities and prevent members from using their real names. (Blocking PII is covered in a separate post.) However, usernames are analogous to personalized license plates: sometimes the meaning of a letter/number combination jumps out at you immediately, and other times it's not so obvious. Consider the following examples (say the words out loud if the meanings are not obvious):

  - ifyouseekamy
  - tackhilla
  - tuggisnewts
  - tirekrowch

If your target audience is adults, you may decide the above usernames are perfectly acceptable: as long as obvious "f-bomb" type words are filtered, you may be willing to let these names through. If you want to be more vigilant about preventing inappropriate usernames, however, a profanity filter alone is not 100% effective.
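To see why, here is a minimal sketch of a substring word-list filter. The banned terms are placeholders, not a real vocabulary; production filters use large, maintained term lists and smarter matching.

```python
# A minimal sketch of a word-list filter. "badword" and
# "anotherbadword" are illustrative stand-ins for a real vocabulary.
BANNED_TERMS = {"badword", "anotherbadword"}

def passes_filter(username: str) -> bool:
    """Return True when no banned term appears as a substring."""
    name = username.lower()
    return not any(term in name for term in BANNED_TERMS)

# All four license-plate-style names pass, because the profanity only
# emerges when each name is read aloud, not in its written characters.
for name in ("ifyouseekamy", "tackhilla", "tuggisnewts", "tirekrowch"):
    print(name, passes_filter(name))  # each prints True
```

No matter how large the term list grows, phonetic tricks like these slip past a purely textual match.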

Safe Words

Safe-word username generators are particularly popular for children's applications. One approach is to maintain a single massive list of pre-approved usernames and let users click "randomize" to claim a unique one. Another technique is to offer two or three sets of safe words that members combine to build their own username, as in the sketch below.

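Here is a minimal sketch of the two-list technique. The word lists are illustrative stand-ins; a real deployment would curate much larger, age-appropriate vocabularies.

```python
import random

# Two small sets of safe words; a child picks (or randomizes) one from
# each, and a number keeps the combination unique.
ADJECTIVES = ["Brave", "Sunny", "Quiet", "Swift", "Lucky"]
ANIMALS = ["Otter", "Falcon", "Panda", "Tiger", "Dolphin"]

def random_username(taken: set) -> str:
    """Combine one word from each list plus a number until the result is unclaimed."""
    while True:
        name = f"{random.choice(ADJECTIVES)}{random.choice(ANIMALS)}{random.randint(1, 99)}"
        if name not in taken:
            taken.add(name)
            return name

print(random_username(set()))  # e.g. "SwiftPanda42"
```

Because every component is pre-approved, nothing inappropriate can be expressed, at the cost of personal expression.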

Pre-moderate All

If you'd prefer to allow users to create free-form usernames while still being as cautious as possible, have moderators pre-approve each username before it becomes public. While a name is pending review, the user is issued a temporary username until approval (sketched at the end of this section). The obvious downside to this solution is the moderator hours required to review every username submission, particularly for large communities. This method is, however, exceptionally effective at preventing PII from being shared within a username.

Be careful! Even well-trained moderators will miss some of the more "clever" inappropriate names. Did YOU catch the hidden meaning of each example above?
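Mechanically, pre-moderation can be as simple as a review queue that issues a neutral placeholder immediately and publishes the requested name only after a moderator signs off. A minimal sketch, with hypothetical class and method names throughout:

```python
import uuid

class PreModerationQueue:
    """Hold requested usernames until a moderator approves them."""

    def __init__(self) -> None:
        self.pending: dict = {}  # user_id -> requested name

    def submit(self, user_id: str, requested_name: str) -> str:
        """Queue the requested name; return a neutral placeholder to display."""
        self.pending[user_id] = requested_name
        return f"Guest-{uuid.uuid4().hex[:8]}"

    def approve(self, user_id: str) -> str:
        """A moderator approved the name; it is now safe to show publicly."""
        return self.pending.pop(user_id)

queue = PreModerationQueue()
print(queue.submit("u1", "SkaterDude99"))  # user sees e.g. "Guest-3f9a1c2e"
print(queue.approve("u1"))                 # "SkaterDude99" goes public
```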

Combination of Filtering & Moderation

Cleanspeak recommends an open submission approach that combines a profanity filter, moderation of small samples, and community moderation. We suggest the following workflow (a code sketch follows the list):

  1. User submits their desired username. If it is unique and the profanity filter finds no match, allow it to be used immediately.
  2. If the username contains a profanity filter match, reject it and ask the user to try again (this can happen multiple times). When the user eventually submits a name with no filter match, allow it but flag it for immediate moderator review: if a user has been caught once, the odds are high they will keep attempting inappropriate names until one gets through.
  3. Educate and encourage community members to report inappropriate names when they discover them. Review any names flagged by the community and instruct users to revise their username as needed.
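The sketch below wires the three steps together. `contains_profanity` stands in for your filter of choice, and the in-memory collections stand in for your user store; all names here are hypothetical, not a specific product's API.

```python
taken_names: set = set()
flagged_users: set = set()   # users who have tripped the filter before
review_queue: list = []      # (user_id, username) pairs awaiting moderators

def contains_profanity(name: str) -> bool:
    # Placeholder substring check; swap in a real filter service.
    return "badword" in name.lower()

def submit_username(user_id: str, name: str) -> str:
    """Steps 1-2: accept immediately, reject on a match, or accept and flag."""
    if name.lower() in taken_names:
        return "rejected: already in use"
    if contains_profanity(name):
        flagged_users.add(user_id)  # remember the attempt (step 2)
        return "rejected: please try another name"
    taken_names.add(name.lower())
    if user_id in flagged_users:
        # Clean on its face, but this user was caught before -- review it.
        review_queue.append((user_id, name))
    return "accepted"

def report_username(user_id: str, name: str) -> None:
    """Step 3: community reports feed the same moderator queue."""
    review_queue.append((user_id, name))
```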

A combination of filtering and moderation is not as bulletproof as a safe-word system. However, it lets users create their own identities and enhances their experience, lowers moderation overhead, and carries significantly less risk than relying entirely on a profanity filter. Users can create the online personas they desire while you preserve the environment you seek to provide for your members.