In online communities, public-facing marketing campaigns, and customer service interactions, users often want to create Internet personas that let them express themselves and establish an online social identity. By allowing users to choose unique public display names to represent those personas, you encourage repeat interaction and engagement with the community. While encouraging this is valuable, it is equally important to ensure public usernames remain appropriate for your environment.
The first step is to identify your target audience. You will most likely want to block profanity in usernames, and for under-13 communities you may also need to block personally identifiable information (PII) for COPPA compliance. Even brands that are not kid- or family-focused generally do not want untoward language in usernames associated with their campaigns. To enforce these rules, you can implement an automated profanity filter, employ human moderation, or use both.
This post covers the limitations, overhead, and risk profile of each approach.
Implementing an automated profanity filter to monitor username creation has several benefits. A filter can block obvious profanities and prevent members from using their real names (refer to the following link for more on blocking PII). However, usernames are analogous to personalized license plates: sometimes the meaning of the letter/number combination jumps out at you immediately, and other times it is not so obvious. Consider the following examples (say the words out loud if the meanings are not obvious):
ifyouseekamy
tackhilla
tuggisnewts
tirekrowch
If your target audience is adults, you may decide that the above usernames are perfectly acceptable. So long as the obvious "f-bomb" type words are filtered, you may be willing to let these names be used. If you would like to be more vigilant about preventing inappropriate usernames, however, keep in mind that a profanity filter alone is not 100% effective.
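To see why, here is a minimal sketch of a blocklist-style filter. The word list and function name are illustrative assumptions, not Cleanspeak's implementation: it catches a blocked word embedded in a username, but phonetic constructions like the examples above contain no blocked substring at all.

```python
# Illustrative blocklist filter (hypothetical word list, not Cleanspeak's
# actual implementation).
BLOCKED_WORDS = {"badword", "anotherbadword"}

def is_username_allowed(username: str) -> bool:
    """Reject the username if any blocked word appears as a substring."""
    normalized = username.lower()
    return not any(word in normalized for word in BLOCKED_WORDS)

print(is_username_allowed("xXbadwordXx"))    # False: a direct match is caught
print(is_username_allowed("ifyouseekamy"))   # True: phonetic spellings slip through
```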
Username generators built from safe word lists (see the sketch below) are particularly popular for children's applications. You can create one large list of pre-approved usernames and let users click "randomize" to claim a unique one. Another technique is to offer two or three sets of safe words that a member combines to build their own username.
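A minimal sketch of the second technique might look like the following; the safe word sets here are tiny, hypothetical examples, and a real deployment would use much larger, pre-vetted lists.

```python
import random

# Hypothetical safe word sets; a real deployment would use much larger,
# pre-vetted lists.
ADJECTIVES = ["Brave", "Silly", "Mighty", "Clever"]
ANIMALS = ["Otter", "Falcon", "Panda", "Tiger"]

def generate_username() -> str:
    """Combine two safe words and a number into a benign, unique-ish username."""
    return f"{random.choice(ADJECTIVES)}{random.choice(ANIMALS)}{random.randint(10, 99)}"

print(generate_username())  # e.g. "CleverPanda47"
```

Because every building block is pre-approved, the output can never be profane, though the trade-off is that users lose some freedom of expression.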
If you'd prefer to allow users to create free-form usernames while still being as cautious as possible, moderators should pre-approve each username before it becomes public. While the name is pending review, a temporary username is issued. The obvious downside to this solution is the overhead of moderator hours required to review every username submission, particularly for large communities. This method is, however, exceptionally effective at preventing PII from being shared within a username.
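One way to model the pending-review state is sketched below; the record layout and placeholder format are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
import uuid

# Hypothetical pre-approval record: the requested name stays hidden behind a
# temporary placeholder until a moderator approves it.
@dataclass
class UsernameRequest:
    requested_name: str
    status: str = "pending"  # pending / approved / rejected
    temp_name: str = field(default_factory=lambda: f"User{uuid.uuid4().hex[:6]}")

    def public_name(self) -> str:
        """Only expose the requested name once it has been approved."""
        return self.requested_name if self.status == "approved" else self.temp_name

req = UsernameRequest("sunnyhiker88")
print(req.public_name())   # e.g. "User3fa9c1" while review is pending
req.status = "approved"
print(req.public_name())   # "sunnyhiker88"
```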
Be careful! Even well-trained moderators will miss some of the more "clever" inappropriate names. Did YOU catch the hidden meaning of each example above?
Cleanspeak recommends an open submission approach that takes advantage of a profanity filter, moderation of small samples, and community moderation. We suggest a workflow that combines all three.
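As a rough illustration, not Cleanspeak's actual API, such a workflow might filter every submission, route a small random sample to moderators, and let the community flag anything that slips through. The blocklist, sample rate, and function names below are assumptions for the sketch.

```python
import random

PROFANITY_BLOCKLIST = {"badword", "anotherbadword"}  # illustrative only
REVIEW_SAMPLE_RATE = 0.05                            # e.g. spot-check 5% of new names

def submit_username(name: str, moderation_queue: list, live_names: list) -> bool:
    """Open submission: filter first, sample a fraction for human review,
    and let everything else go live immediately."""
    normalized = name.lower()
    if any(word in normalized for word in PROFANITY_BLOCKLIST):
        return False                   # hard-block obvious profanity
    if random.random() < REVIEW_SAMPLE_RATE:
        moderation_queue.append(name)  # a small random sample gets human eyes
    live_names.append(name)            # name goes live right away
    return True

def report_username(name: str, moderation_queue: list) -> None:
    """Community moderation: any member can flag a live name for review."""
    moderation_queue.append(name)

queue, live = [], []
submit_username("tirekrowch", queue, live)   # passes the filter, goes live
report_username("tirekrowch", queue)         # a community member flags it later
```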
A combination of filtering and moderation is not as bulletproof as a safe word system. However, it lets users create their own identities and enjoy a better experience, keeps moderation overhead low, and carries significantly less risk than relying on a profanity filter alone. Users get to build the online personas they desire while you preserve the environment you seek to provide for your members.