“When we were growing up, our parents were able to monitor our social activity by observing our interactions: reading a letter, listening to a phone call, or watching a social interaction at a house or gathering. Fast forward to present day. Now we are parents, and monitoring our children's social interactions is a whole lot more complicated than listening to the conversation they are having on the phone in the next room.”
Social networks like Facebook, Instagram, Twitter and Tumblr, to name a few, can introduce an overwhelming learning curve for anyone new to the scene. So why should you, the parent, take the time to understand how these platforms work? Why should you understand how your child uses these social networks?
What You Don’t Know CAN Hurt You
When your child is online, they are talking, sharing and engaging with friends, family and strangers. Not knowing how your child uses social networks, what information they are sharing online, and who they are talking to can be a dangerous mix. The danger lies in the transfer of information, how easily that information can be obtained and how easily it can be hidden.
Have you ever seen a parent who walks around with blinders on, listening to the angry stories friends and other parents tell about their children's deception, but never thinking critically about their own child?
YouTube is a BIG DEAL with today’s youth. Allowing users to share YouTube videos within your application increases user engagement and retention. However, consider what might happen if inappropriate content such as sex, violence, or personally identifiable information (PII) is shared. In this post I will walk you through a process that provides a safe environment parents will trust, and saves precious moderator hours by building lists of already-approved and already-rejected videos.
Clip of Inversoft's Sean Bryant's puppy "Bodhi"
Approval Process: YouTube ID White & Black List
Allowing kids to post and share videos without being reviewed first by a moderator is risky, to say the least. Even if you allow users to report inappropriate content, it would not take more than one angry parent seeing pornography on your site to ruin your reputation for good. Therefore you must watch every video before it is made public in your community.
Kids love to share videos, particularly the same video over and over. Your moderation efforts will be taxed if a single video is reviewed 700 times! However, you can reduce the burden on moderators by employing a simple whitelist/blacklist technique on YouTube videos. Each YouTube video has a unique ID that can be saved after a moderator watches it. The next time the same video is posted, you can check the whitelist/blacklist to determine if it has already been moderated.
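The whitelist/blacklist technique above can be sketched in a few lines. This is a minimal illustration, not CleanSpeak's implementation: the class and method names are assumptions, and a production system would back the two sets with a database instead of in-memory sets.

```python
# Minimal sketch of a moderation cache keyed on YouTube video IDs.
# A video is reviewed by a human once; every later share of the same
# ID is resolved instantly from the whitelist or blacklist.

class VideoModerationList:
    APPROVED = "approved"
    REJECTED = "rejected"
    PENDING = "pending"

    def __init__(self):
        self.approved = set()  # whitelist: IDs a moderator has cleared
        self.rejected = set()  # blacklist: IDs a moderator has blocked

    def status(self, video_id):
        """Decide what to do with a newly posted video ID."""
        if video_id in self.approved:
            return self.APPROVED   # publish immediately
        if video_id in self.rejected:
            return self.REJECTED   # block immediately
        return self.PENDING        # never seen: queue for human review

    def record(self, video_id, is_approved):
        """Save a moderator's decision so it never has to be repeated."""
        (self.approved if is_approved else self.rejected).add(video_id)
```

The first time a video is shared it lands in the pending queue; once a moderator records a decision, the other 699 shares of that same video are resolved without any human effort.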
Key Line of Defense
The profanity filter is a key line of defense in protecting online communities from bullies, trolls and pedophiles. While these bad actors are present in forums and chat rooms, most people just want to participate freely with others who share a similar interest. The community members have differing sensibilities; some people are less sensitive than others to foul language and racy content. Are there times when it makes sense to turn the filter off? Who should have this control, and just how much freedom should they have?
Menu, Whitelist, Open Messaging
Community managers, users and parents all control the filter to some extent in many online environments. There are three levels of chat engagement: Menu, Whitelist, and Open Messaging. Each level gives community managers a different degree of filter control. Menu messaging allows children to engage in online conversation using only a predetermined dictionary of ‘safe’ words and phrases. Whitelist messaging is similar in that there is a list of acceptable words that can be used in the community, but the list is much more comprehensive, and a filter is employed to ensure communications use the whitelisted words in an acceptable manner. An open messaging environment allows free communication in any language, but a blacklist is employed with a filter such as CleanSpeak to protect against inappropriate content.
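The three engagement levels can be contrasted with a toy sketch. The word lists below are tiny stand-ins for real dictionaries, and the function names are my own; a real whitelist filter also checks how approved words are combined, which this sketch does not attempt.

```python
import re

MENU_PHRASES = {"hello", "good game", "want to trade?"}  # fixed safe phrases
WHITELIST = {"hello", "good", "game", "want", "to", "trade", "my"}
BLACKLIST = {"badword"}  # stand-in for a real profanity blacklist

def words_of(message):
    return re.findall(r"[a-z']+", message.lower())

def allow_menu(message):
    # Menu messaging: only exact phrases from the predetermined dictionary.
    return message.lower() in MENU_PHRASES

def allow_whitelist(message):
    # Whitelist messaging: every word must appear on the approved list.
    ws = words_of(message)
    return bool(ws) and all(w in WHITELIST for w in ws)

def allow_open(message):
    # Open messaging: anything goes unless a blacklisted word appears.
    return not any(w in BLACKLIST for w in words_of(message))
```

Moving from menu to open messaging trades safety for expressiveness, which is why the filter control usually sits with the community manager or parent rather than the child.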
In light of the recent COPPA (Children's Online Privacy Protection Act) violations and some hefty fines being doled out by the FTC (see our resources at the end of this post for links to the violations), we put together a list of 7 ways to be more COPPA compliant.
1. Collect as Little Information as Possible
The simplest way to be more COPPA compliant is not to collect personally identifiable information (PII) from your users. If you are collecting this type of information, ask yourself why. If the answer to that question isn't vital to your business, stop collecting the information. It’s easy to fall into the trap of collecting information for no other reason than having it.
One place you might have overlooked where you could be collecting PII is blog comments. Some blog software requires users to give their name and email addresses to post a comment. If you want to allow users to comment on blogs, make sure they can do so without sharing their information.
Another place to look is online features. If you require that users register in order to provide online features like saved games, settings and preferences, ask yourself if a simple username and password is sufficient. If you don't need additional information from the user, don't collect it.
2. Ask for the Age First
If users must register for your website, game, or community, you must determine their age first. Asking "Are you under 13?" with a yes-or-no answer isn't sufficient, because the question itself signals which answer unlocks the site. Instead, ask for the user's age or date of birth in a neutral manner so they are more inclined to answer truthfully.
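A neutral age gate built on a date-of-birth field might look like the sketch below. The function names are illustrative assumptions; the only fixed fact is COPPA's under-13 threshold.

```python
from datetime import date

def age_on(birth_date, today):
    """Compute age in whole years as of `today`."""
    years = today.year - birth_date.year
    # Subtract a year if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def requires_parental_consent(birth_date, today=None):
    """COPPA applies to children under 13."""
    today = today or date.today()
    return age_on(birth_date, today) < 13
```

Note that the form should simply collect the date of birth without explaining why, so the child has no hint that entering an earlier year would change the outcome.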
Preventing users from sharing account information is both a security concern and a way to keep paid accounts from being shared. When hosting a virtual environment targeted at kids, you are also required to take reasonable measures to prevent users from sharing PII (Personally Identifiable Information) in accordance with COPPA (Children’s Online Privacy Protection Act). Personal information includes, but is not limited to, phone numbers, email addresses, and home addresses, none of which may be shared in chat rooms, forum posts, and the like. Implementing all of the following prevention techniques will dramatically reduce the risk of users sharing account credentials and PII.
Educate Your Users