Facebook & Social Emotional Learning

By Anne Collier
May 23, 2013

One of the fascinating impacts of our now very social media environment is that technology companies are having to learn a whole lot about the best and worst of humanity – and, for their own and their users’ sake, about how to foster the best of it. Facebook, for example, has an engineering team working with empathy researchers, and that work has a direct impact on young users’ social experiences wherever Facebook is embedded in them.

Psychologist Marc Brackett at Yale University, who developed a social-emotional learning (SEL) program for schools, has been working with Facebook on its social-reporting tools for 13- and 14-year-olds. [Social reporting is basically abuse reporting with additional options to "report" offending content or behavior to people who can help with the problem in "real life" – typically the context for whatever goes on in Facebook.] What Facebook has set up for young teens, with the help of Marc and other researchers, is not only a social reporting process (which it calls a “flow”) but one that actually teaches them social literacy as they go through it.

[Screenshot: part of Facebook’s reporting flow that, in a simple way, gets users to think about and accurately express what bothers them about a photo.]

Learning Social Skills by Using Social Media

Facebook’s social reporting flow “teaches some SEL,” Marc emailed me, “because it encourages them to think about the feelings they have about their experiences” and how best to express those feelings. The reporting flow also “warns them against doing things that might make things worse and provides text that might help reduce the intensity of the situation,” Marc added.

That’s important because it not only increases social-emotional literacy but also helps to resolve situations. It can do that because, first of all, so much of what happens in social media has to do with what’s going on in offline life and social circles – so people in offline life, once they understand what’s going on, can typically resolve problems in social sites better than the sites themselves ever could. Second, Facebook has found that not much of the abuse reporting social sites receive is about anything really nasty. For example, the vast majority of photos reported for take-down are reported because the users simply didn’t like the way they looked in the photo. In fact, in most cases of problem photos, no harm was meant, and the friends who posted them usually want to know if there’s a problem. So, once Facebook’s social reporting let people ask their friends directly to take down a photo, it found that “60% of the people who were asked to remove a photo felt positive about the person who sent the request,” Facebook reports in its blog post about its 3rd-annual Compassion Research Day this week.

Facebook’s Learning Process

In social reporting’s early days (about two years ago), when a user chose the option to ask the photo’s poster to take it down, Facebook just provided a box for the reporter to type a message to the poster. It soon found that only 20% of those users would fill in the box and send the request. But after Facebook “filled in the box,” so to speak – offering users ready-made options for explaining why they wanted the photo taken down – the percentage who sent the request rose to 60%. “If you give them the right language, an emotionally rich language [but also simple and conversational], that number goes up,” Facebook explains (see what they mean in the screenshot above).

Facebook has also discovered that, “if a 13-to-14-year-old teen needs support from someone they trust, they most likely turn to older teenagers.” That finding is borne out in academic research too. But social reporting allows teens to report to peers or adults (whether or not they’re Facebook members), as well as to Facebook itself.

No Shortcuts

Finally, these developments teach all of us one more thing: not to develop a false sense of security about how much a customer service team at a distant corporate office can understand, much less resolve, problems that arise in people’s offline lives and relationships. Certainly it’s baseline corporate responsibility for companies to do whatever’s possible, and certainly users can be blocked and offensive content deleted, but in each case it helps users to be realistic about how much either technology or third parties can truly resolve a conflict (when we were kids, Mom or Dad didn’t call the phone company when arguments broke out on the phone). Every case is unique, and sometimes it does help to get someone’s bad behavior out of our face, but we can’t get people banned from everybody else’s profiles.

People need to remember, too, that there are always other media properties and devices where determined “haters” can pop up and carry on. Nothing is more effective than addressing issues with the people involved; unfortunately, there are no easy workarounds for that. That’s why we keep saying that safety in social media is, well, social – a shared responsibility. It’s also mostly local where kids are concerned, because the real context for what happens on Facebook, on phones, in Snapchat, Instagram, online games, etc., is usually school life, the epicenter of their social lives.

Originally posted via Net Family News, January 13th, 2013

Content provided by Anne Collier, editor of NetFamilyNews.org and founder and executive director of its parent organization, Net Family News, Inc. More on Anne Collier can be found here.

 
