
Big Social Media has had a particularly rough two months of press. Almost daily, articles highlighting grave social media-related dangers to our families have graced the front pages of newspapers. We hope that parents and educators do not become overwhelmed by the sheer breadth of these reports, but rather that all this evidence serves as an urgent wake-up call that social media is a risky activity for children.
 
Starting off the back-to-school season, the Wall Street Journal recently released a report finding that videos about drugs, pornography, and other adult content regularly appear in the feeds of TikTok accounts registered to young children. Unfortunately, we were not surprised. The Organization for Social Media Safety, the first consumer protection organization focused exclusively on social media, conducts ongoing safety testing of all major social media platforms. We have identified significant volumes of problematic content on children's social media accounts. In addition to sexually explicit and drug-related material, this includes thousands of posts classifiable as cyberbullying, sexual harassment, fraud, and violence, among others. On TikTok, we have found this type of content even with Restricted Mode engaged, a setting the platform markets as a way for parents to limit content that may not be appropriate for all audiences.
 
This content is not merely inappropriate; it is downright dangerous, and the empirical evidence makes the risks clear. Children who have viewed drug-related content on social media are more likely to abuse substances themselves. Children exposed to sexually explicit material on social media tend to engage in riskier sexual behaviors. And cyberbullied children are about twice as likely to attempt suicide. Hundreds of thousands of children across the world are suffering real injury from social media.

In another series of reports, a former Facebook employee, Frances Haugen, provided chilling evidence of Facebook's ongoing failure to protect its child users. Facebook reportedly undertook its own research and found that Instagram harms the mental well-being of its teen users. Instead of publicizing the findings and moving to protect these children from harm, Facebook hid the research. Facebook has also allegedly been aware that drug cartels and traffickers have been using its platforms to carry out criminal operations, even to perpetrate murders. Again, Facebook looked the other way.
 
Despite the credibility and disturbing nature of this new information, various forces seem to be lulling parents and decision-makers into passivity rather than spurring them to implement reasonable interventions that would increase social media safety. Perhaps the sheer ubiquity of social media, with its nearly 4 billion global users, has encouraged this blind eye. Or perhaps the many, ultimately incorrect, cries against new technologies, stretching back to Socrates' alarms about the written word, have made us all wary of recurring Luddism.
 
While a collective shrug seems to be the trend, the body count grows as status quo safety efforts fail. As the Wall Street Journal investigation showed, TikTok's moderation algorithm is insufficient to protect children. TikTok has argued in response that "no algorithm will ever be completely accurate at policing content." What that concession really means is that TikTok can never make its platform safe for children: exposure to dangerous content will remain an ongoing risk. Most social media platforms face the same problem.

As Ms. Haugen’s brave efforts have shown, the social media industry cannot be trusted to prioritize safety.  That is because social media companies face an inherent conflict of interest between safety and profits.  More content moderation or other safety efforts can mean less consumer engagement or increased costs.  We have seen this story before with tobacco companies.

That is why we need to give attention to third-party safety software providers, whose sole profit motive lies in protecting children. Parents should have the choice to use third-party safety software on all platforms and with all device operating systems. These products provide parents with essential, even lifesaving, alerts when children encounter unsafe content through social media. And any social media platform can, with a negligible operational burden, provide secure access to reputable third-party safety software providers.

If some social media platforms continue to deny parents this choice, policymakers should require it of every platform that serves children. Such legislation would not only immediately increase safety for millions of young social media users but also ensure a robust third-party safety software industry. A thriving marketplace will mean better content filtering, smarter alerts for dangerous content, and more effective time-management capabilities, all of which translates to better protection for our children on social media.
  
New technologies that improve our lives can also cause harm. The societal advantages of the automobile are undeniable, yet during the 1920s roughly 18 people were dying in accidents for every 100 million miles driven, a significant public health threat. Through improved technology, common-sense regulation, and consumer education, we have reduced that rate by more than 90% over the past century. Like cars, social media has brought us benefits and is here to stay. But continuing to ignore the risks to millions of social media-using families should no longer be an option.

Marc Berkman serves as the CEO of the Organization for Social Media Safety.
