Meta’s latest scandal

What effect has Facebook had on mental health?

Meta, the parent company of both Facebook and Instagram, is currently being sued over its apps’ negative effects on children’s mental health.
On Oct. 24, 33 states filed a lawsuit in federal court alleging that Meta knew its apps Facebook and Instagram had addictive qualities and worsened users’ self-esteem. The suit comes after a whistleblower released internal Meta documents, studies and communications demonstrating that the company was aware of its platforms’ negative effects on mental health, yet pushed features that exacerbated those effects.
The mental health effects on children shown in Meta’s internal studies include body-image issues, worsened eating disorders, depression and suicidal thoughts, heightened insomnia and dependency on these sites. Concern about social media dependency is no surprise, as a recent report from the Pew Research Center found that about one-third of teens say they use social media “almost constantly,” with about one-tenth of teen Instagram users saying they use the app “almost constantly.”
Since Meta’s own research shows that its apps use manipulative features that cause these negative mental health effects, I have to wonder about another way those features manifest: by pushing hateful content that radicalizes youth.
Known as “the alt-right pipeline,” this phenomenon starts with people, particularly teen boys, being shown relatively normal content that gradually introduces “edgy” humor often including racist, sexist or homophobic jokes, and spirals into outwardly hateful and inflammatory content. Facebook and Instagram are known to have this quality, as Meta’s algorithm prioritizes engagement over everything else and recommends and shows posts largely based on their popularity.
Divisive content that stirs people’s emotions drives higher engagement, so inflammatory posts can be recommended with increasing frequency, pushing users into echo chambers where they become radicalized. Just as some experts have argued that the “like” button preys on children’s need for approval and lowers their self-esteem, that same need to fit in encourages kids to agree with whatever posts they are recommended, hastening the radicalization of youth.
Additionally, children’s emotional regulation and critical thinking skills are not yet fully developed, leaving them particularly vulnerable to falling into these echo chambers.
Radicalization isn’t just a hypothetical on Facebook. In 2021, Meta came under fire for removing safety systems that had been put in place on Facebook before the 2020 election to reduce misinformation. Many, including the whistleblower who released Meta’s internal documents, studies and communications, believe that removing these systems after the election allowed misinformation to spread and made organizing the Jan. 6 riot possible.
With Facebook and Instagram having so many seemingly deleterious effects, experts have begun seeking ways to mitigate the harm. The American Psychological Association has released lists of recommendations for addressing the harms of social media, both for parents and for legislators. These recommendations cover a range of useful approaches, like encouraging parents to discuss what their kids are seeing on social media, pushing governments to fund social media research and pressuring social media sites to remove some features that encourage dependent behaviors. However, I believe there are two places where these recommendations fall short.
Firstly, the reporting and removal systems are greatly in need of rethinking. When something on Facebook or Instagram is reported, it doesn’t just automatically disappear; it must be reviewed by moderators who determine whether or not it violates Meta’s community standards. Reviewing the volume of disturbing, violent, illegal and hateful content that crosses their queues each day is difficult and at times damaging work. Moderators have reported experiencing depression and post-traumatic stress after starting the job.
While the companies that employ moderators, which are sometimes departments within a social media business and sometimes third-party contractors, say they allow employees to take mental health breaks, fear of termination keeps many from taking those breaks as often as they need. Furthermore, outdated systems, pressure to work through queues both quickly and accurately, low pay and uncomfortable office spaces add to the stressors content moderators experience. Yet the amount of harmful content still visible to users on Facebook and Instagram every day suggests that more moderators, or improved methods of moderation, are needed.
If we want to prevent some of the worst mental health effects of Facebook and Instagram, the content that creates those effects, like suicide and self-harm content, pro-anorexia content and hate speech, needs to be removed. Given how stressful content moderation currently is, I don’t believe the system is sustainable; yet these jobs are needed to create and maintain safe social media platforms by eliminating triggering content and misinformation.
Unfortunately, automated moderation does not appear close to being able to do this work efficiently and accurately either. Programming and training automated moderators will require even more people to view harmful content in the short term, and it’s unclear when, if ever, automated systems will be able to account for the context that is so essential in determining whether something is hateful or misinformation. Automated moderators also remain inaccurate when flagging violent or self-harm-related content; they are nowhere near where they need to be for widespread implementation.
In the meantime, it is necessary that social media companies improve the working conditions, pay, visibility and protections of moderators so that they can safely hire and retain more of these necessary employees. If these sites are going to be safe for teenagers, effective content moderation is the bare minimum.
The APA has released a list of recommended actions parents can take to protect their children from the negative mental health effects of social media.
These actions include encouraging kids to use social media for social support, companionship and emotional intimacy; teaching kids to avoid, report and block content related to self-harm, eating disorders and hate; teaching kids not to compare their appearance to the people they see online; routinely screening for social media dependency; limiting screen time and nighttime use of social media; training kids in social media literacy; and monitoring or discussing what kids are seeing online.
While these are excellent ideas, I don’t think the burden of education on this topic should fall solely on parents. I believe schools should work conversations and lessons about social media into their curricula. Almost all of these steps can be integrated into classrooms. Schools are already places where important emotional and social skills are learned, and many teachers take time to explicitly help kids develop these skills.
Social media is here to stay, and kids need as many opportunities as possible to learn safe social media usage. By pushing safe social media practices in the classroom as well as at home, kids who may miss out on lessons in either one of these places can learn in the other.
These lessons could include social media literacy that helps teens spot misinformation and manipulative content; education about boundaries that can help prevent dependency, like turning off notifications and setting time limits; guidance on spotting content that pulls them into depressive or other negative spaces; and drilling a regimen of avoid, report and block for content that encourages self-harm, anorexia or suicide. Other educational settings like daycares, youth groups, clubs and libraries should also talk about and encourage these practices when they are able.
The more opportunities kids have to learn about and discuss social media, the more chances we have to help prevent the negative mental health effects associated with its use. While it’s impossible to know how effective these lessons would be, opening a conversation as a society and experimenting with solutions is where we need to start.
