Introduction
Social media platforms act as breeding grounds for viral cartoon memes and trendy dance choreography, as well as for widespread social movements and vibrant displays of activism. But social media also has a dark side: disinformation, hate speech, revenge pornography, harassment, terrorist activity, and sex trafficking run rampant on these platforms. Critics argue that platforms do not take sufficient action to eliminate such damaging content, and that none of the major platforms places safety at the center of its operations, allowing harmful speech to propagate. As they currently stand, social networks are self-regulating. Under this regime of private content moderation, most platforms employ a combination of algorithmic filtering and human review to determine which content to remove from their sites. The human reviewers, in turn, frequently suffer serious mental health consequences from the disturbing content they are forced to confront on a daily basis.
The wide discretion that platforms hold over content moderation can be dangerous. Social media companies often weigh profit generation more heavily than protecting users from destructive speech, and because they operate across many jurisdictions, they frequently elude judicial and governmental oversight.
The committee should strive to implement the greater regulatory oversight needed to prompt change within the industry, while accounting for the multinational nature of social media platforms.