According to the U.S. Government Accountability Office, almost a third of internet users have reported experiencing hate speech, yet policymakers and online platforms have taken little action.
Social media platforms are the largest conduits for harmful speech, yet because they are private companies, they are not bound by the First Amendment. They can moderate content posted on their platforms, including deleting posts or even entire accounts, without violating any of the five freedoms.
According to the Freedom Forum, Section 230 also shields social media sites from liability for the content their users create. Together, the First Amendment and Section 230 give these platforms largely unchecked power over content while protecting them from legal consequences.
The current legal framework surrounding social media creates a breeding ground for online hate. Hate speech skyrockets around polarizing events, such as the ongoing Israel-Hamas war. Following Oct. 7, the Institute for Strategic Dialogue noted a significant surge, including a 4983% rise in antisemitic comments on YouTube and a 422% increase in language-based anti-Muslim hate on X.
At the beginning of the COVID-19 pandemic, an Anti-Defamation League study found that 44% of internet users reported online harassment, with minority groups particularly affected. These threats took a real toll on victims, who reported changing their online behavior and suffering economic and psychological harm.
These instances of online discrimination normalize hateful behavior toward minority groups with little repercussion. There is no concrete barrier between the phone screen and the real world, and online hate speech encourages violence. According to a Council on Foreign Relations study, online hate speech has fueled a global rise in violence against minorities, including mass shootings, lynchings, and ethnic cleansing.
While social media companies vary in their hate-speech and harassment policies, even the strictest enforcement has little impact without legal consequences. A banned account or deleted comment might temporarily remove a hateful remark, but users can easily create another account and continue the same behavior.
The U.S. needs stricter policies that clearly define online hate speech and lay out strategies to prevent it. In Germany, the government can compel companies to remove unlawful posts. Australia has adopted specific laws against online harassment, including the Online Safety Act of 2021, which introduced a take-down scheme that holds service providers accountable for removing harmful content and emphasizes adaptability to emerging online threats. The U.S. needs to follow suit.