Silenced on social media: the gatekeeping functions of shadowbans in the American Twitterverse

Algorithms play a critical role in steering online attention on social media, and many have alleged that they can perpetuate bias. This study audited shadowbanning, the practice by which a user or their content is temporarily hidden on Twitter without notice. We repeatedly tested whether a stratified random sample of American Twitter accounts (n ≈ 25,000) had been subject to various forms of shadowbans, then identified the user and tweet characteristics that predict a shadowban. In general, shadowbans are rare. We found that accounts with bot-like behavior were more likely to face shadowbans, while verified accounts were less likely to be shadowbanned. Replies from accounts that posted offensive tweets or tweets about politics (from both the left and the right) were more likely to be downtiered. The findings have implications for algorithmic accountability and for the design of future audit studies of social media platforms.
