Spotify announced it won’t tolerate hateful content or hateful conduct on its platform, and the effort seems reasonable, even admirable. The music-industry powerhouse is partnering with several prominent advocacy groups, including the Southern Poverty Law Center, the Anti-Defamation League, Color of Change and GLAAD, to help identify hateful content so it can be removed. But the attempt looks increasingly fraught as the details unfold.
The question is: Do Spotify users really want the site policing what they hear? It depends on whom you ask. At the end of April, women of color in the Time’s Up movement launched the #MuteRKelly campaign, calling on companies such as RCA Records, Ticketmaster, Spotify and Apple Music to cut ties with R. Kelly. Time’s Up tweeted in support of Spotify’s decision, but the company’s social media pages have been flooded with comments from users who are unhappy with the move, in part because Kelly has never been convicted of a crime. If Spotify is willing to punish Kelly based on allegations alone, it has to figure out where it will draw the line.
Rachel Stilwell, an entertainment lawyer who predominantly works with clients in the music industry, says, “Implementing these policies is tremendously difficult. Reasonable minds can differ on what is hateful conduct, and not everybody is going to agree on what is hateful content and what isn’t.” She further explains that Spotify has no legal obligation to shield users from hateful content or from music by artists who are believed to have engaged in hateful conduct, but the company is perfectly free to refrain from playing—or featuring—any artist it chooses. She says, “It’s a business decision and perhaps a moral decision. It does appear that they may be reacting to pressure associated with Time’s Up and their viewpoints with respect to R. Kelly’s behavior, which has been alleged to be really horrible for a long time.”
Stilwell hopes the company will be careful in how it implements the policy. “Could this be a slippery slope, where Spotify says, ‘We’re going to make this R. Kelly music harder to find, and we’re not so sure about Chris Brown, but we’re going to go ahead and get rid of that, too. And then somebody else is alleged to have done something untoward and we don’t like that either, so we’re going to take that down.’ It’s probably not in Spotify’s business interests to piss off a bunch of artists and also piss off its listeners by engaging in a bunch of censorship that the majority of their business partners and customers would not like.”
She calls attention to the part of the policy where Spotify says it will remove content that it believes “promotes” hatred towards persons or groups because those persons or groups have specific identifiable characteristics. She says, “That’s arguably a very broad group of content subject to possible removal from Spotify. The list of characteristics they cite are examples of characteristics they are concerned about being targeted. But theoretically, N.W.A.’s ‘Fuck tha Police’ could have fallen into this category as a track that promotes hatred of the police. Police collectively do share a characteristic, even though that characteristic is not enumerated in the policy. Of course, ‘Fuck tha Police’ is an important work of art expressing an important and valid viewpoint of frustration by people of color toward law enforcement. Should Spotify remove that track? No. So where do they draw the line? That line drawing can be really difficult to do.”
Spotify is far from the only social platform policing its users’ access to content. Facebook has been doing it for years, and in April the company shared the detailed community standards guidelines its reviewers use to determine which posts, photos and videos are allowed on the site—and which ones aren’t. Now that Facebook’s community standards are public, users can see exactly what the company considers offensive. It’s understandable that Facebook doesn’t want users posting violent threats or infringing copyright, but some of its standards reflect moral judgment calls.
The section on Adult Nudity and Sexual Activity starts with the note, “We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content.” Of course, it’s true that “some people” don’t want to see a bare female nipple as they’re scrolling through their feed. But some people don’t want to see photos of people posing with guns—and those are allowed. Some people don’t want to see baby pictures. Should those be forbidden? People are sensitive to all sorts of things, for all sorts of reasons, so why don’t they get to determine what gets blocked from their feeds? Facebook uses facial recognition algorithms to ask users if they want to be tagged in photos, so it should be able to use similar algorithms to block anything a user might consider upsetting. That would give users the ability to create the feed they want to see, without limiting other people’s preferences.
Instead of filtering for its users and taking away their control, perhaps Spotify could give users the option to block artists they find offensive, for any reason, and prevent those artists’ songs from playing when the user listens to public playlists. It’s a feature Spotify users have been requesting in the site’s community forums for years, but instead the company decided to make those determinations on a global level.
Social platforms are worthless without their users, so will they ever respect their customers enough to let them make their own decisions?