Political campaigns have increasingly turned to social media as a channel to reach voters. Social media not only has the power to reach audiences numbering in the billions, but it also has the power to change the behavior of its users. This far-reaching influence is nothing new: advertisers pay enormous sums to use these channels to market their products to targeted audiences. But could this power be used to sway voters, and is there anything that prevents social media companies from getting into the game of politics?
In 2010, an experiment was conducted on Facebook to test whether the platform could be used to increase voter turnout in the 2010 U.S. congressional elections. Some users were shown an informational message encouraging them to vote, with a link to polling places and an “I Voted” button; others saw a social message that also displayed six of their friends who had clicked the “I Voted” button. The study concluded that more than 340,000 additional people voted as a result of the experiment, a seemingly modest number, but a potentially decisive one in a close election. In the context of the current election, what if Facebook used its algorithms to regularly place pro-Hillary content in voters’ newsfeeds or regularly showed pro-Trump articles in the trending news section? In a recent internal poll, Facebook employees asked Mark Zuckerberg whether the company had a responsibility to help prevent a Trump presidency in 2017. As far-fetched as the idea sounds, social media companies like Facebook are already using their platforms to voice political opinions, and nothing currently would legally prevent a company from acting against a candidate its employees or leadership oppose, or in favor of one they support.
The main reason Facebook, or any company for that matter, could do so is simple: under the First Amendment, corporations, including media and tech companies, have the right to express their political opinions. Any social media company that curates content for users (i.e., most of them) could therefore make content favoring one candidate appear more frequently in users’ feeds and content favoring the opposition appear less frequently. It could also, drawing on user data, tweak its algorithms so that a candidate’s content appears in the news feeds of target demographics. Such editorializing, choosing which stories to place before others, is what newspapers do every day. Facebook itself made the news when Gizmodo reported that employees were routinely suppressing news stories of interest to conservative readers, and that stories appearing on the front page of CNN or The New York Times would be injected as trending topics on Facebook even if they weren’t trending naturally. Furthermore, as long as a company’s use of user data does not violate any privacy laws, there would be nothing illegal about using that information to target certain users, such as undecided voters. Under its own terms and conditions, Facebook has no contractual obligation to display content in a fair and unbiased manner, nor is it required to disclose that it is engaging in such “editing.” Of course, if a social media company were paid to place ads or content, it might run into trouble with Federal Election Commission rules, but when a company chooses on its own to favor one candidate over another, that is simply the free expression of political opinion.
Such expression is not limited to presidential and congressional campaigns, of course. In 2015, Facebook created a rainbow filter that users could apply to their profile pictures in support of the Supreme Court’s same-sex marriage decision; about 26 million people added the filter within a week of the ruling. Similarly, in 2012, Google actively protested SOPA (the Stop Online Piracy Act) and PIPA (the Protect IP Act), two congressional bills it believed would encourage censorship, by placing a black censorship bar over its homepage logo and providing a link to support the fight against the bills.
In those instances, Facebook and Google made clear to their users that they were taking a political stance on a particular issue. But even when a platform makes less obvious adjustments to the content it provides, that lack of transparency does not make its actions any less protected by the First Amendment.