Weighing the future, viability of social media platforms



The way we communicate on social media is evolving, leading many to question the longevity and stability of these spaces. Graphic by Alec Lund


When a Twitter account purporting to be pharmaceutical giant Eli Lilly and Co. announced “insulin is free now” on Nov. 10, the company’s stock went tumbling. Using the new Twitter Blue service — which allows users to purchase a coveted blue checkmark, meant to ensure they are who they say they are — someone had cost the company millions for the price of $8.

Twitter Blue is just one of the changes Elon Musk has rolled out since his purchase of the social media platform in October.

Since the purchase, the Tesla CEO has been very vocal about his plans for the social media site. Within the first few weeks, Musk had already made sweeping changes to verification protocols, reinstated former President Donald Trump’s account and fired more than half the platform’s workforce.

This upheaval has left many questioning the stability and future of social media platforms and what role these spaces play in public discourse.

Here, three Arizona State University experts share their insights on content moderation, free speech and mis- and disinformation in social media spaces.

Dan Gillmor is a former journalist and professor of practice in the Walter Cronkite School of Journalism and Mass Communication. He is also the co-founder of the News Co/Lab, which is an initiative in the Cronkite School that works to advance media literacy through journalism, education and technology.

Kristy Roschke is a media literacy expert and assistant teaching professor in the Cronkite School. She is also the managing director of the News Co/Lab.

Shawn Walker is an expert in social media data analysis and an assistant professor in the School of Social and Behavioral Sciences. His work focuses on addressing the social and ethical implications of social media data collection.

Question: What motivates content moderation? 

Gillmor: All internet services that let people post their own content moderate that content to one degree or another. It's not just the big platforms, such as Facebook, Twitter or WhatsApp. One motivator is liability. For example, if there's a copyright infringement claim and they don't take down the item that's allegedly infringing, then they can be held liable for the infringement.

Another reason for moderation is simply to make the experience better for users. It’s kind of the equivalent of you inviting people into your home for a party; if someone is acting up in a bad way, you ask them to leave. 

Q: How has the perception of free speech changed with the advent of social media?

Walker: When discussing social media, we often throw out terms like the First Amendment, censorship and free speech without understanding what those terms mean and when they apply. They're often used in a political context to signal specific threats or a specific level of concern. So if we use the phrase “I'm being censored” instead of “my content was deleted,” we're moving up the scale of aggression. We're saying, “A crime has been committed against me.”

These are private platforms that are commercial spaces. The First Amendment does not apply to commercial spaces. Social media companies own, operate and pay for the operation of these platforms. They're spaces that we happen to inhabit by agreeing to a set of terms of service. So there is no free speech right. This is not a platform produced by a government entity. This is not censorship; it's content moderation, because you can't censor inside a platform where you don't have a First Amendment right.

Gillmor: There's a general appreciation for freedom of expression, but it becomes tested when people see offensive content. Sometimes it's used in ways that are downright painful though legal, like hate speech. Our commitment to preserving freedom of expression is, I fear, being tested. I think it's the cornerstone of democracy and the republic. But the online world has tested people's commitment to free speech.

Q: What are the benefits or drawbacks of allowing social media companies to moderate speech? Do you think that they're doing a good job? 

Roschke: They've been forced into this role because we've had unfettered access to platforms that allow for any kind of speech, some of which has gotten increasingly ugly and harmful, veering toward hate speech. This has required companies that had no intention of doing so, and certainly no expertise, to play the role of content moderator. Now that we've seen so many examples of how speech can be harmful online and in the real world, these companies have no choice but to make decisions about what to keep, what to remove, what to promote and what to downplay.

Are platforms doing a good job of content moderation? No, they're not, and that is not for a lack of money spent, because there have been large investments in content moderation. But it's not nearly enough in the grand scheme of things, and it is also disproportionately applied. For example, even though Facebook has a huge international population and is arguably more active in other countries than it is in the States, the bulk of content moderation focuses on English-language content, while so much activity is based in other countries and other languages. So it would be nearly impossible for these companies to effectively moderate content.

Walker: There isn’t much of an alternative. Having no content moderation would be very undesirable because people post a lot of content that we don't want to circulate, ranging from violent and sexual content to disinformation that we would argue would harm society. 

Sometimes folks act like mis- and disinformation is a solvable problem. Disinformation has been a problem for as long as we've been communicating. So the question is, how do we decrease its negative impact on society versus how do we eliminate it?

Q: Where's the line between honoring free discourse but still maintaining a safe space online? 

Walker: Online is not a safe space. We encounter a lot of content that will make people feel uncomfortable and a lot of diversity of opinions and views in some spaces. We see some more insular spaces and private groups where there is more uniformity of opinion and less contesting of ideas. The platforms themselves get to decide because they own those spaces, and then we get to decide whether we want to participate in those spaces. 

Q: If online is not a safe space, what standards are social media companies held to? Do they have a responsibility to work toward a safe space? 

Walker: The issue with social media companies is that they've fallen under the same regulations as telecommunications companies and internet service providers: by law, telecommunication companies are not responsible for the things that folks do. They're just the network that messages move over; they don't produce the content itself, so they are not responsible for it. Social media platforms have been regulated under the same framework; however, they provide a different service than internet service providers. Social media companies are creating these platforms, so they decide who can talk, what they can talk about and what features the platforms have. That's a bit different, but they've traditionally operated under that same lack of regulation and lack of responsibility.

Q: Why is it important to be verified? What is the responsibility that goes along with it?

Roschke: Verification is a cross-platform designation. It's an aesthetic signal that anyone who's seen the internet or seen social media platforms looks at and says, “That means something about that person.” Oftentimes, that check mark is automatically conflated with trustworthiness.

There's a magazine called Good Housekeeping, and they have something called the Good Housekeeping Seal of Approval. When they review products, they'll only put their seal on things that they recommend. Verification has that same connotation, even if in practice it's not actually the same thing. Verification really only means that an account belongs to a real person. However, across platforms, verification has mostly been handed out to well-known people. This check mark has had significance for over 10 years, and that's really, really important because lots of people have learned to look at that check mark, sometimes incorrectly, for credibility.

Q: Does it make it easier to spread misinformation if you can just pay for your verification?

Roschke: Yes, it could make it easier for misinformation to gain traction and to be taken seriously, because there's a false sense of notability and credibility associated with an account with a check mark. If someone were to see something that they would otherwise dismiss, they might now give it a second thought or share it.

Q: Do you think the changes happening at Twitter are indicative of trends across social media?

Roschke: Twitter as a platform is not the biggest; it's not even the second biggest. And in most cases it's used by some pretty specific groups of people. This impacts a lot fewer people than if it were happening to Facebook or something. Twitter doesn't have the engagement that some other platforms do, but what it does have is very close ties to politicians and journalists. If this ecosystem is disrupted in a substantial way, there are going to be potentially big implications for journalism and how journalists and politicians communicate. Of course, this has implications for all of the other social media platforms. It shows how precarious these systems of control are. We think that we own them or run them because we produce all the content for them, but we don't run them and we have no control over what happens to them. And this is painfully obvious now.
