Filling in the Gaps in Section 230

Section 230 of the Communications Act has been the focus of considerable attention for some time now. When Democrats complain that Facebook and Twitter are firehoses of conspiracy theories and deliberately false information, they note that 230 was supposed to encourage social platforms to weed their gardens.

When Republicans complain about the censorship of their views, they also invoke 230, claiming it’s part of an implicit contract between these businesses and the people they serve. Copyright hawks also criticize 230 for encouraging the theft of creative works by relieving platforms of liability for enabling criminal acts.

Each of these arguments contains a kernel of truth, and each is also an exaggeration. Section 230 has fostered the development of case law that has altered its effect from the original intent; in most cases, the case law is to blame for the complaints. So how do we correct the errors in court-made law that have caused the drift?

Clarification by Small Steps

One way forward is to amend the law to clarify its purpose to the courts. A good way to start is by explicitly spelling out expectations for the content and user moderation process that Section 230 governs.

Platforms are two-sided businesses that deal with user content on one side and advertising revenue on the other. Platform obligations on both sides are largely unspecified, so confusion reigns.

Congress is good at clarifying things that people generally know. In the case of user-generated content, we generally know that platforms and users are bound by terms of service accepted by users when they sign up for their Facebook or Twitter accounts.

Terms of Use are Hard to Find

The problem is that terms of use start with a blank slate that service providers can fill with essentially any conditions they want. Unsurprisingly, terms of use are little more than a shield that allows platforms to moderate or ignore content as they see fit.

Terms of use and the moderation process often leave users in the dark about what’s permitted and what isn’t. Even finding them can be a chore: Facebook’s are located under Help & Support -> Help Center -> Policies and Reporting -> Our Policies -> What types of things aren’t allowed on Facebook -> Facebook Community Standards, all tucked under the triangle at the right edge of the page you reach by clicking the Facebook logo.

This placement does not encourage users to learn the rules, and neither does the language. The URL is simple enough once you’ve found it – https://www.facebook.com/communitystandards/ – so why not place a link to it on the main screen?

Terms of Use are Hard to Understand

Facebook’s contract is maddeningly vague, as the standards are largely subjective:

Authenticity: We want to make sure the content people are seeing on Facebook is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using Facebook to misrepresent who they are or what they’re doing.

Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.

Privacy: We are committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, and to choose how and when to share on Facebook and to connect more easily.

Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.

This aspirational statement doesn’t provide users with a clear sense of boundaries, and it’s relatively silent on the subject of fake news. While Facebook does say it wants information to be “authentic,” it jams this goal into a statement that supports the use of real names.

The statement about privacy is almost comical in the wake of the Cambridge Analytica scandal. It would be much more helpful for Facebook to admit that it can’t guarantee meaningful privacy without a wholesale change to its business model.

Similar criticisms can be – and have been – lodged against other platforms such as YouTube, Twitter, NextDoor and the rest, but I respect the value of your time so I won’t detail them.

Santa Clara Principles

At the second Content Moderation at Scale conference in Washington, DC on May 7th, 2018, organizers proposed “three principles as initial steps that companies engaged in content moderation should take to provide meaningful due process to impacted speakers and better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users’ rights.”

These principles bridge the gap between aspirations and meaningful standards. While they need some work – they over-specify in some areas and under-specify in others – they’re a very good starting point.

Here is the list compiled by the proponents, along with my comments on each principle.

First Principle: Numbers

The first principle – publishing the numbers of posts removed and accounts suspended under the platform’s content rules – is helpful to the platform as well as the user: it disciplines the platform’s own behavior and shines a light on some forms of abuse. I think users would also like to know how many people complain about their content, as well as whether certain complainers are flagging a large number of their contributions.

Platforms can protect the privacy of complainers while providing this information, of course. It’s also useful to know whether organized groups of complainers are targeting certain people with mass complaints. This happens quite often.
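
As a rough sketch of what the numbers principle could look like in practice – the data shapes and names below are my own invention, not anything a platform actually publishes – a platform could aggregate flag events into per-user statistics without revealing who the complainers are:

    // Hypothetical sketch: aggregate flagging statistics a platform could share
    // with a user without revealing who complained. All names here are invented
    // for illustration.
    interface FlagEvent {
      contentId: string;   // the user's post or comment that was flagged
      flaggerHash: string; // salted hash of the complainer's account, not an identity
    }

    interface FlagReport {
      totalFlags: number;
      flagsPerPost: Map<string, number>; // complaints per contribution
      repeatFlaggers: number;            // accounts that flagged three or more of this user's posts
    }

    function summarizeFlags(events: FlagEvent[]): FlagReport {
      const flagsPerPost = new Map<string, number>();
      const postsPerFlagger = new Map<string, Set<string>>();

      for (const e of events) {
        flagsPerPost.set(e.contentId, (flagsPerPost.get(e.contentId) ?? 0) + 1);
        if (!postsPerFlagger.has(e.flaggerHash)) {
          postsPerFlagger.set(e.flaggerHash, new Set());
        }
        postsPerFlagger.get(e.flaggerHash)!.add(e.contentId);
      }

      // Many accounts each flagging several of the same user's posts hints at the
      // organized mass-complaint campaigns described above.
      let repeatFlaggers = 0;
      for (const posts of postsPerFlagger.values()) {
        if (posts.size >= 3) repeatFlaggers++;
      }

      return { totalFlags: events.length, flagsPerPost, repeatFlaggers };
    }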

Second Principle: Notice

Some platforms censor silently, which isn’t helpful to the offender. Others provide unhelpful information, such as pointing to an aspirational goal as a reason for restricting access or taking down a comment.

Platforms should be able to point to a specific contribution and the rule it violates when taking down content, and they should be able to identify a number of offending contributions when banning a user or putting a user in time out.

This is almost never done today. The main goal of content moderation and account restriction should be educating users on what the rules actually mean.
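
To make the notice requirement concrete, here is a hypothetical sketch – the field names are my own, not any platform’s actual format – of the minimum a takedown or suspension notice could carry:

    // Hypothetical sketch of a moderation notice, invented for illustration.
    // The point is that the notice names the specific contribution, the specific
    // rule it violates, and the action taken.
    interface ModerationNotice {
      contentId: string;      // the specific post or comment acted on
      contentExcerpt: string; // what the user actually wrote
      ruleId: string;         // e.g. a numbered section of the community standards
      ruleUrl: string;        // link to the exact rule, not the aspirational preamble
      action: "content_removed" | "content_restricted" | "account_suspended";
      offendingContributions?: string[]; // for bans or timeouts: the posts that justified the action
      howToAppeal?: string;   // where to report a belief that the decision was wrong
    }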

Third Principle: Appeal

This is the most controversial provision because it’s very labor-intensive and therefore costly. I don’t have a problem with platforms that don’t provide an appeals process, but it seems to me that most platforms would like to know when they get things wrong.

The minimum requirement should be a means of reporting the user’s belief that a moderation decision has been made in error. It should be incumbent on the user to explain why the decision is wrong, but they can only do this when the platform follows the first two principles.

I don’t think anyone has a right to human review unless they’re willing to pay for it. I also think it would be great for platforms to develop AI systems capable of handling appeals, but I don’t believe we’re there yet.

Conclusion

Improving moderation standards doesn’t solve the whole problem with social platforms: it won’t stop misinformation, privacy leaks, user manipulation, addiction, or burnout.

It’s also not a universal fix for Section 230. What it does do is open the door for detailed clarifications of Section 230 that will put us on a path to refocusing the law on encouraging the behavior we want – healthy interaction – while reducing the behaviors we don’t want, such as piracy and mob rule.

Starting with a relatively easy problem will make Congress better equipped to handle the harder questions down the road.