A Long-Term Vision of Internet Content Regulation: How Did We Get Here?

This fall was the biggest for Internet content regulation since 1996, when Congress approved Section 230 of the Telecommunications Act, establishing the basic rule that Internet platforms are not responsible for content posted by their users. That immunity was soon echoed by the EU's declaration that Internet platforms are "mere conduits," and then by dozens of countries around the world.

Recently, things have changed for the biggest Internet platforms, and a new regulatory model is taking shape. The EU's Digital Services Act (DSA), which makes the largest platforms responsible for the content they allow to be published, is moving toward final implementation; President Biden made clear that he no longer supports blanket immunity for platforms; a U.S. Circuit Court upheld a Texas law that holds platforms accountable for their content management; and the Supreme Court agreed to hear a case that could end the 1996 immunity.

At the risk of ignoring mountains of important legal detail, it is worth taking a long-term view of what is going on.

It’s important to see the Internet forest through the legal trees, because that is the perspective of the vast majority of the public, most lawyers and most senior policymakers, who typically have little knowledge of, or interest in, the actual operation of the Internet or its platforms, and instead reason through analogies they understand. (When the FTC began exploring Internet regulation, I hosted its chairman and top staff for day-long briefings at IBM Research on “What is the Internet?” and later hosted similar tutorials for the Congressional Internet Caucus.) With a few notable exceptions, most senior officials, lawyers and the public take exactly this long view of Internet content regulation, so it is worth setting aside the details and getting the big picture.

By the time consumer-facing Internet platforms emerged in the mid-1990s, three models for regulating content on electronic media existed: 1) telecommunications carriers, such as telephone companies, had no responsibility for what people transmitted over them; 2) broadcasters, like television stations, were fully responsible for everything they distributed electronically; and 3) computer networks were private, internal electronic networks used by large organizations for things like internal email. (A handful of geeks put up computer bulletin boards for other geeks, but few noticed.) So when lawmakers, judges and the public began to consider consumer computer networks like AOL and Prodigy, they had to figure out whether this new kind of electronic medium was more like a telephone, a television or a company’s internal computer network.

At first, some courts and decision-makers concluded that any consumer-facing “interactive computer service” that actively controlled the content it distributed was similar to a broadcaster, and therefore responsible for that content, while a platform that simply displayed everything it received looked more like a telephone carrier. But as mainstream platforms like AOL and Prodigy grew, it became apparent that while a broadcaster can easily monitor its single broadcast feed, a large platform trying to monitor content posted by thousands of users would be overwhelmed. So early platforms would either have to let any user’s post go up — including, notably, pornography — or spend large sums on legal services to defend themselves against claims like distributing obscenity or defamation.

So Congress adopted an unusual hybrid formula, politically justified by the need to limit Internet pornography: for user-posted content, every consumer-facing platform would have a broadcaster’s control over content (but none of its liabilities) combined with a telephone carrier’s freedom from responsibility. Put simply, platforms could control as much or as little as they wanted, like a broadcaster, while not being responsible for the content, like a telephone carrier.

This made sense at the time because the platforms were relatively small, and it was widely hoped that – if allowed to thrive – they would improve education, healthcare, the arts and more.

By the 2010s, the global growth of a few major platforms had exceeded all 1990s expectations, and criticism of the “mere conduit” legal structure for very large Internet platforms was mounting. Critics included smaller competitors, copyright interests, computer and telecommunications interests, print media interests, political activists of all kinds, national security interests and national governments.

While some national governments reacted by proposing simply to regulate Internet platforms as if they were local broadcasters, an important new concept emerged in Europe and the United States: a new category of media composed only of very large Internet platforms. This approach had the advantage of leaving the “mere conduit” status of small and medium-sized Internet platforms largely intact, while subjecting only the largest platforms to content regulations and responsibilities somewhat resembling those of broadcasters.

Important milestones in this development included the Supreme Court’s unanimous 2017 Packingham decision, in which the Court left Section 230 intact but found that, given Facebook’s sheer size, it exhibited many characteristics of a “public square,” and that “to foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.” This was followed by the EU’s decision in 2020 to move forward with a Digital Services Act that fundamentally redefines content obligations and responsibilities for very large Internet platforms, known as “gatekeepers.”

These two pivots, and similar actions, have produced a torrent of proposed and enacted legislation in the United States and elsewhere that often retains the “mere conduit” status of small and medium-sized platforms but singles out very large platforms for some responsibility for what they publish.

The creation of a new category that includes the largest platforms and substantially excludes small websites is the most significant change in Internet regulation since 1996. It obviously leaves open many complex questions of enforcement, not least “Who exactly is a gatekeeper?” and “How do I get into the less-regulated category?”

It will take the better part of a decade to see whether this new category of gatekeepers holds up – and, if so, how it will be tempered by legislatures, lobbyists, regulators and courts.

Roger Cochetti provides consulting and advisory services in Washington, D.C. He was a senior executive at Communications Satellite Corporation (COMSAT) from 1981 to 1994. He also led Internet public policy for IBM from 1994 to 2000 and later served as Senior Vice President and Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, has testified extensively on Internet policy issues, and served on advisory committees to the FTC and various United Nations agencies. He is the author of the Mobile Satellite Communications Handbook.