In 1997, SixDegrees.com was the first real attempt at social networking, creating a space where users could upload their information and list their friends. The site peaked at nearly 3.5 million users before closing in 1999.
Since then, a series of social networking business models have emerged, each offering more advanced tools for user interaction. LiveJournal, a site for keeping up with school friends, combined blogging with WELL-inspired social networking features; Friendster allowed increased interaction and control by users; Myspace offered open membership and gave users the freedom to personalize their pages. In 2005, Myspace, along with its 25 million users, was sold to News Corp. But within three years it was overtaken by Facebook, which started in 2004 as a site for college students and opened to everyone in 2006.
As that story shows, in the early days of the internet, disruption was fast and persistent: New companies appeared in music, video, e-commerce, publishing, and telephony on an almost annual basis. The internet seemed to be a space where competition could flourish.
Not any more.
Today, disruption occurs on a much smaller scale. Examples are dwindling as economies of scale have concentrated innovation in the hands of a few players.
Most often, the disruption now comes from regulation, for good or, often, for ill. Current regulatory efforts focus on the actions and behavior of a few actors, mainly Facebook, creating unintended consequences for the internet, particularly the fragmentation of the global internet into one that hews more closely to territorial boundaries.
It is understandable that both the public and regulators may think that regulating the “internet” means focusing on the biggest players. Much of this has to do with the fact that users are often exposed to various types of illegal behavior and content through some of the internet’s most popular services. Fix misinformation, or extremism, or ideological silos, or security on Facebook, the thinking goes, and you fix them across the internet entirely.
But this is an unhelpful and misleading narrative. First, the internet is not a monolith, so treating it as if it were one simply will not work. Second, many of the issues regulators are trying to address are not internet problems; they are societal ones. Terrorism, child abuse, and misinformation are not offspring of the internet; they existed before it and will continue to exist after it, because they are embedded in human societies. Yet they are treated as if they were exclusively internet problems. Third, and most importantly, regulators need to stop thinking of the internet as Facebook and treating it as such. The regulatory landscape of the internet involves a set of distinct issues, and the insertion of Facebook into all of them, directly or indirectly, adds to the current complexity. Content moderation, privacy, intermediary liability, competition, encryption: These are all broader internet issues, not just Facebook issues. Yet the pattern that has emerged is to treat them as Facebook issues. This means that, instead of being addressed in ways appropriate for the entire internet ecosystem, they are addressed through a Facebook lens. This has been aptly characterized as “Facebook Derangement Syndrome.”
The global regulatory agenda is replete with such examples. In the U.K., cybersecurity legislation seeks to ban end-to-end encryption because of Facebook’s plan to make it the default in its Messenger service. On the other side of the Commonwealth, Australia recently introduced a media bargaining code aimed primarily at Facebook; Facebook briefly “left” the country before negotiating a new deal. Similarly, in what appears to be a coordinated effort, Canada has pledged to work with Australia to impose regulatory restrictions on Facebook.
And this trend is not limited to the Commonwealth.
India’s new intermediary guidelines aim to tighten regulatory control over Facebook and its sister company WhatsApp, while Brazil’s fake-news bill, approved by the Senate, focuses on content moderation on Facebook and traceability on WhatsApp. In France, there has been talk of introducing “new rules” for Facebook, while Germany’s Network Enforcement Act (NetzDG) was drafted mainly with Facebook in mind. Finally, in the United States, the Trump administration issued an unsuccessful executive order aimed at regulating Facebook over alleged bias.
This impulse to center regulation on Facebook is not at all uncommon. It reflects the principal-agent problem, which over the years has allowed companies like Facebook to propose policies and put in place tools that shape how regulation is enforced. The principal-agent problem is characterized by conflicts of interest and moral hazard. Due to information asymmetries, the agent retains bargaining power, and this creates unknowns: The principal cannot know what information the agent holds, and even when she does, she cannot be sure the agent is acting in her best interest. So the principal ends up focusing directly on the agent, without taking into account peripheral issues that may be relevant.
The principal-agent problem may help explain why governments seem willing to introduce regulations targeting Facebook; it does not, however, explain why, in the process of doing so, the main losers are the internet and its users.
Over the past two years, Facebook has said, “We support regulation” and “we want updated internet regulations to set clear guidelines for addressing today’s toughest challenges.” These statements would carry more weight if they were not so self-serving. At this stage, regulation is inevitable, and Facebook knows it, as does the rest of Big Tech. In trying to adapt to this new reality, companies often take advantage of their dominant position to drive regulatory processes, often at the expense of self-regulation.
In this context, the question we need to ask is not whether regulation is appropriate, but what the real implications of regulating in this way are. There is already an argument that focusing on a few big players affects the health of innovation and the ability of newcomers to compete. And then there is the internet itself. The global reach of the internet is one of its main strengths; it is a feature, not a bug. Among other things, it allows supply chains to operate worldwide, enables people to communicate, lowers costs, and makes information sharing easier, all while helping to address social issues such as poverty and climate change.
To this end, attempting to regulate based on one or a handful of companies may jeopardize this essential quality of the internet. It can create fragmentation, preventing data from flowing across borders and networks from interconnecting, and the impact of this can be very real and very large. It can place restrictions on how information and data are distributed and how networks can interact. These are important trade-offs, and they should be weighed in any regulatory process.
So where do we go from here?
Certainly, the answer is not to stop regulating. We must acknowledge, however, that the current approach often generates undesirable consequences while only superficially addressing the problems it targets.
In this light, one possible way forward is to experiment with regulation. Experimental regulation is a relatively underused approach, yet it is flexible enough to accommodate dynamic markets such as the internet. Originally linked to the work of John Dewey, the idea is that, in policymaking, the way we approach theories of justice and strategies depends on “the experience of pursuing them”; it is this experience that then allows us to consider how to better achieve our goals. The advantage of this view is that it treats unintended consequences as an opportunity to better define appropriate regulatory frameworks and how to reach the desired goals.
Internet regulation does not experiment enough, and when it does, it tends to have the wrong focus. In Australia, for example, the attempt to secure strong journalism in an age of disinformation on social media platforms led to a “link tax” that undermines the architecture, history, and economics of the internet. This is partly due to the role that large technology companies play in the regulatory process. One thing immediately noticeable about internet regulation is the process by which certain actors engage with it: At first, they act in support of existing policies and bureaucracies. The view is that longevity brings legitimacy, and as a result the policy becomes their cause. Once this strategy is embedded in the process, these powerful forces move to push their own regulatory agenda.
There is, therefore, a certain appeal to flexible regulatory systems that allow different units to experiment with different approaches and make room for assessments that separate relevant rules from irrelevant or outdated ones. Although experimentation is neither a drastic approach nor intended to replace traditional modes of regulation, it can limit the risks of politicization as policy becomes more context-focused.
One of the first things we need is an internet impact assessment that looks at different parts of the internet’s infrastructure and the effect regulation can have on them. This is not just about regulating a few actors. It is about protecting the global infrastructure on which we all depend every day.
The internet has a Facebook problem, but the internet is not Facebook.
This article represents the views of the author and not those of his employer, the Internet Society. Facebook is a member organization of the Internet Society.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.