The fight for the internet (1)

When writing about technological leaps on the internet, one can fall into the trap of oversimplification or reduction.

I didn’t want to produce a dry and truncated technical report that overlooks the actual drivers of the internet’s development or the changes that have ebbed and flowed over the past decade within the largest network in the history of humankind.

These changes are becoming more apparent in the legislative preoccupation with online privacy issues, the rising interest in blockchain technology and cryptocurrencies (such as Bitcoin), and other efforts to move away from centralized structures of control.

It wasn’t long before I drew parallels between these struggles within the internet and the fears voiced by the German philosophers Martin Heidegger and Max Horkheimer (the latter a founder of the Frankfurt School) about how technology has developed and changed our lives since the Industrial Revolution. Heidegger called his observations “calculative thinking,” and warned of the statistical nature of technology: the tendency to reduce value to what can be quantitatively or mathematically measured (such as time or distance or, in our market economy, the efficiency of production) — an absolute Cartesian rationalism that has overtaken all that was “spiritual” or “ethical” in the pre-Renaissance world. In doing so, he was pointing out the mistakes of his contemporaries, such as the claim by physicist Max Planck that “what is real is what can be measured.”

“All distances in time and space are shrinking… [Man] now receives instant information, by radio, of events which he formerly learned about only years later, if at all…Distant sites of the most ancient cultures are shown on film as if they stood this very moment amidst today’s street traffic…He puts the greatest distances behind himself and thus puts everything before himself at the shortest range… The peak of this abolition of every possibility of remoteness is reached by television, which will soon pervade and dominate the whole machinery of communication…Yet the frantic abolition of all distances brings no nearness; for nearness does not consist in shortness of distance.”

—An excerpt from a 1950 lecture Martin Heidegger gave before the Bavarian Academy of Fine Arts

Horkheimer defines “instrumental reason” — a term he used to describe this phenomenon in his book Critique of Instrumental Reason — as a mode of thinking whose main objective is to solve problems based on quantifiable means and efficiency, a mode that completely disregards the humanitarian or ethical aspects of such solutions. 

In A Treatise of Human Nature, Hume wrote about the “is-ought problem” — the distinction between positive statements (about what is) and prescriptive statements (about what ought to be). The former is the basis for the scientific method, while the latter is an ethical issue that can be explored through social or religious inquiry. 

Horkheimer built on this idea to formulate his critique of the post-industrial-revolution phases of production. Heidegger also pointed to the vital distinction between what is and what ought to be, although he didn’t formulate his philosophical discourse around a specific economic phenomenon, as Horkheimer did. Heidegger instead placed his philosophy in the context of the human being’s relationship to stages of knowledge (the ontic-ontological distinction), making his framework the most suitable for explaining the development of the internet through battles currently taking place in the background, sometimes far removed from internet users.

But before getting into what’s happening behind the scenes in the world of the internet, let’s go back to a fundamental question: What is the internet?

At first glance, the question might seem silly. But in reality, the common perception is that it is Facebook or Google or other, similar services.

Even the definition of the internet as a network you use to reach websites or services is still lacking.

When you surf websites, you’re using the World Wide Web, a subset of the internet. The World Wide Web is simply the most prevalent and important service built on top of the internet, no more and no less. Yet, even if our understanding of the internet extends to include other technologies such as email or torrenting, we’ll find that such a utilitarian — or to be more accurate, objective-based — definition still fails to capture the underlying reality of such technologies.

This basic understanding lacks what Heidegger called “releasement” — meaning that it confuses objectives and results for essence. This leads to mistakenly understanding the internet through its practical usefulness. Instead, we should be examining the concept unrestrained by its function.

Heidegger termed the realization of this mistake “transcendentalism,” a concept that has allowed for attempts to develop, or correct, the internet, which I will discuss in the second part of this series.

But first, let’s go through the circumstances and interests that led to the current formulation of the internet and the World Wide Web. 

How did the internet start? Communication in the event of nuclear disasters

First, we have to keep in mind the original reason behind the creation of the internet as we try to understand the nature of this technology.

In the late 1960s, at the peak of the Cold War, the Advanced Research Projects Agency (later DARPA), the Pentagon’s research arm, was trying to build a communication network that could withstand a nuclear attack. Paul Baran, one of the researchers whose work shaped the project, had developed a new technique for data transfer called “packet switching,” with the purpose of building a decentralized network that couldn’t be destroyed by targeting one central hub.

Then in 1974, Americans Bob Kahn and Vint Cerf developed a communication protocol that allowed for a more efficient and reliable transfer of data — TCP/IP — which later became the technological stepping stone for the creation of the internet. This protocol assigns a unique address to every machine connected to the internet, which allows for communication between all the computers on the network.
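To make the addressing idea concrete, here is a minimal sketch in Python of the two steps that underlie most traffic on the network: translating a name into an IP address and opening a TCP connection to it. The hostname example.com and port 80 are placeholders used purely for illustration.

```python
# A minimal sketch of TCP/IP addressing, using Python's standard socket module.
# The hostname and port below are illustrative placeholders.
import socket

# Every machine reachable on the internet is identified by an IP address.
ip = socket.gethostbyname("example.com")
print("example.com resolves to", ip)

# TCP then provides a reliable, ordered stream of bytes on top of IP's addressing.
with socket.create_connection((ip, 80), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()
    remote_ip, remote_port = conn.getpeername()
    print(f"connected from {local_ip}:{local_port} to {remote_ip}:{remote_port}")
```

The point is simply that each end of the conversation has its own address, which is what makes communication between any two computers on the network possible.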

While the original objective was to build a decentralized communication network, the end result was computers connected to Internet Service Providers (ISPs) which run smaller regional networks — such as TE Data, or WE in Egypt — that interconnect with the networks of ISPs in other countries, together making up the mother network, the internet.

This technology paved the way for some of the protocols that we now use on a daily basis, such as HyperText Transfer Protocol (HTTP), BitTorrent and email. HTTP came to dominate the internet as the original Usenet and ARPANET protocols faded, eventually becoming the basis for the World Wide Web. 

The desire to share knowledge and the creation of the World Wide Web

In the 1980s, British computer scientist Tim Berners-Lee was working on designing a database for the European Organization for Nuclear Research (CERN). He worked sporadically on the project until CERN started using the TCP/IP protocol, which enabled him to create a data network based on a theoretical device described in a 1945 article by American inventor and engineer Vannevar Bush — the memex — which links pieces of information together in much the same way as the hyperlinks and HTTP we use on the web today.

This concept was further developed, and every element/resource on this network, not just the servers hosting them, was given a unique identifier called a URI (Uniform Resource Identifier) — and so the Web was created.

The idea was to connect all data inside the network in a manner that allows for building knowledge, as Bush had envisioned for the memex. The machine would collect human experience and knowledge and link them together where they overlapped. The concept is similar to Wikipedia — you click on a word, a video, an audio clip, or a picture on a page and you are taken to another page with more information on that particular element.

In short, the original objective was to share information and make it more accessible. But vigorous efforts to develop this concept inadvertently led to defective results. We ended up with a model on top of the internet that made web addresses easily readable: URLs and domain names, such as Facebook.com, that follow the rules set by the Internet Corporation for Assigned Names and Numbers (ICANN) and that hid the network’s complexity.

This naming paradigm made a handful of monopoly services such as Facebook or YouTube more popular by virtue of the strength of their brand versus other, smaller sites.

In fact, the current state of the internet and the web has drifted back toward the centralized model it was meant to improve on, a phenomenon that can be explained by the following:

  • Web paradigm/model #1: Client/server

Let’s take this very article you’re reading as an example. To reach it, you clicked on a link, then the page was loaded on your screen. On your browser’s address bar, you’ll find the main address (madamasr.com in our case) followed by forward slashes (/) separating the path segments that lead to the article’s page, which together form the resource identifier, also known as a URI.

The browser acts as a “client” that asks for resources (the page’s data) from the server (or one of its mirrors if you’re reading this inside Egypt in 2019). The server, in this case, is just a powerful computer able to fulfill the website visitors’ requests.

One of the most common protocols used in this model is HTTP, which you’ll find preceding all the URLs of the websites you visit.
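As a rough illustration of the request/response cycle described above, here is a minimal sketch in Python using only the standard library. The URL is a hypothetical example; any public web page would do.

```python
# A minimal sketch of the client/server exchange over HTTP(S),
# using Python's standard library. The URL below is a hypothetical example.
from urllib.parse import urlparse
from urllib.request import urlopen

url = "https://www.madamasr.com/en/"  # hypothetical example URL
parts = urlparse(url)
print("scheme:", parts.scheme)  # the protocol (http or https)
print("host:  ", parts.netloc)  # the server's domain name
print("path:  ", parts.path)    # the resource identifier on that server

# The "client" (this script) asks the "server" for the resource;
# the server replies with a status code, headers and the page's data.
with urlopen(url, timeout=10) as response:
    print("status:", response.status)
    print(response.read(200))   # the first 200 bytes of the page
```

A browser does essentially the same thing, with rendering layered on top.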

  • Web paradigm/model #2: The Cloud

This model is a natural and functional evolution of the structure previously mentioned, that of servers responding to users’ requests. 

Servers and resources are placed under one “virtual” cover with the same address, which increases the capacity to meet users’ requests — such as G Suite, which includes Gmail, Google Docs, Google Drive, Google Calendar, etc.

Before we carry on any further, let’s clarify some terms:

In the context of cloud computing, the term “resources” is broken down into its basic elements (storage space, processing power, and the network capacity to connect machines), doing away with the old notion of a single super-server or super-computer. Data stored in a cloud is in fact stored on several servers, most likely in different data centers, if not in different cities and countries. A management layer ties this data together so you can access all of it when you log in to your iCloud or Instagram, for example.

The nature of cloud technology further entrenched the separation between producers (developers) and consumers (internet users). The technological divide between the producer and consumer widens with the latter increasingly reliant on services provided by the former, as opposed to a more open, decentralized system.

When resources are divided in such a manner, the owners of these systems and data centers end up with surplus capacity which they can then monetize, creating a new class of web services.

These new services fall into two main categories:

  1. Services targeting developers, such as Infrastructure as a Service (IaaS) or Platform as a Service (PaaS).

IaaS specifically targets web developers by providing hosting services and virtual servers that are operated and maintained by the provider and scaled to the demands of users. In this way, web developers get cheaper, more efficient solutions in exchange for relinquishing real ownership of their platforms. Among the most prominent providers of such services today are Amazon, Google, and Microsoft, companies which may also integrate with each other, as Oracle did with Microsoft in June 2019.

PaaS provides similar solutions to IaaS and also offers computing and development environments (i.e. programming languages and operating systems for developers and companies). Rackspace and Force.com are two prominent examples.

  2. Services targeting consumers, such as Software as a Service (SaaS) and Data as a Service (DaaS).

Most internet users deal with these services several times daily when they back up files online, share a status on social media, create a document on Office 365, watch a YouTube video, or watch a movie on Netflix or Watch iT, its Egyptian, government-run counterpart. Google recently unveiled Google Stadia, a gaming service using the same technology, which is set to compete with hardware consoles such as PlayStation and Xbox.

All of these services are manifestations of the idea of focusing on consumers’ productivity in the new iteration of the web, known as Web 2.0.

The monetization model for such services relies on you giving up ownership of your behavioral data (such as a joke on social media, or your preference for a product) or your files, in exchange for storage space, a user-friendly interface, and a number of additional features. In effect, your very identity, your relationships and all of your preferences have become commodified and sold.

This is what we are witnessing: a boom in a new discipline — data science — which uses quantitative reasoning to produce statistics and analysis of big data (i.e. all the information available in this new system).

  • In short, both developers and users sell their identities, property, and personal narratives to a handful of corporations in exchange for cheaper, more efficient services. This is the pinnacle of calculative thinking (what can be mathematically counted, such as number of followers) and instrumental reasoning (price and efficiency of services).

In this system, individuals pay the price by allowing themselves to become resources for corporations. They hand over their identities, their ideas, and everything they own to the service providers. This behavioral data is then processed by algorithms into statistical data that corporations use to make economic and political decisions — ways to control the masses.

  • The clearest and most basic consequence of this system occasionally manifests itself in incidents such as leaked celebrity pictures, or Mark Zuckerberg’s congressional hearing in light of the revelations that Cambridge Analytica misused Facebook data to influence voters in the US election. These consequences come on top of the fact that the centralization of these systems makes them weaker and more prone to collapse — defeating the very reason the internet was originally created.

But to get the full picture, it is necessary to make note of an issue that has been entrenched in the roots of the internet since its earliest days. 

The centralization of Internet Service Providers and ICANN

The circumstances that led to the centralization of the internet (a system originally built to be decentralized) coalesced when the World Wide Web came into use.

It all started with an attempt to accommodate the human tendency to remember names better than numbers — you surely prefer typing “Google.com” instead of “172.217.10.110.” 

In 1983, American computer scientist Paul Mockapetris was looking for an alternative to typing out a long string of numbers every time he wanted to reach resources on ARPANET — one of the first networks to use the TCP/IP protocol. Mockapetris presented papers that argued for replacing the numbers with an index matching them to easily remembered names, a system that became the Domain Name System (DNS). It wasn’t long before users started adopting it.

As the internet grew bigger, the need grew for a “neutral” organization to be responsible for this index, which is copied on machines around the world called DNS servers (think of it as a phonebook, but for browsers). The non-profit ICANN organization was created in 1998 for that purpose.

The DNS relies on 13 root servers (a limit tied to how many server addresses fit in a single DNS response packet under IPv4), coordinated through ICANN and responsible for feeding and updating the rest of the world’s DNS servers with the same index. Ten of these root servers are located in the United States, while the remaining three are in Stockholm, Amsterdam, and Tokyo. These servers sit at the top of the system that organizes the addresses we use daily when we visit websites (such a monopoly occasionally creates conflicts between countries).
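To see the phonebook at work, here is a minimal sketch in Python of the lookup a device performs every time a domain name is typed. The domain names are only examples, and the addresses returned will vary by time and place.

```python
# A minimal sketch of a DNS lookup, using Python's standard socket module.
# The domain names are examples; the addresses returned will vary.
import socket

for name in ("madamasr.com", "google.com"):
    # The local resolver consults the DNS index (ultimately anchored in the
    # 13 root servers described above) to translate the name into an address.
    address = socket.gethostbyname(name)
    print(f"{name} -> {address}")
```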

Tasking ISPs with managing regional networks also automatically gives governments and owners the power to block websites and domains however they please, given the absence of international regulations banning such practices. 

This centralized naming system is what allowed a large share of the internet’s most popular websites to become unreachable for hours on a single October day in 2016, when a major DNS provider (Dyn) came under a massive denial-of-service attack. Once again, a fundamental flaw in the naming system pushed the internet deeper into centralization, with all of its flaws.

This is how the internet gradually shifted from a distributed system to a centralized one. The centralization produced by the aforementioned relationships is easy to point out: much of the activity on the internet takes place on platforms owned and controlled by a handful of corporations.

To put the extent of this centralization in perspective: Facebook has 1.52 billion daily active users. The concentration of visitors to the websites and services that currently monopolize the internet speaks for itself. Centralization, in this case, took shape spontaneously according to the rules and economic logic of the free market, producing an oligopoly of Microsoft, Amazon, Facebook, Apple, Google and other internet titans.

The calculative thinking of consumers helped create this situation through their total surrender of ownership and identity in exchange for some benefits, like lower costs or a better user experience. Meanwhile, developers’ instrumental reasoning pushed them to give up ownership of their tools and products, and to prioritize profit and economic gain over users’ privacy and identity, and over the ethical questions raised by managing such massive troves of data.

We need to find a way out of this.

