Web 3.0 is Bullshit

by aestetix

Every so often, a new buzzword sweeps through the tech world, inspiring endless blog articles, conference talks, and empty corporate announcements.

The most recent of these is "Web 3.0," which is being hailed as some kind of "decentralized Facebook using the blockchain."

From both a technical and a conceptual standpoint, Web 3.0 is bullshit, and in this article we will attempt to explain why.

Let's quickly cover the framing that tech pundits are using to draw a line between Web 1.0 and Web 2.0, and then look at the actual turning point.  In many discussions (and on Wikipedia), we can see people arguing that Web 1.0 started in 1991 and lasted until 2004.  This is mostly reasonable.

They then proceed to say that Web 1.0 was "read-only," consisting of static pages with no user interaction, and that it wasn't even capable of showing ads.

This is completely wrong, and only establishes that the pundits making these arguments never used Web 1.0.  In the mid-1990s, we had sites like GeoCities, which allowed people to log in and create their own web pages, and sites like Slashdot had - and still have - massive comment activity; in fact, Slashdot gave us the term "Anonymous Coward," used to describe someone posting comments anonymously.  As for advertisements, technologies like pop-up blockers and ad blockers were a direct response to the constant barrage of ads from websites.  And all of this was happening long before 2004.

A better marker for Web 2.0 is at the protocol level.

With Web 1.0, while we could do all of the things that tech pundits now claim we couldn't, viewing updates did involve refreshing the page.  That is, the browser would make an HTTP request, get an HTTP response, and that would be the end of it.  Web programmers came up with ways of making websites seem interactive, using tricks with hidden "<div>" elements, JavaScript, and CSS; however, the real innovation came when we realized we could use a JavaScript object called XMLHttpRequest to make a new HTTP request from the browser, grab the response with JavaScript, and parse it back into the page without a full refresh.
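To make that mechanism concrete, here is a minimal sketch in TypeScript of the pattern just described - fire a background HTTP request with XMLHttpRequest and splice the response into the current page without a reload.  The "/status" endpoint and the "liveStatus" element are hypothetical, invented only for illustration:

    // Fetch fresh data from the server and update the page in place,
    // without asking the user to hit "refresh."
    function refreshStatus(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/status");               // hypothetical endpoint
      xhr.onload = () => {
        if (xhr.status === 200) {
          const target = document.getElementById("liveStatus");
          if (target) {
            target.textContent = xhr.responseText;  // splice response in
          }
        }
      };
      xhr.send();
    }

    // Poll every five seconds instead of reloading the whole page.
    setInterval(refreshStatus, 5000);

Wrap that pattern in enough plumbing and you have the foundation of every "live" dashboard, chat box, and auto-saving document that Web 2.0 gave us.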

This fundamentally transformed the web.  It enabled things like real-time updates without the need to click an "update" button, and opened the door to a lot of drag-and-drop web technologies we now take for granted.

We also realized that because JavaScript - like many programming languages - is Turing complete, we could use it to recreate almost all of the software we used on the computer, such as email clients and word processors, in the web browser.  We now take tools like Gmail and Google Docs for granted, but in 2004 such an idea was revolutionary.  In many respects, these uses of JavaScript brought us into the modern Web 2.0 era.

Defining Web 3.0 is a lot trickier.

As soon as the term "Web 2.0" was coined, tech pundits were trying to slap the label "Web 3.0" on every new craze.  Some said that Web 3.0 was Big Data systems like MapReduce.  Others suggested it was the advent of mobile devices (iPhone vs. Android).  Still others assured us it was the walled gardens of Big Tech itself.  It's a bit ironic to watch the same pundits who once said that Web 3.0 was Big Tech now announce that Web 3.0 is the technology that will help free us from Big Tech.

But let's take the pundits at face value and assume that Web 3.0 is what they claim: using blockchain technology to allow people to set up applications in decentralized systems that are free from Big Tech.  The problem with this claim is that both Web 1.0 and Web 2.0 were already decentralized, without needing a blockchain.  In addition, the questions many tech pundits are now posing about Web 3.0 - how to curb censorship while preventing crime - have already been addressed, as the same issues have arisen time and time again over the last 30 years.

Web 3.0 purports to enable data ownership, but we can already do that.

All we need to do is set up our own websites on our own servers, which is now easier than ever.  If the issue is interoperability, we have all kinds of technology to do that.  If we want to find a list of open systems and protocols, we can simply look at technologies that Big Tech companies initially embraced and then abandoned to force everyone into their walled gardens.  It might be worth asking why Google killed off Google Reader, which used RSS; or Google Talk, which supported XMPP.
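As a small illustration of how much interoperability the open web already offers, here is a hedged sketch in TypeScript (browser environment assumed; the feed URL is made up) that pulls an RSS feed and lists its items - no blockchain required:

    // A rough sketch, not a finished tool: fetch an RSS feed and print
    // its item titles and links.  Assumes the feed is same-origin or
    // allows cross-origin requests.
    async function listFeedItems(feedUrl: string): Promise<void> {
      const response = await fetch(feedUrl);
      const xml = new DOMParser().parseFromString(
        await response.text(), "application/xml");
      for (const item of Array.from(xml.querySelectorAll("item"))) {
        const title = item.querySelector("title")?.textContent ?? "(untitled)";
        const link = item.querySelector("link")?.textContent ?? "";
        console.log(title + " - " + link);
      }
    }

    // Hypothetical feed address - substitute any standard RSS 2.0 feed.
    listFeedItems("https://example.com/feed.rss").catch(console.error);

Any site that publishes a standard feed plugs straight into this, which is exactly the kind of openness Big Tech walked away from.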

Regarding the questions of censorship and free speech, this is not a new debate.

In the early 1990s, when the Internet was still known as the "Information Superhighway," we had the debate over whether there should be an "Internet Driver's License."

We then saw the advent of laws like the Communications Decency Act (CDA) in 1996 and the Digital Millennium Copyright Act (DMCA) in 1998, both attempting to address issues of digital speech and ownership.  We also had websites like WhiteHouse.com (a porn site), and Nissan.com, registered in 1994 by a small family-owned computer company that has spent years fending off lawsuits from the massive car company.  And, for that matter, consider the years of debate within ICANN about whether to create a .xxx top-level domain.

In conclusion, Web 3.0 is bullshit that simply revives old ideas.  The only new insights revealed by the current "discussion" are the absolute ignorance of tech pundits and the depth of greed of venture capitalists, who are probably trying to recoup losses from bad investments in crappy cryptocurrency startups.

Slapping a blockchain onto a website is not going to magically solve everything.

Then again, what more can we expect from the ruling class that is attempting to usher in the "metaverse"?
