How to Search Google Without Running Their Yucky Scripts
by N1xis10t (n1xis10t@protonmail.ch)
On January 17, 2025, the news broke that Google now requires all of its users to run JavaScript to get search results. I'm here to tell you how painfully easy it is to bypass - nay, completely ignore - this requirement.
Google told TechCrunch that "Enabling JavaScript allows us to better protect our services and users from bots and evolving forms of abuse and spam," and also "to provide the most relevant and up-to-date information." It would take a disturbingly small amount of extra effort (or none at all) for scraper operators to get past this requirement, so I think that Google's motivation might not be quite what it seems.
Before I elucidate, let me show you how to do it. Google will try to redirect you to a page that tells you to enable JavaScript, so make sure you are using Firefox unless you can find a way to prevent your other browsers from automatically following redirects (I found a setting for it in Chrome and Brave, but it didn't stop Google from redirecting).
1.) If you already know how to turn off JavaScript and automatic redirection in Firefox, do that, and then skip to Step 4. Otherwise, type about:config into the address bar and hit Enter. If it tells you that you are a moron and might damage your system, just ignore that.
2.) In this config page, type javascript.enabled into the search bar, and then when that configuration option shows up below, toggle it to false with the little double-arrow icon thing over on the right side of the screen.
3.) Next, search for the accessibility.blockautorefresh option, and toggle it to true. (If you'd rather set this pref and the previous one in a file, see the user.js sketch after this list.)
4.) Go to google.com and type in your query. Hitting Enter won't work, so just click the little Google Search button below the input form to submit your query.
5.) This will get you to a page that says "Please click here if you are not redirected within a few seconds." Don't click the link. Everything you want is in the page that you are now on, it's just hidden. Press Ctrl+Shift+i to open the developer tools.
6.) The developer toolbox is split into three sections. On the right there should be some stuff about layout, in the center there should be some style information, and on the left there is a bunch of HTML. Type #main into the search box at the top of the HTML section and hit Enter. It will highlight a little div element a ways down in the page.
7.) Now look in that center toolbox section that has style information. You should see a piece that says display:none. This is the CSS rule that keeps our precious search results hidden. Hover over the words display:none and uncheck the little checkbox that appears to the left of the words.
8.) Congratulations! The search results should be visible in the page now, and you can scroll through them. If you click to the next page of results, you will have to repeat Step 6 and Step 7.
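If you would rather set those two prefs in a file than click through about:config every time, Firefox also reads a user.js file from your profile directory at startup. A minimal sketch, using exactly the pref names from Steps 2 and 3 (where your profile directory lives depends on your operating system):

user_pref("javascript.enabled", false);             // Step 2: no page scripts
user_pref("accessibility.blockautorefresh", true);  // Step 3: don't follow automatic refreshes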
So, yeah. Maybe it isn't as easy as getting in a car accident, but it certainly isn't the next Zodiac cipher. I would assume that most scrapers don't attempt to render web pages, so I'm not sure this would actually even be noticed by most bot owners. According to TechCrunch, some tools did seem to be affected though.
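To make the point concrete, here is a minimal sketch of such a scraper, using nothing but Python's standard library. It never loads CSS, so display:none never enters the picture. The "/url?q=" link format is an assumption based on how Google's script-free result pages have historically looked, not something verified against the current markup, so treat the selector logic as a placeholder.

import urllib.parse
import urllib.request
from html.parser import HTMLParser

class ResultLinks(HTMLParser):
    # Collect outbound links of the form /url?q=<real destination>.
    # Hidden or not, the anchors are sitting right there in the HTML.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href") or ""
        if tag == "a" and href.startswith("/url?q="):
            query = urllib.parse.urlparse(href).query
            self.links.append(urllib.parse.parse_qs(query).get("q", [""])[0])

params = urllib.parse.urlencode({"q": "how to disable javascript"})
req = urllib.request.Request(
    "https://www.google.com/search?" + params,
    headers={"User-Agent": "Mozilla/5.0"},  # an ordinary browser-ish UA
)
html = urllib.request.urlopen(req).read().decode("utf-8", "replace")

parser = ResultLinks()
parser.feed(html)  # no rendering, no CSS, no JavaScript
print("\n".join(parser.links))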
I must conclude that one of two things is the case: either Google is colossally stupid and doesn't know how to keep people from scraping, or Google doesn't actually care about bots at all, and the real goal of this restriction is to get individual, non-malicious people to turn JavaScript back on in their browsers.
If I had to guess, I would guess case two. I suppose the key takeaway, from Google's point of view, is that people shouldn't be able to forgo fancy interactive features in order to speed up their browser and protect themselves from untrusted scripts. Turn JavaScript back on, peasant!
Update: I had to figure out a new bypass method because, as it turns out, it's not quite that simple. You can make about 20 searches with this method before Google stops including the results in the pages that it gives you. Of course, the easiest way to get more results is to turn on JavaScript, reload the page, and then turn off JavaScript again. You get about 20 more searches before you have to do it again, so that obviously isn't a great solution.
Another way to do it would be to figure out what the JavaScript in the page does to make Google trust us again, isolate that functionality from the rest of the script, and either execute it in the browser or reimplement it in a different language. That would be difficult, or at least time-consuming, and I found a much easier solution.
There is a text-only web browser called Lynx that doesn't run any JavaScript, and if we use it to make Google searches it - surprisingly - just works. There also doesn't appear to be any limit to the number of searches we can make through Lynx. Google actually gives this browser pages that contain no JavaScript (but do contain all the results), and we can get that same special treatment if we change the user agent string of our normal browser to be the one that Lynx uses. It is pretty easy to find tutorials on the web for how to change your user agent string, so I won't tell you how here. This is what you need to change it to:
Lynx/2.9.0dev.10 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/3.7.1

Using this user agent string will be just fine for most websites, but walmart.com (and probably some others) won't let you do any browsing because they think you're a bot. It works really great for Google though.
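If you want to check the Lynx treatment before fiddling with your browser settings, the same trick works from a few lines of Python; the only special thing about the request is the User-Agent header. A minimal sketch:

import urllib.parse
import urllib.request

LYNX_UA = "Lynx/2.9.0dev.10 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/3.7.1"
params = urllib.parse.urlencode({"q": "text only web browsers"})
req = urllib.request.Request(
    "https://www.google.com/search?" + params,
    headers={"User-Agent": LYNX_UA},
)
page = urllib.request.urlopen(req).read().decode("utf-8", "replace")

# If Google is serving the same pages it gives Lynx, there should be no
# script tags here and the results should be in plain sight.
print("script tags:", page.count("<script"))
print(page[:400])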
I haven't seen any JavaScript or AI summaries in this incarnation of Google, and it has a far more utilitarian interface that I really like. I might end up having to write another part to this article if a bunch of people start writing bots that use this method, but for now it works perfectly.
I would like to rescind my earlier comments about Google's motivations, because I'm really not sure anymore.
Use Lynx or at least pretend to, and let me know if you stumble across any other websites that are cooler when viewed this way. Thanks for reading!