The Cost of Shallow Knowledge: A Tale From the Front Lines of Security

by Phonax

I was born in the early 1970s - so yeah, I'm officially an old geezer now.

Recently, I bought all the back issues of 2600 after kicking off a video series revisiting some of the hacks I pulled in the 1980s and especially the 1990s (youtube.com/CallousCoder).

Looking back - and especially at 2600 and the Dutch Hack-Tic magazine from the 1989-1994 era - it's clear just how much has changed.  And not always for the better.

Let's be honest: a lot of what's in 2600 today isn't very technical anymore.  Hell, this piece might be one of those articles (meta, I know).  And that's strange, because we're now drowning in interesting CVEs year after year.  You'd expect more deep-dives, not fewer.  But instead, there's been a noticeable shift toward broader, less technical content.  The magazine's gotten thicker - but in many cases, less useful.

And this trend isn't limited to 2600.  In the enterprise world, I see the same rot setting in.  Most so-called "security experts" - CISSPs, "pen-testers" - couldn't find a zero-day if it smacked them in the face.  They follow checklists.  They run scripts written by someone else - often not even understanding what they do.  That's not hacking.  That's compliance theater.

Developers and devops engineers can (and should) automate most of this stuff, so that real pen-testers can examine the code for the nasty logical oversights that I too have created; we're all fallible.

Finding actual security flaws?  That takes deep understanding.  Intuition.  Curiosity.  And, sadly, it seems to be becoming a rare skill.

I once had a client - one of the largest privately owned companies in the Netherlands - ask my team to do a technical verification of a new product before committing €1.2 million.  Sensible, right?

So my manager and I looked at the business case first.  Honestly, it was brilliant in its simplicity.  My manager (still a good friend, despite me being freelance now for 17 years) and I looked at each other and said, "Why didn't we think of this?"

Now, knowing what the product did, understanding all the components involved, and, crucially, which ones were critical, we turned to the tech.

The product's goal was to enable energy producers - think greenhouse farmers, for example - to temporarily reduce their own power consumption when energy prices spiked and sell the energy their generators were producing to the grid instead.

Simple example: turning off greenhouse lights for a few hours without harming crops like tomatoes could significantly offset energy costs for these companies and improve their ROI.  Immediately, red flags.

"What happens if a client promises to deliver energy and can't?"

"Massive fines," they told us, "and repeated violations can get you banned from trading on the grid entirely."

So I zeroed in on the telemetry: the system where the trading back office instructs the clients to shed load - say, cut 15 kWh of usage - so that energy can be bid and sold on the market instead.  On their development system, I used a basic telnet connection to test the telemetry host - every client has one, which shows the scale of the issue.

Then I tried connecting a second session.  No dice.  Classic beginner's mistake: accept() on the socket without handing the connection off to a worker thread or a non-blocking select/poll loop.  I didn't even need to look at the code to know.
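The flaw and the fix can be sketched in a few lines.  This is a hypothetical Python stand-in (I never saw what language their telemetry host actually ran, and every name here is made up for illustration): the loop below does what the subcontractor's code didn't - it hands each accepted connection to a worker thread, so a second session can't wedge the whole host.

```python
import socket
import threading

def handle_client(conn):
    # Worker: service one session so the accept loop stays free.
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def serve(listener, n_clients):
    # The fix: accept() immediately hands the connection off to a
    # thread instead of servicing it inline.  The flawed version
    # served the first connection inside this loop, so a second
    # connect attempt just sat in the backlog forever.
    for _ in range(n_clients):
        conn, _addr = listener.accept()
        threading.Thread(target=handle_client, args=(conn,)).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # ephemeral port for the demo
listener.listen(5)
port = listener.getsockname()[1]

server = threading.Thread(target=serve, args=(listener, 2))
server.start()

# Two concurrent sessions - against the single-session bug, the
# second connection would hang exactly as it did during the audit.
c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
c2.sendall(b"second")             # served while c1 is still open
reply2 = c2.recv(1024)
c1.sendall(b"first")
reply1 = c1.recv(1024)
c1.close(); c2.close()
server.join()
listener.close()
```

A select/poll loop is the other standard fix; threads are just the shortest to sketch.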

Their developer came in, looked frustrated, and muttered, "Guess we need to restore the system...  I can't seem to send stuff to the telemetry host after my update."  My manager shot me a look.  "That was you, wasn't it?"

"Yup.  Let's keep that quiet for now, so we don't outstay our welcome.  But we just found a way to DoS a critical system using a dumb implementation bug.  And one that can even happen during day-to-day business!"  We requested access to the code - because that surely had to be a treasure trove of misery.

At first the subcontractor refused, but legal pressure from our client - again, a very big name - got us the source within 20 minutes.

When I saw the code, my heart sank - rolled up into the fetal position and started sucking its thumb.

This wasn't bespoke code.  It was re-used from another client.  And that client was a household name - meaning their production systems were also vulnerable to this DoS.  And it wasn't just this one bug.

I saw they called send() on a socket without checking the number of bytes actually sent.  Another rookie mistake.  Combined with the lack of input validation, I could remotely inject malformed data into the telemetry stream and stop systems from delivering promised power.

That's not just an exploit.  That's a theoretical threat to our grid stability!  If enough clients promise to deliver energy, say 800 MW - which for this household name wasn't rare - and you can prevent them from delivering it, you can cause an imbalance in the grid.

Our national grid runs at about 21 GW capacity.  You do the math if you suddenly come up 800 MW short of what was promised!  And remember: we weren't hired as pen-testers.  This was supposed to be a business viability check.  Yet we found two critical vulnerabilities in the first two hours.

Later, I found several other exploitable issues in the web front-end.  No Kali Linux.  No ready-made tools.  Hey, it was 2004!  Just observation, curiosity, and old-school (hacking) instincts.  And only ten working days to do it all.

Here's a fun twist: after we submitted our report, I took off for a vacation in Orlando with my intern (who has since become my best friend - 21 years now).  On the flight, I ran into one of the financial directors from the project.  He recognized me and asked, "Hey, are you also heading to Peoria?"  I blinked.  "What's in Peoria?"  I had no idea this private company even had business dealings in the U.S.  "Oh, that's our new potential business partner - the company that makes the generators we'll be co-selling with the solution you guys evaluated."

That's their business mindset.  Why settle for selling one solution when you can sell two?

I explained I was just on vacation.  He laughed, then turned serious.  "That report really shook things up.  We're re-evaluating everything; it's all rooted in uncertainty now."

"Sorry," I said sheepishly.

"Oh no!  No!  Don't.  This is what we hired you guys for - this is what we needed to know."

In the end, the purchase price dropped from €1.2 million to €450,000.  The company in the U.S. and my customer have a flourishing partnership.  The code needed a full rewrite; they were just buying the rights to the idea and the guy's business knowledge.

Six years later, it was sold for what I heard through the grapevine was a nice 24 million to a German energy company.

Sure, there were six years' worth of development in it, but we were self-reliant after the first year already.  In the U.S., these sorts of businesses became billion-dollar ventures.  Here in Western Europe, we know what value really is.

Three months later, I was brought in again - this time as a systems developer, to avoid exactly the kinds of issues I'd found.  Imagine standing across from the original business owner - the guy whose deal you'd just gutted by €750,000.  Not exactly a friendly workspace.

The Real Takeaway

Deep knowledge matters.  It's the foundation of hacking.  Of testing.  Of engineering.  But it's vanishing - replaced by scripts, tools, AI, and process zombies (no offense).

2600 used to be a place that celebrated depth.  I hope it becomes that again.  Because without it, we're not hackers.  We're tourists.
