Artificial Interruption

by Alexander Urbelis (alex@urbel.is)

Feeding on Feedback: A Fatal Flaw of AI's Future

When the towers fell on September 11th, I, like many Americans - and especially New Yorkers - felt compelled to help.  After waiting in line to give blood, I was turned away.  I had studied at Magdalen College, Oxford University from 1997 through 1998 and thus triggered a ban on donors who had lived in the U.K. for six months or more since 1980.  The restriction was tied to fears of transmitting Mad Cow disease through transfusions.  I've often recalled that moment, but it resonates even more now as I think about AI systems and large language models.  The parallel is striking: just as prions in Mad Cow disrupt the brain by inducing self-propagating disorder that overwhelms the body's recycling machinery, AI models that repeatedly train on their own output risk a digital analogue - a "Model Autophagy Disorder," or MAD.

Mad Cow disease, formally known as Bovine Spongiform Encephalopathy (BSE), earned its name from the erratic, uncoordinated behavior of afflicted cattle.  The disease arises when cows consume feed contaminated with prions, i.e., misfolded proteins from other cows.  These proteins resist breakdown, accumulate in the brain, and trigger the devastating neurological decline that defines BSE.  The analogy to AI is clear: like prion-contaminated feed, self-ingested output can corrupt models, gradually degrading their ability to function.

AI systems, in turn, suffer from MAD, a form of digital cannibalism that occurs when models are trained on data that other AI systems generated.  Over time, as models continually ingest this AI-generated data, something weird happens: the diversity and quality of their output degrade, ultimately leading to what is termed "model collapse."

When model collapse occurs, AI systems, just like the mad cows, become increasingly detached from reality.  Reality, in this sense, means the world of human-generated data.  A collapsing model will begin to generate factual inaccuracies, and - just as the prions that infect the brains of cattle never degrade - the defects in these models appear to be irreparable.
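If you want to see this dynamic in miniature, a toy simulation will do.  The sketch below (Python with NumPy) treats the "model" as nothing more than a one-dimensional bell curve, and the sample size and generation count are arbitrary choices of mine - real training pipelines are vastly more complicated - but the feedback loop is the same: each generation is trained only on what the previous generation produced.

# A toy illustration of model collapse: each "generation" is a model fit
# only to samples produced by the previous generation's model.
import numpy as np

rng = np.random.default_rng(2600)

mu, sigma = 0.0, 1.0     # generation 0: the "human" data distribution
n_samples = 100          # how much data each generation trains on
n_generations = 500

for gen in range(1, n_generations + 1):
    synthetic = rng.normal(mu, sigma, n_samples)   # output of the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # next model fit only to that output
    if gen % 100 == 0:
        print(f"after {gen} generations: std = {sigma:.4f}")

Run it and the printed spread drifts steadily toward zero.  Each generation faithfully reproduces most of what the previous one produced, but the tails - the rare, the surprising, the creative - are the first things to vanish, which is exactly the loss of diversity that defines MAD.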

Perhaps another way of looking at this phenomenon is that when AI systems become inbred in this manner, they lose their spark, their creativity, the veneer of brilliance - because they lose their minds.  The implications of this, though possibly not immediate, could be drastic.

If you can, imagine for a moment yourself in the 1990s or really any decade before, when cell phones were not ubiquitous.  Imagine the days when, if you had to make a telephone call while driving, you needed to pull over, find a payphone, scrounge together some change, dial the number of the person you were calling, and have a succinct conversation before your quarter's time was up.  You had that telephone number in your head.  You had the telephone numbers of all your friends and family in your head, ready to be dialed at a moment's notice.  I remember quite well being able not only to memorize important telephone numbers, but, as a phone phreak, to train my brain to hold a considerable number of ill-gotten calling cards together with their PINs, credit card numbers, and numbers for hacked voicemail box systems where other phone phreaks would dole out codes.  I committed those numbers and codes to memory with ease.

Fast-forward now to 2025 and ask yourself: how many telephone numbers have you memorized lately?  The number can be counted on one hand, and in all likelihood, you won't need all your digits.  You may still remember many of the numbers you frequently dialed more than 25 years ago, but I highly doubt you recall the telephone numbers of even your most frequent contacts today.

Cell phones, even the earliest versions, had onboard memory that allowed you to store these numbers in a contact directory.  That little bit of memory saved us from having to remember them all.  Over time, our brains grew accustomed to not memorizing digits in this manner.  Our neural pathways changed, and I now find it quite difficult to commit new numbers to memory, despite the facility I once had.

I fear that, in much the same way, by removing the need for humans to research, organize their thoughts, and then draft a cogent and coherent piece of writing based on those thoughts, we will over time begin to lose the ability to think rationally, clearly, scientifically, and creatively.  Indeed, I already see the beginnings of this.

As a law professor at King's College London for several years now, I have had a great many international students at the post-graduate level.  These students are very bright.  There is no doubt about that.  But while many students in the pre-ChatGPT, pre-LLM world may have struggled with language difficulties and with creating an outline of their proposed dissertation, nearly all of my students now have no such travails.  In fact, the difference between the work product of students today and that of only three years ago is quite astounding.

It's not just students using AI systems to assist with their coursework.  When I was recently in Barcelona for a conference of chief legal officers, many of my legal colleagues regaled us with their innovative uses of AI models to prepare revisions of contracts based on past contracts they had negotiated.  This exercise saved time and a great deal in outside counsel fees.  It also sped up the process of reaching a final agreement with the counterparty and thus helped the business achieve its goals.  Everybody wins, it seems.  But maybe not.

We have to think of the young lawyers and other professionals who would otherwise have negotiated that contract.  These negotiations are teaching moments and formative experiences.  If an AI model is on both sides of a contract negotiation, the contract may reach final form in a highly expedited fashion, but there is something slightly terrifying about the notion of non-human systems negotiating with each other over the labor of humans.

We're swiftly sliding into a stage where AI will draft our deals, research our reports, outline our ideas, write our works - songs, poems, novels - doing anything demanding deep thought or detailed design.  As we near this notorious tipping point, danger looms.

The AI allies aiding us may start to stumble.  With fewer human-forged ideas to feed on - because AI does the heavy lifting - AI systems will feast mostly on each other's feeds.  As they gulp down this synthetic slop, their efficacy will swiftly decline.  It's not beyond the pale to envision this scenario.

When AI suffers from MAD, it makes mistakes, muddles accuracy, and misses creativity - but, critically, it will still churn out content.  Left unchecked, those errors embed themselves in other AIs' training data.  Like prions poisoning cattle feed, these flaws infect individual models and soon spread, threatening the entire AI ecosystem.

After years of AI carrying our cognitive load, when those systems begin to stumble and fall, a frightening future may unfold.  Humans may begin to devolve, losing the very spark of what made our species unique, and flounder at basic societal tasks.  Just as our memory for numbers has faded, so too may our skills to organize, argue, and create.  This prospect profoundly worries me.

We may be living in AI's golden age.  Today, AI is fueled by millennia of human creativity, rich with unique, human-made data.  But in a decade, as AI crafts most content, these systems will starve for fresh fuel - rejecting the bland diet of their own making.

There is something ineffable and inexplicable about content that humans create that contains within it the spark of something greater.  The words of Aristotle or Emerson carry with them the weight of human experience and toil in a way that no AI could ever duplicate.  And yet, it is that very spark of life within human content that is the essential raw material for AI systems to operate.  If we are to co-exist with AI systems, the only way forward is for us to continue to create unique and original works, which, in turn, means that we cannot and should not rely on AI systems to generate that content.

This leaves us in the somewhat dystopian position of having to work to feed the machines that sustain us.  Machines that were, of course, there to ease our burden will have become our taskmasters and our burden to carry.  The way out of this is for society to place a premium on creative content made without the crutch of AI, and to recognize that "artificial" is the operative word in the phrase "artificial intelligence."  I hope that we may one day come to see artificial intelligence in much the same way that we view artificial sweeteners: as a cheap, ersatz, and potentially harmful replica.

If we ignore these digital echo chambers and toxic feedback loops, we risk eroding human intelligence and abilities, condemning ourselves to stagnation (or worse, decline) in which genuine progress and innovation may not vanish outright but may cease to be linked to humanity at all.
