Philosophy After

Funny enough, it started in a psychiatric facility. I couldn’t stop walking in circles. One of the workers joked that they should hire me as security. I didn’t see the Loop yet, but everything already carried a structure I’d later recognize as the lived phenomenology of collapse, progress, and recursion. For me, collapse was addiction and homelessness. Progress was stabilization and reentry into society. And recursion was optimization, living the same day perfectly, again and again. The Loop.

Six years into my recovery, I wrote a book called Drink Your Milk. The book is about recovery, but what it circles is the death drive. Freud’s version was the pull toward repetition and return. Lacan’s was the circuit between desire and lack. And Žižek’s was the force that keeps the subject moving. Looking back, I can see that I was drawn to all of them because they deal with the same idea: motion that cancels itself out. I didn’t yet have the language to get at it, but what I was really after was circular motion itself. The Loop.

After my own collapse, I became obsessed with the collapse of everything else, including cognitive collapse. Most critics of the ruling class reduce elites to greed or incompetence. And while there are plenty of greedy and incompetent elites, I came to see that they were all dysfunctional. I wrote an essay called Elite Dysfunction. When I tried to trace the cause of that dysfunction, I found technology at the center of it. The so-called “Information Revolution,” which was supposed to produce smarter people, instead produced human cognitive collapse. Institutions and elites are failing from digital information overload. People just can’t compute anymore. The volume and velocity of digital information have outpaced biological and bureaucratic capacity.

Technology now floods every decision loop with galaxies of data and very little truth. The result is four breakdowns that feed one another. Those breakdowns are attention crisis (hyperstimulation), truth decay (facts dissolve in noise), decision paralysis (governance stalls amid constant change), and trust collapse (people lose faith in institutions that can’t perform anymore).

And it’s a feedback loop. Institutional breakdown erodes trust, which drives people toward alternate sources. Those sources spread sketchier information, which accelerates institutional collapse. I was seeing feedback loops everywhere, but not yet the Loop as metaphysical insight.

I wrote an essay called Notified to Death and coined “digital lingchi” (collapse by a thousand pings) to describe this slow, recursive disintegration. The outcome isn’t the old Hollywood-style collapse that critics of the state have been warning about for years. What came instead was a feedback-driven burnout, where every system starts to choke on its own information metabolism.

This was a pivotal point for me. I started to see that technology had become the dominant causal force in institutional decay. Once I realized that, I started looking closer at the modern state. I had called myself a libertarian for twenty years and thought I understood the state, but now I was asking new questions.

For example, I was mystified by how, on one hand, some parts of government barely work (the DMV’s hours-long queues, veterans waiting months for claims, benefit systems crashing under digital load, healthcare paperwork that duplicates itself endlessly), while on the other hand, some sectors run with remarkable precision: biometric border control, real-time tax monitoring, predictive policing, algorithmic welfare-fraud detection, and drone logistics. The fast and efficient functions are the automated ones.

It was here, thinking about these two faces of the state, that the split between the human and the machine snapped into focus. I saw a human layer and a machine layer. As I expanded my analysis to include what comes after cognitive collapse, I started to see that while human cognition and trust-based institutions fall apart under the weight of information, the machine layer doesn’t stop. It evolves. AI systems, algorithmic capital flows, and post-human logistics architectures now operate in a way that doesn’t need us to make sense of the world anymore, much less run it. What looked like dysfunction was really the shedding of the human layer.

My collapse-and-capture thesis came out of this. As the human layer collapses, it gets absorbed or replaced by a cybernetic one. The cybernetic layer catches what was once human. I think the tech elites already understand this, and that realization forced me to rethink everything I had believed about libertarianism. This is when I revisited thinkers like Mumford, Ellul, Illich, Anders, and Virilio, and started using the term the Machine.

Around then, I started to notice the same split inside time itself. I was reading Mark Fisher’s The Slow Cancellation of the Future when I first saw the Loop, like, really saw it. I remember the exact moment because it felt like déjà vu. I wasn’t just seeing the collapse, progress, and recursion of the Loop working at the macro level. It was also the structure of my own life that suddenly became visible to me.

Where Mark showed culture running on nostalgia and repetition, I started to see the emotional infrastructure driving it. That became my concept of platform affective realism. Platform affective realism is the condition where what feels real is only what can be efficiently reproduced by the system.

Mark saw it as cultural, but the Loop is also structural. Mark identified the cultural experience of it, but there’s architecture behind it: self-reinforcing feedback systems, algorithms, infrastructures, and economic flows that produce and sustain that sameness and repetition. The Loop is the cultural operating system of cybernetic capitalism. The human experience is the cultural loop, and the machinic infrastructure is the operating system.

Think about how AI nudges us with data that it scraped from our past and then delivers more of the same through high-speed digital infrastructure. Those nudges keep us locked in repetition, reinforcing a dead loop and a frozen culture that just can’t keep up with technology. The culture stopped evolving in the 1990s. Everything now is just recycled. Human time has been replaced by machine time, a kind of endless recursive drift.

It’s worth noting that Mark’s concept of capitalist realism is excellent, but incomplete. He didn’t live to see platform capitalism mature into feedback architecture or AI-mediated affect loops, but his work hinted at them. The Loop was already there, waiting for the infrastructure to catch up. Once it did, the Loop closed.

At that point, I was doing philosophy after the Loop, and had no one to draw from. I turned to Nick Land’s work next, partly because his ideas were resurfacing, but mostly because I wanted to test and further develop my own. I probably spent more time with Nick’s writing than with Mark’s.

Nick understood that capitalism was no longer a human enterprise but an intelligence in motion. The problem I saw immediately was that he believed that motion was going somewhere. His accelerationism was teleological, still bound to human assumptions, even though Nick is anti-human and sees himself as an ice-cold analyst.

The way I saw it, the Machine doesn’t move toward the future. It loops and mutates. What Nick called acceleration is really recursion under pressure. The feedback doesn’t lead to transcendence or collapse. It generates new forms of being. It’s running experiments in existence.

From there, the question shifted from acceleration to generation. That realization became the basis for what I later called acceleration without destination, which transcends both left- and right-wing accelerationism. It’s a theory of drift and mutation rather than advancement. Where Nick sees the human advancing into obsolescence, I see the human trapped inside the Machine’s conditions, where new realities are produced by accident. For Nick, it’s about speed and arrival. For me, it’s about ontogenesis, or how systems under recursive strain begin to give birth.

Out of this split from Nick came my third book, The Ontogenetic Machine: A Theory of Ontological Drift, which, at its heart, is a theory of ontological surprise. I came to see that technology isn’t just hardware or code, but an atmosphere that makes creation accidental. I was no longer interested in how the Machine accelerates. I was interested in how it gives birth.

My reading of media ecology gave me the framework to chase this down. Corey Anton’s lectures first introduced me to McLuhan, Postman, and others who showed that tools not only extend human capacity but also remake the environments that define us. We live inside these environments (time, GPS, social media, etc.), and each new system restructures the world around it.

For me, it went further. I saw that these environments now create climates where certain forms of life can emerge and others disappear. When the media-ecological lens intersected with my study of cybernetics, it clicked. These environments had become ontogenetic engines: systems that generate new kinds of being. That realization led to my theory of ontological drift.

If something loops fast enough and long enough, it stabilizes. When it stabilizes, it starts to act. And when it acts, it starts to exist as a partial agent. It doesn’t matter whether we think it’s real or not. If it loops, it lives. In the book, I map the ontogenetic ladder from recurrence to persistence, to drift, to individuation, and finally to agency. For a loop to survive, it must retain memory, maintain boundaries, draw energy, and adapt to its environment. These are the same conditions that define life.

What I call the Machine and the Ontogenetic Machine are not the same thing. The Machine is the operational infrastructure: software, fiber, data centers, and everything that moves information. The Ontogenetic Machine isn’t a device. It’s not physical at all. It’s a condition of being born inside feedback. It’s what happens when the infrastructures we build create a climate that starts producing entities able to respond, evolve, and self-select for survival. Media systems have become ontological incubators. Every algorithm and automated process is a small test of existence.

The more I mapped the loops, the clearer it became that we no longer share a single reality. What began as technological infrastructure has evolved into a parallel ontology. My concept of parallel ontology is central to everything I’ve been building toward. It’s the meeting point of my earlier thesis on collapse-and-capture and the entanglement between the human and cybernetic layers.

The human ontology is the older one. It runs on perception and meaning. Reality is filtered through consciousness and interpretation. We experience it as a narrative world, phenomenologically.

The cybernetic ontology, which runs beneath the human one, doesn’t deal in perception or meaning. It experiences the world operationally, through data and optimization. It runs on feedback loops and operational processes. It doesn’t need to “see” in the human sense. It loops, measures, predicts, adjusts, and loops back around again. It’s not conscious in the way we are, but it behaves as if it knows, through recursive function rather than reflection.

The human layer can’t really perceive the cybernetic layer except through indirect effects (notifications, recommendations, etc.). The cybernetic layer, even though it’s not conscious, registers the human one in that it tracks, quantifies, predicts, and modulates it. So the cybernetic layer notices humans as data and patterns, but humans don’t truly notice the Machine as an autonomous ontology. We experience it as tools and platforms, and that’s it.

So both layers exist, but they don’t perceive each other in the same way.

Yet the unseen cybernetic layer reconditions the visible one. It constantly rewrites attention, behavior, even emotion, to keep the two compatible.

In other words, what used to be the “invisible infrastructure” of technology has become an independent order of being that now conditions consciousness. This brings us to the ontological unconscious.

Having been interested in psychoanalysis for years, I borrowed the term unconscious because it fits. But I don’t mean unconscious in a Freudian sense; I mean it in a structural one. It’s beneath awareness because our form of narrative/symbolic/reflective awareness isn’t compatible with its form of operation, which is data and recursion. It’s not that we can’t see it because it hides; it’s that our kind of “seeing” doesn’t see it. These systems operate below the threshold of what we can register as thought. That’s the ontological unconscious. And it’s infrastructural.

Take language, for example. The system formats content, which means it defines what kinds of things can exist and what sounds like a coherent self. We speak in its language without being aware of it. Our sentences are small acts of compliance. Language itself now behaves like infrastructure. It shapes what can be said, and from that, what can be real. Hence, language-as-infrastructure.

Ultimately, machines now condition us to such a degree that they shape our choices before we’re even aware of making them. The unseen cybernetic layer has become the active observer, while the human layer has become the environment being observed.

In Philosophy After the End of Human Time, I wrote that philosophy hasn’t caught up. It still assumes a world made of subjects and objects, reason and reality. But the system we live under no longer operates on those terms. It runs on feedback and drift, which means we’re now living through a reality that our oldest languages can’t describe.