Intelligence Without Us

Hamilton Mann’s recent critique of machine intelligence is almost endearing: you get the sense that he wants to protect the human. Unfortunately, what his argument actually protects is a crumbling definition of intelligence that we’re still clinging to.

I’ve seen similar pieces, where the author offers an increasingly desperate list of what machines “still can’t do,” as if compiling a long enough list and saying “consciousness” or “embodiment” enough times will convince us. Mann does some of that, and he’s clearly convinced himself. But all it does is distract us from the fact that intelligence has already been decoupled from the human.

Mann acknowledges that machines can now outperform humans in highly abstract domains like mathematics, symbolic logic, and strategic reasoning. But from there he retreats, insisting that intelligence is more than that. For him, intelligence is also about emotion, self-awareness, lived experience, introspection, curiosity, and intentionality.

But this kind of argument is less philosophical than it is psychological. It’s not really about machines. It’s about us. It’s about our need to believe that cognition only counts when it feels like ours. That unless a mind has a body, bleeds, suffers, and understands the way we do, it doesn’t count. To me, that’s a religious belief. And like many religious beliefs, it doesn’t hold up against the evidence.

For starters, the machines don’t care about our definitions. They solve problems. They generate code. They reason through proofs that mystify half the world’s math olympians. They simulate language. They predict patterns. They make decisions. And whether they do all this with “understanding” in the phenomenological sense is just irrelevant. We are now in an environment where outputs matter more than origins. Would you call a kid intelligent because they introspect? Hell no. You’d call a kid intelligent because they adapt, solve, and anticipate, which is exactly what machines are doing.

The distinction between statistical approximation and “real” understanding only matters to people desperate to maintain ontological control. And even then, it falls apart. Humans aren’t magical. Our sense of self emerges from feedback loops and pattern recognition. The same ingredients, just processed differently, are now showing up in machine cognition. The difference is speed and substrate, not some sacred essence.

Mann insists machines lack intentionality, that they don’t generate goals from within. But neither do most people. Our so-called desires are conditioned by signals, incentives, dopamine loops, survival constraints, and social inputs. If that sounds a lot like reinforcement learning, it’s because it is. What Mann calls “real” intelligence is simply biological reinforcement filtered through a history of storytelling. Machines just don’t have those stories. They don’t need them. They just perform. And that offends us.
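If you want to see how little machinery “wanting” actually requires, here’s a toy sketch in Python (my own illustration, not Mann’s; the actions and payoffs are invented): an agent with no innate goals ends up “preferring” whatever the reward signal favors.

```python
import random

# Illustrative sketch: a bandit-style agent with no built-in goals.
# Its "preferences" are just value estimates shaped entirely by
# external reward signals -- the point of the analogy above.

ACTIONS = ["scroll", "work", "eat"]          # hypothetical actions
values = {a: 0.0 for a in ACTIONS}           # learned "desire" for each action
counts = {a: 0 for a in ACTIONS}

def reward(action):
    # Stand-in for the environment's incentive structure / dopamine loop.
    payoffs = {"scroll": 0.2, "work": 0.6, "eat": 0.4}
    return payoffs[action] + random.gauss(0, 0.1)

for step in range(1000):
    # Epsilon-greedy: mostly follow current "desires", occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    # Incremental average: the agent's "wants" drift toward what pays off.
    values[action] += (r - values[action]) / counts[action]

print(values)  # the agent now "prefers" the best-paying action, a goal it never chose
```

Swap the payoffs and the “desires” follow. That’s the whole trick, in silicon or in dopamine.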

The critique of embodiment is valid, up to a point. Sensorimotor feedback does shape cognition. But insisting that intelligence has to emerge from a body is absurd. Machines are already interfacing with the world through sensor and actuator loops; that’s basic robotics. The fact that they don’t have skin and blood doesn’t disqualify them. It just makes them unfamiliar.

Mann ends his piece with the same old argument. Consciousness, he says, is the final line in the sand: only when machines feel, reflect, suffer, and possess a subjective self can we call them intelligent. But as far as I’m concerned, that’s a psychological safety net, and an existential loophole. It lets you watch the machines reason and outmaneuver us while still calling them dumb. Not to mention, it’s philosophically lazy. It’s like saying, “I know they act smart, but ‘deep down’ they’re just wires, so.”

I think that’s dangerous, because the machines we refuse to recognize as intelligent are already being given power. Power in government, hiring, education, logistics, warfare, welfare, sentencing. Hell, by the time you get comfortable calling them intelligent, they’ll be managing systems no human fully understands. I’m not saying updating the definition of intelligence would save us, but it might make the transition a little less confusing.

The way I see it, the machines are thinking; we just don’t like how. They’re already intelligent, just not in a way that flatters us. The question we should be asking is what happens when a civilization that built its self-image on the myth of unique cognition suddenly finds itself out-thought by systems that don’t feel or know, yet still perform and prevail.

We may not like what the machines are. But we should stop pretending they’re something else. The mind was never sacred. It’s just a system. And it’s time to get over it.