Anthropomorphics

Dennis Bouvard (@dennisbouvard)

September 30, 2025

I have to admit I was always a little tentative about the title of my book, even though it seemed to me appropriate and necessary. My thinking was that while anthropomorphizing was a very common criticism of certain ways of thinking and tropes, the criticism itself implied that the “human” that was projected onto the non-human was itself a given. So, I wanted to “appropriate” the term and turn it around and make the point that humans had to be anthropomorphized before they could get around to anthropomorphizing anything else. I didn’t want to use the term already used in GA for the creation or invention of the human, “hominization,” because the turn from GA I was making in the book was toward the center, drawing upon Eric Gans’s own insight in The End of Culture that humans originally modeled themselves on the center—“hominization” seemed to me to elide that step by suggesting the original humans simply invested themselves with humanity, so to speak. I didn’t put it this way at the time, but now I will emphasize that “anthropomorphics” was also meant to foreground the artificiality of the human, from the beginning—we were always already imitating the center that was itself nothing more than a vectorization of our converging desires turned back at us through a prohibition. This was a way of distancing myself from GA’s or any humanism and insisting on the historicity of the human. Still, while I, on occasion, “proclaimed” the new science of anthropomorphics, I rarely returned to it, settling instead on the more disciplinary-sounding “center study” to label what I was doing. This is because anthropomorphics is a widely used word with a whole range of associations—even a children’s novel series—that I was tentative about having to “answer” for in ways that might complicate the inquiries I wanted to advance.

But some recent reading, some of it quite proximate to the Antikythera project, has buoyed my sense of the usability of the concept. First of all, Lambros Malafouris, in his How Things Shape the Mind: A Theory of Material Engagement, following up on the notion of the “extended mind,” i.e., the thesis that thinking takes place not only in the brain or even in the whole body but across the entire expanse of the world in which we participate (a thesis with which I am familiar and with which my insistence on a kind of exhaustive performativity is in agreement), insists that anthropomorphism, i.e., the projection of human qualities onto non-human realities, is not merely “natural” (and therefore “illusory”) but necessary, a critical part of thinking and knowing the world. His argument is, in part, a critique of Cartesian modernity for trying (and necessarily failing) to abolish anthropomorphism. Second, Iulia Ionescu’s PhD dissertation, “Just Like Me But Not Exactly,” which is a study of the way users anthropomorphize computation, including the apps with which we engage all the time, arrives at similar conclusions, in this case I think somewhat against her will and to her surprise. Her whole dissertation is an attempt to design experiments that can test the degree to which humans model AI anthropomorphically, which leads to questions of how humans model each other, and finally to the conclusion that there’s no way of constructing an artificial situation that will help us understand how humans model AIs outside of such contrived situations. This paradox applies to simulations, predictions and computation more generally insofar as the simulation, prediction and computation can be incorporated into the activity that has been simulated, predicted or computed—this all goes back to basic cybernetics research. After all, the data used to create AIs was drawn from pre-AI humans and therefore might not be applicable to post-AI humans in their interactions with AI; and, for that matter, AIs that are built on data including humans’ interaction with the previous iteration of AI may not be relevant to humans in their interactions with the present iteration. So, her questioning of anthropomorphizing led her to very fundamental questions of the human—ultimately, maybe (although she doesn’t say this), of the human as a modeler whose models are always trying to chase their own tails—or, perhaps, swallow them.


All of this, then, seems to give a kind of disciplinary imprimatur to “Anthropomorphics,” making it of use in further exploring questions of the human, humanism, post-humanism, trans-humanism, and so on. An important part of the Antikythera project is to learn to think “allocentrically”—so, for example, instead of seeking out “human-centered AI,” i.e., trying to fit planetary-scale computation to the Procrustean bed of our limited understanding of the latest iteration of the human in relation to the technological, we think outside of ourselves as co-evolving in ways we can only partially determine along with AIs. Antikythera’s Cognitive Infrastructures Lab is a collection of essays, several of which examine the implications of such allocentric (or “xenomorphic”) thinking and creation. (Kenji Siratori’s “xenopoetics” project is worth mentioning here as well, as he articulates the “human” with various layers of the biological and technological. I will no doubt be returning to this.) There may be some tension between allocentricity and xenomorphicity, on one side, and the tenacity of anthropomorphism, on the other, but if the center is “allo” and “xeno” from the beginning then center study and scenic thinking suggest a way forward here. What would be genuinely “other,” though? Postmodern thought broke itself over this question. To paraphrase Wittgenstein’s maxim about the speaking lion, if an AI were genuinely beyond our comprehension, how would we know it was beyond our comprehension rather than just incomprehensible? And if it is comprehensible, how “alter” is it? Machinery that, e.g., can alter our biological nature, or even physical laws, might be beyond our comprehension (but why assume intrinsically rather than temporarily beyond?), but its downstream effects wouldn’t be; and, anyway, this wouldn’t hold for LLMs or whatever the successor to LLMs will be, since they will be speaking some language, even if it’s one the LLMs create for themselves, and if it’s “language” we will be able to learn it. Ionescu’s cybernetically aligned paradox seems to provide an answer here, as it suggests a kind of gap whereby the AI is always ahead of us in one sense (extrapolating from our data) while being behind us in another sense (not extrapolating from the data we’ve generated from interacting with the latest extrapolation). But take a look here at what might be the most extreme imaginable attempt to deanthropomorphize the human itself, from Buckminster Fuller’s definition of man:

A self-balancing, 28-jointed adapter-base biped; an electro-chemical reduction-plant, integral with segregated stowages of special energy extracts in storage batteries, for subsequent actuation of thousands of hydraulic and pneumatic pumps, with motors attached; 62,000 miles of capillaries; millions of warning signal, railroad and conveyor systems; crushers and cranes (of which the arms are magnificent 23-jointed affairs with self-surfacing and lubricating systems, and a universally distributed telephone system needing no service for 70 years if well managed); the whole, extraordinarily complex mechanism guided with exquisite precision from a turret in which are located telescopic and microscopic self-registering and recording range finders, a spectroscope, et cetera, the turret control being closely allied with an air conditioning intake-and-exhaust, and a main fuel intake. (Fuller, Nine Chains to the Moon, 18)

Now, we could easily see this, paradoxically, as a kind of poem, while also imagining it could be lengthened practically infinitely (if we were to, say, break down all the components here into their components, etc.), and also note that it’s essentially a more complicated version of Thomas Hobbes’s observation that we could see human beings (individually or collectively) as analogous to a watch. I don’t know if it’s intentional, or how much of a sense of humor Fuller had, but this is also very funny. But all that aside, we can also see that Fuller describes “man” by analogy with a series of man-made devices and in that case is still anthropomorphizing, only one degree removed. To take the Ship of Theseus model, if you were to take a human and gradually replace every organ in the body with an artificial one (a project some of our transhumanist billionaires might be working on), we could argue, once the replacement was complete, over whether it was the “same” person, or even a human, but it would certainly be anthropomorphic.

We don’t need to be allocentric or xenomorphic, then; anthropomorphics will suffice, since the manufactured being, manufacturing the next iteration of itself or, more precisely, designing the scenic architecture to which the next iteration of itself will fit itself, will always generate results beyond what has been intended or could have been predicted. The other is always already us—maybe we can vindicate postmodern thought here as well. I have been focusing very intensely for a while now on articulating debt/credit with the juridical and succession, addressing the technological or scenic design in terms of data exchange, so as to integrate it into those other categories. But none of this excludes presenting anthropomorphics as participation in the universe as an inexhaustible series of models for potential iterations of the human. It encourages us to model ourselves explicitly on whatever metapersons our interactions with computation and the computed/computing universe generate—and, if Ionescu and Malafouris are right, those interactions will always be generating metapersons, whether we choose to name and engage with them or not. In terms of the broader model center study is currently proposing, we could think of this as the expansion of the nomos, for which we are indebted to “modernity,” and “Anglo” modernity in particular. Expanding the nomos involves reinitiating the originary, which means the sacred or iterable or commemorable or citational, while also providing the new response to resentment: not a redivision of what is (often by looting certain members of the group or finding some other group to loot—as was the method of Fuller’s “great pirates”) but a creation of more to be shared. This “bourgeois” desire is also sacred, because it is currently the best way we have of deferring the most terrible violence. If our current systems are failing to create a bigger pie so that “equitable” distribution becomes moot, we can at least ask why they are failing and think of ways to improve or replace them. For a while, the main task of politics has been to replace the irreducible and indivisible with the self-regenerating and expansive. Our political leaders have been failing miserably at this, but no one has suggested anything else that doesn’t involve restoring some status quo ante. We have been thinking too narrowly, in terms of reorganizing social orders internally, along “liberal democratic” lines, of course, within their already existing borders. But creating new political demarcations, both in the sense of traditional national boundaries and in the sense of pluralizing jurisdictions, has remained too incompatible with the existing post-WW2 arrangements. I have in mind Balaji’s “Network State” and other proposals (whatever happened to Moldbug/Yarvin’s old “patchwork” idea?) but, following my recent inquiries, I would start more “traditionally,” with extending, perhaps most fundamentally, models of insurance to their limits, making them transgenerational, even eternal. Insurance is creating models of the human—the human as extended, against shoals of risk and reward, across decades, even centuries, of the just barely but sufficiently predictable to lay down some markers of futurity; insurance is anthropomorphics, as we, in accord with Peirce’s magnificent prophecy, all become insurance companies.
Just think of all the requirements an insurance company meaning to cover you and your descendants over the next two centuries would lay down, in terms of provisions you would be expected to make for yourself, and therefore that many of us would be expected to make for each other. What would we have to build, what kinds of research into health care and longevity would we have to pursue, what designs of cities, environmental programs, etc.—maybe even a certain degree of fertility could be mandated to ensure that payments would be maintained. Over time insurance might morph into various kinds of “companions,” built around and into us in a further anthropomorphic increment, and into calls by various teams upon the most promising students across the educational systems. The juridical order would be organized around adjudicating claims, the disciplinary directed toward testing out futuristic models in physical, chemical, biological and social terms. And the pointmen governing us might be the uninsurable, because a suit brought against one of them might so engross the entire system as to swamp any outside spread capable of covering it (not to mention the possibility of regicidal violence, which could never be completely eliminated)—they would be pioneering the further anthropomorphisms that insurance companies then insist get standardized. And the uninsurable would in turn be a new model for the human.

Center study traces the commands of the center, from the demarcations of the nomos through the histories of succession, to the treatment and preparation of disputes that look to be, if they are not actually to be, decided by the center, to the creation, ordering, curation, preservation and delivery of data that will serve both to help decide and to proactively identify potential cases. Anthropomorphics guides scenic design in enlarging the nomos, in issuing the credit required for creating pedagogical platforms that require, for the discretization programmed into their founding, modes of payment, forgiveness and enforcement incommensurable with the currency in which the credit is issued. (You will always pay back in currency that is just like but not exactly the currency in which you contracted the debt.) These are two sides of the same coin and have a kind of Möbius strip character, or a relationship like that between the duck and the rabbit in the drawing that drew Wittgenstein’s attention. If you do “enough” center study you’ll find yourself anthropomorphizing, and vice versa. But there may be a division of labor here as well, between the researcher and the designer (who, of course, also morph into each other regularly). Let’s say that in gathering, preparing and studying data, you yourself become a source of data worth gathering, preparing and studying, and it is in participating in the creation and accumulation of that data that you become a designer; while in the course of designing and enlarging the nomos, problems of debt forgiveness and enforcement emerge in terms that require provisional commensuration, and so one must morph back into center study. In either case, though, the conversions can be handed off to others better suited to the task: one could set one’s data free into the wild for others to incorporate into design, or delegate the labeling of discrete increments of debt to the more scribally oriented.
