Tagging and Tracking the Human
Dennis Bouvard (@dennisbouvard)
August 14, 2025
A central element of the “linguistic turn,” at least in its poststructuralist and postmodernist form, was the critique of “humanism,” taken to be the guiding ideology of the West in the post-WWII era and perhaps beyond. “Humanism” might be roughly equated with “liberalism,” or, in a more post-War context, the “Judeo-Christian”—attempts to retro-engineer an essential identity that legitimated “liberal democracy” (also, of course, “humanist”) in philosophical, aesthetic and civilizational terms. “Humanism” was also taken to do the ideological work of distinguishing “West” from “East,” the “developed” from the “Third” world, etc. Humanism was attractive (so much so that the USSR had its own version) because it purported to confer dignity upon the person, the individual, in a way that comported with human rights, rule of law and related US-order sanctioned concepts. Morally, it was associated with attributing a high level of intentionality and responsibility to the individual (itself seen as legitimating “free market” economics that attributed success to individual efforts and capacities); aesthetically, with forms of realism presenting rich portraits of individuals distinguishing themselves from constraining social settings; and philosophically, with realism or representationalism and with phenomenology and its continuation in existentialism (i.e., with strong notions of intentionality, choice and responsibility). Heidegger advanced, I think, the first explicit critique of humanism, and Althusser approached the question in especially polemical terms, perhaps at least in part as an oblique attack on Stalinism; then, the work of thinkers like Derrida, Foucault, Barthes, Deleuze, Lacan and many others, whatever their differences, amounted to a series of attacks on humanism in the name of language, or the unconscious, or desire, or some other “structuring,” “constitutive” order beyond the control or even awareness of any of us—except, perhaps, from the elevated theoretical perspective provided by the theory itself, which never, though, even in Althusser’s Marxism, led to any compelling notion of agency (in fact, one important reaction to poststructuralism, cultural studies, was initiated and sustained as a search for these missing agencies among the various subalterns and identity groups).
The emergence of AI returns us to these questions in a new way, as many of the critiques of AI can themselves be critiqued as returns to earlier, discredited notions of human agency, uniqueness, cognitive specificity, ethical or creative distinctiveness, and so on—although, of course, this all depends upon whether one thinks those earlier critiques were, indeed, discrediting. Another element of the critique of humanism is that it is a vehicle and declaration of secularism—if the human is the center, or the measure, then God has been removed from the picture—and various political claims made for religious institutions are reverting to these questions as well, more or less explicitly. I can’t recall specific assertions to this effect, but I am virtually certain that Eric Gans would consider GA a humanism, and this is part of the broader range of issues over which I left GA and started Center Study. In fact, the questions raised by humanism are similar to those raised by secularism—the originary hypothesis does, indeed, cut the divine in any traditional sense out of the equation and in that sense would be the most “humanist” mode of thinking imaginable; but in place of the divine or holy we have the center, which can never quite be “human” yet governs all. There is a relation to the center, not a human essence present in each individual. It is on these grounds that I aligned myself with the poststructuralist critique against what I would see as Gans’s “existentialism,” a label confirmed by his recent reliance upon Sartre’s “néant” for his descriptions of the effect of the originary, deferring representation. But this is less important than bringing the question into closer focus, because I think the question of the human has changed from that of whether the human or some structure is the center to that of how to think the specificity of the human against, on the one side, the animal and, on the other side, the technological. Biology and ethology continually pressure the boundary between the human and the animal, while media and technology studies erode that between the human and the technical by drawing attention to the various ways in which we are “always already” technical and, more recently, even “computational.”
I will take a paradoxical approach to the question, trying to join in the boundary erosion on both sides but precisely as a way of continuing to isolate, perhaps in ever new ways, the impossibility of erasing those boundaries. I of course see the originary hypothesis as constitutive of the human and the distinction of the human, but if the originary scene was computational (and I think some of Gans’s recent “ontological” Chronicles could be taken in that direction) then that specificity would be swallowed up within the broader “computationality” of reality. And that is the context I want to focus on here, because it is becoming clear that computationality is going to be the prevailing concept in the thinking of “planetary scale computation” currently organized around Antikythera, which seems to me the most powerful direction the thinking of the Stack is taking now. And I want more precisely to address the argument by Blaise Aguera y Arcas, very much countering previous iterations of anti- or post-humanism, equating intelligence and, ultimately, life with prediction. It’s easy to see how one can claim that every organism is making predictions about its environment, with those predictions then getting tested, and with those organisms passing more tests “graduating,” i.e., surviving and passing on their genetic material and so on. And one could fit the originary gesture within that frame: those on the scene are “predicting” that converting the grasping into a gesture will lead others to do the same. But “prediction” is a very declarative modality—you can’t predict without saying something like “X will happen,” which in turn relies upon some prior agreement regarding what we are to label as “X,” and what it means for something, and then for “X” specifically, to “happen.” We’re working with measures, bounds, fits, likes, sames, etc., already in place. We can project this speech act onto other organisms but only at the cost of obscuring something of what is happening—a bacterium is not stating that the chemical composition of the surrounding liquid environment will alter so as to take on certain proportions and then establishing conditions under which the hypothesis is tested (of course I know that no one is claiming it does this but, then, why refer to what it is doing as “prediction”—so that, of course, one could undergird it with “computations”—why not “anticipation,” or “approximation,” for example?)—the organism is, rather, inclining and maybe “upclining” in certain ways in accord with inclines and upclines in its surroundings. Each incline or upcline “indents” the surroundings, laying tracks favoring some future movements over others. By marking the environment through these tracks it moves upon, it takes in “information” (feelings of new indentations allowing new passages), so that in a sense future tracks are laid upon previous ones in a way that we could then sum up as prediction or computation but that is really more of a fitting. (I’ll also mention that “prediction” as a definition of intelligence is enormously reductive and excludes from intelligence all kinds of states and stances that might very well be considered intelligent. If, for example, I like, for the moment, a particular image as the wallpaper for my laptop screen, I don’t think I’m predicting anything [except maybe a good feeling the next time I look at it?—but does that count?], but I also wouldn’t exclude the possibility that there might be intelligence in the choice. No doubt other examples might come to mind.)
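To make the contrast concrete, here is a minimal sketch of my own (not Aguera y Arcas’s; the function names and numbers are invented for illustration) of the difference between an agent that “predicts” in the declarative sense, asserting that a pre-labeled “X” will happen next, and an organism-like “fitter” that simply inclines along a gradient without any labels or test in place.

```python
# Minimal, illustrative sketch only: a "declarative" predictor vs. a gradient "fitter".
# All names and values are hypothetical, chosen to mark the contrast drawn above.

def declarative_predictor(history: list[str]) -> str:
    """Predicts by asserting that a pre-labeled event will happen next.
    Note everything it presupposes: a fixed label set, a notion of "next,"
    and an agreed test for whether the prediction "came true"."""
    labels = ("X", "Y")                      # prior agreement on what counts as an event
    counts = {label: history.count(label) for label in labels}
    return max(counts, key=counts.get)       # "X will happen" (a testable statement)

def gradient_fitter(position: float, concentration) -> float:
    """Inclines toward higher concentration without labeling or testing anything:
    each move "indents" its situation and favors some future moves over others."""
    step = 0.1
    here, ahead = concentration(position), concentration(position + step)
    return position + step if ahead > here else position - step

if __name__ == "__main__":
    print(declarative_predictor(["X", "Y", "X", "X"]))   # -> "X"
    gradient = lambda x: -(x - 3.0) ** 2                 # peak at x = 3.0
    pos = 0.0
    for _ in range(50):
        pos = gradient_fitter(pos, gradient)
    print(round(pos, 1))                                 # settles near 3.0
```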
(I’m going to make a brief mention here of something I anticipate returning to: I am seeing in the thinking of planetary scale computation references to mimetic theory—only Girard, so far—that imply the possibility of important overlaps with center study.)
I think that this little “ontological” disagreement is in turn helpful in charting a course through the question of the human. Marking the environment is, in retrospect, “tagging” it, i.e., marking it as part of a mode of labeling and categorization, and that tagging creates the tracks along which the organism will move—but the organism doesn’t tag in order to track. I am going to say that every organism marks or tags its environment or surroundings and that these tags mediate its relations with predators, prey, and various possible dangers or aids, but that no animal tags in order to track—animals follow or evade in accord with the marks made by other animals, but they are not themselves “labeling,” or, in terms of the originary hypothesis, ostensively naming features of their setting. This is another way of saying that animals don’t have a scene. But humans do tag in order to track, because they can refer, ostensively, through joint attention, and therefore follow the movements of pre-labeled entities. How about machines, though, and, in particular, artificial intelligence? If machine intelligence tags in order to track, then the human/technical boundary collapses, at least according to the construction I’m proposing here. Now, of course, in a sense not only do machines do this, but this is one of the most prominent and contested (or resented) aspects of current deployments of machine intelligence—its use for surveillance purposes. Machine intelligence, in this regard, allows for far more intricate and complex systems of tagging to track than humans alone or, more precisely, humans in previous technological configurations, could ever have done.
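As a concrete illustration of “tagging in order to track” in the machine-intelligence sense, here is a minimal sketch of a track-by-detection loop: detections are assigned labels (tags), and those labels are then used to follow the same entity across successive observations. This is my own toy construction, not a description of any particular surveillance system; the nearest-neighbor matching rule and the threshold are invented for illustration.

```python
# Toy "tag in order to track" loop: label detections, then follow the labels over time.
# Purely illustrative; the matching rule and threshold are arbitrary.
from dataclasses import dataclass, field

@dataclass
class Tracker:
    max_distance: float = 2.0                     # how far a tagged entity may move per step
    tracks: dict[int, tuple[float, float]] = field(default_factory=dict)
    next_tag: int = 0

    def update(self, detections: list[tuple[float, float]]) -> dict[int, tuple[float, float]]:
        """Tag new detections and track existing tags by nearest-neighbor matching."""
        assigned: dict[int, tuple[float, float]] = {}
        unmatched = list(detections)
        for tag, (x, y) in self.tracks.items():
            if not unmatched:
                break
            # follow the previously tagged entity to its closest current detection
            nearest = min(unmatched, key=lambda d: (d[0] - x) ** 2 + (d[1] - y) ** 2)
            if (nearest[0] - x) ** 2 + (nearest[1] - y) ** 2 <= self.max_distance ** 2:
                assigned[tag] = nearest
                unmatched.remove(nearest)
        for detection in unmatched:               # anything unexplained gets a fresh tag
            assigned[self.next_tag] = detection
            self.next_tag += 1
        self.tracks = assigned
        return assigned

if __name__ == "__main__":
    tracker = Tracker()
    print(tracker.update([(0.0, 0.0), (5.0, 5.0)]))   # {0: (0.0, 0.0), 1: (5.0, 5.0)}
    print(tracker.update([(0.5, 0.2), (5.5, 4.8)]))   # the same tags follow the moved entities
```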
I think there is a critical and, accepting the risk of all such claims, irreducible difference between human and machine intelligence here. When we turn tagging and tracking power upon humans, we penetrate more deeply into scenic articulations, both horizontally and vertically and, of course, temporally. If I label a particular gesture that seems to me to be increasingly common in certain circumstances (taking into account all the work—for which machine intelligence can be very helpful—needed to identify that particular gesture, distinguishing it or, rather, constructing spaces upon which we can jointly distinguish that gesture from “similar” ones), we will find previous gestures, or traditions of gesturing, or conditions calling forth certain “families” of gestures, and this history and ethnography of gestures will in turn have reference to specific juridical institutions and models of indebtedness that have their own histories of transformation and that have been registered and recorded in various ways in media with their own histories, and so on. If the AI turns its tagging and tracking power upon itself, what does it turn up? And of what interest would that be to the AI? There’s the data upon which the machine intelligence has been trained and the algorithm, or the supervised or unsupervised learning process, through which it does its next-token prediction, but does any of that tell us anything that wouldn’t just repeat the content that has been generated? If you gather this data and train the program on it in this way, you get these outputs. The AI could tell us that, but that doesn’t tell us, or, for that matter, the AI, the next thing to do with the AI, whereas the scenic inexhaustibility of our tagging and tracking of the human gesture carries untold implications of where and how we might make our marks, lay our tracks, incline and upcline.
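The point that “if you gather this data and train the program on it in this way, you get these outputs” can be made concrete with a deliberately tiny next-token predictor. This is a toy bigram counter of my own devising, standing in for the vastly larger training processes at issue; the corpus and the selection rule are invented for illustration. Its “self-report” is exhausted by the triplet of data, procedure, and outputs.

```python
# Toy next-token predictor: count which token follows which in a corpus,
# then emit the most frequent follower. A stand-in, not a real training pipeline.
from collections import defaultdict

def train_bigram(corpus: list[str]) -> dict[str, dict[str, int]]:
    """'Training': tally, for each token, how often each other token follows it."""
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token(counts: dict[str, dict[str, int]], token: str) -> str:
    """'Inference': return the most frequent follower of the given token."""
    followers = counts.get(token)
    if not followers:
        return "<end>"
    return max(followers, key=followers.get)

if __name__ == "__main__":
    corpus = "we tag the center and we track each other through the center".split()
    model = train_bigram(corpus)
    # Given this data and this procedure, these are the outputs, and that is all
    # the model can report about itself.
    print(next_token(model, "we"))    # -> "tag" ("track" ties, but "tag" was counted first)
    print(next_token(model, "the"))   # -> "center"
```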
For the originary hypothesis to participate in history, it must be open to being treated to endless redescriptions, taking on vocabularies drawn from new social and technological settings—indeed, a “genealogical” analysis of the originary hypothesis, formulated at a particular moment by a particular thinker and propagated by specific people situated in specific institutions, organized in deliberate ways, would show that it has already been thoroughly marked or tagged by actors in particular surroundings; meanwhile, an at least partial “proof” of the hypothesis would be its commensurability with any theoretical vocabulary. Tagging and tracking are terms I’m largely taking from what I take to be some brief and incomplete speculations by Jacques Derrida on a possible way of thinking the origin of language, which I have discussed more than once previously—and which seem to be resonant with the kind of intellectual work required by the development of AI and therefore likely to become especially important. Derrida (I’d have to find the specific references in Of Grammatology), in his speculations on an “arche-writing” that would displace the phonocentric notion of writing as a representation of speech, speaks about the creation of trails and landmarks as such an arche-writing, and so I wondered whether he might have hypothesized that the first sign might have been a result of members of the group following each other’s trails and eventually creating them so as to be followed. Here, we wouldn’t have a scene or event, but rather a gradual making deliberate of the “traces” of activities carried out with other goals in mind. I rejected this hypothesis because there’s no way of accounting for when a “reader” of the track would become a “writer” for the subsequent “reader,” but I still preserved it as a secondary hypothesis for spreading the sign outside of its initial ritual context, when the reciprocal tagging and tracking in the deferral of violence gets transferred to non-confrontational cooperative activity.
On the originary scene, we tag the central object and then we track each other through it, and we do this because the correlation between our tagging others and our being tracked by them becomes both compelling and overwhelming, forcing and enabling this shared tagging. We could say that the originary scene has been completed when the tagging and tracking system has been installed. This reciprocal tagging and tracking describes the advance and withdrawal involved on the originary scene, perhaps, to keep Eric Jacobus’s ROBA hypothesis in the equation, under “weaponized” conditions, with participants on the scene bringing cutting “utensils” that now appear as weapons and are then converted into something like measuring implements for dividing the meal. Tagging and tracking tracks with governance. There is equality in relation to the center on the originary scene because we are all trying to successively tag the movements of the other (under our ritually enclosed scrutiny) in accord with the commands of the center. “History,” in this case, is the successive series of tag and track iterations following the initial equality in tagging and tracking through the center. Those who come to tag and track the community in the name of the center introduce an asymmetry wherein the oscillation between transparency and opacity on the part of the center sets the terms for the same oscillation on the part of the periphery. But those on the periphery can operate this same asymmetry in relation to their tagging and tracking of each other, thereby training each other in the training of the center. The center in turn calibrates its own distribution of opacity and transparency—tagging certain features obscures others, and no tagging and tracking could be exhaustive, because each iteration of tag/track produces new features to tag and new possible pathways for movement. Those on the periphery likewise, under unequal conditions, calibrate their oscillation or fluctuation between opacity and transparency. We might say there’s an endgame here in the form of an exchange: we concede a certain opacity of the center in exchange for it allowing us to be opaque in ways that enable us to be transparent in the ways the center needs us to be transparent in order to balance transparency and opacity so as to… There’s no “grand narrative” (like those of the humanists—which, in this context, is nearly synonymous with “Whiggism”) but, rather, an ongoing reciprocity of tagging and tracking trying to “optimize” the degree and mode of transparency/opacity on both sides. It is the case, though, that tracks previously blazed or laid are never irrecoverably lost, so every attempt at optimization is drawing upon and modifying previous provisional optimizations. I’m not sure whether insisting, through all this, on a constitutive asymmetry between center and periphery makes me a humanist or an anti- or post-humanist, but maybe that particular mode of tagging and tracking can be relegated to the sidelines for now.