There’s no shortage of speculation about what generative AI portends for culture: Visions range from a “dead internet,” in which bots produce the majority of online content, to utopian redistribution scenarios, in which universal basic income grants humans unprecedented creative leisure. But for more than a decade—before many people had ever heard of a large language model (LLM)—artist Trevor Paglen has been making work about what generative AI is already doing to culture. His incisive new book, How to See Like a Machine: Images After AI, distills key insights from his practice to make the case that mainstream understanding of images remains stuck in an outdated paradigm.
The old paradigm is semiotic and human-centered: Our species treats images as “representations, signs, allegories, or metaphors” to interpret. The new paradigm—which doesn’t supplant the old one so much as add another, less immediately apparent, layer to it—is operational: “a universe of images made by machines for other machines,” whose goal is to shape reality rather than merely represent it. For Paglen, seeing like a machine entails recognizing how images serve as “activations”—“stimuli that trigger automated, preconscious, or affective responses”—within technical or cultural circuits. He believes this recognition moves the critical question away from “What does this image say?” and toward “What does this image do?”
Paglen acknowledges that the latter question is not new. Over the past half-century, media theorists such as Vilém Flusser and Paul Virilio, as well as artists such as Harun Farocki and Hito Steyerl, have perceptively addressed similar questions. Indeed, image activations “have existed throughout all known human history and across every culture,” as in the use of icons in premodern rituals. What’s new, according to Paglen, is the contemporary technological environment. He contends that in the past decade or so, “we have witnessed two great upheavals in the history of images and seeing,” naming “computer vision” and “generative AI.” The former “collapse[s] the visual field into a world of vectors and mathematical abstractions”; the latter uses those abstractions to manipulate “our relationship to reality itself.”
These developments, whose ethos Paglen calls “machine realism,” mean that machine-readable images can now do far more than earlier images could, and at vastly different scales. The video camera at the grocery store self-checkout kiosk automatically flags instances of suspected shoplifting. The Samsara navigation system, a suite of road- and driver-facing cameras installed inside commercial trucks, punitively scrutinizes drivers for safety violations. The ImageNet database, used as the “default training set for numerous [AI] models,” creates the reductive taxonomies underpinning facial recognition technologies. These kinds of machine vision systems normalize surveillance in service of capital; the term “machine realism” is a nod to the late philosopher Mark Fisher’s concept of “capitalist realism,” a belief system that limits our ability to imagine alternatives to capitalism itself.
Such forms of surveillance either do not “require a human in the analytic loop” or require a human only at select moments, as with Microscan’s imaging system, which is designed to automate most packaging, shipping, and logistics processes for the electronics and pharmaceutical industries. What’s more, the advent of “machine-to-machine seeing” means that the digital images humans do see are always looking back at us and adapting accordingly. On social media, for example, “the platform measures our dwell time, shares, comments, even biometric responses, and uses that information to refine its algorithmic targeting.” This responsive aspect of machine-readable images makes surveillance not only more pervasive but also more effective, enabling machine systems to extract value from human users at greater scale and with greater precision than previously possible.
MOST OF HOW TO SEE LIKE A MACHINE repackages Paglen’s previous articles and talks, which themselves build on his art practice. The two chapters on machine realism, for instance, reference his and Kate Crawford’s viral 2019 project ImageNet Roulette, which invited users to upload images of themselves that a classifier trained on ImageNet would then label. Paglen and Crawford themselves were mislabeled as a “microeconomist” and a “newsreader,” respectively. Through the project, hordes of people were confronted with the biases of facial recognition technology for the first time. The book’s longest chapter, “Society of the Psyop,” pulls from Paglen’s best-known writings, originally published as a series of three e-flux articles in 2024, and draws on his artistic research into UFOs, psyops, and magic to show how these fringe-seeming perceptual phenomena inform today’s media environment.
The story Paglen tells about the mainstreaming of “cognitive warfare”—“using technology to alter the cognition of human targets, who are often unaware of any such attempt”—turns on the phenomenological quirks of human perception. He distinguishes between “stage magic,” which holds that “reality is relatively stable but our perceptions of it are glitchy,” and “magick,” which holds that “perception and reality cannot be disentangled.” “By altering our perceptions,” he explains, “we can effectively alter reality itself.” Paglen traces magickal attempts to mold reality back to midcentury covert operations such as the CIA’s infamous MKUltra program—the one exploring mind control through electroshock therapy, hypnosis, and LSD. But the argument’s force lies in his suggestion that those previously isolated psychological experiments have become the modus operandi of networked culture. “Today’s psyops,” he concludes, “are cheap, scalable, automated, and widely deployable with built-in real-time feedback mechanisms.” It’s a form of mind control involving machines, and it’s become so normalized we often struggle to recognize it as such.
The book’s newest chapter, “Archives of the Future,” caps this research arc and brings the conversation up to date. Here, Paglen contends that, post-AI, all images have the quasi-indexical status of UFO photographs, meaning that we now live in “a media environment where the visual codes of truth persist, but any causal link that once underwrote them has vanished.” He likens this ambiguity—where images appear at once “true” and “false”—to superposition, the quantum-physics concept in which a system exists in multiple states at once until it is measured. Under such circumstances, the factual status of an image’s referent is functionally irrelevant; instead, the viewer’s preexisting beliefs determine whether they choose to perceive a blip in the sky as evidence of extraterrestrial life.
Left to right, Trevor Paglen: UNKNOWN #87458 (Unclassified object near The Northern Coalsack), UNKNOWN #90007 (Classified object near Dreyer’s Nebula), and UNKNOWN #85237 (Unclassified object near The Eastern Veil), all 2023.
Courtesy Pace Gallery, New York/©Trevor Paglen
PAGLEN’S IDEAS ARE smart and suggestive, with the added virtue of being expressed in prose so clear it makes the opacity of other theoretical writing feel like a psyop. This lucidity not only makes his work readable but also staves off the perception that discourse about UFOs and the CIA must be riddled with conspiratorial paranoia. Yet the book medium, which favors the expository side of Paglen’s practice, can make his artworks feel subordinate to his research. That’s also in part a function of Paglen’s practice itself, which has long been critiqued for its didactic bent. How to See Like a Machine thus implicitly raises Paglen’s critical question—What does this image do?—about his own artistic output.
His artworks often provide the impetus for, and serve as illustrations of, his formidable research. Paglen’s 2023 Pace exhibition, “You’ve Just Been F*cked by PSYOPS,” for instance, contained a video interview with Richard Doty—who fabricated UFO folklore while working for the US Air Force Office of Special Investigations—that explored ideas about UFOlogy discussed in the “Society of the Psyop” chapter. While the video enables viewers to judge whether they consider Doty a reliable narrator, its content-level takeaways could easily have been—and, not long after, were—delivered in print media. This strand of Paglen’s art functions as a stepping stone to his larger theoretical arguments, as evidenced by the abridged and unabridged guides he wrote to accompany the exhibition and the subsequent e-flux articles and book chapter.
Other works by Paglen physically instantiate his arguments beyond what writing alone can do. The Pace exhibition also included photographs of unidentified objects orbiting Earth and kite-like sculptures resembling military satellites designed to confuse enemy radar. These artworks do require some background context to be fully understood, but once you have that context, the art not only provides test cases for the perceptual phenomena Paglen describes in his writing but also testifies to humans’ capacity to exercise our own agency and to watch the machines and the governments that are watching us.
What, ultimately, does How to See Like a Machine do? Like an image in our current AI age, the book sits with its potential audience in superposition. For Paglen enthusiasts, it traces the arc of his recent theoretical ideas, though most of his fans will already be familiar with those ideas. For Paglen newcomers, on the other hand, it makes a superb case for moving beyond a blinkered, semiotic media paradigm. The problem is that most discourse gets stuck in its own echo chamber, along the lines of what Doty calls “Magruder’s Principle”: It’s easier to reinforce a preexisting belief than to change it. Despite its convincing research, Paglen’s book may well prove its own point: that when it comes to facts, people see what they want to see.
The media literacy gap, after all, is in part Paglen’s subject. After undergoing a media paradigm shift around the turn of the century, from broadcast and print media to Internet-based media, society appears to be undergoing another change, from Internet-based to AI-based media. The hosts of the New Models podcast (Caroline Busta and Julian Wadsworth, aka Lil Internet), whom Paglen thanks in the book’s acknowledgments, call these three media eras “linear,” “networked,” and “neural,” respectively. They argue that not only are most people intellectually and emotionally unprepared for the emergent shift (from networked to neural media), but many of us haven’t even processed the previous one (from linear to networked media).
Paglen’s ideas, collected between two covers, carve a clean, linear path through our messy neural era, engaging in the kind of big-picture sense-making that books remain well suited to do, even as AI encroaches on this terrain. The stakes of those ideas are profound: As machines optimize their feedback loops, often for exploitative purposes, they have the “capacity to reshape the cognitive and emotional landscape through which reality is experienced [by humans].” Most artworks and books can’t single-handedly reshape that landscape, but at their best, they enable humans to see it with greater clarity—and to model the kind of agency and imagination needed to alter it. Oddly enough, even as Paglen demonstrates how machine vision is shifting our media paradigms, he also demonstrates how human vision can help us navigate the shifts.