By the Machine for the Machine: Virilio’s Logistics of Perception in the Age of Generative AI


BY TRACY VALCOURT

In War and Cinema, first published in 1984, Paul Virilio defined the notion of a “logistics of perception,” a circulating system he considered essential to the development of modern warfare. Expanding on this, Virilio noted that these logistics included not only oil, ammunition and supplies, but also images.1 This observation marks the tactical value of information and images, which in turn points to other, seemingly opposing principles of warfare, whose “chief task,” says Virilio, “involves the elimination and appearance of facts, the continuation of what Kipling meant when he said: Truth is the first casualty of war.”2 To this end, Virilio (1994) predicted that images would become increasingly weaponized, so that in a mediated world “a war of images and sounds” would come to usurp a war “of objects and things,”3 while anticipating that synthetic images “created by the machine for the machine” would form part of this visual arsenal.4

Information and visual intelligence on the battlefield are often supplied by way of “instrumental” (Sekula) or “operational” (Farocki) images, such as satellite imagery. In early modern warfare, aerial photographs exemplified this status, with the First World War providing the first instance of their use for intelligence purposes.5 In this case, Allan Sekula explains, “The meaning of a photograph consisted of whatever it yielded to a rationalized act of ‘interpretation.’ As sources of military intelligence, these pictures carried an almost wholly denotative significance.”6 As components of a logistics of perception, aerial reconnaissance photos were heavily trafficked; their value “as cues for military action, depended on their ability to testify to a present state of affairs.”7

A logistics of perception can also take on public iterations that supplement discrete battlefield tactics, with today’s algorithmic platform dynamics motivating a nonstop circulation of content that foments disinformation ecosystems to strategic ends. For example, the Israeli air strikes on Tehran in June 2025 unleashed a torrent of disinformation created by both pro-Iranian and pro-Israeli camps (Fig. 1). The volume of AI-generated images disseminated, and the level of confusion they created, prompted certain analysts to recognize the event as the first to use generative AI at scale during a conflict.8 Meanwhile, a notorious early example that in some respects heralded the current moment is Colin Powell’s televised presentation to the U.N. Security Council in 2003, supported by a PowerPoint presentation featuring heavily annotated satellite images purporting to show that Iraq possessed weapons of mass destruction (Fig. 2). Through these now debunked satellite images, Powell took a generative stance on the logistics of perception, which involves “the orderly movement of information and images,”9 with the understanding that if “facts” could not be perceived, then they could be produced.

The timing of the presentation, shortly before the release of Google Maps in 2005, was essential to its persuasiveness: before the ubiquitous embrace of Google’s navigational suite, public aerial literacy was low. After first dissuading the public from attempting their own interpretation, given the supposed technicality of the images, Powell became the official narrator of a story whose intent was to bolster public support for a US invasion of Iraq. “The making of facts,” argue Eyal Weizman and Thomas Keenan, “depends on a delicate aesthetic balance, on new images made possible by new technologies, not only changing in front of our very eyes, but changing our very eyes—affecting the way that we can see and comprehend things.”10 By founding his argument on satellite images promoted as empirical evidence, Powell played upon the delicate aesthetic balance produced by novel images and technologies, taking advantage of the fact that a colloquial aerial vision was still coming into focus.

Over the years, Google Maps and subsequently Google Earth (launched in 2005), along with ubiquitous aerial technologies such as drones, played integral roles in motivating a paradigm shift in visual culture. This aerial moment, wherein the view from above became the new dominant perspective of world picturing, ruptured the enduring Western pictorial landscape convention governed by Renaissance linear perspective, whose mathematical logic depended on the horizon line. The effect was destabilizing. Fast forward approximately twenty years from Powell’s U.N. presentation to another technologically prompted paradigm shift, in which the majority of online images no longer hold a direct indexical relationship with the real world. The year 2022 marked a decisive moment for generative AI, when text-to-image generators such as Stable Diffusion, OpenAI’s DALL-E, and Midjourney exploded in popularity within the span of a few months. Since then, an estimated 15 billion AI images have been created, while a 2022 Europol report predicts that by 2026 up to 90 percent of online content may be synthetically generated, which would effectively collapse the information ecosystem.11 If, on a practical level, geolocation tools gave their users a newfound confidence that they would arrive at a given destination, repeat exposure to the outputs of generative AI seems poised to extinguish confidence as a whole.

Weizman and Keenan’s observation that novel technologies can “change our eyes” and affect the way that we see and comprehend once again becomes a defining principle in the early GenAI era. In a short amount of time, GenAI has altered how we see, destabilizing the human relationship with visual culture at large by injecting the act of looking with a default suspicion regarding authenticity. Certainly, graphics-editing software like Photoshop had already weakened the concept of authenticity, and in some ways paved the way for more thoroughgoing deceit, but the doubt it cast was more superficial than fundamental. From a behavioural standpoint, it is significant that synthetic image generators have introduced another process of cognitive functioning, a new lens, to the act of looking at imagery. This novel lens, which automates the question “is it real?”, replaces, demotes or influences other potential emotional or intellectual responses to imagery across categories, including art and evidence.

AI-generated imagery participates in Virilio’s logistics of perception in powerful ways by weaponizing the tension between plausibility and doubt to launch an attack on the very notion of truth (“the first casualty of war”). While some of the danger of synthetic images lies in their capacity to dupe viewers, Eliot Higgins argues that the real threat is their capacity to encourage people to deny real images.12 Known as the “liar’s dividend,” this operationalized doubt is integral to the cultivation of disinformation ecosystems, which are known to expand and intensify in times of conflict or political unrest. Part of synthetic imagery’s force is rooted in the general acceptance of AI-generated images as equivalent to photographs and film, when materially and historically they are distinct. As media scholar Andreas Ervik reminds us, “AI images might seem like concrete solids; they may resemble photographs or some other products of traditional image production. However, they are, in fact, localized zones of coherence, drawn from a flux of potential intensities from a field of noise.”13

This is to say that while novel technologies and images can prompt new ways of seeing and understanding, established conventions and relationships with traditional media such as photography and cinema prevent the transformations from being immediate or totalizing. The epistemological exchange that allows AI-generated imagery, as a product of computer science, to operate as a proxy artifact of photographic history endows it with an artificial relationship between truth and representation that is subtly leveraged in disinformation campaigns. Even when synthetic images are of the “slop” variety and clearly do not achieve photo-realism, platform users rise to their defense, as in the case of the AI-generated image of a little girl in a dinghy holding a puppy, supposedly a victim of Hurricane Helene (Fig. 3). Disseminated by Amy Kremer, a Republican National Committee member representing Georgia, the image went viral across several social media platforms in early October 2024. After finally acknowledging that the image was synthetic, Kremer posted on X that she “didn’t know where the photo came from,” nor did she care, and left it on her account because it was “emblematic of the trauma and pain that people were living through right now.”14 That Kremer refers to the AI-generated image as a photo points to the above-mentioned slippage between categories, which seems to further embolden the politician’s position that the emblematic (or emotional) sits equivalent to the evidentiary, even in times of crisis when facts take on life-or-death urgency.

Despite the instantaneity of their generation, a condition adjacent to urgency, Roland Meyer reminds us that the “reality” in synthetic images is “very much the product of the past rather than the present.”15 AI synthesis relies on “interpolating data from the past to produce an image of the present or even the future. AI image synthesis is a backward prediction: it makes plausible guesses on what could have been.”16 Thinking back to operational images like the wartime aerial photographs so urgently endowed with immediacy that Sekula proclaimed that “the photographic sense of ‘having been there,’ identified by Roland Barthes, must submit to the demands of ‘being there,’”17 the synthetic image is incapable of any of these operations. The synthetic image (or its maker) was never “there,” and hence is, among other things, a temporally limited and materially impoverished product detached from reality. Its urgency is replaced by a representational laziness – an audacious “being here.”

A tweet by Shayan Sardarizadeh showing an AI-generated image of a US B-2 bomber that has crashed or been shot down in Iran. The tweet notes that the US government reports that all B-2 bombers used to bomb Iranian nuclear sites returned to the US.

Fig. 1) Screenshot taken from the X account of BBC fact-checker Shayan Sardarizadeh on June 22, 2025. The widely circulated AI-generated image, part of the disinformation campaign surrounding the missile exchange between Israel and Iran, purports to show a US B-2 bomber crashed or shot down in Iran. According to BBC Verify, the three most popular AI-generated videos related to the event collectively amassed over 100 million views across multiple platforms; the disinformation wave also included unrelated footage from other conflicts, recycled videos from earlier airstrikes, and video game clips. https://www.bbc.com/news/articles/c0k78715enxo

An image from Colin Powell's presentation to the U.N. Security Council. Text at the top reads "Sanitization of Ammunition depot at Taji", with two black and white images below, claiming to show a chemical munitions bunker pre and post "sanitization"

Fig. 2) A slide from Colin Powell’s PowerPoint presentation featuring annotated satellite images, presented before the U.N. Security Council on Feb. 5, 2003.


Screenshot of Amy Kremer's tweet showing an AI-generated image of a crying child wearing a lifejacket in a green dinghy, holding a puppy.
Kremer's response to being informed that the image she posted was AI generated, in which she claims that the image is nevertheless "emblematic" of the trauma and pain of flood victims.

Fig. 3) Screenshot of Amy Kremer’s X post of the AI-generated image on Oct. 3, 2024, along with her response after Community Notes pointed out that the image was AI generated, in which she legitimizes the image for its emblematic representation of suffering. The post remains on Kremer’s X account to this day (July 10, 2025) and has received 3.1 million views.

Source: https://x.com/AmyKremer/status/1841938191454240782?lang=en


Endnotes:  

  1. Paul Virilio, The Vision Machine, trans. Julie Rose (Bloomington: Indiana University Press, 1994), 15.

  2. Ibid., 66.

  3. Ibid., 70.

  4. Ibid., 60.

  5. Allan Sekula, “The Instrumental Image: Steichen at War,” Artforum 14, no. 4 (December 1975). https://www.artforum.com/features/the-instrumental-image-steichen-at-war-209590/

  6. Ibid.

  7. Ibid.

  8. Ryan Murphy, Olga Robinson, and Shayan Sardarizadeh, “Israel-Iran Conflict Unleashes Wave of AI Disinformation,” BBC Verify, June 20, 2025. https://www.bbc.com/news/articles/c0k78715enxo.

  9. Antoine Bousquet, The Eye of War: Military Perception from the Telescope to the Drone (Minneapolis: University of Minnesota Press, 2018), 50.

  10. Thomas Keenan and Eyal Weizman, Mengele’s Skull: The Advent of a Forensic Aesthetics (Berlin: Sternberg Press/Portikus, 2012), 24.

  11. Everypixel Journal, “People Are Creating an Average of 34 Million Images Per Day: Statistics for 2024,” https://journal.everypixel.com/ai-image-statistics; Europol, “Facing Reality? Law Enforcement and the Challenge of Deepfakes: An Observatory Report from the Europol Innovation Lab” (Publications Office of the European Union, 2022).

  12. Eliot Higgins (@eliothiggins.bsky.social), “I have often warned the real risk of AI images is giving people the ability to deny the real images more than making them believe things that aren’t true.” Bluesky, Sept. 3, 2025, 7:17 a.m. https://bsky.app/profile/eliothiggins.bsky.social/post/3lxwizge3vk2y.

  13. Andreas Ervik, “Generative AI and the Collective Imaginary: The Technology-Guided Social Imagination in AI-Imagenesis,” IMAGE 37, no. 1 (2023): 42–57, 46.

  14. Amy Kremer (@AmyKremer), “Y’all I have no idea where this photo came from and honestly, it doesn’t matter. It is seared into my mind forever. There are people going through much worse than in this pic. So I am leaving it because it is emblematic of the trauma and pain that people were living through right now.” X, Oct. 3, 2024. https://x.com/AmyKremer/status/1841938191454240782?lang=en.

  15. Roland Meyer, “Platform Realism: AI Image Synthesis and the Rise of the Generic Image,” Transbordeur 9 (2025): 11.

  16. Ibid.

  17. Allan Sekula, “The Instrumental Image: Steichen at War,” Artforum 14, no. 4 (December 1975). https://www.artforum.com/features/the-instrumental-image-steichen-at-war-209590/.


Tracy Valcourt is an independent researcher and adjunct professor in the Department of Art History at Concordia University in Montreal where she teaches courses at the intersection of visual culture and surveillance studies. Her article “Rethinking Aerial Orientalism: Picturing Deserts from Above” was recently published in the International Journal of Middle East Studies.

