The Paradoxes of Digital Photography

Digital Photography is more real

Words by

Lev Manovich

Jurrasic Park III

Computerized design systems that flawlessly combine real photographed objects and objects synthesized by the computer. Satellites that can photograph the license plate of your car, and read the time on your watch. "Smart" weapons that recognize and follow their targets in effortless pursuit -- the kind of new, post-modern, post-industrial dance to which we were all exposed during the televised Gulf war. New medical imaging technologies that map every organ and function of the body. Online electronic libraries that enable any designer to acquire not only millions of photographs digitally stored but also dozens of styles which can be automatically applied by a computer to any image.

Published in: Photography after Photography. Exhibition catalog. Germany, 1995.   Digital Revolution?  

All of these and many other recently emerged technologies of image-making, image manipulation, and vision  -  depend on digital computers. All of them, as a whole, allow photographs to perform new, unprecedented, and still poorly understood functions. All of them radically change what a photograph is. Indeed, digital photographs function in an entirely different way from traditional -- lens and film based -- photographs. For instance, images are obtained and displayed by sequential scanning; they exist as mathematical data which can be displayed in a variety of modes -- sacrificing color, spatial or temporal resolution.

Image processing techniques make us realize that any photograph contains more information than can be seen with the human eye. Techniques of 3D computer graphics make possible the synthesis of photo realistic images -- yet, this realism is always partial, since these techniques do not permit the synthesis of any arbitrary scene.[1] Digital photographs function in an entirely different way from traditional photographs. Or do they? Shall we accept that digital imaging represents a radical rupture with photography? Is an image, mediated by computer and electronic technology, radically different from an image obtained through a photographic lens and embodied in film? If we describe film-based images using such categories as depth of field, zoom, a shot or montage, what categories should be used to describe digital images? Shall the phenomenon of digital imaging force us to rethink such fundamental concepts as realism or representation?

In this essay I will refrain from taking an extreme position of either fully accepting or fully denying the idea of a digital imaging revolution. Rather, I will present the logic of the digital image as paradoxical; radically breaking with older modes of visual representation while at the same time reinforcing these modes. I will demonstrate this paradoxical logic by examining two questions: alleged physical differences between digital and film-based representation of photographs and the notion of realism in computer generated synthetic photography.

The logic of the digital photograph is one of historical continuity and discontinuity. The digital image tears apart the net of semiotic codes, modes of display, and patterns of spectatorship in modern visual culture -- and, at the same time, weaves this net even stronger. The digital image annihilates photography while solidifying, glorifying and immortalizing the photographic. In short, this logic is that of photography after photography.    

Sign up now

Join for access to all issues, articles and open calls
Already have an account? Sign in

Payment Failed

Hey there. We tried to charge your card but, something went wrong. Please update your payment method below to continue reading Artdoc Magazine.
Update Payment Method
Have a question? Contact Support

2. Digital Photography Does Not Exist  

It is easiest to see how digital (r)evolution solidifies (rather than destroys) certain aspects of modern visual culture -- the culture synonymous with the photographic image -- by considering not photography itself but a related film- based medium -- cinema. New digital technologies promise to radically reconfigure the basic material components (lens, camera, lighting, film) and the basic techniques (the separation of production and post-production, special effects, the use of human actors and non-human props) of the cinematic apparatus as it has existed for decades. The film camera is increasingly supplemented by the virtual camera of computer graphics which is used to simulate sets and even actors (as in "Terminator 2" and "Jurassic Park"). Traditional film editing and optical printing are being replaced by digital editing and image processing which blur the lines between production and post-production, between shooting and editing. At the same time, while the basic technology of filmmaking is about to disappear being replaced by new digital technologies, cinematic codes find new roles in the digital visual culture. New forms of entertainment based on digital media and even the basic interface between a human and a computer are being increasingly modeled on the metaphors of movie making and movie viewing. With QuickTime technology, built into every Macintosh sold today, the user makes and edits digital "movies" using software packages whose very names (such as Director and Premiere) make a direct reference to cinema. Computer games are also increasingly constructed on the metaphor of a movie, featuring realistic sets and characters, complex camera angles, dissolves, and other codes of traditional filmmaking. Many new CD-ROM games go even further, incorporating actual movie-like scenes with live actors directed by well-known Hollywood directors. Finally, SIGGRAPH, the largest international conference on computer graphics technology, offers a course entitled "Film Craft in User Interface Design" based on the premise that "The rich store of knowledge created in 90 years of filmmaking and animation can contribute to the design of user interfaces of multimedia, graphics applications, and even character displays."[2] Thus, film may soon disappear -- but not cinema. On the contrary, with the disappearance of film due to digital technology, cinema acquires a truly fetishistic status. Classical cinema has turned into the priceless data bank, the stock which is guaranteed never to lose its value as classic films become the content of each new round of electronic and digital distribution media -- first video cassette, then laserdisc, and, now, CD-ROM (major movie companies are planning to release dozens of classic Hollywood films on CD- ROM by the end of 1994). Even more fetishized is "film look" itself -- the soft, grainy, and somewhat blurry appearance of a photographic image which is so different from the harsh and flat image of a video camera or the too clean, too perfect image of computer graphics. The traditional photographic image once represented the inhuman, devilish objectivity of technological vision.

Memory and nostalgia

Today, however, it looks so human, so familiar, so domesticated -- in contrast to the alienating, still unfamiliar appearance of a computer display with its 1280 by 1024 resolution, 32 bits per pixel, 16 million colors, and so on. Regardless of what it signifies, any photographic image also connotes memory and nostalgia, nostalgia for modernity and the twentieth century, the era of the pre-digital, pre-post-modern. Regardless of what it represents, any photographic image today first of all represents photography. So while digital imaging promises to completely replace the techniques of filmmaking, it at the same time finds new roles and brings new value to the cinematic apparatus, the classic films, and the photographic look. This is the first paradox of digital imaging. But surely, what digital imaging preserves and propagates are only the cultural codes of film or photography. Underneath, isn't there a fundamental physical difference between film-based image and a digitally encoded image? The most systematic answer to this question can be found in William Mitchell's recent book "The Reconfigured Eye: Visual Truth in the Post-photographic Era."[3] Mitchell's  entire analysis of the digital imaging revolution revolves around his claim that the difference between a digital image and a photograph "is grounded in fundamental physical characteristics that have logical and cultural consequences."[4] In other words, the physical difference between photographic and digital technology leads to the difference in the logical status of film-based and digital images and also to the difference in their cultural perception. How fundamental is this difference? If we limit ourselves by focusing solely, as Mitchell does, on the abstract principles of digital imaging, then the difference between a digital and a photographic image appears enormous.

Original and the copy

But if we consider concrete digital technologies and their uses, the difference disappears. Digital photography simply does not exist. The first alleged difference concerns the relationship between the original and the copy in analog and in digital cultures. Mitchell writes: "The continuous spatial and tonal variation of analog pictures is not exactly replicable, so such images cannot be transmitted or copied without degradation... But discrete states can be replicated precisely, so a digital image that is a thousand generations away from the original is indistinguishable in quality from any one of its progenitors."[5] Therefore, in digital visual culture, "an image file can be copied endlessly, and the copy is distinguishable from the original by its date since there is no loss of quality."[6] This is all true -- in principle. However, in reality, there is actually much more degradation and loss of information between copies of digital images than between copies of traditional photographs. A single digital image consists of millions of pixels. All of this data requires considerable storage space in a computer; it also takes a long time (in contrast to a text file) to transmit over a network. Because of this, the current software and hardware used to acquire, store, manipulate, and transmit digital images uniformly rely on lossy compression -- the technique of making image files smaller by deleting some information.[7] The technique involves a compromise between image quality and file size -- the smaller the size of a compressed file, the more visible are the visual artifacts introduced in deleting information. Depending on the level of compression, these artifacts range from barely noticeable to quite pronounced. At any rate, each time a compressed file is saved, more information is lost, leading to more degradation. One may argue that this situation is temporary and once cheaper computer storage and faster networks become commonplace, lossy compression will disappear. However, at the moment, the trend is quite the reverse with lossy compression becoming more and more the norm for representing visual information. If a single digital image already contains a lot of data, then this amount increases dramatically if we want to produce and distribute moving images in a digital form (one second of video, for instance, consists of 30 still images).

Digital television with its hundreds of channels and video on-demand services, the distribution of full-length films on CD-ROM or over Internet, fully digital post-production of feature films -- all of these developments will be made possible by newer compression techniques.[8] So rather than being an aberration, a flaw in the otherwise pure and perfect world of the digital, where  even a single bit of information is never lost, lossy compression is increasingly becoming the very foundation of digital visual culture. This is another paradox of digital imaging -- while in theory digital technology entails the flawless replication of data, its actual use in contemporary society is characterized by the loss of data, degradation, and noise; the noise which is even stronger than that of traditional photography.  


Finer detail

The second commonly cited difference between traditional and digital photography concerns the amount of information contained in an image. Mitchell sums it up as follows: "There is an indefinite amount of information in a continuous-tone photograph, so enlargement usually reveals more detail but yields a fuzzier and grainier picture... A digital image, on the other hand, has precisely limited spatial and tonal resolution and contains a fixed amount of information."[9] Here again Mitchell is right in principle: a digital image consists of a finite number of pixels, each having a distinct color or a tonal value, and this number determines the amount of detail an image can represent. Yet in reality this difference does not matter anymore. Current scanners, even consumer brands, can scan an image or an object with very high resolution: 1200 or 2400 pixels per inch is standard today. True, a digital image is still comprised of a finite number of pixels, but at such resolution it can record much finer detail than was ever possible with traditional photography. This nullifies the whole distinction between an "indefinite amount of information in a continuous-tone photograph" and a fixed amount of detail in a digital image. The more relevant question is how much information in an image can be useful to the viewer. Current technology has already reached the point where a digital image can easily contain much more information than anybody would ever want. This is yet another paradox of digital imaging. But even the pixel-based representation, which appears to be the very essence of digital imaging, can no longer be taken for granted. Recent computer graphics software has bypassed the limitations of the traditional pixel grid which limits the amount of information in an image because it has a fixed resolution. Live Picture, an image editing program for the Macintosh, converts a pixel-based image into a set of equations. This allows the user to work with an image of virtually unlimited size. Another paint program Matador makes possible painting on a tiny image which may consist of just a few pixels as though it were a high-resolution image (it achieves this by breaking each pixel into a number of smaller sub-pixels). In both programs, the pixel is no longer a "final frontier"; as far as the user is concerned, it simply does not exist.  

Photo fakes

Mitchell's third distinction concerns the inherent mutability of a digital image. While he admits that there has always been a tradition of impure, re-worked photography (he refers to "Henry Peach Robinson's and Oscar G. Reijlander's nineteenth century 'combination prints,' John Heartfield's photomontages"[10] as well as numerous political photo fakes of the twentieth century) Mitchell identifies straight, unmanipulated photography as the essential, "normal" photographic practice: "There is no doubt that extensive reworking of photographic images to produce seamless  transformations and combinations is technically difficult, time-consuming, and outside the mainstream of photographic practice. When we look at photographs we presume, unless we have some clear indications to the contrary, that they have not been reworked."[11] This equation of "normal" photography with straight photography allows Mitchell to claim that a digital image is radically different because it is inherently mutable: "the essential characteristic of digital information is that it can be manipulated easily and very rapidly by computer. It is simply a matter of substituting new digits for old... Computational tools for transforming, combining, altering, and analyzing images are as essential to the digital artist as brushes and pigments to a painter."[12] From this allegedly purely technological difference between a photograph and a digital image, Mitchell deduces differences in how the two are culturally perceived. Because of the difficulty involved in manipulating them, photographs "were comfortably regarded as causally generated truthful reports about things in the real world."[13] Digital images, being inherently (and so easily) mutable, call into question "our ontological distinctions between the imaginary and the real"[14] or between photographs and drawings. Furthermore, in a digital image, the essential relationship between signifier and signified is one of uncertainty.[15]

What Mitchell takes to be the essence of photographic and digital imaging technology are two traditions of visual culture.

Does this hold? While Mitchell aims to deduce culture from technology, it appears that he is actually doing the reverse. In fact, he simply identifies the pictorial tradition of realism with the essence of photographic technology and the tradition of montage and collage with the essence of digital imaging. Thus, the photographic work of Robert Weston and Ansel Adams, nineteenth and twentieth century realist painting, and the painting of the Italian Renaissance become the essence of photography; while Robinson's and Rejlander's photo composites, constructivist montage, contemporary advertising imagery (based on constructivist design), and Dutch seventeenth century painting (with its montage-like emphasis on details over the coherent whole) become the essence of digital imaging. In other words, what Mitchell takes to be the essence of photographic and digital imaging technology are two traditions of visual culture. Both existed before photography, and both span different visual technologies and mediums. Just as its counterpart, the realistic tradition extends beyond photography per se and at the same time accounts for just one of many photographic practices.

Stalin and Voroshilov in the Kremlin, Aleksandr Gerasimov, 1938

Soviet photography

If this is so, Mitchell's notion of "normal" unmanipulated photography is problematic. Indeed, unmanipulated "straight" photography can hardly be claimed to dominate the modern uses of photography. Consider, for instance, the following photographic practices. One is Soviet photography of the Stalinist era. All published photographs were not only staged but also retouched so heavily that they can hardly be called photographs at all. These images were not montages, as they maintained the unity of space and time, and yet, having lost any trace of photographic grain due to retouching, they existed somewhere between photography and painting. More precisely, we can say that Stalinist visual culture eliminated the very difference between a photograph and a painting by producing photographs which looked like paintings and paintings (I refer to Socialist Realism) which looked like photographs. If this example can be written off as an aberration of totalitarianism, consider another photographic practice closer to home: the use of photographic images in twentieth century advertising and publicity design.

This practice does not make any attempt to claim that a photographic image is a witness testifying about the unique event which took place in a distinct moment of time (which is how, according to Mitchell, we normally read photography). Instead, a photograph becomes just one graphic element among many: few photographs coexist on a single page; photographs are mixed with type; photographs are separated from each by white space, backgrounds are erased leaving only the figures, and so on. The end result being that here, as well, the difference between a painting and a photograph does not hold. A photograph as used in advertising design does not point to a concrete event or a particular object. It does not say, for example, "this hat was in this room on May 12." Rather, it simply presents "a hat" or "a beach" or "a television set" without any reference to time and location. Such examples question Mitchell's idea that digital imaging destroys the innocence of straight photography by making all photographs inherently mutable. Straight photography has always represented just one tradition of photography; it always coexisted with equally popular traditions where a photographic image was openly manipulated and was read as such.

Equally, there never existed a single dominant way of reading photography; depending on the context the viewer could (and continue to) read photographs as representations of concrete events, or as illustrations which do not claim to correspond to events which have occurred. Digital technology does not subvert "normal" photography because "normal" photography never existed.  

3. Real, All Too Real: Socialist Realism of "Jurassic Park"  

I have considered some of the alleged physical differences between traditional and digital photography. But what is a digital photograph? My discussion has focused on the distinction between a film-based representation of an image versus its representation in a computer as a grid of pixels having a fixed resolution and taking up a certain amount of computer storage space. In short, I highlighted the issue of analog versus digital representation of an image while disregarding the procedure through which this image is produced in the first place. However, if this procedure is considered another meaning of digital photography emerges. Rather than using the lens to focus the image of actual reality on film and then digitizing the film image (or directly using an array of electronic sensors) we can try to construct three-dimensional reality inside a computer and then take a picture of this reality using a virtual camera also inside a computer. In other words, 3-D computer graphics can also be thought off as digital -- or synthetic -- photography.

I will conclude by considering the current state of the art of 3-D computer graphics. Here we will encounter the final paradox of digital photography. Common opinion holds that synthetic photographs generated by computer graphics are not yet (or perhaps will never be) as precise in rendering visual reality as images obtained through a photographic lens. However, I will suggest that such synthetic photographs are already more realistic than traditional photographs. In fact, they are too real.  

The achievement of realism is the main goal of research in the 3-D computer graphics field. The field defines realism as the ability to simulate any object in such a way that its computer image is indistinguishable from its photograph. It is this ability to simulate photographic images of real or imagined objects which makes possible the use of 3-D computer graphics in military and medical simulators, in television commercials, in computer games, and, of course, in such movies as "Terminator 2" or "Jurassic Park."


These last two movies, which contain the most spectacular 3-D computer graphics scenes to date, dramatically demonstrate that total synthetic realism seems to be in sight. Yet, they also exemplify the triviality of what at first may appear to be an outstanding technical achievement -- the ability to fake visual reality. For what is faked is, of course, not reality but photographic reality, reality as seen by the camera lens. In other words, what computer graphics has (almost) achieved is not realism, but only photorealism -- the ability to fake not our perceptual and bodily experience of reality but only its photographic image.[16] This image exists outside of our consciousness, on a screen -- a window of limited size which presents a still imprint of a small part of outer reality, filtered through the lens with its limited depth of field, filtered through film's grain and its limited tonal range. It is only this film-based image which computer graphics technology has learned to simulate. And the reason we think that computer graphics has succeeded in faking reality is that we, over the course of the last hundred and fifty years, has come to accept the image of photography and film as reality.  What is faked is only a film-based image.


Photorealism

Once we came to accept the photographic image as reality the way to its future simulation was open. What remained were small details: the development of digital computers (1940s) followed by a perspective-generating algorithm (early 1960s), and then working out how to make a simulated object solid with shadow, reflection and texture (1970s), and finally simulating the artefacts of the lens such as motion blur and depth of field (1980s). So, while the distance from the first computer graphics images circa 1960 to the synthetic dinosaurs of "Jurassic Park" in the 1990s is tremendous, we should not be too impressed. For, conceptually, photorealistic computer graphics had already appeared with Felix Nadar's photographs in the 1840s and certainly with the first films of the Lumieres in the 1890s. It is they who invented 3-D computer graphics.  So the goal of computer graphics is not realism but only photorealism. Has this photorealism been achieved? At the time of this writing (May 1994) dinosaurs of "Jurassic Park" represent the ultimate triumph of computer simulation, yet this triumph took more than two years of work by dozens of designers, animators, and programmers of Industrial Light and Magic (ILM), probably the premier company specializing in the production of computer animation for feature films in the world today.
Because a few seconds of computer animation often requires months and months of work, only the huge budget of a Hollywood blockbuster could pay for such extensive and highly detailed computer-generated scenes as seen in "Jurassic Park." Most of the 3-D computer animation produced today has a much lower degree of photorealism and this photorealism is uneven, higher for some kinds of objects and lower for others.[17] And even for ILM photorealistic simulation of human beings, the ultimate goal of computer animation, still remains impossible.

Typical images produced with 3-D computer graphics still appear unnaturally clean, sharp, and geometric looking. Their limitations especially stand out when juxtaposed with a normal photograph. Thus one of the landmark achievements of "Jurassic Park" was the seamless integration of film footage of real scenes with computer simulated objects. To achieve this integration, computer-generated images had to be degraded; their perfection had to be diluted to match the imperfection of film's graininess. First, the animators needed to figure out the resolution at which to render computer graphics elements. If the resolution were too high, the computer image would have more detail than the film image and its artificiality would become apparent.

Just as Medieval masters guarded their painting secrets now leading computer graphics companies carefully guard the resolution of image they simulate. Once computer-generated images are combined with film images additional tricks are used to diminish their perfection. With the help of special algorithms, the straight edges of computer-generated objects are softened. Barely visible noise is added to the overall image to blend computer and film elements. Sometimes, as in the final battle between the two protagonists in "Terminator 2," the scene is staged in a particular location (a smoky factory in this example) which justifies addition of smoke or fog to further blend the film and synthetic elements together.

The synthetic image is free of the limitations of both human and camera vision.

Too real

So, while we normally think that synthetic photographs produced through computer graphics are inferior in comparison to real photographs, in fact, they are too perfect. But beyond that we can also say that paradoxically they are also too real. The synthetic image is free of the limitations of both human and camera vision. It can have unlimited resolution and an unlimited level of detail. It is free of the depth-of- field effect, this inevitable consequence of the lens, so everything is in focus. It is also free of grain -- the layer of noise created by film stock and by human perception. Its colors are more saturated and its sharp lines follow the economy of geometry. From the point of view of human vision it is hyperreal. And yet, it is completely realistic. It is simply a result of a different, more perfect than human, vision. Whose vision is it? It is the vision of a cyborg or a computer; a vision of Robocop and of an automatic missile. It is a realistic representation of human vision in the future when it will be augmented by computer graphics and cleansed from noise. It is the vision of a digital grid. Synthetic computer-generated image is not an inferior representation of our reality, but a realistic representation of a different reality. By the same logic, we should not consider clean, skinless, too flexible, and in the same time too jerky, human figures in 3-D computer animation as unrealistic, as imperfect approximation to the real thing -- our bodies. They are perfectly realistic representation of a cyborg body yet to come, of a world reduced to geometry, where efficient representation via a geometric model becomes the basis of reality. The synthetic image simply represents the future. In other words, if a traditional photograph always points to the past event, a synthetic photograph points to the future event. We are now in a position to characterize the aesthetics of "Jurassic Park."

Socialist Realism

This aesthetic is one of Soviet Socialist Realism. Socialist Realism wanted to show the future in the present by projecting the perfect world of future socialist society on a visual reality familiar to the viewer -- streets, faces, and cities of the 1930s. In other words, it had to retain enough of then everyday reality while showing how that reality would look in the future when everyone's body will be healthy and muscular, every street modern, every face transformed by the spirituality of communist ideology. Exactly the same happens in "Jurassic Park." It tries to show the future of sight itself -- the perfect cyborg vision free of noise and capable of grasping infinite details -- vision exemplified by the original computer graphics images before they were blended with film images. But just as Socialist Realist paintings blended the perfect future with the imperfect reality of the 1930s and never depicted this future directly (there is not a single Socialist Realist work of art set in the future), "Jurassic Park" blends the future super-vision of computer graphics with the familiar vision of film image. In "Jurassic Park," the computer image bends down before the film image, its perfection is undermined by every possible means and is also masked by the film's content. This is then, the final paradox of digital photography. Its images are not inferior to the visual realism of traditional photography. They are perfectly real -- all too real.  



NOTES  
Lev Manovich, "Assembling Reality: Myths of Computer Graphics," AFTERIMAGE 20, no. 2 (September 1992): 12- 14.  SIGGRAPH 93. ADVANCE PROGRAM (ACM: New York, 1993), 28.  William Mitchell, THE RECONFIGURED EYE: VISUAL TRUTH IN THE POST-PHOTOGRAPHIC ERA (Cambridge, Mass.: The MIT Press, 1992).  Ibid., 4.   Ibid., 6.   6. Ibid., 49.   Currently the most widespread technique for compressing digital photographs is JPEG. For instance, every Macintosh comes with JPEG compression software.  For almost a century, our standard of visual fidelity was determined by the film image. A video or television image was always viewed as an imperfect, low quality substitute for the "real thing" -- a film-based image. Today, however, a new even lower quality image is becoming increasingly popular -- an image of computer multimedia. Its quality is exemplified by a typical, as of this writing, Quicktime movie: 320 by 240 pixels, 10- 15 frames a second. Is the 35 mm film image going to remain the unchallenged standard with computer  technology eventually duplicating its quality? Or will a low quality computer image be gradually accepted by the public as the new standard of visual truth?  Mitchell, THE RECONFIGURED EYE, 6.   Ibid., 7.   Ibid.   Ibid.   13. Ibid., 225.   14. Ibid.   15. Ibid., 17.   The research in virtual reality aims to go beyond the screen image in order to simulate both the perceptual and bodily experience of reality.  See Manovich, "Assembling Reality."

More information and articles

The Paradoxes of Digital Photography

Digital Photography is more real

Words by

Lev Manovich

Digital Photography is more real
Jurrasic Park III

Computerized design systems that flawlessly combine real photographed objects and objects synthesized by the computer. Satellites that can photograph the license plate of your car, and read the time on your watch. "Smart" weapons that recognize and follow their targets in effortless pursuit -- the kind of new, post-modern, post-industrial dance to which we were all exposed during the televised Gulf war. New medical imaging technologies that map every organ and function of the body. Online electronic libraries that enable any designer to acquire not only millions of photographs digitally stored but also dozens of styles which can be automatically applied by a computer to any image.

Published in: Photography after Photography. Exhibition catalog. Germany, 1995.   Digital Revolution?  

All of these and many other recently emerged technologies of image-making, image manipulation, and vision  -  depend on digital computers. All of them, as a whole, allow photographs to perform new, unprecedented, and still poorly understood functions. All of them radically change what a photograph is. Indeed, digital photographs function in an entirely different way from traditional -- lens and film based -- photographs. For instance, images are obtained and displayed by sequential scanning; they exist as mathematical data which can be displayed in a variety of modes -- sacrificing color, spatial or temporal resolution.

Image processing techniques make us realize that any photograph contains more information than can be seen with the human eye. Techniques of 3D computer graphics make possible the synthesis of photo realistic images -- yet, this realism is always partial, since these techniques do not permit the synthesis of any arbitrary scene.[1] Digital photographs function in an entirely different way from traditional photographs. Or do they? Shall we accept that digital imaging represents a radical rupture with photography? Is an image, mediated by computer and electronic technology, radically different from an image obtained through a photographic lens and embodied in film? If we describe film-based images using such categories as depth of field, zoom, a shot or montage, what categories should be used to describe digital images? Shall the phenomenon of digital imaging force us to rethink such fundamental concepts as realism or representation?

In this essay I will refrain from taking an extreme position of either fully accepting or fully denying the idea of a digital imaging revolution. Rather, I will present the logic of the digital image as paradoxical; radically breaking with older modes of visual representation while at the same time reinforcing these modes. I will demonstrate this paradoxical logic by examining two questions: alleged physical differences between digital and film-based representation of photographs and the notion of realism in computer generated synthetic photography.

The logic of the digital photograph is one of historical continuity and discontinuity. The digital image tears apart the net of semiotic codes, modes of display, and patterns of spectatorship in modern visual culture -- and, at the same time, weaves this net even stronger. The digital image annihilates photography while solidifying, glorifying and immortalizing the photographic. In short, this logic is that of photography after photography.    

2. Digital Photography Does Not Exist  

It is easiest to see how digital (r)evolution solidifies (rather than destroys) certain aspects of modern visual culture -- the culture synonymous with the photographic image -- by considering not photography itself but a related film- based medium -- cinema. New digital technologies promise to radically reconfigure the basic material components (lens, camera, lighting, film) and the basic techniques (the separation of production and post-production, special effects, the use of human actors and non-human props) of the cinematic apparatus as it has existed for decades. The film camera is increasingly supplemented by the virtual camera of computer graphics which is used to simulate sets and even actors (as in "Terminator 2" and "Jurassic Park"). Traditional film editing and optical printing are being replaced by digital editing and image processing which blur the lines between production and post-production, between shooting and editing. At the same time, while the basic technology of filmmaking is about to disappear being replaced by new digital technologies, cinematic codes find new roles in the digital visual culture. New forms of entertainment based on digital media and even the basic interface between a human and a computer are being increasingly modeled on the metaphors of movie making and movie viewing. With QuickTime technology, built into every Macintosh sold today, the user makes and edits digital "movies" using software packages whose very names (such as Director and Premiere) make a direct reference to cinema. Computer games are also increasingly constructed on the metaphor of a movie, featuring realistic sets and characters, complex camera angles, dissolves, and other codes of traditional filmmaking. Many new CD-ROM games go even further, incorporating actual movie-like scenes with live actors directed by well-known Hollywood directors. Finally, SIGGRAPH, the largest international conference on computer graphics technology, offers a course entitled "Film Craft in User Interface Design" based on the premise that "The rich store of knowledge created in 90 years of filmmaking and animation can contribute to the design of user interfaces of multimedia, graphics applications, and even character displays."[2] Thus, film may soon disappear -- but not cinema. On the contrary, with the disappearance of film due to digital technology, cinema acquires a truly fetishistic status. Classical cinema has turned into the priceless data bank, the stock which is guaranteed never to lose its value as classic films become the content of each new round of electronic and digital distribution media -- first video cassette, then laserdisc, and, now, CD-ROM (major movie companies are planning to release dozens of classic Hollywood films on CD- ROM by the end of 1994). Even more fetishized is "film look" itself -- the soft, grainy, and somewhat blurry appearance of a photographic image which is so different from the harsh and flat image of a video camera or the too clean, too perfect image of computer graphics. The traditional photographic image once represented the inhuman, devilish objectivity of technological vision.

Memory and nostalgia

Today, however, it looks so human, so familiar, so domesticated -- in contrast to the alienating, still unfamiliar appearance of a computer display with its 1280 by 1024 resolution, 32 bits per pixel, 16 million colors, and so on. Regardless of what it signifies, any photographic image also connotes memory and nostalgia, nostalgia for modernity and the twentieth century, the era of the pre-digital, pre-post-modern. Regardless of what it represents, any photographic image today first of all represents photography. So while digital imaging promises to completely replace the techniques of filmmaking, it at the same time finds new roles and brings new value to the cinematic apparatus, the classic films, and the photographic look. This is the first paradox of digital imaging. But surely, what digital imaging preserves and propagates are only the cultural codes of film or photography. Underneath, isn't there a fundamental physical difference between film-based image and a digitally encoded image? The most systematic answer to this question can be found in William Mitchell's recent book "The Reconfigured Eye: Visual Truth in the Post-photographic Era."[3] Mitchell's  entire analysis of the digital imaging revolution revolves around his claim that the difference between a digital image and a photograph "is grounded in fundamental physical characteristics that have logical and cultural consequences."[4] In other words, the physical difference between photographic and digital technology leads to the difference in the logical status of film-based and digital images and also to the difference in their cultural perception. How fundamental is this difference? If we limit ourselves by focusing solely, as Mitchell does, on the abstract principles of digital imaging, then the difference between a digital and a photographic image appears enormous.

Original and the copy

But if we consider concrete digital technologies and their uses, the difference disappears. Digital photography simply does not exist. The first alleged difference concerns the relationship between the original and the copy in analog and in digital cultures. Mitchell writes: "The continuous spatial and tonal variation of analog pictures is not exactly replicable, so such images cannot be transmitted or copied without degradation... But discrete states can be replicated precisely, so a digital image that is a thousand generations away from the original is indistinguishable in quality from any one of its progenitors."[5] Therefore, in digital visual culture, "an image file can be copied endlessly, and the copy is distinguishable from the original by its date since there is no loss of quality."[6] This is all true -- in principle. However, in reality, there is actually much more degradation and loss of information between copies of digital images than between copies of traditional photographs. A single digital image consists of millions of pixels. All of this data requires considerable storage space in a computer; it also takes a long time (in contrast to a text file) to transmit over a network. Because of this, the current software and hardware used to acquire, store, manipulate, and transmit digital images uniformly rely on lossy compression -- the technique of making image files smaller by deleting some information.[7] The technique involves a compromise between image quality and file size -- the smaller the size of a compressed file, the more visible are the visual artifacts introduced in deleting information. Depending on the level of compression, these artifacts range from barely noticeable to quite pronounced. At any rate, each time a compressed file is saved, more information is lost, leading to more degradation. One may argue that this situation is temporary and once cheaper computer storage and faster networks become commonplace, lossy compression will disappear. However, at the moment, the trend is quite the reverse with lossy compression becoming more and more the norm for representing visual information. If a single digital image already contains a lot of data, then this amount increases dramatically if we want to produce and distribute moving images in a digital form (one second of video, for instance, consists of 30 still images).

Digital television with its hundreds of channels and video on-demand services, the distribution of full-length films on CD-ROM or over Internet, fully digital post-production of feature films -- all of these developments will be made possible by newer compression techniques.[8] So rather than being an aberration, a flaw in the otherwise pure and perfect world of the digital, where  even a single bit of information is never lost, lossy compression is increasingly becoming the very foundation of digital visual culture. This is another paradox of digital imaging -- while in theory digital technology entails the flawless replication of data, its actual use in contemporary society is characterized by the loss of data, degradation, and noise; the noise which is even stronger than that of traditional photography.  


Finer detail

The second commonly cited difference between traditional and digital photography concerns the amount of information contained in an image. Mitchell sums it up as follows: "There is an indefinite amount of information in a continuous-tone photograph, so enlargement usually reveals more detail but yields a fuzzier and grainier picture... A digital image, on the other hand, has precisely limited spatial and tonal resolution and contains a fixed amount of information."[9] Here again Mitchell is right in principle: a digital image consists of a finite number of pixels, each having a distinct color or a tonal value, and this number determines the amount of detail an image can represent. Yet in reality this difference does not matter anymore. Current scanners, even consumer brands, can scan an image or an object with very high resolution: 1200 or 2400 pixels per inch is standard today. True, a digital image is still comprised of a finite number of pixels, but at such resolution it can record much finer detail than was ever possible with traditional photography. This nullifies the whole distinction between an "indefinite amount of information in a continuous-tone photograph" and a fixed amount of detail in a digital image. The more relevant question is how much information in an image can be useful to the viewer. Current technology has already reached the point where a digital image can easily contain much more information than anybody would ever want. This is yet another paradox of digital imaging. But even the pixel-based representation, which appears to be the very essence of digital imaging, can no longer be taken for granted. Recent computer graphics software has bypassed the limitations of the traditional pixel grid which limits the amount of information in an image because it has a fixed resolution. Live Picture, an image editing program for the Macintosh, converts a pixel-based image into a set of equations. This allows the user to work with an image of virtually unlimited size. Another paint program Matador makes possible painting on a tiny image which may consist of just a few pixels as though it were a high-resolution image (it achieves this by breaking each pixel into a number of smaller sub-pixels). In both programs, the pixel is no longer a "final frontier"; as far as the user is concerned, it simply does not exist.  

Photo fakes

Mitchell's third distinction concerns the inherent mutability of a digital image. While he admits that there has always been a tradition of impure, re-worked photography (he refers to "Henry Peach Robinson's and Oscar G. Reijlander's nineteenth century 'combination prints,' John Heartfield's photomontages"[10] as well as numerous political photo fakes of the twentieth century) Mitchell identifies straight, unmanipulated photography as the essential, "normal" photographic practice: "There is no doubt that extensive reworking of photographic images to produce seamless  transformations and combinations is technically difficult, time-consuming, and outside the mainstream of photographic practice. When we look at photographs we presume, unless we have some clear indications to the contrary, that they have not been reworked."[11] This equation of "normal" photography with straight photography allows Mitchell to claim that a digital image is radically different because it is inherently mutable: "the essential characteristic of digital information is that it can be manipulated easily and very rapidly by computer. It is simply a matter of substituting new digits for old... Computational tools for transforming, combining, altering, and analyzing images are as essential to the digital artist as brushes and pigments to a painter."[12] From this allegedly purely technological difference between a photograph and a digital image, Mitchell deduces differences in how the two are culturally perceived. Because of the difficulty involved in manipulating them, photographs "were comfortably regarded as causally generated truthful reports about things in the real world."[13] Digital images, being inherently (and so easily) mutable, call into question "our ontological distinctions between the imaginary and the real"[14] or between photographs and drawings. Furthermore, in a digital image, the essential relationship between signifier and signified is one of uncertainty.[15]

What Mitchell takes to be the essence of photographic and digital imaging technology are two traditions of visual culture.

Does this hold? While Mitchell aims to deduce culture from technology, it appears that he is actually doing the reverse. In fact, he simply identifies the pictorial tradition of realism with the essence of photographic technology and the tradition of montage and collage with the essence of digital imaging. Thus, the photographic work of Robert Weston and Ansel Adams, nineteenth and twentieth century realist painting, and the painting of the Italian Renaissance become the essence of photography; while Robinson's and Rejlander's photo composites, constructivist montage, contemporary advertising imagery (based on constructivist design), and Dutch seventeenth century painting (with its montage-like emphasis on details over the coherent whole) become the essence of digital imaging. In other words, what Mitchell takes to be the essence of photographic and digital imaging technology are two traditions of visual culture. Both existed before photography, and both span different visual technologies and mediums. Just as its counterpart, the realistic tradition extends beyond photography per se and at the same time accounts for just one of many photographic practices.

Stalin and Voroshilov in the Kremlin, Aleksandr Gerasimov, 1938

Soviet photography

If this is so, Mitchell's notion of "normal" unmanipulated photography is problematic. Indeed, unmanipulated "straight" photography can hardly be claimed to dominate the modern uses of photography. Consider, for instance, the following photographic practices. One is Soviet photography of the Stalinist era. All published photographs were not only staged but also retouched so heavily that they can hardly be called photographs at all. These images were not montages, as they maintained the unity of space and time, and yet, having lost any trace of photographic grain due to retouching, they existed somewhere between photography and painting. More precisely, we can say that Stalinist visual culture eliminated the very difference between a photograph and a painting by producing photographs which looked like paintings and paintings (I refer to Socialist Realism) which looked like photographs. If this example can be written off as an aberration of totalitarianism, consider another photographic practice closer to home: the use of photographic images in twentieth century advertising and publicity design.

This practice does not make any attempt to claim that a photographic image is a witness testifying about the unique event which took place in a distinct moment of time (which is how, according to Mitchell, we normally read photography). Instead, a photograph becomes just one graphic element among many: few photographs coexist on a single page; photographs are mixed with type; photographs are separated from each by white space, backgrounds are erased leaving only the figures, and so on. The end result being that here, as well, the difference between a painting and a photograph does not hold. A photograph as used in advertising design does not point to a concrete event or a particular object. It does not say, for example, "this hat was in this room on May 12." Rather, it simply presents "a hat" or "a beach" or "a television set" without any reference to time and location. Such examples question Mitchell's idea that digital imaging destroys the innocence of straight photography by making all photographs inherently mutable. Straight photography has always represented just one tradition of photography; it always coexisted with equally popular traditions where a photographic image was openly manipulated and was read as such.

Equally, there never existed a single dominant way of reading photography; depending on the context the viewer could (and continue to) read photographs as representations of concrete events, or as illustrations which do not claim to correspond to events which have occurred. Digital technology does not subvert "normal" photography because "normal" photography never existed.  

3. Real, All Too Real: Socialist Realism of "Jurassic Park"  

I have considered some of the alleged physical differences between traditional and digital photography. But what is a digital photograph? My discussion has focused on the distinction between a film-based representation of an image versus its representation in a computer as a grid of pixels having a fixed resolution and taking up a certain amount of computer storage space. In short, I highlighted the issue of analog versus digital representation of an image while disregarding the procedure through which this image is produced in the first place. However, if this procedure is considered another meaning of digital photography emerges. Rather than using the lens to focus the image of actual reality on film and then digitizing the film image (or directly using an array of electronic sensors) we can try to construct three-dimensional reality inside a computer and then take a picture of this reality using a virtual camera also inside a computer. In other words, 3-D computer graphics can also be thought off as digital -- or synthetic -- photography.

I will conclude by considering the current state of the art of 3-D computer graphics. Here we will encounter the final paradox of digital photography. Common opinion holds that synthetic photographs generated by computer graphics are not yet (or perhaps will never be) as precise in rendering visual reality as images obtained through a photographic lens. However, I will suggest that such synthetic photographs are already more realistic than traditional photographs. In fact, they are too real.  

The achievement of realism is the main goal of research in the 3-D computer graphics field. The field defines realism as the ability to simulate any object in such a way that its computer image is indistinguishable from its photograph. It is this ability to simulate photographic images of real or imagined objects which makes possible the use of 3-D computer graphics in military and medical simulators, in television commercials, in computer games, and, of course, in such movies as "Terminator 2" or "Jurassic Park."


These last two movies, which contain the most spectacular 3-D computer graphics scenes to date, dramatically demonstrate that total synthetic realism seems to be in sight. Yet, they also exemplify the triviality of what at first may appear to be an outstanding technical achievement -- the ability to fake visual reality. For what is faked is, of course, not reality but photographic reality, reality as seen by the camera lens. In other words, what computer graphics has (almost) achieved is not realism, but only photorealism -- the ability to fake not our perceptual and bodily experience of reality but only its photographic image.[16] This image exists outside of our consciousness, on a screen -- a window of limited size which presents a still imprint of a small part of outer reality, filtered through the lens with its limited depth of field, filtered through film's grain and its limited tonal range. It is only this film-based image which computer graphics technology has learned to simulate. And the reason we think that computer graphics has succeeded in faking reality is that we, over the course of the last hundred and fifty years, has come to accept the image of photography and film as reality.  What is faked is only a film-based image.


Photorealism

Once we came to accept the photographic image as reality, the way to its future simulation was open. What remained were small details: the development of digital computers (1940s), followed by a perspective-generating algorithm (early 1960s; see the sketch below), then working out how to make a simulated object solid with shadow, reflection, and texture (1970s), and finally simulating the artefacts of the lens, such as motion blur and depth of field (1980s). So, while the distance from the first computer graphics images circa 1960 to the synthetic dinosaurs of "Jurassic Park" in the 1990s is tremendous, we should not be too impressed. For, conceptually, photorealistic computer graphics had already appeared with Felix Nadar's photographs in the 1850s and certainly with the first films of the Lumieres in the 1890s. It is they who invented 3-D computer graphics.

So the goal of computer graphics is not realism but only photorealism. Has this photorealism been achieved? At the time of this writing (May 1994), the dinosaurs of "Jurassic Park" represent the ultimate triumph of computer simulation, yet this triumph took more than two years of work by dozens of designers, animators, and programmers at Industrial Light and Magic (ILM), probably the world's premier company specializing in computer animation for feature films. Because a few seconds of computer animation often require months and months of work, only the huge budget of a Hollywood blockbuster could pay for such extensive and highly detailed computer-generated scenes as those seen in "Jurassic Park." Most of the 3-D computer animation produced today has a much lower degree of photorealism, and this photorealism is uneven -- higher for some kinds of objects and lower for others.[17] And even for ILM, the photorealistic simulation of human beings -- the ultimate goal of computer animation -- still remains impossible.
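
To make the perspective-generating stage mentioned above concrete, here is a minimal sketch of the pinhole projection at its core. It is purely illustrative -- the function name, focal length, and sample points are invented for this example and are not drawn from any production renderer:

```python
import numpy as np

def project(points_xyz: np.ndarray, focal_length: float = 1.0) -> np.ndarray:
    """Project Nx3 camera-space points onto a 2-D image plane.

    The classic pinhole model behind early perspective algorithms:
    x' = f * x / z,  y' = f * y / z  (points assumed in front of the camera, z > 0).
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    return np.stack([focal_length * x / z, focal_length * y / z], axis=1)

# Equal 3-D steps away from the camera produce shrinking 2-D steps --
# the foreshortening that makes a rendered scene read as "photographic."
pts = np.array([[1.0, 1.0, z] for z in (2.0, 4.0, 8.0)])
print(project(pts))  # [[0.5 0.5] [0.25 0.25] [0.125 0.125]]
```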

Typical images produced with 3-D computer graphics still appear unnaturally clean, sharp, and geometric-looking. Their limitations stand out especially when they are juxtaposed with a normal photograph. Thus one of the landmark achievements of "Jurassic Park" was the seamless integration of film footage of real scenes with computer-simulated objects. To achieve this integration, computer-generated images had to be degraded: their perfection had to be diluted to match the imperfection of film's graininess. First, the animators had to determine the resolution at which to render the computer graphics elements. If the resolution were too high, the computer image would contain more detail than the film image, and its artificiality would become apparent.
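
This resolution matching can be illustrated with a toy example. The block-averaging function and the factor of two below are invented for illustration; actual film-matching involves far more careful resampling:

```python
import numpy as np

def box_downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a 2-D image,
    discarding detail finer than the target (film-matched) resolution."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

render = np.random.rand(512, 512)         # stand-in for an over-detailed CG render
film_matched = box_downsample(render, 2)  # detail reduced toward the film's level
print(render.shape, film_matched.shape)   # (512, 512) (256, 256)
```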

Just as medieval masters guarded their painting secrets, leading computer graphics companies now carefully guard the resolution of the images they simulate. Once computer-generated images are combined with film images, additional tricks are used to diminish their perfection. With the help of special algorithms, the straight edges of computer-generated objects are softened. Barely visible noise is added to the overall image to blend the computer and film elements. Sometimes, as in the final battle between the two protagonists in "Terminator 2," the scene is staged in a particular location (a smoky factory, in this example) which justifies the addition of smoke or fog to blend the film and synthetic elements together still further.
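
These two tricks -- edge softening and added noise -- can likewise be sketched in a few lines of code. The 3x3 box blur and the noise level below are illustrative guesses, not the actual algorithms used in feature-film compositing:

```python
import numpy as np

def soften(img: np.ndarray) -> np.ndarray:
    """Blur a 2-D image with a 3x3 box filter, softening hard synthetic edges."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def add_grain(img: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Overlay barely visible Gaussian noise, mimicking film grain."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

cg_element = np.zeros((64, 64))
cg_element[16:48, 16:48] = 1.0            # a hard-edged synthetic square
blended = add_grain(soften(cg_element))   # soft edges plus grain, ready to composite
```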


Too real

So, while we normally think that synthetic photographs produced through computer graphics are inferior to real photographs, in fact they are too perfect. But beyond that, we can also say that, paradoxically, they are too real. The synthetic image is free of the limitations of both human and camera vision. It can have unlimited resolution and an unlimited level of detail. It is free of the depth-of-field effect, that inevitable consequence of the lens, so everything is in focus. It is also free of grain -- the layer of noise created by film stock and by human perception. Its colors are more saturated, and its sharp lines follow the economy of geometry. From the point of view of human vision, it is hyperreal. And yet, it is completely realistic. It is simply the result of a different vision, more perfect than the human one. Whose vision is it? It is the vision of a cyborg or a computer; the vision of RoboCop and of an automatic missile. It is a realistic representation of human vision in the future, when it will be augmented by computer graphics and cleansed of noise. It is the vision of a digital grid.

The synthetic computer-generated image is not an inferior representation of our reality, but a realistic representation of a different reality. By the same logic, we should not consider the clean, skinless, too flexible, and at the same time too jerky human figures of 3-D computer animation as unrealistic -- as an imperfect approximation of the real thing, our bodies. They are a perfectly realistic representation of a cyborg body yet to come, of a world reduced to geometry, in which efficient representation via a geometric model becomes the basis of reality. The synthetic image simply represents the future. In other words, if a traditional photograph always points to a past event, a synthetic photograph points to a future event. We are now in a position to characterize the aesthetics of "Jurassic Park."

Socialist Realism

This aesthetic is that of Soviet Socialist Realism. Socialist Realism wanted to show the future in the present by projecting the perfect world of the future socialist society onto a visual reality familiar to the viewer -- the streets, faces, and cities of the 1930s. In other words, it had to retain enough of the everyday reality of its time while showing how that reality would look in the future, when everyone's body would be healthy and muscular, every street modern, every face transformed by the spirituality of communist ideology. Exactly the same happens in "Jurassic Park." The film tries to show the future of sight itself -- the perfect cyborg vision, free of noise and capable of grasping infinite details -- the vision exemplified by the original computer graphics images before they were blended with film images. But just as Socialist Realist paintings blended the perfect future with the imperfect reality of the 1930s and never depicted that future directly (there is not a single Socialist Realist work of art set in the future), "Jurassic Park" blends the future super-vision of computer graphics with the familiar vision of the film image. In "Jurassic Park," the computer image bends down before the film image: its perfection is undermined by every possible means and is also masked by the film's content. This, then, is the final paradox of digital photography. Its images are not inferior to the visual realism of traditional photography. They are perfectly real -- all too real.



NOTES

1. Lev Manovich, "Assembling Reality: Myths of Computer Graphics," AFTERIMAGE 20, no. 2 (September 1992): 12-14.
2. SIGGRAPH 93 ADVANCE PROGRAM (New York: ACM, 1993), 28.
3. William Mitchell, THE RECONFIGURED EYE: VISUAL TRUTH IN THE POST-PHOTOGRAPHIC ERA (Cambridge, Mass.: The MIT Press, 1992).
4. Ibid., 4.
5. Ibid., 6.
6. Ibid., 49.
7. Currently the most widespread technique for compressing digital photographs is JPEG. For instance, every Macintosh comes with JPEG compression software.
8. For almost a century, our standard of visual fidelity was determined by the film image. A video or television image was always viewed as an imperfect, low-quality substitute for the "real thing" -- a film-based image. Today, however, a new, even lower-quality image is becoming increasingly popular -- the image of computer multimedia. Its quality is exemplified by a typical, as of this writing, QuickTime movie: 320 by 240 pixels, 10-15 frames a second. Is the 35 mm film image going to remain the unchallenged standard, with computer technology eventually duplicating its quality? Or will a low-quality computer image be gradually accepted by the public as the new standard of visual truth?
9. Mitchell, THE RECONFIGURED EYE, 6.
10. Ibid., 7.
11. Ibid.
12. Ibid.
13. Ibid., 225.
14. Ibid.
15. Ibid., 17.
16. The research in virtual reality aims to go beyond the screen image in order to simulate both the perceptual and bodily experience of reality.
17. See Manovich, "Assembling Reality."
