SCC308
Topic 1

Development of Computer Graphics

Given how accustomed the general public has become to seeing various forms of computer generated imagery (CGI) in films and on television over the last few years, they might be forgiven for thinking that computer graphics was somehow 'invented' about a decade ago. In fact, the urge to display the output from computer programs in graphical form - originally on paper and then on a display screen - goes back to the late 1950s. The beginnings of a commercial computer graphics industry lie in the late 1960s, based initially on the development of military applications, notably flight simulators. Throughout the 1960s and 1970s computer graphics was primarily used in research - as, indeed, were most computers - and, to a lesser extent, in business. It is only in the last fifteen years that computer graphics has become 'mainstream', first in various parts of the mass entertainment industry (films, advertising, games and television) and then in home computer and entertainment systems.

Goals of Computer Graphics

As we shall see, various development 'threads' have come together to create the explosive growth in computer graphics that we have seen in recent years. We can define these by the continuing attempts to achieve greater levels of INTERACTIVITY, a greater degree of image REALISM, and REAL-TIME image generation. There has traditionally been some degree of mutual exclusivity between these goals: greater levels of realism mean more calculations, which means less possibility of real-time image creation; if images cannot be generated in real-time then the system cannot really be interactive. However, developments in hardware (and, to a lesser extent, improved algorithms) have taken us much closer to simultaneously achieving all three of these goals. High-end graphics workstations can generate high-quality, fully rendered images and are fast enough to allow the user to manipulate them 'on the fly'. This capability will almost certainly become commonplace in desktop systems within the next few years.

The "Distinctiveness" of Computer Graphics

Although computer graphics is obviously in one sense just another branch of computer science - like databases, expert systems, parallel processing and so on - it can be argued that it exhibits some distinctive features that set it apart. The most significant of these are:
  1. It provides a dynamic environment. From its earliest days computer graphics has been expected to provide real-time image support. No-one is going to be very interested in working with a flight simulator that takes thirty seconds to re-draw 'the screen' each time a pilot moves the controls; even less in a missile early-warning system that takes five minutes to re-display a set of points (missiles!).
  2. It provides an interactive environment. There are two kinds of interactivity of relevance here: the ability of the creator of graphics to work with the images in an easy and useful manner, and the ability of the 'consumer' of graphics (particularly in the entertainment field) to control them smoothly, effortlessly and quickly.
  3. It provides a simulation environment. One of the great strengths of computer graphics is that it can generate images of imaginary (and indeed impossible) worlds just as readily as it can produce representations of our 'real' world. Some of these worlds are mathematical abstractions, and 'exist' only as equations and their graphical representations.
  4. It provides an environment for visualising large quantities of data. The most significant contribution that computer graphics has made in science has been the development of techniques for visualising - seeing the structure of - huge amounts of data that are generated by modern data collection techniques.

Computer Graphics "Operations"

Computer graphics has certain operations that are common to whatever field it is applied in. In particular, the creation of 'realistic' images - of people, places or things - basically involves the following stages:

Modelling

In order to create images we need to define objects in some way; we may do this by using geometric primitives (points, lines, polygons and so on) or by using mathematical descriptions such as curve and surface equations.

Most objects also have attributes - such as colour, density, surface texture etc - which can be regarded as a 'property' of the object, or a consequence of the technique we use to generate (render) the image.
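
As an illustration (not part of the original notes), a minimal Python sketch of how a modelled object and its attributes might be represented - here assuming a simple polygon-mesh representation, with colour stored as a property of the object:

```python
# A minimal sketch of a modelled object: geometry (vertices and faces)
# plus an attribute (colour) stored as a property of the object.

class Object3D:
    def __init__(self, vertices, faces, colour=(255, 255, 255)):
        self.vertices = vertices  # list of (x, y, z) points
        self.faces = faces        # each face is a tuple of vertex indices
        self.colour = colour      # an attribute of the object

# A unit square in the z = 0 plane, modelled as two triangles.
square = Object3D(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    faces=[(0, 1, 2), (0, 2, 3)],
    colour=(200, 50, 50),
)
print(len(square.vertices), len(square.faces))  # 4 2
```

Whether colour lives on the object, on each face, or is decided at render time is exactly the kind of design choice the paragraph above alludes to.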

Storing

In most cases we will want to retain the scenes and images we create for future use, first in the computer memory (whilst they are being processed and manipulated) and later on disk, so that we can re-use them, or convert them to another form.

Manipulating

Central to the interactive process of computer graphics is the ability to change the shape, position and characteristics of objects and images.
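
In practice such manipulation is usually done with geometric transformations. A minimal sketch (an illustration, not from the original notes) of moving a point by rotation and translation in 2D:

```python
import math

# A minimal sketch of manipulation via geometric transformations:
# rotating a point about the origin, then translating it.

def rotate(point, angle):
    """Rotate (x, y) anticlockwise about the origin by `angle` radians."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

def translate(point, dx, dy):
    x, y = point
    return (x + dx, y + dy)

# Rotate the point (1, 0) by 90 degrees, then shift it right by 2.
p = translate(rotate((1, 0), math.pi / 2), 2, 0)
print(round(p[0], 6), round(p[1], 6))  # 2.0 1.0
```

Applying the same transformations to every vertex of an object moves or reshapes the whole object.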

Rendering

Just as in the 'real world' we see things because of the physical interaction between light, objects and our visual system, so in computer graphics we need to apply an algorithmic version of the physics of the real world to create an artificial image. This is what in computer graphics is called the rendering process.
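
One of the simplest examples of such an 'algorithmic version of the physics' is Lambertian (diffuse) shading, where the brightness of a surface point depends on the angle between the surface and the light. A minimal sketch (an illustration, not from the original notes):

```python
# A minimal sketch of Lambertian (diffuse) shading: brightness is
# proportional to the cosine of the angle between the surface normal
# and the direction towards the light.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def lambert(normal, to_light, intensity=1.0):
    """Diffuse brightness at a surface point; clamped at zero for
    surfaces facing away from the light."""
    n, l = normalise(normal), normalise(to_light)
    return intensity * max(0.0, dot(n, l))

# A surface facing straight up, lit from directly overhead.
print(lambert((0, 0, 1), (0, 0, 1)))  # 1.0
```

Real renderers combine many such terms (specular highlights, shadows, reflections), but they are all algorithmic stand-ins for physical light transport.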

Viewing

Once a scene made up of objects (static or dynamic) has been rendered it must be able to be displayed, from various viewpoints, on various devices (screen, printer, film ...) - otherwise, how can we see it?
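
At the heart of viewing is projection: mapping 3D scene points onto a 2D display. A minimal sketch (an illustration, not from the original notes) of a simple perspective projection, where more distant points are scaled down:

```python
# A minimal sketch of viewing: projecting a 3D point onto a 2D screen
# with a simple perspective projection (scale shrinks with depth).

def project(point, viewer_distance=2.0):
    """Map (x, y, z) to screen coordinates; z is depth beyond the screen."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# The same-sized object appears smaller the further away it is.
print(project((1, 1, 0)))  # (1.0, 1.0)
print(project((1, 1, 2)))  # (0.5, 0.5)
```

Changing the viewpoint amounts to transforming the scene before this projection step, which is why the same rendered scene can be displayed from many viewpoints.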

Interactive vs Non-interactive graphics

Until relatively recently the most significant distinction between different graphics systems was their level of interactivity. This was basically a 'binary' division: a small number of specialised systems were designed to work interactively, but most images were generated using some form of offline operation, mainly by using batch processing techniques. Now, with the development of a broad market for graphics systems and the consequent R&D into improved graphics hardware, most graphics are created dynamically and interactively. The exceptions are fully-rendered images on desktop systems (which are still too slow for this to be done in anything approaching real-time), and production-quality animation.

Major application areas

Evolution of Computer Graphics

The following table summarises a personal view of some of the key developments in computer graphics over the last thirty years:

Date Development People - organisations - institutions
1951 WHIRLWIND The Whirlwind project, under the direction of Jay Forrester, was the US missile early-warning system of the late 1950s. Central to this system were large-format vector display screens that displayed the critical data in high resolution, and that could be updated continually.
1950's "Computer Art" The earliest attempts to use computer displays in a 'non-functional' way were made by James Whitney Sr. in the late 1950s when he generated 'visual feedback loops' by pointing a camera at the display screen and using the image as input to the system to generate abstract patterns.
1962 Sketchpad At the start of the 1960s a doctoral student at MIT, Ivan Sutherland, created what we today would call a graphics workstation, complete with display system, input device (lightpen) and interactive engineering design software. The system - called Sketchpad - was the forerunner of all modern graphics systems.
1964- 'Photorealism' at the University of Utah Many of the techniques that are at the heart of 'realistic' c.g.i. were developed at Utah in the late 1960s and early 1970s. Key figures - all of whom went on to make other contributions to the growth of computer graphics - include Ivan Sutherland, David Evans, Edwin Catmull and James Blinn.
1969 Evans & Sutherland In the late 1960s Ivan Sutherland and David Evans set up what was effectively the first commercial c.g. company when they formed Evans & Sutherland. The company remains at the forefront of simulation systems, particularly flight simulators.
1974-77 Animation at NYIT In the early 1970s Edwin Catmull set up a computer graphics laboratory at the New York Institute of Technology with two aims: to develop computer-based animation systems that would produce output of sufficient quality to be attractive to the film industry, and to involve artists in the animation process. The most important artist to work in the laboratory - and whose work had the most influence on the development of computer animation - was probably Ed Emshwiller.
1982 TRON
Star Trek (Genesis effect)
Although the NYIT laboratory largely failed in its first aim, two films released in 1982 had long-term impact on the development of computer graphics. Whilst c.g.i. was used throughout Tron - and proved quite cost-effective to produce - its impact was lessened by its 'non-realism' and relatively low technical quality.
On the other hand, the short section of c.g.i in Star Trek II: The Wrath of Khan, from Lucasfilm's computer graphics division (which would later be spun off as Pixar), would have a profound effect, introducing as it did several key technical effects (such as particle systems and fractally-generated terrain) that would become an essential part of the 'armoury' of computer animators.
1982 Ray Tracing

Although new techniques for defining the appearance of objects had been developed, by the early 1980s there was still no rendering technique that had any real approximation to the physical processes by which 'real' things are seen. The development of the ray tracing method by Turner Whitted changed this, and today this is one of the most widely-used rendering methods. It is especially good at rendering reflections, refractions and shadows.
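
The core operation of ray tracing is following a ray from the eye into the scene and finding what it hits first; a Whitted-style tracer then spawns reflection, refraction and shadow rays from the hit point. A minimal sketch (an illustration, not from the original notes) of the basic ray-sphere intersection test:

```python
# A minimal sketch of the central operation in ray tracing: finding
# where a ray first hits a sphere, by solving a quadratic in the ray
# parameter t (point = origin + t * direction).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(origin, direction, centre, radius):
    """Return the smallest positive t at which the ray meets the sphere,
    or None if the ray misses it."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t > 0 else None

# A ray from the origin along +z, towards a unit sphere centred at (0, 0, 5).
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Repeating this test against every object in the scene, for every pixel, is what makes ray tracing both physically plausible and slow.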

1983 Fractals

One of the key limitations of modelling 'natural' scenes was that most conventional geometric systems could not generate the key components of natural-looking landscape - such as mountains, trees and clouds. The application of fractal systems to these areas by Loren Carpenter and others radically extended the range of 'scenes' that could be effectively modelled and rendered.
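
One classic fractal technique for terrain is midpoint displacement: repeatedly subdivide a line (or triangle, in 2D), nudging each new midpoint by a random amount that shrinks at every level. A minimal one-dimensional sketch (an illustration, not from the original notes):

```python
import random

# A minimal sketch of midpoint displacement, a fractal technique for
# natural-looking terrain: subdivide repeatedly, displacing each new
# midpoint by a random amount that halves at every level.

def midpoint_displace(levels, roughness=1.0, seed=0):
    random.seed(seed)
    heights = [0.0, 0.0]  # end points of the initial ridge line
    scale = roughness
    for _ in range(levels):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-scale, scale)
            new.extend([a, mid])
        new.append(heights[-1])
        heights = new
        scale /= 2  # smaller bumps at finer scales
    return heights

profile = midpoint_displace(levels=4)
print(len(profile))  # 17 sample heights
```

Because the displacement halves at each level, the result has detail at every scale - the statistical self-similarity that makes fractal mountains look 'natural'.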

1985 Radiosity The attraction of ray tracing was somewhat offset by its slowness, and by its emphasis on reflections and 'shininess'. In order to render more realistically the 'softer' world around us - particularly that of lighted interiors - Don Greenberg and his colleagues at Cornell University developed the radiosity rendering process, based on physical principles established by lighting engineers.
1986 RenderMan In order to smoothly link animation and rendering, and to allow animators to create scenes without needing to program them, a group at Pixar - led by Pat Hanrahan - created an extensible 'procedural language' for controlling the animation/rendering process.
1988 Tin Toy Having set up Pixar to continue his aim of integrating animation into the 'mainstream', Edwin Catmull hired the 'traditional' animator John Lasseter, whose emphasis on character and story was quite different from the 'technology-driven' approach of most other computer animators. As well as commercial work, Lasseter produced a series of successful short films, culminating in Tin Toy: the first completely computer-animated film to win the Academy Award for animation.
1989 The Abyss One of the strongest advocates of the utility of c.g.i. in films has been the director James Cameron; in a series of films - starting with The Abyss and continuing with Terminator 2: Judgment Day - he used Industrial Light and Magic to create effects that were central to both the visual style and narrative structure of the films.
1995 Toy Story The release of Pixar's Toy Story is significant for two reasons: it is the first full-length, wholly computer-generated, feature film, and it was backed financially by the Disney Corporation (for whom the film's director, John Lasseter, used to work) - the 'home' of conventional film animation. Computer animation has clearly become "mainstream".




Last modified 3rd November 1998