
Subfields in computer graphics


A broad classification of major subfields in computer graphics might be:

  1. Geometry: studies ways to represent and process surfaces

  2. Animation: studies ways to represent and manipulate motion

  3. Rendering: studies algorithms to reproduce light transport

  4. Imaging: studies image acquisition and image editing

Geometry


[Image: the Stanford Bunny mesh simplified with quadric error metrics]

The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two-dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since a surface is continuous and cannot be stored exactly, discrete digital approximations are used. Polygonal meshes (and, to a lesser extent, subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see, for instance, the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. More recently, Eulerian surface descriptions (i.e., where the spatial samples are fixed), such as level sets, have been developed into a useful representation for deforming surfaces that undergo many topological changes, with fluids being the most notable example.
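
To make the most common Lagrangian representation concrete, the sketch below shows a minimal indexed triangle mesh: a list of vertex positions plus a list of triangles that index into it. The names (Mesh, vertices, faces) are purely illustrative and not taken from any particular library.

    # Minimal indexed triangle mesh: Lagrangian samples (vertex positions)
    # plus connectivity (triangles as triples of vertex indices).
    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Mesh:
        vertices: List[Vec3] = field(default_factory=list)               # sample positions
        faces: List[Tuple[int, int, int]] = field(default_factory=list)  # vertex indices per triangle

    # A single triangle in the xy-plane.
    tri = Mesh(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
               faces=[(0, 1, 2)])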


Geometry Subfields

  • Implicit surface modeling - an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.

  • Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.

  • Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.

  • Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.

  • Subdivision surfaces - smooth surfaces defined as the limit of repeatedly refining a coarse control mesh (a minimal refinement step is sketched after this list).

  • Out-of-core mesh processing - another recent field which focuses on mesh datasets that do not fit in main memory.
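
As promised above, the following sketch shows the refinement half of one subdivision step: every triangle is split into four using edge midpoints. Real schemes such as Loop subdivision additionally apply smoothing weights to the vertex positions; the function name and data layout here are illustrative only.

    def subdivide_once(vertices, faces):
        # Split each triangle into four by inserting edge midpoints.
        # (Connectivity refinement only; Loop subdivision would also
        # re-weight old and new vertex positions to smooth the surface.)
        vertices = list(vertices)
        midpoint_index = {}

        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint_index:
                vi, vj = vertices[i], vertices[j]
                vertices.append(tuple((a + b) / 2.0 for a, b in zip(vi, vj)))
                midpoint_index[key] = len(vertices) - 1
            return midpoint_index[key]

        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return vertices, new_faces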

Animation


The subfield of animation studies descriptions of surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but physical simulation has recently become more popular as computers have become more powerful.
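
As a minimal example of the parametric, keyframe-based side, the sketch below linearly interpolates an animated value between surrounding keyframes; the function name and the plain (time, value) pairs are assumptions made for this illustration.

    def sample_keyframes(keys, t):
        # keys: (time, value) pairs sorted by time; returns the value at time t
        # by linear interpolation, clamping outside the keyframe range.
        if t <= keys[0][0]:
            return keys[0][1]
        if t >= keys[-1][0]:
            return keys[-1][1]
        for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                u = (t - t0) / (t1 - t0)
                return v0 + u * (v1 - v0)

    # The animated value rises to 10 at t=1 and falls back to 0 at t=2.
    sample_keyframes([(0.0, 0.0), (1.0, 10.0), (2.0, 0.0)], 1.5)   # -> 5.0
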
Subfields

  • Performance capture

  • Character animation

  • Physical simulation (e.g. cloth modelling, animation of fluid dynamics, etc.; see the sketch after this list)
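
The sketch referred to in the last item shows the core loop of a physical simulation: advancing a single point mass through time with semi-implicit (symplectic) Euler integration. The function name and constants are illustrative.

    def step_particle(position, velocity, dt, gravity=(0.0, -9.81, 0.0)):
        # One semi-implicit Euler step: update velocity from forces,
        # then update position from the new velocity.
        velocity = tuple(v + g * dt for v, g in zip(velocity, gravity))
        position = tuple(p + v * dt for p, v in zip(position, velocity))
        return position, velocity

    pos, vel = (0.0, 2.0, 0.0), (1.0, 0.0, 0.0)
    for _ in range(10):                 # ten frames at 60 Hz
        pos, vel = step_particle(pos, vel, 1.0 / 60.0)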

Rendering


[Image: a Cornell box rendered with path tracing and irradiance caching]

Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.
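
These two operations meet in the rendering equation, the standard statement of light transport at a surface point x; with L_o the outgoing radiance, L_e the emitted radiance, L_i the incoming radiance, f_r the BSDF, n the surface normal and Omega the hemisphere above the point, a standard form reads:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i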


Transport

Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
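
Visibility is typically evaluated by tracing shadow rays against the scene geometry; the hypothetical sketch below tests whether the segment between two points is blocked by any of a set of occluding spheres.

    import math

    def ray_sphere_t(origin, direction, center, radius):
        # Smallest positive t with origin + t*direction on the sphere,
        # or None if the ray misses. direction is assumed normalized.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        root = math.sqrt(disc)
        for t in ((-b - root) / 2.0, (-b + root) / 2.0):
            if t > 1e-6:
                return t
        return None

    def visible(p, q, spheres):
        # True if nothing in `spheres` (a list of (center, radius) pairs)
        # blocks the segment from p to q.
        d = [qi - pi for qi, pi in zip(q, p)]
        dist = math.sqrt(sum(di * di for di in d))
        direction = [di / dist for di in d]
        for center, radius in spheres:
            t = ray_sphere_t(p, direction, center, radius)
            if t is not None and t < dist - 1e-6:
                return False
        return True
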
Scattering

Models of scattering and shading are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Shading can be broken down into two orthogonal issues, which are often studied independently:



  1. scattering - how light interacts with the surface at a given point

  2. shading - how material properties vary across the surface

The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
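
As a concrete instance of the first issue, the sketch below evaluates the simplest common scattering model, a Lambertian (ideal diffuse) BRDF, together with the cosine-weighted shading it produces for a single directional light; the function names are invented for this example.

    import math

    def lambertian_brdf(albedo):
        # Ideal diffuse scattering: a constant BRDF of albedo / pi,
        # independent of the incoming and outgoing directions.
        return [a / math.pi for a in albedo]

    def diffuse_shade(albedo, normal, light_dir, light_radiance):
        # Outgoing radiance for one directional light:
        #   L_o = f_r * L_i * max(0, n . l)
        cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        return [f * li * cos_theta
                for f, li in zip(lambertian_brdf(albedo), light_radiance)]
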
Other subfields

  • Physically-based rendering - concerned with generating images according to the laws of geometric optics.

  • Real time rendering - focuses on rendering for interactive applications, typically using specialized hardware like GPUs.

  • Non-photorealistic rendering.

  • Relighting - a recent area concerned with quickly re-rendering scenes under changed illumination.



Geometry processing



Geometry processing, or mesh processing, is a fast-growing area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas, from multimedia, entertainment and classical computer-aided design to biomedical computing, reverse engineering and scientific computing.
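
As one small, self-contained example of a geometry processing operation (fairing), the sketch below performs a single uniform Laplacian smoothing step on an indexed triangle mesh; the function name and the plain list-of-tuples data layout are assumptions made for this illustration.

    def laplacian_smooth(vertices, faces, lam=0.5):
        # One uniform Laplacian smoothing (fairing) step: move each vertex a
        # fraction `lam` of the way toward the average of its mesh neighbours.
        neighbours = [set() for _ in vertices]
        for a, b, c in faces:
            neighbours[a].update((b, c))
            neighbours[b].update((a, c))
            neighbours[c].update((a, b))
        result = []
        for i, v in enumerate(vertices):
            if not neighbours[i]:
                result.append(v)
                continue
            avg = [sum(vertices[j][k] for j in neighbours[i]) / len(neighbours[i])
                   for k in range(3)]
            result.append(tuple(v[k] + lam * (avg[k] - v[k]) for k in range(3)))
        return result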

