Title page for ETD etd-09252012-104720

Document Type Doctoral Thesis
Author Rautenbach, Helperus Ritzema
Email prautenbach@cs.up.ac.za
URN etd-09252012-104720
Document Title An empirically derived system for high-speed rendering
Degree PhD
Department Computer Science
Advisor Prof D G Kourie (Supervisor)
Keywords
  • shaders
  • fuzzy logic
  • high dynamic range lighting
  • volumetric materials
  • instruction set utilisation
  • light maps
  • dynamic algorithm selection
  • shadow mapping
  • refraction
  • parallax mapping
  • normal maps
  • physics
  • reflection
  • distributed rendering
  • displacement mapping
  • depth-of-field
  • chromatic dispersion
  • stencil shadow volumes
  • algorithms
  • specular highlights
  • soft shadows
  • spatial subdivision
  • ambient occlusion
  • particles
Date 2012-09-06
Availability unrestricted
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its core concern is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance, and to isolate key algorithmic weaknesses and possible bottleneck areas.

Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine that controls the real-time cycling of rendering algorithms and special-effects groupings based on environmental conditions. Furthermore, as a proof of concept, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping, balancing Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for increased speed of execution. This parallel computing system enables the CPU to process cube-mapping computations while the GPU is tasked with calculations traditionally handled solely by the CPU.
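The selection engine described above can be sketched in outline: each candidate algorithm carries an empirically benchmarked cost profile, and the engine picks, per frame, the highest-quality algorithm whose estimated cost still fits the remaining frame-time budget. The structure, names, costs and thresholds below are illustrative assumptions, not the thesis's actual implementation.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative profile for one rendering algorithm: an empirically
// derived per-light cost (milliseconds) and a relative quality rank.
struct AlgorithmProfile {
    std::string name;        // e.g. "stencil shadow volumes", "shadow mapping"
    double      costMs;      // benchmarked cost per light source (assumed)
    int         qualityRank; // higher = better visual quality
};

// Choose the highest-quality algorithm whose estimated cost fits into
// the frame-time budget left over after all other per-frame work.
std::string selectAlgorithm(const std::vector<AlgorithmProfile>& profiles,
                            double frameBudgetMs, double otherWorkMs,
                            int lightCount)
{
    const double available = frameBudgetMs - otherWorkMs;
    const AlgorithmProfile* best = nullptr;
    for (const auto& p : profiles) {
        if (p.costMs * lightCount <= available &&
            (!best || p.qualityRank > best->qualityRank))
            best = &p;
    }
    if (!best) { // nothing fits the budget: fall back to the cheapest option
        best = &profiles.front();
        for (const auto& p : profiles)
            if (p.costMs < best->costMs) best = &p;
    }
    return best->name;
}
```

Run each frame, this yields exactly the behaviour the abstract describes: expensive, high-quality algorithms when headroom exists, cheaper fallbacks as the scene grows heavier.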

All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for Shader Model 4.0 shaders (via Microsoft’s High Level Shading Language) for effects such as high dynamic range (HDR) rendering, dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics and particle effects. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry.

Critical analysis was performed via scripted camera movement and scripted object and light-source additions – done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments.
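The scripted-testing idea above can be sketched as a fixed event timeline replayed identically on every run, so that frame-time measurements from separate runs are directly comparable. The event types, timeline representation and replay function here are illustrative assumptions only.

```cpp
#include <cassert>
#include <vector>

// Illustrative benchmark-script events: camera keyframes and dynamic
// object/light insertions fired at fixed times on the timeline.
enum class EventType { MoveCamera, AddLight, AddMesh };

struct ScriptedEvent {
    double    timeSec; // when the event fires on the benchmark timeline
    EventType type;
};

// Replay the script up to `durationSec` and report how many events fired.
// Because the timeline is fixed data, every replay is identical, which is
// what makes the benchmark results reproducible.
int replayScript(const std::vector<ScriptedEvent>& script, double durationSec)
{
    int fired = 0;
    for (const auto& e : script)
        if (e.timeSec <= durationSec)
            ++fired;
    return fired;
}
```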

The evaluation criteria (identified to assess the relationship between rendering quality and performance) allow us, as mentioned, to cycle algorithms effectively based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU. This unification ensures the following: nearby effects are always of high quality (where computational resources are available); distant effects are, under certain conditions, rendered at a lower quality; and frames-per-second rendering performance is always maximised.
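The distance-based quality policy just described can be sketched as a simple tiering function: effects near the viewer get the highest quality tier when resources permit, while distant effects drop to cheaper variants. The tier boundaries and the resources flag are assumptions made for illustration.

```cpp
#include <cassert>

// Illustrative quality tiers for a rendered effect.
enum class EffectQuality { High, Medium, Low };

// Map an effect's distance from the camera to a quality tier.
// Nearby effects are high quality where computational resources are
// available; distant effects are rendered at lower quality so that
// frame rate stays maximised. Thresholds are illustrative.
EffectQuality qualityForDistance(float distance, bool resourcesAvailable)
{
    if (distance < 25.0f)                       // nearby
        return resourcesAvailable ? EffectQuality::High
                                  : EffectQuality::Medium;
    if (distance < 100.0f)                      // mid-range
        return EffectQuality::Medium;
    return EffectQuality::Low;                  // distant
}
```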

The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions maximises rendering quality and performance, and shows that it is possible to render high-quality visual effects with realism without overburdening scarce computational resources. Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor-intensive and can otherwise only be implemented successfully on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions, and by exploiting algorithmic strengths, can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Our system, by contrast, ensures that the performance-versus-quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered: some scenes require more computational power than others, which would normally result in noticeable slowdowns, but these are avoided thanks to the system’s dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU.

© 2012 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.

Please cite as follows:

Rautenbach, HR 2012, An empirically derived system for high-speed rendering, PhD thesis, University of Pretoria, Pretoria, viewed yymmdd, <http://upetd.up.ac.za/thesis/available/etd-09252012-104720/>


Approximate Download Time (Hours:Minutes:Seconds)

Filename          Size       28.8 Modem  56K Modem  ISDN (64 Kb)  ISDN (128 Kb)  Higher-speed Access
00front.pdf       364.36 Kb  00:01:41    00:00:52   00:00:45      00:00:22       00:00:01
01part-I.pdf      1.28 Mb    00:05:55    00:03:02   00:02:39      00:01:19       00:00:06
02part-II.pdf     962.24 Kb  00:04:27    00:02:17   00:02:00      00:01:00       00:00:05
03references.pdf  136.91 Kb  00:00:38    00:00:19   00:00:17      00:00:08       < 00:00:01
04appendices.pdf  1.10 Mb    00:05:06    00:02:37   00:02:17      00:01:08       00:00:05

