Perceptually guided high-fidelity rendering exploiting movement bias in visual attention
Hasic, Jasminka, Chalmers, Alan and Sikudova, Elena (2010) Perceptually guided high-fidelity rendering exploiting movement bias in visual attention. ACM Transactions on Applied Perception, Vol.8 (No.1). pp. 1-19. doi:10.1145/1857893.1857899 ISSN 1544-3558.
Research output not available from this repository.
Official URL: http://dx.doi.org/10.1145/1857893.1857899
Abstract
A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of "realism in real time" in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2-degree foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or the task the user is performing. Such "glimpses" of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may literally go unnoticed.
Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered without any perceptual loss of quality, in a significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. A high-saliency movement can be generated in a scene if an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments, including eye-tracking, the perception of rendering quality in dynamic complex scenes based on the introduction of a moving object in a scene. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types. Future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times. We also show how these results can have important implications for virtual environment and computer games applications.
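The selective-rendering idea the abstract describes can be illustrated with a minimal sketch: give pixels near a salient moving object a large per-pixel sample budget and let quality fall off elsewhere. The function names, the Gaussian falloff, and all parameters below are illustrative assumptions for exposition, not the authors' renderer.

```python
import numpy as np

def saliency_map(width, height, mover_xy, sigma=15.0):
    """Toy saliency: falls off with distance from the moving object's
    screen position (a Gaussian falloff, an assumed model)."""
    ys, xs = np.mgrid[0:height, 0:width]
    d2 = (xs - mover_xy[0]) ** 2 + (ys - mover_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def allocate_samples(saliency, min_spp=1, max_spp=64):
    """Linearly map saliency in [0, 1] to a per-pixel sample budget
    (samples per pixel) for a selective renderer."""
    spp = min_spp + saliency * (max_spp - min_spp)
    return np.rint(spp).astype(int)

sal = saliency_map(128, 96, mover_xy=(64, 48))
spp = allocate_samples(sal)
# The pixel under the moving object gets the full budget; the far
# corner drops to the minimum.
print(spp[48, 64], spp[0, 0])  # prints: 64 1
```

Driving a path tracer with such a budget spends most of the computation where attention is likely to land, which is the mechanism behind the reduced rendering times reported in the article.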
Item Type: Journal Article
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software; T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Faculty of Science, Engineering and Medicine > Engineering > WMG (Formerly the Warwick Manufacturing Group)
Journal or Publication Title: ACM Transactions on Applied Perception
Publisher: Association for Computing Machinery, Inc.
ISSN: 1544-3558
Official Date: October 2010
Volume: Vol.8
Number: No.1
Number of Pages: 19
Page Range: pp. 1-19
DOI: 10.1145/1857893.1857899
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Data sourced from Thomson Reuters' Web of Knowledge