Top-down visual attention for efficient rendering of task related scenes
Sundstedt, Veronica, Chalmers, Alan, Cater, Kirsten and Debattista, Kurt (2004) Top-down visual attention for efficient rendering of task related scenes. In: VMV 2004: 9th International Fall Workshop Vision, Modeling, and Visualization 2004, Stanford (California), USA, 16-18 Nov 2004. Published in: Vision, Modeling, and Visualization 2004: Proceedings, November 16-18, 2004, Stanford, USA.
Full text not available from this repository.
Official URL: http://www.mpi-inf.mpg.de/conferences/vmv04/
The perception of a virtual environment depends on the user and the task the user is currently performing in that environment. Models of the human visual system can thus be exploited to significantly reduce computation time when rendering high-fidelity images, without compromising the perceived visual quality. This paper considers how an image can be selectively rendered while a user is performing a visual task in an environment. In particular, we investigate to what extent viewers fail to notice degradations in image quality between non-task-related and task-related areas when quality parameters such as image resolution, edge anti-aliasing, reflections and shadows are altered.
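The selective-rendering idea described above can be sketched as compositing a high-quality render into a cheap render only where a task-importance map marks the region as task-related. The function name, the binary importance map, and the thresholding scheme below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def selective_composite(high_q, low_q, importance, threshold=0.5):
    """Keep high-quality pixels where the (hypothetical) task-importance
    map exceeds the threshold; use the cheap render everywhere else."""
    mask = (importance >= threshold)[..., np.newaxis]  # broadcast over RGB
    return np.where(mask, high_q, low_q)

# Tiny 2x2 RGB example: only the top-left pixel is task-related.
high_q = np.ones((2, 2, 3))          # stand-in for a full-quality render
low_q = np.zeros((2, 2, 3))          # stand-in for a cheap render
importance = np.array([[1.0, 0.0],
                       [0.0, 0.0]])  # task map: 1 = task-related region
out = selective_composite(high_q, low_q, importance)
```

In a real renderer the importance map would instead drive per-region quality settings (resolution, anti-aliasing samples, reflection and shadow computation) before rendering, rather than blending two finished frames.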
Item Type: Conference Item (Paper)
Divisions: Faculty of Science > WMG (Formerly the Warwick Manufacturing Group)
Journal or Publication Title: Vision, Modeling, and Visualization 2004: Proceedings, November 16-18, 2004, Stanford, USA
Conference Paper Type: Paper
Title of Event: VMV 2004: 9th International Fall Workshop Vision, Modeling, and Visualization 2004
Type of Event: Conference
Location of Event: Stanford (California), USA
Date(s) of Event: 16-18 Nov 2004