dc.description.abstract | Mobile devices, such as smartphones, are increasingly used for watching videos. Video processing requires frequent memory accesses that consume a significant amount of power due to large data sizes and intensive computational requirements. This limits battery life and frustrates users. Memory designers have focused on hardware-level power-optimization techniques without considering how hardware performance influences viewers' actual experience. The human visual system is limited in its ability to detect subtle degradations in image quality. For example, under high ambient illumination (such as outdoors in direct sunlight), the veiling luminance (i.e., glare) on the screen of a mobile device can effectively mask imperfections in the image. Under these circumstances, a video can be rendered at lower than full quality without the viewer being able to detect any difference. As a result, the isolation between hardware design and viewer experience significantly increases hardware implementation overhead and power consumption due to overly pessimistic design margins, while integrating the two would have the opposite effect.
In this dissertation, viewer-awareness, content-awareness, and hardware adaptation are integrated to achieve power optimization without degrading video quality as perceived by users. Specifically, this dissertation will (i) experimentally and mathematically connect viewer experience, ambient illuminance, and memory performance; (ii) develop energy-quality adaptive hardware that adjusts memory usage based on ambient luminance to reduce power consumption without impacting viewer experience; (iii) design various mobile video systems to fully evaluate the effectiveness of the developed methodologies; and (iv) survey state-of-the-art research in related areas and then push the boundary further using the novel techniques discussed, achieving optimized quality, reduced silicon area overhead, and lower power consumption in video memory. | en_US |