Show simple item record

dc.contributor.author    Haidous, Ali Ahmad
dc.description.abstract    Mobile devices, such as smartphones, are increasingly used for watching videos. Video processing requires frequent memory accesses that consume a significant amount of power due to large data sizes and intensive computational requirements. This limits battery life and frustrates users. Memory designers focus on hardware-level power-optimization techniques without considering how hardware performance influences viewers' actual experience. The human visual system is limited in its ability to detect subtle degradations in image quality. For example, under conditions of high ambient illumination – such as outdoors in direct sunlight – the veiling luminance (i.e., glare) on the screen of a mobile device can effectively mask imperfections in the image. Under these circumstances, a video can be rendered at lower than full quality without the viewer being able to detect any difference. As a result, isolating hardware design from viewer experience significantly increases hardware implementation overhead and power consumption due to overly pessimistic design margins, while integrating the two would have the opposite effect. In this dissertation, viewer-awareness, content-awareness, and hardware adaptation are integrated to achieve power optimization without degrading video quality as perceived by users. Specifically, this dissertation will (i) experimentally and mathematically connect viewer experience, ambient illuminance, and memory performance; (ii) develop energy-quality adaptive hardware that can adjust memory usage based on ambient luminance to reduce power consumption without impacting viewer experience; (iii) design various mobile video systems to fully evaluate the effectiveness of the developed methodologies; and (iv) provide an overview of bleeding-edge research in related areas, then push the boundary further using the novel techniques discussed to achieve optimized quality, silicon area overhead, and power reduction in video memory.    en_US
dc.publisher    North Dakota State University    en_US
dc.rights    NDSU policy 190.6.2    en_US
dc.title    Turning Visual Noise Into Hardware Efficiency: Systems of Viewer and Content Aware Power-Quality Scalable Embedded Memories With ECC-Adaptation for Big Videos and Deep Learning    en_US
dc.type    Dissertation    en_US
dc.type    Video    en_US
dc.date.accessioned    2022-06-07T16:23:35Z
dc.date.available    2022-06-07T16:23:35Z
dc.date.issued    2021
dc.identifier.uri    https://hdl.handle.net/10365/32699
dc.subject    big videos    en_US
dc.subject    content aware    en_US
dc.subject    embedded memory    en_US
dc.subject    low power    en_US
dc.subject    viewer aware    en_US
dc.identifier.orcid    0000-0002-9799-0631
dc.rights.uri    https://www.ndsu.edu/fileadmin/policy/190.pdf    en_US
ndsu.degree    Doctor of Philosophy (PhD)    en_US
ndsu.college    Engineering    en_US
ndsu.department    Electrical and Computer Engineering    en_US
ndsu.advisor    Wang, Danling
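
The abstract above describes energy-quality adaptive memory that lowers its operating point when the ambient illuminance is high enough for screen glare to mask the resulting artifacts. The following is a minimal sketch of that idea in C; the lux thresholds, the quality levels, and the select_quality function are hypothetical placeholders for illustration only, not values or interfaces from the dissertation.

    /*
     * Minimal sketch: map measured ambient illuminance to a memory
     * energy-quality operating point. All thresholds and level names
     * below are assumed for illustration.
     */
    #include <stdio.h>

    /* Hypothetical energy-quality operating points for the frame memory. */
    typedef enum {
        QUALITY_FULL = 0,  /* full supply voltage / full ECC: no visible artifacts   */
        QUALITY_MEDIUM,    /* relaxed retention / ECC: minor noise, masked by glare  */
        QUALITY_LOW        /* aggressive scaling: noise masked under direct sunlight */
    } quality_level_t;

    /* Select a quality level from ambient illuminance in lux.
     * Thresholds are placeholders, not values from the dissertation. */
    static quality_level_t select_quality(double ambient_lux)
    {
        if (ambient_lux < 500.0)         /* typical indoor lighting: degradation visible */
            return QUALITY_FULL;
        else if (ambient_lux < 10000.0)  /* overcast outdoors: partial masking by glare  */
            return QUALITY_MEDIUM;
        else                             /* direct sunlight: strong veiling luminance    */
            return QUALITY_LOW;
    }

    int main(void)
    {
        const double samples[] = { 150.0, 2000.0, 50000.0 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
            printf("%.0f lux -> quality level %d\n",
                   samples[i], (int)select_quality(samples[i]));
        }
        return 0;
    }

In a real system the selected level would drive knobs such as memory supply voltage or ECC strength; here it is only printed so the mapping can be inspected.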

