Video from a Single Coded Exposure Photograph using a Learned Over-Complete Dictionary

Yasunobu Hitomi; Jinwei Gu; Mohit Gupta; Tomoo Mitsunaga; Shree K. Nayar

Computer Science
Book/Journal Title: 2011 IEEE International Conference on Computer Vision: 6-13 November 2011, Barcelona, Spain
Cameras face a fundamental tradeoff between spatial and temporal resolution: digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware cost. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous work: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical constraints on the sampling scheme imposed by the architectures of present image sensors. Consequently, our sampling scheme can be implemented on image sensors with a straightforward modification to the control unit. To demonstrate the power of our approach, we have implemented a prototype imaging system with per-pixel coded exposure control using a liquid crystal on silicon (LCoS) device. Using both simulations and experiments on a wide range of scenes, we show that our method can effectively reconstruct a video from a single image while maintaining high spatial resolution.
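To illustrate the reconstruction idea the abstract describes, the sketch below recovers a sparse space-time patch from a single per-pixel coded exposure measurement using orthogonal matching pursuit. The random dictionary, patch dimensions, sparsity level, and 3-frame exposure window are toy assumptions for illustration only; the paper uses a dictionary learned from real video patches and a hardware-constrained exposure code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a video patch of P pixels x T frames, flattened to length n.
P, T = 16, 9
n = P * T          # size of the space-time patch
k = 3              # assumed sparsity of the patch in the dictionary
n_atoms = 4 * n    # over-complete dictionary (random here, learned in the paper)

# Illustrative over-complete dictionary with unit-norm atoms.
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Ground-truth patch: a sparse combination of k dictionary atoms.
alpha_true = np.zeros(n_atoms)
idx = rng.choice(n_atoms, size=k, replace=False)
alpha_true[idx] = rng.standard_normal(k)
x = D @ alpha_true                     # flattened space-time volume (frame-major)

# Per-pixel coded exposure: each pixel integrates light only during a
# contiguous 3-frame window; one photograph = sum over the exposed frames.
S = np.zeros((P, n))
for p in range(P):
    start = rng.integers(0, T - 2)
    for t in range(start, start + 3):
        S[p, t * P + p] = 1.0
y = S @ x                              # the single coded exposure photograph

# Orthogonal matching pursuit: greedily pick atoms of A = S @ D that best
# explain the residual, then refit coefficients on the chosen support.
A = S @ D
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

alpha_hat = np.zeros(n_atoms)
alpha_hat[support] = coef
x_hat = D @ alpha_hat                  # recovered space-time patch
rel_err = float(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Note the key asymmetry the paper exploits: the sensor records only P values (one per pixel), yet the sparse prior over the learned dictionary lets the solver estimate all P × T values of the space-time volume.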