Video inpainting under constrained camera motion

Abstract
A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this paper. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, and it may occlude one object and be occluded by another. The algorithm consists of a simple preprocessing stage and two video inpainting steps. In the preprocessing stage, each frame is segmented into foreground and background. This segmentation is used to build three image mosaics that help to produce temporally consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, the moving foreground objects that are "occluded" by the region to be inpainted are reconstructed. To this end, the gap is filled as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, the remaining hole is inpainted with the background. To accomplish this, the frames are first aligned and background information is copied directly when possible; the remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints: it permits some camera motion, is simple to implement, is fast, does not require statistical models of the background or foreground, works well in the presence of rich and cluttered backgrounds, and produces results with no visible blurring or motion artifacts. A number of real examples are shown in support of these findings.
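
To make the two-step structure of the fill concrete, the following is a minimal Python/NumPy sketch, not the paper's implementation. It assumes greyscale frames, precomputed hole and foreground masks, and known integer per-frame camera shifts (all hypothetical inputs), and it replaces the priority-based patch copying and spatiotemporal texture synthesis with simple pixel-wise copies and a median over aligned background observations.

import numpy as np

def inpaint_video(frames, hole_masks, fg_masks, shifts):
    """frames: list of HxW float arrays (greyscale, hypothetical format).
    hole_masks: per-frame bool arrays marking the region to inpaint.
    fg_masks: per-frame bool arrays marking the moving foreground.
    shifts: per-frame (dy, dx) integer camera shifts into mosaic coordinates."""
    frames = [f.copy() for f in frames]
    hole_masks = [h.copy() for h in hole_masks]
    H, W = frames[0].shape

    # Step 1: reconstruct the occluded moving foreground by copying
    # foreground pixels observed at the same mosaic location in other frames.
    for t, (frame, hole) in enumerate(zip(frames, hole_masks)):
        for y, x in zip(*np.nonzero(hole)):
            gy, gx = y + shifts[t][0], x + shifts[t][1]   # mosaic coordinates
            for s in range(len(frames)):
                if s == t:
                    continue
                sy, sx = gy - shifts[s][0], gx - shifts[s][1]
                if (0 <= sy < H and 0 <= sx < W
                        and fg_masks[s][sy, sx] and not hole_masks[s][sy, sx]):
                    frame[y, x] = frames[s][sy, sx]
                    hole[y, x] = False
                    break

    # Step 2: fill the remaining hole with background. The aligned background
    # observations from the other frames are combined with a median, standing
    # in for the paper's mosaic copy followed by spatiotemporal synthesis.
    for t, (frame, hole) in enumerate(zip(frames, hole_masks)):
        for y, x in zip(*np.nonzero(hole)):
            gy, gx = y + shifts[t][0], x + shifts[t][1]
            samples = [
                frames[s][gy - shifts[s][0], gx - shifts[s][1]]
                for s in range(len(frames))
                if 0 <= gy - shifts[s][0] < H and 0 <= gx - shifts[s][1] < W
                and not hole_masks[s][gy - shifts[s][0], gx - shifts[s][1]]
                and not fg_masks[s][gy - shifts[s][0], gx - shifts[s][1]]
            ]
            if samples:
                frame[y, x] = np.median(samples)
    return frames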