Video Stabilization using Procrustes Analysis of Trajectories

Geethu Miriam Jacob and Sukhendu Das
Visualization and Perception Lab
Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India

Accepted in the Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2016), ACM

DOI:https://doi.org/10.1145/3009977.3009989

Abstract

Video stabilization algorithms are often necessary as a pre-processing stage for many applications in video analytics. The major challenges in video stabilization are the jittery motion paths of a camera, large foreground moving objects with arbitrary motion, and occlusions. In this paper, a simple yet powerful video stabilization algorithm is proposed that eliminates the trajectories with higher dynamism arising from jitter. A block-wise stabilization of the camera motion is performed by analyzing the trajectories in Kendall's shape space. A 3-stage iterative process is proposed for each block of frames. In the first stage of the iterative process, the trajectories with relatively higher dynamism (estimated using optical flow) are eliminated. In the second stage, a Procrustes alignment is performed on the remaining trajectories and the Fréchet mean of the aligned trajectories is estimated. Finally, the Fréchet mean is stabilized, and a transformation of the stabilized Fréchet mean back to the original space (of the trajectories) yields the stabilized trajectories. A global optimization function has been designed for stabilization, minimizing wobbles and distortions in the frames. As the motion paths of the higher and lower dynamic regions become more distinct after stabilization, this iterative process helps identify the stabilized background trajectories (those with lower dynamism), which are used to warp the frames and render the stabilized output. Experiments are performed with varying levels of jitter introduced on stable videos, apart from a few benchmark natural jittery videos. In cases where synthetic jitter is fused onto stable videos, an error norm comparing the ground-truth scores (scores of the stable videos) with the scores of the stabilized videos is used for a comparative study of performance. The results show the superiority of the proposed method over other state-of-the-art methods.
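The two geometric building blocks named in the abstract, Procrustes alignment of trajectories and the Fréchet mean in Kendall's shape space, can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: trajectory elimination by optical-flow dynamism, the stabilization of the mean, and the frame warping are all omitted, and reflections are not explicitly excluded from the alignment.

```python
import numpy as np

def procrustes_align(X, Y):
    """Align trajectory Y to X after removing translation and scale,
    i.e. compare their pre-shapes in Kendall's shape space.
    X, Y: (n, 2) arrays of a feature point's positions over n frames."""
    Xc = X - X.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)          # unit-norm pre-shape of X
    Yc = Y - Y.mean(axis=0)
    Yc = Yc / np.linalg.norm(Yc)          # unit-norm pre-shape of Y
    # Optimal rotation (orthogonal Procrustes) via SVD of the cross-covariance
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    return Yc @ (U @ Vt).T

def frechet_mean(trajectories, iters=10):
    """Iterative estimate of the Frechet mean shape of a set of trajectories:
    align all trajectories to the current mean, average, and re-project
    onto the pre-shape sphere."""
    mean = trajectories[0] - trajectories[0].mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    for _ in range(iters):
        aligned = [procrustes_align(mean, T) for T in trajectories]
        mean = np.mean(aligned, axis=0)
        mean = mean / np.linalg.norm(mean)   # back onto the pre-shape sphere
    return mean
```

For example, a trajectory that is a rotated, scaled, and translated copy of another aligns exactly onto the latter's pre-shape, and the Fréchet mean of the two coincides with that common shape.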

PDF

Stabilization Results


More qualitative stabilization results (videos). Click the links below to view the results for each category (hosted on YouTube).

Simple Running

Crowd

Large Parallax

Zooming

Quantitative evaluation of stabilization using the cropping measure (ideally close to 1) for various categories of videos

Category       | Warp Stabilizer (Liu et al., ACM TOG, 2011) | YouTube Stabilizer (Grundmann et al., CVPR, 2011) | Bundled Paths (Liu et al., ACM TOG, 2013) | Proposed
Simple         | 0.825  | 0.783  | 0.856 | 0.871
Running        | 0.643  | 0.715  | 0.810 | 0.885
Crowd          | 0.7734 | 0.774  | 0.855 | 0.852
Large Parallax | 0.661  | 0.8361 | 0.864 | 0.854
Zooming        | 0.607  | 0.867  | 0.785 | 0.877


Quantitative evaluation of stabilization using the distortion score (lower is better; minimum 1) for various categories of videos

Category       | Warp Stabilizer (Liu et al., ACM TOG, 2011) | YouTube Stabilizer (Grundmann et al., CVPR, 2011) | Bundled Paths (Liu et al., ACM TOG, 2013) | Proposed
Simple         | 1.03  | 1.02  | 1.025 | 1.01
Running        | 1.32  | 1.73  | 1.13  | 1.08
Crowd          | 1.025 | 1.03  | 1.05  | 1.02
Large Parallax | 1.029 | 1.039 | 1.02  | 1.01
Zooming        | 1.012 | 1.09  | 1.02  | 1.011


Quantitative evaluation of stabilization using the stability score (higher is better) for various categories of videos

Category       | Warp Stabilizer (Liu et al., ACM TOG, 2011) | YouTube Stabilizer (Grundmann et al., CVPR, 2011) | Bundled Paths (Liu et al., ACM TOG, 2013) | Proposed
Simple         | 0.349 | 0.305 | 0.447 | 0.531
Running        | 0.190 | 0.231 | 0.438 | 0.517
Crowd          | 0.359 | 0.289 | 0.360 | 0.439
Large Parallax | 0.211 | 0.366 | 0.354 | 0.374
Zooming        | 0.156 | 0.346 | 0.432 | 0.536
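The three metrics tabulated above (cropping, distortion, and stability, as popularized by the Bundled Camera Paths work of Liu et al., 2013) can be approximated as sketched below. This is a simplified illustration under stated assumptions, not the exact evaluation code behind the tables: cropping and distortion are read off the homography H mapping an input frame to its stabilized output, and stability is taken as the low-frequency energy fraction of a 1-D camera-path signal.

```python
import numpy as np

def cropping_and_distortion(H):
    """Per-frame scores from the 3x3 homography H mapping an input frame
    to its stabilized output (a simplified reading of Liu et al., 2013).
    Cropping ~ area scale of the affine part (ideally close to 1);
    distortion = ratio of its singular values (>= 1, lower is better)."""
    A = H[:2, :2] / H[2, 2]                   # normalized affine part
    s = np.linalg.svd(A, compute_uv=False)    # s[0] >= s[1] > 0
    cropping = np.sqrt(s[0] * s[1])           # geometric-mean scale
    distortion = s[0] / s[1]                  # anisotropy of the warp
    return cropping, distortion

def stability_score(path):
    """Stability of a 1-D camera-path signal (e.g. accumulated translation):
    fraction of spectral energy in the lowest few non-DC frequencies,
    so smoother paths score higher."""
    E = np.abs(np.fft.fft(path))[1:len(path) // 2] ** 2  # drop DC and mirror
    return E[:5].sum() / E.sum()
```

As a sanity check, an identity homography yields cropping 1 and distortion 1, and a smooth sinusoidal path scores higher than the same path corrupted by random jitter.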