Moving Object Segmentation in Jittery Videos by Stabilizing Trajectories Modeled in Kendall's Shape Spaces

Geethu Miriam Jacob and Sukhendu Das
Visualization and Perception Lab
Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India

Accepted at the British Machine Vision Conference (BMVC-2017)

Abstract

Moving object segmentation is a challenging task for jittery/wobbly videos. In such videos, the non-smooth camera motion makes it hard to discriminate between foreground objects and background layers. While most recent methods for video object segmentation fail in this scenario, our method generates an accurate segmentation of a single moving object. The proposed method first performs a sparse segmentation, where frame-wise labels are assigned only to trajectory coordinates, followed by pixel-wise labeling of the frames. The sparse segmentation involves stabilization and clustering of trajectories in a 3-stage iterative process. At the 1st stage, the trajectories are clustered using pairwise Procrustes distance as a cue for creating an affinity matrix. The 2nd stage performs a block-wise Procrustes analysis of the trajectories and estimates Frechet means (in Kendall's shape space) of the clusters. The Frechet means represent the average trajectories of the motion clusters. An optimization function has been formulated to stabilize the Frechet means, yielding stabilized trajectories at the 3rd stage. The accuracy of the motion clusters is iteratively refined, producing distinct groups of stabilized trajectories. Next, the labels obtained from the sparse segmentation are propagated for pixel-wise labeling of the frames, using a GraphCut-based energy formulation. The use of Procrustes analysis and energy minimization in Kendall's shape space for moving object segmentation in jittery videos is the novelty of this work. A second contribution is a dataset of 20 real-world natural jittery videos with manually annotated ground truth. Experiments are also performed with controlled levels of artificial jitter added to videos of the SegTrack2 dataset. Qualitative and quantitative results indicate the superiority of the proposed method.
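
To make the trajectory-level pipeline concrete, below is a minimal Python sketch of stages 1 and 2 under stated assumptions; it is not the authors' implementation. Trajectories are assumed to be (T, 2) NumPy arrays of tracked point coordinates over the same T frames, and the function names (cluster_trajectories, preshape, frechet_mean) and the Gaussian affinity bandwidth sigma are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.cluster import SpectralClustering

def cluster_trajectories(trajs, n_clusters=2, sigma=0.1):
    """Stage 1 (sketch): pairwise Procrustes distances between
    trajectories -> Gaussian affinity matrix -> spectral clustering.
    `trajs` is a list of (T, 2) arrays of tracked point coordinates,
    all spanning the same T frames; `sigma` is an illustrative bandwidth."""
    n = len(trajs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # scipy's procrustes returns the residual disparity after
            # optimally translating, scaling and rotating one shape
            # onto the other.
            _, _, D[i, j] = procrustes(trajs[i], trajs[j])
            D[j, i] = D[i, j]
    A = np.exp(-D ** 2 / (2 * sigma ** 2))
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)

def preshape(t):
    # Kendall pre-shape: remove translation, then scale to unit norm.
    t = t - t.mean(axis=0)
    return t / np.linalg.norm(t)

def frechet_mean(trajs, n_iter=10):
    """Stage 2 (sketch): Frechet mean of one cluster, approximated by
    generalized Procrustes averaging -- rotate every pre-shape onto the
    current mean, average, and re-project onto the pre-shape sphere."""
    mean = preshape(trajs[0])
    for _ in range(n_iter):
        aligned = []
        for t in trajs:
            t = preshape(t)
            # Orthogonal Procrustes: rotation R = U V^T minimizes ||tR - mean||.
            u, _, vt = np.linalg.svd(t.T @ mean)
            aligned.append(t @ u @ vt)
        mean = preshape(np.mean(aligned, axis=0))
    return mean
```

The stabilization of the Frechet means (stage 3) and the GraphCut pixel-wise labeling are not reproduced here, as they depend on the paper's specific energy formulations.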

Flowchart of the Algorithm

[Flowchart of the proposed algorithm]


Segmentation Results on Jittery Videos


Downloadable Files

       Data and Groundtruth of 20 Natural Jittery Videos
       Supplementary File

Visual Segmentation Results. Click the links below to download the comparative results for each video shot.

Baby Cheery_Girl Climb Cycling1
Cycling2 Cycling3 Doll Dog
Drone1 Drone2 Staircase1 Staircase2
Skating Train Walk1 Walk2
Jitter pattern JRH (highest randomness added to parameters of JT1)

Click here to download all the comparative results


Quantitative evaluation of segmentation using the Intersection Over Union (IOU) score (higher is better) for natural jittery videos.
Methods: [1] Zhang et al. (CVPR'13); [2] Papazoglou and Ferrari (ICCV'13); [3] Ochs et al. (PAMI'14); [4] Faktor and Irani (BMVC'14); [5] Wang et al. (CVPR'15).

Video         [1]     [2]     [3]     [4]     [5]     Proposed
Walk1         0.401   0.135   0.020   0.715   0.139   0.720
Walk2         0.009   0.123   0.000   0.480   0.151   0.841
Cheery_Girl   0.144   0.201   0.090   0.587   0.573   0.756
Doll          0.139   0.926   0.819   0.350   0.078   0.933
Dog           0.736   0.733   0.559   0.758   0.775   0.785
Baby          0.116   0.671   0.007   0.360   0.222   0.847
Skating1      0.033   0.248   0.318   0.627   0.523   0.713
Skating2      -       0.106   0.327   0.531   0.536   0.596
Car           0.029   0.060   0.058   0.000   0.000   0.103
Cycling1      0.558   0.359   0.610   0.462   0.342   0.613
Cycling2      0.654   0.649   0.462   0.831   0.689   0.833
Cycling3      0.701   0.342   0.605   0.490   0.401   0.723
Climb1        0.540   0.764   0.030   0.844   0.476   0.810
Climb2        0.591   0.024   0.418   0.443   0.416   0.505
Drone1        0.715   0.755   0.658   0.703   0.689   0.770
Drone2        0.487   0.436   0.549   0.325   0.348   0.588
Drone3        0.410   0.531   0.561   0.601   0.630   0.661
Train         0.211   0.370   0.535   0.837   0.831   0.850
Staircase1    0.726   0.296   0.713   0.651   0.488   0.782
Staircase2    0.875   0.889   0.801   0.001   0.103   0.901
Average       0.456   0.431   0.392   0.529   0.421   0.723
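
For reference, the IOU score used in both tables can be computed from per-frame binary masks as in this minimal sketch (the function name and the empty-mask convention are our assumptions, not from the paper):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two binary segmentation masks
    (nonzero = foreground). Returns 1.0 when both masks are empty."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```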


Quantitative evaluation of segmentation using the Intersection Over Union (IOU) score (higher is better) for synthetic jitter levels JRL, JRM and JRH (methods [1]-[5] as in the table above).

Jitter Level   [1]     [2]     [3]     [4]     [5]     Proposed
Low (JRL)      0.586   0.637   0.327   0.692   0.535   0.695
Medium (JRM)   0.551   0.575   0.525   0.686   0.506   0.690
High (JRH)     0.543   0.585   0.479   0.654   0.470   0.688


References

[1] D. Zhang, O. Javed, and M. Shah, "Video object segmentation through spatially accurate and temporally dense extraction of primary object regions", in CVPR, 2013.

[2] A. Papazoglou and V. Ferrari, "Fast object segmentation in unconstrained video", in ICCV, 2013.

[3] P. Ochs, J. Malik, and T. Brox, "Segmentation of moving objects by long term video analysis", IEEE TPAMI, vol. 36, no. 6, pp. 1187-1200, 2014.

[4] A. Faktor and M. Irani, "Video segmentation by non-local consensus voting", in BMVC, 2014.

[5] W. Wang, J. Shen, and F. Porikli, "Saliency-aware geodesic video object segmentation", in CVPR, 2015.