Learning Layered Motion Segmentations of Video
M. Pawan Kumar, P. Torr and A. Zisserman

In International Journal of Computer Vision (IJCV), 2008

We present an unsupervised approach for learning a layered representation of a scene from a video, for the purpose of motion segmentation. Our method is applicable to any video containing piecewise parametric motion. The learnt model is a composition of layers, each consisting of one or more segments. The shape of each segment is represented by a binary matte, and its appearance is given by the RGB value of each point belonging to the matte. The model also accounts for the effects of image projection, lighting, and motion blur. Furthermore, spatial continuity is explicitly modelled, resulting in contiguous segments. Unlike previous approaches, our method does not use reference frame(s) for initialization. The two main contributions of our method are: (i) a novel algorithm for obtaining the initial estimate of the model by dividing the scene into rigidly moving components using efficient loopy belief propagation; and (ii) refining the initial estimate using the αβ-swap and α-expansion algorithms, which guarantee a strong local minimum. Results are presented on several classes of objects with different types of camera motion, e.g. videos of a human walking shot with static or translating cameras. We compare our method with the state of the art and demonstrate significant improvements.
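The initialization step above labels each pixel with a rigid motion component by minimizing data costs plus a spatial smoothness term, using min-sum belief propagation. As a rough, self-contained illustration of that kind of message passing (not the paper's implementation), the sketch below runs the min-sum updates on a 1-D chain with a Potts smoothness penalty, where the same forward/backward sweeps yield the exact MAP labelling; the function name and cost values are illustrative assumptions.

```python
import numpy as np

def chain_map_minsum(unary, lam):
    """Exact MAP labelling of a chain MRF by min-sum message passing.

    unary : (N, K) array of data costs (one row per pixel, one column
            per candidate motion component).
    lam   : Potts smoothness penalty for neighbouring pixels that take
            different labels (this is what encourages contiguous segments).
    """
    N, K = unary.shape
    msg = np.zeros((N, K))                 # msg[i] = message passed into node i
    for i in range(1, N):                  # forward sweep
        b = unary[i - 1] + msg[i - 1]
        # Potts min-convolution: keep your own cost, or switch label
        # for a flat penalty lam
        msg[i] = np.minimum(b, b.min() + lam)
    labels = np.zeros(N, dtype=int)        # backward sweep reads off the MAP
    labels[-1] = int(np.argmin(unary[-1] + msg[-1]))
    for i in range(N - 2, -1, -1):
        pair = lam * (np.arange(K) != labels[i + 1])
        labels[i] = int(np.argmin(unary[i] + msg[i] + pair))
    return labels

# Toy example: four pixels, two motion components; the data costs favour
# component 0 on the left half and component 1 on the right half.
costs = np.array([[0., 5.], [0., 5.], [5., 0.], [5., 0.]])
print(chain_map_minsum(costs, lam=1.0))    # → [0 0 1 1]
```

On a loopy grid graph (as used for real images) the same message updates are iterated to convergence rather than swept once, and the result is approximate rather than exact.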

[Paper]    [Project Page]