Xun Xu, Loong-Fah Cheong and Zhuwen Li
National University of Singapore, Intel Lab
Downloads: [PDF] [Supplementary]
[Dataset Introduction] [Demo Code & Data] [GitHub] [Original Sequences]
In CVPR 2018
3D motion segmentation has been a key problem in computer vision research owing to its applications in structure from motion and robotics. Traditional motion segmentation approaches are often evaluated on artificial datasets such as Hopkins 155 [1] and its variants. Because the vanishing camera-translation effect is often overlooked, these approaches tend to fail in real-world scenes where the camera undergoes significant translation and the scene has complex structure. We propose KT3DMoSeg to address 3D motion segmentation in real-world scenes. The KT3DMoSeg dataset was created from the KITTI benchmark [2] by manually selecting 22 sequences and labelling each individual foreground object. We favour sequences with significant camera translation, so footage from cameras mounted on moving cars is preferred. Since we are interested in the interplay of multiple motions, clips with more than 3 motions are also chosen, as long as the moving objects contain enough features for forming motion hypotheses. In total, 22 short clips, each with 10-20 frames, are selected for evaluation. We extract dense trajectories from each sequence using [3] and prune out trajectories shorter than 5 frames.
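The trajectory-pruning step mentioned above can be sketched as follows. This is a minimal illustration only: it assumes each trajectory is stored as a list of (frame_index, x, y) tuples, which is an assumed layout for illustration and not necessarily the released data format; the 5-frame threshold follows the description above.

# Minimal sketch of pruning trajectories shorter than 5 frames.
# Assumption: each trajectory is a list of (frame_index, x, y) tuples;
# the actual released data format may differ.

def prune_short_trajectories(trajectories, min_length=5):
    """Keep only trajectories that span at least `min_length` frames."""
    return [traj for traj in trajectories if len(traj) >= min_length]

# Hypothetical usage with two toy trajectories.
trajectories = [
    [(0, 10.0, 20.0), (1, 11.0, 20.5), (2, 12.0, 21.0)],  # 3 frames -> pruned
    [(0, 30.0, 40.0), (1, 31.0, 40.2), (2, 32.0, 40.4),
     (3, 33.0, 40.6), (4, 34.0, 40.8)],                    # 5 frames -> kept
]
kept = prune_short_trajectories(trajectories)
print(f"{len(kept)} of {len(trajectories)} trajectories kept")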
@InProceedings{XuCL_CVPR18,
  author    = {Xun Xu and Loong-Fah Cheong and Zhuwen Li},
  title     = {Motion Segmentation by Exploiting Complementary Geometric Models},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2018}
}
@Article{XuCL_TPAMI19,
  author  = {Xun Xu and Loong-Fah Cheong and Zhuwen Li},
  title   = {3D Rigid Motion Segmentation with Mixed and Unknown Number of Models},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2019}
}