Terrible Results from Kinect into reconstruction_system

I have fed a Kinect colour and depth frame image set into the reconstruction system. The fragments look good for the most part, but the end result looks terrible (see the photo supplied). Is there any reason why the registration is coming out so badly? I am moving slowly around the scene and making sure there are plenty of frames.

For reference, the objects I am interested in reconstructing are spheres suspended in the air. I thought the fact that the objects were coloured would help the ICP algorithm, but it appears the sphere wasn't reconstructed properly.


I think the issue might be the align_depth_to_color step, as it distorts the point cloud. Try pointing the sensor at a flat area, e.g. the ceiling, and save a frame. Does it look the same as what you see in the Kinect Viewer?
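One way to quantify that check, as a sketch: back-project the saved depth frame to a point cloud with a pinhole model and measure how far the points deviate from a least-squares plane. On a flat ceiling the RMS residual should be a few millimetres; a large or bowl-shaped residual suggests the depth-to-colour alignment (or the intrinsics) is warping the cloud. The intrinsics below (`fx`, `fy`, `cx`, `cy`) are placeholder values, not your sensor's calibration, and the flat synthetic depth image stands in for a real saved frame.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def plane_rms(points):
    """RMS distance of points to their least-squares plane (fit via SVD)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]  # direction of least variance = plane normal
    return np.sqrt(np.mean(((points - centroid) @ normal) ** 2))

# Synthetic flat "ceiling" at 2.5 m; a real test would load your saved frame.
depth = np.full((480, 640), 2.5)
pts = backproject(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(plane_rms(pts))  # ~0 for a perfectly flat frame
```

If the residual on a real ceiling frame is large only when alignment is enabled, that would point squarely at align_depth_to_color rather than at the registration itself.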