Terrible results from realsense_recorder into reconstruction_system

I have tried the reconstruction system on the example bedroom dataset and got great/expected results. Now I’m trying it on my own dataset that was captured using realsense_recorder.py and not getting good results. The captured dataset contains over 1000 images of a small scene and appears to have no errors. For example, here is a corresponding color/depth capture:

[color capture 000071 / depth capture 000071]

I’m running these commands unmodified, as instructed in the O3D documentation:

python realsense_recorder.py --record_imgs
python run_system.py config/realsense.json --make --register --refine --integrate

The resulting point cloud has no discernible shapes and is colorless:

Any idea where the error may be occurring, or what I can improve? The only concern I can think of is that many of the captures are probably redundant and might just be adding confusion. Is there a way to specify a lower capture framerate in realsense_recorder?

Can you tell us what camera you are using?

I can see from your colour image that either (1) you are moving too fast to capture good data, or (2) it’s a combination of that and a frame rate that is too low.

Given that you didn’t modify the script, and I think it’s set at 30 fps, I would say you are moving way too fast. Having a lower frame rate will mean you have to move even slower.

From what I can see you’re not using the open3d.visualizer… why? Some visualizers will not show any colour from the PLY file, beware.

Can you tell us what camera you are using?

Realsense D435i

From what I can see you’re not using the open3d.visualizer… why?

Using CloudCompare. I find it to be better quality than the O3D visualizer, and it lets me try operations very easily. It has never had a problem showing color before, even with point clouds generated by O3D, but good idea. I checked with the visualizer and the color is indeed there; it’s just terribly registered.
Edit: CloudCompare separated the PLY into two layers, mesh and vertices. By default the mesh was shown and the vertices were hidden. Showing the vertices as well displayed the color.

Given that you didn’t modify the script, and I think it’s set at 30 fps, I would say you are moving way too fast. Having a lower frame rate will mean you have to move even slower.

I think I felt the need to move quickly because of the rapid photo capture (I was trying to avoid redundant captures). Is it problematic for the algorithm to have lots of redundancy in the dataset? I feel like at 30 fps, especially if I slow down to prevent blurring, I’ll only have ~1 cm pose difference between captures. It seems like I’d need a couple thousand images to capture any meaningful scene at that rate. Should I try something like 5 fps and slow my movements down?
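If redundancy does turn out to hurt, one option (a sketch, not part of realsense_recorder.py — the function name and folder layout assumption are mine) is to record at the full frame rate and thin the dataset afterwards, copying every Nth color/depth pair into a new folder before running the reconstruction system:

```python
import os
import shutil

def thin_dataset(src, dst, keep_every=6):
    """Copy every `keep_every`-th color/depth pair from src to dst.

    Assumes the realsense_recorder layout: src/color/ and src/depth/
    hold files with matching zero-padded frame numbers.
    """
    for sub in ("color", "depth"):
        os.makedirs(os.path.join(dst, sub), exist_ok=True)
        names = sorted(os.listdir(os.path.join(src, sub)))
        for i, name in enumerate(names):
            if i % keep_every == 0:
                shutil.copy(os.path.join(src, sub, name),
                            os.path.join(dst, sub, name))
```

Recording at 30 fps and keeping every 6th pair approximates a 5 fps capture, without touching the recorder script itself.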

you’re pretty much in the same position as me I think. I’m using the same camera.

config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 60)

I use these settings in realsense_recorder.py

I understand where you are coming from completely, but what I have found is that Open3D really likes data: the more you give it, the better your results will be. I’m saying this considering your current experimentation.

Go slow (even at 60 fps), and try small scenes.

What is your main goal? Why did you get this camera?

I am having moderate success with those stream settings. In the config for the reconstruction system I am using:
"max_depth": 2.5,
"n_frames_per_fragment": 15,
"n_keyframes_per_n_frame": 8,
"voxel_size": 0.04,
"max_depth_diff": 0.04,
"preference_loop_closure_odometry": 0.04,
"preference_loop_closure_registration": 0.04,
"tsdf_cubic_size": 3,
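For anyone tuning these numbers: in the versions of the reconstruction system I’ve looked at, the TSDF voxel length is derived from tsdf_cubic_size by dividing the cube 512 ways per side (treat that 512 as an assumption — check your copy of the integration script), so it’s easy to sanity-check what resolution a config actually asks for:

```python
import json

# The config fragment above, pasted as JSON.
config = json.loads("""{
    "max_depth": 2.5,
    "n_frames_per_fragment": 15,
    "n_keyframes_per_n_frame": 8,
    "voxel_size": 0.04,
    "max_depth_diff": 0.04,
    "preference_loop_closure_odometry": 0.04,
    "preference_loop_closure_registration": 0.04,
    "tsdf_cubic_size": 3
}""")

# Assumption: ScalableTSDFVolume is built with voxel_length =
# tsdf_cubic_size / 512.0, as in the stock integration script.
tsdf_voxel = config["tsdf_cubic_size"] / 512.0
print("TSDF voxel edge: %.1f mm" % (tsdf_voxel * 1000))   # ~5.9 mm
print("Downsample voxel: %.0f cm" % (config["voxel_size"] * 100))
```

So this config integrates at roughly 6 mm resolution while registering fragments on 4 cm downsampled clouds, which is a reasonable split for a small scene.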