I have read several other Stack Overflow threads on this topic, but none seem to apply to my particular case. I have 24-bit RGB images, captured at 3840x2880 and downsampled to 640x480, and 16-bit grayscale depth images normalized from disparity maps. Here is the depth image:
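For reference, each frame pair looks like this in memory (shapes and dtypes only; the arrays here are zero-filled stand-ins, not real data):

```python
import numpy as np

# Zero-filled placeholders matching the description above:
color = np.zeros((480, 640, 3), dtype=np.uint8)   # 24-bit RGB, downsampled to 640x480
depth = np.zeros((480, 640), dtype=np.uint16)     # 16-bit grayscale depth

# Color and depth must share the same resolution for RGBD fusion.
assert color.shape[:2] == depth.shape
print(color.shape, depth.shape)
```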
Camera intrinsics:
{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [
        1618.67566,
        0,
        940.940942,
        0,
        1618.67566,
        724.873718,
        0,
        0,
        1
    ]
}
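As a sanity check, I compared the principal point against the stated resolution (plain Python; I am assuming row-major ordering [fx, 0, cx, 0, fy, cy, 0, 0, 1] here, though Open3D's own intrinsic JSON is column-major, so the ordering itself may be worth double-checking):

```python
import json

# The intrinsics JSON from above, parsed as-is.
intrinsics = json.loads("""
{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [1618.67566, 0, 940.940942,
                         0, 1618.67566, 724.873718,
                         0, 0, 1]
}
""")

# Row-major reading of the 3x3 matrix (assumption, see above).
fx, _, cx, _, fy, cy, _, _, _ = intrinsics["intrinsic_matrix"]

# The principal point should normally fall inside the image bounds.
inside = 0 <= cx < intrinsics["width"] and 0 <= cy < intrinsics["height"]
print(f"cx={cx}, cy={cy}, inside {intrinsics['width']}x{intrinsics['height']}: {inside}")
```

Under that reading, cx is roughly 941 and cy roughly 725, which lies outside a 640x480 frame, so these intrinsics may still correspond to the full-resolution images rather than the downsampled ones.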
Config JSON:
{
    "name": "Captured frames using custom pinhole camera",
    "path_dataset": "dataset/realsense/",
    "path_intrinsic": "dataset/realsense/camera_intrinsics.json",
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.7,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": true
}
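One thing I am unsure about is the depth units: my depth images are normalized from disparity, while, as far as I understand, Open3D's reconstruction system assumes metric depth with depth_scale = 1000 (i.e. millimeters) by default. A minimal sketch of the concern, with hypothetical pixel values:

```python
import numpy as np

# Open3D's default depth scale (assumption: 1000 raw units per meter).
depth_scale = 1000.0

# Hypothetical raw 16-bit depth pixels; a normalized image spans 0..65535.
raw = np.array([1000, 30000, 65535], dtype=np.uint16)
meters = raw / depth_scale  # values in meters under this assumption
print(meters)

# With max_depth = 3.0 from the config above, any pixel whose raw value
# exceeds 3000 would be truncated, so a full-range normalized depth image
# would lose most of its pixels.
mask = meters <= 3.0
print(mask)
```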
Fragments are generated without a problem, but during registration no poses are found. I can run the reconstruction without a problem using RealSense D455 RGBD output, so I know the pipeline itself works.
What in my dataset could be causing this to fail?