PoseGraph read JSON failed: empty edges

I have read several other Stack Overflow threads on this topic, but none seem to apply to my particular case. I have 24-bit RGB images at 3840x2880, downsampled to 640x480, and 16-bit grayscale depth images normalized from disparity maps. Here is the depth image:

[depth image attachment]

Camera intrinsics:

{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [
        1618.67566,
        0,
        940.940942,
        0,
        1618.67566,
        724.873718,
        0,
        0,
        1
    ]
}
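
For reference, this is how we hand these intrinsics to Open3D in Python (a minimal sketch; the path is the one from the config below, and the matrix above is read as row-major fx, 0, cx / 0, fy, cy / 0, 0, 1):

import open3d as o3d

# Read the intrinsics JSON shown above (path taken from our config).
intrinsic = o3d.io.read_pinhole_camera_intrinsic(
    "dataset/realsense/camera_intrinsics.json")

# Equivalent manual construction with the same fx, fy, cx, cy.
intrinsic_manual = o3d.camera.PinholeCameraIntrinsic(
    640, 480, 1618.67566, 1618.67566, 940.940942, 724.873718)
print(intrinsic_manual.intrinsic_matrix)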

Config JSON:

{
    "name": "Captured frames using custom pinhole camera",
    "path_dataset": "dataset/realsense/",
    "path_intrinsic": "dataset/realsense/camera_intrinsics.json",
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.7,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": true
}

Fragments are generated without a problem, but during registration no poses can be found (the pose graph comes back with empty edges). I am able to run reconstruction without a problem using RealSense D455 RGBD output, so I know that the pipeline works.
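
For what it's worth, here is a quick sanity check of a single frame pair outside the pipeline (a minimal sketch; the file names and depth_scale are assumptions about our dataset layout). If the back-projected point cloud already looks distorted, the intrinsics or depth scaling are off before registration even starts:

import open3d as o3d

# Sanity check of a single frame pair outside the reconstruction pipeline.
intrinsic = o3d.io.read_pinhole_camera_intrinsic(
    "dataset/realsense/camera_intrinsics.json")
color = o3d.io.read_image("dataset/realsense/color/000000.jpg")
depth = o3d.io.read_image("dataset/realsense/depth/000000.png")

rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth,
    depth_scale=1000.0,               # assumes 16-bit depth in millimeters
    depth_trunc=3.0,                  # matches max_depth in the config
    convert_rgb_to_intensity=False)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])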

What, in my dataset, could be causing this to fail?

We have been troubleshooting this problem for the past 4 days now, and have come to the conclusion that Open3D is likely tied to specific hardware. Although documentation is sparse, it is fairly clear there is a preference for Intel RealSense cameras, and the intrinsics and calibrations are all geared toward that type of camera. I do not see how this pipeline will support other types of depth cameras or dual RGB / depth streams. Despite correct intrinsics, normalized grayscale images, and literally thousands of images, Open3D cannot deliver even basic results.

Again, we have tested successfully with a RealSense D455 camera, but given that particular camera's gross inaccuracies and depth noise, it is not suited to more robust, large-scale scanning projects. So we would like to request that, once this project comes back online and developers are giving it attention again, other types of data streams be supported. Limiting the pipeline to Intel RealSense may simplify support, but it ultimately limits the project's usefulness in a wider array of contexts. It shows great promise, and we will continue to watch for updates that indicate broader support for non-Intel hardware.

Did you scale your intrinsic_matrix to match the 640x480 image?

Yes, we have tried that. The current intrinsics JSON shows the following:

{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [
        1618.67566,
        0,
        320,
        0,
        1618.67566,
        240,
        0,
        0,
        1
    ]
}

I am told that fx and fy do not change, regardless of resolution conversions.

This is incorrect. You have to scale the focal length as well. The focal length here is expressed in pixels and is tied to the specific image size (3840x2880). In your case, scale fx, fy, cx, and cy by 640/3840.
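
Concretely, a small sketch with your numbers (640/3840 = 480/2880 = 1/6, so the same factor applies to both axes):

# Scale pinhole intrinsics from the capture resolution to the downsampled one.
orig_w, orig_h = 3840, 2880
new_w, new_h = 640, 480
sx, sy = new_w / orig_w, new_h / orig_h      # both 1/6 here

fx, fy = 1618.67566, 1618.67566
cx, cy = 940.940942, 724.873718

fx_s, fy_s = fx * sx, fy * sy
cx_s, cy_s = cx * sx, cy * sy
print(fx_s, fy_s, cx_s, cy_s)                # ~269.78, 269.78, 156.82, 120.81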

I’ve had to do the same thing when working with the intrinsic matrix coming from an iPhone’s TrueDepth camera. The intrinsic is for a 12MP image but the output image is 640x480.

The original camera calibration gives us fx, fy, cx, cy of 1614.15662, 1614.15662, 960, 720.

So you are saying the proper intrinsic for this camera is:

{
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [
        269.0261,
        0,
        160,
        0,
        201.76957,
        120,
        0,
        0,
        1
    ]
}

This is considering that 3840 and 2880 are divided by 2.37895 and 4, respectively, to derive the original intrinsic values. Which values did you use for the TrueDepth camera? It sounds like you have the same intrinsics as we do.

This is the calibration I got from the TrueDepth camera:

width: 4032
height: 3024
fx: 2739.79
fy: 2739.79
cx: 2029.73
cy: 1512.20

My scaling factor is 640/4032 = 0.15873, so the effective intrinsics I’m using are:
fx: 434.89
fy: 434.89
cx: 322.18
cy: 240.03
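
The same arithmetic in code (just the numbers above):

scale = 640 / 4032        # 0.15873 (480 / 3024 gives the same factor)
fx = 2739.79 * scale      # ~434.89
fy = 2739.79 * scale      # ~434.89
cx = 2029.73 * scale      # ~322.18
cy = 1512.20 * scale      # ~240.03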