Point Cloud from Depth Images

Hi all.
Lately, I have been trying to reconstruct a point cloud from Long Throw depth images from the HoloLens 2. When I generate the point cloud from a single depth image, the walls of the room appear strongly curved instead of straight, as expected. Later, when I try a global registration, like this:

    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnNormal(
                0.3),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

The resulting point cloud seems to be shifted, as if the registration could not find a good match, even though the difference between two consecutive frames is negligible.
Does anyone have an idea how to solve this?


I am not familiar with that function, but I have converted depth maps to point clouds.

The curvature could indicate that your camera matrix is not correct. Have you completed a camera calibration that you are happy with? Incorrect intrinsics (especially uncorrected lens distortion) will bend planar surfaces like walls into curves.

I first undistort my images with cv2.initUndistortRectifyMap, then use the camera matrix parameters in numpy to calculate the 3D coordinates of each pixel.
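For reference, here is a minimal numpy sketch of the back-projection step (after undistortion). The intrinsics and the depth map below are placeholders, not real HoloLens values; substitute the ones from your own calibration:

```python
import numpy as np

# Placeholder intrinsics -- substitute your own calibrated values.
fx, fy = 365.0, 365.0   # focal lengths in pixels (hypothetical)
cx, cy = 160.0, 144.0   # principal point (hypothetical)

# Toy depth map in meters; in practice this is the undistorted depth frame.
h, w = 288, 320
depth = np.full((h, w), 2.0, dtype=np.float32)

# Back-project every pixel (u, v) with depth Z using the pinhole model:
#   X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy
v, u = np.indices((h, w))
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy

# (N, 3) array of 3D points, one per pixel.
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

If the intrinsics are right and the distortion is removed first, a flat wall should come out as a plane in `points`; residual curvature usually means the distortion coefficients or focal lengths are off.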