Aligning color and depth images

Hi,

I’m reading color and depth images from the Samsung Galaxy S10 5G ToF camera and trying to create a point cloud from the RGB-D image. But the color and depth images are not aligned.

I have intrinsic calibration params for both the color and depth cameras, and the extrinsics matrix as well. Is there any way to pass these params to Open3D so it automatically aligns the images? Or should I manually warp the depth image beforehand?

Any tips would be helpful. Thanks!

Pretty sure this example does what you want…

I might be wrong.

Unfortunately it doesn’t.
As the PinholeCameraIntrinsic I’m using the params from the depth camera. Is that correct? What about the color camera intrinsics?
Shouldn’t both of them be used for the mapping, as explained here, or am I missing something?

tbh I’m too new at this and at coding to give any real advice… I’d wait for someone else.
I’m cutting and pasting from the examples, and I use a RealSense camera.

http://www.open3d.org/docs/release/python_api … I’m working from these files and tutorials.

I would post the information that you currently have (i.e. your parameters), as this would help anyone trying to help you.

Right now I’m working on this example … http://www.open3d.org/docs/release/tutorial/Advanced/color_map_optimization.html

I replace

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))
o3d.visualization.draw_geometries([pcd])
rgbd_images.append(rgbd_image)

with

pinhole_camera_intrinsic = o3d.io.read_pinhole_camera_intrinsic(
    "/datset/realsense/camera_intrinsic.json")
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    pinhole_camera_intrinsic)
o3d.visualization.draw_geometries([pcd])
rgbd_images.append(rgbd_image)

So far I have shitty results, so I’m sure something is wrong. Anyway, maybe I’ve helped a little.
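In case it’s useful, here’s a minimal sketch of generating that camera_intrinsic.json in the first place — the width/height/focal values below are placeholders, not real calibration, so substitute your own camera’s numbers:

import open3d as o3d

# Placeholder values -- substitute your camera's actual calibration.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    640, 480,        # width, height
    615.0, 615.0,    # fx, fy
    320.0, 240.0)    # cx, cy

# Writes a JSON file that read_pinhole_camera_intrinsic can load back.
o3d.io.write_pinhole_camera_intrinsic("camera_intrinsic.json", intrinsic)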

Thanks for sharing your thoughts!
This is what I have so far

import numpy
from transforms3d import quaternions
import open3d as o3d

if __name__ == '__main__':
    # Rotation (from the quaternion) and translation between the two
    # cameras, as reported by the Android API.
    R = quaternions.quat2mat([0.7008417, -0.7132461, -0.0012983065, 0.0099677965])
    t = numpy.array([-0.01873879, -0.0, -0.0])

    # Pose of the camera as a 4x4 extrinsic matrix.
    extrinsic = numpy.eye(4)
    extrinsic[:3, :3] = R
    extrinsic[:3, 3] = t

    print(extrinsic)

    source_color = o3d.io.read_image("04.jpg")
    source_depth = o3d.io.read_image("04.png")

    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
        source_color, source_depth,
        depth_scale=1000.0, depth_trunc=4.0,
        convert_rgb_to_intensity=False)

    # Intrinsic params of the depth camera.
    cx = 119.956635
    cy = 91.285805
    fx = 212.31969
    fy = 212.64055
    w = 240
    h = 180

    target_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd_image,
        o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy),
        extrinsic)
    # Flip the cloud, otherwise it is rendered upside down.
    target_pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])

    o3d.visualization.draw_geometries([target_pcd])

The extrinsic and intrinsic params are obtained through the Android API. I’m running the code with the two attached images and getting the result shown in the original post. From these two images, it’s clear that they are misaligned. Also, the color image was originally of higher resolution, but I’ve resized it since create_from_color_and_depth requires the two images to have the same resolution.

[attached images: 04.jpg and 04.png]
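One caveat with that resize: scaling the image also scales the intrinsics. A quick sketch (the full-resolution filename and the color intrinsic values below are placeholders) of downscaling the color frame and scaling its intrinsics by the same factors:

from PIL import Image

# Placeholder name for the full-resolution color frame.
color = Image.open("04_fullres.jpg")

# Target: the 240x180 resolution of the ToF depth stream.
sx = 240 / color.width
sy = 180 / color.height
color.resize((240, 180)).save("04.jpg")

# Placeholder color intrinsics: if you project through the color camera's
# parameters later, they must be scaled by the same factors as the image.
fx_c, fy_c, cx_c, cy_c = 1500.0, 1500.0, 720.0, 540.0
fx_c, fy_c = fx_c * sx, fy_c * sy
cx_c, cy_c = cx_c * sx, cy_c * sy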

In case somebody else stumbles upon this issue: Open3D doesn’t have a function for aligning RGB and depth frames. As the documentation says,

We assume that the color images and the depth images are synchronized and registered.

I ended up customizing this algorithm for the Galaxy S10 5G camera outputs. Hope this helps.
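Roughly, the algorithm deprojects each depth pixel through the depth intrinsics, transforms it with the extrinsics, and reprojects it through the color intrinsics. A minimal NumPy sketch of that idea (not the exact code linked above; it assumes both images already share one resolution and skips occlusion handling — a proper version keeps the nearest depth when two source pixels land on the same target pixel):

import numpy as np

def register_depth_to_color(depth, K_d, K_c, T_d_to_c, depth_scale=1000.0):
    """Warp a raw depth map into the color camera's image plane.

    depth     : (H, W) uint16 depth image in millimetres
    K_d, K_c  : 3x3 intrinsics of the depth and color cameras
    T_d_to_c  : 4x4 extrinsic taking depth-camera points into the color frame
    """
    h, w = depth.shape
    registered = np.zeros_like(depth)

    # Deproject every depth pixel through the depth intrinsics.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale
    x = (u - K_d[0, 2]) / K_d[0, 0] * z
    y = (v - K_d[1, 2]) / K_d[1, 1] * z
    pts = np.stack([x, y, z, np.ones_like(z)]).reshape(4, -1)

    # Move the 3D points into the color camera's frame.
    xc, yc, zc = (T_d_to_c @ pts)[:3]

    # Reproject through the color intrinsics (guard against division by zero).
    zsafe = np.where(zc > 0, zc, 1.0)
    uc = np.round(K_c[0, 0] * xc / zsafe + K_c[0, 2]).astype(int)
    vc = np.round(K_c[1, 1] * yc / zsafe + K_c[1, 2]).astype(int)

    # Keep only valid source pixels that land inside the target image.
    ok = (z.ravel() > 0) & (zc > 0) & (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    registered[vc[ok], uc[ok]] = (zc[ok] * depth_scale).astype(depth.dtype)
    return registered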

Thanks!