Thanks for sharing your thoughts!
This is what I have so far:
```python
import numpy as np
import open3d as o3d
from transforms3d import quaternions

if __name__ == '__main__':
    # Pose of the camera: rotation from a quaternion plus a translation
    R = quaternions.quat2mat([0.7008417, -0.7132461, -0.0012983065, 0.0099677965])
    t = np.array([-0.01873879, -0.0, -0.0])

    # Assemble the 4x4 extrinsic matrix [R | t; 0 0 0 1]
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = R
    extrinsic[:3, 3] = t
    print(extrinsic)

    source_color = o3d.io.read_image("04.jpg")
    source_depth = o3d.io.read_image("04.png")
    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
        source_color, source_depth,
        depth_scale=1000.0, depth_trunc=4.0,
        convert_rgb_to_intensity=False)

    # Intrinsic parameters
    cx = 119.956635
    cy = 91.285805
    fx = 212.31969
    fy = 212.64055
    w = 240
    h = 180

    target_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd_image,
        o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy),
        extrinsic)
    # Flip the cloud so it is not upside down in the viewer
    target_pcd.transform([[1, 0, 0, 0],
                          [0, -1, 0, 0],
                          [0, 0, -1, 0],
                          [0, 0, 0, 1]])
    o3d.visualization.draw_geometries([target_pcd])
```
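One thing I am not fully sure about is the quaternion ordering: `transforms3d.quaternions.quat2mat` expects `(w, x, y, z)`, while some Android APIs report `(x, y, z, w)`. A minimal pure-NumPy sketch to compare the two interpretations (the conversion function below is my own stand-in, not part of either library):

```python
import numpy as np

def quat2mat_wxyz(q):
    # Rotation matrix from a unit quaternion given in (w, x, y, z) order,
    # the convention transforms3d.quaternions.quat2mat uses.
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.7008417, -0.7132461, -0.0012983065, 0.0099677965])
R_as_wxyz = quat2mat_wxyz(q)              # how the script above interprets it
R_as_xyzw = quat2mat_wxyz(np.roll(q, 1))  # if the API actually returned (x, y, z, w)
```

If the two matrices differ substantially, it is worth checking which convention the Android API documents for this pose.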
The extrinsic and intrinsic parameters are obtained through the Android API. Running the code with the two attached images gives the result from the original post; from those two images it is clear that they are misaligned. Also, the color image was originally at a higher resolution, but I resized it because create_from_color_and_depth requires the two images to have the same resolution.