Integration of custom RGBD data

I have several RGBD images with known camera poses. For testing, I rendered them with an OpenGL renderer. I want to reconstruct a mesh from them, but I am running into problems. I used the integrate method from the ReconstructionSystem Python example, but replaced the poses from the pose graph with my own. The final mesh comes out as several misaligned objects (see the image). When I ran the example on a RealSense dataset everything worked fine, but I noticed one strange thing: the depth values in the depth maps have a mean of roughly 1600, while the camera poses (reconstructed by the Open3D pipeline) have translation values around 2.2. I suspect my problem is inconsistent units between the depth values and the camera translations. Do you have any ideas about what I can do about this?
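
For reference, a minimal sketch of the integration loop I am using, adapted from the example (the intrinsic values and the `frames` list below are placeholders, not my actual values; depending on the Open3D version, the module may be `o3d.integration` rather than `o3d.pipelines.integration`):

```python
import numpy as np
import open3d as o3d

# Placeholder intrinsics -- substitute the values the OpenGL renderer actually uses.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=4.0 / 512.0,
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# frames is a placeholder list of (color_path, depth_path, 4x4 camera-to-world pose).
for color_path, depth_path, pose in frames:
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False)
    # integrate() expects the world-to-camera extrinsic,
    # i.e. the inverse of the camera-to-world pose.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

mesh = volume.extract_triangle_mesh()
```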

Do you have the intrinsic matrix and depth scale for your camera? You may need to set them explicitly in PointCloud.create_from_rgbd_image and RGBDImage.create_from_color_and_depth.
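
As a minimal sketch of what that looks like (the intrinsic values and file paths below are placeholders; substitute your renderer's actual parameters): `depth_scale` is the divisor that converts raw depth values to meters, and Open3D's default of 1000.0 assumes millimeter depth, so if your renderer writes depth in different units the reconstruction will not match the scale of your pose translations.

```python
import open3d as o3d

# Placeholder intrinsics -- replace with your renderer's projection parameters.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

color = o3d.io.read_image("color.png")  # placeholder paths
depth = o3d.io.read_image("depth.png")

# depth_scale divides the raw depth values to get meters:
# 1000.0 assumes millimeter depth; use 1.0 if the depth
# buffer is already in meters.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=4.0,
    convert_rgb_to_intensity=False)

# Back-project one frame to a point cloud to sanity-check units:
# its extent should be on the same scale as your pose translations.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
print(pcd.get_max_bound() - pcd.get_min_bound())
```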