Compensate LiDAR Point Cloud Motion Distortion

Hi, I am using a Velodyne VLP-16 LiDAR to acquire point clouds while the sensor is moving. I know that moving the LiDAR while scanning introduces some distortion, because the sensor keeps moving during the time it takes to complete one sweep. I use GPS to map the point cloud into a global frame, but this is still done with the LiDAR timestamp, which is generated after one full sweep of the sensor, without taking the acquisition time of each individual point into account.

I wanted to know: is there any open example of how to compensate for this motion distortion? Or any advice on how to map/transform certain portions of a point cloud, or even individual points (instead of the full point cloud), to a given pose in Open3D?

This is kind of a research problem and not yet covered in our pipeline, AFAIK… I can point you to a relevant research paper:
Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM
Park et al., ICRA 2018

For Open3D, you can currently access the underlying buffer with np.asarray(pcd.points) and do the transformation manually with NumPy; a sketch follows below. We are in the process of providing a native Tensor interface; once that is done, the overall process should be simpler.
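
To make that concrete, here is a minimal de-skewing sketch, not an official Open3D recipe. It assumes you can attach a normalized acquisition time in [0, 1] to every point (`per_point_time` below is hypothetical) and that you have 4x4 sensor poses at the start and end of the sweep (`T_start`/`T_end`, e.g. from your GPS track); it interpolates a pose per point and maps each point into the global frame:

```python
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation, Slerp

def deskew(pcd, per_point_time, T_start, T_end):
    """Map each point into the global frame using the sensor pose
    interpolated at that point's own acquisition time."""
    pts = np.asarray(pcd.points)  # writable view into Open3D's internal buffer

    # Rotation: spherical interpolation between the start and end orientations.
    slerp = Slerp([0.0, 1.0],
                  Rotation.from_matrix(np.stack([T_start[:3, :3],
                                                 T_end[:3, :3]])))
    # Translation: plain linear interpolation between start and end positions.
    t0, t1 = T_start[:3, 3], T_end[:3, 3]

    for i, s in enumerate(per_point_time):
        R = slerp(s).as_matrix()          # pose at this point's timestamp
        t = (1.0 - s) * t0 + s * t1
        pts[i] = R @ pts[i] + t           # rigid transform of this single point
    return pcd
```

The loop could be vectorized, and the Tensor interface should help there, but the idea stays the same: every point gets the pose of its own acquisition time instead of one pose for the whole scan.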

Thank you for the reply, I'm really happy to get some new ideas from other people.

What is boggling my mind about your suggestion of transforming via np.asarray(pcd.points) is this:

  1. There is no information about the laser angle/channel for each point, so we would have to work out which laser (channel and firing angle) each point came from. I imagine this will be somewhat slow, since every point has to be classified and transformed individually, and inferring the originating laser seems error-prone. Or am I wrong, and are the points in a LiDAR point cloud array actually ordered consistently, so that they can be manipulated directly according to their laser of origin?

  2. My other idea is to use Open3D's crop_point_cloud function (which I believe could be reused for every raw LiDAR point cloud, since it takes a JSON file describing the crop) and then transform each cropped point cloud (instead of each point) to the desired position. But I haven't figured out how to create the JSON file for the cropping; is there any guide for writing this crop JSON file?

  1. Currently we do not support a LiDAR-specific data format like the one you describe… we only manipulate point clouds as arrays of 3D positions, so you would have to store the angle/channel info in your own data structure. That said, both can also be recovered from the geometry, as sketched below.
  2. As far as I know, crop will generate a JSON file for further verification/reuse, so there should be no need to create one in advance… If you do want to write one by hand anyway, see the second sketch below.
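
On point 1: once the cloud is a bare Nx3 array the channel information is gone, and point ordering depends on your driver, so I would not rely on it. If the cloud is still in the sensor frame, though, the channel and firing angle can be recovered from geometry alone. A hedged sketch using the nominal VLP-16 angles from Velodyne's datasheet (16 lasers spanning -15° to +15° in 2° steps):

```python
import numpy as np

def estimate_channel_and_azimuth(points):
    """Recover the nominal VLP-16 channel index (0..15, ordered by vertical
    angle, not by interleaved firing order) and the horizontal firing angle
    for each point of an Nx3 array in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))    # vertical angle
    channel = np.clip(np.rint((elevation + 15.0) / 2.0), 0, 15).astype(int)
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0           # horizontal angle
    return channel, azimuth
```

Since the VLP-16 spins at a roughly constant rate, the recovered azimuth can also serve as a per-point timestamp proxy for the de-skewing sketch above.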
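
On point 2: in case you prefer to write the crop file by hand, the format used by Open3D's cropping demo is a SelectionPolygonVolume (a polygon lying in the plane orthogonal to one axis, plus min/max extents along that axis), loadable with read_selection_polygon_volume. A sketch; the field values and the "scan.pcd" input are made up for illustration:

```python
import json
import numpy as np
import open3d as o3d

# Hand-written SelectionPolygonVolume spec in the crop-demo JSON schema.
crop_spec = {
    "class_name": "SelectionPolygonVolume",
    "orthogonal_axis": "Z",            # polygon lives in the x-y plane
    "axis_min": -1.0,                  # keep points with -1.0 <= z <= 3.0
    "axis_max": 3.0,
    "bounding_polygon": [              # polygon vertices; z component is ignored
        [0.0, 0.0, 0.0],
        [10.0, 0.0, 0.0],
        [10.0, 10.0, 0.0],
        [0.0, 10.0, 0.0],
    ],
    "version_major": 1,
    "version_minor": 0,
}
with open("crop.json", "w") as f:
    json.dump(crop_spec, f, indent=2)

pcd = o3d.io.read_point_cloud("scan.pcd")   # hypothetical input file
vol = o3d.visualization.read_selection_polygon_volume("crop.json")
section = vol.crop_point_cloud(pcd)         # cropped copy of the cloud

T = np.eye(4)                               # placeholder 4x4 pose for this slice
section.transform(T)                        # move the whole slice rigidly
```

Transforming per-slice like this is coarser than the per-point de-skew above, but it avoids touching individual points and reuses the same JSON for every sweep.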