Like a few of the other new users here, I have a question.
Open3D looks like an awesome library, and since the docs cover converting to/from NumPy, I think it has some great options for the work I am doing. I am definitely interested in its use with the Azure Kinect, but I have a question about point cloud use. Most of the examples are point clouds of surfaces, such as a surface scan from a ToF camera like the Azure Kinect. What about its use for dense point clouds, for example from X-ray computed tomography (CT)? I see that a NumPy array can be converted to a PointCloud object, but I wasn't sure about its use for dense clouds and wondered whether this is an issue.
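For context, here is a minimal NumPy sketch of what I have in mind: thinning a dense CT-like volume to a thresholded point set before handing it to Open3D. The threshold, voxel spacing, and synthetic volume are all hypothetical, and the Open3D call names in the comments are taken from the docs (worth double-checking against your version).

```python
import numpy as np

# Synthetic stand-in for a CT volume: a 64^3 array containing a bright sphere.
z, y, x = np.mgrid[:64, :64, :64]
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(np.float32)

# Keep only voxels above an intensity threshold -- converting a dense volume
# wholesale would give ~N^3 points, which is usually far too many.
threshold = 0.5
idx = np.argwhere(volume > threshold)      # (N, 3) integer voxel indices

# Scale by a (hypothetical) 0.5 mm isotropic voxel spacing to get metric coords.
spacing = np.array([0.5, 0.5, 0.5])
points = idx.astype(np.float64) * spacing  # (N, 3) float64 array

# This (N, 3) float64 array is the shape Open3D expects:
#   pcd = o3d.geometry.PointCloud()
#   pcd.points = o3d.utility.Vector3dVector(points)
print(points.shape)
```

The open question is really whether downstream Open3D operations stay usable once N gets large, not the conversion itself.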
I also noticed some examples batching work and parallelising with Joblib, which is awesome! That's great if you have bigger-than-memory data sets. I am guessing this one is more a question of whether anyone has tried it with Dask? I have 3D voxel arrays saved as chunked zarr files and accessed via Dask. I can have a go at using Joblib, but thought I would ask whether anyone has tried this and whether there are any incompatibilities.
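The per-chunk work is embarrassingly parallel, so here is a stdlib sketch of the pattern I mean, using a plain NumPy array as a stand-in for the zarr store. The chunking, worker count, and `count_above` task are all made up for illustration; Joblib's `Parallel`/`delayed` would be a drop-in equivalent, and the Dask call name in the comment is from the Dask docs.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a chunked zarr volume: one array we slice into chunks.
# With real data this would be something like dask.array.from_zarr("volume.zarr")
# (call name from the Dask docs -- verify against your version).
volume = np.random.default_rng(0).random((8, 64, 64))
chunk_size = 2  # chunks along the first axis, mirroring the zarr chunking

def count_above(chunk, threshold=0.9):
    """Per-chunk work: here just a count, but this is where a point set
    would be built or an Open3D op run on a chunk that fits in memory."""
    return int((chunk > threshold).sum())

chunks = [volume[i:i + chunk_size] for i in range(0, volume.shape[0], chunk_size)]

# Fan the chunks out to workers; joblib's
#   Parallel(n_jobs=4)(delayed(count_above)(c) for c in chunks)
# is the equivalent call.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(count_above, chunks))

total = sum(partials)
print(total)
```

Since each chunk is processed independently, the chunked total matches the whole-array result, which is why I'd expect either Joblib or Dask to work here.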
I’m reading through the docs and will have a go at this in a few days, once I feel a bit more comfortable (and finish some other data analysis I have to do). It would be great to get some pointers. Ultimately I will be trying to use the ICP alignment on a number of files (that are quite big), so I plan to cut out a section and align that (to save some memory).
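The cut-out-a-section idea looks like this in NumPy: basic slicing returns a view rather than a copy, so cropping before building the point cloud costs no extra memory. The volume and slice bounds are hypothetical, and the Open3D ICP call in the comments is from the `pipelines.registration` docs (worth double-checking).

```python
import numpy as np

# Hypothetical large CT volume (kept small here so the sketch runs anywhere).
volume = np.random.default_rng(1).random((128, 128, 128))

# Cut out a sub-region per scan before building the point cloud --
# basic slicing returns a *view* of the original array, not a copy.
section = volume[32:96, 32:96, 32:96]

# Threshold the section to points, then (with Open3D installed) align, e.g.:
#   pcd_src.points = o3d.utility.Vector3dVector(pts_src)
#   result = o3d.pipelines.registration.registration_icp(
#       pcd_src, pcd_tgt, max_correspondence_distance=1.0,
#       init=np.eye(4),
#       estimation_method=o3d.pipelines.registration
#           .TransformationEstimationPointToPoint())
# (API names from the Open3D pipelines docs -- verify against your version.)
print(section.shape)
```

The memory saving only holds until something forces a copy (e.g. `np.ascontiguousarray` or fancy indexing), so it's worth checking what the conversion to `Vector3dVector` does internally.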
But when I get time to play with my Azure Kinect I will definitely be back to this library!