In the meantime I got a new PC, so I've started testing some functions in CC. When I open a file for the first time, CC asks me if I want to compute normals, which works fine. I merged some point clouds and improved the data, and now I wanted to create a mesh from it. Unfortunately it seems that some normals were lost (or I didn't compute them for all of the clouds), so I get the error message that the clouds must have normals.
So far, so good - but when I manually start to compute the normals, all that happens is that my CPU is fully loaded (100% workload) and I can barely open CC anymore. I've waited for several hours, but nothing changes. OK, the .bin file is about 1.2 GB, so I'm sure it has to take some time, but this is incredibly long (also compared to computing the normals right after loading the file). I've just tested it once more, and after an hour it is only at 4%...
Is this some kind of bug, or should I just wait a little bit longer?
By the way, the original scan files (.dp) had a combined file size of about 100 MB. When I save them (even without normals) I get a file size of over 1.5 GB... Is there any way to reduce the size?
Best regards,
Jan
Computing normals is terribly slow
Re: Computing normals is terribly slow
Indeed, DP files are highly compressed (one of the best compression schemes I know of, by the way). The format also has a gridded structure, so computing the normals right at loading time is much easier (this is why CC suggests doing it at that point).
But if you compute the normals later, the gridded structure is lost and the algorithm has to work on an arbitrary cloud, so the processing time is bound to be much higher. You are right though, it shouldn't be this high. The main parameter that influences the processing time is the 'kernel radius'. CC guesses its initial value in a very simple way (as a percentage of the bounding-box size), but in fact it should depend on the cloud density, the surface roughness, the noise, etc.
You should start with a very low value (you can divide the initial guess by 10, for instance). Hopefully it will go much faster (still slower than the gridded computation, but it shouldn't be this bad). If the normals look too 'rough', or if some normals stay black whatever the orientation of the cloud, it means they couldn't be computed (because the kernel was too small). In that case, retry with a bigger radius until you get proper normals everywhere.
In fact this radius is used to extract the neighbors around each point in order to estimate the local surface. So if the kernel is too small you won't get enough neighbors (I think the minimum is 3 or 4), or the surface estimation won't be very good. And if the kernel is too large, the algorithm has to process a lot of unnecessary points each time (hence the huge processing time) and the result will be too smooth.
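Just to illustrate the principle, here is a rough Python/numpy sketch of what such a radius-based estimation does (this is not CloudCompare's actual code, and the bounding-box factor at the end is only a placeholder value):

```python
# Rough sketch of radius-based normal estimation (NOT CloudCompare's actual code).
# 'points' is assumed to be an (N, 3) numpy array of XYZ coordinates.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, radius):
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # extract the neighbors that fall inside the kernel radius
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 3:
            continue  # too few neighbors: normal stays (0,0,0), i.e. 'black'
        neighbors = points[idx]
        # fit a local plane: the normal is the eigenvector of the covariance
        # matrix associated with the smallest eigenvalue
        cov = np.cov(neighbors, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]
    return normals

# A simple initial guess, similar in spirit to CC's (a fraction of the
# bounding-box size), which you can then divide by 10 if it is too slow:
# radius = 0.01 * np.linalg.norm(points.max(axis=0) - points.min(axis=0))
```

The cost per point is driven by how many neighbors fall inside the sphere, which is why an oversized radius makes the whole computation explode.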
It is also sometimes a good idea to subsample your cloud first to a regular density (selecting the right radius will then be easier, and the algorithm will advance at a more stable pace).
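As a rough illustration of what a minimum-distance ('space') subsampling does - again just a sketch, not CC's Edit > Subsample implementation:

```python
# Keep roughly one point per cell of size 'min_dist' (a crude 'space' subsampling).
import numpy as np

def spatial_subsample(points, min_dist):
    cells = np.floor(points / min_dist).astype(np.int64)
    # keep the first point encountered in each occupied cell
    _, keep = np.unique(cells, axis=0, return_index=True)
    return points[np.sort(keep)]
```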
Daniel, CloudCompare admin
Re: Computing normals is terribly slow
Thanks for your answer! I'll try that later. It seems I'm a patient waiter... I just wanted to see how long it would take if I didn't change any of the settings (and it was already running when I wrote the question). Well, it took about 6.5 hours and now I have a result to show... I have to admit, I'm not really impressed. Are the black parts caused by wrong normals? Is it possible to change them manually?
I'll test the computation with different kernel radius settings; maybe that will give a better result. Do you think it could help to perform an HPR before computing the normals?
Attachments: result.JPG, good timing.jpg
Re: Computing normals is terribly slow
Nice scan ;)
The kernel is clearly too big (hence the over-smoothed look), so I think the black normals are just wrongly oriented.
Did you tell CloudCompare to resolve the normal orientations? (Normally CC asks for this at the end of the process; otherwise you can launch it manually with 'Edit > Normals > Orient Normals > With Minimum Spanning Tree'.)
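If you ever need to do the same thing outside of CC: libraries such as Open3D expose the same idea, i.e. propagating a consistent normal orientation over a neighborhood graph. A minimal sketch, with made-up file names and a radius you would have to adapt to your cloud:

```python
# Minimal sketch with Open3D (not part of CloudCompare); file names are made up.
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
# propagate a consistent orientation over a graph of 10 nearest neighbors
# (same spirit as CC's 'Orient Normals > With Minimum Spanning Tree')
pcd.orient_normals_consistent_tangent_plane(10)
o3d.io.write_point_cloud("cloud_with_normals.ply", pcd)
```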
And if the cloud is noisy (especially with overlapping scans that are close to each other but slightly shifted, due to a too-coarse registration for instance), then you should definitely resample the cloud (with Edit > Subsample).
P.S.: did you figure out why some normals are missing even though all the points come from DP files?
P.P.S.: don't hesitate to share the (DP) files with me if you want me to try on my side and give you the best sequence/parameters
Daniel, CloudCompare admin
Re: Computing normals is terribly slow
Thanks! :)
I've just tried and tested some calculations with the "Orient Normals" function and got several results. None of them is completely fine, but I'm getting close. Special thanks for the tip about the subsampling method; I had already been looking for a function like that!
I don't know why the normals weren't saved; maybe I forgot to compute them when I opened one of the files (it was created by combining 6 single scans). With the other files this worked fine.
Attachments: new try3.JPG, new try2.JPG, new try.JPG
Re: Computing normals is terribly slow
And by the way, this is what the computed mesh (Poisson Surface Reconstruction) looks like (with double-sided lighting, as some parts, e.g. the wheels and the tow of the crane, still have wrong normals).
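In case it helps anyone reading this later: the same reconstruction step can also be scripted outside of CC, e.g. with Open3D. Just a sketch with made-up file names, assuming the cloud already has properly oriented normals:

```python
# Sketch of a Poisson reconstruction with Open3D (not CloudCompare itself).
import open3d as o3d

pcd = o3d.io.read_point_cloud("crane_with_normals.ply")  # made-up file name
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)  # higher depth = finer detail, but more memory/time
o3d.io.write_triangle_mesh("crane_mesh.ply", mesh)
```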
Attachments: created mesh.JPG, another side view of the mesh.JPG
Re: Computing normals is terribly slow
Hi youngtimer
Your mesh looks very nice and crisp. I am wondering how you did it.
I have a PTS file but I am struggling to make a good-looking mesh out of it, like the ones in the attached posts.
Could you kindly let me know the settings you used?
revai@lloydhill.co.za
+27216867500