It is mentioned that Hu moments are invariant to scale, rotation, and translation. But when I compared the Hu moments of the same image at different sizes and angles, I got slightly different values.

I take the image at 100x75 (height, width) the first time, and then the same image at 200x150 (height, width) the second time.
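
Roughly, the comparison looks like this (a minimal sketch, assuming OpenCV's Python bindings, which the question does not actually name; the file name is a placeholder):

import cv2

# Placeholder file name -- substitute your own test image.
img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)

# Note: cv2.resize takes dsize as (width, height).
first  = cv2.resize(img, (75, 100))   # 100x75  (height, width)
second = cv2.resize(img, (150, 200))  # 200x150 (height, width)

hu_first  = cv2.HuMoments(cv2.moments(first)).flatten()
hu_second = cv2.HuMoments(cv2.moments(second)).flatten()

# Print the seven Hu moments side by side for the two sizes.
for i, (a, b) in enumerate(zip(hu_first, hu_second), start=1):
    print(f"h{i}: {a:.6e}  vs  {b:.6e}")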

Can someone explain the reason for that? Please help.

Thank you.
Comments
Sergey Alexandrovich Kryukov 29-May-13 16:40pm    
How big is the difference? It could be just within the accuracy of the method.
—SA
Sergey Alexandrovich Kryukov 29-May-13 16:53pm    
It's quite explainable, and you could have found the explanation yourself if you had tried to think a bit more, but this is an interesting question. I voted 5 for it.
Please see my answer. Good luck.
—SA

1 solution

Please see my comment to the question. Imagine all your calculations are correct, which is pretty likely. The difference could be just within the accuracy of the method. But beyond that: if you re-sample an image, it is no longer exactly the same image.

Think of it this way: your image, expressed in pixel values, is itself only an approximation of some "ideal scene". Colors are mangled; some may be saturated or subject to value discretization. More importantly for us, the pixels themselves do not exist in nature: each pixel gets its color from some averaged value reflecting the light coming from some solid angle, further mangled by the aberrations of the optical system; and pixels, as electronic devices, are not independent, as they slightly affect each other through parasitic effects. Still, what comes from the camera is the single point of truth.

What happens when you re-sample and rotate the image in software? The original correlations between pixels are largely destroyed and, essentially, you introduce additional discretization error. It is especially apparent when you upscale the image (which always entails some inherent quality loss). If you read about the resampling algorithms (or even if you don't, but just think about it), you will see that some non-existent pixels have to be "created" by one interpolation algorithm or another, and such algorithms always have limited accuracy. In other words, you create a new image which only closely resembles the original (or not even so closely, at high magnification). And this image is somewhat different in all its features, including its moments.
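
To see this effect directly, here is a minimal sketch (assuming OpenCV's Python bindings; the file name is a placeholder): the same 2x upscale computed with two different interpolation algorithms yields two slightly different images, and therefore two slightly different sets of Hu moments.

import cv2
import numpy as np

# Placeholder file name -- any grayscale test image will do.
img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)

def log_hu(image):
    """Hu moments on a log scale; the raw values span many orders
    of magnitude, so this makes them comparable at a glance."""
    hu = cv2.HuMoments(cv2.moments(image)).flatten()
    # The tiny epsilon avoids log(0) for a degenerate moment.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# The same 2x upscale, computed with two interpolation algorithms.
# Each one "creates" the missing pixels differently, so each yields
# a slightly different image -- and slightly different moments.
up_linear = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)
up_cubic  = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

print("original    :", log_hu(img))
print("2x, linear  :", log_hu(up_linear))
print("2x, cubic   :", log_hu(up_cubic))

If the three rows agree to a few significant digits but not exactly, that is the discretization error described above, not a bug in the moment computation.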

—SA
 
Comments
David Jhones 30-May-13 11:08am    
Hi Sergey, thank you very much for your effort on my question. Excellent work. I must think outside the box. Thank you very much.
Sergey Alexandrovich Kryukov 30-May-13 11:35am    
My pleasure. Thank you for an interesting question, which evokes some thinking. :-)
—SA



