Let's go through it step by step.
First, this code (I assume you know how macros work):
static const unsigned char BitsSetTable256[256] =
{
# define B2(n) n, n+1, n+1, n+2
# define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)
# define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)
B6(0), B6(1), B6(1), B6(2)
};
Expanding the first call by hand shows what actually ends up in the table:

B6(0)  ->  B4(0), B4(0+1), B4(0+1), B4(0+2)
B4(0)  ->  B2(0), B2(0+1), B2(0+1), B2(0+2)
B2(0)  ->  0, 0+1, 0+1, 0+2

So the table starts with 0, 1, 1, 2, ... and in general BitsSetTable256[n] holds the number of bits set in n.
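If you want to convince yourself that the whole table is right, here is a minimal check of my own (a sketch that assumes the BitsSetTable256 definition above is pasted into the same file):

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 256; i++) {
        int bits = 0;                          /* count the set bits of i the slow way */
        for (int v = i; v; v >>= 1)
            bits += v & 1;
        if (bits != BitsSetTable256[i])
            printf("mismatch at index %d\n", i);
    }
    printf("done: every entry equals the bit count of its index\n");
    return 0;
}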
Second, this line:
c = BitsSetTable256[v & 0xff] +
    BitsSetTable256[(v >> 8) & 0xff] +
    BitsSetTable256[(v >> 16) & 0xff] +
    BitsSetTable256[v >> 24];
With this indexing you can never get an index larger than 255, so there is no danger of reading past the end of the table. To see why, try the code below in your compiler:
unsigned int a = 2863311530u;   /* 0xAAAAAAAA */
unsigned int b;
b = a & 0xff;           /* 0xAA = 170, lowest byte  */
b = (a >> 8) & 0xff;    /* 0xAA = 170, second byte  */
b = (a >> 16) & 0xff;   /* 0xAA = 170, third byte   */
b = (a >> 24) & 0xff;   /* 0xAA = 170, highest byte */
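To tie that back to the lookup line above (again my own sketch, assuming the table definition is in the same file): 0xAAAAAAAA has the bit pattern 1010...1010, so 16 bits are set, and the four table lookups add up to exactly that:

#include <stdio.h>

int main(void)
{
    unsigned int v = 2863311530u;                 /* 0xAAAAAAAA, 16 bits set */
    unsigned int c = BitsSetTable256[v & 0xff] +
                     BitsSetTable256[(v >> 8) & 0xff] +
                     BitsSetTable256[(v >> 16) & 0xff] +
                     BitsSetTable256[v >> 24];
    printf("%u\n", c);                            /* prints 16 */
    return 0;
}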
Now check the definition below:
unsigned char * p = (unsigned char *) &v;
Take your compiler again and test the code below:
unsigned int a = 1684234849;              /* 0x64636261 */
unsigned char *b = (unsigned char *) &a;  /* b[0]..b[3] are the four bytes of a */
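If you print the four bytes, you can see them directly (my own sketch; the byte order shown assumes a little-endian machine like x86, on big-endian hardware they come out reversed, but each byte is still between 0 and 255):

#include <stdio.h>

int main(void)
{
    unsigned int a = 1684234849;              /* 0x64636261 */
    unsigned char *b = (unsigned char *) &a;
    /* prints 97 98 99 100 on a little-endian machine,
       i.e. the bytes 0x61 0x62 0x63 0x64 of a */
    printf("%u %u %u %u\n", b[0], b[1], b[2], b[3]);
    return 0;
}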
I hope the example above makes it clear how the code below works: each byte can only hold a value from 0 to 255, so every p[i] is a valid index into the table.
c = BitsSetTable256[p[0]] +
    BitsSetTable256[p[1]] +
    BitsSetTable256[p[2]] +
    BitsSetTable256[p[3]];
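And here is the whole thing as one small self-contained program you can compile and run (the test value 0xAAAAAAAA is my own choice; it should print 16):

#include <stdio.h>

static const unsigned char BitsSetTable256[256] =
{
#   define B2(n) n, n+1, n+1, n+2
#   define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)
#   define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)
    B6(0), B6(1), B6(1), B6(2)
};

int main(void)
{
    unsigned int v = 2863311530u;             /* 0xAAAAAAAA, 16 bits set */
    unsigned char *p = (unsigned char *) &v;
    unsigned int c = BitsSetTable256[p[0]] +
                     BitsSetTable256[p[1]] +
                     BitsSetTable256[p[2]] +
                     BitsSetTable256[p[3]];
    printf("bits set: %u\n", c);              /* prints 16 */
    return 0;
}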
I'm sure you can work out the rest of the code the same way; that's the whole explanation.