The art of passing integers from Python to CUDA

I recently ran into problems with some Python -> PyCUDA -> CUDA code that cost me quite a lot of debugging time.

As a general rule, when passing an integer to CUDA via PyCUDA, always specify the exact number of bits it uses.

Usually you can pass an integer to CUDA using something like npy.uint(2). Given numpy's nomenclature, one would expect the CUDA code to receive an unsigned integer. That is not the case: numpy's uint is an alias for C's unsigned long, whose width is platform-dependent (typically 64 bits on Linux), so the kernel received everything but the value I transmitted. Instead, one has to write npy.uint32(2) to transmit a 32-bit unsigned integer that matches CUDA's unsigned int.
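The width mismatch is easy to demonstrate with numpy alone (assuming npy above is simply an alias for numpy, i.e. import numpy as npy):

```python
import numpy as np

# np.uint maps to C's unsigned long, whose width depends on the platform
# (8 bytes on 64-bit Linux, 4 bytes on Windows). np.uint32 is always 4 bytes.
print(np.dtype(np.uint).itemsize)    # platform-dependent: 4 or 8
print(np.dtype(np.uint32).itemsize)  # always 4

# A kernel parameter declared as a 32-bit unsigned int will misread an
# 8-byte argument, which is why the explicit-width type is required.
```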

So here is a little conversion cheat sheet (untested):

numpy    CUDA (C)
uint16   unsigned short
int16    short
uint32   unsigned int
int32    int

I skip long int and friends, as those types are a bit awkward to declare portably in C.