It seems that uint32_t is much more prevalent than uint_fast32_t (I realise this is anecdotal evidence). That seems counter-intuitive to me, though.
Almost always when I see an implementation use uint32_t, all it really wants is an integer that can hold values up to 4294967295 (usually a much lower bound somewhere between 65535 and 4294967295). It seems weird to then use uint32_t, as the 'exactly 32 bits' guarantee is not needed, and the 'fastest available >= 32 bits' guarantee of uint_fast32_t seems to be exactly the right idea. Moreover, while it's usually implemented, uint32_t is not actually guaranteed to exist.
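To make the distinction concrete, here is a minimal sketch (the function names are hypothetical, just for illustration) of the two use cases as I understand them: one genuinely needs exactly 32 bits, the other only needs "at least 32 bits":

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Needs exactly 32 bits, e.g. decoding a little-endian wire format.
 * uint32_t is the right type here, but it is optional in the standard. */
uint32_t read_le32(const unsigned char b[4])
{
    return (uint32_t)b[0]
         | (uint32_t)b[1] << 8
         | (uint32_t)b[2] << 16
         | (uint32_t)b[3] << 24;
}

/* Only needs "at least 32 bits, as fast as possible". uint_fast32_t is
 * always available and may be wider (e.g. 64 bits on some 64-bit ABIs). */
uint_fast32_t sum_first_n(uint_fast32_t n)
{
    uint_fast32_t sum = 0;
    for (uint_fast32_t i = 1; i <= n; ++i)
        sum += i;
    return sum;
}

int main(void)
{
    unsigned char buf[4] = { 0x78, 0x56, 0x34, 0x12 };
    printf("%" PRIu32 "\n", read_le32(buf));       /* 305419896 */
    printf("%" PRIuFAST32 "\n", sum_first_n(1000)); /* 500500 */
    return 0;
}
```

In my experience, most real code looks like the second function but is written with uint32_t anyway.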
Why, then, do most people seem to use uint32_t? Is it simply better known?