Wednesday, August 3, 2016

Bitwise '&' with signed vs. unsigned operand

I ran into an interesting scenario in which I got different results depending on the type of the right-hand operand, and I can't really work out the reason for it.

Here is the minimal code:

#include <iostream>
#include <cstdint>

int main()
{
    uint16_t check = 0x8123U;

    uint64_t new_check = (check & 0xFFFF) << 16;  // 0xFFFF is a (signed) int literal

    std::cout << std::hex << new_check << std::endl;

    new_check = (check & 0xFFFFU) << 16;  // 0xFFFFU is an unsigned int literal

    std::cout << std::hex << new_check << std::endl;

    return 0;
}

I compiled this code with g++ (gcc version 4.5.2) on 64-bit Linux: g++ -std=c++0x -Wall example.cpp -o example

The output was:

ffffffff81230000

81230000

I can't really understand the reason for the output in the first case.

Why would any of the intermediate results be promoted at some point to a signed 64-bit value (int64_t), resulting in the sign extension?

I would accept a result of '0' in both cases if the 16-bit value were shifted 16 bits left first and only then promoted to a 64-bit value. I would also accept the second output if the compiler first promoted check to uint64_t and then performed the other operations...

But how come & with 0xFFFF (int32_t) vs. 0xFFFFU (uint32_t) results in those two different outputs?
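
In case it helps pin the question down, here is a small sanity check I put together (my own addition, not part of the original program): it inspects the types of the two intermediate & expressions with decltype. If my reading of the usual arithmetic conversions is right, it should compile cleanly with the same g++ invocation:

#include <cstdint>
#include <type_traits>

int main()
{
    uint16_t check = 0x8123U;
    (void)check; // only used in unevaluated contexts below

    // check is promoted to int, so int & int yields int
    static_assert(std::is_same<decltype(check & 0xFFFF), int>::value,
                  "signed right operand -> int result");

    // with an unsigned literal, the usual arithmetic conversions
    // make the whole & expression unsigned int
    static_assert(std::is_same<decltype(check & 0xFFFFU), unsigned int>::value,
                  "unsigned right operand -> unsigned int result");

    return 0;
}

If that is correct, both & expressions are evaluated at 32-bit (unsigned) int width rather than 64 bits, so my question is really about what happens when that 32-bit result is then converted to uint64_t.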
