I need to read in and process very large image buffers in a program I'm writing, so I ran a few quick tests to find approximately how much heap storage my program can use.
Running the code below produces no errors.
#include <cstdint>   // uint64_t, uint8_t
#include <iostream>

uint64_t sz = 1;
sz <<= 30;                                    // 2^30 bytes = 1 GiB
uint8_t *test = new uint8_t[sz];              // allocate 1 GiB on the heap
std::cout << "size is " << sz << " bytes.\n";
delete[] test;
However, if I left-shift sz by 31 instead of 30, or if I create two test arrays of the above size, I get a std::bad_alloc exception.
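For concreteness, the failing variant is just the same code with a larger shift; I wrapped it in a try/catch to confirm the exception (the catch block is only there for illustration):

#include <cstdint>
#include <iostream>
#include <new>      // std::bad_alloc

int main() {
    uint64_t sz = 1;
    sz <<= 31;                                    // 2 GiB instead of 1 GiB
    try {
        uint8_t *test = new uint8_t[sz];
        std::cout << "size is " << sz << " bytes.\n";
        delete[] test;
    } catch (const std::bad_alloc &) {
        std::cout << "allocation of " << sz << " bytes threw std::bad_alloc.\n";
    }
}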
This seems to indicate that I can use a maximum of ~1 GB of heap space, despite having a 64-bit system with 8 GB of RAM. Is there a reason for this? Is this an alterable value? As my tags indicate, I'm running Windows and using Visual Studio. Ideally, I'd like to load as much of the buffer into RAM as possible without incurring swapping.
(As an aside, on my Mac running Yosemite, I can not only allocate this amount of heap storage, but can even write to it in short increments to force it all to actually be allocated (not just reserved by malloc) and subsequently read from it. Activity Monitor reports 1, 2, 4, 8, 16... GB of RAM in use by the program, though the vast majority is compressed.)
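The write-then-read loop I used on the Mac looked roughly like this; the 4096-byte stride is an assumption that one write per page is enough to force each page to be committed:

#include <cstdint>
#include <iostream>

int main() {
    uint64_t sz = 1;
    sz <<= 33;                                    // 8 GiB
    uint8_t *buf = new uint8_t[sz];
    const uint64_t stride = 4096;                 // assumed page size
    for (uint64_t i = 0; i < sz; i += stride)
        buf[i] = static_cast<uint8_t>(i);         // touch each page to commit it...
    uint64_t sum = 0;
    for (uint64_t i = 0; i < sz; i += stride)
        sum += buf[i];                            // ...then read it all back
    std::cout << "touched " << sz << " bytes, checksum " << sum << ".\n";
    delete[] buf;
}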