It's kind of like a real-time hash table: check the "Incremental Resizing" section of https://en.wikipedia.org/wiki/Hash_table.
What I do is just simulate the linear-time sieve of Eratosthenes.
Normally, for amortized cost, you would just double the size of the array (or grow it by any constant factor k > 1).
For incremental resizing, you allocate the memory for the bigger array before the smaller one fills up (allocation is supposed to be fast, at least in single-threaded programs). Then, on each query, you answer from the already-solved sieve (the smaller one) but do a small amount of extra computation in the bigger one, so that by the time you outgrow the smaller sieve, the big one is up and ready.
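To make the idea concrete, here is a minimal sketch (not the author's actual code) of that scheme: a finished sieve answers queries, while each query also spends a small, bounded budget advancing a twice-as-large sieve, so the doubling cost is spread across many queries. The class and method names (`IncrementalSieve`, `_step`, the `budget` parameter) are my own invention for illustration, and it uses a classic sieve rather than the linear-time variant for brevity.

```python
# Hypothetical sketch: incremental resizing applied to a prime sieve.
# A finished sieve of size n serves queries; meanwhile each query does a
# bounded amount of work toward a sieve of size 2*n, so no single query
# pays the full rebuild cost.

class IncrementalSieve:
    def __init__(self, n=16):
        self.n = n
        self.is_composite = self._full_sieve(n)  # finished sieve; answers come from here
        self._start_resize()

    @staticmethod
    def _full_sieve(n):
        comp = [False] * n
        for p in range(2, n):
            if not comp[p]:
                for q in range(p * p, n, p):
                    comp[q] = True
        return comp

    def _start_resize(self):
        # Allocate the bigger array up front; filling happens incrementally.
        self.big_n = 2 * self.n
        self.big = [False] * self.big_n
        self.next_p = 2

    def _step(self, budget=4):
        # Advance the big sieve by processing a few more candidate primes.
        for _ in range(budget):
            p = self.next_p
            if p * p >= self.big_n:
                # Big sieve is finished: swap it in and begin the next resize.
                self.n, self.is_composite = self.big_n, self.big
                self._start_resize()
                return
            if not self.big[p]:
                for q in range(p * p, self.big_n, p):
                    self.big[q] = True
            self.next_p += 1

    def is_prime(self, x):
        while x >= self.n:   # catch up if the query outruns the solved sieve
            self._step()
        self._step()         # constant extra work per query
        return x >= 2 and not self.is_composite[x]
```

The `while` loop is a fallback for queries that jump far ahead; when queries grow gradually (as in an online stream), the per-query `_step` keeps the big sieve ready before it is needed.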
I've added comments in the source code which should also help with understanding.