We're developing an astrophysical stochastic model that is extremely HPC-intensive: using 24 threads, a run takes many days, so a 64-core, 128-thread chip would certainly speed up our work. For historical and other reasons we develop under Windows, which apparently limits a single process to 64 threads (one processor group). My question: is there anything that prevents running two instances of the code concurrently under Windows 7 or 10, each using 64 threads? Hyperthreading gives us about a 50% speedup. Since it is a Markov-chain-type process, summing the results from two concurrent processes works perfectly well.
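For concreteness, here is a minimal sketch of the split-and-sum strategy: two independent worker processes, each running its own chain with a distinct RNG seed, with the accumulated statistics pooled at the end. The transition kernel, the statistic, and the names `run_chain`/`n_steps` are toy stand-ins for the real model, not our actual code.

```python
import multiprocessing as mp
import random

def run_chain(seed, n_steps=100_000):
    """Accumulate statistics from one independent chain (toy kernel)."""
    rng = random.Random(seed)
    state = 0.0
    total = 0.0   # running sum of the statistic
    count = 0     # number of samples contributed
    for _ in range(n_steps):
        state += rng.gauss(0.0, 1.0)  # toy Markov transition
        total += state * state        # toy statistic of interest
        count += 1
    return total, count

if __name__ == "__main__":
    # Two concurrent instances; on Windows each process gets its own
    # processor group assignment, so together they can cover >64 threads.
    with mp.Pool(processes=2) as pool:
        results = pool.map(run_chain, [1, 2])
    total = sum(t for t, _ in results)
    count = sum(c for _, c in results)
    print(count)           # samples pooled across both processes
    print(total / count)   # pooled estimate
```

Because the chains are independent, the pooled estimate is statistically equivalent to one chain of twice the length, which is why two 64-thread processes are as good for us as one 128-thread process.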
Is there any definitive reference on the processor-group concept in Windows?