Slower speed on WLS
Dear Staff,
I have set up WLS on the NVIDIA CUDA Centre (NCC) GPU system, but I found that the same simulation runs much more slowly there than on my single machine. Tracking the components of the computational time shows that the slowdown is in the optimization step; every other part is faster on the server. The CPU on my single machine is an AMD Ryzen 5 PRO 4650U with Radeon Graphics, instruction set [SSE2|AVX|AVX2], while the CPU on the server is an Intel(R) Xeon(R) Gold 6238R CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2|AVX512]. I am running exactly the same simulation in both cases; the only difference is that my single machine uses a named-user license while the server uses WLS.
May I ask how I can fix this issue and improve performance on the server? I would like to run several Gurobi simulations on different CPUs in parallel.
-
Hi Xin Ye,
Whatever the cause of the performance difference, I can assure you WLS plays no part in it; it is simply a licensing mechanism.
A couple of suggestions for you:
i) Run the model several times, each with a different value for the Seed parameter. This gives you a statistical distribution of run times, which is a better basis for comparison than a single run on each machine.
ii) Calculate the "work unit"/second rate for your machines. This does not have to be done with your model; you can use any model. To do this, solve a model (to optimality or timeout) and look at the line in the log that reports the time and work units taken, e.g.
Explored .. .nodes (... simplex iterations) in 5.95 seconds (4.21 work units)
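The rate can be computed directly from that log line. A minimal sketch (the node, iteration, time, and work-unit values below are illustrative placeholders, not numbers from an actual solve):

```python
import re

# Illustrative Gurobi log line; substitute the one from your own log.
log_line = ("Explored 1234 nodes (56789 simplex iterations) "
            "in 5.95 seconds (4.21 work units)")

# Pull out wall-clock seconds and deterministic work units.
match = re.search(r"in ([\d.]+) seconds \(([\d.]+) work units\)", log_line)
seconds = float(match.group(1))
work_units = float(match.group(2))

# Work units per second: a machine-speed figure that can be compared
# across hosts, since work units are deterministic for a given solve.
rate = work_units / seconds
print(f"{rate:.3f} work units/second")
```

Running the same model on both machines and comparing these rates tells you which host is faster for Gurobi, independent of run-to-run timing noise.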
I'd also suggest setting Threads=1, Cuts=0, Heuristics=0. If the work units per second is smaller on the server, that is a good indication the solver will run faster on your single machine. Googling the specs of the CPUs, it does look like your server has more threads (56 vs 12), which could be helpful, but this is not always advantageous: several algorithms in our solver are single-threaded, so whether more threads will help depends on the nature of the solve, and 12 threads for a single solve is already pretty generous. For running multiple model solves in parallel, though, the higher number of threads on the server will be useful; you will likely just need a decent amount of RAM to match.
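One convenient way to apply those settings for a benchmark run is a Gurobi parameter (.prm) file; a minimal sketch (the filename is hypothetical, and the Seed value is just a placeholder to vary between runs):

```
# settings.prm: single-threaded benchmark configuration
Threads     1
Cuts        0
Heuristics  0
Seed        0
```

With gurobipy this file can be loaded via `Model.read("settings.prm")`; with the command-line tool the same parameters can be passed directly, e.g. `gurobi_cl Threads=1 Cuts=0 Heuristics=0 model.mps`.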
- Riley