Same model, different solve times
I am working on neural network verification. The "main" of the program loops through a set of 100 images and attempts to verify each one by encoding the problem as a MILP. The result is either optimal, infeasible, or timeout, meaning Gurobi could not produce an answer within 30 minutes.
My question is: why would some images that are known to time out on a university physical machine (16 GB RAM, 4 cores, 3.4 GHz) then become solvable in, for example, 50 seconds on a less powerful university virtual machine (8 GB RAM, 4 cores, 2.0 GHz)? The contrast is quite stark.
-
Hi James,
This effect is called "performance variability" and is a well-known phenomenon. Changing the order of variables or constraints, or using a different random seed, can drastically impact performance in either direction, even though the mathematical model is identical. This is described, for example, in the MIPLIB 2010 paper. Gurobi tries to exploit this a bit when running in ConcurrentMIP mode. Different hardware also plays an important role, as the search path through the branch-and-bound tree can differ significantly.
You could try running with several different random seeds to get a feel for how large the variability is on your model.
Cheers,
Matthias
-
Hi James
I am also interested in neural network verification. Could you please share your code? It would serve as a starting point for me. Also, if you like, could you share any tutorial or blog post about encoding a neural network and solving it with Gurobi?
Thanks