Adding the callback function
Dear support team,
I hope you are doing well. I am trying to count the number of different solutions to a problem and have run into some issues.
- If I write a constraint in LP format, like \(z_a + z_b = 5\), and instead write it in parametric form, like \(\sum_{j \in \{a,b\}} z_j = 5\), can this lead to a different model fingerprint?
- The objective value and the values of the main binary variables in the two models above are the same. However, the strange thing is the \(\texttt{Solution count}\): in the first model (LP format) it is \(Solution\ count\ 3{:}\ 32, 12, 4\), while in the second one it is \(Solution\ count\ 2{:}\ 32, 16\). Could you please tell me whether this is normal behavior? (I should note the model contains indicator and general MAX constraints.)
- The callback function I use to count the different solutions looks like this:
import gurobipy as gp

sol_cnt = 0

def callback_function(model, where):
    global sol_cnt
    if where == gp.GRB.Callback.MIPSOL:
        Z = model.cbGetSolution(z)
        print("An incumbent solution was found:", Z)
        # Build a no-good cut that excludes the incumbent solution
        cut_expr = []
        for j in J:
            if Z[j] < 0.5:
                cut_expr.append(z[j])
            else:
                cut_expr.append(1.0 - z[j])
        print("Adding the cut to the model")
        model.cbLazy(gp.quicksum(cut_expr) >= 1)
        sol_cnt += 1
        print("sol_cnt:", sol_cnt)

model.params.LazyConstraints = 1
model.optimize(callback_function)
where \(z_{j}\) is the main binary variable in the model. Could you please tell me whether I am on the right track?
Also, the logs for the first and second models follow:
Set parameter LazyConstraints to value 1
Gurobi Optimizer version 12.0.0 build v12.0.0rc1 (linux64 - "Ubuntu 22.04.3 LTS")
CPU model: Intel(R) Xeon(R) CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads
Non-default parameters:
LazyConstraints 1
Optimize a model with 6 rows, 26 columns and 18 nonzeros
Model fingerprint: 0x6b8d0af6
Model has 18 simple general constraints
6 MAX, 12 INDICATOR
Variable types: 20 continuous, 6 integer (6 binary)
Coefficient statistics:
Matrix range [1e+00, 3e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+01]
RHS range [4e+00, 4e+00]
GenCon coe range [1e+00, 1e+00]
Presolve added 48 rows and 30 columns
Presolve time: 0.00s
Presolved: 54 rows, 56 columns, 132 nonzeros
Variable types: 20 continuous, 36 integer (36 binary)
Found heuristic solution: objective 4.0000000
Found heuristic solution: objective 12.0000000
Root relaxation: objective 3.200000e+01, 46 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 32.0000000 32.00000 0.00% - 0s
Explored 1 nodes (46 simplex iterations) in 0.04 seconds (0.00 work units)
Thread count was 2 (of 2 available processors)
Solution count 3: 32 12 4
Optimal solution found (tolerance 1.00e-04)
Best objective 3.200000000000e+01, best bound 3.200000000000e+01, gap 0.0000%
Set parameter LazyConstraints to value 1
Gurobi Optimizer version 12.0.0 build v12.0.0rc1 (linux64 - "Ubuntu 22.04.3 LTS")
CPU model: Intel(R) Xeon(R) CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads
Non-default parameters:
LazyConstraints 1
Optimize a model with 6 rows, 26 columns and 18 nonzeros
Model fingerprint: 0xdc12c9b9
Model has 18 simple general constraints
6 MAX, 12 INDICATOR
Variable types: 20 continuous, 6 integer (6 binary)
Coefficient statistics:
Matrix range [1e+00, 3e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+01]
RHS range [4e+00, 4e+00]
GenCon coe range [1e+00, 1e+00]
Presolve added 48 rows and 30 columns
Presolve time: 0.00s
Presolved: 54 rows, 56 columns, 132 nonzeros
Variable types: 20 continuous, 36 integer (36 binary)
Found heuristic solution: objective 16.0000000
Root relaxation: objective 3.200000e+01, 43 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 32.0000000 32.00000 0.00% - 0s
Explored 1 nodes (43 simplex iterations) in 0.05 seconds (0.00 work units)
Thread count was 2 (of 2 available processors)
Solution count 2: 32 16
Optimal solution found (tolerance 1.00e-04)
Best objective 3.200000000000e+01, best bound 3.200000000000e+01, gap 0.0000%
- Indeed, I am using the NumPy library to define the model parameters, and it emits the following warning:
<ipython-input-36-f0ed18a60bb9>:28: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future.
Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
Do I have to use a different data structure, in case this causes issues in the solving process?
Best regards
Hi Abbas,
If I write a problem as an LP format ... may this lead to the different model fingerprint?
No it shouldn't. You can test this with toy models:
import gurobipy as gp
m1 = gp.Model()
x = m1.addVars(2)
m1.addConstr(x[0] + x[1] <= 1)
m1.update()
m2 = gp.Model()
x = m2.addVars(2)
m2.addConstr(gp.quicksum(x[i] for i in [0,1]) <= 1)
m2.update()
m3 = gp.Model()
x = m3.addVars(2)
m3.addConstr(x[1] + x[0] <= 1)
m3.update()
m4 = gp.Model()
x = m4.addVars(2)
m4.addConstr(gp.quicksum(x[i] for i in [1,0]) <= 1)
m4.update()
assert len(set([m.Fingerprint for m in [m1, m2, m3, m4]])) == 1

If you're getting different fingerprints then it is for some other reason. I would write out the two models to LP files and then use a diff tool to understand where the models are different.
The objective value and the solution of the main binary variables ... is it a normal behavior? (I should note the model contains indicator and general Max constraints).
Unless you are using PoolSearchMode to change the default behavior, Gurobi will search for an optimal solution, and any solution found along the way will be reported in the solution count. Under certain conditions Gurobi is deterministic, but different models (as indicated by different fingerprints) will certainly lead to different solution paths, so it is completely normal to see a different solution count, even if the models are theoretically equivalent.
I might need more context to understand your question about the callback. In terms of correctness of code it looks ok. But if you are adding a lazy constraint to cut off a "solution", then it's not really a solution so I wouldn't expect your sol_cnt to match Gurobi's.
Do I have to use a different data structure, in case this causes issues in the solving process?
Gurobi doesn't use NumPy in the solving process. All APIs are thin wrappers around the Gurobi solver, which is written in C. This means any warnings issued by NumPy will not affect the solving process. Eventually, though, the behavior you are relying on will be removed from NumPy, and at that point your code will fail to run, but this is separate from Gurobi.
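For what it's worth, the warning in your output can be reproduced and avoided with a one-element array (a minimal sketch, independent of Gurobi):

```python
import numpy as np

a = np.array([3.5])            # one-element array

# Deprecated since NumPy 1.25 and an error in future releases:
#   value = float(a)           # implicit array -> scalar conversion

# Safe: extract the single element explicitly first
value = a.item()               # plain Python float, no warning
```

Passing plain Python scalars (or `a.item()` / `a[0]`) into the model-building calls avoids the deprecated conversion entirely.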
- Riley
Dear Riley,
Many thanks for your comments and answers. To see what is happening, I am preparing a file to use as an MRE, along with the solver logs, to compare the two different models.
Regarding what you mentioned about the callback function, I am not sure I understand you correctly. The callback function tries to cut off each feasible solution as it is found and to count the number of solutions. If you think it does something else, I wonder if you could give me a template showing the right way to do it.
Also, another way I am planning to count the solutions is by defining a loop that runs a fixed number of optimizations. When an optimization terminates, the current feasible solution (actually the optimal one) is removed, and the loop continues with the next iteration until it is over. Could you please tell me whether this is what you mean by counting the solutions correctly?
All the best
Hi Abbas,
By way of example, the lazy constraints in our tsp.py example are used to eliminate potential solutions that violate subtour constraints, because they do not correspond to valid tours. If a potential solution is cut off then it wouldn't count towards the solution count, because it is not valid. In your code you are increasing the value of sol_cnt even when you reject an invalid solution, so you should not expect that number to be the same as Gurobi's solution count.
If you want to enumerate all solutions, then using Solution Pools would be the best approach.
- Riley
Dear Riley,
Thank you so much for clarifying. Please let me check something, and I will get back to you if I have any further questions.
All the best
Dear Riley,
Just as a follow-up question: how can I write the above problem in .lp/.mps format so that I can solve it with other solvers? The problem contains indicator and general MAX constraints, and writing it in LP format produces features unsupported by other solvers.
Best regards
Hi Abbas,
Unfortunately, this problem does not have a good solution. The MPS format is not 100% standardized among solvers and frameworks, particularly when it comes to more exotic features like general constraints. The situation is even worse for LP files.
The only solution is to compare formats between two solvers/frameworks and make manual adjustments (potentially a Python script would make quick work of it).
- Riley
Dear Riley,
Many thanks for your clarification.
All the best