
Possible to add constraints that depend on values of the variables?

Answered

9 comments

  • Eli Towle
    Gurobi Staff

    You could write these constraints as indicator constraints using Model.addGenConstrIndicator() (or the overloaded operator \(\texttt{>>}\)):

    MyAgentShare = my_model.addVars(nagents, name="AgentShare")

    for i in range(ngroups):
        my_model.addConstr(
            (MyAlpha[i] == 1)
            >> (
                gp.quicksum(MyAgentShare[j] for j in range(nagents) if MyGroups[i, j] == 1)
                == MyValues[i]
            ),
            name="define_agent_share",
        )

    The constraint on the right-hand side of the \(\texttt{>>}\) operator is enforced if \(\texttt{MyAlpha[i]}\) equals \( 1 \).

    With the above constraints, the model does not prohibit positive values from being assigned to \(\texttt{MyAgentShare[j]}\) for agents \(\texttt{j}\) that are not included in any of the chosen groups. If this is a problem, you could add constraints that force \(\texttt{MyAgentShare[j]}\) to \( 0 \) if none of the groups it participates in are chosen:

    for j in range(nagents):
        M = max(MyValues[i] for i in range(ngroups) if MyGroups[i, j] == 1)
        my_model.addConstr(
            MyAgentShare[j]
            <= M * gp.quicksum(MyAlpha[i] for i in range(ngroups) if MyGroups[i, j] == 1),
            name="restrict_positive_share",
        )

    If \(\texttt{MyAlpha[i]}\) equals \( 0 \) for all groups \(\texttt{i}\) that agent \(\texttt{j}\) participates in, then the constraint is equivalent to \(\texttt{MyAgentShare[j]} \leq 0 \). However, if \(\texttt{MyAlpha[i]}\) equals \(1\) for at least one of the groups \(\texttt{i}\) that agent \(\texttt{j}\) participates in, then the constraint relaxes to \(\texttt{MyAgentShare[j]} \leq \texttt{M}\) (or weaker, if several such groups are chosen), where \(\texttt{M}\) equals the largest value of \(\texttt{MyValues}\) among the groups agent \(\texttt{j}\) participates in. Thus, the constraint does not effectively restrict \(\texttt{MyAgentShare[j]}\) at all.

  • Aleksandrs Smilgins

    Dear Eli,

    Thank you so much! This is a great help to me and I really appreciate your answer.

    Have a great day!

  • Aleksandrs Smilgins

    Dear Eli,

    Once again, thank you so much for the answer. At first I only checked the first part of your code, but now I can see that I actually also need the second part, where agents that are not in any of the chosen groups should have a MyAgentShare value of 0. Unfortunately, that part (the one named "restrict_positive_share") does not work, because I cannot add the constraint. It gives me the following error:

    GurobiError: Invalid argument to Model.addConstr

    And it looks like it dislikes this line:

        M = max(MyValues[i] for i in range(ngroups) if MyGroups[i, j] == 1)
    

    Can you help - how should I change it?


    2) Also, at the moment the solver can choose between many values of MyAgentShare for the agents that are part of an implemented group. Is it somehow possible to run an additional optimization problem where the groups are defined as in the solution of the initial problem (same values of MyAlpha, same value of the objective function), but where, among all such solutions, we maximize agent A's value of MyAgentShare? Then I could take the agents one by one and, for each agent, calculate the maximum possible value of MyAgentShare. Is that possible?

    Thank you!

  • Eli Towle
    Gurobi Staff

    Can you please post a minimal, self-contained code snippet that reproduces those errors?

    Is it somehow possible to run an additional optimization problem where the groups are defined like in the solution of the initial problem (same values of MyAlpha, same value of the objective function), but then among all solutions we would like to maximize agent A's value of MyAgentShare. Then I can take agents one by one, and then for each agent calculate the max possible value of MyAgentShare. Is that possible? 

    If I understand your question correctly, it sounds like you could do this with a multi-objective optimization problem. For example, you could define \(n+1\) hierarchical objectives, where \(n\) is the number of agents. The first objective is to maximize the total value, as you have now. The second objective would be to maximize the first agent's value, the third to maximize the second agent's value, and so on. A hierarchical multi-objective optimization approach like this works by ensuring that the solution quality of each multi-objective step does not degrade (within tolerances) with respect to the previous, higher-priority objective functions.

    You could also implement a hierarchical multi-objective approach yourself:

    1. Solve the original model.
    2. Add a constraint that the current objective function cannot degrade past the optimal objective function value.
    3. Change the objective function to maximize the first/next agent's value and solve the model.
    4. Repeat steps 2-3 until you have maximized all agents' values.
  • Aleksandrs Smilgins

    Dear Eli,

    Thank you very much for the answer. Regarding the first question: I tried to make a smaller example, and it worked. It turned out that in the original code I was importing the data from Excel, so MyValues was a matrix with one column rather than a vector, and the max function could not handle it. I have solved the problem with

    MyValues = np.squeeze(np.transpose(MyValues))
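    For example, this small illustration (with made-up numbers) shows the shape problem and the fix:

```python
import numpy as np

# A single Excel column often arrives as an (n, 1) matrix:
MyValues = np.array([[4.0], [6.0], [5.0]])
print(MyValues.shape)     # (3, 1)
print(MyValues[1].shape)  # (1,) -- each "element" is still an array, not a scalar

# np.squeeze collapses the length-1 axis into a true 1-D vector:
MyValues = np.squeeze(MyValues)
print(MyValues.shape)     # (3,)
print(max(MyValues[i] for i in range(3)))  # 6.0 -- max now compares scalars
```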


    Regarding the second part about the multi-objective approach: I have tried to read more about it, but could not make it work. It says I should use setObjectiveN, but the documentation does not show the syntax for Python. For instance, I have tried this in order to maximize agent 1's share:

    my_model.setObjectiveN(1, MyAgentShare[1], gp.GRB.MAXIMIZE)

    But it gives me the following error: "TypeError: an integer is required"

    So, what should I write? This is my main problem when using Gurobi in Python: I know there is something called setObjectiveN, and I know that it should take an integer as input, but I do not know how to write the call, and it is not spelled out in the documentation. If I have misunderstood something, could you please let me know where to look for this information?


    I also looked more into your solution for the part marked "restrict_positive_share". I do understand that this trick works, but I do not understand why one cannot write it more straightforwardly, like this (DOES NOT WORK):

    for j in range(nagents):
        M = max(MyGroups[i, j] for i in range(ngroups) if MyAlpha[i]==1)
        my_model.addConstr(
            MyAgentShare[j] == 0 if M == 0 else MyAgentShare[j]>=0,
            name="restrict_positive_share",
        )


    This part does not work, but what is wrong here? I use a more straightforward logic: in the matrix MyGroups (which contains only 0s and 1s), for each agent, find the maximum value over the rows chosen by the solver. If that value is 0, then the agent is not part of any implemented group. I am using the max function and setting the constraint based on the value of M, which also worked in your solution. So why does your solution work while this one does not? (The error is GurobiError: Constraint has no bool value (are you trying "lb <= expr <= ub"?).)

    Thank you!

  • Eli Towle
    Gurobi Staff

    The documentation for Python's Model.setObjectiveN() describes its usage. For your use case, you should pass the method three arguments:

    1. The linear expression to optimize
    2. The index of the objective
    3. The priority of the objective

    You can also use the \(\texttt{abstol}\) keyword argument to change the default allowable absolute degradation tolerance. To maximize all objective functions, set the ModelSense model attribute. Altogether:

    my_model.ModelSense = gp.GRB.MAXIMIZE

    # First objective: maximize total group value
    my_model.setObjectiveN(
        gp.quicksum(MyAlpha[i] * MyValues[i] for i in range(ngroups)), 0, nagents, abstol=0
    )

    # Subsequent objectives: maximize share of each agent in sequence
    for j in range(nagents):
        my_model.setObjectiveN(MyAgentShare[j], j + 1, nagents - j - 1, abstol=0)

    Gurobi's functional code examples are also helpful for learning how to use certain Gurobi features in different languages. The multiobj.py example is particularly relevant for your model.

    I do understand that this trick works, but I do not understand why one cannot make it more straightforward, like this (DOES NOT WORK): [...] what is wrong here?

    You cannot conditionally set \(\texttt{M}\) based on the value of a model variable like \(\texttt{MyAlpha}\). Gurobi and other optimization solvers require you to explicitly define your model constraints in mathematical terms. Thus, logical "if-then" relationships must be incorporated into the model via mathematical expressions, which often requires binary variables. Alternatively, Gurobi provides helper functions like Model.addGenConstrIndicator(), Model.addGenConstrAnd(), and Model.addGenConstrOr() for defining logical relationships in your model; Gurobi internally translates relationships defined with these helper functions into mathematical terms before solving.

  • Aleksandrs Smilgins

    Dear Eli,

    Thank you very much for the clear answer! Yes, now I understand why your constraint works and mine does not. I have looked more into this, and again I have a few issues which I would like to get clarified. Could you please help me with the following:

    1) It looks like gp.GRB.MAXIMIZE is set for all objectives. Is there a way to maximize the first objective, and then to minimize each agent's value of AgentShare? One can multiply by -1, but maybe it is possible to state the sense for each objective separately.

    2) In the subsequent objectives you suggest using nagents - j - 1 as priorities. It looks like you are first maximizing the last agent's share, then the share of the agent n-1, etc. So, agent 1's AgentShare is least important. Is my understanding correct? 

    3) Is it possible to use "=" when setting up the problem? In general, suppose I have a certain group structure  denoted by AlphaFixed, and then I want to rerun the problem while setting MyAlpha = AlphaFixed. How can I add this constraint?

    4) Maybe related to 3): suppose that some elements of the vector MyValues are 0, so that it does not matter whether the group is implemented or not, since its value is zero. However, in this case, to make the solution easier, we would like to ignore such groups (make sure that MyAlpha[i] is set to 0 for that particular i). I have tried the following, and it does not work:

    for i in range(ngroups):
        my_model.addConstr(
            (MyValues[i] == 0)
            >> (MyAlpha[i]=0),
            name="ignore_groups_with_no_savings",
        )

    error: SyntaxError: cannot assign to subscript here. Maybe you meant '==' instead of '='?
    And switching to == gives another error. 
    Can this be added?

    Thank you!

  • Eli Towle
    Gurobi Staff

    1) It looks like gp.GRB.MAXIMIZE is set for all objectives. Is there a way to maximize the first objective, and then to minimize each agent's value of AgentShare? One can multiply by -1, but maybe it is possible to state the sense for each objective separately.

    A multi-objective model has a single objective sense, meaning you must maximize or minimize all objectives. If you have a maximization problem, you can minimize an agent's share by setting the corresponding ObjNWeight attribute to -1. The easiest way to set this attribute is by setting the \(\texttt{weight}\) keyword argument to -1 in your call to Model.setObjectiveN().

    See the Specifying Multiple Objectives section of the documentation for more details.

    2) In the subsequent objectives you suggest using nagents - j - 1 as priorities. It looks like you are first maximizing the last agent's share, then the share of the agent n-1, etc. So, agent 1's AgentShare is least important. Is my understanding correct?

    No, higher-priority objectives are optimized earlier. Thus, total group value (priority \(n\)) is optimized first, then the first agent's share (priority \(n - 1\)), then the second agent's share (priority \(n - 2\)), and so on.

    3) Is it possible to use "=" when setting up the problem? In general, suppose I have a certain group structure denoted by AlphaFixed, and then I want to rerun the problem while setting MyAlpha = AlphaFixed. How can I add this constraint?

    Do you mean you want to solve the model multiple times for different group structures? If so, I would wrap your model-building code in a function that accepts a group structure \(\texttt{MyAlpha}\) as an argument. For example:

    def build_and_solve(my_alpha):
        m = gp.Model()

        # build model using my_alpha
        # ...

        m.optimize()

        # process and return results here
        # ...

    results1 = build_and_solve(my_alpha1)
    results2 = build_and_solve(my_alpha2)


    4) May be related to 3) - suppose that some elements in the vector MyValues are 0, such that it does not matter whether the group is implemented or not, since its value is zero. However, if this happens, in order to make the solution easier we would like to ignore such groups (make sure that MyAlpha[i] is set to 0 for that particular i). I have tried the following, and it does not work:

    [...]

    error: SyntaxError: cannot assign to subscript here. Maybe you meant '==' instead of '='? And switching to == gives another error. Can this be added?

    If I understand your question correctly, you can just fix the corresponding values of \(\texttt{MyAlpha}\) to 0. For example:

    for i in range(ngroups):
        if MyValues[i] == 0:
            MyAlpha[i].UB = 0

    Here, we set the upper bound of the relevant binary variables to 0, which fixes their values to 0.

    You don't need indicator constraints here, because the bound you want to add depends on the fixed, constant value \(\texttt{MyValues[i]}\). In contrast, we previously used an indicator constraint to enforce a constraint that depended on the value of a model variable.

  • Aleksandrs Smilgins

    Thank you very much! It is a great help for me, and I appreciate it a lot!

