
Weighted multi-objectives not working as expected

Answered

Comments

4 comments

  • Mario Ruthmair
Gurobi Staff

    Hi Jose,

This is quite a complicated problem, but I will try to answer at least some of your questions:

    • Quadratic objective functions are allowed, also in multi-objective models, so your second objective is fine.
• When you query the objective values after optimization, you will get the value of each objective function without it being multiplied by its weight. So having non-zero values for all 3 objectives is fine, even if two of them have weight 0. Those 2 objectives are just not considered when searching for an optimal solution and will most probably have sub-optimal values (see the sketch after this list).
    • Weights [1,0,0] will just minimize the first objective function. Likewise, weights [0,1,0] and [0,0,1] will minimize only the second and the third objective, respectively. 
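For illustration, here is a minimal sketch on a made-up toy model (not your model) that shows how to query each objective's unweighted value after a blended multi-objective solve:

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model()
    x = m.addVar(name="x")
    y = m.addVar(name="y")
    m.addConstr(x + y >= 1)

    # Blended multi-objective: only the first objective has a non-zero weight.
    m.ModelSense = GRB.MINIMIZE
    m.setObjectiveN(x + y, 0, weight=1)
    m.setObjectiveN(x - y, 1, weight=0)
    m.optimize()

    # ObjNVal returns each objective's value without its weight applied.
    for i in range(m.NumObj):
        m.params.ObjNumber = i
        print("objective", i, "=", m.ObjNVal)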

    If weights [0,1,0] give you a better objective value for the first function than weights [1,0,0], then there is a problem somewhere. Could you show the 2 solver logs for these two cases? There could for example be numerical issues.

    Best regards,
    Mario

  • Jose peeterson

    Hi Mario,

    Thanks for your reply.

    I understand the 1st and 2nd bullet points. 

For your 3rd point, somehow the results of the multi-objective optimization are different when I define the objectives individually using

    m.setObjectiveN()

    versus when I define my own single weighted sum of objectives using 

m.setObjective( w1*obj1 + w2*obj2 + w3*obj3 )

I find the results of the single weighted sum of objectives to be correct and to make sense. Can you explain this?

    The exact syntax for the two cases is given below.

case 1:

    m.setObjectiveN( ( num_stab*sum([a*b*vbat for a,b in zip(tot_char_curr,WEPV)]) - utopia_obj1 ) / ( nadir_obj1 - utopia_obj1 ), 0, weight = W1 )
    m.setObjectiveN( ( num_stab*sum(cap_loss_array) - utopia_obj2 ) / ( nadir_obj2 - utopia_obj2 ), 1, weight = W2 )
    m.setObjectiveN( ( num_stab*(sum([a*b for a,b in zip(tot_char_curr,weights)])) - utopia_obj3 ) / ( nadir_obj3 - utopia_obj3 ), 2, weight = W3 )

case 2:

    div_obj1 = 1/( nadir_obj1 - utopia_obj1 )
    div_obj2 = 1/( nadir_obj2 - utopia_obj2 )
    div_obj3 = 1/( nadir_obj3 - utopia_obj3 )

    m.setObjective( W1*( sum([num_stab*a*b for a,b in zip(tot_char_curr,WEPV)]) - utopia_obj1 )*div_obj1
                  + W2*( num_stab*sum(cap_loss_array) - utopia_obj2 )*div_obj2
                  + W3*( -1*num_stab*(sum([a*b for a,b in zip(tot_char_curr,weights)])) - utopia_obj3 )*div_obj3, GRB.MINIMIZE )

     

    I know the reason for the last problem.

    "If weights [0,1,0] give you a better objective value for the first function than weights [1,0,0], then there is a problem somewhere."

I am doing a sequence of optimizations, and the results of each past optimization update the states used in future optimizations, so I get these unexpected results.

    Thanks, :)

     

  • Mario Ruthmair
Gurobi Staff

    Hi Jose,

Your two ways of defining a single weighted objective function should be equivalent, and the usage of the methods seems to be correct; a small sketch after the list below illustrates this equivalence.
However, I can spot two differences between your code snippets:

    1. In case 2 and objective 1, the multiplier "vbat" is missing.
2. In case 1 and objective 3, the multiplier "-1" that turns the maximization of that objective into a minimization is missing.
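To double-check the equivalence, here is a minimal sketch on a made-up two-variable model (not your model or data) comparing the two formulations; the blended value recomputed from the individual ObjNVal values should match the single-objective ObjVal, up to solver tolerances:

    import gurobipy as gp
    from gurobipy import GRB

    W1, W2 = 0.7, 0.3

    # Variant A: blended multi-objective via setObjectiveN with weights.
    mA = gp.Model()
    x = mA.addVar(name="x")
    y = mA.addVar(name="y")
    mA.addConstr(x + 2*y >= 4)
    mA.ModelSense = GRB.MINIMIZE
    mA.setObjectiveN(x + y, 0, weight=W1)
    mA.setObjectiveN(3*x - y, 1, weight=W2)
    mA.optimize()

    # Variant B: the same weighted sum passed as a single objective.
    mB = gp.Model()
    u = mB.addVar(name="u")
    v = mB.addVar(name="v")
    mB.addConstr(u + 2*v >= 4)
    mB.setObjective(W1*(u + v) + W2*(3*u - v), GRB.MINIMIZE)
    mB.optimize()

    # Recompute the blended value of variant A from its unweighted parts.
    mA.params.ObjNumber = 0
    blended = W1 * mA.ObjNVal
    mA.params.ObjNumber = 1
    blended += W2 * mA.ObjNVal
    print(blended, mB.ObjVal)  # should coincide up to tolerances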

    Best regards,
    Mario

  • Jose peeterson

    Hi Mario,

Thanks, I have made those two changes and now both cases behave in the same way.

    Best regards,

    Peeterson.

