
Logarithmic Optimization Problem

Answered

5 comments

  • David Torres Sanchez
Gurobi Staff

    Hi Xiao,

Can you try using the latest version (v11.0.1) and setting the parameter FuncNonlinear=1 (instead of the ones you are currently setting)?
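
    For reference, here is a minimal gurobipy sketch of where that parameter goes. This is not your model (which is not shown here), just an assumed toy example with a single log constraint:

    import gurobipy as gp
    from gurobipy import GRB

    # FuncNonlinear=1 asks Gurobi 11 to handle general function constraints such
    # as log() with its global nonlinear solver instead of the default
    # piecewise-linear approximation.
    model = gp.Model("log_sketch")
    x = model.addVar(lb=1e-6, ub=10.0, name="x")   # argument of the log
    y = model.addVar(lb=-GRB.INFINITY, name="y")   # y = log(x)
    model.addGenConstrLog(x, y, name="log_constr")
    model.Params.FuncNonlinear = 1
    model.setObjective(y, GRB.MAXIMIZE)
    model.optimize()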

    Cheers, 
    David

  • XIAO ROU YU
    First Comment
    First Question

    Hi David,

    I am already using the latest version (v11.0.1) and have set the parameter

    model.Params.FuncNonlinear = 1

    but I still get the same result...

    I wonder if my original code is wrong?

    Here's my output:

    I noticed that there is a gap of 0.0011%.

    Thank you!
  • David Torres Sanchez
    Gurobi Staff

    Hi Xiao,

    Thanks for the extra output. When creating variables, the default lower bound is 0, so that solution is not allowed.
    If we allow the \(\texttt{y}\) variables to be negative:

    y1 = model.addVar(name="y1", lb=-1000)
    y2 = model.addVar(name="y2", lb=-1000)  # Only this one actually needs the negative bound

    We get (roughly) the expected result:

    Best objective 1.933075925509e-03, best bound 1.933093487611e-03, gap 0.0009%
    optimal value w: [[9.99996442e-01], [3.55767355e-06]]
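
    As an aside, if a variable really should be free in both directions, \(\texttt{-GRB.INFINITY}\) avoids picking an arbitrary finite bound like -1000 (a sketch reusing the names from the snippet above):

    from gurobipy import GRB

    y2 = model.addVar(name="y2", lb=-GRB.INFINITY)  # truly unbounded below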

    Cheers, 
    David

  • XIAO ROU YU
    First Comment
    First Question

    Hi David,

    Thanks a lot!

    "When creating variables, the default lower bound is 0."

    This is very helpful for me.

    But I still want to ask: why is there a nonzero gap?

  • David Torres Sanchez
    Gurobi Staff

    Hi Xiao,

    That gap is smaller than the default MIPGap tolerance (1e-4), so Gurobi stops and reports the solution as optimal.

    You are right, though: setting MIPGap explicitly to 0, we get:

    Best objective 1.933093486649e-03, best bound 1.933093486649e-03, gap 0.0000%
    optimal value w: [[1.], [0.]]
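
    For reference, a short sketch of tightening that tolerance (assuming \(\texttt{model}\) is your gurobipy model object):

    # Gurobi stops once |best bound - best objective| / |best objective| falls
    # below MIPGap; the default is 1e-4, i.e. 0.01%.
    model.Params.MIPGap = 0.0
    model.optimize()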

    Cheers, 
    David

