
linprog in C++

In progress

Comments

7 comments

  • Jaromił Najman
    • Gurobi Staff

    Translating the linprog.m example into C++ is not possible because the linprog interface is not available in C++, and Gurobi also has no Matrix API for C++. Thus, the code snippet you provided cannot be translated directly into C++ either.

    C++ examples that might help you switch from MATLAB to C++ for your particular case are dense_c++.cpp, diet_c++.cpp, facility_c++.cpp, and workforce1_c++.cpp.

  • Zohar Levi

    dense_optimize() in dense_c++.cpp seems close enough and requires only minor adjustments, thanks.

  • Zohar Levi

    The mechanism for adding constraints is incredibly slow.

    Summing the matrix coefficients takes 0.02 sec.

    Adding the constraints in Gurobi takes 7 sec.

    Optimizing the model in Gurobi takes 5 sec.

    // Snippet: A is an Eigen::SparseMatrix<double, RowMajor>, b holds the
    // right-hand sides, and nvars is the number of columns of A.
    GRBEnv genv;
    GRBModel gmod = GRBModel( genv );
    GRBVar *gvar = gmod.addVars( nullptr, nullptr, nullptr, nullptr, nullptr, nvars );

    double sum = 0; // test Eigen iteration speed for comparison
    for ( int r = 0; r < A.outerSize(); ++r ) {
        GRBLinExpr lhs = 0;
        for ( SparseMatrix<double, RowMajor>::InnerIterator it( A, r ); it; ++it ) {
            lhs += it.value() * gvar[it.col()]; // one term at a time
            sum += it.value();
        }
        gmod.addConstr( lhs, GRB_LESS_EQUAL, b[r] );
    }
  • Jaromił Najman
    • Gurobi Staff

    Hi Zohar,

    As described in the documentation, using the \(\texttt{+=}\) operator is not the most efficient way. You should try using the addTerms method instead, cf. also How do I efficiently add constraints using std::vector in C++?
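
    A rough sketch of what that could look like for the loop in your snippet (untested; it assumes the same \(\texttt{A}\), \(\texttt{b}\), \(\texttt{gvar}\), and \(\texttt{gmod}\) as in your code and collects each row into buffers before a single addTerms call):

    // (requires #include <vector>)
    std::vector<double> coeffs;
    std::vector<GRBVar> vars;

    for ( int r = 0; r < A.outerSize(); ++r ) {
        coeffs.clear();
        vars.clear();
        for ( SparseMatrix<double, RowMajor>::InnerIterator it( A, r ); it; ++it ) {
            coeffs.push_back( it.value() );
            vars.push_back( gvar[it.col()] );
        }
        GRBLinExpr lhs;
        lhs.addTerms( coeffs.data(), vars.data(), (int)coeffs.size() );
        gmod.addConstr( lhs, GRB_LESS_EQUAL, b[r] );
    }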

    If this is still too slow, could you please provide data for \(\texttt{A}\) in your example to make it reproducible?

    Best regards, 
    Jaromił

  • Zohar Levi

    Using addTerms instead of += to add one term at a time cut the time in half, but not by an order of magnitude (e.g., from 2 sec to 1 sec, rather than down to 0.02 sec).

    Calling addTerms once with all of a constraint's terms didn't make much difference either, since there are many sparse constraints. It also raises the question of how to do that: Gurobi expects an array of variables, and creating such an array (duplicating the vars) is costly, as opposed to an array of pointers. For the same reason, I'm not sure how to use addConstrs efficiently.

    Anyway, the time is split almost evenly between a call to addTerms and a call to addConstr.

    I would have expected to be able to hand you my system in the form of a CRS representation of a sparse matrix. Meaning, the build time shouldn't be much more than the time it takes to iterate over my matrix (0.02 sec).
  • Jaromił Najman
    • Gurobi Staff

    Calling addTerms once with all of a constraint's terms didn't make much difference either, since there are many sparse constraints. It also raises the question of how to do that: Gurobi expects an array of variables, and creating such an array (duplicating the vars) is costly, as opposed to an array of pointers. For the same reason, I'm not sure how to use addConstrs efficiently.

    Unfortunately, this is a current limitation of the C++, C#, and Java APIs, which do not support Matrix operations yet.

    Could you share a minimal working example where you experience such a long model-building time in C++?

    If possible, you could try Gurobi's Python Matrix API, cf. MLinExpr and the matrix1.py example.

    Alternatively, you could try calling the C API from your C++ code. The GRBloadmodel function should be the fastest way to build your model, although it is not very intuitive or easy to use.
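
    For illustration only, a rough sketch of a GRBloadmodel call from C++ on a tiny made-up LP (minimize x + y subject to x + 2y <= 4 and 3x + y <= 6); the constraint matrix is passed column-wise in CSC form via the vbeg/vlen/vind/vval arrays:

    #include "gurobi_c.h"

    int main( void )
    {
        GRBenv   *env   = NULL;
        GRBmodel *model = NULL;

        /* Constraint matrix in CSC form: one block of nonzeros per variable (column). */
        int    vbeg[]  = { 0, 2 };              /* start of each column in vind/vval */
        int    vlen[]  = { 2, 2 };              /* nonzeros per column               */
        int    vind[]  = { 0, 1, 0, 1 };        /* constraint (row) indices          */
        double vval[]  = { 1.0, 3.0, 2.0, 1.0 };
        double obj[]   = { 1.0, 1.0 };
        char   sense[] = { GRB_LESS_EQUAL, GRB_LESS_EQUAL };
        double rhs[]   = { 4.0, 6.0 };
        char   pname[] = "lp";

        int error = GRBloadenv( &env, NULL );
        if ( error ) return error;

        /* NULL for lb/ub/vtype gives the defaults: 0 <= x, continuous variables. */
        error = GRBloadmodel( env, &model, pname, 2, 2, GRB_MINIMIZE, 0.0,
                              obj, sense, rhs, vbeg, vlen, vind, vval,
                              NULL, NULL, NULL, NULL, NULL );
        if ( !error ) error = GRBoptimize( model );

        if ( model ) GRBfreemodel( model );
        GRBfreeenv( env );
        return error;
    }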

    Best regards, 
    Jaromił

  • Zohar Levi

    The C interface supports a CSC sparse representation, which should do the trick (C/C++ are all the same to me).

    As for a working example: I have a 3M × 30M matrix with ~10 nonzeros per row. If you use addTerms in a double loop of 3e6 × 10, you could probably reproduce my issue; a rough sketch with synthetic data is below.
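
    Something along these lines (synthetic coefficients and column indices, dimensions as above; one addTerms call per nonzero, as in my real code):

    #include "gurobi_c++.h"

    int main()
    {
        const int nrows = 3000000;   // ~3M constraints
        const int ncols = 30000000;  // ~30M variables
        const int nnzPerRow = 10;

        GRBEnv   genv;
        GRBModel gmod( genv );
        GRBVar  *gvar = gmod.addVars( nullptr, nullptr, nullptr, nullptr, nullptr, ncols );

        for ( int r = 0; r < nrows; ++r ) {
            GRBLinExpr lhs = 0;
            for ( int k = 0; k < nnzPerRow; ++k ) {
                double coeff = 1.0;                                              // synthetic value
                GRBVar  var  = gvar[( (long long)r * nnzPerRow + k ) % ncols];   // synthetic column
                lhs.addTerms( &coeff, &var, 1 );                                 // one term at a time
            }
            gmod.addConstr( lhs, GRB_LESS_EQUAL, 1.0 );
        }

        delete[] gvar;
        return 0;
    }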

