
Gurobi Syntax Error - Non-UTF-8 code

Answered

11 comments

  • Riley Clement
    Gurobi Staff

    Hi Fabio,

    There are a few reasons why we could be seeing this error.

     

    I've had a quick look at your license file and I can't see any problematic characters in there, so if there is an issue with the "á" character then it's not the contents of the license file - but it could be the location.  I'd try relocating your Gurobi license to another place on your hard drive where the full path does not contain any special characters.   See Where do I place the Gurobi license file (gurobi.lic)?

     

    The issue could also be arising due to Pyomo.  We came across a similar problem (if not the same one) several months ago, and the solution was in how the solver is specified to Pyomo:

    https://support.gurobi.com/hc/en-us/community/posts/14858984752785-ApplicationError-Solver-gurobi-did-not-exit-normally
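    For reference, the change from that post amounts to declaring the solver through Gurobi's Python (gurobipy) interface, so that Pyomo does not go through a model file on disk whose path may contain special characters.  A minimal sketch (it assumes a Pyomo model object named model, built elsewhere):

    from pyomo.environ import SolverFactory

    # Use the in-memory gurobipy interface instead of the file-based "shell" interface
    solver = SolverFactory("gurobi", solver_io="python")
    # results = solver.solve(model, tee=True)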

     

    Lastly, if neither of these solutions works, then can I suggest using Pyomo to write the model to an MPS file and then running Gurobi from the command line.  The success (or failure) of this will certainly help narrow down where the issue lies.
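    For example, a sketch of that workflow (the file names are placeholders, and model is your Pyomo model object):

    # Write the model out of Pyomo; symbolic_solver_labels keeps the original
    # Pyomo component names in the file rather than generic x1, x2, ...
    model.write("model.mps", io_options={"symbolic_solver_labels": True})

    # Then, from a terminal, solve it with the Gurobi command line, e.g.
    #     gurobi_cl ResultFile=model.sol model.mps
    # where ResultFile asks gurobi_cl to write the solution (variable values) to model.sol.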

    - Riley

  • Fábio Castro

    This link - https://support.gurobi.com/hc/en-us/community/posts/14858984752785-ApplicationError-Solver-gurobi-did-not-exit-normally

    seemed to work, but only on smaller problems unfortunately...

     

    When solving larger problems, either the following error occurs:

    Or it just ends at "MemoryError".

    Now, the way I know this is unusual is that the exact same code ran perfectly on 16GB of RAM, and it now does this on 32GB.

     

    This is the message when it gets a MemoryError:

    Traceback (most recent call last):

      File C:\ProgramData\anaconda3\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
        exec(code, globals, locals)

      File c:\scenarioreducer-main\co2_27_08.py:3181
        results = solver.solve(model, tee=True)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\direct_solver.py:130 in solve
        self._presolve(*args, **kwds)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\direct_solver.py:68 in _presolve
        self._set_instance(model, kwds)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\gurobi_direct.py:472 in _set_instance
        self._add_block(model)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\gurobi_direct.py:490 in _add_block
        DirectOrPersistentSolver._add_block(self, block)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\direct_or_persistent_solver.py:250 in _add_block
        self._set_objective(obj)

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\gurobi_direct.py:640 in _set_objective
        gurobi_expr, referenced_vars = self._get_expr_from_pyomo_expr(

      File ~\AppData\Roaming\Python\Python311\site-packages\pyomo\solvers\plugins\solvers\gurobi_direct.py:315 in _get_expr_from_pyomo_expr
        repn = generate_standard_repn(expr, quadratic=True)

      File pyomo\\repn\\standard_repn.pyx:385 in pyomo.repn.standard_repn.generate_standard_repn

      File pyomo\\repn\\standard_repn.pyx:1181 in pyomo.repn.standard_repn._generate_standard_repn

      File pyomo\\repn\\standard_repn.pyx:510 in pyomo.repn.standard_repn._collect_sum

      File pyomo\\repn\\standard_repn.pyx:1148 in pyomo.repn.standard_repn._collect_standard_repn

      File pyomo\\repn\\standard_repn.pyx:1073 in pyomo.repn.standard_repn._collect_linear

    MemoryError

     

    Thank you,

    Fábio

  • Riley Clement
    Gurobi Staff

    Hi Fábio,

    In order to say more about the memory error we'd probably need to see a Gurobi log.  As suggested earlier, you can always write the problem to MPS from Pyomo and then run the model via the Gurobi command line, which will at least rule out anything on the Pyomo end of things.

    We do have some guidelines on reducing memory consumption detailed here:
    How do I avoid an out-of-memory condition?

    Another consideration is how much RAM is being occupied by other processes.  It looks like you're on Windows, so Resource Monitor would be useful here.

    Lastly, the nature of the solve can change as you vary parameters or input data.  It is quite possible for the solver to reach an optimal solution without needing a big branch-and-bound tree, yet in another solve with only small differences a large branch-and-bound tree is required (which is often where the RAM is maxed out).

    - Riley

     

  • Fábio Castro

    Hi there, I've tried writing the problem to MPS and solving it, and it did work; it also solved the problem much faster than normal. However, maybe due to my own ignorance, I don't know how to access individual variable values, and the decision variables are very necessary for me. Since Pyomo managed to write the file just fine, and it solved through gurobi_cl, what could the problem be when solving directly, without saving to a separate file? Could the issue be with how Gurobi is communicating with the IDE? The log is as follows:

    (base) C:\Windows\System32>gurobi_cl C:\ScenarioReducer-main/TestWorkstation.lp
    Set parameter Username
    Set parameter LogFile to value "gurobi.log"
    Academic license - for non-commercial use only - expires 2024-10-01
    Using license file C:\Users\FßbioDanielDaSilvaCa\gurobi.lic

    Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
    Copyright (c) 2023, Gurobi Optimization, LLC

    Read LP format model from file C:\ScenarioReducer-main/TestWorkstation.lp
    Reading time = 31.50 seconds
    x1: 8987465 rows, 4385219 columns, 110079270 nonzeros

    CPU model: 13th Gen Intel(R) Core(TM) i9-13900KF, instruction set [SSE2|AVX|AVX2]
    Thread count: 24 physical cores, 32 logical processors, using up to 32 threads

    Optimize a model with 8987465 rows, 4385219 columns and 110079270 nonzeros
    Model fingerprint: 0x819d2e8e
    Variable types: 4190465 continuous, 194754 integer (65158 binary)
    Coefficient statistics:
      Matrix range     [1e-05, 7e+03]
      Objective range  [8e-02, 1e+06]
      Bounds range     [1e+00, 1e+00]
      RHS range        [4e-05, 2e+02]
    Presolve removed 0 rows and 0 columns (presolve time = 7s) ...
    Presolve removed 4343060 rows and 1 columns (presolve time = 13s) ...
    Presolve removed 8769342 rows and 1 columns (presolve time = 17s) ...
    Presolve removed 8769344 rows and 4316983 columns (presolve time = 22s) ...
    Presolve removed 8791366 rows and 4316983 columns (presolve time = 26s) ...
    Presolve removed 8920843 rows and 4317685 columns (presolve time = 31s) ...
    Presolve removed 8930839 rows and 4330200 columns
    Presolve time: 34.41s
    Presolved: 56626 rows, 55019 columns, 190576 nonzeros
    Variable types: 53266 continuous, 1753 integer (613 binary)
    Deterministic concurrent LP optimizer: primal and dual simplex
    Showing first log only...


    Root simplex log...

    Iteration    Objective       Primal Inf.    Dual Inf.      Time
           0    1.8462910e+07   9.054557e+02   1.227752e+10     44s
    Concurrent spin time: 0.01s

    Solved with dual simplex

    Root simplex log...

    Iteration    Objective       Primal Inf.    Dual Inf.      Time
       33380    1.3756903e+07   0.000000e+00   0.000000e+00     45s

    Root relaxation: objective 1.375690e+07, 33380 iterations, 0.96 seconds (2.11 work units)
    Total elapsed time = 45.03s

        Nodes    |    Current Node    |     Objective Bounds      |     Work
     Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

         0     0 1.3757e+07    0  134          - 1.3757e+07      -     -   45s
         0     0 1.3850e+07    0  315          - 1.3850e+07      -     -   45s
         0     0 1.3853e+07    0  333          - 1.3853e+07      -     -   46s
         0     0 1.3853e+07    0  334          - 1.3853e+07      -     -   46s
         0     0 1.3967e+07    0  315          - 1.3967e+07      -     -   46s
         0     0 1.3978e+07    0  320          - 1.3978e+07      -     -   47s
         0     0 1.3979e+07    0  327          - 1.3979e+07      -     -   47s
         0     0 1.3979e+07    0  325          - 1.3979e+07      -     -   47s
         0     0 1.4937e+07    0  318          - 1.4937e+07      -     -   48s
         0     0 1.4950e+07    0  323          - 1.4950e+07      -     -   49s
         0     0 1.4955e+07    0  340          - 1.4955e+07      -     -   49s
         0     0 1.4955e+07    0  342          - 1.4955e+07      -     -   49s
         0     0 1.4959e+07    0  349          - 1.4959e+07      -     -   50s
         0     0 1.4959e+07    0  309          - 1.4959e+07      -     -   50s
         0     0 1.4961e+07    0  369          - 1.4961e+07      -     -   51s
         0     0 1.4961e+07    0  375          - 1.4961e+07      -     -   51s
         0     0 1.4966e+07    0  360          - 1.4966e+07      -     -   52s
         0     0 1.4966e+07    0  360          - 1.4966e+07      -     -   52s
         0     2 1.4966e+07    0  360          - 1.4966e+07      -     -   55s
        55    79 1.5058e+07    6  314          - 1.4980e+07      -  1118   60s
       205   236 1.5068e+07   11  266          - 1.4984e+07      -  1036   65s
       383   424 1.5180e+07   18  149          - 1.4984e+07      -   967   70s
    H  494   501                    1.527726e+07 1.4984e+07  1.92%   885   88s
       610   546 1.5001e+07    9  206 1.5277e+07 1.4987e+07  1.90%   798   90s
    H  653   587                    1.508468e+07 1.4987e+07  0.65%   887   94s
       692   544     cutoff   13      1.5085e+07 1.4987e+07  0.65%   876   95s
       784   553     cutoff   15      1.5085e+07 1.4987e+07  0.65%   836  124s
    H  785   553                    1.502490e+07 1.4987e+07  0.25%   835  124s
       818   427     cutoff   15      1.5025e+07 1.4987e+07  0.25%   828  125s

    Cutting planes:
      Learned: 1
      Gomory: 5
      Cover: 25
      Implied bound: 4
      MIR: 1021
      StrongCG: 19
      Flow cover: 9255
      Flow path: 1529
      GUB cover: 31
      Inf proof: 2
      Zero half: 5
      Network: 143
      Relax-and-lift: 1532

    Explored 1437 nodes (765927 simplex iterations) in 130.35 seconds (144.31 work units)
    Thread count was 32 (of 32 available processors)

    Solution count 3: 1.50249e+07 1.50847e+07 1.52773e+07

    Optimal solution found (tolerance 1.00e-04)
    Best objective 1.502490318279e+07, best bound 1.502442340221e+07, gap 0.0032%





    Regarding the out-of-memory error, while I appreciate the guidelines provided, it does not seem to make sense to me, since the same problem with the same solver options solved with 16GB of RAM but not with the current 32GB.

    The RAM occupied by other processes is negligible, not surpassing 1GB for all processes. 

     

    Thank you,

    Fábio

  • Riley Clement
    Gurobi Staff

    Hi Fábio,

    I've tried writing the problem to MPS and solving it, and it did work; it also solved the problem much faster than normal

    This is interesting.  In my previous answer I alluded to performance variability, and the idea that small changes to the way a model is run can lead to different solution paths, i.e. how the solver arrives at the final solution.  A rigorous test of the hypothesis "the model solves faster on the Gurobi command line than through Pyomo" would require running each setup many times with different values of the Seed parameter.

    When using Pyomo you are building a model, and then the solver is declared, which then needs to load the model.  So the model exists in memory twice - once in Pyomo and again in the solver.  This overhead does not exist when running the MPS file from the command line.  I can't comment on how efficiently Pyomo uses memory, but the fact that the Gurobi command line was able to solve the model suggests that Pyomo could be the culprit.  Again, I would be using Resource Monitor while Pyomo is executing and verifying the memory consumption.

    Regarding the out-of-memory error, while I appreciate the guidelines provided, it does not seem to make sense to me, since the same problem with the same solver options solved with 16GB of RAM but not with the current 32GB.

    Just to reiterate, if you solve your model on a different machine, the solution path will almost certainly be different, even when running with the same value of Seed.  If you were to try different values of Seed, you may find that 30% of the time the model solves quickly with under 16GB memory required, and 30% of the time it requires much more than the 32GB.  If you were to consistently run the same model with the same Seed value on both machines, then each machine would solve exactly the same way each time and a comparison would be flawed.  The only way to make a sound conclusion is to run the model across different values of Seed, and then compare the overall performance between machines.

    Now if you were to run the model across 10 seeds and find that all 10 instances on a 16GB machine were successful, but all 10 instances on a 32GB machine failed, then this would be a little surprising, but the first thing I would do is check the solver logs and see how they differ (especially in the number of threads used).
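    As a sketch of what such a seed sweep could look like from Pyomo (the seed values are arbitrary, and model refers to the model built in your own code):

    from pyomo.environ import SolverFactory

    solver = SolverFactory("gurobi", solver_io="python")
    for seed in [0, 1, 2, 3, 4]:
        solver.options["Seed"] = seed  # Gurobi's random seed parameter
        results = solver.solve(model, tee=True)
        print(seed, results.solver.termination_condition)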

    I hope this is some help.

    - Riley

     

  • Fábio Castro

    Hi there!

     

    So, for the last few days I've been testing several solver settings to see if any of them help.
    While before the error was not always consistent, now the error is always the following:

    As you can see, I've even set the MIPGap to 5, just to check whether or not it could find a solution, but nothing... The only way I've been able to solve the model is by saving it to an LP file and then solving it using gurobi_cl... But that only gets me the objective, and I can't access the decision variables.

    I've also attempted random Seed numbers, all leading to this same error. I've tried 20 different seeds (randomly generated numbers).


    With this machine I've tried several Thread limits (32, 16, 8, 4, 2, ...), leading to nothing. On the 16GB machine I had limited it to 8, which still did not work here.

    Not quite sure what I can do...

    Thank you,

    Fábio

  • Riley Clement
    Gurobi Staff

    Hi Fábio,

    There are at least a couple of things being layered on top of Gurobi here, including Pyomo (Jupyter notebooks?) and Spyder.  I think we need to switch from looking for solutions to focusing on diagnostics.

    In Python, the psutil package provides the ability to query RAM:

    import psutil

    # Query system-wide memory statistics (total, available, used, percent, ...)
    vm = psutil.virtual_memory()
    print(vm)

    If you're using interactive Python, such as notebooks, then first reset the kernel.  Then I would appeal to this virtual_memory function at various points in your code (see the sketch after this list):

    - first cell
    - after reading in any data sources
    - immediately before optimization
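    A small helper along those lines (the function name report_memory is just for illustration):

    import psutil

    def report_memory(label):
        # Print available and used RAM (in GiB) at a labelled checkpoint
        vm = psutil.virtual_memory()
        print(f"{label}: available={vm.available / 2**30:.1f} GiB, "
              f"used={vm.used / 2**30:.1f} GiB ({vm.percent}% used)")

    # report_memory("first cell")
    # report_memory("after reading data")
    # report_memory("immediately before optimization")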

    I'd also suggest restarting your computer if it's been a while; perhaps Pyomo has caused a memory leak.

    If you're using Jupyter notebooks have you tried running your Python code as a script in the terminal?

    - Riley

  • Fábio Castro

    Hello Riley,

    I appreciate your help very much. I suppose I was eager to find a solution, since it seemed unrealistic for the exact same problem to work on my previous machine with 16GB of RAM in the exact same software (Spyder IDE, Pyomo, Gurobi) and not work on this new machine with 32GB. I understand that the seed might be the reason, and I'm currently testing that (so far 3 tests on each machine with equal seeds, and it only works on the 16GB machine). The only change in code I have from before is the result of the first interaction in this post, changing this:

    solver = SolverFactory("gurobi")
    to this:
    solver = SolverFactory("gurobi", solver_io="python")

    I'm not sure if this could have any implications.

     

    As for the memory tracking, first cell: svmem(total=34163646464, available=26146443264, percent=23.5, used=8017203200, free=26146443264)

    After loading data, before pyomo building: svmem(total=34163646464, available=25988206592, percent=23.9, used=8175439872, free=25988206592)

    Immediately after all the constraints, before solving with Gurobi: svmem(total=34163646464, available=16987823412, percent=50.3, used=17175823052, free=16987823412)

    So building the Pyomo model takes a significant portion of the memory, but this is the same on the 16GB machine, which solves, although it takes a significant amount of time to do so.

    I also tried to measure immediately after the command to solve, but it ends up at the same error. While this is happening I constantly check the Windows Resource Monitor. The process is pretty much always the same: memory constantly increases until about 95%-99% of the maximum, and then it stays there for close to an hour (oscillating between 95% and 99%) before eventually crashing, leading to the error from the previous post. This pattern of reaching 95%-99% memory and staying there is similar on my previous machine, but there it eventually just solves and does not lead to an error.

     

    I apologize for the overload of information. I'm considering reducing the problem size in some way, but I felt that it should be able to work, especially since it worked previously on an inferior machine...

    I appreciate your willingness to help a bunch!

    I've also restarted the kernels, and the computer itself, to no avail.

     

    Best regards,

    Fábio

  • Riley Clement
    Gurobi Staff

    Hi Fábio,

    I'm currently working on a memory issue reported in a commercial ticket, where setting the parameter Method=1 helped.  Can you try this, as well as limiting the Threads parameter to something like 8 or less?  The 32 threads on your machine may not be that useful, regardless of the memory error.
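    With the Pyomo "gurobi" interface, these parameters can be passed through the solver's options (a sketch; solver and model are the objects from your own code):

    solver.options["Method"] = 1   # 1 = dual simplex
    solver.options["Threads"] = 8  # cap the number of threads
    results = solver.solve(model, tee=True)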

    - Riley

     

  • Fábio Castro

    Hello Riley,

    Ultimately that did not work either.


    I did observe something strange, though. I eventually decided to reduce the number of scenarios I was using, down to 32 from 64, but also added "stages" of investment, as part of a multi-period investment problem I'm working on. It turns out that, despite the total number of variables and constraints being 1.5x that of the original unsolvable problem with 64 scenarios, it solves, and quite fast...

    I figured it could be some specific scenario causing the issue, so I tried with the first 32 and the last 32, but both worked perfectly fine. I'm out of ideas as to what could possibly cause issues with the original model... The issue is not with the machine, since it is solving larger problems with ease. It is not with the code of the model, because it worked on a previous machine with less memory, so I'm really all out of ideas, haha.

    I appreciate your help though; ultimately I decided that just 32 scenarios were enough and not a hindrance to solution quality.

    You were very caring and helpful throughout the discussion. I appreciate all the help. 

     

    Best regards,

    Fábio

  • Riley Clement
    Gurobi Staff

    Thanks for the kind words, Fábio. Sorry we couldn't get to the bottom of it, but I'm glad to hear you've found a compromise.  Best of luck.

    - Riley

