
"Numerical trouble encountered" error after a few iterations when running an SVM Gurobi model for multiple runs


Comments

8 comments

  • Jaromił Najman
    Gurobi Staff

    Hi Manju,

    All optimization runs end up with a "Sub-optimal termination". It might be difficult numerics or it might be something else. Unfortunately, I cannot tell from just looking at the log.

    Could you please share the model with numerical trouble and one of those "Sub-optimal" ones? Note that uploading files in the Community Forum is not possible but we discuss an alternative in Posting to the Community Forum.

    Best regards, 
    Jaromił

  • Manju Bala
    Collaborator
    Curious

    Hi,

    The code for the model I am working on is given in my post and reposted at the end of this comment. I did not get a sub-optimal solution; this numerical exception occurred while optimizing. I am also sharing the model via this Drive link:

    https://drive.google.com/drive/folders/1B8JGiDE2KoS8cY3WpPB9u7OBgNcJGWgJ?usp=drive_link  

    The model's .ipynb file is at that location. I am getting the output below:

    Gurobi Optimizer version 10.0.2 build v10.0.2rc0 (win64)
    
    CPU model: 12th Gen Intel(R) Core(TM) i7-1250U, instruction set [SSE2|AVX|AVX2]
    Thread count: 10 physical cores, 12 logical processors, using up to 12 threads
    
    Optimize a model with 0 rows, 3003 columns and 0 nonzeros
    Model fingerprint: 0x7dd0c3c7
    Model has 3000 quadratic constraints
    Coefficient statistics:
      Matrix range     [0e+00, 0e+00]
      QMatrix range    [2e-01, 2e-01]
      QLMatrix range   [5e-04, 6e+00]
      Objective range  [1e+00, 1e+00]
      Bounds range     [0e+00, 0e+00]
      RHS range        [0e+00, 0e+00]
      QRHS range       [1e+00, 1e+00]
    Presolve time: 0.08s
    Presolved: 11998 rows, 15001 columns, 32996 nonzeros
    Presolved model has 3000 second-order cone constraints
    Ordering time: 0.00s
    
    Barrier statistics:
     Dense cols : 4
     Free vars  : 1
     AA' NZ     : 4.199e+04
     Factor NZ  : 7.800e+04 (roughly 12 MB of memory)
     Factor Ops : 5.219e+05 (less than 1 second per iteration)
     Threads    : 1
    
                      Objective                Residual
    Iter       Primal          Dual         Primal    Dual     Compl     Time
       0   3.01748748e+04  0.00000000e+00  1.30e+00 1.93e+00  3.20e+00     0s
       1   3.59563350e+03 -5.50728664e+02  1.66e-02 1.47e-01  3.12e-01     0s
       2   9.95917980e+02 -5.39043662e+01  5.53e-03 4.55e-02  8.30e-02     0s
       3   5.33411213e+02  1.99624769e+02  5.70e-03 4.66e-03  2.35e-02     0s
       4   3.39764049e+02  2.72322412e+02  1.99e-02 1.64e-04  4.53e-03     0s
       5   2.85469453e+02  2.78645885e+02  4.44e+00 1.72e-05  4.54e-04     1s
       6   2.80946069e+02  2.79143250e+02  3.14e+01 1.49e-05  1.20e-04     1s
       7   2.80574154e+02  2.79968552e+02  3.05e+01 1.48e-05  4.03e-05     1s
       8   2.80157563e+02  2.79986493e+02  4.78e+00 1.01e-05  1.14e-05     1s
       9   2.80156807e+02  2.79987354e+02  4.87e+00 1.31e-05  1.13e-05     1s
      10   2.80156807e+02  2.79987356e+02  1.02e+01 1.33e-05  1.13e-05     1s
      11   2.80156807e+02  2.79987358e+02  6.89e+00 1.33e-05  1.13e-05     1s
      12   2.80156807e+02  2.79987385e+02  6.88e+00 1.34e-05  1.13e-05     1s
      13   2.80156807e+02  2.79987385e+02  6.88e+00 1.34e-05  1.13e-05     2s
      14   1.99400000e+14  0.00000000e+00  1.79e+11 1.00e+03  2.53e+13     2s
      15   3.21297048e+12  0.00000000e+00  2.60e+08 1.00e+04  6.68e+11     2s
      16   2.79820962e+12 -1.53968976e+06  2.18e+08 8.40e+03  5.19e+11     2s
      17   8.11840880e+11 -3.73341484e+06  4.75e+07 1.83e+03  4.76e+10     2s
      18   7.55793741e+08 -1.61202636e+04  3.64e+05 1.63e+00  1.45e+05     3s
      19   4.57633956e+07 -1.48019122e+04  7.18e+05 3.19e-01  5.63e+03     3s
      20   3.17018585e+06 -1.48200250e+04  1.64e+07 2.90e-01  5.42e+02     3s
      21   3.17018616e+06 -1.48191085e+04  1.32e+07 2.90e-01  7.21e+02     3s
      22   3.17018376e+06 -1.48192647e+04  6.25e+08 2.90e-01  9.23e+02     3s
      23   3.16881699e+06 -1.48196614e+04  1.58e+11 2.90e-01  5.64e+03     3s
      24   3.16882343e+06 -1.48191057e+04  1.58e+11 2.90e-01  2.16e+03     4s
      25   3.16882394e+06 -1.48191232e+04  1.58e+11 2.90e-01  9.78e+02     4s
      26   3.16920538e+06 -1.48191207e+04  9.98e+12 2.90e-01  3.88e+06     4s
      27   3.16920538e+06 -1.48191207e+04  9.98e+12 2.90e-01  3.88e+06     4s
      28   3.16920538e+06 -1.48191207e+04  9.98e+12 2.90e-01  3.88e+06     4s
    
    Barrier performed 28 iterations in 4.09 seconds (0.47 work units)
    Numerical trouble encountered
    
    
     
    The model is:

     

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB
    from sklearn.model_selection import train_test_split

    num_runs = 500  # Number of runs for calculating mean and standard deviation

    # Perform multiple runs to calculate out-of-sample error
    for _ in range(num_runs):
        # Set the random seed for reproducibility

        # Parameters for the multivariate normal distributions
        mean1 = 1.5 * np.ones(2)       # Mean for class 1
        mean2 = -1.5 * np.ones(2)      # Mean for class -1
        covariance_matrix = np.eye(2)  # Covariance matrix I

        # Parameters for the outlier distribution
        mean_outlier = np.zeros(2)                 # Mean vector of zeros
        covariance_matrix_outlier = 3 * np.eye(2)  # Covariance matrix 3I

        # Number of samples for each class
        num_samples = 2000

        # Generate samples for each class
        class1_samples = np.random.multivariate_normal(mean1, covariance_matrix, num_samples)
        class2_samples = np.random.multivariate_normal(mean2, covariance_matrix, num_samples)

        # Assign labels: +1 for class1, -1 for class2
        labels_class1 = np.ones(num_samples)
        labels_class2 = -np.ones(num_samples)

        # Concatenate the samples and labels
        X = np.vstack((class1_samples, class2_samples))
        y = np.concatenate((labels_class1, labels_class2))

        # Split the data into 75% training and 25% validation
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

        # Create a Gurobi model
        model = gp.Model("svm_model")

        # Define the number of features and samples
        n = X_train.shape[1]  # Number of features
        m = X_train.shape[0]  # Number of samples

        # Variables: coefficients (weights), bias, and slack variables
        w = model.addVars(n, lb=-GRB.INFINITY, ub=GRB.INFINITY, name="w")
        b = model.addVar(lb=-GRB.INFINITY, ub=GRB.INFINITY, name="b")
        xi = model.addVars(m, lb=0.0, name="xi")

        # Margin parameter and q* value
        rho = 0.2   # Adjust as needed
        q_star = 2  # Choose an appropriate q value

        # Objective: minimize the sum of slack variables
        obj = gp.quicksum(xi[i] for i in range(m))
        model.setObjective(obj, GRB.MINIMIZE)

        # Add the SVM constraints; the q*-norm expression does not depend
        # on i, so it is built once outside the loop
        q_norm_expression = gp.quicksum(w[j] ** q_star for j in range(n))
        for i in range(m):
            model.addConstr(
                y_train[i] * (gp.quicksum(w[j] * X_train[i, j] for j in range(n)) + b)
                - rho * q_norm_expression >= 1 - xi[i])

        # Optimize the model
        model.optimize()

        # Extract weight vector and bias (assumes a solution is available)
        weights = np.array([w[j].x for j in range(n)])
        print(weights)
        bias = b.x
    Thanks

    Manju

  • Manju Bala
    Collaborator
    Curious

    Hi,

    As I mentioned, the model ran for a few of the multiple iterations, then stopped in between and raised the exception. I do not know much about this; I am new to mathematical modelling, so I am trying different parameters, e.g., Presolve = 0, but I still get the same exception after a few iterations. If any other log is required, kindly let me know.

     

    Thanks

    Manju

  • Jaromił Najman
    Gurobi Staff

    Thank you for sharing, Manju.

    We will try to find out what is going on. In the meantime, you could try the BarHomogeneous parameter, which switches to a "more careful" variant of the Barrier algorithm. This should hopefully avoid running into the "numerical trouble" state.
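    In gurobipy, that might look like the following minimal sketch (the model construction here is illustrative):

    ```python
    import gurobipy as gp

    model = gp.Model("svm_model")
    # BarHomogeneous=1 forces the homogeneous self-dual Barrier variant,
    # which is somewhat slower but more robust on numerically difficult models
    model.Params.BarHomogeneous = 1
    ```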

    Best regards, 
    Jaromił

  • Manju Bala
    Collaborator
    Curious

    Hi,

    I tried the BarHomogeneous parameter with value 1 and ran the model. After 36 iterations, I got the same "Numerical trouble encountered" error.

    Below is the output: 

    Set parameter BarHomogeneous to value 1
    Gurobi Optimizer version 10.0.2 build v10.0.2rc0 (win64)
    
    CPU model: 12th Gen Intel(R) Core(TM) i7-1250U, instruction set [SSE2|AVX|AVX2]
    Thread count: 10 physical cores, 12 logical processors, using up to 12 threads
    
    Optimize a model with 0 rows, 3003 columns and 0 nonzeros
    Model fingerprint: 0x064c38cc
    Model has 3000 quadratic constraints
    Coefficient statistics:
      Matrix range     [0e+00, 0e+00]
      QMatrix range    [2e-01, 2e-01]
      QLMatrix range   [8e-04, 5e+00]
      Objective range  [1e+00, 1e+00]
      Bounds range     [0e+00, 0e+00]
      RHS range        [0e+00, 0e+00]
      QRHS range       [1e+00, 1e+00]
    Presolve time: 0.04s
    Presolved: 11998 rows, 15001 columns, 32996 nonzeros
    Presolved model has 3000 second-order cone constraints
    Ordering time: 0.00s
    
    Barrier statistics:
     Dense cols : 4
     Free vars  : 1
     AA' NZ     : 4.199e+04
     Factor NZ  : 7.800e+04 (roughly 12 MB of memory)
     Factor Ops : 5.219e+05 (less than 1 second per iteration)
     Threads    : 1
    
                      Objective                Residual
    Iter       Primal          Dual         Primal    Dual     Compl     Time
       0   3.10304613e+04  0.00000000e+00  1.34e+00 1.92e+00  3.28e+00     0s
       1   4.20459965e+03 -1.52414403e+02  1.95e-01 4.04e-01  3.76e-01     0s
       2   8.40279043e+02  8.16684904e+01  5.91e-02 1.66e-01  8.41e-02     0s
       3   3.78665494e+02  2.11733075e+02  4.63e-03 5.95e-02  2.42e-02     1s
       4   3.09693784e+02  2.37114671e+02  1.51e-03 3.61e-02  1.33e-02     1s
       5   2.90690699e+02  2.46578855e+02  7.85e-04 2.64e-02  9.32e-03     1s
       6   2.84121291e+02  2.50257742e+02  8.41e-02 2.23e-02  7.74e-03     1s
       7   2.79349256e+02  2.52821307e+02  3.65e-02 1.92e-02  6.58e-03     1s
       8   2.77733818e+02  2.54323098e+02  3.21e-02 1.73e-02  5.91e-03     2s
       9   2.74607803e+02  2.55560880e+02  2.23e-02 1.58e-02  5.33e-03     2s
      10   2.72517135e+02  2.57982662e+02  2.05e+00 1.24e-02  4.21e-03     2s
      11   2.71252588e+02  2.58752749e+02  3.08e+00 1.14e-02  3.83e-03     2s
      12   2.70451672e+02  2.59484948e+02  2.44e+00 1.04e-02  3.49e-03     2s
      13   2.69331010e+02  2.60783727e+02  6.78e-01 8.50e-03  2.85e-03     2s
      14   2.68992052e+02  2.61407637e+02  6.18e-01 7.63e-03  2.56e-03     3s
      15   2.68335446e+02  2.62149274e+02  1.43e+00 6.61e-03  2.21e-03     3s
      16   2.67868225e+02  2.63255793e+02  4.29e-01 4.94e-03  1.66e-03     3s
      17   2.67587023e+02  2.63708581e+02  3.35e-01 4.27e-03  1.43e-03     3s
      18   2.67353940e+02  2.64068937e+02  2.03e+00 3.67e-03  1.23e-03     3s
      19   2.67049573e+02  2.64856117e+02  1.77e+00 2.44e-03  8.25e-04     4s
      20   2.66775254e+02  2.65276023e+02  1.15e+00 1.73e-03  5.86e-04     4s
      21   2.66615919e+02  2.65469684e+02  3.31e-01 1.39e-03  4.69e-04     4s
      22   2.66434215e+02  2.65721917e+02  7.37e-02 9.41e-04  3.15e-04     4s
      23   2.66309794e+02  2.66001896e+02  3.40e+00 4.46e-04  1.48e-04     4s
      24   2.66290832e+02  2.66131101e+02  2.54e+00 2.05e-04  6.97e-05     4s
      25   2.66287060e+02  2.66153244e+02  4.72e+00 1.66e-04  5.65e-05     5s
      26   2.66285535e+02  2.66167576e+02  8.78e+00 1.39e-04  4.79e-05     5s
      27   2.66285535e+02  2.66167576e+02  1.49e+00 1.39e-04  4.79e-05     5s
      28   2.66285535e+02  2.66167576e+02  1.49e+00 1.39e-04  4.79e-05     5s
      29   6.91292349e+10  0.00000000e+00  1.01e+07 1.00e+03  1.40e+09     5s
      30   3.24199214e+10 -2.24055653e+05  6.22e+11 4.17e+02  3.17e+08     5s
      31   3.15813594e+08 -2.28305927e+04  6.01e+09 5.82e+03  3.02e+05     6s
      32   2.41507941e+07 -2.14250174e+04  4.51e+08 5.20e+03  6.67e+04     6s
      33   2.50838270e+07 -2.14766286e+04  9.19e+08 5.21e+03  7.08e+04     6s
      34   2.50851711e+07 -2.14770544e+04  2.22e+09 5.21e+03  4.08e+04     6s
      35   2.50851711e+07 -2.14770544e+04  2.22e+09 5.21e+03  4.08e+04     6s
      36   2.50851711e+07 -2.14770544e+04  2.22e+09 5.21e+03  4.08e+04     6s
    
    Barrier performed 36 iterations in 6.10 seconds (0.66 work units)
    Numerical trouble encountered
    
    
     
     
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Cell In[3], line 93
         90  model.optimize()
         92 # Extract weight vector and bias
    ---> 93  weights = np.array([w[j].x for j in range(n)])
         94  print(weights)
         95  bias = b.x
    
    Cell In[3], line 93, in <listcomp>(.0)
         90  model.optimize()
         92 # Extract weight vector and bias
    ---> 93  weights = np.array([w[j].x for j in range(n)])
         94  print(weights)
         95  bias = b.x
    
    File src\gurobipy\var.pxi:125, in gurobipy.Var.__getattr__()
    
    File src\gurobipy\var.pxi:153, in gurobipy.Var.getAttr()
    
    File src\gurobipy\attrutil.pxi:100, in gurobipy.__getattr()
    
    AttributeError: Unable to retrieve attribute 'x'
    
    
     
    I used model.params.BarHomogeneous = 1.
     
     
    Thanks
    Manju
     

     

  • Jaromił Najman
    Gurobi Staff

    Hi Manju,

    For now, there is no other workaround. You could wait for the next release, in which the stability of the Barrier algorithm is improved. However, even in the new release it is not guaranteed that Barrier never runs into numerical issues.

    It would be best to catch and handle the error appropriately.
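    One way to sketch such handling in gurobipy (my illustration of the catch-and-skip idea; the helper name and the SolCount guard are my own choices):

    ```python
    import numpy as np
    import gurobipy as gp


    def optimize_and_extract(model, w, n):
        """Optimize and return the weight vector, or None if the run failed.

        Var.x can only be queried when at least one solution is stored, so
        checking model.SolCount avoids the AttributeError seen after
        "Numerical trouble encountered".
        """
        try:
            model.optimize()
        except gp.GurobiError as e:
            print(f"Gurobi error: {e}")
            return None
        if model.SolCount > 0:
            return np.array([w[j].X for j in range(n)])
        print(f"No solution stored (status {model.Status}); skipping this run")
        return None
    ```

    In the num_runs loop, a None return could then be counted as a failed run and excluded from the mean and standard deviation.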

    Best regards, 
    Jaromił

  • Manju Bala
    Collaborator
    Curious

    Hi,

    Thanks. I am working on my master's thesis on robust optimization of support vector machines, in which I have to optimize this algorithm for real big data and then compute the mean error over multiple runs.

    Can you please suggest how to handle this issue? As of now I am stuck, because if it raises this error for randomly generated numbers, then there is a high chance it will raise the same error for real data. If you can suggest any solution so that I can proceed, that would be great.

    Thanks

    Manju

     

  • Jaromił Najman
    Gurobi Staff

    As of now I am stuck, because if it raises this error for randomly generated numbers, then there is a high chance it will raise the same error for real data.

    Often, real data is actually more robust than randomly generated data, which tends to produce almost-parallel rows, very small or very large coefficients, etc. You could simply try running your algorithm on the real data.

    You could wait for the upcoming release which is planned for December.

    If you still run into this error, you would have to handle the affected model somehow, e.g., exclude that particular model from the given run and continue differently, or maybe perturb the data?
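    To illustrate the perturbation idea (my own sketch, not an official recommendation; the function name and the relative scale are illustrative choices): noise several orders of magnitude below the data scale can break near-degenerate geometry, such as almost-parallel constraint rows, without noticeably changing the statistics of the runs.

    ```python
    import numpy as np


    def perturb(X, rel_scale=1e-6, rng=None):
        """Return a copy of X with tiny Gaussian noise added.

        The noise is scaled relative to the largest magnitude in X, so it
        stays far below the data scale while still breaking degenerate
        geometry that can trip up the Barrier algorithm.
        """
        rng = np.random.default_rng(0) if rng is None else rng
        scale = rel_scale * max(np.abs(X).max(), 1.0)
        return X + scale * rng.standard_normal(X.shape)
    ```

    A failed run could then be retried once or twice with perturb(X_train) before being excluded from the statistics.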

