
Joblib and WLS License

Answered

Comments

4 comments

  • Matthias Miltenberger
    Gurobi Staff

    This may happen due to a quick succession of terminating one job and starting another on another machine. You are allowed to run two jobs at the same time on two different machines and you can also check the usage in the Web License Manager.

    You may also open a ticket and ask for a larger baseline usage from our Licensing team to properly run on your cluster.

    How do I get support as an Academic user? – Gurobi Help Center

    Cheers,
    Matthias

  • Fabio Mercurio

    Hi Matthias, 

    Unfortunately, the problem persists. In the Web License Manager, although I am allowed 32 sessions, I see 64 active sessions (over baseline). I still think this is related to the quick succession of jobs.

    Is it possible to link one Gurobi environment to a single joblib job, so that the environment never changes?

    Indeed, by parallelizing over instances, I create one Gurobi environment per instance, which is used sequentially for several models until the instance is solved, and then I dispose of the environment.

    Since I repeat this for thousands of instances on a limited number of jobs, this quick succession may confuse the license (or the license manager).

    How can I tie a unique environment to a unique joblib job? I hope the question is clear. Thanks.

     

    Best,

    Fabio

  • Fabio Mercurio

    I am sharing a minimal reproducible example:

    import numpy as np
    import joblib
    from joblib import Parallel, delayed
    from contextlib import contextmanager
    from tqdm import tqdm
    import gurobipy as grb


    def cpu_instance_analysis(iter):
        n_players = np.random.randint(2, 5)
        player_values = dict()

        for player in range(n_players):
            num_extr = np.random.uniform(1, 2)

            with grb.Model() as max_model:
            x_vars = max_model.addVar(vtype=grb.GRB.CONTINUOUS, lb=0, name="x")

                ### Constraints
                constraints = max_model.addLConstr(
                    lhs=x_vars, sense=grb.GRB.LESS_EQUAL, rhs=num_extr, name="limit"
                )

                ### Objective
                max_model.setObjective(x_vars, sense=grb.GRB.MAXIMIZE)

                max_model.setParam("OutputFlag", False)
                max_model.optimize()

                player_values[player] = max_model.objVal

        with grb.Model() as min_player_model:
            minimum_p = min(player_values.values())

        p_var = min_player_model.addVar(vtype=grb.GRB.CONTINUOUS, lb=0, name="x")

            ### Constraints
            constraints = {
                ii: min_player_model.addLConstr(
                    lhs=p_var, sense=grb.GRB.LESS_EQUAL, rhs=minimum_p, name="limit"
                )
                for ii in range(n_players)
            }

            ### Objective
            min_player_model.setObjective(p_var, sense=grb.GRB.MAXIMIZE)

            min_player_model.setParam("OutputFlag", False)
            min_player_model.optimize()

        return (iter, (minimum_p < 1.5))


    @contextmanager
    def tqdm_joblib(tqdm_object):
        """Context manager to patch joblib to report into tqdm progress bar given as argument"""

        class TqdmBatchCompletionCallback(joblib.parallel.BatchCompletionCallBack):
            def __call__(self, *args, **kwargs):
                tqdm_object.update(n=self.batch_size)

                return super().__call__(*args, **kwargs)

        old_batch_callback = joblib.parallel.BatchCompletionCallBack

        joblib.parallel.BatchCompletionCallBack = TqdmBatchCompletionCallback

        try:
            yield tqdm_object

        finally:
            joblib.parallel.BatchCompletionCallBack = old_batch_callback

            tqdm_object.close()


    def main():
        with tqdm_joblib(tqdm(desc="Progress", total=10000)) as progress_bar:
            results_dict = dict(
                Parallel(n_jobs=-1, verbose=0, backend="multiprocessing")(
                    delayed(cpu_instance_analysis)(iter) for iter in range(10000)
                )
            )


    if __name__ == "__main__":
        main()
  • Matthias Miltenberger
    Gurobi Staff

    Hi Fabio,

    Please excuse my delayed response.

    From the code snippet you shared, it seems that you are not working with Gurobi environments explicitly. This means that every single Model() call creates a new environment.

    To have more control over the environments, you will have to create those explicitly and pass them to the Model() call. Please see this example code for how to do that:

    with gp.Env(params=connection_params) as env:
        # 'env' is now set up according to the connection parameters.
        # The environment is disposed of automatically through the context manager
        # upon leaving this block.
        with gp.Model(env=env) as model:
            # 'model' is now an instance tied to the enclosing Env object 'env'.
            # The model is disposed of automatically through the context manager
            # upon leaving this block.
            try:
                populate_and_solve(model)
            except:
                # Add appropriate error handling here.
                raise

    With this, you should be able to create a single environment and use it to build and optimize several models sequentially.
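
    Applied to your snippet, the pattern could look roughly like the sketch below. This is only an illustration, not your exact code: `solve_instance` is a simplified stand-in for your `cpu_instance_analysis`, and the per-player bounds are passed in rather than sampled. The key point is that every model receives the same `env`, so each joblib task opens exactly one license session and releases it when the task finishes:

    ```python
    import gurobipy as grb


    def solve_instance(bounds):
        # One explicit Env per joblib task: every Model built while solving
        # this instance reuses it, so the license session is opened once
        # and released once instead of once per Model() call.
        with grb.Env() as env:
            env.setParam("OutputFlag", 0)  # inherited by models created from env
            values = []
            for rhs in bounds:
                with grb.Model(env=env) as m:
                    x = m.addVar(vtype=grb.GRB.CONTINUOUS, lb=0, name="x")
                    m.addLConstr(lhs=x, sense=grb.GRB.LESS_EQUAL, rhs=rhs, name="limit")
                    m.setObjective(x, sense=grb.GRB.MAXIMIZE)
                    m.optimize()
                    values.append(m.ObjVal)
            return min(values)
    ```

    With n_jobs workers you should then see at most that many concurrent sessions in the Web License Manager, rather than one short-lived session per model.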

    Best regards,
    Matthias

