In the previous article we gave a general introduction to Databricks concepts and architecture. Here we will look more specifically at the use of Gurobi on Databricks. For practical instructions on getting started using Databricks, have a look at the article on installation steps. Note that we don't go into too much detail about licensing options; please reach out to your Technical Account Manager if you want to discuss your specific use case and learn how to best set up Gurobi.
Architecture
When working with Gurobi, you're typically performing a few steps sequentially:
- Retrieving and preprocessing input data
- Defining a mathematical optimization model
- Solving the model ("optimization" by Gurobi)
- Retrieving the solution
- Postprocessing and storing the solution
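For concreteness, below is a minimal sketch of these five steps using gurobipy. The model, data values, and names are illustrative only and not part of this article:

```python
import gurobipy as gp
from gurobipy import GRB

# Step 1: retrieve and preprocess input data (hypothetical values)
costs = {"x": 1.0, "y": 2.0}

# Step 2: define a mathematical optimization model
model = gp.Model("example")
x = model.addVar(name="x")
y = model.addVar(name="y")
model.addConstr(x + y >= 4, name="demand")
model.setObjective(costs["x"] * x + costs["y"] * y, GRB.MINIMIZE)

# Step 3: solve the model; this is the work Gurobi performs,
# on the node where optimize() is called (see below)
model.optimize()

# Step 4: retrieve the solution
solution = None
if model.Status == GRB.OPTIMAL:
    solution = {v.VarName: v.X for v in model.getVars()}

# Step 5: postprocess and store the solution, e.g. write it to a table
print(solution)
```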
Important:
- Gurobi does not use Apache Spark to distribute the work of step 3 for a single model. In principle, solving the model happens on the node where you call `model.optimize()`. We do provide other options for offloading this work outside the cluster, as described below.
- For a single model, steps 2-4 must be performed on the same node, since the model is kept in memory. You may therefore parallelize steps 1 and 5 (pre- and postprocessing), as long as you don't invoke the Gurobi API in these steps.
- You can run multiple models in parallel on different nodes, as long as steps 2-4 for a specific model are performed on a single node.
These principles allow the following architectures.
- Run everything in a single script/notebook. This means all steps are performed on the driver node in your cluster. Unless you offload pre- and postprocessing to worker nodes yourself, there's no point in setting up a multi-node cluster: the worker nodes would remain idle. For the same reason, autoscaling (dynamically adding worker nodes to your cluster) would not have an effect on optimization.
- Manually call Spark. You may call the Spark library explicitly to build and solve models on the worker nodes. In Python this can be done using PySpark with functions like `SparkContext.parallelize()`. Since this approach is not Gurobi-specific, we don't go into detail here. Steps 2-4 for a single model must be performed on a single node, so you would typically partition the input data from step 1 and distribute it to your worker nodes; each parallel task then builds a model from one subset of the input data, solves it, and returns a solution (see the first sketch after this list).
- Let Gurobi offload optimization outside the cluster. If you want to perform optimization outside your driver node, there are two further options. For both, step 3 is executed elsewhere while steps 1-2 and 4-5 run on your Databricks cluster (see the second sketch after this list).
- Compute Server: Dedicated component for optimizing mathematical models that are defined elsewhere. You manage this component yourself and run it in your own environment (e.g. on a virtual machine, in a container, etc.). During steps 2-4 above, a session with the Compute Server is initiated automatically by the Gurobi API. Step 3 is performed on the Compute Server and does not consume resources on your Databricks cluster.
- Instant Cloud: Online service managed by Gurobi. Similar to the Compute Server option, steps 2-4 initiate a remote session and step 3 is performed on a machine managed by Gurobi Instant Cloud.
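As an illustration of the second option, the sketch below distributes independent models to worker nodes with PySpark. The function `solve_partition` and the input subsets are hypothetical; in a Databricks notebook, `sc` refers to the predefined SparkContext, and gurobipy plus a valid license must be available on every worker node:

```python
import gurobipy as gp
from gurobipy import GRB

def solve_partition(data):
    """Steps 2-4 for one subset of input data, entirely on one worker node."""
    with gp.Env() as env, gp.Model("subproblem", env=env) as model:
        x = model.addVar(ub=data["capacity"], name="x")
        model.setObjective(data["profit"] * x, GRB.MAXIMIZE)
        model.optimize()
        obj = model.ObjVal if model.Status == GRB.OPTIMAL else None
        return {"id": data["id"], "objective": obj}

# Step 1: partition the input data (hypothetical subsets)
subsets = [{"id": i, "capacity": 10.0 * i, "profit": 3.0} for i in range(1, 5)]

# Each task builds, solves, and reads out its own model on a worker node;
# steps 1 and 5 stay on the driver.
results = sc.parallelize(subsets, numSlices=len(subsets)).map(solve_partition).collect()
```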
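For the third option, the following sketch shows how a gurobipy environment can be pointed at a Compute Server or at Instant Cloud; the server address and credentials are placeholders. Once the environment is started, you build the model as usual and `model.optimize()` runs remotely:

```python
import gurobipy as gp

# Compute Server: connect to a server you run and manage yourself
env = gp.Env(empty=True)
env.setParam("ComputeServer", "myserver.example.com:61000")  # placeholder address
env.start()

# Instant Cloud: supply your cloud credentials instead of ComputeServer
# env.setParam("CloudAccessID", "...")
# env.setParam("CloudSecretKey", "...")

# Steps 2-4 look exactly the same as before; only step 3 runs remotely
model = gp.Model("remote-example", env=env)
```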
Licensing
Each of the options above leads to different Gurobi licensing requirements.
- When running everything on the driver node, we encourage the use of a Web License Service (WLS) license. Setup is very simple, and you don't need to worry about which hardware runs Gurobi, since the license is not tied to a specific machine. This license also works when using custom container images for your cluster. Consider the number of concurrent tasks on the driver node that use Gurobi when picking a suitable license type (see the sketch after this list).
- When using Spark to distribute Gurobi work to worker nodes, you would again use the Web License Service. However, the number of licenses/tokens required now also depends on the number of nodes that will use Gurobi at the same time.
- Compute Server and Instant Cloud require specific licenses. In these situations you do not need a license for your Databricks cluster since it only acts as a client to your Compute Server or Instant Cloud resources.
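As a sketch of the first two cases, a WLS license can be passed to gurobipy through environment parameters. The credential values below are placeholders; the real values come from your WLS license file:

```python
import gurobipy as gp

# Placeholder WLS credentials; copy the real values from your gurobi.lic file
params = {
    "WLSACCESSID": "00000000-0000-0000-0000-000000000000",
    "WLSSECRET": "00000000-0000-0000-0000-000000000000",
    "LICENSEID": 123456,
}
env = gp.Env(params=params)
model = gp.Model("wls-example", env=env)
```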