What does serverless mean?
The core idea of serverless computing rests on three points:
- The execution of some code is triggered by a given event.
- The code runs in a computing environment that is allocated dynamically rather than specified by the user in advance.
- The allocated computing environment is short-lived, i.e., it typically exists for around 15 minutes or less.
Three of the most commonly used platforms for serverless computing are AWS Lambda, AWS Fargate, and Azure Functions.
How can Gurobi be used within this setup?
Within a dynamically allocated computing environment, the notion of a machine is not well-defined. Licenses that are tied to a specific machine therefore do not work well in this context; instead, one should use licenses that support dynamic licensing (Floating Use, WLS) or offloading of the computation (Compute Server, Instant Cloud):
- Floating Use: In this setup, an external token server hands out tokens that enable Gurobi usage. In a serverless world, the code to be executed requests a token, the optimization takes place within the serverless environment, and the token is released again when the computation terminates.
- Web License Service (WLS): This is similar to the Floating Use option above; however, the token is handed out by a service that Gurobi operates. The client holding the token, which may be a containerized environment or a regular machine, performs the optimization locally.
- Compute Server: Here, a machine (e.g., a VM) is set up to perform the optimization, and any number of clients can submit jobs to it. Once the code in the serverless compute environment starts an optimization, the problem is sent to this Compute Server, solved there, and the solution is returned to the client (i.e., the code running in the serverless environment). When batch mode is used in conjunction with a Cluster Manager, it is even possible to sever the connection to the Compute Server and retrieve the result at a later stage using the batch ID (see an example here).
- Instant Cloud: In the Instant Cloud, the optimization takes place on user-specified machine pools, similar to the Compute Server (see above). Unlike with a self-hosted Compute Server, however, Gurobi handles the connection to the cloud provider, so the user does not need to configure this themselves. When optimizing from a serverless environment, the flow is the same as with the Compute Server: the problem is sent to the Instant Cloud to be solved, and the solution is then returned to the client.
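For illustration, the client-side gurobi.lic entries for each of these options might look roughly as follows. All hostnames, IDs, and keys below are placeholders; the actual values come from your token server setup, the Gurobi Web License Manager, or the Instant Cloud portal.

```
# Floating Use: point the client at the token server (41954 is the default port)
TOKENSERVER=token.example.com
PORT=41954

# Web License Service (WLS): credentials issued by Gurobi
WLSACCESSID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
WLSSECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
LICENSEID=123456

# Compute Server: address of the server (or a Cluster Manager via CSMANAGER)
COMPUTESERVER=server.example.com
# CSMANAGER=https://manager.example.com

# Instant Cloud: credentials and (optionally) a machine pool
CLOUDACCESSID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
CLOUDKEY=xxxxxxxxxxxx
CLOUDPOOL=mypool
```

Only the block matching the chosen licensing option would appear in a given gurobi.lic file.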
What are suitable use cases for Gurobi within serverless?
In a serverless world, one has no control over the computing environment available at runtime. Combined with the inherently short lifetime of these environments, using Gurobi in this setting is typically only advantageous in one of the following use cases:
- Short solution times: If Gurobi solves the optimization problem at hand quickly (say, under 100 seconds on a regular laptop), then it will likely also be solved within a serverless environment. Note that the available compute can be limited; within AWS Lambda, for example, "CPU power [is allocated] linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU".
- Quick model building paired with batch mode: When a Cluster Manager is used in a Compute Server setup (see above), it is possible to submit a job for optimization and then sever the connection. If the model building time is short (say, under 100 seconds on a regular laptop), the problem can therefore be sent to the Cluster Manager and the result retrieved at a later stage.
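For the first use case, a serverless function can build and solve a small model directly. The sketch below shows a hypothetical AWS Lambda handler using gurobipy; it assumes a serverless-friendly license (e.g., WLS credentials) is already configured for the deployment, and the model is just a toy example.

```python
def handler(event, context):
    """Hypothetical AWS Lambda handler that solves a tiny LP with gurobipy.

    Assumes a serverless-friendly Gurobi license (e.g., WLS) is configured
    in the deployment package; the model here is purely illustrative.
    """
    import gurobipy as gp          # imported inside the handler (cold-start cost)
    from gurobipy import GRB

    with gp.Env() as env, gp.Model("lambda-demo", env=env) as m:
        # maximize x + y subject to x + 2y <= 3, 0 <= x <= 1, 0 <= y <= 2
        x = m.addVar(ub=1.0, name="x")
        y = m.addVar(ub=2.0, name="y")
        m.setObjective(x + y, GRB.MAXIMIZE)
        m.addConstr(x + 2 * y <= 3)
        m.optimize()
        # return a small JSON-serializable payload to the caller
        return {"status": m.Status, "objective": m.ObjVal}
```

Since the whole invocation must fit within the platform's time limit, this pattern only makes sense when both model building and solving are fast.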
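For the second use case, the submit-then-retrieve flow with a Cluster Manager can be sketched as follows. This is a minimal sketch, assuming the client's gurobi.lic (or environment parameters) already points at a Cluster Manager with valid credentials; file names are placeholders.

```python
def submit_batch(model_file: str) -> str:
    """Submit a model as a batch job to a Cluster Manager; return the batch ID.

    Assumes the Cluster Manager connection is configured in gurobi.lic.
    The serverless function can exit as soon as the ID is returned.
    """
    import gurobipy as gp

    with gp.Env(params={"CSBatchMode": 1}) as env, \
         gp.read(model_file, env=env) as model:
        return model.optimizeBatch()   # submits the job, returns the batch ID


def retrieve_result(batch_id: str, out_file: str) -> None:
    """Later (possibly from a different invocation), fetch the batch result."""
    import gurobipy as gp

    with gp.Env() as env, gp.Batch(batch_id, env) as batch:
        batch.update()                 # refresh the status from the Cluster Manager
        if batch.BatchStatus == gp.GRB.BATCH_COMPLETED:
            batch.writeJSONSolution(out_file)
```

The batch ID is the only state that must survive between invocations, so it would typically be stored in a queue or database between the submit and retrieve steps.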