The purpose of this page is to give an impression of the Gurobi evaluation experience and the information we want to gather during this process. It is not a complete list of all the information we may need to make a licensing recommendation, because that may depend on the details of your compute environment. As with our licensing article, the goal here is to explain which factors are important. Be sure to work with a (Technical) Account Manager to learn all the details and make the right choice for your situation.
Contents
What is the purpose of a Gurobi Trial?
What is done during the evaluation?
What is the evaluation period like?
Evaluation Technical Focus Areas
Concepts
In this document we use a few important concepts.
- Model: The mathematical description of a particular problem you want Gurobi to solve.
- Trial License: The free license we provide during the evaluation period. You are not limited to only one license type - many users will try several licensing options within their tech stack and architecture.
- Evaluation: The experience of trying out Gurobi and seeing how it works for your application and integrates into your environment.
- Performance Profile: How long it takes your model to reach your solution criteria using a certain amount of computational resources (CPU, Memory).
- Usage Pattern: The frequency and quantity at which you are running Gurobi.
- License Mechanism: The technology used to access and maintain your Gurobi license in accordance with our EULA.
Evaluation Experience
What is the purpose of a Gurobi Trial?
During the Evaluation period, you are probably trying to answer these two questions:
- "Is Gurobi the right solver for my optimization application?"
- "How will Gurobi fit into my application architecture?"
How do you know Gurobi is the right solver for you? Many of our customers care about one or more of these factors:
- Performance. How fast does Gurobi solve my model?
- Modeling Experience. Which API do you want to work in? Are there benefits to using some of our advanced modeling features?
- Support. What resources and training material are available?
- Integration. Does Gurobi fit into my tech stack?
After establishing your evaluation goals we can begin testing Gurobi.
What is done during the evaluation?
A Gurobi evaluation is often a two-stage process. First, we work with you to establish the Gurobi performance profile and usage pattern for your model. Then we focus on licensing and the selection of a license mechanism.
In many cases, the size of the license you need can change depending on the model's performance profile, so it is important to understand your compute requirements. You can use a trial license to benchmark Gurobi's performance on your system, or we can help you run some tests.
The final choice of licensing mechanism will also depend on your tech stack and architecture. We have licenses designed to run offline or in containers, cloud environments, etc., and additional products to help with queueing for high-throughput scenarios and service administration. We can perform architecture reviews to determine which is the best fit. Sometimes users will test several trial license mechanisms to decide which one they want to move forward with.
Who will be involved?
On the Gurobi side, you can work with our team of Account Managers (AMs) and Technical Account Managers (TAMs). The AMs focus on your commercial relationship with Gurobi, while the TAMs focus on ensuring the technical fit of Gurobi for your application. This makes it easy to hold parallel conversations with multiple stakeholders about Gurobi procurement and technical integration.
The Gurobi Optimization Experts may also get involved if performance tuning is done during the evaluation period. Access to our Advisory & Support team is included during the Evaluation Period. Since this is part of a commercial Gurobi license, it’s important to try out working with our Experts too.
On the user side, we don't just work with the individuals writing the models and testing Gurobi. Many companies will involve their IT department for application deployment and licensing support, and business teams if they want support determining an application’s ROI.
How long will it take?
We work on your timeline to determine how long an evaluation period should last. The average time to test Gurobi’s performance on your model and integration into your architecture (i.e., establish “technical fit”) is around 1 month.
What is the evaluation period like?
The Gurobi team loves talking about mathematical optimization and its many use cases, and getting to know our customers, who frequently report experiences like these:
- "If it wasn't for the easy access to try Gurobi, the fantastic documentation, and of course the always helpful personal support, we would probably not have progressed down the optimization path"
- "The support from Gurobi has exceeded our expectations. We don't need to understand how Gurobi works, but we do need to understand how our model works - and when we talk to Gurobi, we get smarter"
The evaluation period should feel informative, productive, and gratifying for optimization users and business stakeholders.
Evaluation Technical Focus Areas
At the beginning of your Gurobi trial when we are establishing your use case scenario, we will probably ask you many questions about your model, optimization goals, computing resources, and deployment architecture. We do this in order to establish what practical help you may need in terms of support and licensing recommendations. Also, we want to understand what you consider to be a successful evaluation.
Below you will find a non-exhaustive list of questions we might ask. If you don’t yet know the answers to all of these, that is alright too. We can provide trial licenses and support any benchmarking activities needed to establish your performance profile.
Evaluation Support:
- What stage of development is your model in?
- How many individuals and roles will be involved?
- Are there any hurdles to integrating Gurobi we should be aware of?
Model & Performance:
- What are the typical runtimes for your model with your current setup?
- How many threads are used to achieve the current runtime?
- What is your target runtime?
- Have you analyzed log files to consider parameters that may improve runtimes, or discussed this with the Gurobi team?
- Is your model a MIP or one of our other supported model types, like QCP or MINLP?
- What API and modeling language are you using?
Usage Pattern:
- How often will Gurobi solve a model?
- What does an average day look like in terms of Gurobi usage?
- What is the expected peak usage?
- Are jobs triggered manually or on a schedule?
Deployment:
- Are you using containers / cloud services?
- Is queueing needed?
- How do you plan to scale over the next few years?
Example Scenario
The scenario below shows the timeline of an evaluation period for an experienced optimization professional who already has a model developed and a simple architecture, so the focus is on performance. If this accelerated 2-week scenario does not match your anticipated timeline, that is alright! Every evaluation is unique, and each step in the example below can be expanded and customized to your organization.
Scenario: Accelerated Evaluation focused on Gurobi Performance
Julia works at a small trucking company and has developed new models for truck loading and routing. She has been testing the models with an open-source solver and is confident the formulation represents all of the business needs. The application must run (including data input, model building and solve time, and solution analysis) in less than 10 minutes. With the open-source solver it takes hours to run the model, so she wants to try Gurobi. Julia is the main scheduler for the company, so she will be the only person running the model on her laptop, but it will run several times a day as new orders come in.
- Julia fills out the Free Trial request on the Gurobi website.
- Day 1: A call is scheduled for the same afternoon with an Account Manager (AM) and Technical Account Manager (TAM). We quickly agree on a licensing mechanism that supports Julia’s Gurobi use on her laptop, and spend the rest of the call reviewing her model performance with the open source solver.
- Day 1: A trial license is created and Julia downloads this from her Gurobi Portal account.
- Day 2-5: Since Julia is using a generic modeling language, she switches the solver choice to Gurobi and starts observing the performance changes for several model instances with different data inputs. Julia also exports MPS files to quickly import into a gurobipy environment, to see what it's like to work with the Gurobi Python API.
- Day 7: Julia emails the Gurobi team, reporting that Gurobi ran in under 5 minutes for all but 3 model instances, which still took ~20 minutes to solve.
- Day 7: Julia and the TAM discuss on a short call, and plan for Julia to upload these model instances so the Gurobi Optimization Experts can study the solving behavior and log files.
- Day 8-10: The Optimization Expert runs dozens of baseline tests (with different seeds) to replicate Julia's performance on her laptop. The TAM and Optimization Expert study the logs together: Gurobi struggled in some cases to prove optimality, and setting MIPFocus=2 improved the slower runs. The Optimization Expert sends a summary of the tests and a parameter recommendation to Julia.
- Day 14: Things got busy, so it took a few days for Julia to verify the model performance with MIPFocus=2 on her laptop. Now everything solves in 5 minutes or less.
- Day 15: Julia messages the Gurobi account team saying she's happy with the solver performance and ready to buy the same license she has been testing. The account team also sets up a technical call for two weeks later to check whether she needs help using any advanced features (Multi-Objective formulations were also discussed earlier) after she converts her code to gurobipy.