In my optimisation problem, I need to minimise the information entropy:
$$-\sum_i p_i \log p_i \rightarrow \min$$
where the variables `p_i` are bilinear in the decision variables. At the moment, I model this by introducing intermediate variables $$q_i = \log p_i$$ and making use of Gurobi's general constraints.
My problem is rather large, so the solve only terminates by hitting the time limit. I have reason to believe that in some cases the incumbent solutions I obtain are not globally optimal.
I have the following idea. Since `\log p` behaves badly near `p = 0`, I want to work with the function `p \log p` instead, which is well-behaved on `[0,1]`. My hope is that approximating `p \log p` directly is better from a numerical point of view.
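To make "well-behaved" concrete: with the standard continuous extension `0 \log 0 = 0`, the function `p \log p` is continuous and bounded on `[0,1]` (its minimum is `-1/e` at `p = 1/e`), whereas `\log p` itself diverges to `-\infty` as `p \to 0`. A quick numerical check:

```python
import math

def plogp(p):
    # p*log(p) with the standard continuous extension plogp(0) = 0
    return 0.0 if p == 0.0 else p * math.log(p)

# Sample f on a grid over [0, 1]: the values stay bounded, with the
# minimum -1/e ~ -0.36788 attained at p = 1/e, and f(0) = f(1) = 0.
vals = [plogp(i / 1000) for i in range(1001)]
print(min(vals))            # close to -1/e
print(plogp(1 / math.e))    # -1/e up to floating-point rounding
```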
My question is: what is the best way to implement this in Gurobi? For example, how do I add a constraint requiring that $$r_i = p_i \log p_i ?$$
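One possible approach (a sketch, not a definitive answer): since Gurobi 9.0, `Model.addGenConstrPWL(xvar, yvar, xpts, ypts)` lets you impose `y = f(x)` through an explicit list of piecewise-linear breakpoints. Because `p \log p` is finite at `p = 0`, the point `p = 0` can itself be a breakpoint, avoiding the singularity that plagues `\log p`. The breakpoint construction below is plain Python and also estimates the worst-case approximation error; the gurobipy call is shown in a comment, since it requires a Gurobi installation, and the variable names `model`, `p[i]`, `r[i]` are placeholders:

```python
import math

def plogp(p):
    # p*log(p) with the continuous extension plogp(0) = 0
    return 0.0 if p == 0.0 else p * math.log(p)

# Uniform breakpoints for a piecewise-linear approximation of f on [0, 1].
n = 50
xpts = [i / n for i in range(n + 1)]
ypts = [plogp(x) for x in xpts]

def interp(x):
    # Value of the piecewise-linear interpolant through (xpts, ypts) at x.
    i = min(int(x * n), n - 1)
    x0, x1 = xpts[i], xpts[i + 1]
    t = (x - x0) / (x1 - x0)
    return (1 - t) * ypts[i] + t * ypts[i + 1]

# Worst-case gap between f and its interpolant on a fine sampling grid
# (largest near p = 0, where f has the most curvature).
err = max(abs(interp(j / 10000) - plogp(j / 10000)) for j in range(10001))
print(f"max PWL error with {n + 1} points: {err:.5f}")

# In gurobipy (Gurobi 9.0+), assuming variables p[i] in [0, 1] and
# auxiliary variables r[i], this becomes one general constraint per i:
#     model.addGenConstrPWL(p[i], r[i], xpts, ypts)
# The objective -sum_i p_i log p_i -> min is then simply
#     model.setObjective(-gp.quicksum(r), GRB.MINIMIZE)
```

If the error near `p = 0` matters, non-uniform breakpoints (denser near 0) reduce it substantially for the same number of points.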