Automatically get values of auxiliary variables
I have main variables (V) and auxiliary variables (U), related by formulas like the following example:
U1 = 0.5 V1 + 0.7 V2 + 0.1 V3
After Gurobi solves the optimization problem, I get an array of values for V, e.g. V1 = 5, V2 = 7, V3 = 9. However, I do not get a value for each Ui; it still returns the formula above.
Is there any way to automatically get the value of each U once V is known? Following the example, U1 should be 0.5 * 5 + 0.7 * 7 + 0.1 * 9 = 8.3.
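A minimal sketch of the situation described above (names hypothetical): when U1 is built only as an expression over the V-variables, it is a gurobipy LinExpr rather than a decision variable, so the solver never assigns it a value.

import gurobipy as gp

m = gp.Model('example')
# V-variables are real decision variables in the model
V = m.addVars(3, lb=0, vtype='C', name='V')
# U1 is only a linear expression over V, not a model variable,
# so optimize() will produce values for V but never for U1
U1 = 0.5 * V[0] + 0.7 * V[1] + 0.1 * V[2]
print(type(U1))   # <class 'gurobipy.LinExpr'>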
-
Did you add your U-variables to the model with addVar(s)? And also the constraint (U1 = 0.5 V1 + 0.7 V2 + 0.1 V3) with addConstr?
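In case it helps, the pattern being asked about would look roughly like this (a minimal sketch; variable names are placeholders):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
V1 = m.addVar(name='V1')
V2 = m.addVar(name='V2')
V3 = m.addVar(name='V3')
# U1 as a real decision variable; lb defaults to 0, so widen it if U1 may be negative
U1 = m.addVar(lb=-GRB.INFINITY, name='U1')
# tie U1 to the V-variables with an equality constraint
m.addConstr(U1 == 0.5 * V1 + 0.7 * V2 + 0.1 * V3, name='def_U1')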
-
Yes, I did. Here is an example:
U-variables[0] = {0: <gurobi.LinExpr: 0.0 state_value_variable[1] + 0.0 state_value_variable[33] + 0.0 state_value_variable[153] + 0.26963645536163366 state_value_variable[8] + 0.2964055079203106 state_value_variable[18]>}
Gurobi solved for specific values of state_value_variable, but not for the U-variables.
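If the expressions are kept as LinExpr objects anyway, one way to evaluate them after the solve, without adding extra variables, is LinExpr.getValue(), which substitutes the current solution into the expression. A sketch, assuming m is the model and u_expressions[0][0] holds the LinExpr shown above (names hypothetical):

from gurobipy import GRB

m.optimize()
if m.Status == GRB.OPTIMAL:
    # substitutes the solved values of state_value_variable into the
    # stored expression and returns a plain float
    u_value = u_expressions[0][0].getValue()
    print(u_value)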
-
Here is all my code. It's quite long because I don't yet know how to write optimized code with Gurobi variables. There are two problems I need your support with:
1) As you can see, the variable CV_value_variable is calculated by a formula. So when Gurobi finishes, that variable does not have specific values, and I must manually recalculate the values for CV_value_variable.
2) I am trying to use multiprocessing on my computer, but it doesn't work with the variable state_value_variable. Can you suggest any solutions?
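On 2): gurobipy model and variable objects cannot be pickled, so they cannot be passed to multiprocessing workers directly. A common workaround is to send only plain Python data to each worker and build and solve a separate model (with its own environment) inside the worker. A rough sketch with placeholder names and a toy model:

import multiprocessing as mp
import gurobipy as gp
from gurobipy import GRB

def solve_one(bounds):
    # build the model from plain Python data inside the worker;
    # each process creates its own Gurobi environment and model
    with gp.Env() as env, gp.Model(env=env) as m:
        x = m.addVars(len(bounds), lb=0, name='x')
        m.setObjective(x.sum(), GRB.MINIMIZE)
        for i, b in enumerate(bounds):
            m.addConstr(x[i] >= b)
        m.optimize()
        # return plain floats, never gurobipy objects
        return [x[i].X for i in range(len(bounds))]

if __name__ == '__main__':
    chunks = [[1.0, 2.0], [3.0, 4.0]]
    with mp.Pool(processes=2) as pool:
        results = pool.map(solve_one, chunks)
    print(results)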
-
import gurobipy as gp
from gurobipy import GRB
import numpy as np

for _ in range(5):
    state_value_table_new = state_value_table_gu.copy()
    invest_policy_table_gu = invest_policy_table.copy()
    exit_policy_table_gu = exit_policy_table.copy()
    entry_policy_table_gu = entry_policy_table.copy()

    # create a new model
    m = gp.Model('naive_LP_model')

    # create variables
    state_value_variable = m.addVars(num_state, lb=0, ub=float('inf'), vtype='C', name='state_value_variable')
    aux_variable = m.addVars(num_state * n, lb=0, ub=float('inf'), vtype='C', name='aux_variable')
    CV_value_variable = m.addVars(num_state * len(invArr), lb=0, ub=float('inf'), vtype='C', name='CV_value_variable')

    # set objective
    m.setObjective(state_value_variable.sum(), GRB.MINIMIZE)

    # add constraints
    for state_idx in range(num_state):
        state = all_states_10[state_idx]
        industry_state = list(state[1])
        lst_firm_states = extract_firm_states_operator(state)
        lst_profit = get_profit(industry_state, profit_params, industry_params)
        for kappa_idx in range(n):
            m.addConstr(aux_variable[state_idx * n + kappa_idx] >= lst_kappa[kappa_idx])
            for inv in range(len(invArr)):
                # firms' actions
                firm_0 = {'inv': None, 'rho': None, 'lam': None}
                firm_1 = firm_action(lst_firm_states[1], tuple([lst_firm_states[1], state[1]]),
                                     invest_policy_table, exit_policy_table, entry_policy_table)
                firm_2 = firm_action(lst_firm_states[2], tuple([lst_firm_states[2], state[1]]),
                                     invest_policy_table, exit_policy_table, entry_policy_table)
                lst_firm_actions = [firm_0, firm_1, firm_2]

                # firms' action probabilities
                firm_0['inv'] = invArr[inv]
                lst_act_prob_0 = action_probability(lst_firm_states[0], firm_0, transition_params,
                                                    industry_params, target_firm=True)
                lst_act_prob_1 = action_probability(lst_firm_states[1], firm_1, transition_params,
                                                    industry_params, target_firm=False)
                lst_act_prob_2 = action_probability(lst_firm_states[2], firm_2, transition_params,
                                                    industry_params, target_firm=False)
                all_act_prob = [lst_act_prob_0, lst_act_prob_1, lst_act_prob_2]

                # transition matrix
                trans_mat = Transition_table(industry_state, lst_firm_states, all_act_prob)

                # CV value
                expected_value = 0
                for next_state in list(trans_mat.keys()):
                    expected_value += trans_mat[next_state] * state_value_variable[all_states_10.index(next_state)]
                CV_value = -economic_params['invCost'] * firm_0['inv'] + \
                           economic_params['beta'] * expected_value
                CV_value_variable[state_idx * len(invArr) + inv] = CV_value
                m.addConstr(aux_variable[state_idx * n + kappa_idx] >= CV_value)

        m.addConstr(state_value_variable[state_idx] >= lst_profit[state[0]] +
                    sum([aux_variable[state_idx * n + i] for i in range(n)]) / n)

    # Run LP model
    m.optimize()

    for i in range(num_state):
        state_value_table_gu[all_states_10[i]] = state_value_variable[i].X

    for i in range(num_state):
        state = all_states_10[i]
        industry_state = list(state[1])
        lst_firm_states = extract_firm_states_operator(state)
        firm_0 = {'inv': None, 'rho': None, 'lam': None}
        firm_1 = firm_action(lst_firm_states[1], tuple([lst_firm_states[1], state[1]]),
                             invest_policy_table, exit_policy_table, entry_policy_table)
        firm_2 = firm_action(lst_firm_states[2], tuple([lst_firm_states[2], state[1]]),
                             invest_policy_table, exit_policy_table, entry_policy_table)
        lst_firm_actions = [firm_0, firm_1, firm_2]
        CV_value_opt, inv_opt = CV_operator(lst_firm_actions, state, invArr, economic_params,
                                            transition_params, industry_params, state_value_table_gu)
        invest_policy_table[all_states_10[i]] = inv_opt
        exit_policy_table[all_states_10[i]] = CV_value_opt

    for industry_state in lst_industry_state_2:
        entry_policy_table[industry_state] = cal_entry_value_opt(industry_state, state_value_table_new,
                                                                 economic_params, invest_policy_table_gu,
                                                                 exit_policy_table_gu, entry_policy_table_gu)

    delta_inv = max(abs(np.array(list(invest_policy_table.values())) - np.array(list(invest_policy_table_gu.values()))))
    delta_exit = max(abs(np.array(list(exit_policy_table.values())) - np.array(list(exit_policy_table_gu.values()))))
    delta_entry = max(abs(np.array(list(entry_policy_table.values())) - np.array(list(entry_policy_table_gu.values()))))
    delta_policy = max(delta_inv, delta_exit, delta_entry)
    print(delta_policy)
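A note on problem 1 above: the assignment CV_value_variable[state_idx * len(invArr) + inv] = CV_value replaces the Gurobi variable stored in the tupledict with a plain linear expression, so the solver never produces a value for it. One possible fix (a sketch against the code above) is to keep the variable and define it through an equality constraint, then read .X after optimize():

k = state_idx * len(invArr) + inv
# keep CV_value_variable[k] as a model variable and define it via a constraint;
# note CV_value can be negative here, so the variable would also need
# lb=-GRB.INFINITY instead of lb=0 when it is created
m.addConstr(CV_value_variable[k] == CV_value, name='def_CV_%d' % k)

and after m.optimize():

cv_solved = [CV_value_variable[k].X for k in range(num_state * len(invArr))]
-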
I am not sure whether I understand your problem. The code you added is not executable on its own (undefined objects, missing helper functions...).
I see that CV_value_variable is added to the model, but there is no (model) constraint related to this variable?
It seems to depend on state_value_variable. But then you need to add this relation as constraints to the model, for example (simplified):

m.addConstr(CV_value_variable[0] == gp.quicksum(state_value_variable[i] for i in range(num_state)))
Then if m.optimize() results in a solution, print(CV_value_variable[0].x) gives you the value for this variable (which is the sum of the other variables here).
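Putting this suggestion together with the numbers from the original question, a self-contained toy example (the equality constraints pinning V1, V2, V3 are only there to reproduce the example solution):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model('toy')
V = m.addVars(3, name='V')
U1 = m.addVar(lb=-GRB.INFINITY, name='U1')

# pin V to the example solution; in a real model these values
# would come out of the optimization itself
m.addConstr(V[0] == 5)
m.addConstr(V[1] == 7)
m.addConstr(V[2] == 9)

# define U1 through an equality constraint
m.addConstr(U1 == 0.5 * V[0] + 0.7 * V[1] + 0.1 * V[2])

m.optimize()
print(U1.X)   # 8.3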