0001 function opt = ipopt_options(overrides, mpopt) 0002 %IPOPT_OPTIONS Sets options for IPOPT. 0003 % 0004 % OPT = IPOPT_OPTIONS 0005 % OPT = IPOPT_OPTIONS(OVERRIDES) 0006 % OPT = IPOPT_OPTIONS(OVERRIDES, FNAME) 0007 % OPT = IPOPT_OPTIONS(OVERRIDES, MPOPT) 0008 % 0009 % Sets the values for the options.ipopt struct normally passed to 0010 % IPOPT. 0011 % 0012 % Inputs are all optional, second argument must be either a string 0013 % (FNAME) or a vector (MPOPT): 0014 % 0015 % OVERRIDES - struct containing values to override the defaults 0016 % FNAME - name of user-supplied function called after default 0017 % options are set to modify them. Calling syntax is: 0018 % MODIFIED_OPT = FNAME(DEFAULT_OPT); 0019 % 0020 % MPOPT - MATPOWER options vector, uses the following entries: 0021 % OPF_VIOLATION (16) - used to set opt.constr_viol_tol 0022 % VERBOSE (31) - used to set opt.print_level 0023 % IPOPT_OPT (60) - user option file, if MPOPT(60) is 0024 % non-zero it is appended to 'ipopt_user_options_' to form 0025 % the name of a user-supplied function used as FNAME 0026 % described above, except with calling syntax: 0027 % MODIFIED_OPT = FNAME(DEFAULT_OPT, MPOPT); 0028 % 0029 % Output is an options.ipopt struct to pass to IPOPT. 0030 % 0031 % Example: 0032 % 0033 % If MPOPT(60) = 3, then after setting the default IPOPT options, 0034 % IPOPT_OPTIONS will execute the following user-defined function 0035 % to allow option overrides: 0036 % 0037 % opt = ipopt_user_options_3(opt, mpopt); 0038 % 0039 % The contents of ipopt_user_options_3.m could be something like: 0040 % 0041 % function opt = ipopt_user_options_3(opt, mpopt) 0042 % opt.nlp_scaling_method = 'none'; 0043 % opt.max_iter = 500; 0044 % opt.derivative_test = 'first-order'; 0045 % 0046 % See the options reference section in the IPOPT documentation for 0047 % details on the available options. 0048 % 0049 % http://www.coin-or.org/Ipopt/documentation/ 0050 % 0051 % See also IPOPT, MPOPTION. 0052 0053 % MATPOWER 0054 % $Id: ipopt_options.m,v 1.8 2011/11/10 21:33:53 cvs Exp $ 0055 % by Ray Zimmerman, PSERC Cornell 0056 % Copyright (c) 2010 by Power System Engineering Research Center (PSERC) 0057 % 0058 % This file is part of MATPOWER. 0059 % See http://www.pserc.cornell.edu/matpower/ for more info. 0060 % 0061 % MATPOWER is free software: you can redistribute it and/or modify 0062 % it under the terms of the GNU General Public License as published 0063 % by the Free Software Foundation, either version 3 of the License, 0064 % or (at your option) any later version. 0065 % 0066 % MATPOWER is distributed in the hope that it will be useful, 0067 % but WITHOUT ANY WARRANTY; without even the implied warranty of 0068 % MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 0069 % GNU General Public License for more details. 0070 % 0071 % You should have received a copy of the GNU General Public License 0072 % along with MATPOWER. If not, see <http://www.gnu.org/licenses/>. 0073 % 0074 % Additional permission under GNU GPL version 3 section 7 0075 % 0076 % If you modify MATPOWER, or any covered work, to interface with 0077 % other modules (such as MATLAB code and MEX-files) available in a 0078 % MATLAB(R) or comparable environment containing parts covered 0079 % under other licensing terms, the licensors of MATPOWER grant 0080 % you additional permission to convey the resulting work.
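For orientation before the code below: the returned struct is normally placed in the ipopt field of the options argument of the IPOPT MATLAB interface. A minimal sketch, assuming MPOPTION is used to build the MATPOWER options vector and that the problem-specific funcs struct, starting point x0, and bound fields of options are set up elsewhere:

    mpopt = mpoption('VERBOSE', 2);                              %% MATPOWER options vector
    options.ipopt = ipopt_options(struct('max_iter', 1000), mpopt);
    %% ... set options.lb, options.ub, options.cl, options.cu and funcs ...
    %% [x, info] = ipopt(x0, funcs, options);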
0081 0082 %%----- initialization and arg handling ----- 0083 %% defaults 0084 verbose = 2; 0085 fname = ''; 0086 0087 %% second argument 0088 if nargin > 1 && ~isempty(mpopt) 0089 if ischar(mpopt) %% 2nd arg is FNAME (string) 0090 fname = mpopt; 0091 have_mpopt = 0; 0092 else %% 2nd arg is MPOPT (MATPOWER options vector) 0093 have_mpopt = 1; 0094 verbose = mpopt(31); %% VERBOSE 0095 if mpopt(60) %% IPOPT_OPT 0096 fname = sprintf('ipopt_user_options_%d', mpopt(60)); 0097 end 0098 end 0099 else 0100 have_mpopt = 0; 0101 end 0102 0103 %%----- set default options for IPOPT ----- 0104 %% printing 0105 if verbose 0106 opt.print_level = min(12, verbose*2+1); 0107 else 0108 opt.print_level = 0; 0109 end 0110 0111 %% convergence 0112 opt.tol = 1e-8; %% default 1e-8 0113 opt.max_iter = 250; %% default 3000 0114 opt.dual_inf_tol = 0.1; %% default 1 0115 if have_mpopt 0116 opt.constr_viol_tol = mpopt(16); %% default 1e-4 0117 opt.acceptable_constr_viol_tol = mpopt(16)*100; %% default 1e-2 0118 end 0119 opt.compl_inf_tol = 1e-5; %% default 1e-4 0120 opt.acceptable_tol = 1e-8; %% default 1e-6 0121 % opt.acceptable_iter = 15; %% default 15 0122 % opt.acceptable_dual_inf_tol = 1e+10; %% default 1e+10 0123 opt.acceptable_compl_inf_tol = 1e-3; %% default 1e-2 0124 % opt.acceptable_obj_change_tol = 1e+20; %% default 1e+20 0125 % opt.diverging_iterates_tol = 1e+20; %% default 1e+20 0126 0127 %% NLP scaling 0128 % opt.nlp_scaling_method = 'none'; %% default 'gradient-based' 0129 0130 %% NLP 0131 % opt.fixed_variable_treatment = 'make_constraint'; %% default 'make_parameter' 0132 % opt.honor_original_bounds = 'no'; %% default 'yes' 0133 % opt.check_derivatives_for_naninf = 'yes'; %% default 'no' 0134 0135 %% initialization 0136 % opt.least_square_init_primal = 'yes'; %% default 'no' 0137 % opt.least_square_init_duals = 'yes'; %% default 'no' 0138 0139 %% barrier parameter update 0140 opt.mu_strategy = 'adaptive'; %% default 'monotone' 0141 0142 %% linear solver 0143 % opt.linear_solver = 'ma27'; 0144 % opt.linear_solver = 'ma57'; 0145 % opt.linear_solver = 'pardiso'; 0146 % opt.linear_solver = 'wsmp'; 0147 % opt.linear_solver = 'mumps'; %% default 'mumps' 0148 % opt.linear_solver = 'custom'; 0149 % opt.linear_scaling_on_demand = 'no'; %% default 'yes' 0150 0151 %% step calculation 0152 % opt.mehrotra_algorithm = 'yes'; %% default 'no' 0153 % opt.fast_step_computation = 'yes'; %% default 'no' 0154 0155 %% restoration phase 0156 % opt.expect_infeasible_problem = 'yes'; %% default 'no' 0157 0158 %% derivative checker 0159 % opt.derivative_test = 'second-order'; %% default 'none' 0160 0161 %% hessian approximation 0162 % opt.hessian_approximation = 'limited-memory'; %% default 'exact' 0163 0164 % ma57 options 0165 %opt.ma57_pre_alloc = 3; 0166 %opt.ma57_pivot_order = 4; 0167 0168 %%----- call user function to modify defaults ----- 0169 if ~isempty(fname) 0170 if have_mpopt 0171 opt = feval(fname, opt, mpopt); 0172 else 0173 opt = feval(fname, opt); 0174 end 0175 end 0176 0177 %%----- apply overrides ----- 0178 if nargin > 0 && ~isempty(overrides) 0179 names = fieldnames(overrides); 0180 for k = 1:length(names) 0181 opt.(names{k}) = overrides.(names{k}); 0182 end 0183 end 0184 0185 0186 %-------------------------- Options Documentation -------------------------- 0187 % (as printed by IPOPT 3.8) 0188 % ### Output ### 0189 % 0190 % print_level 0 <= ( 5) <= 12 0191 % Output verbosity level. 0192 % Sets the default verbosity level for console output. The larger this 0193 % value the more detailed is the output. 
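With the defaults above, MATPOWER's VERBOSE values 1, 2 and 3 map to print_level 3, 5 and 7 via min(12, verbose*2+1), and VERBOSE = 0 gives print_level 0. To pin the verbosity regardless of the MATPOWER setting, print_level can simply be passed through the OVERRIDES argument, for example:

    opt = ipopt_options(struct('print_level', 0), mpopt);    %% silence IPOPT output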
0194 % 0195 % output_file ("") 0196 % File name of desired output file (leave unset for no file output). 0197 % NOTE: This option only works when read from the ipopt.opt options file! 0198 % An output file with this name will be written (leave unset for no file 0199 % output). The verbosity level is by default set to "print_level", but can 0200 % be overridden with "file_print_level". The file name is changed to use 0201 % only small letters. 0202 % Possible values: 0203 % - * [Any acceptable standard file name] 0204 % 0205 % file_print_level 0 <= ( 5) <= 12 0206 % Verbosity level for output file. 0207 % NOTE: This option only works when read from the ipopt.opt options file! 0208 % Determines the verbosity level for the file specified by "output_file". 0209 % By default it is the same as "print_level". 0210 % 0211 % print_user_options ("no") 0212 % Print all options set by the user. 0213 % If selected, the algorithm will print the list of all options set by the 0214 % user including their values and whether they have been used. In some 0215 % cases this information might be incorrect, due to the internal program 0216 % flow. 0217 % Possible values: 0218 % - no [don't print options] 0219 % - yes [print options] 0220 % 0221 % print_options_documentation ("no") 0222 % Switch to print all algorithmic options. 0223 % If selected, the algorithm will print the list of all available 0224 % algorithmic options with some documentation before solving the 0225 % optimization problem. 0226 % Possible values: 0227 % - no [don't print list] 0228 % - yes [print list] 0229 % 0230 % print_timing_statistics ("no") 0231 % Switch to print timing statistics. 0232 % If selected, the program will print the CPU usage (user time) for 0233 % selected tasks. 0234 % Possible values: 0235 % - no [don't print statistics] 0236 % - yes [print all timing statistics] 0237 % 0238 % option_file_name ("") 0239 % File name of options file (to overwrite default). 0240 % By default, the name of the Ipopt options file is "ipopt.opt" - or 0241 % something else if specified in the IpoptApplication::Initialize call. If 0242 % this option is set by SetStringValue BEFORE the options file is read, it 0243 % specifies the name of the options file. It does not make any sense to 0244 % specify this option within the options file. 0245 % Possible values: 0246 % - * [Any acceptable standard file name] 0247 % 0248 % replace_bounds ("no") 0249 % Indicates if all variable bounds should be replaced by inequality 0250 % constraints 0251 % This option must be set for the inexact algorithm 0252 % Possible values: 0253 % - no [leave bounds on variables] 0254 % - yes [replace variable bounds by inequality 0255 % constraints] 0256 % 0257 % skip_finalize_solution_call ("no") 0258 % Indicates if call to NLP::FinalizeSolution after optimization should be 0259 % suppressed 0260 % In some Ipopt applications, the user might want to call the 0261 % FinalizeSolution method separately. Setting this option to "yes" will 0262 % cause the IpoptApplication object to suppress the default call to that 0263 % method. 0264 % Possible values: 0265 % - no [call FinalizeSolution] 0266 % - yes [do not call FinalizeSolution] 0267 % 0268 % print_info_string ("no") 0269 % Enables printing of additional info string at end of iteration output. 0270 % This string contains some insider information about the current iteration. 
0271 % Possible values: 0272 % - no [don't print string] 0273 % - yes [print string at end of each iteration output] 0274 % 0275 % 0276 % 0277 % ### Convergence ### 0278 % 0279 % tol 0 < ( 1e-08) < +inf 0280 % Desired convergence tolerance (relative). 0281 % Determines the convergence tolerance for the algorithm. The algorithm 0282 % terminates successfully, if the (scaled) NLP error becomes smaller than 0283 % this value, and if the (absolute) criteria according to "dual_inf_tol", 0284 % "primal_inf_tol", and "cmpl_inf_tol" are met. (This is epsilon_tol in 0285 % Eqn. (6) in implementation paper). See also "acceptable_tol" as a second 0286 % termination criterion. Note, some other algorithmic features also use 0287 % this quantity to determine thresholds etc. 0288 % 0289 % s_max 0 < ( 100) < +inf 0290 % Scaling threshold for the NLP error. 0291 % (See paragraph after Eqn. (6) in the implementation paper.) 0292 % 0293 % max_iter 0 <= ( 3000) < +inf 0294 % Maximum number of iterations. 0295 % The algorithm terminates with an error message if the number of 0296 % iterations exceeded this number. 0297 % 0298 % max_cpu_time 0 < ( 1e+06) < +inf 0299 % Maximum number of CPU seconds. 0300 % A limit on CPU seconds that Ipopt can use to solve one problem. If 0301 % during the convergence check this limit is exceeded, Ipopt will terminate 0302 % with a corresponding error message. 0303 % 0304 % dual_inf_tol 0 < ( 1) < +inf 0305 % Desired threshold for the dual infeasibility. 0306 % Absolute tolerance on the dual infeasibility. Successful termination 0307 % requires that the max-norm of the (unscaled) dual infeasibility is less 0308 % than this threshold. 0309 % 0310 % constr_viol_tol 0 < ( 0.0001) < +inf 0311 % Desired threshold for the constraint violation. 0312 % Absolute tolerance on the constraint violation. Successful termination 0313 % requires that the max-norm of the (unscaled) constraint violation is less 0314 % than this threshold. 0315 % 0316 % compl_inf_tol 0 < ( 0.0001) < +inf 0317 % Desired threshold for the complementarity conditions. 0318 % Absolute tolerance on the complementarity. Successful termination 0319 % requires that the max-norm of the (unscaled) complementarity is less than 0320 % this threshold. 0321 % 0322 % acceptable_tol 0 < ( 1e-06) < +inf 0323 % "Acceptable" convergence tolerance (relative). 0324 % Determines which (scaled) overall optimality error is considered to be 0325 % "acceptable." There are two levels of termination criteria. If the usual 0326 % "desired" tolerances (see tol, dual_inf_tol etc) are satisfied at an 0327 % iteration, the algorithm immediately terminates with a success message. 0328 % On the other hand, if the algorithm encounters "acceptable_iter" many 0329 % iterations in a row that are considered "acceptable", it will terminate 0330 % before the desired convergence tolerance is met. This is useful in cases 0331 % where the algorithm might not be able to achieve the "desired" level of 0332 % accuracy. 0333 % 0334 % acceptable_iter 0 <= ( 15) < +inf 0335 % Number of "acceptable" iterates before triggering termination. 0336 % If the algorithm encounters this many successive "acceptable" iterates 0337 % (see "acceptable_tol"), it terminates, assuming that the problem has been 0338 % solved to best possible accuracy given round-off. If it is set to zero, 0339 % this heuristic is disabled. 0340 % 0341 % acceptable_dual_inf_tol 0 < ( 1e+10) < +inf 0342 % "Acceptance" threshold for the dual infeasibility. 
0343 % Absolute tolerance on the dual infeasibility. "Acceptable" termination 0344 % requires that the (max-norm of the unscaled) dual infeasibility is less 0345 % than this threshold; see also acceptable_tol. 0346 % 0347 % acceptable_constr_viol_tol 0 < ( 0.01) < +inf 0348 % "Acceptance" threshold for the constraint violation. 0349 % Absolute tolerance on the constraint violation. "Acceptable" termination 0350 % requires that the max-norm of the (unscaled) constraint violation is less 0351 % than this threshold; see also acceptable_tol. 0352 % 0353 % acceptable_compl_inf_tol 0 < ( 0.01) < +inf 0354 % "Acceptance" threshold for the complementarity conditions. 0355 % Absolute tolerance on the complementarity. "Acceptable" termination 0356 % requires that the max-norm of the (unscaled) complementarity is less than 0357 % this threshold; see also acceptable_tol. 0358 % 0359 % acceptable_obj_change_tol 0 <= ( 1e+20) < +inf 0360 % "Acceptance" stopping criterion based on objective function change. 0361 % If the relative change of the objective function (scaled by 0362 % Max(1,|f(x)|)) is less than this value, this part of the acceptable 0363 % tolerance termination is satisfied; see also acceptable_tol. This is 0364 % useful for the quasi-Newton option, which has trouble to bring down the 0365 % dual infeasibility. 0366 % 0367 % diverging_iterates_tol 0 < ( 1e+20) < +inf 0368 % Threshold for maximal value of primal iterates. 0369 % If any component of the primal iterates exceeded this value (in absolute 0370 % terms), the optimization is aborted with the exit message that the 0371 % iterates seem to be diverging. 0372 % 0373 % 0374 % 0375 % ### NLP Scaling ### 0376 % 0377 % nlp_scaling_method ("gradient-based") 0378 % Select the technique used for scaling the NLP. 0379 % Selects the technique used for scaling the problem internally before it 0380 % is solved. For user-scaling, the parameters come from the NLP. If you are 0381 % using AMPL, they can be specified through suffixes ("scaling_factor") 0382 % Possible values: 0383 % - none [no problem scaling will be performed] 0384 % - user-scaling [scaling parameters will come from the user] 0385 % - gradient-based [scale the problem so the maximum gradient at 0386 % the starting point is scaling_max_gradient] 0387 % - equilibration-based [scale the problem so that first derivatives are 0388 % of order 1 at random points (only available 0389 % with MC19)] 0390 % 0391 % obj_scaling_factor -inf < ( 1) < +inf 0392 % Scaling factor for the objective function. 0393 % This option sets a scaling factor for the objective function. The scaling 0394 % is seen internally by Ipopt but the unscaled objective is reported in the 0395 % console output. If additional scaling parameters are computed (e.g. 0396 % user-scaling or gradient-based), both factors are multiplied. If this 0397 % value is chosen to be negative, Ipopt will maximize the objective 0398 % function instead of minimizing it. 0399 % 0400 % nlp_scaling_max_gradient 0 < ( 100) < +inf 0401 % Maximum gradient after NLP scaling. 0402 % This is the gradient scaling cut-off. If the maximum gradient is above 0403 % this value, then gradient based scaling will be performed. Scaling 0404 % parameters are calculated to scale the maximum gradient back to this 0405 % value. (This is g_max in Section 3.8 of the implementation paper.) Note: 0406 % This option is only used if "nlp_scaling_method" is chosen as 0407 % "gradient-based". 
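The commented-out line in the code above shows the intended way to switch off this internal scaling; the same effect can be achieved per call through the OVERRIDES argument, for example:

    opt = ipopt_options(struct('nlp_scaling_method', 'none'));   %% disable internal NLP scaling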
0408 % 0409 % nlp_scaling_obj_target_gradient 0 <= ( 0) < +inf 0410 % Target value for objective function gradient size. 0411 % If a positive number is chosen, the scaling factor for the objective function 0412 % is computed so that the gradient has the max norm of the given size at 0413 % the starting point. This overrides nlp_scaling_max_gradient for the 0414 % objective function. 0415 % 0416 % nlp_scaling_constr_target_gradient 0 <= ( 0) < +inf 0417 % Target value for constraint function gradient size. 0418 % If a positive number is chosen, the scaling factor for the constraint 0419 % functions is computed so that the gradient has the max norm of the given 0420 % size at the starting point. This overrides nlp_scaling_max_gradient for 0421 % the constraint functions. 0422 % 0423 % 0424 % 0425 % ### NLP ### 0426 % 0427 % nlp_lower_bound_inf -inf < ( -1e+19) < +inf 0428 % any bound less or equal this value will be considered -inf (i.e. not lower 0429 % bounded). 0430 % 0431 % nlp_upper_bound_inf -inf < ( 1e+19) < +inf 0432 % any bound greater or equal this value will be considered +inf (i.e. not upper 0433 % bounded). 0434 % 0435 % fixed_variable_treatment ("make_parameter") 0436 % Determines how fixed variables should be handled. 0437 % The main difference between those options is that the starting point in 0438 % the "make_constraint" case still has the fixed variables at their given 0439 % values, whereas in the case "make_parameter" the functions are always 0440 % evaluated with the fixed values for those variables. Also, for 0441 % "relax_bounds", the fixing bound constraints are relaxed (according to 0442 % "bound_relax_factor"). For both "make_constraint" and "relax_bounds", 0443 % bound multipliers are computed for the fixed variables. 0444 % Possible values: 0445 % - make_parameter [Remove fixed variable from optimization 0446 % variables] 0447 % - make_constraint [Add equality constraints fixing variables] 0448 % - relax_bounds [Relax fixing bound constraints] 0449 % 0450 % dependency_detector ("none") 0451 % Indicates which linear solver should be used to detect linearly dependent 0452 % equality constraints. 0453 % The default and available choices depend on how Ipopt has been compiled. 0454 % This is experimental and does not work well. 0455 % Possible values: 0456 % - none [don't check; no extra work at beginning] 0457 % - mumps [use MUMPS] 0458 % - wsmp [use WSMP] 0459 % - ma28 [use MA28] 0460 % 0461 % dependency_detection_with_rhs ("no") 0462 % Indicates if the right hand sides of the constraints should be considered 0463 % during dependency detection 0464 % Possible values: 0465 % - no [only look at gradients] 0466 % - yes [also consider right hand side] 0467 % 0468 % num_linear_variables 0 <= ( 0) < +inf 0469 % Number of linear variables 0470 % When the Hessian is approximated, it is assumed that the first 0471 % num_linear_variables variables are linear. The Hessian is then not 0472 % approximated in this space. If the get_number_of_nonlinear_variables 0473 % method in the TNLP is implemented, this option is ignored. 0474 % 0475 % kappa_d 0 <= ( 1e-05) < +inf 0476 % Weight for linear damping term (to handle one-sided bounds). 0477 % (see Section 3.7 in implementation paper.) 0478 % 0479 % bound_relax_factor 0 <= ( 1e-08) < +inf 0480 % Factor for initial relaxation of the bounds. 0481 % Before start of the optimization, the bounds given by the user are 0482 % relaxed. This option sets the factor for this relaxation. If it is set 0483 % to zero, then bounds relaxation is disabled.
(See Eqn.(35) in 0484 % implementation paper.) 0485 % 0486 % honor_original_bounds ("yes") 0487 % Indicates whether final points should be projected into original bounds. 0488 % Ipopt might relax the bounds during the optimization (see, e.g., option 0489 % "bound_relax_factor"). This option determines whether the final point 0490 % should be projected back into the user-provide original bounds after the 0491 % optimization. 0492 % Possible values: 0493 % - no [Leave final point unchanged] 0494 % - yes [Project final point back into original bounds] 0495 % 0496 % check_derivatives_for_naninf ("no") 0497 % Indicates whether it is desired to check for Nan/Inf in derivative matrices 0498 % Activating this option will cause an error if an invalid number is 0499 % detected in the constraint Jacobians or the Lagrangian Hessian. If this 0500 % is not activated, the test is skipped, and the algorithm might proceed 0501 % with invalid numbers and fail. 0502 % Possible values: 0503 % - no [Don't check (faster).] 0504 % - yes [Check Jacobians and Hessian for Nan and Inf.] 0505 % 0506 % jac_c_constant ("no") 0507 % Indicates whether all equality constraints are linear 0508 % Activating this option will cause Ipopt to ask for the Jacobian of the 0509 % equality constraints only once from the NLP and reuse this information 0510 % later. 0511 % Possible values: 0512 % - no [Don't assume that all equality constraints are 0513 % linear] 0514 % - yes [Assume that equality constraints Jacobian are 0515 % constant] 0516 % 0517 % jac_d_constant ("no") 0518 % Indicates whether all inequality constraints are linear 0519 % Activating this option will cause Ipopt to ask for the Jacobian of the 0520 % inequality constraints only once from the NLP and reuse this information 0521 % later. 0522 % Possible values: 0523 % - no [Don't assume that all inequality constraints 0524 % are linear] 0525 % - yes [Assume that equality constraints Jacobian are 0526 % constant] 0527 % 0528 % hessian_constant ("no") 0529 % Indicates whether the problem is a quadratic problem 0530 % Activating this option will cause Ipopt to ask for the Hessian of the 0531 % Lagrangian function only once from the NLP and reuse this information 0532 % later. 0533 % Possible values: 0534 % - no [Assume that Hessian changes] 0535 % - yes [Assume that Hessian is constant] 0536 % 0537 % 0538 % 0539 % ### Initialization ### 0540 % 0541 % bound_push 0 < ( 0.01) < +inf 0542 % Desired minimum absolute distance from the initial point to bound. 0543 % Determines how much the initial point might have to be modified in order 0544 % to be sufficiently inside the bounds (together with "bound_frac"). (This 0545 % is kappa_1 in Section 3.6 of implementation paper.) 0546 % 0547 % bound_frac 0 < ( 0.01) <= 0.5 0548 % Desired minimum relative distance from the initial point to bound. 0549 % Determines how much the initial point might have to be modified in order 0550 % to be sufficiently inside the bounds (together with "bound_push"). (This 0551 % is kappa_2 in Section 3.6 of implementation paper.) 0552 % 0553 % slack_bound_push 0 < ( 0.01) < +inf 0554 % Desired minimum absolute distance from the initial slack to bound. 0555 % Determines how much the initial slack variables might have to be modified 0556 % in order to be sufficiently inside the inequality bounds (together with 0557 % "slack_bound_frac"). (This is kappa_1 in Section 3.6 of implementation 0558 % paper.) 
0559 % 0560 % slack_bound_frac 0 < ( 0.01) <= 0.5 0561 % Desired minimum relative distance from the initial slack to bound. 0562 % Determines how much the initial slack variables might have to be modified 0563 % in order to be sufficiently inside the inequality bounds (together with 0564 % "slack_bound_push"). (This is kappa_2 in Section 3.6 of implementation 0565 % paper.) 0566 % 0567 % constr_mult_init_max 0 <= ( 1000) < +inf 0568 % Maximum allowed least-square guess of constraint multipliers. 0569 % Determines how large the initial least-square guesses of the constraint 0570 % multipliers are allowed to be (in max-norm). If the guess is larger than 0571 % this value, it is discarded and all constraint multipliers are set to 0572 % zero. This option is also used when initializing the restoration phase. 0573 % By default, "resto.constr_mult_init_max" (the one used in 0574 % RestoIterateInitializer) is set to zero. 0575 % 0576 % bound_mult_init_val 0 < ( 1) < +inf 0577 % Initial value for the bound multipliers. 0578 % All dual variables corresponding to bound constraints are initialized to 0579 % this value. 0580 % 0581 % bound_mult_init_method ("constant") 0582 % Initialization method for bound multipliers 0583 % This option defines how the iterates for the bound multipliers are 0584 % initialized. If "constant" is chosen, then all bound multipliers are 0585 % initialized to the value of "bound_mult_init_val". If "mu-based" is 0586 % chosen, then each value is initialized to the value of "mu_init" 0587 % divided by the corresponding slack variable. This latter option might be 0588 % useful if the starting point is close to the optimal solution. 0589 % Possible values: 0590 % - constant [set all bound multipliers to the value of 0591 % bound_mult_init_val] 0592 % - mu-based [initialize to mu_init/x_slack] 0593 % 0594 % least_square_init_primal ("no") 0595 % Least square initialization of the primal variables 0596 % If set to yes, Ipopt ignores the user provided point and solves a least 0597 % square problem for the primal variables (x and s), to fit the linearized 0598 % equality and inequality constraints. This might be useful if the user 0599 % doesn't know anything about the starting point, or for solving an LP or 0600 % QP. 0601 % Possible values: 0602 % - no [take user-provided point] 0603 % - yes [overwrite user-provided point with least-square 0604 % estimates] 0605 % 0606 % least_square_init_duals ("no") 0607 % Least square initialization of all dual variables 0608 % If set to yes, Ipopt tries to compute least-square multipliers 0609 % (considering ALL dual variables). If successful, the bound multipliers 0610 % are possibly corrected to be at least bound_mult_init_val. This might be 0611 % useful if the user doesn't know anything about the starting point, or for 0612 % solving an LP or QP. This overwrites option "bound_mult_init_method". 0613 % Possible values: 0614 % - no [use bound_mult_init_val and least-square 0615 % equality constraint multipliers] 0616 % - yes [overwrite user-provided point with least-square 0617 % estimates] 0618 % 0619 % 0620 % 0621 % ### Barrier Parameter Update ### 0622 % 0623 % mu_max_fact 0 < ( 1000) < +inf 0624 % Factor for initialization of maximum value for barrier parameter. 0625 % This option determines the upper bound on the barrier parameter. This 0626 % upper bound is computed as the average complementarity at the initial 0627 % point times the value of this option. (Only used if option "mu_strategy" 0628 % is chosen as "adaptive".)
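Note that the code above overrides IPOPT's default in this section by setting opt.mu_strategy = 'adaptive' (IPOPT's own default is 'monotone'). Reverting to the monotone Fiacco-McCormick update with a specific starting barrier parameter is again just an override; a hypothetical sketch (the value 0.01 is purely illustrative):

    overrides = struct('mu_strategy', 'monotone', 'mu_init', 0.01);
    opt = ipopt_options(overrides, mpopt);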
0629 % 0630 % mu_max 0 < ( 100000) < +inf 0631 % Maximum value for barrier parameter. 0632 % This option specifies an upper bound on the barrier parameter in the 0633 % adaptive mu selection mode. If this option is set, it overwrites the 0634 % effect of mu_max_fact. (Only used if option "mu_strategy" is chosen as 0635 % "adaptive".) 0636 % 0637 % mu_min 0 < ( 1e-11) < +inf 0638 % Minimum value for barrier parameter. 0639 % This option specifies the lower bound on the barrier parameter in the 0640 % adaptive mu selection mode. By default, it is set to the minimum of 1e-11 0641 % and min("tol","compl_inf_tol")/("barrier_tol_factor"+1), which should be 0642 % a reasonable value. (Only used if option "mu_strategy" is chosen as 0643 % "adaptive".) 0644 % 0645 % adaptive_mu_globalization ("obj-constr-filter") 0646 % Globalization strategy for the adaptive mu selection mode. 0647 % To achieve global convergence of the adaptive version, the algorithm has 0648 % to switch to the monotone mode (Fiacco-McCormick approach) when 0649 % convergence does not seem to appear. This option sets the criterion used 0650 % to decide when to do this switch. (Only used if option "mu_strategy" is 0651 % chosen as "adaptive".) 0652 % Possible values: 0653 % - kkt-error [nonmonotone decrease of kkt-error] 0654 % - obj-constr-filter [2-dim filter for objective and constraint 0655 % violation] 0656 % - never-monotone-mode [disables globalization] 0657 % 0658 % adaptive_mu_kkterror_red_iters 0 <= ( 4) < +inf 0659 % Maximum number of iterations requiring sufficient progress. 0660 % For the "kkt-error" based globalization strategy, sufficient progress 0661 % must be made for "adaptive_mu_kkterror_red_iters" iterations. If this 0662 % number of iterations is exceeded, the globalization strategy switches to 0663 % the monotone mode. 0664 % 0665 % adaptive_mu_kkterror_red_fact 0 < ( 0.9999) < 1 0666 % Sufficient decrease factor for "kkt-error" globalization strategy. 0667 % For the "kkt-error" based globalization strategy, the error must decrease 0668 % by this factor to be deemed sufficient decrease. 0669 % 0670 % filter_margin_fact 0 < ( 1e-05) < 1 0671 % Factor determining width of margin for obj-constr-filter adaptive 0672 % globalization strategy. 0673 % When using the adaptive globalization strategy, "obj-constr-filter", 0674 % sufficient progress for a filter entry is defined as follows: (new obj) < 0675 % (filter obj) - filter_margin_fact*(new constr-viol) OR (new constr-viol) 0676 % < (filter constr-viol) - filter_margin_fact*(new constr-viol). For the 0677 % description of the "kkt-error-filter" option see "filter_max_margin". 0678 % 0679 % filter_max_margin 0 < ( 1) < +inf 0680 % Maximum width of margin in obj-constr-filter adaptive globalization 0681 % strategy. 0682 % 0683 % adaptive_mu_restore_previous_iterate("no") 0684 % Indicates if the previous iterate should be restored if the monotone mode 0685 % is entered. 0686 % When the globalization strategy for the adaptive barrier algorithm 0687 % switches to the monotone mode, it can either start from the most recent 0688 % iterate (no), or from the last iterate that was accepted (yes). 0689 % Possible values: 0690 % - no [don't restore accepted iterate] 0691 % - yes [restore accepted iterate] 0692 % 0693 % adaptive_mu_monotone_init_factor 0 < ( 0.8) < +inf 0694 % Determines the initial value of the barrier parameter when switching to the 0695 % monotone mode. 
0696 % When the globalization strategy for the adaptive barrier algorithm 0697 % switches to the monotone mode and fixed_mu_oracle is chosen as 0698 % "average_compl", the barrier parameter is set to the current average 0699 % complementarity times the value of "adaptive_mu_monotone_init_factor". 0700 % 0701 % adaptive_mu_kkt_norm_type ("2-norm-squared") 0702 % Norm used for the KKT error in the adaptive mu globalization strategies. 0703 % When computing the KKT error for the globalization strategies, the norm 0704 % to be used is specified with this option. Note, this options is also used 0705 % in the QualityFunctionMuOracle. 0706 % Possible values: 0707 % - 1-norm [use the 1-norm (abs sum)] 0708 % - 2-norm-squared [use the 2-norm squared (sum of squares)] 0709 % - max-norm [use the infinity norm (max)] 0710 % - 2-norm [use 2-norm] 0711 % 0712 % mu_strategy ("monotone") 0713 % Update strategy for barrier parameter. 0714 % Determines which barrier parameter update strategy is to be used. 0715 % Possible values: 0716 % - monotone [use the monotone (Fiacco-McCormick) strategy] 0717 % - adaptive [use the adaptive update strategy] 0718 % 0719 % mu_oracle ("quality-function") 0720 % Oracle for a new barrier parameter in the adaptive strategy. 0721 % Determines how a new barrier parameter is computed in each "free-mode" 0722 % iteration of the adaptive barrier parameter strategy. (Only considered if 0723 % "adaptive" is selected for option "mu_strategy"). 0724 % Possible values: 0725 % - probing [Mehrotra's probing heuristic] 0726 % - loqo [LOQO's centrality rule] 0727 % - quality-function [minimize a quality function] 0728 % 0729 % fixed_mu_oracle ("average_compl") 0730 % Oracle for the barrier parameter when switching to fixed mode. 0731 % Determines how the first value of the barrier parameter should be 0732 % computed when switching to the "monotone mode" in the adaptive strategy. 0733 % (Only considered if "adaptive" is selected for option "mu_strategy".) 0734 % Possible values: 0735 % - probing [Mehrotra's probing heuristic] 0736 % - loqo [LOQO's centrality rule] 0737 % - quality-function [minimize a quality function] 0738 % - average_compl [base on current average complementarity] 0739 % 0740 % mu_init 0 < ( 0.1) < +inf 0741 % Initial value for the barrier parameter. 0742 % This option determines the initial value for the barrier parameter (mu). 0743 % It is only relevant in the monotone, Fiacco-McCormick version of the 0744 % algorithm. (i.e., if "mu_strategy" is chosen as "monotone") 0745 % 0746 % barrier_tol_factor 0 < ( 10) < +inf 0747 % Factor for mu in barrier stop test. 0748 % The convergence tolerance for each barrier problem in the monotone mode 0749 % is the value of the barrier parameter times "barrier_tol_factor". This 0750 % option is also used in the adaptive mu strategy during the monotone mode. 0751 % (This is kappa_epsilon in implementation paper). 0752 % 0753 % mu_linear_decrease_factor 0 < ( 0.2) < 1 0754 % Determines linear decrease rate of barrier parameter. 0755 % For the Fiacco-McCormick update procedure the new barrier parameter mu is 0756 % obtained by taking the minimum of mu*"mu_linear_decrease_factor" and 0757 % mu^"superlinear_decrease_power". (This is kappa_mu in implementation 0758 % paper.) This option is also used in the adaptive mu strategy during the 0759 % monotone mode. 0760 % 0761 % mu_superlinear_decrease_power 1 < ( 1.5) < 2 0762 % Determines superlinear decrease rate of barrier parameter. 
0763 % For the Fiacco-McCormick update procedure the new barrier parameter mu is 0764 % obtained by taking the minimum of mu*"mu_linear_decrease_factor" and 0765 % mu^"superlinear_decrease_power". (This is theta_mu in implementation 0766 % paper.) This option is also used in the adaptive mu strategy during the 0767 % monotone mode. 0768 % 0769 % mu_allow_fast_monotone_decrease("yes") 0770 % Allow skipping of barrier problem if barrier test is already met. 0771 % If set to "no", the algorithm enforces at least one iteration per barrier 0772 % problem, even if the barrier test is already met for the updated barrier 0773 % parameter. 0774 % Possible values: 0775 % - no [Take at least one iteration per barrier problem] 0776 % - yes [Allow fast decrease of mu if barrier test it met] 0777 % 0778 % tau_min 0 < ( 0.99) < 1 0779 % Lower bound on fraction-to-the-boundary parameter tau. 0780 % (This is tau_min in the implementation paper.) This option is also used 0781 % in the adaptive mu strategy during the monotone mode. 0782 % 0783 % sigma_max 0 < ( 100) < +inf 0784 % Maximum value of the centering parameter. 0785 % This is the upper bound for the centering parameter chosen by the quality 0786 % function based barrier parameter update. (Only used if option "mu_oracle" 0787 % is set to "quality-function".) 0788 % 0789 % sigma_min 0 <= ( 1e-06) < +inf 0790 % Minimum value of the centering parameter. 0791 % This is the lower bound for the centering parameter chosen by the quality 0792 % function based barrier parameter update. (Only used if option "mu_oracle" 0793 % is set to "quality-function".) 0794 % 0795 % quality_function_norm_type ("2-norm-squared") 0796 % Norm used for components of the quality function. 0797 % (Only used if option "mu_oracle" is set to "quality-function".) 0798 % Possible values: 0799 % - 1-norm [use the 1-norm (abs sum)] 0800 % - 2-norm-squared [use the 2-norm squared (sum of squares)] 0801 % - max-norm [use the infinity norm (max)] 0802 % - 2-norm [use 2-norm] 0803 % 0804 % quality_function_centrality ("none") 0805 % The penalty term for centrality that is included in quality function. 0806 % This determines whether a term is added to the quality function to 0807 % penalize deviation from centrality with respect to complementarity. The 0808 % complementarity measure here is the xi in the Loqo update rule. (Only 0809 % used if option "mu_oracle" is set to "quality-function".) 0810 % Possible values: 0811 % - none [no penalty term is added] 0812 % - log [complementarity * the log of the centrality 0813 % measure] 0814 % - reciprocal [complementarity * the reciprocal of the 0815 % centrality measure] 0816 % - cubed-reciprocal [complementarity * the reciprocal of the 0817 % centrality measure cubed] 0818 % 0819 % quality_function_balancing_term("none") 0820 % The balancing term included in the quality function for centrality. 0821 % This determines whether a term is added to the quality function that 0822 % penalizes situations where the complementarity is much smaller than dual 0823 % and primal infeasibilities. (Only used if option "mu_oracle" is set to 0824 % "quality-function".) 0825 % Possible values: 0826 % - none [no balancing term is added] 0827 % - cubic [Max(0,Max(dual_inf,primal_inf)-compl)^3] 0828 % 0829 % quality_function_max_section_steps 0 <= ( 8) < +inf 0830 % Maximum number of search steps during direct search procedure determining 0831 % the optimal centering parameter. 
0832 % The golden section search is performed for the quality function based mu 0833 % oracle. (Only used if option "mu_oracle" is set to "quality-function".) 0834 % 0835 % quality_function_section_sigma_tol 0 <= ( 0.01) < 1 0836 % Tolerance for the section search procedure determining the optimal 0837 % centering parameter (in sigma space). 0838 % The golden section search is performed for the quality function based mu 0839 % oracle. (Only used if option "mu_oracle" is set to "quality-function".) 0840 % 0841 % quality_function_section_qf_tol 0 <= ( 0) < 1 0842 % Tolerance for the golden section search procedure determining the optimal 0843 % centering parameter (in the function value space). 0844 % The golden section search is performed for the quality function based mu 0845 % oracle. (Only used if option "mu_oracle" is set to "quality-function".) 0846 % 0847 % 0848 % 0849 % ### Line Search ### 0850 % 0851 % alpha_red_factor 0 < ( 0.5) < 1 0852 % Fractional reduction of the trial step size in the backtracking line search. 0853 % At every step of the backtracking line search, the trial step size is 0854 % reduced by this factor. 0855 % 0856 % accept_every_trial_step ("no") 0857 % Always accept the first trial step. 0858 % Setting this option to "yes" essentially disables the line search and 0859 % makes the algorithm take aggressive steps, without global convergence 0860 % guarantees. 0861 % Possible values: 0862 % - no [don't arbitrarily accept the full step] 0863 % - yes [always accept the full step] 0864 % 0865 % accept_after_max_steps -1 <= ( -1) < +inf 0866 % Accept a trial point after maximal this number of steps. 0867 % Even if it does not satisfy line search conditions. 0868 % 0869 % alpha_for_y ("primal") 0870 % Method to determine the step size for constraint multipliers. 0871 % This option determines how the step size (alpha_y) will be calculated 0872 % when updating the constraint multipliers. 0873 % Possible values: 0874 % - primal [use primal step size] 0875 % - bound-mult [use step size for the bound multipliers (good 0876 % for LPs)] 0877 % - min [use the min of primal and bound multipliers] 0878 % - max [use the max of primal and bound multipliers] 0879 % - full [take a full step of size one] 0880 % - min-dual-infeas [choose step size minimizing new dual 0881 % infeasibility] 0882 % - safer-min-dual-infeas [like "min_dual_infeas", but safeguarded by 0883 % "min" and "max"] 0884 % - primal-and-full [use the primal step size, and full step if 0885 % delta_x <= alpha_for_y_tol] 0886 % - dual-and-full [use the dual step size, and full step if 0887 % delta_x <= alpha_for_y_tol] 0888 % - acceptor [Call LSAcceptor to get step size for y] 0889 % 0890 % alpha_for_y_tol 0 <= ( 10) < +inf 0891 % Tolerance for switching to full equality multiplier steps. 0892 % This is only relevant if "alpha_for_y" is chosen "primal-and-full" or 0893 % "dual-and-full". The step size for the equality constraint multipliers 0894 % is taken to be one if the max-norm of the primal step is less than this 0895 % tolerance. 0896 % 0897 % tiny_step_tol 0 <= (2.22045e-15) < +inf 0898 % Tolerance for detecting numerically insignificant steps. 0899 % If the search direction in the primal variables (x and s) is, in relative 0900 % terms for each component, less than this value, the algorithm accepts the 0901 % full step without line search. If this happens repeatedly, the algorithm 0902 % will terminate with a corresponding exit message. The default value is 10 0903 % times machine precision. 
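The default quoted here for tiny_step_tol is just 10 times IEEE double-precision machine epsilon, which is easy to confirm in MATLAB:

    fprintf('%g\n', 10*eps)     %% prints 2.22045e-15, matching the default above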
0904 % 0905 % tiny_step_y_tol 0 <= ( 0.01) < +inf 0906 % Tolerance for quitting because of numerically insignificant steps. 0907 % If the search direction in the primal variables (x and s) is, in relative 0908 % terms for each component, repeatedly less than tiny_step_tol, and the 0909 % step in the y variables is smaller than this threshold, the algorithm 0910 % will terminate. 0911 % 0912 % watchdog_shortened_iter_trigger 0 <= ( 10) < +inf 0913 % Number of shortened iterations that trigger the watchdog. 0914 % If the number of successive iterations in which the backtracking line 0915 % search did not accept the first trial point exceeds this number, the 0916 % watchdog procedure is activated. Choosing "0" here disables the watchdog 0917 % procedure. 0918 % 0919 % watchdog_trial_iter_max 1 <= ( 3) < +inf 0920 % Maximum number of watchdog iterations. 0921 % This option determines the number of trial iterations allowed before the 0922 % watchdog procedure is aborted and the algorithm returns to the stored 0923 % point. 0924 % 0925 % theta_max_fact 0 < ( 10000) < +inf 0926 % Determines upper bound for constraint violation in the filter. 0927 % The algorithmic parameter theta_max is determined as theta_max_fact times 0928 % the maximum of 1 and the constraint violation at initial point. Any 0929 % point with a constraint violation larger than theta_max is unacceptable 0930 % to the filter (see Eqn. (21) in the implementation paper). 0931 % 0932 % theta_min_fact 0 < ( 0.0001) < +inf 0933 % Determines constraint violation threshold in the switching rule. 0934 % The algorithmic parameter theta_min is determined as theta_min_fact times 0935 % the maximum of 1 and the constraint violation at initial point. The 0936 % switching rules treats an iteration as an h-type iteration whenever the 0937 % current constraint violation is larger than theta_min (see paragraph 0938 % before Eqn. (19) in the implementation paper). 0939 % 0940 % eta_phi 0 < ( 1e-08) < 0.5 0941 % Relaxation factor in the Armijo condition. 0942 % (See Eqn. (20) in the implementation paper) 0943 % 0944 % delta 0 < ( 1) < +inf 0945 % Multiplier for constraint violation in the switching rule. 0946 % (See Eqn. (19) in the implementation paper.) 0947 % 0948 % s_phi 1 < ( 2.3) < +inf 0949 % Exponent for linear barrier function model in the switching rule. 0950 % (See Eqn. (19) in the implementation paper.) 0951 % 0952 % s_theta 1 < ( 1.1) < +inf 0953 % Exponent for current constraint violation in the switching rule. 0954 % (See Eqn. (19) in the implementation paper.) 0955 % 0956 % gamma_phi 0 < ( 1e-08) < 1 0957 % Relaxation factor in the filter margin for the barrier function. 0958 % (See Eqn. (18a) in the implementation paper.) 0959 % 0960 % gamma_theta 0 < ( 1e-05) < 1 0961 % Relaxation factor in the filter margin for the constraint violation. 0962 % (See Eqn. (18b) in the implementation paper.) 0963 % 0964 % alpha_min_frac 0 < ( 0.05) < 1 0965 % Safety factor for the minimal step size (before switching to restoration 0966 % phase). 0967 % (This is gamma_alpha in Eqn. (20) in the implementation paper.) 0968 % 0969 % max_soc 0 <= ( 4) < +inf 0970 % Maximum number of second order correction trial steps at each iteration. 0971 % Choosing 0 disables the second order corrections. (This is p^{max} of 0972 % Step A-5.9 of Algorithm A in the implementation paper.) 0973 % 0974 % kappa_soc 0 < ( 0.99) < +inf 0975 % Factor in the sufficient reduction rule for second order correction. 
0976 % This option determines how much a second order correction step must 0977 % reduce the constraint violation so that further correction steps are 0978 % attempted. (See Step A-5.9 of Algorithm A in the implementation paper.) 0979 % 0980 % obj_max_inc 1 < ( 5) < +inf 0981 % Determines the upper bound on the acceptable increase of barrier objective 0982 % function. 0983 % Trial points are rejected if they lead to an increase in the barrier 0984 % objective function by more than obj_max_inc orders of magnitude. 0985 % 0986 % max_filter_resets 0 <= ( 5) < +inf 0987 % Maximal allowed number of filter resets 0988 % A positive number enables a heuristic that resets the filter, whenever in 0989 % more than "filter_reset_trigger" successive iterations the last rejected 0990 % trial step size was rejected because of the filter. This option 0991 % determines the maximal number of resets that are allowed to take place. 0992 % 0993 % filter_reset_trigger 1 <= ( 5) < +inf 0994 % Number of iterations that trigger the filter reset. 0995 % If the filter reset heuristic is active and the number of successive 0996 % iterations in which the last rejected trial step size was rejected 0997 % because of the filter exceeds this number, the filter is reset. 0998 % 0999 % corrector_type ("none") 1000 % The type of corrector steps that should be taken (unsupported!). 1001 % If "mu_strategy" is "adaptive", this option determines what kind of 1002 % corrector steps should be tried. 1003 % Possible values: 1004 % - none [no corrector] 1005 % - affine [corrector step towards mu=0] 1006 % - primal-dual [corrector step towards current mu] 1007 % 1008 % skip_corr_if_neg_curv ("yes") 1009 % Skip the corrector step in negative curvature iteration (unsupported!). 1010 % The corrector step is not tried if negative curvature has been 1011 % encountered during the computation of the search direction in the current 1012 % iteration. This option is only used if "mu_strategy" is "adaptive". 1013 % Possible values: 1014 % - no [don't skip] 1015 % - yes [skip] 1016 % 1017 % skip_corr_in_monotone_mode ("yes") 1018 % Skip the corrector step during monotone barrier parameter mode 1019 % (unsupported!). 1020 % The corrector step is not tried if the algorithm is currently in the 1021 % monotone mode (see also option "barrier_strategy"). This option is only 1022 % used if "mu_strategy" is "adaptive". 1023 % Possible values: 1024 % - no [don't skip] 1025 % - yes [skip] 1026 % 1027 % corrector_compl_avrg_red_fact 0 < ( 1) < +inf 1028 % Complementarity tolerance factor for accepting corrector step 1029 % (unsupported!). 1030 % This option determines the factor by which complementarity is allowed to 1031 % increase for a corrector step to be accepted. 1032 % 1033 % nu_init 0 < ( 1e-06) < +inf 1034 % Initial value of the penalty parameter. 1035 % 1036 % nu_inc 0 < ( 0.0001) < +inf 1037 % Increment of the penalty parameter. 1038 % 1039 % rho 0 < ( 0.1) < 1 1040 % Value in penalty parameter update formula. 1041 % 1042 % kappa_sigma 0 < ( 1e+10) < +inf 1043 % Factor limiting the deviation of dual variables from primal estimates. 1044 % If the dual variables deviate from their primal estimates, a correction 1045 % is performed. (See Eqn. (16) in the implementation paper.) Setting the 1046 % value to less than 1 disables the correction. 1047 % 1048 % recalc_y ("no") 1049 % Tells the algorithm to recalculate the equality and inequality multipliers 1050 % as least square estimates.
1051 % This asks the algorithm to recompute the multipliers, whenever the 1052 % current infeasibility is less than recalc_y_feas_tol. Choosing yes might 1053 % be helpful in the quasi-Newton option. However, each recalculation 1054 % requires an extra factorization of the linear system. If a limited 1055 % memory quasi-Newton option is chosen, this is used by default. 1056 % Possible values: 1057 % - no [use the Newton step to update the multipliers] 1058 % - yes [use least-square multiplier estimates] 1059 % 1060 % recalc_y_feas_tol 0 < ( 1e-06) < +inf 1061 % Feasibility threshold for recomputation of multipliers. 1062 % If recalc_y is chosen and the current infeasibility is less than this 1063 % value, then the multipliers are recomputed. 1064 % 1065 % slack_move 0 <= (1.81899e-12) < +inf 1066 % Correction size for very small slacks. 1067 % Due to numerical issues or the lack of an interior, the slack variables 1068 % might become very small. If a slack becomes very small compared to 1069 % machine precision, the corresponding bound is moved slightly. This 1070 % parameter determines how large the move should be. Its default value is 1071 % mach_eps^{3/4}. (See also end of Section 3.5 in implementation paper - 1072 % but actual implementation might be somewhat different.) 1073 % 1074 % 1075 % 1076 % ### Warm Start ### 1077 % 1078 % warm_start_init_point ("no") 1079 % Warm-start for initial point 1080 % Indicates whether this optimization should use a warm start 1081 % initialization, where values of primal and dual variables are given 1082 % (e.g., from a previous optimization of a related problem.) 1083 % Possible values: 1084 % - no [do not use the warm start initialization] 1085 % - yes [use the warm start initialization] 1086 % 1087 % warm_start_same_structure ("no") 1088 % Indicates whether a problem with a structure identical to the previous one 1089 % is to be solved. 1090 % If "yes" is chosen, then the algorithm assumes that an NLP is now to be 1091 % solved, whose structure is identical to one that already was considered 1092 % (with the same NLP object). 1093 % Possible values: 1094 % - no [Assume this is a new problem.] 1095 % - yes [Assume this is problem has known structure] 1096 % 1097 % warm_start_bound_push 0 < ( 0.001) < +inf 1098 % same as bound_push for the regular initializer. 1099 % 1100 % warm_start_bound_frac 0 < ( 0.001) <= 0.5 1101 % same as bound_frac for the regular initializer. 1102 % 1103 % warm_start_slack_bound_push 0 < ( 0.001) < +inf 1104 % same as slack_bound_push for the regular initializer. 1105 % 1106 % warm_start_slack_bound_frac 0 < ( 0.001) <= 0.5 1107 % same as slack_bound_frac for the regular initializer. 1108 % 1109 % warm_start_mult_bound_push 0 < ( 0.001) < +inf 1110 % same as mult_bound_push for the regular initializer. 1111 % 1112 % warm_start_mult_init_max -inf < ( 1e+06) < +inf 1113 % Maximum initial value for the equality multipliers. 1114 % 1115 % warm_start_entire_iterate ("no") 1116 % Tells algorithm whether to use the GetWarmStartIterate method in the NLP. 1117 % Possible values: 1118 % - no [call GetStartingPoint in the NLP] 1119 % - yes [call GetWarmStartIterate in the NLP] 1120 % 1121 % 1122 % 1123 % ### Linear Solver ### 1124 % 1125 % linear_solver ("mumps") 1126 % Linear solver used for step computations. 1127 % Determines which linear algebra package is to be used for the solution of 1128 % the augmented linear system (for obtaining the search directions). 
Note, 1129 % the code must have been compiled with the linear solver you want to 1130 % choose. Depending on your Ipopt installation, not all options are 1131 % available. 1132 % Possible values: 1133 % - ma27 [use the Harwell routine MA27] 1134 % - ma57 [use the Harwell routine MA57] 1135 % - pardiso [use the Pardiso package] 1136 % - wsmp [use WSMP package] 1137 % - mumps [use MUMPS package] 1138 % - custom [use custom linear solver] 1139 % 1140 % linear_system_scaling ("none") 1141 % Method for scaling the linear system. 1142 % Determines the method used to compute symmetric scaling factors for the 1143 % augmented system (see also the "linear_scaling_on_demand" option). This 1144 % scaling is independent of the NLP problem scaling. By default, MC19 is 1145 % only used if MA27 or MA57 are selected as linear solvers. This option is 1146 % only available if Ipopt has been compiled with MC19. 1147 % Possible values: 1148 % - none [no scaling will be performed] 1149 % - mc19 [use the Harwell routine MC19] 1150 % 1151 % linear_scaling_on_demand ("yes") 1152 % Flag indicating that linear scaling is only done if it seems required. 1153 % This option is only important if a linear scaling method (e.g., mc19) is 1154 % used. If you choose "no", then the scaling factors are computed for 1155 % every linear system from the start. This can be quite expensive. 1156 % Choosing "yes" means that the algorithm will start the scaling method 1157 % only when the solutions to the linear system seem not good, and then use 1158 % it until the end. 1159 % Possible values: 1160 % - no [Always scale the linear system.] 1161 % - yes [Start using linear system scaling if solutions 1162 % seem not good.] 1163 % 1164 % 1165 % 1166 % ### Step Calculation ### 1167 % 1168 % mehrotra_algorithm ("no") 1169 % Indicates if we want to do Mehrotra's algorithm. 1170 % If set to yes, Ipopt runs as Mehrotra's predictor-corrector algorithm. 1171 % This works usually very well for LPs and convex QPs. This automatically 1172 % disables the line search, and chooses the (unglobalized) adaptive mu 1173 % strategy with the "probing" oracle, and uses "corrector_type=affine" 1174 % without any safeguards; you should not set any of those options 1175 % explicitly in addition. Also, unless otherwise specified, the values of 1176 % "bound_push", "bound_frac", and "bound_mult_init_val" are set more 1177 % aggressive, and sets "alpha_for_y=bound_mult". 1178 % Possible values: 1179 % - no [Do the usual Ipopt algorithm.] 1180 % - yes [Do Mehrotra's predictor-corrector algorithm.] 1181 % 1182 % fast_step_computation ("no") 1183 % Indicates if the linear system should be solved quickly. 1184 % If set to yes, the algorithm assumes that the linear system that is 1185 % solved to obtain the search direction, is solved sufficiently well. In 1186 % that case, no residuals are computed, and the computation of the search 1187 % direction is a little faster. 1188 % Possible values: 1189 % - no [Verify solution of linear system by computing 1190 % residuals.] 1191 % - yes [Trust that linear systems are solved well.] 1192 % 1193 % min_refinement_steps 0 <= ( 1) < +inf 1194 % Minimum number of iterative refinement steps per linear system solve. 1195 % Iterative refinement (on the full unsymmetric system) is performed for 1196 % each right hand side. This option determines the minimum number of 1197 % iterative refinements (i.e. at least "min_refinement_steps" iterative 1198 % refinement steps are enforced per right hand side.) 
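Options like the linear solver choice and the refinement limits in this section are natural candidates for the IPOPT_OPT user option file mechanism described in the help text at the top of this file. A hypothetical ipopt_user_options_7.m (invoked when MPOPT(60) = 7) might contain:

    function opt = ipopt_user_options_7(opt, mpopt)
    opt.linear_solver = 'ma57';          %% only if your IPOPT build includes MA57
    opt.min_refinement_steps = 2;        %% require at least two refinement steps per solve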
1164 %
1165 %
1166 % ### Step Calculation ###
1167 %
1168 % mehrotra_algorithm ("no")
1169 % Indicates whether to use Mehrotra's algorithm.
1170 % If set to yes, Ipopt runs as Mehrotra's predictor-corrector algorithm.
1171 % This usually works very well for LPs and convex QPs. This automatically
1172 % disables the line search, and chooses the (unglobalized) adaptive mu
1173 % strategy with the "probing" oracle, and uses "corrector_type=affine"
1174 % without any safeguards; you should not set any of those options
1175 % explicitly in addition. Also, unless otherwise specified, the values of
1176 % "bound_push", "bound_frac", and "bound_mult_init_val" are set more
1177 % aggressively, and "alpha_for_y=bound_mult" is used.
1178 % Possible values:
1179 % - no [Do the usual Ipopt algorithm.]
1180 % - yes [Do Mehrotra's predictor-corrector algorithm.]
1181 %
1182 % fast_step_computation ("no")
1183 % Indicates if the linear system should be solved quickly.
1184 % If set to yes, the algorithm assumes that the linear system that is
1185 % solved to obtain the search direction is solved sufficiently well. In
1186 % that case, no residuals are computed, and the computation of the search
1187 % direction is a little faster.
1188 % Possible values:
1189 % - no [Verify solution of linear system by computing
1190 % residuals.]
1191 % - yes [Trust that linear systems are solved well.]
1192 %
1193 % min_refinement_steps 0 <= ( 1) < +inf
1194 % Minimum number of iterative refinement steps per linear system solve.
1195 % Iterative refinement (on the full unsymmetric system) is performed for
1196 % each right hand side. This option determines the minimum number of
1197 % iterative refinements (i.e. at least "min_refinement_steps" iterative
1198 % refinement steps are enforced per right hand side.)
1199 %
1200 % max_refinement_steps 0 <= ( 10) < +inf
1201 % Maximum number of iterative refinement steps per linear system solve.
1202 % Iterative refinement (on the full unsymmetric system) is performed for
1203 % each right hand side. This option determines the maximum number of
1204 % iterative refinement steps.
1205 %
1206 % residual_ratio_max 0 < ( 1e-10) < +inf
1207 % Iterative refinement tolerance
1208 % Iterative refinement is performed until the residual test ratio is less
1209 % than this tolerance (or until "max_refinement_steps" refinement steps are
1210 % performed).
1211 %
1212 % residual_ratio_singular 0 < ( 1e-05) < +inf
1213 % Threshold for declaring linear system singular after failed iterative
1214 % refinement.
1215 % If the residual test ratio is larger than this value after failed
1216 % iterative refinement, the algorithm pretends that the linear system is
1217 % singular.
1218 %
1219 % residual_improvement_factor 0 < ( 1) < +inf
1220 % Minimal required reduction of residual test ratio in iterative refinement.
1221 % If the improvement of the residual test ratio made by one iterative
1222 % refinement step is not better than this factor, iterative refinement is
1223 % aborted.
1224 %
1225 % neg_curv_test_tol 0 < ( 0) < +inf
1226 % Tolerance for heuristic to ignore wrong inertia.
1227 % If positive, incorrect inertia in the augmented system is ignored, and we
1228 % test if the direction is a direction of positive curvature. This
1229 % tolerance determines when the direction is considered to be sufficiently
1230 % positive.
1231 %
1232 % max_hessian_perturbation 0 < ( 1e+20) < +inf
1233 % Maximum value of regularization parameter for handling negative curvature.
1234 % In order to guarantee that the search directions are indeed proper
1235 % descent directions, Ipopt requires that the inertia of the (augmented)
1236 % linear system for the step computation has the correct number of negative
1237 % and positive eigenvalues. The idea is that this guides the algorithm away
1238 % from maximizers and makes Ipopt more likely to converge to first order
1239 % optimal points that are minimizers. If the inertia is not correct, a
1240 % multiple of the identity matrix is added to the Hessian of the Lagrangian
1241 % in the augmented system. This parameter gives the maximum value of the
1242 % regularization parameter. If a regularization of that size is not enough,
1243 % the algorithm skips this iteration and goes to the restoration phase.
1244 % (This is delta_w^max in the implementation paper.)
1245 %
1246 % min_hessian_perturbation 0 <= ( 1e-20) < +inf
1247 % Smallest perturbation of the Hessian block.
1248 % The size of the perturbation of the Hessian block is never selected
1249 % smaller than this value, unless no perturbation is necessary. (This is
1250 % delta_w^min in the implementation paper.)
1251 %
1252 % perturb_inc_fact_first 1 < ( 100) < +inf
1253 % Increase factor for x-s perturbation for very first perturbation.
1254 % The factor by which the perturbation is increased when a trial value was
1255 % not sufficient - this value is used for the computation of the very first
1256 % perturbation and allows a different value for the first perturbation
1257 % than that used for the remaining perturbations. (This is bar_kappa_w^+ in
1258 % the implementation paper.)
1259 %
1260 % perturb_inc_fact 1 < ( 8) < +inf
1261 % Increase factor for x-s perturbation.
1262 % The factor by which the perturbation is increased when a trial value was
1263 % not sufficient - this value is used for the computation of all
1264 % perturbations except for the first. (This is kappa_w^+ in the
1265 % implementation paper.)
1266 %
1267 % perturb_dec_fact 0 < ( 0.333333) < 1
1268 % Decrease factor for x-s perturbation.
1269 % The factor by which the perturbation is decreased when a trial value is
1270 % deduced from the size of the most recent successful perturbation. (This
1271 % is kappa_w^- in the implementation paper.)
1272 %
1273 % first_hessian_perturbation 0 < ( 0.0001) < +inf
1274 % Size of first x-s perturbation tried.
1275 % The first value tried for the x-s perturbation in the inertia correction
1276 % scheme. (This is delta_0 in the implementation paper.)
1277 %
1278 % jacobian_regularization_value 0 <= ( 1e-08) < +inf
1279 % Size of the regularization for rank-deficient constraint Jacobians.
1280 % (This is bar delta_c in the implementation paper.)
1281 %
1282 % jacobian_regularization_exponent 0 <= ( 0.25) < +inf
1283 % Exponent for mu in the regularization for rank-deficient constraint
1284 % Jacobians.
1285 % (This is kappa_c in the implementation paper.)
1286 %
1287 % perturb_always_cd ("no")
1288 % Activate permanent perturbation of constraint linearization.
1289 % This option makes the delta_c and delta_d perturbation be used for the
1290 % computation of every search direction. Usually, it is only used when the
1291 % iteration matrix is singular.
1292 % Possible values:
1293 % - no [perturbation only used when required]
1294 % - yes [always use perturbation]
1295 %
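% As a sketch (illustrative values only), the step computation can be made
% slightly cheaper by skipping the residual check on the linear solves:
%
%       overrides = struct( ...
%           'fast_step_computation', 'yes', ...
%           'min_refinement_steps',  0);
%       opt = ipopt_options(overrides, mpopt);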
1296 %
1297 %
1298 % ### Restoration Phase ###
1299 %
1300 % expect_infeasible_problem ("no")
1301 % Enable heuristics to quickly detect an infeasible problem.
1302 % This option is meant to activate heuristics that may speed up the
1303 % infeasibility determination if you expect that there is a good chance for
1304 % the problem to be infeasible. In the filter line search procedure, the
1305 % restoration phase is called more quickly than usual, and more reduction
1306 % in the constraint violation is enforced before the restoration phase is
1307 % left. If the problem is square, this option is enabled automatically.
1308 % Possible values:
1309 % - no [the problem is probably feasible]
1310 % - yes [the problem has a good chance to be infeasible]
1311 %
1312 % expect_infeasible_problem_ctol 0 <= ( 0.001) < +inf
1313 % Threshold for disabling "expect_infeasible_problem" option.
1314 % If the constraint violation becomes smaller than this threshold, the
1315 % "expect_infeasible_problem" heuristics in the filter line search are
1316 % disabled. If the problem is square, this option is set to 0.
1317 %
1318 % expect_infeasible_problem_ytol 0 < ( 1e+08) < +inf
1319 % Multiplier threshold for activating "expect_infeasible_problem" option.
1320 % If the max norm of the constraint multipliers becomes larger than this
1321 % value and "expect_infeasible_problem" is chosen, then the restoration
1322 % phase is entered.
1323 %
1324 % start_with_resto ("no")
1325 % Tells algorithm to switch to restoration phase in first iteration.
1326 % Setting this option to "yes" forces the algorithm to switch to the
1327 % feasibility restoration phase in the first iteration. If the initial
1328 % point is feasible, the algorithm will abort with a failure.
1329 % Possible values:
1330 % - no [don't force start in restoration phase]
1331 % - yes [force start in restoration phase]
1332 %
1333 % soft_resto_pderror_reduction_factor 0 <= ( 0.9999) < +inf
1334 % Required reduction in primal-dual error in the soft restoration phase.
1335 % The soft restoration phase attempts to reduce the primal-dual error with
1336 % regular steps. If the damped primal-dual step (damped only to satisfy the
1337 % fraction-to-the-boundary rule) is not decreasing the primal-dual error by
1338 % at least this factor, then the regular restoration phase is called.
1339 % Choosing "0" here disables the soft restoration phase.
1340 %
1341 % max_soft_resto_iters 0 <= ( 10) < +inf
1342 % Maximum number of iterations performed successively in soft restoration
1343 % phase.
1344 % If the soft restoration phase is performed for more than this number of
1345 % iterations in a row, the regular restoration phase is called.
1346 %
1347 % required_infeasibility_reduction 0 <= ( 0.9) < 1
1348 % Required reduction of infeasibility before leaving restoration phase.
1349 % The restoration phase algorithm is performed until a point is found that
1350 % is acceptable to the filter and the infeasibility has been reduced by at
1351 % least the fraction given by this option.
1352 %
1353 % max_resto_iter 0 <= ( 3000000) < +inf
1354 % Maximum number of successive iterations in restoration phase.
1355 % The algorithm terminates with an error message if the number of
1356 % iterations successively taken in the restoration phase exceeds this
1357 % number.
1358 %
1359 % evaluate_orig_obj_at_resto_trial ("yes")
1360 % Determines if the original objective function should be evaluated at
1361 % restoration phase trial points.
1362 % Setting this option to "yes" makes the restoration phase algorithm
1363 % evaluate the objective function of the original problem at every trial
1364 % point encountered during the restoration phase, even if this value is not
1365 % required. In this way, it is guaranteed that the original objective
1366 % function can be evaluated without error at all accepted iterates;
1367 % otherwise the algorithm might fail at a point where the restoration phase
1368 % accepts an iterate that is good for the restoration phase problem, but
1369 % not the original problem. On the other hand, if the evaluation of the
1370 % original objective is expensive, this might be costly.
1371 % Possible values:
1372 % - no [skip evaluation]
1373 % - yes [evaluate at every trial point]
1374 %
1375 % resto_penalty_parameter 0 < ( 1000) < +inf
1376 % Penalty parameter in the restoration phase objective function.
1377 % This is the parameter rho in equation (31a) in the Ipopt implementation
1378 % paper.
1379 %
1380 % bound_mult_reset_threshold 0 <= ( 1000) < +inf
1381 % Threshold for resetting bound multipliers after the restoration phase.
1382 % After returning from the restoration phase, the bound multipliers are
1383 % updated with a Newton step for complementarity. Here, the change in the
1384 % primal variables during the entire restoration phase is taken to be the
1385 % corresponding primal Newton step. However, if after the update the
1386 % largest bound multiplier exceeds the threshold specified by this option,
1387 % the multipliers are all reset to 1.
1388 %
1389 % constr_mult_reset_threshold 0 <= ( 0) < +inf
1390 % Threshold for resetting equality and inequality multipliers after
1391 % restoration phase.
1392 % After returning from the restoration phase, the constraint multipliers
1393 % are recomputed by a least square estimate. This option determines when
1394 % those least-square estimates should be ignored.
1395 %
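% For instance, when a case is suspected to be infeasible, the detection
% heuristics above can be enabled via the overrides struct (a sketch only):
%
%       overrides = struct('expect_infeasible_problem', 'yes');
%       opt = ipopt_options(overrides, mpopt);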
1396 %
1397 %
1398 % ### Derivative Checker ###
1399 %
1400 % derivative_test ("none")
1401 % Enable derivative checker
1402 % If this option is enabled, a (slow!) derivative test will be performed
1403 % before the optimization. The test is performed at the user provided
1404 % starting point and marks derivative values that seem suspicious.
1405 % Possible values:
1406 % - none [do not perform derivative test]
1407 % - first-order [perform test of first derivatives at starting
1408 % point]
1409 % - second-order [perform test of first and second derivatives at
1410 % starting point]
1411 % - only-second-order [perform test of second derivatives at starting
1412 % point]
1413 %
1414 % derivative_test_first_index -2 <= ( -2) < +inf
1415 % Index of first quantity to be checked by derivative checker
1416 % If this is set to -2, then all derivatives are checked. Otherwise, for
1417 % the first derivative test it specifies the first variable for which the
1418 % test is done (counting starts at 0). For second derivatives, it
1419 % specifies the first constraint for which the test is done; counting of
1420 % constraint indices starts at 0, and -1 refers to the objective function
1421 % Hessian.
1422 %
1423 % derivative_test_perturbation 0 < ( 1e-08) < +inf
1424 % Size of the finite difference perturbation in derivative test.
1425 % This determines the relative perturbation of the variable entries.
1426 %
1427 % derivative_test_tol 0 < ( 0.0001) < +inf
1428 % Threshold for indicating wrong derivative.
1429 % If the relative deviation of the estimated derivative from the given one
1430 % is larger than this value, the corresponding derivative is marked as
1431 % wrong.
1432 %
1433 % derivative_test_print_all ("no")
1434 % Indicates whether information for all estimated derivatives should be
1435 % printed.
1436 % Determines verbosity of derivative checker.
1437 % Possible values:
1438 % - no [Print only suspect derivatives]
1439 % - yes [Print all derivatives]
1440 %
1441 % jacobian_approximation ("exact")
1442 % Specifies technique to compute constraint Jacobian
1443 % Possible values:
1444 % - exact [user-provided derivatives]
1445 % - finite-difference-values [user-provided structure, values by finite
1446 % differences]
1447 %
1448 % findiff_perturbation 0 < ( 1e-07) < +inf
1449 % Size of the finite difference perturbation for derivative approximation.
1450 % This determines the relative perturbation of the variable entries.
1451 %
1452 % point_perturbation_radius 0 <= ( 10) < +inf
1453 % Maximal perturbation of an evaluation point.
1454 % If a random perturbation of a point is required, this number indicates
1455 % the maximal perturbation. This is for example used when determining the
1456 % center point at which the finite difference derivative test is executed.
1457 %
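% For example, to check user-supplied first derivatives while debugging
% (a sketch; this makes the run considerably slower):
%
%       overrides = struct( ...
%           'derivative_test',     'first-order', ...
%           'derivative_test_tol', 1e-4);
%       opt = ipopt_options(overrides, mpopt);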
1458 %
1459 %
1460 % ### Hessian Approximation ###
1461 %
1462 % limited_memory_max_history 0 <= ( 6) < +inf
1463 % Maximum size of the history for the limited quasi-Newton Hessian
1464 % approximation.
1465 % This option determines the number of most recent iterations that are
1466 % taken into account for the limited-memory quasi-Newton approximation.
1467 %
1468 % limited_memory_update_type ("bfgs")
1469 % Quasi-Newton update formula for the limited memory approximation.
1470 % Determines which update formula is to be used for the limited-memory
1471 % quasi-Newton approximation.
1472 % Possible values:
1473 % - bfgs [BFGS update (with skipping)]
1474 % - sr1 [SR1 (not working well)]
1475 %
1476 % limited_memory_initialization ("scalar1")
1477 % Initialization strategy for the limited memory quasi-Newton approximation.
1478 % Determines how the diagonal matrix B_0, the first term in the limited
1479 % memory approximation, should be computed.
1480 % Possible values:
1481 % - scalar1 [sigma = s^Ty/s^Ts]
1482 % - scalar2 [sigma = y^Ty/s^Ty]
1483 % - constant [sigma = limited_memory_init_val]
1484 %
1485 % limited_memory_init_val 0 < ( 1) < +inf
1486 % Value for B0 in low-rank update.
1487 % The starting matrix in the low rank update, B0, is chosen to be this
1488 % multiple of the identity in the first iteration (when no updates have
1489 % been performed yet), and is constantly chosen as this value, if
1490 % "limited_memory_initialization" is "constant".
1491 %
1492 % limited_memory_init_val_max 0 < ( 1e+08) < +inf
1493 % Upper bound on value for B0 in low-rank update.
1494 % The starting matrix in the low rank update, B0, is chosen to be this
1495 % multiple of the identity in the first iteration (when no updates have
1496 % been performed yet), and is constantly chosen as this value, if
1497 % "limited_memory_initialization" is "constant".
1498 %
1499 % limited_memory_init_val_min 0 < ( 1e-08) < +inf
1500 % Lower bound on value for B0 in low-rank update.
1501 % The starting matrix in the low rank update, B0, is chosen to be this
1502 % multiple of the identity in the first iteration (when no updates have
1503 % been performed yet), and is constantly chosen as this value, if
1504 % "limited_memory_initialization" is "constant".
1505 %
1506 % limited_memory_max_skipping 1 <= ( 2) < +inf
1507 % Threshold for successive iterations where update is skipped.
1508 % If the update is skipped more than this number of successive iterations,
1509 % the quasi-Newton approximation is reset.
1510 %
1511 % hessian_approximation ("exact")
1512 % Indicates what Hessian information is to be used.
1513 % This determines which kind of information for the Hessian of the
1514 % Lagrangian function is used by the algorithm.
1515 % Possible values:
1516 % - exact [Use second derivatives provided by the NLP.]
1517 % - limited-memory [Perform a limited-memory quasi-Newton
1518 % approximation]
1519 %
1520 % hessian_approximation_space ("nonlinear-variables")
1521 % Indicates in which subspace the Hessian information is to be approximated.
1522 % Possible values:
1523 % - nonlinear-variables [only in space of nonlinear variables.]
1524 % - all-variables [in space of all variables (without slacks)]
1525 %
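% As a sketch, to run with a limited-memory quasi-Newton Hessian instead of
% exact second derivatives (illustrative values only):
%
%       overrides = struct( ...
%           'hessian_approximation',      'limited-memory', ...
%           'limited_memory_max_history', 10);
%       opt = ipopt_options(overrides, mpopt);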
1526 %
1527 %
1528 % ### MA27 Linear Solver ###
1529 %
1530 % ma27_pivtol 0 < ( 1e-08) < 1
1531 % Pivot tolerance for the linear solver MA27.
1532 % A smaller number pivots for sparsity, a larger number pivots for
1533 % stability. This option is only available if Ipopt has been compiled with
1534 % MA27.
1535 %
1536 % ma27_pivtolmax 0 < ( 0.0001) < 1
1537 % Maximum pivot tolerance for the linear solver MA27.
1538 % Ipopt may increase pivtol as high as pivtolmax to get a more accurate
1539 % solution to the linear system. This option is only available if Ipopt
1540 % has been compiled with MA27.
1541 %
1542 % ma27_liw_init_factor 1 <= ( 5) < +inf
1543 % Integer workspace memory for MA27.
1544 % The initial integer workspace memory = liw_init_factor * memory required
1545 % by unfactored system. Ipopt will increase the workspace size by
1546 % meminc_factor if required. This option is only available if Ipopt has
1547 % been compiled with MA27.
1548 %
1549 % ma27_la_init_factor 1 <= ( 5) < +inf
1550 % Real workspace memory for MA27.
1551 % The initial real workspace memory = la_init_factor * memory required by
1552 % unfactored system. Ipopt will increase the workspace size by
1553 % meminc_factor if required. This option is only available if Ipopt has
1554 % been compiled with MA27.
1555 %
1556 % ma27_meminc_factor 1 <= ( 10) < +inf
1557 % Increment factor for workspace size for MA27.
1558 % If the integer or real workspace is not large enough, Ipopt will increase
1559 % its size by this factor. This option is only available if Ipopt has been
1560 % compiled with MA27.
1561 %
1562 % ma27_skip_inertia_check ("no")
1563 % Always pretend inertia is correct.
1564 % Setting this option to "yes" essentially disables the inertia check. This
1565 % option makes the algorithm non-robust and easily fail, but it might give
1566 % some insight into the necessity of inertia control.
1567 % Possible values:
1568 % - no [check inertia]
1569 % - yes [skip inertia check]
1570 %
1571 % ma27_ignore_singularity ("no")
1572 % Enables MA27's ability to solve a linear system even if the matrix is
1573 % singular.
1574 % Setting this option to "yes" means that Ipopt will call MA27 to compute
1575 % solutions for right hand sides, even if MA27 has detected that the matrix
1576 % is singular (but is still able to solve the linear system). In some cases
1577 % this might be better than using Ipopt's heuristic of small perturbation
1578 % of the lower diagonal of the KKT matrix.
1579 % Possible values:
1580 % - no [Don't have MA27 solve singular systems]
1581 % - yes [Have MA27 solve singular systems]
1582 %
1583 %
1584 %
1585 % ### MA57 Linear Solver ###
1586 %
1587 % ma57_pivtol 0 < ( 1e-08) < 1
1588 % Pivot tolerance for the linear solver MA57.
1589 % A smaller number pivots for sparsity, a larger number pivots for
1590 % stability. This option is only available if Ipopt has been compiled with
1591 % MA57.
1592 %
1593 % ma57_pivtolmax 0 < ( 0.0001) < 1
1594 % Maximum pivot tolerance for the linear solver MA57.
1595 % Ipopt may increase pivtol as high as ma57_pivtolmax to get a more
1596 % accurate solution to the linear system. This option is only available if
1597 % Ipopt has been compiled with MA57.
1598 %
1599 % ma57_pre_alloc 1 <= ( 3) < +inf
1600 % Safety factor for work space memory allocation for the linear solver MA57.
1601 % If 1 is chosen, the suggested amount of work space is used. However,
1602 % choosing a larger number might avoid reallocation if the suggested values
1603 % do not suffice. This option is only available if Ipopt has been compiled
1604 % with MA57.
1605 %
1606 % ma57_pivot_order 0 <= ( 5) <= 5
1607 % Controls pivot order in MA57
1608 % This is ICNTL(6) in MA57.
1609 %
1610 %
1611 %
1612 % ### Pardiso Linear Solver ###
1613 %
1614 % pardiso_matching_strategy ("complete+2x2")
1615 % Matching strategy to be used by Pardiso
1616 % This is IPAR(13) in the Pardiso manual. This option is only available if
1617 % Ipopt has been compiled with Pardiso.
1618 % Possible values:
1619 % - complete [Match complete (IPAR(13)=1)]
1620 % - complete+2x2 [Match complete+2x2 (IPAR(13)=2)]
1621 % - constraints [Match constraints (IPAR(13)=3)]
1622 %
1623 % pardiso_redo_symbolic_fact_only_if_inertia_wrong("no")
1624 % Toggle for handling case when elements were perturbed by Pardiso.
1625 % This option is only available if Ipopt has been compiled with Pardiso.
1626 % Possible values:
1627 % - no [Always redo symbolic factorization when
1628 % elements were perturbed]
1629 % - yes [Only redo symbolic factorization when elements
1630 % were perturbed if also the inertia was wrong]
1631 %
1632 % pardiso_repeated_perturbation_means_singular("no")
1633 % Interpretation of perturbed elements.
1634 % This option is only available if Ipopt has been compiled with Pardiso.
1635 % Possible values:
1636 % - no [Don't assume that matrix is singular if
1637 % elements were perturbed after recent symbolic
1638 % factorization]
1639 % - yes [Assume that matrix is singular if elements were
1640 % perturbed after recent symbolic factorization]
1641 %
1642 % pardiso_out_of_core_power 0 <= ( 0) < +inf
1643 % Enables out-of-core variant of Pardiso
1644 % Setting this option to a positive integer k makes Pardiso work in the
1645 % out-of-core variant where the factor is split in 2^k subdomains. This is
1646 % IPARM(50) in the Pardiso manual. This option is only available if Ipopt
1647 % has been compiled with Pardiso.
1648 %
1649 % pardiso_msglvl 0 <= ( 0) < +inf
1650 % Pardiso message level
1651 % This determines the amount of analysis output from the Pardiso solver.
1652 % This is MSGLVL in the Pardiso manual.
1653 %
1654 % pardiso_skip_inertia_check ("no")
1655 % Always pretend inertia is correct.
1656 % Setting this option to "yes" essentially disables the inertia check. This
1657 % option makes the algorithm non-robust and easily fail, but it might give
1658 % some insight into the necessity of inertia control.
1659 % Possible values:
1660 % - no [check inertia]
1661 % - yes [skip inertia check]
1662 %
1663 % pardiso_max_iter 1 <= ( 500) < +inf
1664 % Maximum number of Krylov-Subspace Iterations
1665 % DPARM(1)
1666 %
1667 % pardiso_iter_relative_tol 0 < ( 1e-06) < 1
1668 % Relative Residual Convergence
1669 % DPARM(2)
1670 %
1671 % pardiso_iter_coarse_size 1 <= ( 5000) < +inf
1672 % Maximum Size of Coarse Grid Matrix
1673 % DPARM(3)
1674 %
1675 % pardiso_iter_max_levels 1 <= ( 10000) < +inf
1676 % Maximum Size of Grid Levels
1677 % DPARM(4)
1678 %
1679 % pardiso_iter_dropping_factor 0 < ( 0.5) < 1
1680 % dropping value for incomplete factor
1681 % DPARM(5)
1682 %
1683 % pardiso_iter_dropping_schur 0 < ( 0.1) < 1
1684 % dropping value for sparsifying the Schur complement factor
1685 % DPARM(6)
1686 %
1687 % pardiso_iter_max_row_fill 1 <= ( 10000000) < +inf
1688 % max fill for each row
1689 % DPARM(7)
1690 %
1691 % pardiso_iter_inverse_norm_factor 1 < ( 5e+06) < +inf
1692 %
1693 % DPARM(8)
1694 %
1695 % pardiso_iterative ("no")
1696 % Switch on iterative solver in Pardiso library
1697 % Possible values:
1698 % - no []
1699 % - yes []
1700 %
1701 % pardiso_max_droptol_corrections 1 <= ( 4) < +inf
1702 % Maximal number of decreases of drop tolerance during one solve.
1703 % This is relevant only for iterative Pardiso options.
1704 %
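% As a sketch, if the Ipopt build includes Pardiso, it can be selected with
% its iterative solver enabled (illustrative only):
%
%       overrides = struct( ...
%           'linear_solver',     'pardiso', ...
%           'pardiso_iterative', 'yes');
%       opt = ipopt_options(overrides, mpopt);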
1705 %
1706 %
1707 % ### Mumps Linear Solver ###
1708 %
1709 % mumps_pivtol 0 <= ( 1e-06) <= 1
1710 % Pivot tolerance for the linear solver MUMPS.
1711 % A smaller number pivots for sparsity, a larger number pivots for
1712 % stability. This option is only available if Ipopt has been compiled with
1713 % MUMPS.
1714 %
1715 % mumps_pivtolmax 0 <= ( 0.1) <= 1
1716 % Maximum pivot tolerance for the linear solver MUMPS.
1717 % Ipopt may increase pivtol as high as pivtolmax to get a more accurate
1718 % solution to the linear system. This option is only available if Ipopt
1719 % has been compiled with MUMPS.
1720 %
1721 % mumps_mem_percent 0 <= ( 1000) < +inf
1722 % Percentage increase in the estimated working space for MUMPS.
1723 % In MUMPS, when significant extra fill-in is caused by numerical pivoting,
1724 % larger values of mumps_mem_percent may help use the workspace more
1725 % efficiently. On the other hand, if memory requirements are too large at
1726 % the very beginning of the optimization, choosing a much smaller value for
1727 % this option, such as 5, might reduce memory requirements.
1728 %
1729 % mumps_permuting_scaling 0 <= ( 7) <= 7
1730 % Controls permuting and scaling in MUMPS
1731 % This is ICNTL(6) in MUMPS.
1732 %
1733 % mumps_pivot_order 0 <= ( 7) <= 7
1734 % Controls pivot order in MUMPS
1735 % This is ICNTL(7) in MUMPS.
1736 %
1737 % mumps_scaling -2 <= ( 77) <= 77
1738 % Controls scaling in MUMPS
1739 % This is ICNTL(8) in MUMPS.
1740 %
1741 % mumps_dep_tol -inf < ( -1) < +inf
1742 % Pivot threshold for detection of linearly dependent constraints in MUMPS.
1743 % When MUMPS is used to determine linearly dependent constraints, this
1744 % determines the threshold for a pivot to be considered zero. This is
1745 % CNTL(3) in MUMPS.
1746 %
1747 %
1748 %
1749 % ### MA28 Linear Solver ###
1750 %
1751 % ma28_pivtol 0 < ( 0.01) <= 1
1752 % Pivot tolerance for linear solver MA28.
1753 % This is used when MA28 tries to find the dependent constraints.
1754 %
1755 %
1756 %
1757 % ### Uncategorized ###
1758 %
1759 % warm_start_target_mu -inf < ( 0) < +inf
1760 % Unsupported!