function [x, f, eflag, output, lambda] = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt)
%MIPS  MATPOWER Interior Point Solver.
%   [X, F, EXITFLAG, OUTPUT, LAMBDA] = ...
%       MIPS(F_FCN, X0, A, L, U, XMIN, XMAX, GH_FCN, HESS_FCN, OPT)
%   Primal-dual interior point method for NLP (nonlinear programming).
%   Minimize a function F(X) beginning from a starting point X0, subject to
%   optional linear and nonlinear constraints and variable bounds.
%
%       min F(X)
%        X
%
%   subject to
%
%       G(X) = 0            (nonlinear equalities)
%       H(X) <= 0           (nonlinear inequalities)
%       L <= A*X <= U       (linear constraints)
%       XMIN <= X <= XMAX   (variable bounds)
%
%   Inputs (all optional except F_FCN and X0):
%       F_FCN : handle to function that evaluates the objective function,
%           its gradients and Hessian for a given value of X. If there
%           are nonlinear constraints, the Hessian information is
%           provided by the HESS_FCN function passed in the 9th argument
%           and is not required here. Calling syntax for this function:
%               [F, DF, D2F] = F_FCN(X)
%       X0 : starting value of optimization vector X
%       A, L, U : define the optional linear constraints. Default
%           values for the elements of L and U are -Inf and Inf,
%           respectively.
%       XMIN, XMAX : optional lower and upper bounds on the
%           X variables, defaults are -Inf and Inf, respectively.
%       GH_FCN : handle to function that evaluates the optional
%           nonlinear constraints and their gradients for a given
%           value of X. Calling syntax for this function is:
%               [H, G, DH, DG] = GH_FCN(X)
%           where the columns of DH and DG are the gradients of the
%           corresponding elements of H and G, i.e. DH and DG are
%           transposes of the Jacobians of H and G, respectively.
%       HESS_FCN : handle to function that computes the Hessian of the
%           Lagrangian for given values of X, lambda and mu, where
%           lambda and mu are the multipliers on the equality and
%           inequality constraints, g and h, respectively. The calling
%           syntax for this function is:
%               LXX = HESS_FCN(X, LAM)
%           where lambda = LAM.eqnonlin and mu = LAM.ineqnonlin.
%       OPT : optional options structure with the following fields,
%           all of which are also optional (default values shown in
%           parentheses)
%           verbose (0) - controls level of progress output displayed
%               0 = no progress output
%               1 = some progress output
%               2 = verbose progress output
%           linsolver ('') - linear system solver for solving update steps
%               ''          = default solver (currently same as '\')
%               '\'         = built-in \ operator
%               'PARDISO'   = PARDISO solver (if available)
%           feastol (1e-6) - termination tolerance for feasibility
%               condition
%           gradtol (1e-6) - termination tolerance for gradient
%               condition
%           comptol (1e-6) - termination tolerance for complementarity
%               condition
%           costtol (1e-6) - termination tolerance for cost condition
%           max_it (150) - maximum number of iterations
%           step_control (0) - set to 1 to enable step-size control
%           sc.red_it (20) - maximum number of step-size reductions if
%               step-control is on
%           cost_mult (1) - cost multiplier used to scale the objective
%               function for improved conditioning.
%               Note: This value is also passed as the 3rd argument to the
%               Hessian evaluation function so that it can appropriately
%               scale the objective function term in the Hessian of the
%               Lagrangian.
%           xi (0.99995) - constant used in alpha updates*
%           sigma (0.1) - centering parameter*
%           z0 (1) - used to initialize slack variables*
%           alpha_min (1e-8) - returns EXITFLAG = -1 if either alpha
%               parameter becomes smaller than this value*
%           rho_min (0.95) - lower bound on rho_t*
%           rho_max (1.05) - upper bound on rho_t*
%           mu_threshold (1e-5) - KT multipliers smaller than this value
%               for non-binding constraints are forced to zero
%           max_stepsize (1e10) - returns EXITFLAG = -1 if the 2-norm of
%               the reduced Newton step exceeds this value*
%           * see the corresponding Appendix in the manual for details
%       PROBLEM : The inputs can alternatively be supplied in a single
%           PROBLEM struct with fields corresponding to the input arguments
%           described above: f_fcn, x0, A, l, u, xmin, xmax,
%           gh_fcn, hess_fcn, opt
%
%   Outputs:
%       X : solution vector
%       F : final objective function value
%       EXITFLAG : exit flag
%           1 = first order optimality conditions satisfied
%           0 = maximum number of iterations reached
%           -1 = numerically failed
%       OUTPUT : output struct with fields:
%           iterations - number of iterations performed
%           hist - struct array with trajectories of the following:
%                   feascond, gradcond, compcond, costcond, gamma,
%                   stepsize, obj, alphap, alphad
%           message - exit message
%       LAMBDA : struct containing the Lagrange and Kuhn-Tucker
%           multipliers on the constraints, with fields:
%           eqnonlin - nonlinear equality constraints
%           ineqnonlin - nonlinear inequality constraints
%           mu_l - lower (left-hand) limit on linear constraints
%           mu_u - upper (right-hand) limit on linear constraints
%           lower - lower bound on optimization variables
%           upper - upper bound on optimization variables
%
%   Note the calling syntax is almost identical to that of FMINCON
%   from MathWorks' Optimization Toolbox. The main difference is that
%   the linear constraints are specified with A, L, U instead of
%   A, B, Aeq, Beq. The functions for evaluating the objective
%   function, constraints and Hessian are identical.
%
%   Calling syntax options:
%       [x, f, exitflag, output, lambda] = ...
%           mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);
%
%       x = mips(f_fcn, x0);
%       x = mips(f_fcn, x0, A, l);
%       x = mips(f_fcn, x0, A, l, u);
%       x = mips(f_fcn, x0, A, l, u, xmin);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);
%       x = mips(problem);
%               where problem is a struct with fields:
%                   f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt
%                   all fields except 'f_fcn' and 'x0' are optional
%       x = mips(...);
%       [x, f] = mips(...);
%       [x, f, exitflag] = mips(...);
%       [x, f, exitflag, output] = mips(...);
%       [x, f, exitflag, output, lambda] = mips(...);
%
%   Example: (problem from https://en.wikipedia.org/wiki/Nonlinear_programming)
%       function [f, df, d2f] = f2(x)
%       f = -x(1)*x(2) - x(2)*x(3);
%       if nargout > 1              %% gradient is required
%           df = -[x(2); x(1)+x(3); x(2)];
%           if nargout > 2          %% Hessian is required
%               d2f = -[0 1 0; 1 0 1; 0 1 0];   %% actually not used since
%           end                                 %% 'hess_fcn' is provided
%       end
%
%       function [h, g, dh, dg] = gh2(x)
%       h = [ 1 -1 1; 1 1 1] * x.^2 + [-2; -10];
%       dh = 2 * [x(1) x(1); -x(2) x(2); x(3) x(3)];
%       g = []; dg = [];
%
%       function Lxx = hess2(x, lam, cost_mult)
%       if nargin < 3, cost_mult = 1; end
%       mu = lam.ineqnonlin;
%       Lxx = cost_mult * [0 -1 0; -1 0 -1; 0 -1 0] + ...
%               [2*[1 1]*mu 0 0; 0 2*[-1 1]*mu 0; 0 0 2*[1 1]*mu];
%
%       problem = struct( ...
%           'f_fcn',    @(x)f2(x), ...
%           'gh_fcn',   @(x)gh2(x), ...
%           'hess_fcn', @(x, lam, cost_mult)hess2(x, lam, cost_mult), ...
%           'x0',       [1; 1; 0], ...
%           'opt',      struct('verbose', 2) ...
%       );
%       [x, f, exitflag, output, lambda] = mips(problem);
%
%   Ported by Ray Zimmerman from C code written by H. Wang for his
%   PhD dissertation:
%       "On the Computation and Application of Multi-period
%       Security-Constrained Optimal Power Flow for Real-time
%       Electricity Market Operations", Cornell University, May 2007.
%
%   See also:
%       H. Wang, C. E. Murillo-Sanchez, R. D. Zimmerman, R. J. Thomas,
%       "On Computational Issues of Market-Based Optimal Power Flow",
%       IEEE Transactions on Power Systems, Vol. 22, No. 3, Aug. 2007,
%       pp. 1185-1193.

%   MIPS
%   Copyright (c) 2009-2020, Power Systems Engineering Research Center (PSERC)
%   by Ray Zimmerman, PSERC Cornell
%
%   This file is part of MIPS.
%   Covered by the 3-clause BSD License (see LICENSE file for details).
%   See https://github.com/MATPOWER/mips for more info.
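
%% Algorithm outline (summary of the implementation below): slack variables
%% z > 0 convert h(x) <= 0 into h(x) + z = 0, and the inequality multipliers
%% mu > 0 pair with z through a log-barrier term, giving the Lagrangian used
%% by the step-size control code:
%%     L(x, lam, mu, z) = f(x) + lam'*g(x) + mu'*(h(x)+z) - gamma*sum(log(z))
%% Each iteration takes a Newton step on the perturbed KKT conditions
%%     Lx = df + dg*lam + dh*mu = 0
%%     g(x) = 0,   h(x) + z = 0
%%     diag(mu)*z = gamma*e,   z > 0,   mu > 0
%% then applies separate primal (alphap) and dual (alphad) step lengths that
%% keep z and mu strictly positive, and shrinks the barrier parameter via
%% gamma = sigma * (z'*mu) / niq until all convergence tolerances are met.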
%%----- input argument handling  -----
%% gather inputs
if nargin == 1 && isstruct(f_fcn)       %% problem struct
    p = f_fcn;
    f_fcn = p.f_fcn;
    x0 = p.x0;
    nx = size(x0, 1);       %% number of optimization variables
    if isfield(p, 'opt'),       opt = p.opt;            else,   opt = [];       end
    if isfield(p, 'hess_fcn'),  hess_fcn = p.hess_fcn;  else,   hess_fcn = '';  end
    if isfield(p, 'gh_fcn'),    gh_fcn = p.gh_fcn;      else,   gh_fcn = '';    end
    if isfield(p, 'xmax'),      xmax = p.xmax;          else,   xmax = [];      end
    if isfield(p, 'xmin'),      xmin = p.xmin;          else,   xmin = [];      end
    if isfield(p, 'u'),         u = p.u;                else,   u = [];         end
    if isfield(p, 'l'),         l = p.l;                else,   l = [];         end
    if isfield(p, 'A'),         A = p.A;                else,   A = sparse(0,nx); end
else                                    %% individual args
    nx = size(x0, 1);       %% number of optimization variables
    if nargin < 10
        opt = [];
        if nargin < 9
            hess_fcn = '';
            if nargin < 8
                gh_fcn = '';
                if nargin < 7
                    xmax = [];
                    if nargin < 6
                        xmin = [];
                        if nargin < 5
                            u = [];
                            if nargin < 4
                                l = [];
                                A = sparse(0,nx);
                            end
                        end
                    end
                end
            end
        end
    end
end
%% set default argument values if missing
if isempty(A) || (~isempty(A) && (isempty(l) || all(l == -Inf)) && ...
                                 (isempty(u) || all(u == Inf)))
    A = sparse(0,nx);           %% no limits => no linear constraints
end
nA = size(A, 1);                %% number of original linear constraints
if isempty(u)                   %% By default, linear inequalities are ...
    u = Inf(nA, 1);             %% ... unbounded above and ...
end
if isempty(l)
    l = -Inf(nA, 1);            %% ... unbounded below.
end
if isempty(xmin)                %% By default, optimization variables are ...
    xmin = -Inf(nx, 1);         %% ... unbounded below and ...
end
if isempty(xmax)
    xmax = Inf(nx, 1);          %% ... unbounded above.
end
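%% Note on specifying linear constraints (the values below are illustrative
%% only, not part of the original source): rows of A with l(i) == u(i) are
%% treated as equalities when the constraints are split further down, and
%% one-sided rows use -Inf/Inf (any magnitude of 1e10 or more is treated as
%% unbounded), e.g.
%%     A = [1 1 0];    l = 1;      u = 1;      % x1 + x2  = 1
%%     A = [0 1 -1];   l = -Inf;   u = 2;      % x2 - x3 <= 2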
if isempty(gh_fcn)
    nonlinear = false;          %% no nonlinear constraints present
    gn = []; hn = [];
else
    nonlinear = true;           %% we have some nonlinear constraints
end

%% default options
if isempty(opt)
    opt = struct('verbose', 0);
end
if ~isfield(opt, 'linsolver') || isempty(opt.linsolver)
    opt.linsolver = '';
end
if ~isfield(opt, 'feastol') || isempty(opt.feastol) || opt.feastol == 0
    opt.feastol = 1e-6;
end
if ~isfield(opt, 'gradtol') || isempty(opt.gradtol) || opt.gradtol == 0
    opt.gradtol = 1e-6;
end
if ~isfield(opt, 'comptol') || isempty(opt.comptol) || opt.comptol == 0
    opt.comptol = 1e-6;
end
if ~isfield(opt, 'costtol') || isempty(opt.costtol) || opt.costtol == 0
    opt.costtol = 1e-6;
end
if ~isfield(opt, 'max_it') || isempty(opt.max_it)
    opt.max_it = 150;
end
if ~isfield(opt, 'sc') || ~isfield(opt.sc, 'red_it') || isempty(opt.sc.red_it)
    opt.sc.red_it = 20;
end
if ~isfield(opt, 'step_control') || isempty(opt.step_control)
    opt.step_control = 0;
end
if ~isfield(opt, 'cost_mult') || isempty(opt.cost_mult)
    opt.cost_mult = 1;
end
if ~isfield(opt, 'verbose') || isempty(opt.verbose)
    opt.verbose = 0;
end
%% algorithm constants
if ~isfield(opt, 'xi') || isempty(opt.xi)
    opt.xi = 0.99995;           %% OPT_IPM_PHI
end
if ~isfield(opt, 'sigma') || isempty(opt.sigma)
    opt.sigma = 0.1;            %% OPT_IPM_SIGMA, centering parameter
end
if ~isfield(opt, 'z0') || isempty(opt.z0)
    opt.z0 = 1;                 %% OPT_IPM_INIT_SLACK
end
if ~isfield(opt, 'alpha_min') || isempty(opt.alpha_min)
    opt.alpha_min = 1e-8;       %% OPT_AP_AD_MIN
end
if ~isfield(opt, 'rho_min') || isempty(opt.rho_min)
    opt.rho_min = 0.95;         %% OPT_IPM_QUAD_LOWTHRESH
end
if ~isfield(opt, 'rho_max') || isempty(opt.rho_max)
    opt.rho_max = 1.05;         %% OPT_IPM_QUAD_HIGHTHRESH
end
if ~isfield(opt, 'mu_threshold') || isempty(opt.mu_threshold)
    opt.mu_threshold = 1e-5;    %% SCOPF_MULTIPLIERS_FILTER_THRESH
end
if ~isfield(opt, 'max_stepsize') || isempty(opt.max_stepsize)
    opt.max_stepsize = 1e10;
end

%% initialize history
hist(opt.max_it+1) = struct('feascond', 0, 'gradcond', 0, 'compcond', 0, ...
    'costcond', 0, 'gamma', 0, 'stepsize', 0, 'obj', 0, ...
    'alphap', 0, 'alphad', 0);

%%-----  set up problem  -----
%% constants
[xi, sigma, z0, alpha_min, rho_min, rho_max, mu_threshold, max_stepsize] = ...
    deal(opt.xi, opt.sigma, opt.z0, opt.alpha_min, ...
        opt.rho_min, opt.rho_max, opt.mu_threshold, opt.max_stepsize);
if xi >= 1 || xi < 0.5
    error('mips: opt.xi (%g) must be a number slightly less than 1', opt.xi);
end
if sigma > 1 || sigma <= 0
    error('mips: opt.sigma (%g) must be a number between 0 and 1', opt.sigma);
end

%% initialize
i = 0;                      %% iteration counter
converged = 0;              %% flag
eflag = 0;                  %% exit flag

%% add var limits to linear constraints
AA = [speye(nx); A];
ll = [xmin; l];
uu = [xmax; u];

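%% The variable bounds were prepended to the linear constraints above
%% (AA = [speye(nx); A]), so rows 1:nx of AA/ll/uu correspond to the bounds
%% and the remaining nA rows to the original A, l, u. The split below
%% classifies each row of ll <= AA*x <= uu and rewrites everything as
%%     Ae*x  = be      (rows with ll == uu)
%%     Ai*x <= bi      (one row per finite upper limit, one per finite
%%                      lower limit, so doubly bounded rows appear twice)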
%% split up linear constraints
ieq = find( abs(uu-ll) <= eps );            %% equality
igt = find( uu >=  1e10 & ll > -1e10 );     %% greater than, unbounded above
ilt = find( ll <= -1e10 & uu <  1e10 );     %% less than, unbounded below
ibx = find( (abs(uu-ll) > eps) & (uu < 1e10) & (ll > -1e10) );
Ae = AA(ieq, :);
be = uu(ieq, 1);
Ai = [ AA(ilt, :); -AA(igt, :); AA(ibx, :); -AA(ibx, :) ];
bi = [ uu(ilt, 1); -ll(igt, 1); uu(ibx, 1); -ll(ibx, 1) ];

%% evaluate cost f(x0) and constraints g(x0), h(x0)
x = x0;
[f, df] = f_fcn(x);                 %% cost
f = f * opt.cost_mult;
df = df * opt.cost_mult;
if nonlinear
    [hn, gn, dhn, dgn] = gh_fcn(x); %% nonlinear constraints
    h = [hn; Ai * x - bi];          %% inequality constraints
    g = [gn; Ae * x - be];          %% equality constraints
    dh = [dhn Ai'];                 %% 1st derivative of inequalities
    dg = [dgn Ae'];                 %% 1st derivative of equalities
else
    h = Ai * x - bi;                %% inequality constraints
    g = Ae * x - be;                %% equality constraints
    dh = Ai';                       %% 1st derivative of inequalities
    dg = Ae';                       %% 1st derivative of equalities
end

%% grab some dimensions
neq = size(g, 1);           %% number of equality constraints
niq = size(h, 1);           %% number of inequality constraints
neqnln = size(gn, 1);       %% number of nonlinear equality constraints
niqnln = size(hn, 1);       %% number of nonlinear inequality constraints
nlt = length(ilt);          %% number of upper bounded linear inequalities
ngt = length(igt);          %% number of lower bounded linear inequalities
nbx = length(ibx);          %% number of doubly bounded linear inequalities

%% initialize gamma, lam, mu, z, e
gamma = 1;                  %% barrier coefficient, r in Harry's code
lam = zeros(neq, 1);
z = z0 * ones(niq, 1);
mu = z;
k = find(h < -z0);
z(k) = -h(k);
k = find(gamma ./ z > z0);  %% (seems k is always empty if gamma = z0 = 1)
if ~isempty(k)
    mu(k) = gamma ./ z(k);
end
e = ones(niq, 1);

%% check tolerance
f0 = f;
if opt.step_control
    L = f + lam' * g + mu' * (h+z) - gamma * sum(log(z));
end
Lx = df + dg * lam + dh * mu;
if isempty(h)
    maxh = zeros(1,0);
else
    maxh = max(h);
end
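%% The four convergence criteria computed here (and re-checked after each
%% Newton iteration) must all be satisfied for EXITFLAG = 1:
%%     feascond = max(||g||_inf, max(h)) / (1 + max(||x||_inf, ||z||_inf)) < feastol
%%     gradcond = ||Lx||_inf / (1 + max(||lam||_inf, ||mu||_inf))          < gradtol
%%     compcond = (z'*mu) / (1 + ||x||_inf)                                < comptol
%%     costcond = |f - f0| / (1 + |f0|)                                    < costtol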
feascond = max([norm(g, Inf), maxh]) / (1 + max([ norm(x, Inf), norm(z, Inf) ]));
gradcond = norm(Lx, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));
compcond = (z' * mu) / (1 + norm(x, Inf));
costcond = abs(f - f0) / (1 + abs(f0));
%% save history
hist(i+1) = struct('feascond', feascond, 'gradcond', gradcond, ...
    'compcond', compcond, 'costcond', costcond, 'gamma', gamma, ...
    'stepsize', 0, 'obj', f/opt.cost_mult, 'alphap', 0, 'alphad', 0);
if strcmp(upper(opt.linsolver), 'PARDISO')
    ls = 'PARDISO';
    mplinsolve_opt = struct('pardiso', ...
        struct('mtype', -2, ...
            'iparm', [  8, 1;   %% max it refinement steps
                       10, 12;  %% eps pivot
                       11, 1;   %% use scaling vectors
                       13, 2;   %% improved accuracy
                       18, 0;   %% do not determine nnz in LU
                       21, 3;   %% ? undocumented pivoting
                    ] ));
else
    ls = 'built-in';
    mplinsolve_opt = [];
end
if opt.verbose
    if opt.step_control, s = '-sc'; else, s = ''; end
    v = mipsver('all');
    fprintf('MATPOWER Interior Point Solver -- MIPS%s, Version %s, %s\n (using %s linear solver)', ...
        s, v.Version, v.Date, ls);
    if opt.verbose > 1
        fprintf('\n it    objective   step size   feascond     gradcond     compcond     costcond  ');
        fprintf('\n----  ------------ --------- ------------ ------------ ------------ ------------');
        fprintf('\n%3d  %12.8g %10s %12g %12g %12g %12g', ...
            i, f/opt.cost_mult, '', feascond, gradcond, compcond, costcond);
    end
end
if feascond < opt.feastol && gradcond < opt.gradtol && ...
        compcond < opt.comptol && costcond < opt.costtol
    converged = 1;
    if opt.verbose
        fprintf('\nConverged!\n');
    end
end

%%-----  do Newton iterations  -----
while (~converged && i < opt.max_it)
    %% update iteration counter
    i = i + 1;

    %% compute update step
    lambda = struct('eqnonlin', lam(1:neqnln), 'ineqnonlin', mu(1:niqnln));
    if nonlinear
        if isempty(hess_fcn)
            fprintf('mips: Hessian evaluation via finite differences not yet implemented.\n       Please provide your own hessian evaluation function.');
        end
        Lxx = hess_fcn(x, lambda, opt.cost_mult);
    else
        [f_, df_, d2f] = f_fcn(x);      %% cost
        Lxx = d2f * opt.cost_mult;
    end
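    %% Form and solve the reduced Newton (KKT) system. Eliminating the dz
    %% and dmu blocks from the full Newton system of the perturbed KKT
    %% conditions (with Z = diag(z), MU = diag(mu)) leaves
    %%     [ M   dg ] [ dx   ]   [ -N ]
    %%     [ dg'  0 ] [ dlam ] = [ -g ]
    %% where
    %%     M = Lxx + dh * inv(Z) * MU * dh'
    %%     N = Lx  + dh * inv(Z) * (MU * h + gamma * e)
    %% dz and dmu are then recovered from dx below:
    %%     dz  = -h - z - dh' * dx
    %%     dmu = -mu + inv(Z) * (gamma * e - MU * dz)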
    zinvdiag = sparse(1:niq, 1:niq, 1 ./ z, niq, niq);
    mudiag = sparse(1:niq, 1:niq, mu, niq, niq);
    dh_zinv = dh * zinvdiag;
    M = Lxx + dh_zinv * mudiag * dh';
    N = Lx + dh_zinv * (mudiag * h + gamma * e);
    dxdlam = mplinsolve([M dg; dg' sparse(neq, neq)], [-N; -g], opt.linsolver, mplinsolve_opt);
%     AAA = [
%         M dg;
%         dg' sparse(neq, neq)
%     ];
%     rc = 1/condest(AAA);
%     if rc < 1e-22
%         fprintf('my RCOND = %g\n', rc);
%         n = size(AAA, 1);
%         AAA = AAA + 1e-3 * speye(n,n);
%     end
%     bbb = [-N; -g];
%     dxdlam = AAA \ bbb;
    if any(isnan(dxdlam)) || norm(dxdlam) > max_stepsize
        if opt.verbose
            fprintf('\nNumerically Failed\n');
        end
        eflag = -1;
        break;
    end
    dx = dxdlam(1:nx, 1);
    dlam = dxdlam(nx+(1:neq), 1);
    dz = -h - z - dh' * dx;
    dmu = -mu + zinvdiag * (gamma*e - mudiag * dz);

    %% optional step-size control
    sc = 0;
    if opt.step_control
        x1 = x + dx;

        %% evaluate cost, constraints, derivatives at x1
        [f1, df1] = f_fcn(x1);              %% cost
        f1 = f1 * opt.cost_mult;
        df1 = df1 * opt.cost_mult;
        if nonlinear
            [hn1, gn1, dhn1, dgn1] = gh_fcn(x1); %% nonlinear constraints
            h1 = [hn1; Ai * x1 - bi];       %% inequality constraints
            g1 = [gn1; Ae * x1 - be];       %% equality constraints
            dh1 = [dhn1 Ai'];               %% 1st derivative of inequalities
            dg1 = [dgn1 Ae'];               %% 1st derivative of equalities
        else
            h1 = Ai * x1 - bi;              %% inequality constraints
            g1 = Ae * x1 - be;              %% equality constraints
            dh1 = dh;                       %% 1st derivative of inequalities
            dg1 = dg;                       %% 1st derivative of equalities
        end

        %% check tolerance
        Lx1 = df1 + dg1 * lam + dh1 * mu;
        if isempty(h1)
            maxh1 = zeros(1,0);
        else
            maxh1 = max(h1);
        end
        feascond1 = max([norm(g1, Inf), maxh1]) / (1 + max([ norm(x1, Inf), norm(z, Inf) ]));
        gradcond1 = norm(Lx1, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));

        if feascond1 > feascond && gradcond1 > gradcond
            sc = 1;
        end
    end
    if sc
        alpha = 1;
        for j = 1:opt.sc.red_it
            dx1 = alpha * dx;
            x1 = x + dx1;
            f1 = f_fcn(x1);                 %% cost
            f1 = f1 * opt.cost_mult;
            if nonlinear
                [hn1, gn1] = gh_fcn(x1);    %% nonlinear constraints
                h1 = [hn1; Ai * x1 - bi];   %% inequality constraints
                g1 = [gn1; Ae * x1 - be];   %% equality constraints
            else
                h1 = Ai * x1 - bi;          %% inequality constraints
                g1 = Ae * x1 - be;          %% equality constraints
            end
            L1 = f1 + lam' * g1 + mu' * (h1+z) - gamma * sum(log(z));
            if opt.verbose > 2
                fprintf('\n %3d %10g', -j, norm(dx1));
            end
            rho = (L1 - L) / (Lx' * dx1 + 0.5 * dx1' * Lxx * dx1);
            if rho > rho_min && rho < rho_max
                break;
            else
                alpha = alpha / 2;
            end
        end
        dx = alpha * dx;
        dz = alpha * dz;
        dlam = alpha * dlam;
        dmu = alpha * dmu;
    end

    %% do the update
    k = find(dz < 0);
    if isempty(k)
        alphap = 1;
    else
        alphap = min( [xi * min(z(k) ./ -dz(k)) 1] );
    end
    k = find(dmu < 0);
    if isempty(k)
        alphad = 1;
    else
        alphad = min( [xi * min(mu(k) ./ -dmu(k)) 1] );
    end
    x = x + alphap * dx;
    z = z + alphap * dz;
    lam = lam + alphad * dlam;
    mu = mu + alphad * dmu;
    if niq > 0
        gamma = sigma * (z' * mu) / niq;
    end

    %% evaluate cost, constraints, derivatives
    [f, df] = f_fcn(x);                 %% cost
    f = f * opt.cost_mult;
    df = df * opt.cost_mult;
    if nonlinear
        [hn, gn, dhn, dgn] = gh_fcn(x); %% nonlinear constraints
        h = [hn; Ai * x - bi];          %% inequality constraints
        g = [gn; Ae * x - be];          %% equality constraints
        dh = [dhn Ai'];                 %% 1st derivative of inequalities
        dg = [dgn Ae'];                 %% 1st derivative of equalities
    else
        h = Ai * x - bi;                %% inequality constraints
        g = Ae * x - be;                %% equality constraints
        %% 1st derivatives are constant, still dh = Ai', dg = Ae'
    end

    %% check tolerance
    Lx = df + dg * lam + dh * mu;
    if isempty(h)
        maxh = zeros(1,0);
    else
        maxh = max(h);
    end
    feascond = max([norm(g, Inf), maxh]) / (1 + max([ norm(x, Inf), norm(z, Inf) ]));
    gradcond = norm(Lx, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));
    compcond = (z' * mu) / (1 + norm(x, Inf));
    costcond = abs(f - f0) / (1 + abs(f0));
    %% save history
    hist(i+1) = struct('feascond', feascond, 'gradcond', gradcond, ...
        'compcond', compcond, 'costcond', costcond, 'gamma', gamma, ...
        'stepsize', norm(dx), 'obj', f/opt.cost_mult, ...
        'alphap', alphap, 'alphad', alphad);

    if opt.verbose > 1
        fprintf('\n%3d  %12.8g %10.5g %12g %12g %12g %12g', ...
            i, f/opt.cost_mult, norm(dx), feascond, gradcond, compcond, costcond);
    end
    if feascond < opt.feastol && gradcond < opt.gradtol && ...
            compcond < opt.comptol && costcond < opt.costtol
        converged = 1;
        if opt.verbose
            fprintf('\nConverged!\n');
        end
    else
        if any(isnan(x)) || alphap < alpha_min || alphad < alpha_min || ...
                gamma < eps || gamma > 1/eps
            if opt.verbose
                fprintf('\nNumerically Failed\n');
            end
            eflag = -1;
            break;
        end
        f0 = f;
        if opt.step_control
            L = f + lam' * g + mu' * (h+z) - gamma * sum(log(z));
        end
    end
end

if opt.verbose
    if ~converged
        fprintf('\nDid not converge in %d iterations.\n', i);
    end
end

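%% In the packaging below, multipliers on the internal constraint sets are
%% mapped back to the user's formulation: the first nx rows of AA correspond
%% to the variable bounds (LAMBDA.lower/LAMBDA.upper) and the remaining nA
%% rows to the linear constraints (LAMBDA.mu_l/LAMBDA.mu_u). For equality
%% rows (l == u), a negative multiplier is reported as a binding lower
%% limit and a positive one as a binding upper limit.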
%%-----  package up results  -----
hist = hist(1:i+1);
if eflag ~= -1
    eflag = converged;
end
output = struct('iterations', i, 'hist', hist, 'message', '');
if eflag == 0
    output.message = 'Did not converge';
elseif eflag == 1
    output.message = 'Converged';
elseif eflag == -1
    output.message = 'Numerically failed';
else
    output.message = 'Please hang up and dial again';
end

%% zero out multipliers on non-binding constraints
mu(h < -opt.feastol & mu < mu_threshold) = 0;

%% un-scale cost and prices
f   = f   / opt.cost_mult;
lam = lam / opt.cost_mult;
mu  = mu  / opt.cost_mult;

%% re-package multipliers into struct
lam_lin = lam((neqnln+1):neq);              %% lambda for linear constraints
mu_lin  = mu((niqnln+1):niq);               %% mu for linear constraints
kl = find(lam_lin < 0);                     %% lower bound binding
ku = find(lam_lin > 0);                     %% upper bound binding

mu_l = zeros(nx+nA, 1);
mu_l(ieq(kl)) = -lam_lin(kl);
mu_l(igt) = mu_lin(nlt+(1:ngt));
mu_l(ibx) = mu_lin(nlt+ngt+nbx+(1:nbx));

mu_u = zeros(nx+nA, 1);
mu_u(ieq(ku)) = lam_lin(ku);
mu_u(ilt) = mu_lin(1:nlt);
mu_u(ibx) = mu_lin(nlt+ngt+(1:nbx));

fields = { ...
    'mu_l', mu_l((nx+1):end), ...
    'mu_u', mu_u((nx+1):end), ...
    'lower', mu_l(1:nx), ...
    'upper', mu_u(1:nx) ...
};

if niqnln > 0
    fields = { ...
        'ineqnonlin', mu(1:niqnln), ...
        fields{:} ...
    };
end
if neqnln > 0
    fields = { ...
        'eqnonlin', lam(1:neqnln), ...
        fields{:} ...
    };
end

lambda = struct(fields{:});

% lambda = struct( ...
%     'eqnonlin', lam(1:neqnln), ...
%     'ineqnonlin', mu(1:niqnln), ...
%     'mu_l', mu_l((nx+1):end), ...
%     'mu_u', mu_u((nx+1):end), ...
%     'lower', mu_l(1:nx), ...
%     'upper', mu_u(1:nx) );