MIPS  MATLAB Interior Point Solver.

    [X, F, EXITFLAG, OUTPUT, LAMBDA] = ...
        MIPS(F_FCN, X0, A, L, U, XMIN, XMAX, GH_FCN, HESS_FCN, OPT)

Primal-dual interior point method for NLP (nonlinear programming).
Minimize a function F(X) beginning from a starting point X0, subject to
optional linear and nonlinear constraints and variable bounds.

    min F(X)
     X

subject to

    G(X) = 0            (nonlinear equalities)
    H(X) <= 0           (nonlinear inequalities)
    L <= A*X <= U       (linear constraints)
    XMIN <= X <= XMAX   (variable bounds)
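As a minimal sketch of this calling convention (illustrative only, not part of the MIPS
distribution; the names demo_quad and quad_fcn are invented here), the following minimizes
a simple smooth function with no constraints at all, using the F_FCN interface described
under Inputs below:

    function demo_quad
    x0 = zeros(3, 1);
    [x, f, exitflag] = mips(@quad_fcn, x0);     %% x should approach ones(3,1), f -> 0

    function [f, df, d2f] = quad_fcn(x)
    f = sum((x - 1).^2);                %% objective
    df = 2 * (x - 1);                   %% gradient
    if nargout > 2
        d2f = 2 * speye(length(x));     %% Hessian (needed here since no HESS_FCN is given)
    end
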
Inputs (all optional except F_FCN and X0):
    F_FCN : handle to function that evaluates the objective function,
        its gradients and Hessian for a given value of X. If there
        are nonlinear constraints, the Hessian information is
        provided by the HESS_FCN function passed in the 9th argument
        and is not required here. Calling syntax for this function:
            [F, DF, D2F] = F_FCN(X)
    X0 : starting value of optimization vector X
    A, L, U : define the optional linear constraints. Default
        values for the elements of L and U are -Inf and Inf,
        respectively. (A worked sketch follows this input list.)
    XMIN, XMAX : optional lower and upper bounds on the
        X variables, defaults are -Inf and Inf, respectively.
    GH_FCN : handle to function that evaluates the optional
        nonlinear constraints and their gradients for a given
        value of X. Calling syntax for this function is:
            [H, G, DH, DG] = GH_FCN(X)
    HESS_FCN : handle to function that computes the Hessian of the
        Lagrangian for given values of X, lambda and mu, where
        lambda and mu are the multipliers on the equality and
        inequality constraints, g and h, respectively. The calling
        syntax for this function is:
            LXX = HESS_FCN(X, LAM)
        where lambda = LAM.eqnonlin and mu = LAM.ineqnonlin.
    OPT : optional options structure with the following fields,
        all of which are also optional (default values shown in
        parentheses)
        verbose (0) - controls level of progress output displayed
        feastol (1e-6) - termination tolerance for feasibility condition
        gradtol (1e-6) - termination tolerance for gradient condition
        comptol (1e-6) - termination tolerance for complementarity condition
        costtol (1e-6) - termination tolerance for cost condition
        max_it (150) - maximum number of iterations
        step_control (0) - set to 1 to enable step-size control
        sc.red_it (20) - maximum number of step-size reductions if
            step-control is on
        cost_mult (1) - cost multiplier used to scale the objective
            function for improved conditioning. Note: This value is
            also passed as the 3rd argument to the Hessian evaluation
            function so that it can appropriately scale the objective
            function term in the Hessian of the Lagrangian.
        xi (0.99995) - constant used in alpha updates*
        sigma (0.1) - centering parameter*
        z0 (1) - used to initialize slack variables*
        alpha_min (1e-8) - returns EXITFLAG = -1 if either alpha
            parameter becomes smaller than this value*
        rho_min (0.95) - lower bound on rho_t*
        rho_max (1.05) - upper bound on rho_t*
        mu_threshold (1e-5) - KT multipliers smaller than this value
            for non-binding constraints are forced to zero
        max_stepsize (1e10) - returns EXITFLAG = -1 if the 2-norm of
            the reduced Newton step exceeds this value*
        * see the corresponding Appendix in the manual for details
    PROBLEM : The inputs can alternatively be supplied in a single
        PROBLEM struct with fields corresponding to the input arguments
        described above: f_fcn, x0, A, l, u, xmin, xmax,
        gh_fcn, hess_fcn, opt
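As referenced in the A, L, U item above, here is a hedged sketch (the numbers are arbitrary
and chosen only for illustration; it reuses the hypothetical quad_fcn from the earlier
sketch) of how linear constraints, bounds, options and the single-argument PROBLEM form
fit together. An equality row is expressed by setting l(i) == u(i); a one-sided row leaves
the unused limit at -Inf or Inf:

    A    = sparse([1 1 1; 1 0 -1]);     %% rows: x1 + x2 + x3  and  x1 - x3
    l    = [1; -Inf];                   %% row 1: equality, since l(1) == u(1)
    u    = [1; 0.5];                    %% row 2: x1 - x3 <= 0.5 (unbounded below)
    xmin = zeros(3, 1);                 %% simple variable bounds
    xmax = Inf(3, 1);
    opt  = struct('verbose', 1, 'feastol', 1e-8, 'max_it', 50);
    problem = struct('f_fcn', @quad_fcn, 'x0', [0; 0; 0], ...
                     'A', A, 'l', l, 'u', u, ...
                     'xmin', xmin, 'xmax', xmax, 'opt', opt);
    [x, f, exitflag, output, lambda] = mips(problem);
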
Outputs:
    X : solution vector
    F : final objective function value
    EXITFLAG : exit flag
        1 = first order optimality conditions satisfied
        0 = maximum number of iterations reached
        -1 = numerically failed
    OUTPUT : output struct with fields:
        iterations - number of iterations performed
        hist - struct array with trajectories of the following:
            feascond, gradcond, compcond, costcond, gamma,
            stepsize, obj, alphap, alphad
        message - exit message
    LAMBDA : struct containing the Lagrange and Kuhn-Tucker
        multipliers on the constraints, with fields:
        eqnonlin - nonlinear equality constraints
        ineqnonlin - nonlinear inequality constraints
        mu_l - lower (left-hand) limit on linear constraints
        mu_u - upper (right-hand) limit on linear constraints
        lower - lower bound on optimization variables
        upper - upper bound on optimization variables

Note the calling syntax is almost identical to that of FMINCON
from MathWorks' Optimization Toolbox. The main difference is that
the linear constraints are specified with A, L, U instead of
A, B, Aeq, Beq. The functions for evaluating the objective
function, constraints and Hessian are identical.

Calling syntax options:
    [x, f, exitflag, output, lambda] = ...
        mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);

    x = mips(f_fcn, x0);
    x = mips(f_fcn, x0, A, l);
    x = mips(f_fcn, x0, A, l, u);
    x = mips(f_fcn, x0, A, l, u, xmin);
    x = mips(f_fcn, x0, A, l, u, xmin, xmax);
    x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn);
    x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn);
    x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);
    x = mips(problem);
            where problem is a struct with fields:
                f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt
                all fields except 'f_fcn' and 'x0' are optional
    x = mips(...);
    [x, f] = mips(...);
    [x, f, exitflag] = mips(...);
    [x, f, exitflag, output] = mips(...);
    [x, f, exitflag, output, lambda] = mips(...);

Example: (problem from http://en.wikipedia.org/wiki/Nonlinear_programming)
    function [f, df, d2f] = f2(x)
    f = -x(1)*x(2) - x(2)*x(3);
    if nargout > 1              %% gradient is required
        df = -[x(2); x(1)+x(3); x(2)];
        if nargout > 2          %% Hessian is required
            d2f = -[0 1 0; 1 0 1; 0 1 0];   %% actually not used since
        end                                 %% 'hess_fcn' is provided
    end

    function [h, g, dh, dg] = gh2(x)
    h = [ 1 -1 1; 1 1 1] * x.^2 + [-2; -10];
    dh = 2 * [x(1) x(1); -x(2) x(2); x(3) x(3)];
    g = []; dg = [];

    function Lxx = hess2(x, lam, cost_mult)
    if nargin < 3, cost_mult = 1; end
    mu = lam.ineqnonlin;
    Lxx = cost_mult * [0 -1 0; -1 0 -1; 0 -1 0] + ...
            [2*[1 1]*mu 0 0; 0 2*[-1 1]*mu 0; 0 0 2*[1 1]*mu];

    problem = struct( ...
        'f_fcn', @(x)f2(x), ...
        'gh_fcn', @(x)gh2(x), ...
        'hess_fcn', @(x, lam, cost_mult)hess2(x, lam, cost_mult), ...
        'x0', [1; 1; 0], ...
        'opt', struct('verbose', 2) ...
    );
    [x, f, exitflag, output, lambda] = mips(problem);

Ported by Ray Zimmerman from C code written by H. Wang for his
PhD dissertation:
    "On the Computation and Application of Multi-period
    Security-Constrained Optimal Power Flow for Real-time
    Electricity Market Operations", Cornell University, May 2007.

See also:
    H. Wang, C. E. Murillo-Sánchez, R. D. Zimmerman, R. J. Thomas,
    "On Computational Issues of Market-Based Optimal Power Flow",
    IEEE Transactions on Power Systems, Vol. 22, No. 3, Aug. 2007,
    pp. 1185-1193.
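For reference, the termination tolerances feastol, gradtol, comptol and costtol listed
under OPT above are applied to the following normalized quantities; this is simply a
restatement of the checks computed in the source code below, not additional material from
the MIPS manual. EXITFLAG = 1 requires all four inequalities to hold simultaneously:

    feascond = max( ||g(x)||_inf, max_i h_i(x) ) / (1 + max( ||x||_inf, ||z||_inf ))  <  feastol
    gradcond = || grad_x Lagrangian ||_inf / (1 + max( ||lambda||_inf, ||mu||_inf ))  <  gradtol
    compcond = (z' * mu) / (1 + ||x||_inf)                                            <  comptol
    costcond = |f - f0| / (1 + |f0|)                                                  <  costtol

where z holds the slack variables on the inequalities, lambda and mu are the equality and
inequality multipliers, and f0 is the objective value from the previous iteration.
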
function [x, f, eflag, output, lambda] = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt)
%MIPS  MATLAB Interior Point Solver.
%   [X, F, EXITFLAG, OUTPUT, LAMBDA] = ...
%       MIPS(F_FCN, X0, A, L, U, XMIN, XMAX, GH_FCN, HESS_FCN, OPT)
%   Primal-dual interior point method for NLP (nonlinear programming).
%   Minimize a function F(X) beginning from a starting point X0, subject to
%   optional linear and nonlinear constraints and variable bounds.
%
%       min F(X)
%        X
%
%   subject to
%
%       G(X) = 0            (nonlinear equalities)
%       H(X) <= 0           (nonlinear inequalities)
%       L <= A*X <= U       (linear constraints)
%       XMIN <= X <= XMAX   (variable bounds)
%
%   Inputs (all optional except F_FCN and X0):
%       F_FCN : handle to function that evaluates the objective function,
%           its gradients and Hessian for a given value of X. If there
%           are nonlinear constraints, the Hessian information is
%           provided by the HESS_FCN function passed in the 9th argument
%           and is not required here. Calling syntax for this function:
%               [F, DF, D2F] = F_FCN(X)
%       X0 : starting value of optimization vector X
%       A, L, U : define the optional linear constraints. Default
%           values for the elements of L and U are -Inf and Inf,
%           respectively.
%       XMIN, XMAX : optional lower and upper bounds on the
%           X variables, defaults are -Inf and Inf, respectively.
%       GH_FCN : handle to function that evaluates the optional
%           nonlinear constraints and their gradients for a given
%           value of X. Calling syntax for this function is:
%               [H, G, DH, DG] = GH_FCN(X)
%       HESS_FCN : handle to function that computes the Hessian of the
%           Lagrangian for given values of X, lambda and mu, where
%           lambda and mu are the multipliers on the equality and
%           inequality constraints, g and h, respectively. The calling
%           syntax for this function is:
%               LXX = HESS_FCN(X, LAM)
%           where lambda = LAM.eqnonlin and mu = LAM.ineqnonlin.
%       OPT : optional options structure with the following fields,
%           all of which are also optional (default values shown in
%           parentheses)
%           verbose (0) - controls level of progress output displayed
%           feastol (1e-6) - termination tolerance for feasibility
%               condition
%           gradtol (1e-6) - termination tolerance for gradient
%               condition
%           comptol (1e-6) - termination tolerance for complementarity
%               condition
%           costtol (1e-6) - termination tolerance for cost condition
%           max_it (150) - maximum number of iterations
%           step_control (0) - set to 1 to enable step-size control
%           sc.red_it (20) - maximum number of step-size reductions if
%               step-control is on
%           cost_mult (1) - cost multiplier used to scale the objective
%               function for improved conditioning. Note: This value is
%               also passed as the 3rd argument to the Hessian evaluation
%               function so that it can appropriately scale the
%               objective function term in the Hessian of the
%               Lagrangian.
%           xi (0.99995) - constant used in alpha updates*
%           sigma (0.1) - centering parameter*
%           z0 (1) - used to initialize slack variables*
%           alpha_min (1e-8) - returns EXITFLAG = -1 if either alpha
%               parameter becomes smaller than this value*
%           rho_min (0.95) - lower bound on rho_t*
%           rho_max (1.05) - upper bound on rho_t*
%           mu_threshold (1e-5) - KT multipliers smaller than this value
%               for non-binding constraints are forced to zero
%           max_stepsize (1e10) - returns EXITFLAG = -1 if the 2-norm of
%               the reduced Newton step exceeds this value*
%           * see the corresponding Appendix in the manual for details
%       PROBLEM : The inputs can alternatively be supplied in a single
%           PROBLEM struct with fields corresponding to the input arguments
%           described above: f_fcn, x0, A, l, u, xmin, xmax,
%           gh_fcn, hess_fcn, opt
%
%   Outputs:
%       X : solution vector
%       F : final objective function value
%       EXITFLAG : exit flag
%           1 = first order optimality conditions satisfied
%           0 = maximum number of iterations reached
%           -1 = numerically failed
%       OUTPUT : output struct with fields:
%           iterations - number of iterations performed
%           hist - struct array with trajectories of the following:
%               feascond, gradcond, compcond, costcond, gamma,
%               stepsize, obj, alphap, alphad
%           message - exit message
%       LAMBDA : struct containing the Lagrange and Kuhn-Tucker
%           multipliers on the constraints, with fields:
%           eqnonlin - nonlinear equality constraints
%           ineqnonlin - nonlinear inequality constraints
%           mu_l - lower (left-hand) limit on linear constraints
%           mu_u - upper (right-hand) limit on linear constraints
%           lower - lower bound on optimization variables
%           upper - upper bound on optimization variables
%
%   Note the calling syntax is almost identical to that of FMINCON
%   from MathWorks' Optimization Toolbox. The main difference is that
%   the linear constraints are specified with A, L, U instead of
%   A, B, Aeq, Beq. The functions for evaluating the objective
%   function, constraints and Hessian are identical.
%
%   Calling syntax options:
%       [x, f, exitflag, output, lambda] = ...
%           mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);
%
%       x = mips(f_fcn, x0);
%       x = mips(f_fcn, x0, A, l);
%       x = mips(f_fcn, x0, A, l, u);
%       x = mips(f_fcn, x0, A, l, u, xmin);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn);
%       x = mips(f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt);
%       x = mips(problem);
%               where problem is a struct with fields:
%                   f_fcn, x0, A, l, u, xmin, xmax, gh_fcn, hess_fcn, opt
%                   all fields except 'f_fcn' and 'x0' are optional
%       x = mips(...);
%       [x, f] = mips(...);
%       [x, f, exitflag] = mips(...);
%       [x, f, exitflag, output] = mips(...);
%       [x, f, exitflag, output, lambda] = mips(...);
%
%   Example: (problem from http://en.wikipedia.org/wiki/Nonlinear_programming)
%       function [f, df, d2f] = f2(x)
%       f = -x(1)*x(2) - x(2)*x(3);
%       if nargout > 1              %% gradient is required
%           df = -[x(2); x(1)+x(3); x(2)];
%           if nargout > 2          %% Hessian is required
%               d2f = -[0 1 0; 1 0 1; 0 1 0];   %% actually not used since
%           end                                 %% 'hess_fcn' is provided
%       end
%
%       function [h, g, dh, dg] = gh2(x)
%       h = [ 1 -1 1; 1 1 1] * x.^2 + [-2; -10];
%       dh = 2 * [x(1) x(1); -x(2) x(2); x(3) x(3)];
%       g = []; dg = [];
%
%       function Lxx = hess2(x, lam, cost_mult)
%       if nargin < 3, cost_mult = 1; end
%       mu = lam.ineqnonlin;
%       Lxx = cost_mult * [0 -1 0; -1 0 -1; 0 -1 0] + ...
%               [2*[1 1]*mu 0 0; 0 2*[-1 1]*mu 0; 0 0 2*[1 1]*mu];
%
%       problem = struct( ...
%           'f_fcn', @(x)f2(x), ...
%           'gh_fcn', @(x)gh2(x), ...
%           'hess_fcn', @(x, lam, cost_mult)hess2(x, lam, cost_mult), ...
%           'x0', [1; 1; 0], ...
%           'opt', struct('verbose', 2) ...
%       );
%       [x, f, exitflag, output, lambda] = mips(problem);
%
%   Ported by Ray Zimmerman from C code written by H. Wang for his
%   PhD dissertation:
%       "On the Computation and Application of Multi-period
%       Security-Constrained Optimal Power Flow for Real-time
%       Electricity Market Operations", Cornell University, May 2007.
%
%   See also:
%       H. Wang, C. E. Murillo-Sánchez, R. D. Zimmerman, R. J. Thomas,
%       "On Computational Issues of Market-Based Optimal Power Flow",
%       IEEE Transactions on Power Systems, Vol. 22, No. 3, Aug. 2007,
%       pp. 1185-1193.

%   MIPS
%   $Id: mips.m 2437 2014-12-04 19:13:54Z ray $
%   by Ray Zimmerman, PSERC Cornell
%   Copyright (c) 2009-2013 by Power System Engineering Research Center (PSERC)
%
%   This file is part of MIPS.
%   See http://www.pserc.cornell.edu/matpower/ for more info.
%
%   MIPS is free software: you can redistribute it and/or modify
%   it under the terms of the GNU General Public License as published
%   by the Free Software Foundation, either version 3 of the License,
%   or (at your option) any later version.
%
%   MIPS is distributed in the hope that it will be useful,
%   but WITHOUT ANY WARRANTY; without even the implied warranty of
%   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
%   GNU General Public License for more details.
%
%   You should have received a copy of the GNU General Public License
%   along with MIPS. If not, see <http://www.gnu.org/licenses/>.
%
%   Additional permission under GNU GPL version 3 section 7
%
%   If you modify MIPS, or any covered work, to interface with
%   other modules (such as MATLAB code and MEX-files) available in a
%   MATLAB(R) or comparable environment containing parts covered
%   under other licensing terms, the licensors of MIPS grant
%   you additional permission to convey the resulting work.

%%----- input argument handling -----
%% gather inputs
if nargin == 1 && isstruct(f_fcn)       %% problem struct
    p = f_fcn;
    f_fcn = p.f_fcn;
    x0 = p.x0;
    nx = size(x0, 1);                   %% number of optimization variables
    if isfield(p, 'opt'),      opt = p.opt;           else, opt = [];           end
    if isfield(p, 'hess_fcn'), hess_fcn = p.hess_fcn; else, hess_fcn = '';      end
    if isfield(p, 'gh_fcn'),   gh_fcn = p.gh_fcn;     else, gh_fcn = '';        end
    if isfield(p, 'xmax'),     xmax = p.xmax;         else, xmax = [];          end
    if isfield(p, 'xmin'),     xmin = p.xmin;         else, xmin = [];          end
    if isfield(p, 'u'),        u = p.u;               else, u = [];             end
    if isfield(p, 'l'),        l = p.l;               else, l = [];             end
    if isfield(p, 'A'),        A = p.A;               else, A = sparse(0,nx);   end
else                                    %% individual args
    nx = size(x0, 1);                   %% number of optimization variables
    if nargin < 10
        opt = [];
        if nargin < 9
            hess_fcn = '';
            if nargin < 8
                gh_fcn = '';
                if nargin < 7
                    xmax = [];
                    if nargin < 6
                        xmin = [];
                        if nargin < 5
                            u = [];
                            if nargin < 4
                                l = [];
                                A = sparse(0,nx);
                            end
                        end
                    end
                end
            end
        end
    end
end
%% set default argument values if missing
if ~isempty(A) && (isempty(l) || all(l == -Inf)) && ...
                  (isempty(u) || all(u == Inf))
    A = sparse(0,nx);           %% no limits => no linear constraints
end
nA = size(A, 1);                %% number of original linear constraints
if isempty(u)                   %% By default, linear inequalities are ...
    u = Inf * ones(nA, 1);      %% ... unbounded above and ...
end
if isempty(l)
    l = -Inf * ones(nA, 1);     %% ... unbounded below.
end
if isempty(xmin)                %% By default, optimization variables are ...
    xmin = -Inf * ones(nx, 1);  %% ... unbounded below and ...
end
if isempty(xmax)
    xmax = Inf * ones(nx, 1);   %% ... unbounded above.
end
if isempty(gh_fcn)
    nonlinear = false;          %% no nonlinear constraints present
    gn = []; hn = [];
else
    nonlinear = true;           %% we have some nonlinear constraints
end

%% default options
if isempty(opt)
    opt = struct('verbose', 0);
end
if ~isfield(opt, 'feastol') || isempty(opt.feastol)
    opt.feastol = 1e-6;
end
if ~isfield(opt, 'gradtol') || isempty(opt.gradtol)
    opt.gradtol = 1e-6;
end
if ~isfield(opt, 'comptol') || isempty(opt.comptol)
    opt.comptol = 1e-6;
end
if ~isfield(opt, 'costtol') || isempty(opt.costtol)
    opt.costtol = 1e-6;
end
if ~isfield(opt, 'max_it') || isempty(opt.max_it)
    opt.max_it = 150;
end
if ~isfield(opt, 'sc') || ~isfield(opt.sc, 'red_it') || isempty(opt.sc.red_it)
    opt.sc.red_it = 20;
end
if ~isfield(opt, 'step_control') || isempty(opt.step_control)
    opt.step_control = 0;
end
if ~isfield(opt, 'cost_mult') || isempty(opt.cost_mult)
    opt.cost_mult = 1;
end
if ~isfield(opt, 'verbose') || isempty(opt.verbose)
    opt.verbose = 0;
end
%% algorithm constants
if ~isfield(opt, 'xi') || isempty(opt.xi)
    opt.xi = 0.99995;           %% OPT_IPM_PHI
end
if ~isfield(opt, 'sigma') || isempty(opt.sigma)
    opt.sigma = 0.1;            %% OPT_IPM_SIGMA, centering parameter
end
if ~isfield(opt, 'z0') || isempty(opt.z0)
    opt.z0 = 1;                 %% OPT_IPM_INIT_SLACK
end
if ~isfield(opt, 'alpha_min') || isempty(opt.alpha_min)
    opt.alpha_min = 1e-8;       %% OPT_AP_AD_MIN
end
if ~isfield(opt, 'rho_min') || isempty(opt.rho_min)
    opt.rho_min = 0.95;         %% OPT_IPM_QUAD_LOWTHRESH
end
if ~isfield(opt, 'rho_max') || isempty(opt.rho_max)
    opt.rho_max = 1.05;         %% OPT_IPM_QUAD_HIGHTHRESH
end
if ~isfield(opt, 'mu_threshold') || isempty(opt.mu_threshold)
    opt.mu_threshold = 1e-5;    %% SCOPF_MULTIPLIERS_FILTER_THRESH
end
if ~isfield(opt, 'max_stepsize') || isempty(opt.max_stepsize)
    opt.max_stepsize = 1e10;
end

%% initialize history
hist(opt.max_it+1) = struct('feascond', 0, 'gradcond', 0, 'compcond', 0, ...
    'costcond', 0, 'gamma', 0, 'stepsize', 0, 'obj', 0, ...
    'alphap', 0, 'alphad', 0);

%%----- set up problem -----
%% constants
[xi, sigma, z0, alpha_min, rho_min, rho_max, mu_threshold, max_stepsize] = ...
    deal(opt.xi, opt.sigma, opt.z0, opt.alpha_min, ...
        opt.rho_min, opt.rho_max, opt.mu_threshold, opt.max_stepsize);
if xi >= 1 || xi < 0.5
    error('mips: opt.xi (%g) must be a number slightly less than 1', opt.xi);
end
if sigma > 1 || sigma <= 0
    error('mips: opt.sigma (%g) must be a number between 0 and 1', opt.sigma);
end

%% initialize
i = 0;                      %% iteration counter
converged = 0;              %% flag
eflag = 0;                  %% exit flag

%% add var limits to linear constraints
AA = [speye(nx); A];
ll = [xmin; l];
uu = [xmax; u];

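%% The combined constraints ll <= AA*x <= uu are partitioned below by type:
%%   ieq - rows with ll == uu (within eps), treated as equalities
%%   igt - rows with only a finite lower bound
%%   ilt - rows with only a finite upper bound
%%   ibx - rows with distinct finite lower and upper bounds (two inequality
%%         rows each)
%% The stacking order of Ai/bi, [ilt; -igt; ibx; -ibx], is relied upon when
%% the multipliers are unpacked into mu_l/mu_u at the end of the file.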
%% split up linear constraints
ieq = find( abs(uu-ll) <= eps );            %% equality
igt = find( uu >=  1e10 & ll > -1e10 );     %% greater than, unbounded above
ilt = find( ll <= -1e10 & uu <  1e10 );     %% less than, unbounded below
ibx = find( (abs(uu-ll) > eps) & (uu < 1e10) & (ll > -1e10) );
Ae = AA(ieq, :);
be = uu(ieq, 1);
Ai = [ AA(ilt, :); -AA(igt, :); AA(ibx, :); -AA(ibx, :) ];
bi = [ uu(ilt, 1); -ll(igt, 1); uu(ibx, 1); -ll(ibx, 1) ];

%% evaluate cost f(x0) and constraints g(x0), h(x0)
x = x0;
[f, df] = f_fcn(x);                 %% cost
f = f * opt.cost_mult;
df = df * opt.cost_mult;
if nonlinear
    [hn, gn, dhn, dgn] = gh_fcn(x); %% nonlinear constraints
    h = [hn; Ai * x - bi];          %% inequality constraints
    g = [gn; Ae * x - be];          %% equality constraints
    dh = [dhn Ai'];                 %% 1st derivative of inequalities
    dg = [dgn Ae'];                 %% 1st derivative of equalities
else
    h = Ai * x - bi;                %% inequality constraints
    g = Ae * x - be;                %% equality constraints
    dh = Ai';                       %% 1st derivative of inequalities
    dg = Ae';                       %% 1st derivative of equalities
end

%% grab some dimensions
neq = size(g, 1);           %% number of equality constraints
niq = size(h, 1);           %% number of inequality constraints
neqnln = size(gn, 1);       %% number of nonlinear equality constraints
niqnln = size(hn, 1);       %% number of nonlinear inequality constraints
nlt = length(ilt);          %% number of upper bounded linear inequalities
ngt = length(igt);          %% number of lower bounded linear inequalities
nbx = length(ibx);          %% number of doubly bounded linear inequalities

%% initialize gamma, lam, mu, z, e
gamma = 1;                  %% barrier coefficient, r in Harry's code
lam = zeros(neq, 1);
z = z0 * ones(niq, 1);
mu = z;
k = find(h < -z0);
z(k) = -h(k);
k = find(gamma / z > z0);   %% (seems k is always empty if gamma = z0 = 1)
if ~isempty(k)
    mu(k) = gamma / z(k);
end
e = ones(niq, 1);

%% check tolerance
f0 = f;
if opt.step_control
    L = f + lam' * g + mu' * (h+z) - gamma * sum(log(z));
end
Lx = df + dg * lam + dh * mu;
if isempty(h)
    maxh = zeros(1,0);
else
    maxh = max(h);
end
feascond = max([norm(g, Inf), maxh]) / (1 + max([ norm(x, Inf), norm(z, Inf) ]));
gradcond = norm(Lx, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));
compcond = (z' * mu) / (1 + norm(x, Inf));
costcond = abs(f - f0) / (1 + abs(f0));
%% save history
hist(i+1) = struct('feascond', feascond, 'gradcond', gradcond, ...
    'compcond', compcond, 'costcond', costcond, 'gamma', gamma, ...
    'stepsize', 0, 'obj', f/opt.cost_mult, 'alphap', 0, 'alphad', 0);
if opt.verbose
    if opt.step_control, s = '-sc'; else, s = ''; end
    v = mipsver('all');
    fprintf('MATLAB Interior Point Solver -- MIPS%s, Version %s, %s', ...
        s, v.Version, v.Date);
    if opt.verbose > 1
        fprintf('\n it   objective   step size   feascond     gradcond     compcond     costcond  ');
        fprintf('\n---- ------------ --------- ------------ ------------ ------------ ------------');
        fprintf('\n%3d %12.8g %10s %12g %12g %12g %12g', ...
            i, f/opt.cost_mult, '', feascond, gradcond, compcond, costcond);
    end
end
if feascond < opt.feastol && gradcond < opt.gradtol && ...
        compcond < opt.comptol && costcond < opt.costtol
    converged = 1;
    if opt.verbose
        fprintf('\nConverged!\n');
    end
end

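%% Each iteration below solves a reduced form of the Newton system for the
%% perturbed KKT conditions, with the slack (z) and inequality multiplier
%% (mu) updates eliminated:
%%     [ M   dg ] [  dx  ]   [ -N ]
%%     [ dg'  0 ] [ dlam ] = [ -g ]
%% where M = Lxx + dh * Z^-1 * diag(mu) * dh'
%%   and N = Lx  + dh * Z^-1 * (diag(mu) * h + gamma * e).
%% dz and dmu are then recovered from dx.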
%%----- do Newton iterations -----
while (~converged && i < opt.max_it)
    %% update iteration counter
    i = i + 1;

    %% compute update step
    lambda = struct('eqnonlin', lam(1:neqnln), 'ineqnonlin', mu(1:niqnln));
    if nonlinear
        if isempty(hess_fcn)
            error('mips: Hessian evaluation via finite differences not yet implemented.\nPlease provide your own hessian evaluation function.');
        end
        Lxx = hess_fcn(x, lambda, opt.cost_mult);
    else
        [f_, df_, d2f] = f_fcn(x);      %% cost
        Lxx = d2f * opt.cost_mult;
    end
    zinvdiag = sparse(1:niq, 1:niq, 1 ./ z, niq, niq);
    mudiag = sparse(1:niq, 1:niq, mu, niq, niq);
    dh_zinv = dh * zinvdiag;
    M = Lxx + dh_zinv * mudiag * dh';
    N = Lx + dh_zinv * (mudiag * h + gamma * e);
    dxdlam = [M dg; dg' sparse(neq, neq)] \ [-N; -g];
%     AAA = [
%         M dg;
%         dg' sparse(neq, neq)
%     ];
%     rc = 1/condest(AAA);
%     if rc < 1e-22
%         fprintf('my RCOND = %g\n', rc);
%         n = size(AAA, 1);
%         AAA = AAA + 1e-3 * speye(n,n);
%     end
%     bbb = [-N; -g];
%     dxdlam = AAA \ bbb;
    if any(isnan(dxdlam)) || norm(dxdlam) > max_stepsize
        if opt.verbose
            fprintf('\nNumerically Failed\n');
        end
        eflag = -1;
        break;
    end
    dx = dxdlam(1:nx, 1);
    dlam = dxdlam(nx+(1:neq), 1);
    dz = -h - z - dh' * dx;
    dmu = -mu + zinvdiag *(gamma*e - mudiag * dz);

    %% optional step-size control
    sc = 0;
    if opt.step_control
        x1 = x + dx;

        %% evaluate cost, constraints, derivatives at x1
        [f1, df1] = f_fcn(x1);              %% cost
        f1 = f1 * opt.cost_mult;
        df1 = df1 * opt.cost_mult;
        if nonlinear
            [hn1, gn1, dhn1, dgn1] = gh_fcn(x1); %% nonlinear constraints
            h1 = [hn1; Ai * x1 - bi];       %% inequality constraints
            g1 = [gn1; Ae * x1 - be];       %% equality constraints
            dh1 = [dhn1 Ai'];               %% 1st derivative of inequalities
            dg1 = [dgn1 Ae'];               %% 1st derivative of equalities
        else
            h1 = Ai * x1 - bi;              %% inequality constraints
            g1 = Ae * x1 - be;              %% equality constraints
            dh1 = dh;                       %% 1st derivative of inequalities
            dg1 = dg;                       %% 1st derivative of equalities
        end

        %% check tolerance
        Lx1 = df1 + dg1 * lam + dh1 * mu;
        if isempty(h1)
            maxh1 = zeros(1,0);
        else
            maxh1 = max(h1);
        end
        feascond1 = max([norm(g1, Inf), maxh1]) / (1 + max([ norm(x1, Inf), norm(z, Inf) ]));
        gradcond1 = norm(Lx1, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));

        if feascond1 > feascond && gradcond1 > gradcond
            sc = 1;
        end
    end
    if sc
        alpha = 1;
        for j = 1:opt.sc.red_it
            dx1 = alpha * dx;
            x1 = x + dx1;
            f1 = f_fcn(x1);                 %% cost
            f1 = f1 * opt.cost_mult;
            if nonlinear
                [hn1, gn1] = gh_fcn(x1);    %% nonlinear constraints
                h1 = [hn1; Ai * x1 - bi];   %% inequality constraints
                g1 = [gn1; Ae * x1 - be];   %% equality constraints
            else
                h1 = Ai * x1 - bi;          %% inequality constraints
                g1 = Ae * x1 - be;          %% equality constraints
            end
            L1 = f1 + lam' * g1 + mu' * (h1+z) - gamma * sum(log(z));
            if opt.verbose > 2
                fprintf('\n %3d %10g', -j, norm(dx1));
            end
            rho = (L1 - L) / (Lx' * dx1 + 0.5 * dx1' * Lxx * dx1);
            if rho > rho_min && rho < rho_max
                break;
            else
                alpha = alpha / 2;
            end
        end
        dx = alpha * dx;
        dz = alpha * dz;
        dlam = alpha * dlam;
        dmu = alpha * dmu;
    end

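    %% Limit the primal and dual step lengths so that z and mu stay strictly
    %% positive: each alpha is the fraction xi of the largest step that keeps
    %% every element of z (respectively mu) nonnegative, capped at 1.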
    %% do the update
    k = find(dz < 0);
    if isempty(k)
        alphap = 1;
    else
        alphap = min( [xi * min(z(k) ./ -dz(k)) 1] );
    end
    k = find(dmu < 0);
    if isempty(k)
        alphad = 1;
    else
        alphad = min( [xi * min(mu(k) ./ -dmu(k)) 1] );
    end
    x = x + alphap * dx;
    z = z + alphap * dz;
    lam = lam + alphad * dlam;
    mu = mu + alphad * dmu;
    if niq > 0
        gamma = sigma * (z' * mu) / niq;
    end

    %% evaluate cost, constraints, derivatives
    [f, df] = f_fcn(x);                 %% cost
    f = f * opt.cost_mult;
    df = df * opt.cost_mult;
    if nonlinear
        [hn, gn, dhn, dgn] = gh_fcn(x); %% nonlinear constraints
        h = [hn; Ai * x - bi];          %% inequality constraints
        g = [gn; Ae * x - be];          %% equality constraints
        dh = [dhn Ai'];                 %% 1st derivative of inequalities
        dg = [dgn Ae'];                 %% 1st derivative of equalities
    else
        h = Ai * x - bi;                %% inequality constraints
        g = Ae * x - be;                %% equality constraints
        %% 1st derivatives are constant, still dh = Ai', dg = Ae'
    end

    %% check tolerance
    Lx = df + dg * lam + dh * mu;
    if isempty(h)
        maxh = zeros(1,0);
    else
        maxh = max(h);
    end
    feascond = max([norm(g, Inf), maxh]) / (1 + max([ norm(x, Inf), norm(z, Inf) ]));
    gradcond = norm(Lx, Inf) / (1 + max([ norm(lam, Inf), norm(mu, Inf) ]));
    compcond = (z' * mu) / (1 + norm(x, Inf));
    costcond = abs(f - f0) / (1 + abs(f0));
    %% save history
    hist(i+1) = struct('feascond', feascond, 'gradcond', gradcond, ...
        'compcond', compcond, 'costcond', costcond, 'gamma', gamma, ...
        'stepsize', norm(dx), 'obj', f/opt.cost_mult, ...
        'alphap', alphap, 'alphad', alphad);

    if opt.verbose > 1
        fprintf('\n%3d %12.8g %10.5g %12g %12g %12g %12g', ...
            i, f/opt.cost_mult, norm(dx), feascond, gradcond, compcond, costcond);
    end
    if feascond < opt.feastol && gradcond < opt.gradtol && ...
            compcond < opt.comptol && costcond < opt.costtol
        converged = 1;
        if opt.verbose
            fprintf('\nConverged!\n');
        end
    else
        if any(isnan(x)) || alphap < alpha_min || alphad < alpha_min || ...
                gamma < eps || gamma > 1/eps
            if opt.verbose
                fprintf('\nNumerically Failed\n');
            end
            eflag = -1;
            break;
        end
        f0 = f;
        if opt.step_control
            L = f + lam' * g + mu' * (h+z) - gamma * sum(log(z));
        end
    end
end

if opt.verbose
    if ~converged
        fprintf('\nDid not converge in %d iterations.\n', i);
    end
end

%%----- package up results -----
hist = hist(1:i+1);
if eflag ~= -1
    eflag = converged;
end
output = struct('iterations', i, 'hist', hist, 'message', '');
if eflag == 0
    output.message = 'Did not converge';
elseif eflag == 1
    output.message = 'Converged';
elseif eflag == -1
    output.message = 'Numerically failed';
else
    output.message = 'Please hang up and dial again';
end

%% zero out multipliers on non-binding constraints
mu(h < -opt.feastol & mu < mu_threshold) = 0;

%% un-scale cost and prices
f = f / opt.cost_mult;
lam = lam / opt.cost_mult;
mu = mu / opt.cost_mult;

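%% lam_lin holds the multipliers on the linear equality rows (ll == uu,
%% including any fixed variable bounds): a negative value indicates the lower
%% (l) limit is binding and a positive value the upper (u) limit, hence the
%% kl/ku split into mu_l/mu_u. mu_lin holds the multipliers on the linear
%% inequality rows and is unpacked using the same [ilt; igt; ibx (upper);
%% ibx (lower)] ordering used to build Ai. The first nx entries of mu_l/mu_u
%% correspond to the variable bounds (lower/upper), the remaining nA to the
%% user-supplied linear constraints (mu_l/mu_u).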
%% re-package multipliers into struct
lam_lin = lam((neqnln+1):neq);              %% lambda for linear constraints
mu_lin = mu((niqnln+1):niq);                %% mu for linear constraints
kl = find(lam_lin < 0);                     %% lower bound binding
ku = find(lam_lin > 0);                     %% upper bound binding

mu_l = zeros(nx+nA, 1);
mu_l(ieq(kl)) = -lam_lin(kl);
mu_l(igt) = mu_lin(nlt+(1:ngt));
mu_l(ibx) = mu_lin(nlt+ngt+nbx+(1:nbx));

mu_u = zeros(nx+nA, 1);
mu_u(ieq(ku)) = lam_lin(ku);
mu_u(ilt) = mu_lin(1:nlt);
mu_u(ibx) = mu_lin(nlt+ngt+(1:nbx));

fields = { ...
    'mu_l', mu_l((nx+1):end), ...
    'mu_u', mu_u((nx+1):end), ...
    'lower', mu_l(1:nx), ...
    'upper', mu_u(1:nx) ...
};

if niqnln > 0
    fields = { ...
        'ineqnonlin', mu(1:niqnln), ...
        fields{:} ...
    };
end
if neqnln > 0
    fields = { ...
        'eqnonlin', lam(1:neqnln), ...
        fields{:} ...
    };
end

lambda = struct(fields{:});

% lambda = struct( ...
%     'eqnonlin', lam(1:neqnln), ...
%     'ineqnonlin', mu(1:niqnln), ...
%     'mu_l', mu_l((nx+1):end), ...
%     'mu_u', mu_u((nx+1):end), ...
%     'lower', mu_l(1:nx), ...
%     'upper', mu_u(1:nx) );