function [h, g, dh, dg] = nlp_consfcn(om, x, dhs, dgs)
%NLP_CONSFCN Evaluates nonlinear constraints and their Jacobian.
%   [H, G] = NLP_CONSFCN(OM, X)
%   [H, G, DH, DG] = NLP_CONSFCN(OM, X)
%   [H, G, DH, DG] = NLP_CONSFCN(OM, X, DHS, DGS)
%
%   Constraint evaluation function for nonlinear constraints, suitable
%   for use with MIPS, FMINCON, etc. Computes constraint vectors and their
%   gradients.
%
%   Inputs:
%     OM : Opt-Model object
%     X : optimization vector
%     DHS : (optional) sparse matrix with tiny non-zero values specifying
%           the fixed sparsity structure that the resulting DH should match
%     DGS : (optional) sparse matrix with tiny non-zero values specifying
%           the fixed sparsity structure that the resulting DG should match
%
%   Outputs:
%     H : vector of inequality constraint values
%     G : vector of equality constraint values
%     DH : (optional) inequality constraint gradients, column j is
%          gradient of H(j)
%     DG : (optional) equality constraint gradients
%
%   Examples:
%       [h, g] = nlp_consfcn(om, x);
%       [h, g, dh, dg] = nlp_consfcn(om, x);
%       [...] = nlp_consfcn(om, x, dhs, dgs);
%       gh_fcn = @(x)nlp_consfcn(om, x);    %% e.g. wrapped as a constraint
%                                           %% function handle for a solver
%
%   See also NLP_COSTFCN, NLP_HESSFCN.

%   MP-Opt-Model
%   Copyright (c) 1996-2020, Power Systems Engineering Research Center (PSERC)
%   by Ray Zimmerman, PSERC Cornell
%
%   This file is part of MP-Opt-Model.
%   Covered by the 3-clause BSD License (see LICENSE file for details).
%   See https://github.com/MATPOWER/mp-opt-model for more info.
if nargout == 2     %% constraints only
    g = om.eval_nln_constraint(x, 1);           %% equalities
    h = om.eval_nln_constraint(x, 0);           %% inequalities
else                %% constraints and derivatives
    [g, dg] = om.eval_nln_constraint(x, 1);     %% equalities
    [h, dh] = om.eval_nln_constraint(x, 0);     %% inequalities
    dg = dg';
    dh = dh';

    %% force specified sparsity structure
    if nargin > 2
        %% add sparse structure (with tiny values) to current matrices to
        %% ensure that sparsity structure matches that supplied
        dg = dg + dgs;
        dh = dh + dhs;

%         %% check sparsity structure against that supplied
%         if nnz(dg) ~= nnz(dgs)
%             fprintf('=====> nnz(dg) is %d, expected %d <=====\n', nnz(dg), nnz(dgs));
%         else
%             [idgs, jdgs] = find(dgs);
%             [idg, jdg] = find(dg);
%             if any(idg ~= idgs) || any(jdg ~= jdgs)
%                 fprintf('=====> structure of dg is not as expected <=====\n');
%             end
%         end
%         if nnz(dh) ~= nnz(dhs)
%             fprintf('=====> nnz(dh) is %d, expected %d <=====\n', nnz(dh), nnz(dhs));
%         else
%             [idhs, jdhs] = find(dhs);
%             [idh, jdh] = find(dh);
%             if any(idh ~= idhs) || any(jdh ~= jdhs)
%                 fprintf('=====> structure of dh is not as expected <=====\n');
%             end
%         end
    end
end