Solve nonlinear curve-fitting (data-fitting) problems in the least-squares sense. That is, given input data xdata and the observed output ydata, find coefficients x that best fit the equation

   min over x of sum((fun(x,xdata) - ydata).^2)

where xdata and ydata are vectors and fun(x,xdata) is a vector-valued function.
Syntax

x = lsqcurvefit(fun,x0,xdata,ydata)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options,P1,P2,...)
[x,resnorm] = lsqcurvefit(...)
[x,resnorm,residual] = lsqcurvefit(...)
[x,resnorm,residual,exitflag] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...)
Description
lsqcurvefit solves nonlinear data-fitting problems. lsqcurvefit requires a user-defined function to compute the vector-valued function F(x, xdata). The size of the vector returned by the user-defined function must be the same as the size of ydata.
x = lsqcurvefit(fun,x0,xdata,ydata) starts at x0 and finds coefficients x to best fit the nonlinear function fun(x,xdata) to the data ydata (in the least-squares sense). ydata must be the same size as the vector (or matrix) F returned by fun.
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb <= x <= ub.
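For example, a minimal sketch using the three-coefficient model from the Examples section below (the bound values here are illustrative assumptions, not part of this reference page):

lb = [0 0 0];     % each coefficient must be nonnegative
ub = [10 10 10];  % ...and no larger than 10
x = lsqcurvefit(@myfun,x0,xdata,ydata,lb,ub);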
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options) minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters. Pass empty matrices for lb and ub if no bounds exist.
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the function fun. Pass an empty matrix for options to use the default values for options.
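As an illustrative sketch (the function myfun2 and the parameter c are hypothetical, not part of this reference page), the extra arguments follow options and are forwarded to every call of fun:

% In its own M-file myfun2.m (hypothetical):
function F = myfun2(x,xdata,c)
F = x(1)*exp(-c*xdata) + x(2);   % c is the pass-through parameter

% Then, at the command line:
c = 0.5;
% Empty matrices keep the default bounds and options; c follows options
x = lsqcurvefit(@myfun2,x0,xdata,ydata,[],[],[],c);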
[x,resnorm] = lsqcurvefit(...) returns the value of the squared 2-norm of the residual at x: sum((fun(x,xdata)-ydata).^2).

[x,resnorm,residual] = lsqcurvefit(...) returns the value of the residual, fun(x,xdata)-ydata, at the solution x.

[x,resnorm,residual,exitflag] = lsqcurvefit(...) returns a value exitflag that describes the exit condition.

[x,resnorm,residual,exitflag,output] = lsqcurvefit(...) returns a structure output that contains information about the optimization.

[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...) returns the Jacobian of fun at the solution x.
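For instance, a sketch requesting several outputs at once (a positive exitflag generally indicates convergence):

[x,resnorm,residual,exitflag,output] = ...
   lsqcurvefit(@myfun,x0,xdata,ydata);
if exitflag > 0             % positive exitflag indicates convergence
   disp(output.iterations)  % iterations taken, from the output structure
end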
Input Arguments
Function Arguments contains general descriptions of arguments passed in to lsqcurvefit. This section provides function-specific details for fun and options:

fun: The function you want to fit. fun takes a vector x and the data xdata and returns a vector F of objective function values evaluated at x and xdata. The function fun can be specified as a function handle:

x = lsqcurvefit(@myfun,x0,xdata,ydata)

where myfun is a MATLAB function such as

function F = myfun(x,xdata)
F = ...     % Compute function values at x

fun can also be an inline object:

f = inline('x(1)*xdata.^2+x(2)*sin(xdata)','x','xdata');
x = lsqcurvefit(f,x0,xdata,ydata);
Note   fun should return fun(x,xdata), and not the sum-of-squares sum((fun(x,xdata)-ydata).^2). The algorithm implicitly squares and sums fun(x,xdata)-ydata.
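For example, with a hypothetical exponential model, the function body should look like the first line below, never the second:

F = x(1)*exp(x(2)*xdata);                    % correct: vector of model values
% F = sum((x(1)*exp(x(2)*xdata)-ydata).^2);  % wrong: scalar sum of squares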
If the Jacobian can also be computed and the Jacobian parameter is 'on', set by

options = optimset('Jacobian','on')

then the function fun must return, in a second output argument, the Jacobian value J (a matrix) at x. By checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J):

function [F,J] = myfun(x,xdata)
F = ...          % objective function values at x
if nargout > 1   % two output arguments
   J = ...       % Jacobian of the function evaluated at x
end
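As a concrete sketch, the cubic model from the Examples section below could supply its analytic Jacobian (one column per coefficient) like this:

function [F,J] = myfun(x,xdata)
xdata = xdata(:);   % work with a column vector
F = x(1)*xdata.^2 + x(2)*sin(xdata) + x(3)*xdata.^3;
if nargout > 1
   % Column k of J holds the partial derivatives dF/dx(k)
   J = [xdata.^2, sin(xdata), xdata.^3];
end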
Output Arguments
Function Arguments contains general descriptions of arguments returned by lsqcurvefit. This section provides function-specific details for exitflag, lambda, and output:
Note   The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See the examples below.
Optimization options parameters used by lsqcurvefit. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. You can use optimset to set or change the values of these fields in the parameters structure, options. See Optimization Parameters for detailed information.
We start by describing the LargeScale option since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale or medium-scale algorithm. For the large-scale algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as great as the length of x. Furthermore, only the large-scale algorithm handles bound constraints.
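For example, to state a preference for the medium-scale algorithm (myfun, x0, xdata, and ydata as in the Examples section below):

options = optimset('LargeScale','off');  % prefer the medium-scale algorithm
x = lsqcurvefit(@myfun,x0,xdata,ydata,[],[],options);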
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms.
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult: Function handle for the Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form

W = jmfun(Jinfo,Y,flag,p1,p2,...)

where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute the product. Jinfo is the same as the second argument returned by the objective function fun:

[F,Jinfo] = fun(x,p1,p2,...)

and p1,p2,... are the same additional parameters that are passed to lsqcurvefit (and to fun):

lsqcurvefit(fun,...,options,p1,p2,...)

The argument flag determines which product to compute: if flag == 0 then W = J'*(J*Y); if flag > 0 then W = J*Y; if flag < 0 then W = J'*Y. In each case, J is not formed explicitly; lsqcurvefit uses Jinfo to compute the preconditioner.

Note   'Jacobian' must be set to 'on' for Jinfo to be passed from fun to jmfun.
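A minimal sketch of such a multiply function, under the simplifying assumption (not from this page) that Jinfo is itself an explicit, possibly sparse, matrix equal to J; a genuinely structured problem would exploit its structure instead of storing J:

function W = jmfun(Jinfo,Y,flag)
A = Jinfo;          % assumption: Jinfo holds the Jacobian itself
if flag == 0
   W = A'*(A*Y);    % J'*(J*Y)
elseif flag > 0
   W = A*Y;         % J*Y
else
   W = A'*Y;        % J'*Y
end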
Medium-Scale Algorithm Only. These parameters are used only by the medium-scale algorithm:
DerivativeCheck: Compare user-supplied derivatives (Jacobian) to finite-differencing derivatives.

DiffMaxChange: Maximum change in variables for finite differencing.

DiffMinChange: Minimum change in variables for finite differencing.

LevenbergMarquardt: Choose Levenberg-Marquardt over Gauss-Newton algorithm.

LineSearchType: Line search algorithm choice.
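For instance, a hedged sketch combining several of these medium-scale settings via optimset:

% Gauss-Newton with a cubic-polynomial line search, plus a derivative check
options = optimset('LargeScale','off', ...
                   'LevenbergMarquardt','off', ...
                   'LineSearchType','cubicpoly', ...
                   'DerivativeCheck','on');
x = lsqcurvefit(@myfun,x0,xdata,ydata,[],[],options);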
Examples

Vectors of data xdata and ydata are of length n. You want to find coefficients x that give the best fit to the equation

   ydata(i) = x(1)*xdata(i)^2 + x(2)*sin(xdata(i)) + x(3)*xdata(i)^3

First, write an M-file that computes the objective function values:
function F = myfun(x,xdata)
F = x(1)*xdata.^2 + x(2)*sin(xdata) + x(3)*xdata.^3;
Next, invoke an optimization routine:
% Assume you determined xdata and ydata experimentally
xdata = [3.6 7.7 9.3 4.1 8.6 2.8 1.3 7.9 10.0 5.4];
ydata = [16.5 150.6 263.1 24.7 208.5 9.9 2.7 163.9 325.0 54.3];
x0 = [10, 10, 10]           % Starting guess
[x,resnorm] = lsqcurvefit(@myfun,x0,xdata,ydata)
Note that at the time lsqcurvefit is called, xdata and ydata are assumed to exist and to be vectors of the same size. They must be the same size because the value F returned by fun must be the same size as ydata.
After 33 function evaluations, this example gives the solution
x =
    0.2269    0.3385    0.3021

resnorm =        % residual or sum of squares
    6.2950
The residual is not zero because in this case there was some noise (experimental error) in the data.
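To judge the fit visually, one might plot the data against the fitted model (a sketch using standard MATLAB plotting, not part of the original example):

xi = linspace(min(xdata),max(xdata),100);   % fine grid over the data range
plot(xdata,ydata,'o',xi,myfun(x,xi),'-')
legend('data','fitted curve')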
Algorithm

Large-Scale Optimization. By default lsqcurvefit chooses the large-scale algorithm. This algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [1], [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization, and Preconditioned Conjugate Gradients.
Medium-Scale Optimization. lsqcurvefit , with the LargeScale parameter set to 'off' with optimset , uses the Levenberg-Marquardt method with line-search [4], [5], [6]. Alternatively, a Gauss-Newton method [3] with line-search may be selected. The choice of algorithm is made by setting the LevenbergMarquardt parameter with optimset . Setting LevenbergMarquardt to 'off' (and LargeScale to 'off' ) selects the Gauss-Newton method, which is generally faster when the residual is small.
The default line search algorithm, i.e., LineSearchType parameter set to 'quadcubic' , is a safeguarded mixed quadratic and cubic polynomial interpolation and extrapolation method. A safeguarded cubic polynomial method can be selected by setting LineSearchType to 'cubicpoly' . This method generally requires fewer function evaluations but more gradient evaluations. Thus, if gradients are being supplied and can be calculated inexpensively, the cubic polynomial line search method is preferable. The algorithms used are described fully in the Standard Algorithms chapter.
Diagnostics
Large-Scale Optimization. The large-scale code does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), then lsqcurvefit gives the error
Equal upper and lower bounds not permitted.
(lsqcurvefit does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon , fminimax , or fgoalattain for alternative formulations where equality constraints can be included.)
Limitations
The function to be minimized must be continuous. lsqcurvefit may only give local solutions.
lsqcurvefit only handles real variables (the user-defined function must only return real values). When x has complex variables, the variables must be split into real and imaginary parts.
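For instance, a hypothetical complex coefficient z can be carried as two real unknowns, with the model still returning real values:

function F = cplxfun(x,xdata)
z = x(1) + i*x(2);          % rebuild the complex coefficient from real parts
F = real(z*exp(i*xdata));   % model value must remain real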
Large-Scale Optimization. The large-scale method for lsqcurvefit does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, the medium-scale algorithm is used instead. See Table 2-4, Large-Scale Problem Coverage and Requirements, for more information on what problem formulations are covered and what information must be provided.
The preconditioner computation used in the preconditioned conjugate gradient part of the large-scale method forms J'*J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J'*J, may lead to a costly solution process for large problems.
If components of x have no upper (or lower) bounds, then lsqcurvefit prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lower bounds), as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
Currently, if the analytical Jacobian is provided in fun , the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic Jacobian to the finite-difference Jacobian. Instead, use the medium-scale method to check the derivatives with options parameter MaxIter set to zero iterations. Then run the problem with the large-scale method.
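A sketch of that two-step workaround (assuming myfun returns the Jacobian as its second output):

% Step 1: check the derivatives with the medium-scale method, zero iterations
opts = optimset('LargeScale','off','Jacobian','on', ...
                'DerivativeCheck','on','MaxIter',0);
lsqcurvefit(@myfun,x0,xdata,ydata,[],[],opts);

% Step 2: solve the problem with the large-scale method
opts = optimset('LargeScale','on','Jacobian','on');
x = lsqcurvefit(@myfun,x0,xdata,ydata,[],[],opts);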
Medium-Scale Optimization. The medium-scale algorithm does not handle bound constraints.
Since the large-scale algorithm does not handle underdetermined systems and the medium-scale algorithm does not handle bound constraints, problems that have both of these characteristics cannot be solved by lsqcurvefit.
References

[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.
[3] Dennis, J. E. Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.
[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied Math. 2, pp. 164-168, 1944.
[5] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal Applied Math. Vol. 11, pp. 431-441, 1963.
[6] Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105-116, 1977.
See Also

linprog | lsqlin