Title: R Interface to the Levenberg-Marquardt Nonlinear Least-Squares Algorithm Found in MINPACK, Plus Support for Bounds
Description: The nls.lm function provides an R interface to lmder and lmdif from the MINPACK library, for solving nonlinear least-squares problems by a modification of the Levenberg-Marquardt algorithm, with support for lower and upper parameter bounds. The implementation can be used via nls-like calls using the nlsLM function.
Authors: Timur V. Elzhov, Katharine M. Mullen, Andrej-Nikolai Spiess, Ben Bolker
Maintainer: Katharine M. Mullen <[email protected]>
License: GPL-3
Version: 1.2-4
Built: 2024-11-20 05:35:00 UTC
Source: https://github.com/cran/minpack.lm
The purpose of nls.lm is to minimize the sum of squares of the vector returned by the function fn, by a modification of the Levenberg-Marquardt algorithm. The user may also provide a function jac which calculates the Jacobian.
nls.lm(par, lower=NULL, upper=NULL, fn, jac = NULL, control = nls.lm.control(), ...)
par | A list or numeric vector of starting estimates. If par is a list, then each element must be of length 1.
lower | A numeric vector of lower bounds on each parameter. If not given, the default lower bound for each parameter is set to -Inf.
upper | A numeric vector of upper bounds on each parameter. If not given, the default upper bound for each parameter is set to Inf.
fn | A function that returns a vector of residuals, the sum of squares of which is to be minimized. The first argument of fn must be par.
jac | A function to return the Jacobian for the fn function.
control | An optional list of control settings. See nls.lm.control for the names of the settable control values and their effect.
... | Further arguments to be passed to fn and jac.
Both functions fn and jac (if provided) must return numeric vectors. The length of the vector returned by fn must not be lower than the length of par. The vector returned by jac must have length equal to length(fn(par, ...)) * length(par). The control argument is a list; see nls.lm.control for details.
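As an illustration of these shapes, a minimal sketch follows (hypothetical helper names resFun and jacFun; the column-by-column layout mirrors Example 2 below). For a model y = a * exp(b * x) evaluated at m points with n = 2 parameters, jac returns the m x n Jacobian of the residuals as one numeric vector:

## sketch: residuals and residual Jacobian for y = a * exp(b * x)
resFun <- function(par, xx, yy) yy - par$a * exp(par$b * xx)
jacFun <- function(par, xx, yy)
    ## derivatives of the residuals w.r.t. a, then w.r.t. b,
    ## concatenated column by column: length = length(xx) * 2
    c(-exp(par$b * xx),                ## d(residual)/d(a)
      -par$a * xx * exp(par$b * xx))   ## d(residual)/d(b)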
Successful completion.
The accuracy of nls.lm is controlled by the convergence parameters ftol, ptol, and gtol. These parameters are used in tests which make three types of comparisons between the approximation $par$ and a solution $par^*$. nls.lm terminates when any of the tests is satisfied. If any of the convergence parameters is less than the machine precision, then nls.lm only attempts to satisfy the test defined by the machine precision. Further progress is not usually possible.
The tests assume that fn as well as jac are reasonably well behaved. If this condition is not satisfied, then nls.lm may incorrectly indicate convergence. The validity of the answer can be checked, for example, by rerunning nls.lm with tighter tolerances.
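For instance, reusing the objects from Example 1 below, a sketch of such a check restarts from the returned estimates with tighter tolerances and verifies that the parameters barely move:

## sketch: rerun from the fitted estimates with tighter tolerances
refit <- nls.lm(par = as.list(coef(nls.out)), fn = residFun,
                observed = simDNoisy, xx = x,
                control = nls.lm.control(ftol = 1e-12, ptol = 1e-12))
all.equal(coef(nls.out), coef(refit), tolerance = 1e-4)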
First convergence test.
If $\|z\|$ denotes the Euclidean norm of a vector $z$, then this test attempts to guarantee that
$$\|fvec\| \le (1 + ftol)\,\|fvec^*\|,$$
where $fvec^*$ denotes the result of the fn function evaluated at the solution $par^*$. If this condition is satisfied with ftol $\simeq 10^{-k}$, then the final residual norm $\|fvec\|$ has $k$ significant decimal digits and info is set to 1 (or to 3 if the second test is also satisfied). Unless high precision solutions are required, the recommended value for ftol is the square root of the machine precision.
Second convergence test.
If $D$ is the diagonal matrix whose entries are defined by the array diag, then this test attempts to guarantee that
$$\|D\,(par - par^*)\| \le ptol\,\|D\,par^*\|.$$
If this condition is satisfied with ptol $\simeq 10^{-k}$, then the larger components of $D\,par$ have $k$ significant decimal digits and info is set to 2 (or to 3 if the first test is also satisfied). There is a danger that the smaller components of $D\,par$ may have large relative errors, but if diag is internally set, then the accuracy of the components of $par$ is usually related to their sensitivity. Unless high precision solutions are required, the recommended value for ptol is the square root of the machine precision.
Third convergence test.
This test is satisfied when the cosine of the angle between the result of the fn evaluation, $fvec$, and any column of the Jacobian at $par$ is at most gtol in absolute value. There is no clear relationship between this test and the accuracy of nls.lm, and furthermore, the test is equally well satisfied at other critical points, namely maximizers and saddle points. Therefore, termination caused by this test (info = 4) should be examined carefully. The recommended value for gtol is zero.
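A sketch of such an examination (assuming a completed fit nls.out, as in Example 1 below):

## info = 4 can also flag maximizers and saddle points,
## so inspect the fit rather than accepting it blindly
if (nls.out$info == 4)
    warning("gradient test triggered: ", nls.out$message)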
Unsuccessful completion.
Unsuccessful termination of nls.lm can be due to improper input parameters, arithmetic interrupts, an excessive number of function evaluations, or an excessive number of iterations.
Improper input parameters.
info is set to 0 if length(par) = 0, or length(fvec) < length(par), or ftol < 0, or ptol < 0, or gtol < 0, or maxfev <= 0, or factor <= 0.
Arithmetic interrupts.
If these interrupts occur in the fn function during an early stage of the computation, they may be caused by an unacceptable choice of par by nls.lm. In this case, it may be possible to remedy the situation by rerunning nls.lm with a smaller value of factor.
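For example (a sketch reusing the Example 1 objects below), factor can be reduced from its default of 100 through the control list:

## sketch: restart with a smaller initial step bound
nls.out <- nls.lm(par = parStart, fn = residFun,
                  observed = simDNoisy, xx = x,
                  control = nls.lm.control(factor = 10))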
Excessive number of function evaluations.
A reasonable value for maxfev is 100*(length(par) + 1). If the number of calls to fn reaches maxfev, then this indicates that the routine is converging very slowly as measured by the progress of $fvec$, and info is set to 5. In this case, it may be helpful to force diag to be internally set.
Excessive number of iterations.
The allowed number of iterations defaults to 50 and can be increased if desired via the maxiter element of control.
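Both limits are set through nls.lm.control; for example:

## raise the iteration and function-evaluation limits before refitting
ctrl <- nls.lm.control(maxiter = 200, maxfev = 2000)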
The list returned by nls.lm has methods for the generic functions coef, deviance, df.residual, print, residuals, summary, confint, and vcov.
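For instance (a sketch, assuming the fit nls.out from Example 1 below):

coef(nls.out)     ## parameter estimates
deviance(nls.out) ## residual sum of squares
vcov(nls.out)     ## variance-covariance matrix of the estimates
confint(nls.out)  ## confidence intervals for the parameters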
A list with components:
par | The best set of parameters found.
hessian | A symmetric matrix giving an estimate of the Hessian at the solution found.
fvec | The result of the last fn evaluation; that is, the residuals.
info | An integer code indicating the reason for termination; see Details.
message | A character string indicating the reason for termination.
diag | The result list of diag; see Details.
niter | The number of iterations completed before termination.
rsstrace | The residual sum of squares at each iteration; can be used to check the progress of each iteration.
deviance | The sum of the squared residual vector.
The public domain FORTRAN sources of the MINPACK package by J.J. Moré, implementing the Levenberg-Marquardt algorithm, were downloaded from https://netlib.org/minpack/ and left unchanged. The contents of this manual page are largely extracted from the comments of the MINPACK sources.
J.J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics 630: Numerical Analysis, G.A. Watson (Ed.), Springer-Verlag: Berlin, 1978, pp. 105-116.
###### example 1

## values over which to simulate data
x <- seq(0, 5, length = 100)

## model based on a list of parameters
getPred <- function(parS, xx) parS$a * exp(xx * parS$b) + parS$c

## parameter values used to simulate data
pp <- list(a = 9, b = -1, c = 6)

## simulated data, with noise
simDNoisy <- getPred(pp, x) + rnorm(length(x), sd = .1)

## plot data
plot(x, simDNoisy, main = "data")

## residual function
residFun <- function(p, observed, xx) observed - getPred(p, xx)

## starting values for parameters
parStart <- list(a = 3, b = -.001, c = 1)

## perform fit
nls.out <- nls.lm(par = parStart, fn = residFun, observed = simDNoisy,
                  xx = x, control = nls.lm.control(nprint = 1))

## plot model evaluated at final parameter estimates
lines(x, getPred(as.list(coef(nls.out)), x), col = 2, lwd = 2)

## summary information on parameter estimates
summary(nls.out)

###### example 2

## function to simulate data
f <- function(TT, tau, N0, a, f0) {
    expr <- expression(N0 * exp(-TT/tau) * (1 + a * cos(f0 * TT)))
    eval(expr)
}

## helper function for an analytical gradient
j <- function(TT, tau, N0, a, f0) {
    expr <- expression(N0 * exp(-TT/tau) * (1 + a * cos(f0 * TT)))
    c(eval(D(expr, "tau")), eval(D(expr, "N0")),
      eval(D(expr, "a")), eval(D(expr, "f0")))
}

## values over which to simulate data
TT <- seq(0, 8, length = 501)

## parameter values underlying simulated data
p <- c(tau = 2.2, N0 = 1000, a = 0.25, f0 = 8)

## get data
Ndet <- do.call("f", c(list(TT = TT), as.list(p)))

## with noise
N <- Ndet + rnorm(length(Ndet), mean = Ndet, sd = .01 * max(Ndet))

## plot the data to fit
par(mfrow = c(2, 1), mar = c(3, 5, 2, 1))
plot(TT, N, bg = "black", cex = 0.5, main = "data")

## define a residual function
fcn <- function(p, TT, N, fcall, jcall)
    (N - do.call("fcall", c(list(TT = TT), as.list(p))))

## define analytical expression for the gradient
fcn.jac <- function(p, TT, N, fcall, jcall)
    -do.call("jcall", c(list(TT = TT), as.list(p)))

## starting values
guess <- c(tau = 2.2, N0 = 1500, a = 0.25, f0 = 10)

## perform fit, supplying the analytical gradient via jac = fcn.jac
out <- nls.lm(par = guess, fn = fcn, jac = fcn.jac,
              fcall = f, jcall = j, TT = TT, N = N,
              control = nls.lm.control(nprint = 1))

## get the fitted values
N1 <- do.call("f", c(list(TT = TT), out$par))

## add a blue line representing the fitted values to the plot of data
lines(TT, N1, col = "blue", lwd = 2)

## add a plot of the log residual sum of squares as it decreases each
## iteration; note that the RSS at the starting parameter
## values is also stored
plot(1:(out$niter + 1), log(out$rsstrace), type = "b",
     main = "log residual sum of squares vs. iteration number",
     xlab = "iteration", ylab = "log residual sum of squares",
     pch = 21, bg = 2)

## get information regarding standard errors
summary(out)
Allow the user to set some characteristics of the Levenberg-Marquardt nonlinear least-squares algorithm implemented in nls.lm.
nls.lm.control(ftol = sqrt(.Machine$double.eps),
               ptol = sqrt(.Machine$double.eps),
               gtol = 0, diag = list(), epsfcn = 0,
               factor = 100, maxfev = integer(),
               maxiter = 50, nprint = 0)
ftol | non-negative numeric. Termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol. Therefore, ftol measures the relative error desired in the sum of squares.
ptol | non-negative numeric. Termination occurs when the relative error between two consecutive iterates is at most ptol. Therefore, ptol measures the relative error desired in the approximate solution.
gtol | non-negative numeric. Termination occurs when the cosine of the angle between the result of the fn evaluation and any column of the Jacobian is at most gtol in absolute value. Therefore, gtol measures the orthogonality desired between the function vector and the columns of the Jacobian.
diag | a list or numeric vector containing positive entries that serve as multiplicative scale factors for the parameters. The length of diag should be equal to that of par. If not, user-provided diag is ignored and diag is internally set.
epsfcn | (used if jac is not provided) a numeric used in determining a suitable step for the forward-difference approximation. This approximation assumes that the relative errors in the functions are of the order of epsfcn. If epsfcn is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.
factor | positive numeric, used in determining the initial step bound. This bound is set to the product of factor and $\|diag \cdot par\|$ if nonzero, or else to factor itself. In most cases factor should lie in the interval (0.1, 100); 100 is a generally recommended value.
maxfev | integer; termination occurs when the number of calls to fn has reached maxfev. Note that nls.lm sets the value of maxfev to 100*(length(par) + 1) if maxfev = integer(), where par is the list or vector of parameters to be optimized.
maxiter | positive integer. Termination occurs when the number of iterations reaches maxiter.
nprint | integer; set nprint to be positive to enable printing of iterates.
A list with exactly nine components: ftol, ptol, gtol, diag, epsfcn, factor, maxfev, maxiter, and nprint, with meanings as explained under 'Arguments'.
J.J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics 630: Numerical Analysis, G.A. Watson (Ed.), Springer-Verlag: Berlin, 1978, pp. 105-116.
nls.lm.control(maxiter = 4)
nlsLM is a modified version of nls that uses nls.lm for fitting. Since an object of class 'nls' is returned, all generic functions such as anova, coef, confint, deviance, df.residual, fitted, formula, logLik, predict, print, profile, residuals, summary, update, vcov and weights are applicable.
nlsLM(formula, data = parent.frame(), start, jac = NULL,
      algorithm = "LM", control = nls.lm.control(),
      lower = NULL, upper = NULL, trace = FALSE, subset,
      weights, na.action, model = FALSE, ...)
formula | a nonlinear model formula including variables and parameters. Will be coerced to a formula if necessary.
data | an optional data frame in which to evaluate the variables in formula and weights.
start | a named list or named numeric vector of starting estimates.
jac | A function to return the Jacobian.
algorithm | only method "LM" (Levenberg-Marquardt) is implemented.
control | an optional list of control settings. See nls.lm.control for the names of the settable control values and their effect.
lower | A numeric vector of lower bounds on each parameter. If not given, the default lower bound for each parameter is set to -Inf.
upper | A numeric vector of upper bounds on each parameter. If not given, the default upper bound for each parameter is set to Inf.
trace | logical value indicating if a trace of the iteration progress should be printed. Default is FALSE.
subset | an optional vector specifying a subset of observations to be used in the fitting process.
weights | an optional numeric vector of (fixed) weights. When present, the objective function is weighted least squares. See the wfct function for options for easy specification of weighting schemes.
na.action | a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options.
model | logical. If true, the model frame is returned as part of the object. Default is FALSE.
... | Additional optional arguments. None are used at present.
The standard nls function was modified in several ways to incorporate the Levenberg-Marquardt type nls.lm fitting algorithm. The formula is transformed into a function that returns a vector of (weighted) residuals whose sum of squares is minimized by nls.lm. The optimized parameters are then transferred to nlsModel in order to obtain an object of class 'nlsModel'. The internal C functions C_nls_iter and nls_port_fit were removed to avoid subsequent "Gauss-Newton", "port" or "plinear" types of optimization of nlsModel. Several other small modifications were made in order to make all generic functions work on the output.
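Because nlsLM passes lower and upper through to nls.lm, box constraints can be imposed directly. A minimal sketch (constraining the Puromycin fit from the examples below; the bounds are chosen here purely for illustration):

## bounds are matched to the parameters in the order given in 'start'
Treated <- Puromycin[Puromycin$state == "treated", ]
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = list(Vm = 200, K = 0.1),
      lower = c(0, 0), upper = c(250, 1))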
A list with components:

m | an nlsModel object incorporating the model.
data | the expression that was passed to nlsLM as the data argument. The actual data values are present in the environment of the m component.
call | the matched call.
convInfo | a list with convergence information.
control | the control list used; see the control argument.
na.action | the "na.action" attribute (if any) of the model frame.
dataClasses | the "dataClasses" attribute (if any) of the "terms" attribute of the model frame.
model | if model = TRUE, the model frame.
weights | if weights is supplied, the weights.
Andrej-Nikolai Spiess and Katharine M. Mullen
Bates, D. M. and Watts, D. G. (1988) Nonlinear Regression Analysis and Its Applications, Wiley
Bates, D. M. and Chambers, J. M. (1992) Nonlinear models. Chapter 10 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
J.J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics 630: Numerical Analysis, G.A. Watson (Ed.), Springer-Verlag: Berlin, 1978, pp. 105-116.
nls.lm, nls, nls.lm.control, optim
### Examples from 'nls' doc ###

DNase1 <- subset(DNase, Run == 1)

## using a selfStart model
fm1DNase1 <- nlsLM(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1)

## using logistic formula
fm2DNase1 <- nlsLM(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
                   data = DNase1,
                   start = list(Asym = 3, xmid = 0, scal = 1))

## all generics are applicable
coef(fm1DNase1)
confint(fm1DNase1)
deviance(fm1DNase1)
df.residual(fm1DNase1)
fitted(fm1DNase1)
formula(fm1DNase1)
logLik(fm1DNase1)
predict(fm1DNase1)
print(fm1DNase1)
profile(fm1DNase1)
residuals(fm1DNase1)
summary(fm1DNase1)
update(fm1DNase1)
vcov(fm1DNase1)
weights(fm1DNase1)

## weighted nonlinear regression using
## inverse squared variance of the response
## gives same results as original 'nls' function
Treated <- Puromycin[Puromycin$state == "treated", ]
var.Treated <- tapply(Treated$rate, Treated$conc, var)
var.Treated <- rep(var.Treated, each = 2)
Pur.wt1 <- nls(rate ~ (Vm * conc)/(K + conc), data = Treated,
               start = list(Vm = 200, K = 0.1),
               weights = 1/var.Treated^2)
Pur.wt2 <- nlsLM(rate ~ (Vm * conc)/(K + conc), data = Treated,
                 start = list(Vm = 200, K = 0.1),
                 weights = 1/var.Treated^2)
all.equal(coef(Pur.wt1), coef(Pur.wt2))

## 'nlsLM' can fit zero-noise data
## in contrast to 'nls'
x <- 1:10
y <- 2*x + 3

## Not run:
nls(y ~ a + b * x, start = list(a = 0.12345, b = 0.54321))
## End(Not run)

nlsLM(y ~ a + b * x, start = list(a = 0.12345, b = 0.54321))

### Examples from 'nls.lm' doc ###

## values over which to simulate data
x <- seq(0, 5, length = 100)

## model based on a list of parameters
getPred <- function(parS, xx) parS$a * exp(xx * parS$b) + parS$c

## parameter values used to simulate data
pp <- list(a = 9, b = -1, c = 6)

## simulated data with noise
simDNoisy <- getPred(pp, x) + rnorm(length(x), sd = .1)

## make model
mod <- nlsLM(simDNoisy ~ a * exp(b * x) + c,
             start = c(a = 3, b = -0.001, c = 1),
             trace = TRUE)

## plot data
plot(x, simDNoisy, main = "data")

## plot fitted values
lines(x, fitted(mod), col = 2, lwd = 2)

## create declining cosine
## with noise
TT <- seq(0, 8, length = 501)
tau <- 2.2
N0 <- 1000
a <- 0.25
f0 <- 8
Ndet <- N0 * exp(-TT/tau) * (1 + a * cos(f0 * TT))
N <- Ndet + rnorm(length(Ndet), mean = Ndet, sd = .01 * max(Ndet))

## make model
mod <- nlsLM(N ~ N0 * exp(-TT/tau) * (1 + a * cos(f0 * TT)),
             start = c(tau = 2.2, N0 = 1500, a = 0.25, f0 = 10),
             trace = TRUE)

## plot data
plot(TT, N, main = "data")

## plot fitted values
lines(TT, fitted(mod), col = 2, lwd = 2)
Weighting function for the weights argument of nlsLM or nls.

wfct can be supplied to the weights argument of nlsLM or nls, and facilitates the specification of weighting schemes.
wfct(expr)
expr | An expression specifying the weighting scheme as described in the Details section below.
The weighting function can take five different variable definitions and combinations thereof:

the name of the predictor (independent) variable
the name of the response (dependent) variable
error: if replicated response values $y_{ij}$ exist, the error $\sigma(y_{ij})$
fitted: the fitted values of the model
resid: the residuals of the model

For the last two, the model is fit unweighted, fitted values and residuals are extracted, and the model is refit with the defined weights.
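As a sketch (not among the shipped examples), residual-based weights would use the resid variable; note that near-zero residuals make 1/resid^2 unstable, so this is illustrative only:

## sketch: down-weight points with large unweighted residuals
Treated <- Puromycin[Puromycin$state == "treated", ]
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = c(Vm = 200, K = 0.05),
      weights = wfct(1/resid^2))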
The result of evaluating expr in a new environment, yielding the vector of weights to be applied.
Andrej-Nikolai Spiess
### Examples from 'nls' doc ###
## note that 'nlsLM' below may be replaced with calls to 'nls'

Treated <- Puromycin[Puromycin$state == "treated", ]

## Weighting by inverse of response 1/y_i:
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = c(Vm = 200, K = 0.05), weights = wfct(1/rate))

## Weighting by square root of predictor \sqrt{x_i}:
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = c(Vm = 200, K = 0.05), weights = wfct(sqrt(conc)))

## Weighting by inverse square of fitted values 1/\hat{y_i}^2:
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = c(Vm = 200, K = 0.05), weights = wfct(1/fitted^2))

## Weighting by inverse variance 1/\sigma{y_i}^2:
nlsLM(rate ~ Vm * conc/(K + conc), data = Treated,
      start = c(Vm = 200, K = 0.05), weights = wfct(1/error^2))