Library: | nummath |
See also: | nmgolden nmgraditer nmgraddiff nmBFGS |
Quantlet: | nmcongrad |
Description: | conjugate gradient method for finding the minimum of a given function |
Usage: | min = nmcongrad(func,x0{,fder,linmin,ftol,maxiter}) |
Input: | ||
func | string, name of the function whose minimum is to be found. The function should take a single parameter x, which is either an n x 1 vector, if fder is given (see below), or an n x k matrix (for any k >= 1) containing x = (x1,x2,...,xk), if the gradient is to be computed automatically; each xi (an n x 1 vector, i = 1..k) represents a point at which the function is evaluated. Accordingly, the function should return a scalar or a 1 x k vector, respectively. | |
x0 | n x 1 vector, the initial estimate of the minimum | |
fder | (optional) derivatives of func; several formats are possible:
1. string, name of a function computing the gradient of func; its output is an n x 1 vector
2. empty string; the gradient is then computed automatically by the quantlet nmgraddiff with a default step h
3. scalar h; the gradient is computed automatically by the quantlet nmgraddiff with the given step h | |
linmin | optional string, name of the routine for 1D (line) minimization; default linmin = "nmlinminder" | |
ftol | optional scalar; convergence tolerance on the function value; default ftol = 1e-7 | |
maxiter | optional scalar; maximal number of iterations; default maxiter = 250 | |
Output: | ||
min.xmin | minimum of func (isolated to a fractional precision of ftol) | |
min.fmin | minimum function value f(xmin) | |
min.iter | number of iterations performed |
library("nummath")
;
; definition of function
;
proc(p)=ftion(x)
  p = x[1,]^2 + 3*(x[2,]-1)^4
endp
;
nmcongrad("ftion",#(1,2))
;
; search for a minimum of f = x1^2 + 3*(x2-1)^4 with
; initial estimate x0 = (1,2)
Contents of _tmp.xmin
[1,]   5.1756e-12
[2,]            1
Contents of _tmp.fmin
[1,]   2.6789e-23
Contents of _tmp.iter
[1,]           10
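As a sanity check on the result above: the gradient of f(x) = x1^2 + 3*(x2-1)^4 is (2*x1, 12*(x2-1)^3), which vanishes exactly at (0, 1) with f = 0, consistent with the reported xmin and fmin. A minimal Python check (illustrative, not XploRe):

```python
def f(x):
    # the objective from the example above
    return x[0] ** 2 + 3 * (x[1] - 1) ** 4

def grad_f(x):
    # hand-derived gradient; zero exactly at (0, 1)
    return [2 * x[0], 12 * (x[1] - 1) ** 3]

print(grad_f([0.0, 1.0]))  # [0.0, 0.0]
print(f([0.0, 1.0]))       # 0.0
```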
library("nummath")
;
; variables: simulated probit
;
randomize(1)
n    = 1000
x    = normal(n,2)
eps  = normal(n)
x1   = matrix(n)~x
beta = #(0.5,1,-2)
y    = x1 * beta + eps
yd   = (y >= 0)
;
; definition of probit likelihood function
; L = log prod(cdfn(x1*beta)^yd * (1-cdfn(x1*beta))^(1-yd)) ; = sum log(...)
; for numerical reasons, we will define it rather as follows:
;
proc(L)=ftion(beta)
  yd = getglobal("yd")
  x1 = getglobal("x1")
  L1 = log((cdfn(x1*beta).*yd) + (1-yd))
  L2 = log(((1-cdfn(x1*beta)).*(1-yd)) + yd)
  L  = -sum(L1) - sum(L2)   ; maximum of likelihood = minimum of L
endp
;
proc(dL)=fderiv(beta)
  yd  = getglobal("yd")
  x1  = getglobal("x1")
  dL1 = pdfn(x1*beta)./(cdfn(x1*beta)+(1-yd)).*x1.*yd
  dL2 = -pdfn(x1*beta)./(1-cdfn(x1*beta)+yd).*x1.*(1-yd)
  dL  = trans(-sum(dL1) - sum(dL2))   ; defined this way for numerical reasons
endp
;
min = nmcongrad("ftion",#(0,0,0),"fderiv","nmlinminder")
min   ; notice that ML(beta = min.xmin) = -min.fmin
Contents of min.xmin
[1,]   0.5358
[2,]   0.93021
[3,]  -2.0262
Contents of min.fmin
[1,]    288.3
Contents of min.iter
[1,]        5
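The stabilization used in ftion above can be mirrored in a short Python sketch (illustrative names, not the nummath API): adding (1 - yd), resp. yd, inside the logarithm makes that term log(1) = 0 for observations whose factor does not contribute, so the argument of the log stays strictly positive while the likelihood value is unchanged.

```python
import math

def cdfn(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def neg_loglik(beta, x1, yd):
    """Probit negative log-likelihood, stabilized like ftion above:
    for yd = 1 the second log term is 0, for yd = 0 the first is."""
    L = 0.0
    for row, y in zip(x1, yd):
        p = cdfn(sum(b * v for b, v in zip(beta, row)))
        L -= math.log(p * y + (1 - y))        # -log(p)   when y = 1, else 0
        L -= math.log((1 - p) * (1 - y) + y)  # -log(1-p) when y = 0, else 0
    return L
```

On data where no fitted probability is exactly 0 or 1, this agrees with the textbook form -sum(yd*log(p) + (1-yd)*log(1-p)), but it never evaluates log(0).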