
Library: nummath
See also: nmgraditer nmgraddiff nmcongrad nmlinminappr

Quantlet: nmBHHH
Description: Berndt-Hall-Hall-Hausman method to find a minimum of a given negative log-likelihood function (and maximum of the corresponding likelihood function).

Usage: min = nmBHHH(func,x0,fder{,linmin,ftol,gtol,maxiter,nowarn}) or min = nmBHHH(func,x0,fder{,opt})
Input:
func string, name of the function to minimize, which returns the negative log-likelihood value at a given x. The function should have just one parameter x, which is an m x 1 vector
x0 m x 1 vector, the initial estimate of the minimum
fder string, name of the function computing the gradients of the negative log-likelihood contributions at all data points; its output is an m x n matrix, n being the number of data points, whose columns are the gradients of the negative log-likelihood contributions of the individual data points
opt (optional) list containing all or some of the following items: linmin, ftol, gtol, maxiter and nowarn as described below
linmin (optional) string, name of the routine for 1D (line) minimization; default is linmin = "nmlinmin"
ftol (optional) scalar, reserved for future use; convergence tolerance of the function value; default is ftol = 1e-7
gtol (optional) scalar, convergence tolerance of the value of the function gradient; default is gtol = 1e-9
maxiter (optional) scalar, maximal number of iterations; default is maxiter = 250
nowarn (optional) scalar; default is nowarn = 0. If nowarn is set to a nonzero value, no warnings are shown and nowarn = 1 is passed on to the quantlets called by nmBHHH that have this option
Output:
min.xmin m x 1 vector, minimizer of func (isolated to the specified fractional precision), that is, the maximizer of the log-likelihood function
min.fmin scalar, minimal function value f(xmin) = -log L(xmin)
min.iter scalar, number of performed iterations
min.hessin m x m matrix, approximation of the inverse Hessian of the negative log-likelihood function at xmin; the approximation is based on the sum of outer products of the gradients of the negative log-likelihood contributions
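For illustration, the optional arguments of the first usage form can also be passed positionally; a sketch of such a call, using the function and gradient defined in the example below and purely illustrative tolerance and iteration settings:

  min = nmBHHH("ftion", #(0,0,0), "fder", "nmlinmin", 1e-7, 1e-9, 100, 1)

Here "nmlinmin", 1e-7 and 1e-9 restate the defaults for linmin, ftol and gtol, while maxiter = 100 and nowarn = 1 lower the iteration limit and suppress warnings.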

Example:
library("nummath")
;
; variables: simulated probit
;
randomize(1)
n    = 1000
x    = normal(n,2)
eps  = normal(n)
x1   = matrix(n)~x
beta = #(0.5,1,-2)
y    = x1 * beta + eps
yd   =(y >= 0)
;
; definition of probit likelihood function
; L = log prod(cdfn(x1*beta)^yd *(1-cdfn(x1*beta))^(1-yd))
;		= sum log(...)
; for numerical reasons, we will define it rather as follows:
;
proc(L)=ftion(beta)
  yd = getglobal("yd")
  x1 = getglobal("x1")
  L1 = log((cdfn(x1*beta).*yd) +(1-yd))
  L2 = log(((1-cdfn(x1*beta)).*(1-yd)) + yd)
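  ; since yd is 0/1, L1 equals log(cdfn(x1*beta)) where yd = 1 and 0 elsewhere,
  ; and L2 equals log(1-cdfn(x1*beta)) where yd = 0 and 0 elsewhere;
  ; this avoids the powers ^yd, ^(1-yd) and a product over all observations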
  L  = - sum(L1) - sum(L2)
  ; maximum of likelihood = minimum of L
endp
;
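; gradient of the negative log-likelihood contributions (probit score with opposite sign),
;   -dl_i/dbeta = -( yd_i * pdfn(x_i'beta)/cdfn(x_i'beta)
;                  - (1-yd_i) * pdfn(x_i'beta)/(1-cdfn(x_i'beta)) ) * x_i ,
; returned columnwise as an m x n matrix, as required by nmBHHH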
proc(f)=fder(beta)
  yd = getglobal("yd")
  x1 = getglobal("x1")
  f1 =((pdfn(x1*beta).*yd)) /((cdfn(x1*beta).*yd) +(1-yd))
  f2 =(((-pdfn(x1*beta)).*(1-yd))) /(((1-cdfn(x1*beta)).*(1-yd)) + yd)
  f  = trans((-f1-f2) .* x1 )
endp
;
min = nmBHHH("ftion",#(0,0,0),"fder")
min
; notice that the maximized log-likelihood log L(min.xmin) equals -min.fmin

Result:
Contents of min.xmin
[1,]  0.53581
[2,]  0.93027
[3,]  -2.0261

Contents of min.fmin
[1,]    288.3

Contents of min.iter
[1,]        8

Contents of min.hessin
[1,] -0.0040919 -0.0014901  0.0030772
[2,] -0.0014901 -0.0067684  0.0063876
[3,]  0.0030772  0.0063876 -0.017185



Author: L. Cizkova, P. Cizek, 20030122; license MD*Tech
(C) MD*TECH Method and Data Technologies, 05.02.2006