Usage: |
lmin = nmlinminappr(func,fder,x0,direc{,stepmax,nowarn}) or
lmin = nmlinminappr(func,fder,x0,direc{,opt})
|
Input: |
| func | string, name of the function whose minimum is to be
found. The function should have just one
parameter x (n x 1 vector) and return a scalar.
|
| fder | derivatives of func; there are several possible
formats:
1. name of a function (string) for computing the
gradient of func; its output is an n x 1 vector
2. empty string; the gradient will be computed automatically
using the quantlet nmgraddiff with a default step h
3. zero; nmlinminappr first checks
whether the global parameter nmlinminfderval exists.
If it does, its value is used as the derivative;
otherwise the gradient will be computed automatically
using the quantlet nmgraddiff with a default step h, as for fder = ""
4. scalar h; the gradient will be computed
automatically using the quantlet nmgraddiff
with the given step h
|
| x0 | n x 1 vector, starting point for the line minimization
|
| direc | n x 1 vector, direction vector of a line for the line minimization
|
| opt | (optional) list containing one or both of the items
stepmax and nowarn as described below
|
| stepmax | (optional) scalar, limit to the length of the steps
preventing evaluation of func in regions where it is not defined;
if stepmax is not given, nmlinminappr searches for the global parameter
nmlinminstepmax and, if it exists, uses its value as stepmax;
otherwise the default stepmax = 100 will be used
|
| nowarn | (optional) scalar; by default, nowarn = 0.
If nowarn is set to a nonzero value, no warning
will be shown in case of a roundoff problem (only the
output parameter warn will be set to 1)
|
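The automatic gradient options for fder (cases 2 to 4) rely on numerical differentiation with a step h. As a rough illustration only, here is a central-difference gradient in Python; this is a hypothetical sketch, not the actual nmgraddiff quantlet:

```python
# Hypothetical sketch of step-h numerical differentiation, in the spirit of
# what a quantlet like nmgraddiff does (NOT the XploRe implementation).
def num_gradient(f, x, h=1e-6):
    """Central-difference gradient of f at point x (list of floats)."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h          # shift the i-th coordinate forward...
        xm[i] -= h          # ...and backward by the step h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

def ftion(x):
    # same test function as in the examples: f(x) = x1^2 + 3*(x2 - 1)^2
    return x[0]**2 + 3.0 * (x[1] - 1.0)**2

g = num_gradient(ftion, [2.0, 2.0])
print(g)   # analytic gradient at (2, 2) is (4, 6)
```

For this quadratic test function the central difference reproduces the analytic gradient (4, 6) at (2, 2) to high accuracy.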
Output: |
| lmin.xlmin | n x 1 vector, minimum of func on the line x0 + span{direc} |
| lmin.flmin | scalar, minimum function value on the line, f(xlmin) |
| lmin.moved | n x 1 vector, vector displacement during line minimization,
moved = xlmin - x0 |
| lmin.check | scalar; check = 0 on normal exit (numerical convergence achieved),
check = 1 if xlmin is too close to x0.
check = 1 usually means convergence for a minimization algorithm, but
in a root-finding algorithm the calling method should verify the convergence |
| lmin.warn | scalar; warn = 1 in case of a roundoff problem
in the algorithm (the minimum found can then be a bad
approximation of the real one),
warn = 0 otherwise. When warn = 1 and the input parameter
nowarn is zero, a warning window will appear. |
- Example:
library("nummath")
;
; definition of function
;
proc(p)=ftion(x)
p = x[1,]^2 + 3*(x[2,]-1)^2
endp
;
lmin = nmlinminappr("ftion","",#(2,2),#(0,-1))
lmin
;
; minimization for x1 = 2, x2 = 2 - t
; only t >= 0; for the other direction, see the next example
- Result:
Contents of lmin.xlmin
[1,] 2
[2,] 1
Contents of lmin.flmin
[1,] 4
Contents of lmin.moved
[1,] 0
[2,] -1
Contents of lmin.check
[1,] 0
Contents of lmin.warn
[1,] 0
- Example:
library("nummath")
;
; definition of function
;
proc(p)=ftion(x)
p = x[1,]^2 + 3*(x[2,]-1)^2
endp
;
lmin = nmlinminappr("ftion","",#(2,2),#(0,1))
lmin
;
; minimization for x1 = 2, x2 = 2 + t
; only t >= 0!
; this is the uphill direction, hence the minimum is in the starting point
- Result:
Contents of lmin.xlmin
[1,] 2
[2,] 2
Contents of lmin.flmin
[1,] 7
Contents of lmin.moved
[1,] 0
[2,] 0
Contents of lmin.check
[1,] 1
Contents of lmin.warn
[1,] 0
- Example:
library("nummath")
;
; definition of function
;
proc(p)=ftion(x)
p = x[1,]^2 + 3*(x[2,]-1)^2
endp
;
lmin = nmlinminappr("ftion","",#(1,2),#(1,-1))
lmin
;
; minimization for x1 = 1 + t, x2 = 2 - t
- Result:
Contents of lmin.xlmin
[1,] 1.5
[2,] 1.5
Contents of lmin.flmin
[1,] 3
Contents of lmin.moved
[1,] 0.5
[2,] -0.5
Contents of lmin.check
[1,] 0
Contents of lmin.warn
[1,] 0
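The examples above can be checked independently: the task is to minimize f along the ray x0 + t*direc, t >= 0, with t bounded by stepmax. The sketch below uses a plain golden-section search in Python; nmlinminappr itself uses a different, approximate algorithm, so this only mirrors the examples' results, not the method:

```python
# Rough Python analogue of minimizing f along x0 + t*direc for t in [0, stepmax].
# Golden-section search is used here for simplicity; it is NOT the algorithm
# implemented by the nmlinminappr quantlet.
import math

def line_minimize(f, x0, direc, stepmax=100.0, tol=1e-8):
    phi = lambda t: f([xi + t * di for xi, di in zip(x0, direc)])
    a, b = 0.0, stepmax
    invphi = (math.sqrt(5.0) - 1.0) / 2.0      # golden-ratio conjugate
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    t = (a + b) / 2.0
    xlmin = [xi + t * di for xi, di in zip(x0, direc)]
    return xlmin, phi(t)

def ftion(x):
    # test function from the examples: f(x) = x1^2 + 3*(x2 - 1)^2
    return x[0]**2 + 3.0 * (x[1] - 1.0)**2

# Third example: x0 = (1, 2), direc = (1, -1); expect xlmin = (1.5, 1.5), f = 3
xlmin, flmin = line_minimize(ftion, [1.0, 2.0], [1.0, -1.0])
print(xlmin, flmin)
```

For the second example (the uphill direction (0, 1) from (2, 2)) the restriction t >= 0 pins the search to the starting point, matching xlmin = (2, 2) above.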