java.lang.Object
    edu.stanford.rsl.jpop.fortran.UncminForJava
public class UncminForJava
This class contains Java translations of the UNCMIN unconstrained optimization routines. See R.B. Schnabel, J.E. Koontz, and B.E. Weiss, A Modular System of Algorithms for Unconstrained Minimization, Report CU-CS-240-82, Comp. Sci. Dept., University of Colorado at Boulder, 1982.
IMPORTANT: The "_f77" suffixes indicate that these routines use FORTRAN-style indexing. For example, you will see

    for (i = 1; i <= n; i++)

rather than

    for (i = 0; i < n; i++)

To use the "_f77" routines you will have to declare your vectors and matrices to be one element larger (e.g., v[101] rather than v[100], and a[101][101] rather than a[100][100]), and you will have to fill elements 1 through n rather than elements 0 through n - 1. Versions of these programs that use C/Java style indexing will eventually be available. They will end with the suffix "_j".
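A minimal sketch of what this convention looks like from the caller's side (the variable names here are illustrative only, not taken from the library):

    int n = 100;                                  // problem dimension
    double[] v = new double[n + 1];               // element 0 is never used
    double[][] a = new double[n + 1][n + 1];      // row 0 and column 0 are never used

    for (int i = 1; i <= n; i++) {                // FORTRAN-style: fill elements 1 through n
        v[i] = 1.0;
        for (int j = 1; j <= n; j++) {
            a[i][j] = (i == j) ? 1.0 : 0.0;       // e.g. an identity matrix
        }
    }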
This class was translated by a statistician from a FORTRAN version of UNCMIN. It is NOT an official translation. It wastes memory by failing to use the first elements of vectors. When public domain Java optimization routines become available from the people who produced UNCMIN, then THE CODE PRODUCED BY THE NUMERICAL ANALYSTS SHOULD BE USED.
Meanwhile, if you have suggestions for improving this code, please contact Steve Verrill at steve@ws13.fpl.fs.fed.us.
Constructor Summary
UncminForJava(FunctionController controller)
Method Summary
static void initialize(int n,
double[] x_in_java,
double[] typsiz,
double[] fscale,
int[] method,
int[] iexp,
int[] msg,
int[] ndigit,
int[] itnlim,
int[] iagflg,
int[] iahflg,
double[] dlt,
double[] gradtl,
double[] stepmx,
double[] steptl)
Deprecated.
void optimizeFunction(int n,
double[] x,
OptimizableFunction minclass,
double[] typsiz,
double[] fscale,
int[] method,
int[] iexp,
int[] msg,
int[] ndigit,
int[] itnlim,
int[] iagflg,
int[] iahflg,
double[] dlt,
double[] gradtl,
double[] stepmx,
double[] steptl,
double[] xpls,
double[] fpls,
double[] gpls,
int[] itrmcd,
double[][] a,
double[] udiag,
double[] numericalGradient,
double[] p,
double[] sx,
double[] wrk0,
double[] wrk1,
double[] wrk2,
double[] wrk3)
Deprecated.
void optimizeFunction0(int dimension,
double[] initialX_in_java,
OptimizableFunction function,
double[] vectorX,
double[] functionValueAtX,
double[] gradientAtX,
int[] terminationCode,
double[][] hessianAtX,
double[] diagonalOfHessian)
Deprecated.
void optimizeFunction7(int n,
double[] x_in_java,
OptimizableFunction minclass,
double[] typsiz,
double[] fscale,
int[] method,
int[] iexp,
int[] msg,
int[] ndigit,
int[] itnlim,
int[] iagflg,
int[] iahflg,
double[] dlt,
double[] gradtl,
double[] stepmx,
double[] steptl,
double[] xpls,
double[] fpls,
double[] gpls,
int[] itrmcd,
double[][] a,
double[] udiag)
Deprecated.
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Constructor Detail
public UncminForJava(FunctionController controller)

Method Detail

@Deprecated
public void optimizeFunction0(int dimension, double[] initialX_in_java, OptimizableFunction function, double[] vectorX, double[] functionValueAtX, double[] gradientAtX, int[] terminationCode, double[][] hessianAtX, double[] diagonalOfHessian)
The optif0_f77 method (exposed here as optimizeFunction0) minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. This method provides the simplest user access to the UNCMIN minimization routines: short of a recompile, the user has no control over options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.
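A rough usage sketch (not taken from the library's own tests): the FunctionController argument and the MyFunction class are hypothetical placeholders, the exact OptimizableFunction methods should be checked against OptimizableFunction.java, and the n + 1 array sizes and [1]-based reads simply follow the FORTRAN-style indexing note in the class description.

    int n = 2;                                        // number of arguments of the function
    double[] x0 = new double[n + 1];                  // initial estimate, elements 1..n used
    x0[1] = -1.2;
    x0[2] = 1.0;

    double[] xpls = new double[n + 1];                // final estimate of the minimum point
    double[] fpls = new double[2];                    // function value at the minimum
    double[] gpls = new double[n + 1];                // gradient at the minimum
    int[] itrmcd = new int[2];                        // termination code
    double[][] hessian = new double[n + 1][n + 1];    // Hessian workspace
    double[] udiag = new double[n + 1];               // diagonal-of-Hessian workspace

    // "controller" and "MyFunction" are hypothetical; MyFunction would provide the
    // evaluate/gradient/hessian methods described in the parameter list below.
    UncminForJava uncmin = new UncminForJava(controller);
    uncmin.optimizeFunction0(n, x0, new MyFunction(), xpls, fpls, gpls, itrmcd, hessian, udiag);

    if (itrmcd[1] <= 3) {                             // codes 0-3: the result is probably optimal
        System.out.println("minimum near (" + xpls[1] + ", " + xpls[2] + "), f = " + fpls[1]);
    }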
Parameters:
dimension - The number of arguments of the function to minimize
initialX - The initial estimate of the minimum point
function - A class that implements the OptimizableFunction interface (see the definition in OptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
    1.) a method, evaluate, to minimize. evaluate must have the form
            public static double evaluate(double x[])
        where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
    2.) a method, gradient, that has the form
            public static double [] gradient(double x[])
        where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
    3.) a method, hessian, that has the form
            public static double [] [] hessian(double x[])
        where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
vectorX - The final estimate of the minimum point
functionValueAtX - The value of f_to_minimize at xpls
gradientAtX - The gradient at the local minimum xpls
terminationCode - Termination code
    ITRMCD = 0: Optimal solution found
    ITRMCD = 1: Terminated with gradient small, xpls is probably optimal
    ITRMCD = 2: Terminated with stepsize small, xpls is probably optimal
    ITRMCD = 3: Lower point cannot be found, xpls is probably optimal
    ITRMCD = 4: Iteration limit (150) exceeded
    ITRMCD = 5: Too many large steps, function may be unbounded
hessianAtX - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
diagonalOfHessian - Workspace for the diagonal of the Hessian

@Deprecated
public void optimizeFunction7(int n, double[] x_in_java, OptimizableFunction minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag)
The optif9_f77 method (exposed here as optimizeFunction7) minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. This method provides complete user access to the UNCMIN minimization routines; the user has full control over the options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.
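A hedged sketch of the full-control path, continuing the hypothetical setup from the optimizeFunction0 example above. The scalar option holders are assumed to use slot [1], in keeping with the FORTRAN-indexing note, and the chosen option values are illustrative only.

    // Holders for the scalar options (slot [1] assumed to be the one that is read).
    double[] typsiz = new double[n + 1];
    double[] fscale = new double[2];
    int[] method = new int[2], iexp = new int[2], msg = new int[2], ndigit = new int[2],
          itnlim = new int[2], iagflg = new int[2], iahflg = new int[2];
    double[] dlt = new double[2], gradtl = new double[2], stepmx = new double[2], steptl = new double[2];

    // Fill every option with its default value, then override selectively.
    UncminForJava.initialize(n, x0, typsiz, fscale, method, iexp, msg, ndigit,
                             itnlim, iagflg, iahflg, dlt, gradtl, stepmx, steptl);
    method[1] = 2;     // double dogleg
    itnlim[1] = 500;   // allow more iterations

    uncmin.optimizeFunction7(n, x0, new MyFunction(), typsiz, fscale, method, iexp, msg,
                             ndigit, itnlim, iagflg, iahflg, dlt, gradtl, stepmx, steptl,
                             xpls, fpls, gpls, itrmcd, hessian, udiag);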
Parameters:
n - The number of arguments of the function to minimize
x - The initial estimate of the minimum point
minclass - A class that implements the OptimizableFunction interface (see the definition in GradientOptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
    1.) a method, evaluate, to minimize. evaluate must have the form
            public static double evaluate(double x[])
        where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
    2.) a method, gradient, that has the form
            public static double [] gradient(double x[])
        where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
    3.) a method, hessian, that has the form
            public static double [] [] hessian(double x[])
        where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
typsiz - Typical size for each component of x
fscale - Estimate of the scale of the objective function
method - Algorithm to use to solve the minimization problem
    = 1 line search
    = 2 double dogleg
    = 3 More-Hebdon
iexp - = 1 if the optimization function f_to_minimize is expensive to evaluate, = 0 otherwise. If iexp = 1, then the Hessian will be evaluated by secant update rather than analytically or by finite differences.
msg - Message to inhibit certain automatic checks and output
ndigit - Number of good digits in the minimization function
itnlim - Maximum number of allowable iterations
iagflg - = 0 if an analytic gradient is not supplied
iahflg - = 0 if an analytic Hessian is not supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
xpls - The final estimate of the minimum point
fpls - The value of f_to_minimize at xpls
gpls - The gradient at the local minimum xpls
itrmcd - Termination code
    ITRMCD = 0: Optimal solution found
    ITRMCD = 1: Terminated with gradient small, xpls is probably optimal
    ITRMCD = 2: Terminated with stepsize small, xpls is probably optimal
    ITRMCD = 3: Lower point cannot be found, xpls is probably optimal
    ITRMCD = 4: Iteration limit (150) exceeded
    ITRMCD = 5: Too many large steps, function may be unbounded
a - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
udiag - Workspace for the diagonal of the Hessian

@Deprecated
public static void initialize(int n, double[] x_in_java, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl)
The dfault_f77 method (exposed here as initialize) sets default values for each input variable to the minimization algorithm. Translated by Steve Verrill, August 4, 1998.
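For example, a caller might invoke it only to see what the defaults are before deciding which ones to override (same hypothetical holders as in the examples above; the [1] slot is assumed):

    UncminForJava.initialize(n, x0, typsiz, fscale, method, iexp, msg, ndigit,
                             itnlim, iagflg, iahflg, dlt, gradtl, stepmx, steptl);
    System.out.println("default method = " + method[1]);
    System.out.println("default itnlim = " + itnlim[1]);
    System.out.println("default gradtl = " + gradtl[1]);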
Parameters:
n - Dimension of the problem
x - Initial estimate of the solution (to compute max step size)
typsiz - Typical size for each component of x
fscale - Estimate of the scale of the minimization function
method - Algorithm to use to solve the minimization problem
iexp - = 0 if the minimization function is not expensive to evaluate
msg - Message to inhibit certain automatic checks and output
ndigit - Number of good digits in the minimization function
itnlim - Maximum number of allowable iterations
iagflg - = 0 if an analytic gradient is not supplied
iahflg - = 0 if an analytic Hessian is not supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - "Value of zero to trip default maximum in optchk"
steptl - Tolerance at which successive iterates are considered close enough to terminate the algorithm

@Deprecated
public void optimizeFunction(int n, double[] x, OptimizableFunction minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag, double[] numericalGradient, double[] p, double[] sx, double[] wrk0, double[] wrk1, double[] wrk2, double[] wrk3)
The optdrv_f77 method (exposed here as optimizeFunction) is the driver for the nonlinear optimization problem. Translated by Steve Verrill, May 18, 1998.
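Relative to optimizeFunction7, the driver additionally expects the caller to hand in all of the working storage. A sketch of the extra allocations, again with the assumed n + 1 sizing and the hypothetical objects from the earlier examples:

    double[] numericalGradient = new double[n + 1];   // gradient at the current iterate
    double[] p = new double[n + 1];                    // step
    double[] sx = new double[n + 1];                   // scaling vector
    double[] wrk0 = new double[n + 1];
    double[] wrk1 = new double[n + 1];
    double[] wrk2 = new double[n + 1];
    double[] wrk3 = new double[n + 1];

    uncmin.optimizeFunction(n, x0, new MyFunction(), typsiz, fscale, method, iexp, msg,
                            ndigit, itnlim, iagflg, iahflg, dlt, gradtl, stepmx, steptl,
                            xpls, fpls, gpls, itrmcd, hessian, udiag,
                            numericalGradient, p, sx, wrk0, wrk1, wrk2, wrk3);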
Parameters:
n - The dimension of the problem
x - On entry, estimate of the location of a minimum of f_to_minimize
minclass - A class that implements the OptimizableFunction interface (see the definition in GradientOptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
    1.) a method, evaluate, to minimize. evaluate must have the form
            public static double evaluate(double x[])
        where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
    2.) a method, gradient, that has the form
            public static double [] gradient(double x[])
        where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
    3.) a method, hessian, that has the form
            public static double [] [] hessian(double x[])
        where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
typsiz - Typical size of each component of x
fscale - Estimate of scale of objective function
method - Algorithm indicator
    1 -- line search
    2 -- double dogleg
    3 -- More-Hebdon
iexp - Expense flag.
    1 -- optimization function, f_to_minimize, is expensive to evaluate
    0 -- otherwise
    If iexp = 1, the Hessian will be evaluated by secant update rather than analytically or by finite differences.
msg - On input: (> 0) message to inhibit certain automatic checks
    On output: (< 0) error code (= 0, no error)
ndigit - Number of good digits in the optimization function
itnlim - Maximum number of allowable iterations
iagflg - = 1 if an analytic gradient is supplied
iahflg - = 1 if an analytic Hessian is supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - Maximum step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
xpls - On exit: xpls is a local minimum
fpls - On exit: function value at xpls
gpls - On exit: gradient at xpls
itrmcd - Termination code
a - Workspace for Hessian (or its approximation) and its Cholesky decomposition
udiag - Workspace (for diagonal of Hessian)
numericalGradient - Workspace (for gradient at current iterate)
p - Workspace for step
sx - Workspace (for scaling vector)
wrk0 - Workspace
wrk1 - Workspace
wrk2 - Workspace
wrk3 - Workspace