API Reference¶
Public API¶
optimization — SciPy-backed optimization utilities.
All public symbols are re-exported from optimization.core so
callers can simply write:
from optimization import minimize_bfgs_fd, quadratic_program
This package is a self-contained set of optimization utilities inspired by the Optimization chapter of the IMSL Fortran Numerical Library, backed by scipy.optimize.
- class optimization.OptimizationResult(x, fval, success, message, n_iter=0, n_fev=0, n_gev=0, jacobian=None, hessian=None)¶
Bases: object
Result of an optimization procedure.
- Parameters:
x (ndarray)
fval (float)
success (bool)
message (str)
n_iter (int)
n_fev (int)
n_gev (int)
jacobian (ndarray | None)
hessian (ndarray | None)
- x¶
Solution vector.
- Type:
np.ndarray
- fval¶
Objective function value at solution.
- Type:
float
- success¶
Whether optimization succeeded.
- Type:
bool
- message¶
Description of termination condition.
- Type:
str
- n_iter¶
Number of iterations performed.
- Type:
int
- n_fev¶
Number of function evaluations.
- Type:
int
- n_gev¶
Number of gradient evaluations.
- Type:
int
- jacobian¶
Residual Jacobian (least-squares only).
- Type:
np.ndarray | None
- hessian¶
Hessian approximation (Newton methods only).
- Type:
np.ndarray | None
- class optimization.LinearConstraint(A, lb, ub)¶
Bases: object
Linear constraint: lb <= A @ x <= ub.
- Parameters:
A (ndarray)
lb (ndarray)
ub (ndarray)
- A¶
Constraint matrix of shape (m, n).
- Type:
np.ndarray
- lb¶
Lower bounds of shape (m,). Use -np.inf for no lower bound.
- Type:
np.ndarray
- ub¶
Upper bounds of shape (m,). Use np.inf for no upper bound.
- Type:
np.ndarray
- class optimization.NonlinearConstraint(fun, type, jac=None)¶
Bases: object
Nonlinear constraint.
- Parameters:
fun (Callable)
type (str)
jac (Callable | None)
- fun¶
Constraint function: fun(x) -> float or np.ndarray.
- Type:
Callable
- type¶
'eq' for equality (fun(x) == 0) or 'ineq' for inequality (fun(x) >= 0).
- Type:
str
- jac¶
Optional Jacobian of fun.
- Type:
Callable | None
- optimization.central_difference_gradient(f, x, h=None)¶
Approximate gradient using central differences (IMSL CDGRD equivalent).
Uses the formula: grad[i] = (f(x + h*e_i) - f(x - h*e_i)) / (2*h[i]).
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float.
x (np.ndarray) – Point at which to evaluate gradient, shape (n,).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to eps^(1/3) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate gradient of shape (n,).
- Return type:
np.ndarray
- Raises:
TypeError – If f is not callable.
ValueError – If x is not 1D.
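The central-difference formula above can be sketched directly in NumPy. This is a minimal illustration of the documented formula and default step size, not the package's own implementation:

```python
import numpy as np

def central_diff_grad(f, x, h=None):
    """grad[i] = (f(x + h*e_i) - f(x - h*e_i)) / (2*h[i])."""
    x = np.asarray(x, dtype=float)
    if h is None:
        # Documented default: eps^(1/3) * max(1, abs(x[i])), component-wise.
        h = np.finfo(float).eps ** (1.0 / 3.0) * np.maximum(1.0, np.abs(x))
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h[i]
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h[i])
    return grad

# For f(x) = x0^2 + 3*x1 the analytic gradient at (1, 2) is [2, 3].
f = lambda x: x[0] ** 2 + 3.0 * x[1]
g = central_diff_grad(f, np.array([1.0, 2.0]))
```

Central differences are second-order accurate, so for a quadratic the estimate matches the analytic gradient up to rounding error.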
- optimization.forward_difference_gradient(f, x, h=None)¶
Approximate gradient using forward differences (IMSL FDGRD equivalent).
Uses the formula: grad[i] = (f(x + h*e_i) - f(x)) / h[i].
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float.
x (np.ndarray) – Point at which to evaluate gradient, shape (n,).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to sqrt(eps) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate gradient of shape (n,).
- Return type:
np.ndarray
- Raises:
TypeError – If f is not callable.
ValueError – If x is not 1D.
- optimization.forward_difference_hessian(f, x, h=None)¶
Approximate Hessian using forward differences (IMSL FDHES equivalent).
Uses the second-order forward difference formula for the Hessian.
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float.
x (np.ndarray) – Point at which to evaluate Hessian, shape (n,).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to eps^(1/4) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate Hessian of shape (n, n).
- Return type:
np.ndarray
- Raises:
TypeError – If f is not callable.
ValueError – If x is not 1D.
- optimization.gradient_difference_hessian(f, grad, x, h=None)¶
Approximate Hessian from analytic gradient using finite differences (IMSL GDHES).
Uses the formula: H[i,j] = (grad(x + h*e_j)[i] - grad(x)[i]) / h[j].
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float (not used directly but included for API consistency).
grad (Callable) – Gradient function grad(x: np.ndarray) -> np.ndarray of shape (n,).
x (np.ndarray) – Point at which to evaluate Hessian, shape (n,).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to sqrt(eps) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate Hessian of shape (n, n).
- Return type:
np.ndarray
- Raises:
TypeError – If grad is not callable.
ValueError – If x is not 1D.
- optimization.divided_difference_jacobian(f, x, m, h=None)¶
Approximate Jacobian using divided differences (IMSL DDJAC equivalent).
Uses central differences: J[i,j] = (f(x+h*e_j)[i] - f(x-h*e_j)[i]) / (2*h[j]).
- Parameters:
f (Callable) – Vector function f(x: np.ndarray) -> np.ndarray of shape (m,).
x (np.ndarray) – Point at which to evaluate Jacobian, shape (n,).
m (int) – Number of function outputs (number of rows in Jacobian).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to eps^(1/3) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate Jacobian of shape (m, n).
- Return type:
np.ndarray
- Raises:
TypeError – If f is not callable.
ValueError – If x is not 1D or m < 1.
- optimization.forward_difference_jacobian(f, x, m, h=None)¶
Approximate Jacobian using forward differences (IMSL FDJAC equivalent).
Uses the formula: J[i,j] = (f(x + h*e_j)[i] - f(x)[i]) / h[j].
- Parameters:
f (Callable) – Vector function f(x: np.ndarray) -> np.ndarray of shape (m,).
x (np.ndarray) – Point at which to evaluate Jacobian, shape (n,).
m (int) – Number of function outputs (number of rows in Jacobian).
h (np.ndarray | None) – Step sizes of shape (n,). Defaults to sqrt(eps) * max(1, abs(x[i])) component-wise.
- Returns:
Approximate Jacobian of shape (m, n).
- Return type:
np.ndarray
- Raises:
TypeError – If f is not callable.
ValueError – If x is not 1D or m < 1.
- optimization.check_gradient(f, grad, x, tol=0.0001)¶
Verify user-supplied gradient against central-difference estimate (IMSL CHGRD).
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float.
grad (Callable) – Gradient function grad(x: np.ndarray) -> np.ndarray.
x (np.ndarray) – Point at which to check gradient, shape (n,).
tol (float) – Tolerance for relative error check. Default 1e-4.
- Returns:
- Dictionary with keys:
'max_error' (float): Maximum absolute componentwise error.
'passed' (bool): True if max_error < tol.
'fd_gradient' (np.ndarray): Finite-difference gradient.
'analytic_gradient' (np.ndarray): User-supplied gradient.
- Return type:
dict
- Raises:
TypeError – If f or grad are not callable.
ValueError – If x is not 1D.
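The check described above can be sketched with plain NumPy. This hypothetical `check_grad_sketch` mirrors the documented return dictionary but is only an illustration of the idea, not the package's code:

```python
import numpy as np

def check_grad_sketch(f, grad, x, tol=1e-4):
    """Compare an analytic gradient with a central-difference estimate."""
    x = np.asarray(x, dtype=float)
    h = np.finfo(float).eps ** (1.0 / 3.0) * np.maximum(1.0, np.abs(x))
    fd = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h[i]
        fd[i] = (f(x + e) - f(x - e)) / (2.0 * h[i])
    analytic = np.asarray(grad(x), dtype=float)
    max_error = float(np.max(np.abs(fd - analytic)))
    return {"max_error": max_error, "passed": max_error < tol,
            "fd_gradient": fd, "analytic_gradient": analytic}

# Correct gradient of sum(x^2) is 2*x, so the check should pass.
report = check_grad_sketch(lambda x: np.sum(x ** 2),
                           lambda x: 2.0 * x,
                           np.array([1.0, -2.0, 3.0]))
```

A deliberately wrong `grad` (say, `lambda x: x`) would drive `max_error` well above `tol` and set `passed` to False.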
- optimization.check_hessian(f, hess, x, tol=0.0001)¶
Verify user-supplied Hessian against finite-difference estimate (IMSL CHHES).
- Parameters:
f (Callable) – Scalar function f(x: np.ndarray) -> float.
hess (Callable) – Hessian function hess(x: np.ndarray) -> np.ndarray of shape (n, n).
x (np.ndarray) – Point at which to check Hessian, shape (n,).
tol (float) – Tolerance for max absolute element error. Default 1e-4.
- Returns:
- Dictionary with keys:
'max_error' (float): Maximum absolute elementwise error.
'passed' (bool): True if max_error < tol.
'fd_hessian' (np.ndarray): Finite-difference Hessian.
'analytic_hessian' (np.ndarray): User-supplied Hessian.
- Return type:
dict
- Raises:
TypeError – If f or hess are not callable.
ValueError – If x is not 1D.
- optimization.check_jacobian(f, jac, x, m, tol=0.0001)¶
Verify user-supplied Jacobian against finite-difference estimate (IMSL CHJAC).
- Parameters:
f (Callable) – Vector function f(x: np.ndarray) -> np.ndarray of shape (m,).
jac (Callable) – Jacobian function jac(x: np.ndarray) -> np.ndarray of shape (m, n).
x (np.ndarray) – Point at which to check Jacobian, shape (n,).
m (int) – Number of function outputs.
tol (float) – Tolerance for max absolute element error. Default 1e-4.
- Returns:
- Dictionary with keys:
'max_error' (float): Maximum absolute elementwise error.
'passed' (bool): True if max_error < tol.
'fd_jacobian' (np.ndarray): Finite-difference Jacobian.
'analytic_jacobian' (np.ndarray): User-supplied Jacobian.
- Return type:
dict
- Raises:
TypeError – If f or jac are not callable.
ValueError – If x is not 1D or m < 1.
- optimization.generate_starting_points(n, n_points, bounds=None, seed=None)¶
Generate a grid of starting points for global search (IMSL GGUES equivalent).
Attempts Latin Hypercube Sampling via scipy.stats; falls back to uniform random if scipy.stats is unavailable.
- Parameters:
n (int) – Number of variables (dimension).
n_points (int) – Number of starting points to generate.
bounds (list[tuple[float, float]] | None) – List of (lb, ub) pairs for each dimension. Default: [(-1, 1)] * n.
seed (int | None) – Random seed for reproducibility.
- Returns:
Starting points of shape (n_points, n).
- Return type:
np.ndarray
- Raises:
ValueError – If n < 1, n_points < 1, or bounds has wrong length.
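The Latin Hypercube path mentioned above can be sketched with scipy.stats.qmc directly (an assumed equivalent of what the routine does internally; the exact scaling and fallback logic may differ):

```python
import numpy as np
from scipy.stats import qmc

n, n_points = 3, 8
bounds = [(-1.0, 1.0)] * n  # the documented default bounds

# Latin Hypercube sample in the unit cube, then scale to the bounds.
sampler = qmc.LatinHypercube(d=n, seed=0)
unit = sampler.random(n_points)                  # shape (n_points, n), in [0, 1)
lo = np.array([b[0] for b in bounds])
hi = np.array([b[1] for b in bounds])
points = qmc.scale(unit, lo, hi)                 # shape (n_points, n)
```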
- optimization.minimize_univariate(f, x_guess, bound, step=1.0, x_tol=0.0001, max_fev=1000)¶
Minimize a smooth univariate function using safeguarded quadratic interpolation (IMSL UVMIF).
Uses scipy.optimize.minimize_scalar with method='bounded'. x is restricted to [x_guess - bound, x_guess + bound].
- Parameters:
f (Callable) – Function f(x: float) -> float to minimize.
x_guess (float) – Initial guess.
bound (float) – Positive bound on change from x_guess.
step (float) – Order-of-magnitude estimate of required change. Default 1.0.
x_tol (float) – Required absolute accuracy in final x. Default 1e-4.
max_fev (int) – Max function evaluations. Default 1000.
- Returns:
Result with x (scalar in array), fval.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If bound <= 0 or max_fev < 1.
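Since the routine is documented as wrapping scipy.optimize.minimize_scalar with method='bounded', the underlying call can be sketched as follows (an assumed equivalent, not the package itself):

```python
from scipy.optimize import minimize_scalar

f = lambda x: (x - 1.5) ** 2          # minimum at x = 1.5
x_guess, bound = 0.0, 5.0

# Bounded scalar search on [x_guess - bound, x_guess + bound].
res = minimize_scalar(f, bounds=(x_guess - bound, x_guess + bound),
                      method="bounded", options={"xatol": 1e-4})
```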
- optimization.minimize_univariate_deriv(f, df, x_guess, bound, step=1.0, x_tol=0.0001, max_fev=1000)¶
Minimize a smooth univariate function using function and derivative values (IMSL UVMID).
Uses scipy.optimize.minimize with method='L-BFGS-B' constrained to a bounded interval, augmented with the analytic derivative.
- Parameters:
f (Callable) – Function f(x: float) -> float.
df (Callable) – First derivative df(x: float) -> float.
x_guess (float) – Initial guess.
bound (float) – Positive bound on change from x_guess.
step (float) – Step estimate. Default 1.0.
x_tol (float) – Absolute accuracy. Default 1e-4.
max_fev (int) – Max function evaluations. Default 1000.
- Returns:
Result with x (scalar in array), fval.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or df are not callable.
ValueError – If bound <= 0.
- optimization.minimize_univariate_nonsmooth(f, x_guess, bound, x_tol=0.0001, max_fev=1000)¶
Minimize a nonsmooth univariate function using golden-section search (IMSL UVMGS).
Uses scipy.optimize.minimize_scalar with method='golden'.
- Parameters:
f (Callable) – Nonsmooth function f(x: float) -> float.
x_guess (float) – Initial guess.
bound (float) – Search bound; defines bracket [x_guess-bound, x_guess+bound].
x_tol (float) – Tolerance. Default 1e-4.
max_fev (int) – Max evaluations. Default 1000.
- Returns:
Result with x (scalar in array), fval.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If bound <= 0.
- optimization.minimize_bfgs_fd(f, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize a multivariate function using quasi-Newton BFGS with FD gradient (IMSL UMINF).
- Parameters:
f (Callable) – f(x: np.ndarray) -> float.
x_guess (np.ndarray | None) – Initial guess. Default: zeros of shape (n,).
n (int | None) – Problem dimension. Inferred from x_guess if provided.
x_scale (np.ndarray | None) – Scaling vector. Default: ones.
f_scale (float) – Function scaling factor. Default 1.0.
grad_tol (float | None) – Gradient tolerance. Default: machine-dependent.
step_tol (float | None) – Step tolerance. Default: eps^(2/3).
max_iter (int) – Maximum iterations. Default 100.
max_fev (int) – Maximum function evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev, n_gev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If n is not positive or x_guess has wrong dimension.
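Since the routine is scipy-backed BFGS with a finite-difference gradient, the underlying scipy call can be sketched as follows (a hypothetical equivalent; the package adds scaling and IMSL-style defaults on top):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function; global minimum at (1, 1).
def rosen(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

# BFGS with scipy's internal finite-difference gradient (no jac supplied).
res = minimize(rosen, x0=np.zeros(2), method="BFGS",
               options={"maxiter": 200})
```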
- optimization.minimize_bfgs(f, grad, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize a multivariate function using quasi-Newton BFGS with analytic gradient (IMSL UMING).
- Parameters:
f (Callable) – f(x: np.ndarray) -> float.
grad (Callable) – grad(x: np.ndarray) -> np.ndarray of shape (n,).
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max function evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev, n_gev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad are not callable.
ValueError – If gradient callable does not return correct shape.
- optimization.minimize_newton_fd_hessian(f, grad, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize using Newton-CG with analytic gradient and finite-difference Hessian (IMSL UMIDH).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad are not callable.
ValueError – If inputs are invalid.
- optimization.minimize_newton(f, grad, hess, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize using Newton-CG with analytic gradient and Hessian (IMSL UMIAH).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
hess (Callable) – Analytic Hessian hess(x: np.ndarray) -> np.ndarray of shape (n, n).
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev, hessian.
- Return type:
OptimizationResult
- Raises:
TypeError – If f, grad, or hess are not callable.
ValueError – If inputs are invalid.
- optimization.minimize_conjugate_gradient_fd(f, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize using conjugate gradient with finite-difference gradient (IMSL UMCGF).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If inputs are invalid.
- optimization.minimize_conjugate_gradient(f, grad, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize using conjugate gradient with analytic gradient (IMSL UMCGG).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad are not callable.
ValueError – If inputs are invalid.
- optimization.minimize_nelder_mead(f, x_guess, x_tol=0.0001, f_tol=0.0001, max_iter=None, max_fev=None)¶
Minimize a nonsmooth function using Nelder-Mead simplex method (IMSL UMPOL).
- Parameters:
f (Callable) – Function f(x: np.ndarray) -> float.
x_guess (np.ndarray) – Initial guess, shape (n,).
x_tol (float) – Absolute tolerance on x. Default 1e-4.
f_tol (float) – Absolute tolerance on function value. Default 1e-4.
max_iter (int | None) – Max iterations. Default: 200*n.
max_fev (int | None) – Max function evaluations. Default: 200*n.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If x_guess is not 1D.
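The Nelder-Mead backend can be sketched on a genuinely nonsmooth objective (an assumed equivalent of the underlying scipy call):

```python
import numpy as np
from scipy.optimize import minimize

# Nonsmooth L1-style objective: |x0 - 1| + |x1 + 2|, minimum at (1, -2).
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)

# Derivative-free simplex search; xatol/fatol mirror x_tol/f_tol above.
res = minimize(f, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-4})
```

Gradient-based methods would struggle at the kink; the simplex method does not need derivatives at all.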
- optimization.nonlinear_least_squares_fd(f, m, x_guess=None, n=None, x_scale=None, f_scale=None, grad_tol=None, step_tol=None, f_tol=None, max_iter=100, max_fev=400)¶
Solve nonlinear least-squares using Levenberg-Marquardt with FD Jacobian (IMSL UNLSF).
Minimizes sum(f_i(x)^2) / 2 where f: R^n -> R^m, m >= n. Uses scipy.optimize.least_squares(method='lm', jac='2-point').
- Parameters:
f (Callable) – Residual function f(x: np.ndarray) -> np.ndarray of shape (m,).
m (int) – Number of residual functions.
x_guess (np.ndarray | None) – Initial guess of shape (n,). Default: zeros.
n (int | None) – Number of variables. Inferred from x_guess if given.
x_scale (np.ndarray | None) – Variable scaling.
f_scale (np.ndarray | None) – Function scaling.
grad_tol (float | None) – Gradient tolerance (gtol in scipy).
step_tol (float | None) – Step tolerance (xtol in scipy).
f_tol (float | None) – Function tolerance (ftol in scipy).
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max function evaluations. Default 400.
- Returns:
Solution with x, fval (sum of squares / 2), jacobian, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If m < n (underdetermined).
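The documented backend call, scipy.optimize.least_squares(method='lm', jac='2-point'), can be sketched on a small curve-fitting problem. The data and model here are illustrative only:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit y = a * exp(b * t) to noiseless synthetic data generated with a=2, b=0.5.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)

def residuals(x):
    a, b = x
    return a * np.exp(b * t) - y       # m = 10 residuals, n = 2 variables

# Levenberg-Marquardt with a finite-difference Jacobian.
res = least_squares(residuals, x0=[1.0, 0.0], method="lm", jac="2-point")
half_ssq = 0.5 * np.sum(res.fun ** 2)  # matches the documented fval convention
```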
- optimization.nonlinear_least_squares(f, jac, m, x_guess=None, n=None, x_scale=None, f_scale=None, grad_tol=None, step_tol=None, f_tol=None, max_iter=100, max_fev=400)¶
Solve nonlinear least-squares using Levenberg-Marquardt with analytic Jacobian (IMSL UNLSJ).
- Parameters:
f (Callable) – Residual function f(x: np.ndarray) -> np.ndarray of shape (m,).
jac (Callable) – Jacobian jac(x: np.ndarray) -> np.ndarray of shape (m, n).
m (int) – Number of residual functions.
x_guess (np.ndarray | None) – Initial guess of shape (n,).
n (int | None) – Number of variables.
x_scale (np.ndarray | None) – Variable scaling.
f_scale (np.ndarray | None) – Function scaling.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
f_tol (float | None) – Function tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, jacobian, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or jac are not callable.
ValueError – If m < n.
- optimization.minimize_bounds_bfgs_fd(f, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize subject to bounds using L-BFGS-B with FD gradient (IMSL BCONF).
- Parameters:
f (Callable) – f(x: np.ndarray) -> float.
ibtype (int) – Bound type. 0 = user supplies all bounds; 1 = all variables nonnegative (xlb=0, xub=inf); 2 = all variables nonpositive (xlb=-inf, xub=0); 3 = bounds of the first variable are broadcast to all variables.
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If ibtype not in [0, 1, 2, 3] or bounds invalid.
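The L-BFGS-B backend can be sketched directly (a hypothetical equivalent; ibtype=0 above corresponds to passing explicit per-variable bounds):

```python
import numpy as np
from scipy.optimize import minimize

# Unconstrained minimum of (x0-2)^2 + (x1-2)^2 is (2, 2); the box clips it.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

bounds = [(0.0, 1.0), (0.0, 1.0)]
res = minimize(f, x0=np.array([0.5, 0.5]), method="L-BFGS-B", bounds=bounds)
```

Because the unconstrained minimizer lies outside the box, the solution lands on the boundary at (1, 1).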
- optimization.minimize_bounds_bfgs(f, grad, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize subject to bounds using L-BFGS-B with analytic gradient (IMSL BCONG).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad are not callable.
ValueError – If bounds or ibtype are invalid.
- optimization.minimize_bounds_newton_fd(f, grad, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize subject to bounds using trust-constr with FD Hessian (IMSL BCODH).
Uses scipy trust-constr (which supports bounds, unlike trust-ncg).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad are not callable.
ValueError – If inputs are invalid.
- optimization.minimize_bounds_newton(f, grad, hess, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=1.0, grad_tol=None, step_tol=None, max_iter=100, max_fev=400)¶
Minimize subject to bounds using trust-constr with analytic Hessian (IMSL BCOAH).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
hess (Callable) – Analytic Hessian hess(x: np.ndarray) -> np.ndarray of shape (n, n).
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
f_scale (float) – Function scaling. Default 1.0.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev, hessian.
- Return type:
OptimizationResult
- Raises:
TypeError – If f, grad, or hess are not callable.
ValueError – If inputs are invalid.
- optimization.minimize_bounds_nelder_mead(f, ibtype, xlb, xub, x_guess=None, x_tol=0.0001, f_tol=0.0001, max_iter=None, max_fev=None)¶
Minimize a nonsmooth function subject to bounds using Nelder-Mead (IMSL BCPOL).
Bounds are enforced via a large penalty approach since scipy Nelder-Mead does not natively support bounds.
- Parameters:
f (Callable) – Nonsmooth objective function f(x: np.ndarray) -> float.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
x_tol (float) – Tolerance on x. Default 1e-4.
f_tol (float) – Tolerance on f. Default 1e-4.
max_iter (int | None) – Max iterations. Default: 200*n.
max_fev (int | None) – Max evaluations. Default: 200*n.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If bounds are invalid.
- optimization.least_squares_bounds_fd(f, m, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=None, grad_tol=None, step_tol=None, f_tol=None, max_iter=100, max_fev=400)¶
Nonlinear least squares with bounds using TRF and FD Jacobian (IMSL BCLSF).
- Parameters:
f (Callable) – Residuals f(x: np.ndarray) -> np.ndarray of shape (m,).
m (int) – Number of residuals.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Number of variables.
x_scale (np.ndarray | None) – Variable scaling.
f_scale (np.ndarray | None) – Function scaling.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
f_tol (float | None) – Function tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, jacobian, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If m < n or bounds are invalid.
- optimization.least_squares_bounds(f, jac, m, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=None, grad_tol=None, step_tol=None, f_tol=None, max_iter=100, max_fev=400)¶
Nonlinear least squares with bounds using TRF and analytic Jacobian (IMSL BCLSJ).
- Parameters:
f (Callable) – Residuals f(x: np.ndarray) -> np.ndarray of shape (m,).
jac (Callable) – Jacobian jac(x: np.ndarray) -> np.ndarray of shape (m, n).
m (int) – Number of residuals.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Number of variables.
x_scale (np.ndarray | None) – Variable scaling.
f_scale (np.ndarray | None) – Function scaling.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
f_tol (float | None) – Function tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, jacobian, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or jac are not callable.
ValueError – If m < n or bounds are invalid.
- optimization.nonlinear_least_squares_bounds(f, m, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, f_scale=None, grad_tol=None, step_tol=None, f_tol=None, max_iter=100, max_fev=400)¶
Nonlinear least-squares subject to bounds (IMSL BCNLS).
Alias for least_squares_bounds_fd with a clearer name.
- Parameters:
f (Callable) – Residuals f(x: np.ndarray) -> np.ndarray of shape (m,).
m (int) – Number of residuals.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Number of variables.
x_scale (np.ndarray | None) – Variable scaling.
f_scale (np.ndarray | None) – Function scaling.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
f_tol (float | None) – Function tolerance.
max_iter (int) – Max iterations. Default 100.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, jacobian, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If m < n or bounds are invalid.
- optimization.linear_program(A, bl, bu, c, irtype, xlb=None, xub=None)¶
Solve a linear programming problem using HiGHS (IMSL DENSE_LP).
Minimizes c @ x subject to bl[i] <= (A @ x)[i] <= bu[i], xlb <= x <= xub.
- Parameters:
A (np.ndarray) – Constraint matrix of shape (m, nvar).
bl (np.ndarray) – Lower bounds on constraints, shape (m,).
bu (np.ndarray) – Upper bounds on constraints, shape (m,).
c (np.ndarray) – Objective coefficient vector, shape (nvar,).
irtype (np.ndarray) – Constraint types, shape (m,). Values: 0=equality, 1=upper (R<=bu), 2=lower (R>=bl), 3=range (bl<=R<=bu), 4=ignore.
xlb (np.ndarray | None) – Lower bounds on variables. Default: 0.
xub (np.ndarray | None) – Upper bounds on variables. Default: None (unbounded).
- Returns:
Solution with x, fval (objective), n_iter.
- Return type:
OptimizationResult
- Raises:
ValueError – If A, bl, bu, c dimensions are inconsistent.
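The HiGHS backend is exposed in scipy as linprog(method='highs'); a small LP in that form can be sketched as follows (an illustration of the backend, not of this routine's irtype encoding):

```python
import numpy as np
from scipy.optimize import linprog

# minimize -x0 - 2*x1  s.t.  x0 + x1 <= 4,  x0 + 3*x1 <= 6,  x >= 0
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
```

The optimum sits at the vertex where both constraints are active: x = (3, 1) with objective value -5.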
- optimization.linear_program_revised_simplex(A, bl, bu, c, irtype, xlb=None, xub=None)¶
Solve a linear programming problem using revised simplex (IMSL DLPRS).
- Parameters:
A (np.ndarray) – Constraint matrix of shape (m, nvar).
bl (np.ndarray) – Lower bounds on constraints.
bu (np.ndarray) – Upper bounds on constraints.
c (np.ndarray) – Objective coefficients.
irtype (np.ndarray) – Constraint types.
xlb (np.ndarray | None) – Lower variable bounds.
xub (np.ndarray | None) – Upper variable bounds.
- Returns:
Solution with x, fval, n_iter.
- Return type:
OptimizationResult
- Raises:
ValueError – If dimensions are inconsistent.
- optimization.linear_program_sparse(A, bl, bu, c, irtype, xlb=None, xub=None)¶
Solve a sparse linear programming problem (IMSL SLPRS).
Accepts sparse or dense A. Uses scipy.sparse.linalg if A is sparse.
- Parameters:
A (np.ndarray or scipy.sparse matrix) – Constraint matrix of shape (m, nvar).
bl (np.ndarray) – Lower bounds on constraints.
bu (np.ndarray) – Upper bounds on constraints.
c (np.ndarray) – Objective coefficients.
irtype (np.ndarray) – Constraint types.
xlb (np.ndarray | None) – Lower variable bounds.
xub (np.ndarray | None) – Upper variable bounds.
- Returns:
Solution with x, fval, n_iter.
- Return type:
OptimizationResult
- Raises:
ValueError – If dimensions are inconsistent.
- optimization.transportation_problem(costs, supply, demand)¶
Solve a transportation problem (IMSL TRAN).
Minimizes sum(costs[i,j] * x[i,j]) subject to supply/demand constraints.
- Parameters:
costs (np.ndarray) – Cost matrix of shape (m, n), costs[i,j] is cost from source i to destination j.
supply (np.ndarray) – Supply at each source, shape (m,).
demand (np.ndarray) – Demand at each destination, shape (n,).
- Returns:
Solution with x containing the flow matrix flattened to shape (m*n,), and fval containing the total cost.
- Return type:
OptimizationResult
- Raises:
ValueError – If total supply != total demand or dimensions are inconsistent.
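A balanced transportation problem is an LP with one equality row per source and per destination; a sketch of that formulation via scipy's linprog (an assumed equivalent of what this routine solves internally):

```python
import numpy as np
from scipy.optimize import linprog

# 2 sources, 2 destinations; balanced: total supply == total demand == 30.
costs = np.array([[4.0, 6.0],
                  [2.0, 3.0]])
supply = np.array([10.0, 20.0])
demand = np.array([15.0, 15.0])
m, n = costs.shape

# Variables are costs.ravel() order: x[i*n + j] = flow from source i to dest j.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # flow out of source i == supply[i]
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # flow into destination j == demand[j]
b_eq = np.concatenate([supply, demand])

res = linprog(costs.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
total_cost = res.fun
```

For this data the optimal plan ships all of source 1 to destination 1 and splits source 2, for a total cost of 95.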
- optimization.quadratic_program(neq, A, b, g, H, n_var=None, n_con=None, max_iter=100000)¶
Solve a quadratic programming problem (IMSL QPROG).
Minimizes 0.5 * x @ H @ x + g @ x subject to A[:neq] @ x == b[:neq] (equality constraints) and A[neq:] @ x >= b[neq:] (inequality constraints). Uses scipy.optimize.minimize(method='SLSQP').
- Parameters:
neq (int) – Number of equality constraints (first neq rows of A and b).
A (np.ndarray) – Constraint matrix of shape (ncon, nvar).
b (np.ndarray) – RHS of constraints, shape (ncon,).
g (np.ndarray) – Linear objective coefficient vector, shape (nvar,).
H (np.ndarray) – Hessian matrix, shape (nvar, nvar). Should be symmetric PD.
n_var (int | None) – Number of variables. Default: A.shape[1].
n_con (int | None) – Number of constraints. Default: A.shape[0].
max_iter (int) – Max iterations. Default 100000.
- Returns:
Solution with x, fval (QP objective value), n_iter.
- Return type:
OptimizationResult
- Raises:
ValueError – If neq > n_con, or A/b/g/H dimensions are inconsistent.
- optimization.minimize_linear_constraints_fd(f, constraints, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, grad_tol=None, step_tol=None, max_iter=200, max_fev=400)¶
Minimize with linear constraints using SLSQP and FD gradient (IMSL LCONF).
- Parameters:
f (Callable) – Objective f(x: np.ndarray) -> float.
constraints (list[LinearConstraint]) – List of linear constraints.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds on variables.
xub (np.ndarray) – Upper bounds on variables.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 200.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If constraints or bounds are invalid.
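A two-sided linear constraint lb <= A @ x <= ub of the kind this function accepts maps onto SLSQP's one-sided "ineq" dictionaries; with jac omitted, SLSQP falls back to a finite-difference gradient, as documented above. The helper linear_to_slsqp is a hypothetical illustration, not part of the package:

```python
import numpy as np
from scipy.optimize import minimize

def linear_to_slsqp(A, lb, ub):
    """Split lb <= A @ x <= ub into SLSQP 'ineq' constraints (fun(x) >= 0)."""
    cons = []
    for i in range(A.shape[0]):
        a, lo, hi = A[i], lb[i], ub[i]
        if np.isfinite(lo):
            cons.append({"type": "ineq", "fun": lambda x, a=a, lo=lo: a @ x - lo})
        if np.isfinite(hi):
            cons.append({"type": "ineq", "fun": lambda x, a=a, hi=hi: hi - a @ x})
    return cons

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
# Single constraint x0 + x1 <= 2, plus simple variable bounds.
cons = linear_to_slsqp(np.array([[1.0, 1.0]]), np.array([-np.inf]),
                       np.array([2.0]))
res = minimize(f, np.zeros(2), method="SLSQP", constraints=cons,
               bounds=[(0.0, 5.0), (0.0, 5.0)])  # no jac: FD gradient
```

The default arguments a=a, lo=lo in the lambdas pin down each row's values at definition time, which is necessary because the constraint functions are evaluated after the loop has finished.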
- optimization.minimize_linear_constraints(f, grad, constraints, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, grad_tol=None, step_tol=None, max_iter=200, max_fev=400)¶
Minimize with linear constraints using SLSQP and analytic gradient (IMSL LCONG).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient grad(x: np.ndarray) -> np.ndarray.
constraints (list[LinearConstraint]) – Linear constraints.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 200.
max_fev (int) – Max evaluations. Default 400.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad is not callable.
ValueError – If inputs are invalid.
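The analytic-gradient variant corresponds to passing jac to scipy's SLSQP; constraint dictionaries can carry their own jac as well. A minimal sketch on the Rosenbrock function (this specific test problem is my choice, not taken from the package docs):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock objective with its analytic gradient, restricted to x0 + x1 >= 1.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

cons = [{"type": "ineq",
         "fun": lambda x: x[0] + x[1] - 1.0,       # x0 + x1 >= 1
         "jac": lambda x: np.array([1.0, 1.0])}]   # constant constraint Jacobian
res = minimize(f, np.array([0.0, 0.0]), jac=grad, method="SLSQP",
               constraints=cons)
```

The unconstrained minimum (1, 1) already satisfies the constraint, so the solver should recover it; supplying both gradients removes all finite-difference evaluations from the run.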
- optimization.minimize_linear_constraints_trust_region(f, constraints, ibtype, xlb, xub, x_guess=None, n=None, x_scale=None, grad_tol=None, step_tol=None, max_iter=200)¶
Minimize with linear constraints without derivatives using trust-constr (IMSL LIN_CON_TRUST_REGION).
- Parameters:
f (Callable) – Objective function (no gradient required).
constraints (list[LinearConstraint]) – Linear constraints.
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds.
xub (np.ndarray) – Upper bounds.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
max_iter (int) – Max iterations. Default 200.
- Returns:
Solution with x, fval, n_iter.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If inputs are invalid.
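The trust-constr backend named above takes scipy's own LinearConstraint and Bounds objects directly, and approximates the missing gradient by finite differences when none is supplied. A minimal sketch (the objective here is illustrative):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
# One-sided linear constraint x0 + 2*x1 <= 1 via (-inf, 1] bounds.
lc = LinearConstraint(np.array([[1.0, 2.0]]), -np.inf, 1.0)
res = minimize(f, np.zeros(2), method="trust-constr",
               constraints=[lc], bounds=Bounds([0.0, -2.0], [2.0, 2.0]))
```

Since the unconstrained minimum (1, -0.5) is feasible here (x0 + 2*x1 = 0 <= 1 and within bounds), the solver should return it with an objective near zero.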
- optimization.minimize_nonlinear_constraints_fd(f, m, me, ibtype, xlb, xub, constraints, x_guess=None, n=None, x_scale=None, max_iter=200, grad_tol=None, step_tol=None)¶
Minimize with nonlinear constraints using SLSQP and FD gradients (IMSL NNLPF).
- Parameters:
f (Callable) – Objective f(x: np.ndarray) -> float.
m (int) – Total number of constraints.
me (int) – Number of equality constraints (first me constraints are equalities).
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds on variables.
xub (np.ndarray) – Upper bounds on variables.
constraints (list[NonlinearConstraint]) – List of nonlinear constraints.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
max_iter (int) – Max iterations. Default 200.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f is not callable.
ValueError – If m != len(constraints) or me > m.
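The me-equalities-first ordering documented above maps onto SLSQP constraint dictionaries: the first me entries use type 'eq' (fun(x) == 0) and the rest use 'ineq' (fun(x) >= 0), with gradients taken by finite differences. A minimal sketch with m = 2, me = 1 (the test problem is my own, not from the package docs):

```python
import numpy as np
from scipy.optimize import minimize

# First constraint is the equality (me = 1), second the inequality.
cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0},    # x0 + x1 == 2
        {"type": "ineq", "fun": lambda x: x[0] - 0.25}]        # x0 >= 0.25
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
res = minimize(f, np.zeros(2), method="SLSQP", constraints=cons)
```

The equality pins the iterates to the line x0 + x1 = 2, where the closest point to (2, 2) is (1, 1); the inequality is inactive there.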
- optimization.minimize_nonlinear_constraints(f, grad, m, me, ibtype, xlb, xub, constraints, x_guess=None, n=None, x_scale=None, max_iter=200, grad_tol=None, step_tol=None)¶
Minimize with nonlinear constraints using SLSQP and analytic gradients (IMSL NNLPG).
- Parameters:
f (Callable) – Objective function f(x: np.ndarray) -> float.
grad (Callable) – Analytic gradient of f, grad(x: np.ndarray) -> np.ndarray.
m (int) – Total number of constraints.
me (int) – Number of equality constraints (first me constraints).
ibtype (int) – Bound type (0-3).
xlb (np.ndarray) – Lower bounds on variables.
xub (np.ndarray) – Upper bounds on variables.
constraints (list[NonlinearConstraint]) – Nonlinear constraints with optional jac.
x_guess (np.ndarray | None) – Initial guess.
n (int | None) – Problem dimension.
x_scale (np.ndarray | None) – Scaling vector.
max_iter (int) – Max iterations. Default 200.
grad_tol (float | None) – Gradient tolerance.
step_tol (float | None) – Step tolerance.
- Returns:
Solution with x, fval, n_iter, n_fev.
- Return type:
OptimizationResult
- Raises:
TypeError – If f or grad is not callable.
ValueError – If m != len(constraints) or me > m.
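With analytic derivatives, both the objective and each constraint dictionary can carry an exact jac, which is what the optional jac on NonlinearConstraint corresponds to in the SLSQP call. A minimal sketch (problem chosen for illustration):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2
grad = lambda x: 2.0 * x                            # exact objective gradient
cons = [{"type": "ineq",
         "fun": lambda x: x[0] + x[1] - 2.0,        # x0 + x1 >= 2
         "jac": lambda x: np.array([1.0, 1.0])}]    # exact constraint Jacobian
res = minimize(f, np.array([2.0, 0.0]), jac=grad, method="SLSQP",
               constraints=cons)
```

The minimum-norm point on the half-plane x0 + x1 >= 2 is (1, 1), and with exact derivatives SLSQP reaches it without any finite-difference evaluations.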
- optimization.launch_documentation(opener=<function open>, package_dir=None)¶
Launch package documentation in a web browser.
The launcher opens local HTML documentation only.
- Parameters:
opener (Callable[[str], bool]) – Browser opener function. Defaults to webbrowser.open.
package_dir (Path | None) – Optional package directory override used for tests. Defaults to the directory of this module.
- Returns:
The file://... URI opened.
- Return type:
str
- Raises:
FileNotFoundError – If no local documentation index can be located.