Module scipy.optimize

Function fmin_l_bfgs_b - module scipy.optimize

Signature of the fmin_l_bfgs_b function

def fmin_l_bfgs_b(func, x0, fprime=None, args=(), approx_grad=0, bounds=None, m=10, factr=10000000.0, pgtol=1e-05, epsilon=1e-08, iprint=-1, maxfun=15000, maxiter=15000, disp=None, callback=None, maxls=20) 

Description

help(scipy.optimize.fmin_l_bfgs_b)

Minimize a function func using the L-BFGS-B algorithm.
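
For instance, a minimal unconstrained call on the Rosenbrock test function
(a sketch; `rosen` and `rosen_der` ship with `scipy.optimize` as a standard
test function and its analytic gradient):

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b, rosen, rosen_der
>>> x_min, f_min, d = fmin_l_bfgs_b(rosen, x0=np.array([2.0, 2.0]),
...                                 fprime=rosen_der)
>>> np.round(x_min, 4)  # may vary
array([1., 1.])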

Parameters
----------
func : callable f(x,*args)
    Function to minimize.
x0 : ndarray
    Initial guess.
fprime : callable fprime(x,*args), optional
    The gradient of `func`. If None, then `func` returns the function
    value and the gradient (``f, g = func(x, *args)``), unless
    `approx_grad` is True in which case `func` returns only ``f``
    (a sketch of this convention follows the parameter list).
args : sequence, optional
    Arguments to pass to `func` and `fprime`.
approx_grad : bool, optional
    Whether to approximate the gradient numerically (in which case
    `func` returns only the function value).
bounds : list, optional
    ``(min, max)`` pairs for each element in ``x``, defining
    the bounds on that parameter. Use None or +-inf for one of ``min`` or
    ``max`` when there is no bound in that direction.
m : int, optional
    The maximum number of variable metric corrections
    used to define the limited memory matrix. (The limited memory BFGS
    method does not store the full hessian but uses this many terms in an
    approximation to it.)
factr : float, optional
    The iteration stops when
    ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``,
    where ``eps`` is the machine precision, which is automatically
    generated by the code. Typical values for `factr` are: 1e12 for
    low accuracy; 1e7 for moderate accuracy; 10.0 for extremely
    high accuracy. See Notes for relationship to `ftol`, which is exposed
    (instead of `factr`) by the `scipy.optimize.minimize` interface to
    L-BFGS-B.
pgtol : float, optional
    The iteration will stop when
    ``max{|proj g_i | i = 1, ..., n} <= pgtol``
    where ``proj g_i`` is the i-th component of the projected gradient.
epsilon : float, optional
    Step size used when `approx_grad` is True, for numerically
    calculating the gradient.
iprint : int, optional
    Deprecated option that previously controlled the text printed on the
    screen during the problem solution. Now the code does not emit any
    output and this keyword has no function.

    .. deprecated:: 1.15.0
        This keyword is deprecated and will be removed from SciPy 1.17.0.

disp : int, optional
    Deprecated option that previously controlled the text printed on the
    screen during the problem solution. Now the code does not emit any
    output and this keyword has no function.

    .. deprecated:: 1.15.0
        This keyword is deprecated and will be removed from SciPy 1.17.0.

maxfun : int, optional
    Maximum number of function evaluations. Note that this function
    may violate the limit because of evaluating gradients by numerical
    differentiation.
maxiter : int, optional
    Maximum number of iterations.
callback : callable, optional
    Called after each iteration, as ``callback(xk)``, where ``xk`` is the
    current parameter vector.
maxls : int, optional
    Maximum number of line search steps (per iteration). Default is 20.
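
As noted for `fprime`, when no separate gradient callable is supplied and
`approx_grad` stays False, `func` must return the pair ``(f, g)``. A minimal
sketch of that convention (the name ``f_and_grad`` is illustrative only):

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def f_and_grad(x):
...     # simple convex quadratic with minimum at (1, -2)
...     f = (x[0] - 1)**2 + (x[1] + 2)**2
...     g = np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
...     return f, g
>>> x_min, f_min, info = fmin_l_bfgs_b(f_and_grad, x0=np.zeros(2))
>>> np.round(x_min, 6)  # may vary
array([ 1., -2.])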

Returns
-------
x : array_like
    Estimated position of the minimum.
f : float
    Value of `func` at the minimum.
d : dict
    Information dictionary.

    * d['warnflag'] is

      - 0 if converged,
      - 1 if too many function evaluations or too many iterations,
      - 2 if stopped for another reason, given in d['task']

    * d['grad'] is the gradient at the minimum (which should be close to 0)
    * d['funcalls'] is the number of function calls made.
    * d['nit'] is the number of iterations.
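
Continuing the quadratic sketch above, the dictionary can be checked for
convergence:

>>> info['warnflag']  # 0: the run converged
0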

See also
--------
minimize: Interface to minimization algorithms for multivariate
    functions. See the 'L-BFGS-B' `method` in particular. Note that the
    `ftol` option is made available via that interface, while `factr` is
    provided via this interface, where `factr` is the factor multiplying
    the default machine floating-point precision to arrive at `ftol`:
    ``ftol = factr * numpy.finfo(float).eps``.
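
Spelled out numerically for the default ``factr`` of 1e7:

>>> import numpy as np
>>> 1e7 * np.finfo(float).eps  # the ftol that minimize would receive
2.220446049250313e-09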

Notes
-----
SciPy uses a C-translated and modified version of the Fortran code,
L-BFGS-B v3.0 (released April 25, 2011, BSD-3 licensed). Original Fortran
version was written by Ciyou Zhu, Richard Byrd, Jorge Nocedal, and
Jose Luis Morales.

References
----------
* R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
  Constrained Optimization, (1995), SIAM Journal on Scientific and
  Statistical Computing, 16, 5, pp. 1190-1208.
* C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B,
  FORTRAN routines for large scale bound constrained optimization (1997),
  ACM Transactions on Mathematical Software, 23, 4, pp. 550 - 560.
* J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B,
  FORTRAN routines for large scale bound constrained optimization (2011),
  ACM Transactions on Mathematical Software, 38, 1.

Examples
--------
Solve a linear regression problem via `fmin_l_bfgs_b`. To do this, first we
define an objective function ``f(m, b) = sum((y - y_model)**2)``, where `y`
describes the observations and `y_model` the prediction of the linear model
as ``y_model = m*x + b``. The bounds for the parameters, ``m`` and ``b``,
are arbitrarily chosen as ``(0,5)`` and ``(5,10)`` for this example.

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> X = np.arange(0, 10, 1)
>>> M = 2
>>> B = 3
>>> Y = M * X + B
>>> def func(parameters, *args):
...     x = args[0]
...     y = args[1]
...     m, b = parameters
...     y_model = m*x + b
...     error = sum(np.power((y - y_model), 2))
...     return error

>>> initial_values = np.array([0.0, 1.0])

>>> x_opt, f_opt, info = fmin_l_bfgs_b(func, x0=initial_values, args=(X, Y),
...                                    approx_grad=True)
>>> x_opt, f_opt
(array([1.99999999, 3.00000006]), 1.7746231151323805e-14)  # may vary

The optimized parameters in ``x_opt`` agree with the ground truth parameters
``m`` and ``b``. Next, let us perform a bound constrained optimization using
the `bounds` parameter.

>>> bounds = [(0, 5), (5, 10)]
>>> x_opt, f_opt, info = fmin_l_bfgs_b(func, x0=initial_values, args=(X, Y),
...                                   approx_grad=True, bounds=bounds)
>>> x_opt, f_opt
(array([1.65990508, 5.31649385]), 15.721334516453945)  # may vary
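
The same bounded problem can also be run through the `scipy.optimize.minimize`
interface mentioned under See also; a sketch reusing the variables defined in
this example:

>>> from scipy.optimize import minimize
>>> res = minimize(func, initial_values, args=(X, Y), method='L-BFGS-B',
...                bounds=bounds)
>>> res.x  # may vary
array([1.65990508, 5.31649385])

Both entry points drive the same underlying routine, so the results should
agree.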

