Module "scipy.optimize"

Function root - module scipy.optimize

Signature of the root function

def root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None) 

Description

help(scipy.optimize.root)

Find a root of a vector function.

Parameters
----------
fun : callable
    A vector function to find a root of.

    Suppose the callable has signature ``f0(x, *my_args, **my_kwargs)``, where
    ``my_args`` and ``my_kwargs`` are required positional and keyword arguments.
    Rather than passing ``f0`` as the callable, wrap it to accept
    only ``x``; e.g., pass ``fun=lambda x: f0(x, *my_args, **my_kwargs)`` as the
    callable, where ``my_args`` (tuple) and ``my_kwargs`` (dict) have been
    gathered before invoking this function (as sketched at the end of
    this parameter list).
x0 : ndarray
    Initial guess.
args : tuple, optional
    Extra arguments passed to the objective function and its Jacobian.
method : str, optional
    Type of solver. Should be one of

    - 'hybr'             :ref:`(see here) <optimize.root-hybr>`
    - 'lm'               :ref:`(see here) <optimize.root-lm>`
    - 'broyden1'         :ref:`(see here) <optimize.root-broyden1>`
    - 'broyden2'         :ref:`(see here) <optimize.root-broyden2>`
    - 'anderson'         :ref:`(see here) <optimize.root-anderson>`
    - 'linearmixing'     :ref:`(see here) <optimize.root-linearmixing>`
    - 'diagbroyden'      :ref:`(see here) <optimize.root-diagbroyden>`
    - 'excitingmixing'   :ref:`(see here) <optimize.root-excitingmixing>`
    - 'krylov'           :ref:`(see here) <optimize.root-krylov>`
    - 'df-sane'          :ref:`(see here) <optimize.root-dfsane>`

jac : bool or callable, optional
    If `jac` is a Boolean and is True, `fun` is assumed to return the
    value of Jacobian along with the objective function. If False, the
    Jacobian will be estimated numerically.
    `jac` can also be a callable returning the Jacobian of `fun`. In
    this case, it must accept the same arguments as `fun`.
tol : float, optional
    Tolerance for termination. For detailed control, use solver-specific
    options.
callback : function, optional
    Optional callback function. It is called on every iteration as
    ``callback(x, f)`` where `x` is the current solution and `f`
    the corresponding residual. It is supported by all methods except
    'hybr' and 'lm'.
options : dict, optional
    A dictionary of solver options. E.g., `xtol` or `maxiter`, see
    :obj:`show_options()` for details.
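
As a quick sketch (not part of the original docstring; ``f0`` here is a
made-up residual), the ``args`` tuple and the lambda wrapper described
for ``fun`` are two equivalent ways of forwarding extra parameters:

>>> from scipy import optimize
>>> def f0(x, a, b):   # hypothetical residual with two extra parameters
...     return [x[0] + a, x[1] - b]
>>> optimize.root(f0, [0, 0], args=(1.0, 2.0)).x        # via ``args``
array([-1.,  2.])
>>> optimize.root(lambda x: f0(x, 1.0, 2.0), [0, 0]).x  # via a wrapper
array([-1.,  2.])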

Returns
-------
sol : OptimizeResult
    The solution represented as an ``OptimizeResult`` object.
    Important attributes are: ``x`` the solution array, ``success`` a
    Boolean flag indicating if the algorithm exited successfully and
    ``message`` which describes the cause of the termination. See
    `OptimizeResult` for a description of other attributes.
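
A brief sketch (not in the original docstring) of inspecting these
attributes on a toy one-equation problem; the exact repr may vary:

>>> sol = optimize.root(lambda x: x**2 - 1.0, [2.0])
>>> bool(sol.success)
True
>>> sol.x   # the root near the initial guess
array([1.])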

See also
--------
show_options : Additional options accepted by the solvers

Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *hybr*.

Method *hybr* uses a modification of the Powell hybrid method as
implemented in MINPACK [1]_.

Method *lm* solves the system of nonlinear equations in a least squares
sense using a modification of the Levenberg-Marquardt algorithm as
implemented in MINPACK [1]_.

Method *df-sane* is a derivative-free spectral method. [3]_

Methods *broyden1*, *broyden2*, *anderson*, *linearmixing*,
*diagbroyden*, *excitingmixing*, *krylov* are inexact Newton methods,
with backtracking or full line searches [2]_. Each method corresponds
to a particular Jacobian approximation; a short usage sketch follows
the list below.

- Method *broyden1* uses Broyden's first Jacobian approximation; it is
  known as Broyden's good method.
- Method *broyden2* uses Broyden's second Jacobian approximation; it
  is known as Broyden's bad method.
- Method *anderson* uses (extended) Anderson mixing.
- Method *krylov* uses a Krylov approximation for the inverse Jacobian.
  It is suitable for large-scale problems.
- Method *diagbroyden* uses diagonal Broyden Jacobian approximation.
- Method *linearmixing* uses a scalar Jacobian approximation.
- Method *excitingmixing* uses a tuned diagonal Jacobian
  approximation.
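
As a hedged illustration (not from the original docstring), switching
among these solvers only requires changing ``method``; the inexact
Newton methods build their own Jacobian approximation, so none needs
to be supplied. On harder problems they may need solver-specific
``options``:

>>> import numpy as np
>>> from scipy import optimize
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = optimize.root(fun, [0, 0], method='broyden1')
>>> np.round(sol.x, 4)   # close to the default 'hybr' answer; may vary
array([0.8412, 0.1588])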

.. warning::

    The algorithms implemented for methods *diagbroyden*,
    *linearmixing* and *excitingmixing* may be useful for specific
    problems, but whether they will work may depend strongly on the
    problem.

.. versionadded:: 0.11.0

References
----------
.. [1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom.
   1980. User Guide for MINPACK-1.
.. [2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear
   Equations. Society for Industrial and Applied Mathematics.
   <https://archive.siam.org/books/kelley/fr16/>
.. [3] W. La Cruz, J.M. Martinez, M. Raydan. Math. Comp. 75, 1429 (2006).

Examples
--------
The following functions define a system of nonlinear equations and its
jacobian.

>>> import numpy as np
>>> def fun(x):
...     return [x[0]  + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]

>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])

A solution can be obtained as follows.

>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])
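
The ``jac=True`` form from the parameter list can be sketched here as
well (an added illustration, not part of the original example): a
single callable returns both the residual and the Jacobian, which the
'hybr' and 'lm' solvers accept:

>>> def fun_and_jac(x):
...     return fun(x), jac(x)   # residual and Jacobian together
>>> sol = optimize.root(fun_and_jac, [0, 0], jac=True, method='lm')
>>> np.round(sol.x, 6)   # matches the 'hybr' result above; may vary
array([0.841164, 0.158836])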

**Large problem**

Suppose that we needed to solve the following integrodifferential
equation on the square :math:`[0,1]\times[0,1]`:

.. math::

   \nabla^2 P = 10 \left(\int_0^1\int_0^1\cosh(P)\,dx\,dy\right)^2

with :math:`P(x,1) = 1` and :math:`P=0` elsewhere on the boundary of
the square.

The solution can be found using the ``method='krylov'`` solver:

>>> from scipy import optimize
>>> # parameters
>>> nx, ny = 75, 75
>>> hx, hy = 1./(nx-1), 1./(ny-1)

>>> P_left, P_right = 0, 0
>>> P_top, P_bottom = 1, 0

>>> def residual(P):
...    d2x = np.zeros_like(P)
...    d2y = np.zeros_like(P)
...
...    d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
...    d2x[0]    = (P[1]    - 2*P[0]    + P_left)/hx/hx
...    d2x[-1]   = (P_right - 2*P[-1]   + P[-2])/hx/hx
...
...    d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
...    d2y[:,0]    = (P[:,1]  - 2*P[:,0]    + P_bottom)/hy/hy
...    d2y[:,-1]   = (P_top   - 2*P[:,-1]   + P[:,-2])/hy/hy
...
...    return d2x + d2y - 10*np.cosh(P).mean()**2

>>> guess = np.zeros((nx, ny), float)
>>> sol = optimize.root(residual, guess, method='krylov')
>>> print('Residual: %g' % abs(residual(sol.x)).max())
Residual: 5.7972e-06  # may vary

>>> import matplotlib.pyplot as plt
>>> x, y = np.mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
>>> plt.pcolormesh(x, y, sol.x, shading='gouraud')
>>> plt.colorbar()
>>> plt.show()
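
As a final sketch (not from the original docstring), the ``callback``
hook described above is available here because 'krylov' is neither
'hybr' nor 'lm'; it is invoked once per iteration with the current
iterate and residual:

>>> def report(x, f):
...     print('max residual: %g' % abs(f).max())   # one line per iteration
>>> sol = optimize.root(residual, guess, method='krylov', callback=report)

Solver-specific keys such as ``maxiter`` or ``xtol`` can be passed
through the ``options`` dict; ``optimize.show_options('root', 'krylov')``
prints the accepted keys for this solver.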


