Module « scipy.sparse.linalg »

Function gmres - module scipy.sparse.linalg

Signature of the gmres function

def gmres(A, b, x0=None, *, rtol=1e-05, atol=0.0, restart=None, maxiter=None, M=None, callback=None, callback_type=None) 
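
Note that every parameter after ``x0`` (everything following the ``*`` in the signature) is keyword-only. A minimal call might therefore look like the following sketch, where the system and the keyword values are illustrative choices, not defaults::

  import numpy as np
  from scipy.sparse import csc_array
  from scipy.sparse.linalg import gmres

  # Small example system (any non-singular sparse matrix would do).
  A = csc_array([[3.0, 2.0, 0.0], [1.0, -1.0, 0.0], [0.0, 5.0, 1.0]])
  b = np.array([2.0, 4.0, -1.0])

  # rtol, restart, maxiter, ... must be passed by keyword.
  x, info = gmres(A, b, rtol=1e-8, restart=30, maxiter=1000)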

Description

help(scipy.sparse.linalg.gmres)

Use Generalized Minimal RESidual iteration to solve ``Ax = b``.

Parameters
----------
A : {sparse array, ndarray, LinearOperator}
    The real or complex N-by-N matrix of the linear system.
    Alternatively, `A` can be a linear operator which can
    produce ``Ax`` using, e.g.,
    ``scipy.sparse.linalg.LinearOperator``.
b : ndarray
    Right hand side of the linear system. Has shape (N,) or (N,1).
x0 : ndarray
    Starting guess for the solution (a vector of zeros by default).
atol, rtol : float
    Parameters for the convergence test. For convergence,
    ``norm(b - A @ x) <= max(rtol*norm(b), atol)`` should be satisfied.
    The default is ``atol=0.`` and ``rtol=1e-5``.
restart : int, optional
    Number of iterations between restarts. Larger values increase
    iteration cost, but may be necessary for convergence.
    If omitted, ``min(20, n)`` is used.
maxiter : int, optional
    Maximum number of iterations (restart cycles).  Iteration will stop
    after maxiter steps even if the specified tolerance has not been
    achieved. See `callback_type`.
M : {sparse array, ndarray, LinearOperator}
    Inverse of the preconditioner of `A`.  `M` should approximate the
    inverse of `A` and be easy to solve for (see Notes).  Effective
    preconditioning dramatically improves the rate of convergence,
    which implies that fewer iterations are needed to reach a given
    error tolerance.  By default, no preconditioner is used.
    In this implementation, left preconditioning is used,
    and the preconditioned residual is minimized. However, the final
    convergence is tested with respect to the ``b - A @ x`` residual.
callback : function
    User-supplied function to call after each iteration.  It is called
    as ``callback(args)``, where ``args`` are selected by `callback_type`.
callback_type : {'x', 'pr_norm', 'legacy'}, optional
    Callback function argument requested:
      - ``x``: current iterate (ndarray), called on every restart
      - ``pr_norm``: relative (preconditioned) residual norm (float),
        called on every inner iteration
      - ``legacy`` (default): same as ``pr_norm``, but also changes the
        meaning of `maxiter` to count inner iterations instead of restart
        cycles.

    This keyword has no effect if `callback` is not set.

Returns
-------
x : ndarray
    The converged solution.
info : int
    Provides convergence information:
        0  : successful exit
        >0 : convergence to tolerance not achieved, number of iterations

See Also
--------
LinearOperator

Notes
-----
A preconditioner, P, is chosen such that P is close to A but easy to solve
for. The preconditioner parameter required by this routine is
``M = P^-1``. The inverse should preferably not be calculated
explicitly.  Rather, use the following template to produce M::

  # Construct a linear operator that computes P^-1 @ x.
  import scipy.sparse.linalg as spla
  M_x = lambda x: spla.spsolve(P, x)
  M = spla.LinearOperator((n, n), M_x)
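
As a rough sketch of this template, one common choice is an incomplete LU
factorization (``scipy.sparse.linalg.spilu``) playing the role of ``P``; the
matrix ``A`` and right-hand side ``b`` below are illustrative values chosen
for this sketch::

  import numpy as np
  from scipy.sparse import csc_array
  import scipy.sparse.linalg as spla

  A = csc_array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
  b = np.array([1.0, 2.0, 3.0])

  # The incomplete LU factorization approximates P; its solve() method
  # applies P^-1 to a vector, which is exactly what M must do.
  ilu = spla.spilu(A)
  M = spla.LinearOperator(A.shape, matvec=ilu.solve)

  x, info = spla.gmres(A, b, M=M)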

Examples
--------
>>> import numpy as np
>>> from scipy.sparse import csc_array
>>> from scipy.sparse.linalg import gmres
>>> A = csc_array([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> x, exitCode = gmres(A, b, atol=1e-5)
>>> print(exitCode)            # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True
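
To monitor the convergence history, the ``callback`` parameter can be combined
with ``callback_type='pr_norm'`` so that the relative (preconditioned) residual
norm is recorded at every inner iteration. A minimal sketch, reusing the system
from the example above::

  import numpy as np
  from scipy.sparse import csc_array
  from scipy.sparse.linalg import gmres

  A = csc_array([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
  b = np.array([2, 4, -1], dtype=float)

  residual_norms = []   # one float per inner iteration
  x, info = gmres(A, b, callback=residual_norms.append, callback_type='pr_norm')

  print(info)            # 0 indicates successful convergence
  print(residual_norms)  # decreasing sequence of relative residual norms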

