Module « scipy.sparse.linalg »

Function lgmres - module scipy.sparse.linalg

Signature of the lgmres function

def lgmres(A, b, x0=None, *, rtol=1e-05, atol=0.0, maxiter=1000, M=None, callback=None, inner_m=30, outer_k=3, outer_v=None, store_outer_Av=True, prepend_outer_v=False) 

Description

help(scipy.sparse.linalg.lgmres)

Solve a matrix equation using the LGMRES algorithm.

The LGMRES algorithm [1]_ [2]_ is designed to avoid some problems
with the convergence of restarted GMRES, and often converges in fewer
iterations.

Parameters
----------
A : {sparse array, ndarray, LinearOperator}
    The real or complex N-by-N matrix of the linear system.
    Alternatively, ``A`` can be a linear operator which can
    produce ``Ax`` using, e.g.,
    ``scipy.sparse.linalg.LinearOperator``.
b : ndarray
    Right hand side of the linear system. Has shape (N,) or (N,1).
x0 : ndarray
    Starting guess for the solution.
rtol, atol : float, optional
    Parameters for the convergence test. For convergence,
    ``norm(b - A @ x) <= max(rtol*norm(b), atol)`` should be satisfied.
    The default is ``rtol=1e-5``, the default for ``atol`` is ``0.0``.
maxiter : int, optional
    Maximum number of iterations.  Iteration will stop after maxiter
    steps even if the specified tolerance has not been achieved.
M : {sparse array, ndarray, LinearOperator}, optional
    Preconditioner for A.  The preconditioner should approximate the
    inverse of A.  Effective preconditioning dramatically improves the
    rate of convergence, which implies that fewer iterations are needed
    to reach a given error tolerance. A usage sketch of ``M`` and
    ``callback`` follows this parameter list.
callback : function, optional
    User-supplied function to call after each iteration.  It is called
    as callback(xk), where xk is the current solution vector.
inner_m : int, optional
    Number of inner GMRES iterations per each outer iteration.
outer_k : int, optional
    Number of vectors to carry between inner GMRES iterations.
    According to [1]_, good values are in the range of 1...3.
    However, note that if you want to use the additional vectors to
    accelerate solving multiple similar problems, larger values may
    be beneficial.
outer_v : list of tuples, optional
    List containing tuples ``(v, Av)`` of vectors and corresponding
    matrix-vector products, used to augment the Krylov subspace, and
    carried between inner GMRES iterations. The element ``Av`` can
    be `None` if the matrix-vector product should be re-evaluated.
    This parameter is modified in-place by `lgmres`, and can be used
    to pass "guess" vectors in and out of the algorithm when solving
    similar problems.
store_outer_Av : bool, optional
    Whether LGMRES should also store A@v, in addition to the vectors `v`,
    in the `outer_v` list. Default is True.
prepend_outer_v : bool, optional
    Whether to put outer_v augmentation vectors before Krylov iterates.
    In standard LGMRES, prepend_outer_v=False.
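
The `M` and `callback` parameters can be illustrated with a short sketch.
This is not part of the original docstring: the tridiagonal test matrix is
arbitrary, and the incomplete-LU preconditioner (``spilu`` wrapped in a
``LinearOperator``) is only one possible choice.

    import numpy as np
    from scipy.sparse import csc_array
    from scipy.sparse.linalg import LinearOperator, lgmres, spilu

    # Arbitrary example system: a tridiagonal matrix stored in CSC format.
    n = 200
    off = -np.ones(n - 1)
    A = csc_array(np.diag(np.full(n, 4.0)) + np.diag(off, 1) + np.diag(off, -1))
    b = np.ones(n)

    # Preconditioner: approximate the inverse of A by an incomplete LU
    # factorization, exposed to lgmres as a LinearOperator.
    ilu = spilu(A)
    M = LinearOperator(A.shape, ilu.solve)

    # Callback: collect the iterates reported by the solver.
    iterates = []
    x, info = lgmres(A, b, M=M, callback=iterates.append, rtol=1e-8)
    print(info, len(iterates))  # info == 0 on success; len(iterates) counts callback calls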

Returns
-------
x : ndarray
    The converged solution.
info : int
    Provides convergence information:

        - 0  : successful exit
        - >0 : convergence to tolerance not achieved, number of iterations
        - <0 : illegal input or breakdown
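
As a hedged illustration (reusing the small system from the Examples section
below; the retry policy is only an example, not a scipy recommendation), the
`info` flag might be handled like this:

    import numpy as np
    from scipy.sparse import csc_array
    from scipy.sparse.linalg import lgmres

    A = csc_array([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
    b = np.array([2, 4, -1], dtype=float)

    x, info = lgmres(A, b, rtol=1e-10)
    if info > 0:
        # Tolerance not reached: info holds the iteration count.
        # Restart from the current iterate with a larger iteration budget.
        x, info = lgmres(A, b, x0=x, rtol=1e-10, maxiter=10000)
    elif info < 0:
        raise ValueError("illegal input or breakdown in lgmres")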

Notes
-----
The LGMRES algorithm [1]_ [2]_ is designed to avoid the
slowing of convergence in restarted GMRES due to alternating
residual vectors. It typically outperforms GMRES(m) of
comparable memory requirements by some measure, or at least is not
much worse.

Another advantage of this algorithm is that you can supply it with
'guess' vectors in the `outer_v` argument that augment the Krylov
subspace. If the solution lies close to the span of these vectors,
the algorithm converges faster. This can be useful if several very
similar matrices need to be inverted one after another, such as in
Newton-Krylov iteration where the Jacobian matrix often changes
little in the nonlinear steps.
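
As a minimal sketch of that use case (the family of slightly perturbed
matrices below is an assumption made up for illustration, not taken from the
scipy documentation), the same `outer_v` list can be carried across solves;
``store_outer_Av=False`` forces the products A@v to be re-evaluated with each
new matrix:

    import numpy as np
    from scipy.sparse import csc_array
    from scipy.sparse.linalg import lgmres

    rng = np.random.default_rng(0)
    n = 100
    off = -np.ones(n - 1)
    base = np.diag(np.full(n, 4.0)) + np.diag(off, 1) + np.diag(off, -1)
    b = rng.standard_normal(n)

    outer_v = []   # filled in place by lgmres with (v, Av) pairs
    x = None
    for k in range(5):
        # A sequence of "similar" systems: small perturbations of a common base.
        A = csc_array(base + 0.001 * k * np.eye(n))
        x, info = lgmres(A, b, x0=x, outer_v=outer_v,
                         store_outer_Av=False, rtol=1e-8)
        # The augmentation vectors accumulated in outer_v help the next,
        # nearby solve converge in fewer iterations.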

References
----------
.. [1] A.H. Baker and E.R. Jessup and T. Manteuffel, "A Technique for
         Accelerating the Convergence of Restarted GMRES", SIAM J. Matrix
         Anal. Appl. 26, 962 (2005).
.. [2] A.H. Baker, "On Improving the Performance of the Linear Solver
         restarted GMRES", PhD thesis, University of Colorado (2003).

Examples
--------
>>> import numpy as np
>>> from scipy.sparse import csc_array
>>> from scipy.sparse.linalg import lgmres
>>> A = csc_array([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
>>> b = np.array([2, 4, -1], dtype=float)
>>> x, exitCode = lgmres(A, b, atol=1e-5)
>>> print(exitCode)            # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True
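
As a follow-up check (continuing the example above; the explicit residual
test simply restates the ``rtol``/``atol`` stopping criterion quoted in the
Parameters section), the returned solution can be verified directly:

>>> rtol, atol = 1e-5, 0.0
>>> bool(np.linalg.norm(b - A @ x) <= max(rtol * np.linalg.norm(b), atol))
True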

