Module « scipy.sparse.linalg »

Function bicgstab - module scipy.sparse.linalg

Signature of the bicgstab function

def bicgstab(A, b, x0=None, *, rtol=1e-05, atol=0.0, maxiter=None, M=None, callback=None) 

Description

help(scipy.sparse.linalg.bicgstab)

Use BIConjugate Gradient STABilized iteration to solve ``Ax = b``.

Parameters
----------
A : {sparse array, ndarray, LinearOperator}
    The real or complex N-by-N matrix of the linear system.
    Alternatively, `A` can be a linear operator which can
    produce ``Ax`` and ``A^T x`` using, e.g.,
    ``scipy.sparse.linalg.LinearOperator``.
b : ndarray
    Right hand side of the linear system. Has shape (N,) or (N,1).
x0 : ndarray
    Starting guess for the solution.
rtol, atol : float, optional
    Parameters for the convergence test. For convergence,
    ``norm(b - A @ x) <= max(rtol*norm(b), atol)`` should be satisfied.
    The default is ``atol=0.`` and ``rtol=1e-5``.
maxiter : integer
    Maximum number of iterations.  Iteration will stop after maxiter
    steps even if the specified tolerance has not been achieved.
M : {sparse array, ndarray, LinearOperator}
    Preconditioner for `A`. It should approximate the
    inverse of `A` (see Notes). Effective preconditioning dramatically improves the
    rate of convergence, which implies that fewer iterations are needed
    to reach a given error tolerance.
callback : function
    User-supplied function to call after each iteration.  It is called
    as ``callback(xk)``, where ``xk`` is the current solution vector.

Returns
-------
x : ndarray
    The converged solution.
info : integer
    Provides convergence information:
        0  : successful exit
        >0 : convergence to tolerance not achieved, number of iterations
        <0 : parameter breakdown

Notes
-----
The preconditioner `M` should be a matrix such that ``M @ A`` has a smaller
condition number than `A`, see [1]_ .

References
----------
.. [1] "Preconditioner", Wikipedia, 
       https://en.wikipedia.org/wiki/Preconditioner
.. [2] "Biconjugate gradient stabilized method", 
       Wikipedia, https://en.wikipedia.org/wiki/Biconjugate_gradient_stabilized_method

Examples
--------
>>> import numpy as np
>>> from scipy.sparse import csc_array
>>> from scipy.sparse.linalg import bicgstab
>>> R = np.array([[4, 2, 0, 1],
...               [3, 0, 0, 2],
...               [0, 1, 1, 1],
...               [0, 2, 1, 0]])
>>> A = csc_array(R)
>>> b = np.array([-1, -0.5, -1, 2])
>>> x, exit_code = bicgstab(A, b, atol=1e-5)
>>> print(exit_code)  # 0 indicates successful convergence
0
>>> np.allclose(A.dot(x), b)
True

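The preconditioner ``M`` can be built, for example, from an incomplete LU factorization. This is only a sketch (it assumes ``scipy.sparse.linalg.spilu`` accepts the matrix, which should be in CSC format): the factorization's ``solve`` method is wrapped in a ``LinearOperator`` that approximates the inverse of `A`, as recommended in the Notes.

import numpy as np
from scipy.sparse import csc_array
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

A = csc_array([[4.0, 2.0, 0.0, 1.0],
               [3.0, 0.0, 0.0, 2.0],
               [0.0, 1.0, 1.0, 1.0],
               [0.0, 2.0, 1.0, 0.0]])
b = np.array([-1.0, -0.5, -1.0, 2.0])

ilu = spilu(A)                                  # incomplete LU factorization of A
M = LinearOperator(A.shape, matvec=ilu.solve)   # applies the approximate inverse of A
x, exit_code = bicgstab(A, b, M=M, rtol=1e-8)
print(exit_code)                 # 0 indicates successful convergence
print(np.allclose(A @ x, b))     # True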

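Finally, the ``callback`` parameter can be used to monitor the iterations. In this sketch (the ``monitor`` function and the residual history are purely illustrative), the residual norm is recorded after each iteration:

import numpy as np
from scipy.sparse import csc_array
from scipy.sparse.linalg import bicgstab

A = csc_array([[4.0, 2.0, 0.0, 1.0],
               [3.0, 0.0, 0.0, 2.0],
               [0.0, 1.0, 1.0, 1.0],
               [0.0, 2.0, 1.0, 0.0]])
b = np.array([-1.0, -0.5, -1.0, 2.0])

residuals = []

def monitor(xk):
    # called by bicgstab after each iteration with the current solution vector xk
    residuals.append(np.linalg.norm(b - A @ xk))

x, exit_code = bicgstab(A, b, callback=monitor, rtol=1e-8)
print(exit_code)                        # 0 indicates successful convergence
print(len(residuals), "iterations")     # number of iterations performed
print(residuals[-1])                    # final residual norm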