The pandas module

The read_pickle function - pandas module

Signature of the read_pickle function

def read_pickle(filepath_or_buffer: Union[ForwardRef('PathLike[str]'), str, IO[~T],
                                          io.RawIOBase, io.BufferedIOBase, io.TextIOBase,
                                          _io.TextIOWrapper, mmap.mmap],
                compression: Union[str, Dict[str, Any], NoneType] = 'infer',
                storage_options: Optional[Dict[str, Any]] = None)

Description

read_pickle.__doc__

Load pickled pandas object (or any object) from file.

.. warning::

   Loading pickled data received from untrusted sources can be
   unsafe. See `here <https://docs.python.org/3/library/pickle.html>`__.

Parameters
----------
filepath_or_buffer : str, path object or file-like object
    File path, URL, or buffer where the pickled object will be loaded from.

    .. versionchanged:: 1.0.0
       Accept URL. URL is not limited to S3 and GCS.

compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
    If 'infer' and 'filepath_or_buffer' is path-like, then detect compression
    from the following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise
    no compression). If 'infer' and 'filepath_or_buffer' is not path-like,
    then use None (= no decompression).
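
    For illustration only, a minimal sketch of compression inference, assuming
    a hypothetical local file ./dummy_compressed.pkl.gz:

    >>> import pandas as pd
    >>> df = pd.DataFrame({"foo": range(5)})
    >>> df.to_pickle("./dummy_compressed.pkl.gz")  # '.gz' extension => written with gzip
    >>> df_gz = pd.read_pickle("./dummy_compressed.pkl.gz")  # 'infer' detects gzip from '.gz'
    >>> df_gz2 = pd.read_pickle("./dummy_compressed.pkl.gz", compression="gzip")  # or name the codec explicitly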

storage_options : dict, optional
    Extra options that make sense for a particular storage connection, e.g.
    host, port, username, password, etc., if using a URL that will
    be parsed by ``fsspec``, e.g., starting "s3://", "gcs://". An error
    will be raised if providing this argument with a non-fsspec URL.
    See the fsspec and backend storage implementation docs for the set of
    allowed keys and values.

    .. versionadded:: 1.2.0
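
    An illustrative sketch, assuming the optional fsspec/s3fs backends are
    installed; the bucket and key below are hypothetical, and {"anon": True}
    is the s3fs option for anonymous (unauthenticated) access:

    >>> df_s3 = pd.read_pickle(
    ...     "s3://my-bucket/data.pkl",          # hypothetical public bucket/key
    ...     storage_options={"anon": True},     # s3fs: read without credentials
    ... )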

Returns
-------
unpickled : same type as object stored in file

See Also
--------
DataFrame.to_pickle : Pickle (serialize) DataFrame object to file.
Series.to_pickle : Pickle (serialize) Series object to file.
read_hdf : Read HDF5 file into a DataFrame.
read_sql : Read SQL query or database table into a DataFrame.
read_parquet : Load a parquet object, returning a DataFrame.

Notes
-----
read_pickle is only guaranteed to be backwards compatible to pandas 0.20.3.

Examples
--------
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
   foo  bar
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9
>>> pd.to_pickle(original_df, "./dummy.pkl")

>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
   foo  bar
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9

>>> import os
>>> os.remove("./dummy.pkl")
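
read_pickle also accepts an already-open file-like object, so a pickled
DataFrame held in memory can be loaded without touching the disk. A minimal
sketch, reusing original_df from above and the standard pickle module to
build the in-memory bytes:

>>> import io, pickle
>>> buffer = io.BytesIO(pickle.dumps(original_df))  # pickle the DataFrame into memory
>>> from_buffer_df = pd.read_pickle(buffer)         # pass the buffer instead of a path
>>> from_buffer_df.equals(original_df)
True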