Module « pandas »

Class « HDFStore »

General information

Inheritance

builtins.object
    HDFStore

Definition

class HDFStore(builtins.object):

help(HDFStore)

Dict-like IO interface for storing pandas objects in PyTables.

Either Fixed or Table format.

.. warning::

   Pandas uses PyTables for reading and writing HDF5 files, which allows
   serializing object-dtype data with pickle when using the "fixed" format.
   Loading pickled data received from untrusted sources can be unsafe.

   See: https://docs.python.org/3/library/pickle.html for more.

Parameters
----------
path : str
    File path to HDF5 file.
mode : {'a', 'w', 'r', 'r+'}, default 'a'

    ``'r'``
        Read-only; no data can be modified.
    ``'w'``
        Write; a new file is created (an existing file with the same
        name would be deleted).
    ``'a'``
        Append; an existing file is opened for reading and writing,
        and if the file does not exist it is created.
    ``'r+'``
        It is similar to ``'a'``, but the file must already exist.
complevel : int, 0-9, default None
    Specifies a compression level for data.
    A value of 0 or None disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
    Specifies the compression library to be used.
    These additional compressors for Blosc are supported
    (default if no compressor specified: 'blosc:blosclz'):
    {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
     'blosc:zlib', 'blosc:zstd'}.
    Specifying a compression library which is not available issues
    a ValueError.
fletcher32 : bool, default False
    If applying compression use the fletcher32 checksum.
**kwargs
    These parameters will be passed to the PyTables open_file method.

Examples
--------
>>> import numpy as np
>>> import pandas as pd
>>> bar = pd.DataFrame(np.random.randn(10, 4))
>>> store = pd.HDFStore('test.h5')
>>> store['foo'] = bar   # write to HDF5
>>> bar = store['foo']   # retrieve
>>> store.close()

**Create or load HDF5 file in-memory**

When passing the `driver` option to the PyTables open_file method through
**kwargs, the HDF5 file is loaded or created in-memory and will only be
written when closed:

>>> bar = pd.DataFrame(np.random.randn(10, 4))
>>> store = pd.HDFStore('test.h5', driver='H5FD_CORE')
>>> store['foo'] = bar
>>> store.close()   # only now, data is written to disk

Constructor(s)

Constructor signature    Description
__init__(self, path, mode: 'str' = 'a', complevel: 'int | None' = None, complib=None, fletcher32: 'bool' = False, **kwargs) -> 'None'
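
A minimal sketch of this constructor with the compression parameters documented above (the file name "demo.h5" and the sample data are illustrative, not part of the pandas documentation):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=list("abc"))

# mode='w' creates (or overwrites) the file; complevel/complib enable
# zlib compression; fletcher32 adds a checksum on compressed blocks.
store = pd.HDFStore("demo.h5", mode="w", complevel=5, complib="zlib",
                    fletcher32=True)
store["df"] = df   # stored in the default fixed format
store.close()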

List of properties

Property name    Description
filename
is_open
root    return the root node [from root.__doc__]
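
A short illustration of these properties, reusing the hypothetical "demo.h5" file from the sketch above:

import pandas as pd

store = pd.HDFStore("demo.h5", mode="r")
print(store.filename)   # path of the underlying HDF5 file
print(store.is_open)    # True while the file handle is open
print(store.root)       # root node of the PyTables hierarchy
store.close()
print(store.is_open)    # False once the store is closed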

List of operators

Operator signature    Description
__contains__(self, key: 'str') -> 'bool'
__delitem__(self, key: 'str') -> 'None'
__getitem__(self, key: 'str')
__setitem__(self, key: 'str', value) -> 'None'

Operators inherited from the object class

__eq__, __ge__, __gt__, __le__, __lt__, __ne__
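
A sketch of the dict-like operators listed above (the key name "foo" and the file name are illustrative):

import numpy as np
import pandas as pd

with pd.HDFStore("demo.h5", mode="a") as store:           # __enter__ / __exit__
    store["foo"] = pd.DataFrame(np.random.randn(4, 2))    # __setitem__
    print("foo" in store)      # __contains__ -> True
    df = store["foo"]          # __getitem__
    print(len(store))          # __len__: number of stored objects
    for key in store:          # __iter__ over keys such as '/foo'
        print(key)
    del store["foo"]           # __delitem__ removes the object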

List of methods

Method signature    Description
__enter__(self) -> 'Self'
__exit__(self, exc_type: 'type[BaseException] | None', exc_value: 'BaseException | None', traceback: 'TracebackType | None') -> 'None'
__fspath__(self) -> 'str'
__getattr__(self, name: 'str') allow attribute access to get stores [from __getattr__.__doc__]
__iter__(self) -> 'Iterator[str]'
__len__(self) -> 'int'
__repr__(self) -> 'str'
append(self, key: 'str', value: 'DataFrame | Series', format=None, axes=None, index: 'bool | list[str]' = True, append: 'bool' = True, complib=None, complevel: 'int | None' = None, columns=None, min_itemsize: 'int | dict[str, int] | None' = None, nan_rep=None, chunksize: 'int | None' = None, expectedrows=None, dropna: 'bool | None' = None, data_columns: 'Literal[True] | list[str] | None' = None, encoding=None, errors: 'str' = 'strict') -> 'None'
append_to_multiple(self, d: 'dict', value, selector, data_columns=None, axes=None, dropna: 'bool' = False, **kwargs) -> 'None'
close(self) -> 'None'
copy(self, file, mode: 'str' = 'w', propindexes: 'bool' = True, keys=None, complib=None, complevel: 'int | None' = None, fletcher32: 'bool' = False, overwrite: 'bool' = True) -> 'HDFStore'
create_table_index(self, key: 'str', columns=None, optlevel: 'int | None' = None, kind: 'str | None' = None) -> 'None'
flush(self, fsync: 'bool' = False) -> 'None'
get(self, key: 'str')
get_node(self, key: 'str') -> 'Node | None' return the node with the key or None if it does not exist [from get_node.__doc__]
get_storer(self, key: 'str') -> 'GenericFixed | Table' return the storer object for a key, raise if not in the file [from get_storer.__doc__]
groups(self) -> 'list'
info(self) -> 'str'
items(self) -> 'Iterator[tuple[str, list]]'
keys(self, include: 'str' = 'pandas') -> 'list[str]'
open(self, mode: 'str' = 'a', **kwargs) -> 'None'
put(self, key: 'str', value: 'DataFrame | Series', format=None, index: 'bool' = True, append: 'bool' = False, complib=None, complevel: 'int | None' = None, min_itemsize: 'int | dict[str, int] | None' = None, nan_rep=None, data_columns: 'Literal[True] | list[str] | None' = None, encoding=None, errors: 'str' = 'strict', track_times: 'bool' = True, dropna: 'bool' = False) -> 'None'
remove(self, key: 'str', where=None, start=None, stop=None) -> 'None'
select(self, key: 'str', where=None, start=None, stop=None, columns=None, iterator: 'bool' = False, chunksize: 'int | None' = None, auto_close: 'bool' = False)
select_as_coordinates(self, key: 'str', where=None, start: 'int | None' = None, stop: 'int | None' = None)
select_as_multiple(self, keys, where=None, selector=None, columns=None, start=None, stop=None, iterator: 'bool' = False, chunksize: 'int | None' = None, auto_close: 'bool' = False)
select_column(self, key: 'str', column: 'str', start: 'int | None' = None, stop: 'int | None' = None)
walk(self, where: 'str' = '/') -> 'Iterator[tuple[str, list[str], list[str]]]'

Methods inherited from the object class

__delattr__, __dir__, __format__, __getattribute__, __getstate__, __hash__, __init_subclass__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __str__, __subclasshook__
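
A minimal sketch of the table-format workflow built from the methods above: put()/append() in 'table' format, then a where-based query with select(). File, key and column names are illustrative:

import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.arange(10), "y": np.random.randn(10)})

with pd.HDFStore("demo.h5", mode="w") as store:
    # 'table' format supports appending and where-based queries;
    # data_columns makes 'x' usable in a where clause
    store.put("data", df, format="table", data_columns=["x"])
    store.append("data", df)                       # append more rows
    subset = store.select("data", where="x > 5")   # query on a data column
    print(store.keys())                            # ['/data']
    print(store.info())                            # summary of the stored objects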
