rex.external.rexarray.RexBackendEntrypoint

class RexBackendEntrypoint[source]

Bases: BackendEntrypoint

Backend for NREL rex-style files

See also

backends.RexStore

Methods

guess_can_open(filename_or_obj)

Guess if this backend can read a file

open_dataset(filename_or_obj, *[, ...])

Open a dataset using the rexarray backend

open_datatree(filename_or_obj, *[, ...])

Open a rex-style file as a data tree

open_groups_as_dict(filename_or_obj, *[, ...])

Open a rex-style file as a data dictionary

Attributes

description

open_dataset_parameters

url

guess_can_open(filename_or_obj)[source]

Guess if this backend can read a file

Parameters:

filename_or_obj (path-like) – Filename used to guess whether this backend can open the file.

Returns:

bool – Flag indicating whether this backend can open the file.
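As an illustration only (the actual RexBackendEntrypoint logic may differ), backend `guess_can_open` implementations commonly just inspect the file suffix. The suffix set and function name below are hypothetical:

```python
from pathlib import Path

# Assumed suffix set for illustration; not taken from the rex source.
REX_SUFFIXES = {".h5", ".hdf5", ".nc", ".nc4"}

def sketch_guess_can_open(filename_or_obj) -> bool:
    """Return True if the input looks like an HDF5/NetCDF path we might open."""
    try:
        suffix = Path(filename_or_obj).suffix.lower()
    except TypeError:
        return False  # not path-like (e.g., an open buffer object)
    return suffix in REX_SUFFIXES

print(sketch_guess_can_open("wtk_conus_2012.h5"))  # → True
print(sketch_guess_can_open("notes.txt"))          # → False
```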

open_dataset(filename_or_obj, *, drop_variables=None, group=None, lock=None, h5_driver=None, h5_driver_kwds=None, hsds=False, hsds_kwargs=None)[source]

Open a dataset using the rexarray backend

Parameters:
  • filename_or_obj (str | path-like | ReadBuffer | AbstractDataStore) – Path to file to open, or instantiated buffer that data can be read from.

  • drop_variables (str | Iterable[str] | None, optional) – A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values. By default, None.

  • group (str, optional) – Name of subgroup in HDF5 file to open. By default, None.

  • lock (SerializableLock, optional) – Resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, None, which chooses the appropriate locks to safely read and write files with the currently active dask scheduler.

  • h5_driver (str, optional) – HDF5 driver to use. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • h5_driver_kwds (dict, optional) – HDF5 driver keyword-argument pairs. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • hsds (bool, optional) – Boolean flag to use h5pyd to handle HDF5 ‘files’ hosted on AWS behind HSDS. Note that file paths starting with “/nrel/” will be treated as hsds=True regardless of this input. By default, False.

  • hsds_kwargs (dict, optional) – Dictionary of optional kwargs for h5pyd (e.g., bucket, username, password, etc.). By default, None.

Returns:

xr.Dataset – Initialized and opened xarray Dataset instance.
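A minimal usage sketch. The file name and dropped variable below are hypothetical, the engine name "rexarray" is taken from the description above, and the open is guarded so the snippet runs harmlessly when the file is absent:

```python
import os

fname = "nsrdb_tmy_2020.h5"  # hypothetical rex-style HDF5 file

if os.path.exists(fname):
    import xarray as xr  # imported lazily so the sketch runs without the file

    # With rex installed, "rexarray" is available as an xarray engine.
    ds = xr.open_dataset(
        fname,
        engine="rexarray",
        drop_variables=["dni"],  # hypothetical variable to skip on parse
    )
    print(ds)
```

For files hosted behind HSDS, pass hsds=True (or use a path starting with “/nrel/”, which is treated as hsds=True automatically) and supply credentials via hsds_kwargs.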

open_datatree(filename_or_obj, *, drop_variables=None, group=None, lock=None, h5_driver=None, h5_driver_kwds=None, hsds=False, hsds_kwargs=None)[source]

Open a rex-style file as a data tree

The groups in the HDF5 file map directly to the groups of the DataTree

Parameters:
  • filename_or_obj (str | path-like | ReadBuffer | AbstractDataStore) – Path to file to open, or instantiated buffer that data can be read from.

  • drop_variables (str | Iterable[str] | None, optional) – A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values. By default, None.

  • group (str, optional) – Name of subgroup in HDF5 file to open. By default, None.

  • lock (SerializableLock, optional) – Resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, None, which chooses the appropriate locks to safely read and write files with the currently active dask scheduler.

  • h5_driver (str, optional) – HDF5 driver to use. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • h5_driver_kwds (dict, optional) – HDF5 driver keyword-argument pairs. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • hsds (bool, optional) – Boolean flag to use h5pyd to handle HDF5 ‘files’ hosted on AWS behind HSDS. Note that file paths starting with “/nrel/” will be treated as hsds=True regardless of this input. By default, False.

  • hsds_kwargs (dict, optional) – Dictionary of optional kwargs for h5pyd (e.g., bucket, username, password, etc.). By default, None.

Returns:

xr.DataTree – Initialized and opened xarray DataTree instance.
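A hedged sketch of the tree-opening pattern, assuming a recent xarray with `open_datatree` support; the file name is hypothetical and the open is guarded so the snippet runs without it:

```python
import os

fname = "multi_group_file.h5"  # hypothetical rex-style file with HDF5 groups

if os.path.exists(fname):
    import xarray as xr  # imported lazily so the sketch runs without the file

    # Each HDF5 group maps directly to a node of the resulting DataTree.
    tree = xr.open_datatree(fname, engine="rexarray")
    for node in tree.subtree:
        print(node.path)
```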

open_groups_as_dict(filename_or_obj, *, drop_variables=None, group=None, lock=None, h5_driver=None, h5_driver_kwds=None, hsds=False, hsds_kwargs=None)[source]

Open a rex-style file as a data dictionary

The groups in the HDF5 file map directly to keys in the return dictionary.

Parameters:
  • filename_or_obj (str | path-like | ReadBuffer | AbstractDataStore) – Path to file to open, or instantiated buffer that data can be read from.

  • drop_variables (str | Iterable[str] | None, optional) – A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values. By default, None.

  • group (str, optional) – Name of subgroup in HDF5 file to open. By default, None.

  • lock (SerializableLock, optional) – Resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, None, which chooses the appropriate locks to safely read and write files with the currently active dask scheduler.

  • h5_driver (str, optional) – HDF5 driver to use. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • h5_driver_kwds (dict, optional) – HDF5 driver keyword-argument pairs. See [here](https://docs.h5py.org/en/latest/high/file.html#file-drivers) for more details. By default, None.

  • hsds (bool, optional) – Boolean flag to use h5pyd to handle HDF5 ‘files’ hosted on AWS behind HSDS. Note that file paths starting with “/nrel/” will be treated as hsds=True regardless of this input. By default, False.

  • hsds_kwargs (dict, optional) – Dictionary of optional kwargs for h5pyd (e.g., bucket, username, password, etc.). By default, None.

Returns:

dict – Initialized and opened file where keys are group names and values are dataset instances for that group.
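A sketch of calling this method directly on the entrypoint; the file name is hypothetical and the call is guarded so the snippet runs without the file:

```python
import os

fname = "multi_group_file.h5"  # hypothetical rex-style file with HDF5 groups

if os.path.exists(fname):
    from rex.external.rexarray import RexBackendEntrypoint

    # Keys are HDF5 group names; values are the opened Dataset for each group.
    groups = RexBackendEntrypoint().open_groups_as_dict(fname)
    for group_name, ds in groups.items():
        print(group_name, list(ds.data_vars))
```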