- dask.dataframe.read_parquet(path, columns=None, filters=None, categories=None, index=None, storage_options=None, engine='auto', gather_statistics=None, ignore_metadata_file=False, metadata_task_size=None, split_row_groups=None, chunksize=None, aggregate_files=None, **kwargs)
Read a Parquet file into a Dask DataFrame
This reads a directory of Parquet data into a Dask.dataframe, one file per partition. It selects the index among the sorted columns if any exist.
- path : str or list
Source directory for data, or path(s) to individual parquet files. Prefix with a protocol like `s3://` to read from alternative filesystems. To read from multiple files you can pass a globstring or a list of paths, with the caveat that they must all have the same protocol.
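For example (the paths below are placeholders):

>>> import dask.dataframe as dd
>>> df = dd.read_parquet('my-data/')  # a directory of parquet files
>>> df = dd.read_parquet('my-data/*.parquet')  # a glob string
>>> df = dd.read_parquet(['my-data/a.parquet', 'my-data/b.parquet'])  # an explicit list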
- columns : str or list, default None
Field name(s) to read in as columns in the output. By default all non-index fields will be read (as determined by the pandas parquet metadata, if present). Provide a single field name instead of a list to read in the data as a Series.
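For instance, assuming hypothetical fields 'x' and 'y':

>>> df = dd.read_parquet('my-data/', columns=['x', 'y'])  # two-column DataFrame
>>> s = dd.read_parquet('my-data/', columns='x')  # a single field name yields a Series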
- filters : Union[List[Tuple[str, str, Any]], List[List[Tuple[str, str, Any]]]], default None
List of filters to apply, like `[[('col1', '==', 0), ...], ...]`. Using this argument will NOT result in row-wise filtering of the final partitions unless `engine="pyarrow-dataset"` is also specified. For other engines, filtering is only performed at the partition level, i.e., to prevent the loading of some row-groups and/or files.
For the “pyarrow” engines, predicates can be expressed in disjunctive normal form (DNF). This means that the innermost tuple describes a single column predicate. These inner predicates are combined with an AND conjunction into a larger predicate. The outer-most list then combines all of the combined filters with an OR disjunction.
Predicates can also be expressed as a List[Tuple]. These are evaluated as an AND conjunction. To express OR in predicates, one must use the (preferred for “pyarrow”) List[List[Tuple]] notation.
Note that the “fastparquet” engine does not currently support DNF for the filtering of partitioned columns (List[Tuple] is required).
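A sketch of both notations, using hypothetical columns 'col1' and 'col2':

>>> # List[Tuple]: predicates are ANDed together
>>> df = dd.read_parquet('my-data/', filters=[('col1', '==', 0), ('col2', '<', 5)])
>>> # List[List[Tuple]] (DNF): inner lists are ANDed, the outer list is ORed
>>> df = dd.read_parquet('my-data/', filters=[[('col1', '==', 0)], [('col2', '>', 10)]])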
- index : str, list or False, default None
Field name(s) to use as the output frame index. By default will be inferred from the pandas parquet file metadata (if present). Use False to read all fields as columns.
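For example (the field name is a placeholder):

>>> df = dd.read_parquet('my-data/', index='timestamp')  # use one field as the index
>>> df = dd.read_parquet('my-data/', index=False)  # read all fields as columns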
- categories : list or dict, default None
For any fields listed here, if the parquet encoding is Dictionary, the column will be created with dtype category. Use only if it is guaranteed that the column is encoded as dictionary in all row-groups. If a list, assumes up to 2**16-1 labels; if a dict, specify the number of labels expected; if None, will load categories automatically for data written by dask/fastparquet, not otherwise.
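For example, assuming a hypothetical dictionary-encoded field 'kind':

>>> df = dd.read_parquet('my-data/', categories=['kind'])  # assume up to 2**16-1 labels
>>> df = dd.read_parquet('my-data/', categories={'kind': 100})  # expect about 100 labels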
- storage_options : dict, default None
Key/value pairs to be passed on to the file-system backend, if any.
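For example, the s3fs backend accepts an `anon` option for anonymous access (the bucket path is a placeholder):

>>> df = dd.read_parquet('s3://bucket/my-parquet-data',
...                      storage_options={'anon': True})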
- engine : str, default ‘auto’
Parquet reader library to use. Options include: ‘auto’, ‘fastparquet’, ‘pyarrow’, ‘pyarrow-dataset’, and ‘pyarrow-legacy’. Defaults to ‘auto’, which selects the FastParquetEngine if fastparquet is installed (and ArrowDatasetEngine otherwise). If ‘pyarrow’ or ‘pyarrow-dataset’ is specified, the ArrowDatasetEngine (which leverages the pyarrow.dataset API) will be used. If ‘pyarrow-legacy’ is specified, ArrowLegacyEngine will be used (which leverages the pyarrow.parquet.ParquetDataset API). NOTE: The ‘pyarrow-legacy’ option (ArrowLegacyEngine) is deprecated for pyarrow>=5.
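For example:

>>> df = dd.read_parquet('my-data/', engine='pyarrow-dataset')  # uses the pyarrow.dataset API
>>> df = dd.read_parquet('my-data/', engine='fastparquet')  # requires fastparquet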
- gather_statistics : bool, default None
Gather the statistics for each dataset partition. By default, this will only be done if the _metadata file is available. Otherwise, statistics will only be gathered if True, because the footer of every file will be parsed (which is very slow on some systems).
- ignore_metadata_file : bool, default False
Whether to ignore the global `_metadata` file (when one is present). If `True`, or if the global `_metadata` file is missing, the parquet metadata may be gathered and processed in parallel. Parallel metadata processing is currently supported for `ArrowDatasetEngine` only.
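For example, to skip a global `_metadata` file and gather footer metadata in parallel:

>>> df = dd.read_parquet('my-data/', engine='pyarrow-dataset', ignore_metadata_file=True)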
- metadata_task_size : int, default configurable
If parquet metadata is processed in parallel (see the `ignore_metadata_file` description above), this argument can be used to specify the number of dataset files to be processed by each task in the Dask graph. If this argument is set to `0`, parallel metadata processing will be disabled. The default values for local and remote filesystems can be specified with the “metadata-task-size-local” and “metadata-task-size-remote” config fields, respectively (see “dataframe.parquet”).
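A sketch of both ways to control this; the full config key path is an assumption based on the field names above:

>>> import dask
>>> dask.config.set({'dataframe.parquet.metadata-task-size-remote': 16})  # assumed key path
>>> df = dd.read_parquet('my-data/', metadata_task_size=32)  # or set it per call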
- split_row_groups : bool or int, default None
Default is True if a _metadata file is available or if the dataset is composed of a single file (otherwise default is False). If True, each output dataframe partition will correspond to a single parquet-file row-group. If False, each partition will correspond to a complete file. If a positive integer value is given, each dataframe partition will correspond to that number of parquet row-groups (or fewer). Only the “pyarrow” engine supports this argument.
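For example:

>>> df = dd.read_parquet('my-data/', split_row_groups=True)  # one partition per row-group
>>> df = dd.read_parquet('my-data/', split_row_groups=4)  # up to 4 row-groups per partition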
- chunksize : int or str, default None
The desired size of each output `DataFrame` partition in terms of total (uncompressed) parquet storage space. If specified, adjacent row-groups and/or files will be aggregated into the same output partition until the cumulative `total_byte_size` parquet-metadata statistic reaches this value. Use `aggregate_files` to enable/disable inter-file aggregation.
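For example, using a byte-size string (the path is a placeholder):

>>> df = dd.read_parquet('my-data/', chunksize='256MB', aggregate_files=True)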
- aggregate_files : bool or str, default None
Whether distinct file paths may be aggregated into the same output partition. This parameter requires gather_statistics=True, and is only used when chunksize is specified or when split_row_groups is an integer >1. A setting of True means that any two file paths may be aggregated into the same output partition, while False means that inter-file aggregation is prohibited.
For “hive-partitioned” datasets, a “partition”-column name can also be specified. In this case, we allow the aggregation of any two files sharing a file path up to, and including, the corresponding directory name. For example, if `aggregate_files` is set to `"section"` for the directory structure below, `03.parquet` and `04.parquet` may be aggregated together, but `01.parquet` and `02.parquet` cannot be. If, however, `aggregate_files` is set to `"region"`, `01.parquet` may be aggregated with `02.parquet`, and `03.parquet` may be aggregated with `04.parquet`:

    dataset-path/
    ├── region=1/
    │   ├── section=a/
    │   │   └── 01.parquet
    │   ├── section=b/
    │   └── └── 02.parquet
    └── region=2/
        ├── section=a/
        │   ├── 03.parquet
        └── └── 04.parquet
Note that the default behavior of `aggregate_files` is False.
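For the directory structure above, a sketch of both settings:

>>> df = dd.read_parquet('dataset-path/', chunksize='100MB', aggregate_files='section')
>>> df = dd.read_parquet('dataset-path/', chunksize='100MB', aggregate_files='region')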
- **kwargs : dict (of dicts)
Passthrough key-word arguments for the read backend. The top-level keys correspond to the appropriate operation type, and the second level corresponds to the kwargs that will be passed on to the underlying `pyarrow` or `fastparquet` function. Supported top-level keys: ‘dataset’ (for opening a `pyarrow` dataset), ‘file’ (for opening a `fastparquet` `ParquetFile`), ‘read’ (for the backend read function), and ‘arrow_to_pandas’ (for controlling the arguments used when converting from a `pyarrow.Table` via `Table.to_pandas()`).
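A sketch of the nested-kwargs pattern; the specific second-level option names are assumptions about the backend APIs (`pyarrow.dataset` partitioning and `pyarrow.Table.to_pandas()` threading):

>>> df = dd.read_parquet(
...     'my-data/',
...     engine='pyarrow-dataset',
...     dataset={'partitioning': 'hive'},  # passed when opening the pyarrow dataset (assumed option)
...     arrow_to_pandas={'use_threads': False},  # passed to pyarrow.Table.to_pandas()
... )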
Examples

>>> df = dd.read_parquet('s3://bucket/my-parquet-data')