Dask DataFrame
In this tutorial, you learn:
- Basic concepts and features of Dask DataFrames
- Applications of Dask DataFrames
- Interacting with Dask DataFrames
- Built-in operations with Dask DataFrames
- Dask DataFrames Best Practices
Prerequisites
Concepts | Importance | Notes
---|---|---
Familiarity with pandas | Necessary |
Familiarity with Dask Arrays | Necessary |
Time to learn: 40 minutes
Introduction
Image credit: Dask Contributors
pandas is a very popular tool for working with tabular datasets, but the dataset needs to fit into memory.
pandas operates best with smaller datasets; if you have a large dataset, you'll run into out-of-memory errors. A general rule of thumb for pandas is:
- "Have 5 to 10 times as much RAM as the size of your dataset"
Wes McKinney (2017) in 10 things I hate about pandas
But Dask DataFrame can be used to solve pandas' performance issues with larger-than-memory datasets.
What is Dask DataFrame?
A Dask DataFrame is a parallel DataFrame composed of smaller pandas DataFrames (also known as partitions).
On the surface, Dask DataFrames look and feel like pandas DataFrames.
Under the hood, a Dask DataFrame splits the data into manageable partitions that can be processed in parallel across multiple cores or machines.
Similar to Dask Arrays, Dask DataFrames are lazy!
Unlike pandas, operations on Dask DataFrames are not computed until you explicitly request them (e.g. by calling `.compute()`).
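To see this laziness in action, here is a minimal sketch using a small synthetic DataFrame (not the tutorial data):

```python
import pandas as pd
import dask.dataframe as dd

# Wrap a small pandas DataFrame in a Dask DataFrame with 2 partitions
pdf = pd.DataFrame({"x": range(10), "y": range(10)})
ddf = dd.from_pandas(pdf, npartitions=2)

lazy_sum = ddf.x + ddf.y      # builds a task graph; nothing is computed yet
result = lazy_sum.compute()   # triggers execution and returns a pandas Series
print(result.head())
```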
When to use Dask DataFrame and when to avoid it?
Dask DataFrames are used in situations where pandas fails or has poor performance due to data size.
Dask DataFrame is a good choice when doing parallelizable computations.
Some examples are:
- Element-wise operations such as `df.x + df.y`
- Row-wise filtering such as `df[df.x > 0]`
- Common aggregations such as `df.x.max()`
- Dropping duplicates such as `df.x.drop_duplicates()`
However, Dask is not great for operations that require shuffling or re-indexing.
Some examples are:
- Setting the index: `df.set_index(df.x)`
See the Dask DataFrame API documentation for a comprehensive list of available functions.
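To make the contrast concrete, here is a hedged sketch on a tiny synthetic DataFrame: the filter and aggregation map independently over partitions, while `set_index` forces a shuffle of rows between partitions.

```python
import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": [1, -2, 3, -4]}), npartitions=2)

fast = ddf[ddf.x > 0].x.max()   # row-wise filter + aggregation: embarrassingly parallel
slow = ddf.set_index("x")       # re-indexing: shuffles rows across partitions

print(fast.compute())           # both are lazy until computed; prints 3
```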
Tutorial Dataset
In this tutorial, we are going to use the NOAA Global Historical Climatology Network Daily (GHCN-D) dataset.
GHCN-D is a publicly available dataset that includes daily climate records from more than 100,000 surface stations around the world.
This is an example of a real dataset that is used by NCAR scientists for their research. GHCN-D raw dataset for all stations is available through NOAA Climate Data Online.
To learn more about the GHCN-D dataset, please visit:
Download the data
For this example, we are going to look through a subset of data from the GHCN-D dataset.
First, we look at the daily observations from Denver International Airport; next, we look at selected stations in the US.
To access the preprocessed dataset for this tutorial, please run the following script:
!./get_data.sh
Downloading https://docs.google.com/uc?export=download&id=14doSRn8hT14QYtjZz28GKv14JgdIsbFF
USC00023160.csv
USC00027281.csv
USC00027390.csv
USC00030936.csv
USC00031596.csv
USC00032444.csv
USC00035186.csv
USC00035754.csv
USC00035820.csv
USC00035908.csv
USC00042294.csv
USC00044259.csv
USC00048758.csv
USC00050848.csv
USC00051294.csv
USC00051528.csv
USC00051564.csv
USC00051741.csv
USC00052184.csv
USC00052281.csv
USC00052446.csv
USC00053005.csv
USC00053038.csv
USC00053146.csv
USC00053662.csv
USC00053951.csv
USC00054076.csv
USC00054770.csv
USC00054834.csv
USC00055322.csv
USC00055722.csv
USC00057167.csv
USC00057337.csv
Downloading https://docs.google.com/uc?export=download&id=15rCwQUxxpH6angDhpXzlvbe1nGetYHrf
USC00057936.csv
USC00058204.csv
USC00058429.csv
USC00059243.csv
USC00068138.csv
USC00080211.csv
USC00084731.csv
USC00088824.csv
USC00098703.csv
USC00100010.csv
USC00100470.csv
USC00105275.csv
USC00106152.csv
USC00107264.csv
USC00108137.csv
USC00110338.csv
USC00112140.csv
USC00112193.csv
USC00112348.csv
USC00112483.csv
USC00113335.csv
USC00114108.csv
USC00114442.csv
USC00114823.csv
USC00115079.csv
USC00115326.csv
USC00115712.csv
USC00115768.csv
USC00115833.csv
USC00115901.csv
USC00115943.csv
USC00116446.csv
USW00003017.csv
Downloading https://docs.google.com/uc?export=download&id=1Tbuom1KMCwHjy7-eexEQcOXSr51i6mae
This script should save the preprocessed GHCN-D data in the `../data` path.
Pandas DataFrame Basics
Let’s start with an example using pandas DataFrame.
First, let's read in the comma-separated GHCN-D dataset for one station at Denver International Airport (DIA), CO (site ID: `USW00003017`).
To see the list of all available GHCN-D sites and their coordinates and IDs, please see this link.
import os
import pandas as pd
# DIA ghcnd id
site = 'USW00003017'
data_dir = '../data/'
df = pd.read_csv(os.path.join(data_dir, site+'.csv'), parse_dates=['DATE'], index_col=0)
# Display the top five rows of the dataframe
df.head()
ID | YEAR | MONTH | DAY | TMAX | TMAX_FLAGS | TMIN | TMIN_FLAGS | PRCP | PRCP_FLAGS | ... | RHMN_FLAGS | RHMX | RHMX_FLAGS | PSUN | PSUN_FLAGS | LATITUDE | LONGITUDE | ELEVATION | STATE | STATION | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DATE | |||||||||||||||||||||
1994-07-20 | USW00003017 | 1994 | 7 | 20 | 316.0 | XXS | 150.0 | XXS | 20.0 | DXS | ... | XXX | NaN | XXX | NaN | XXX | 39.8467 | -104.6561 | 1647.1 | CO | DENVER INTL AP |
1994-07-23 | USW00003017 | 1994 | 7 | 23 | 355.0 | XXS | 166.0 | XXS | 0.0 | DXS | ... | XXX | NaN | XXX | NaN | XXX | 39.8467 | -104.6561 | 1647.1 | CO | DENVER INTL AP |
1994-07-24 | USW00003017 | 1994 | 7 | 24 | 333.0 | XXS | 155.0 | XXS | 81.0 | DXS | ... | XXX | NaN | XXX | NaN | XXX | 39.8467 | -104.6561 | 1647.1 | CO | DENVER INTL AP |
1994-07-25 | USW00003017 | 1994 | 7 | 25 | 327.0 | XXS | 172.0 | XXS | 0.0 | DXS | ... | XXX | NaN | XXX | NaN | XXX | 39.8467 | -104.6561 | 1647.1 | CO | DENVER INTL AP |
1994-07-26 | USW00003017 | 1994 | 7 | 26 | 327.0 | XXS | 155.0 | XXS | 0.0 | DXS | ... | XXX | NaN | XXX | NaN | XXX | 39.8467 | -104.6561 | 1647.1 | CO | DENVER INTL AP |
5 rows × 99 columns
Question: What variables are available?
df.columns
Index(['ID', 'YEAR', 'MONTH', 'DAY', 'TMAX', 'TMAX_FLAGS', 'TMIN',
'TMIN_FLAGS', 'PRCP', 'PRCP_FLAGS', 'TAVG', 'TAVG_FLAGS', 'SNOW',
'SNOW_FLAGS', 'SNWD', 'SNWD_FLAGS', 'AWND', 'AWND_FLAGS', 'FMTM',
'FMTM_FLAGS', 'PGTM', 'PGTM_FLAGS', 'WDF2', 'WDF2_FLAGS', 'WDF5',
'WDF5_FLAGS', 'WSF2', 'WSF2_FLAGS', 'WSF5', 'WSF5_FLAGS', 'WT01',
'WT01_FLAGS', 'WT02', 'WT02_FLAGS', 'WT08', 'WT08_FLAGS', 'WT16',
'WT16_FLAGS', 'WT17', 'WT17_FLAGS', 'WT18', 'WT18_FLAGS', 'WT03',
'WT03_FLAGS', 'WT05', 'WT05_FLAGS', 'WT19', 'WT19_FLAGS', 'WT10',
'WT10_FLAGS', 'WT09', 'WT09_FLAGS', 'WT06', 'WT06_FLAGS', 'WT07',
'WT07_FLAGS', 'WT11', 'WT11_FLAGS', 'WT13', 'WT13_FLAGS', 'WT21',
'WT21_FLAGS', 'WT14', 'WT14_FLAGS', 'WT15', 'WT15_FLAGS', 'WT22',
'WT22_FLAGS', 'WT04', 'WT04_FLAGS', 'WV03', 'WV03_FLAGS', 'TSUN',
'TSUN_FLAGS', 'WV01', 'WV01_FLAGS', 'WESD', 'WESD_FLAGS', 'ADPT',
'ADPT_FLAGS', 'ASLP', 'ASLP_FLAGS', 'ASTP', 'ASTP_FLAGS', 'AWBT',
'AWBT_FLAGS', 'RHAV', 'RHAV_FLAGS', 'RHMN', 'RHMN_FLAGS', 'RHMX',
'RHMX_FLAGS', 'PSUN', 'PSUN_FLAGS', 'LATITUDE', 'LONGITUDE',
'ELEVATION', 'STATE', 'STATION'],
dtype='object')
The description and units of the dataset are available here.
Operations on pandas DataFrame
pandas DataFrames have several features that give us the flexibility to do different calculations and analyses on our dataset. Let's check some out:
Simple Analysis
For example:
When was the coldest day at this station during December of last year?
# use python slicing notation inside .loc
# use idxmin() to find the index of the minimum value
df.loc['2022-12-01':'2022-12-31'].TMIN.idxmin()
Timestamp('2022-12-22 00:00:00')
# Here we easily plot the data above using pandas' built-in matplotlib integration
# -- .loc for label-based indexing
df.loc['2022-12-01':'2022-12-31'].SNWD.plot(ylabel= 'Daily Average Snow Depth [mm]')
<Axes: xlabel='DATE', ylabel='Daily Average Snow Depth [mm]'>
How many snow days do we have each year at this station?
pandas `groupby` is used to group the data according to the chosen categories.
# 1- First select days with snow > 0
# 2- Create a "groupby object" based on the selected columns
# 3- use .size() to compute the size of each group
# 4- sort the values in descending order
# In short: count days where SNOW>0, sort them, and show the top 5 years:
df[df['SNOW']>0].groupby('YEAR').size().sort_values(ascending=False).head()
YEAR
2015 36
2019 34
2014 32
2008 32
2007 31
dtype: int64
Or for a more complex analysis:
For example, we have heard that this could be Denver’s first January in 13 years with no 60-degree days.
Below, we show all January days since 2010 with a high temperature above 60°F (GHCN-D stores temperature in tenths of °C, so 60°F ≈ 15.55°C is stored as 155.5):
df[(df['MONTH']==1) & (df['YEAR']>=2010) & (df['TMAX']>155.5)].groupby(['YEAR']).size()
YEAR
2011 1
2012 6
2013 4
2014 3
2015 6
2016 1
2017 4
2018 5
2019 3
2020 2
2021 2
2022 3
dtype: int64
This is great! But how big is this dataset for one station?
First, let’s check the file size:
!ls -lh ../data/USW00003017.csv
-rw-r--r-- 1 runner docker 3.6M Feb 5 2023 ../data/USW00003017.csv
Similar to the previous tutorial, we can use the following function to find the size of a variable in memory.
# Define function to display variable size in MB
import sys
def var_size(in_var):
result = sys.getsizeof(in_var) / 1e6
print(f"Size of variable: {result:.2f} MB")
var_size(df)
Size of variable: 33.21 MB
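As an aside, pandas can report the same information per column; a short sketch (`deep=True` also counts the contents of object-dtype columns such as strings):

```python
# Per-column memory usage in MB, then the total
mem = df.memory_usage(deep=True) / 1e6
print(mem.head())
print(f"Total: {mem.sum():.2f} MB")
```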
Remember the rule from above?
- “Have 5 to 10 times as much RAM as the size of your dataset”
Wes McKinney (2017) in 10 things I hate about pandas
So far, we have read in and analyzed data for one station. In total there are more than 118,000 stations around the world and more than 4,500 stations in Colorado alone!
What if we want to look at the larger dataset?
Scaling up to a larger dataset
Let’s start by reading data from selected stations. The downloaded data for this example includes the climatology observations from 66 selected sites in Colorado.
pandas can load data spread across multiple files by reading each file and concatenating the results:
!du -csh ../data/*.csv |tail -n1
565M total
Using `pd.concat` with a generator expression over the file list, we can read all of the files and combine them into a single DataFrame:
%%time
import glob
co_sites = glob.glob(os.path.join(data_dir, '*.csv'))
df = pd.concat(pd.read_csv(f, index_col=0, parse_dates=['DATE']) for f in co_sites)
CPU times: user 8.27 s, sys: 1.42 s, total: 9.69 s
Wall time: 9.69 s
How many stations have we read in?
print ("Concatenated data for", len(df.ID.unique()), "unique sites.")
Concatenated data for 66 unique sites.
Now that we have concatenated the data for all sites into one DataFrame, we can run similar analyses on it:
Which site has recorded the most snow days in a year?
%%time
# ~90s on 4GB RAM
snowy_days = df[df['SNOW']>0].groupby(['ID','YEAR']).size()
print ('This site has the highest number of snow days in a year : ')
snowy_days.agg(['idxmax','max'])
This site has the highest number of snow days in a year :
CPU times: user 387 ms, sys: 0 ns, total: 387 ms
Wall time: 386 ms
idxmax (USC00052281, 1983)
max 102
dtype: object
Exercise: Which Colorado site has recorded the most snow days in 2023?
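One possible approach (a sketch, assuming the downloaded files contain 2023 observations) reuses the pandas pipeline above with an extra year filter:

```python
# Sketch: restrict to 2023, count snow days per site, then take the largest
snowy_2023 = df[(df['SNOW'] > 0) & (df['YEAR'] == 2023)].groupby('ID').size()
print(snowy_2023.idxmax(), snowy_2023.max())
```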
Dask allows us to conceptualize all of these files as a single dataframe!
# Let's do a little cleanup
del df, snowy_days
Computations on Dask DataFrame
Create a “LocalCluster” Client with Dask
from dask.distributed import Client, LocalCluster
cluster = LocalCluster()
client = Client(cluster)
client
Client
Client-db43c640-b27f-11ef-8a3c-000d3a3c95f2
Connection method: Cluster object | Cluster type: distributed.LocalCluster |
Dashboard: http://127.0.0.1:8787/status |
Cluster Info
LocalCluster
fad2356e
Dashboard: http://127.0.0.1:8787/status | Workers: 4 |
Total threads: 4 | Total memory: 15.61 GiB |
Status: running | Using processes: True |
Scheduler Info
Scheduler
Scheduler-b1e02555-6fa6-4d25-8309-c0891029dbd3
Comm: tcp://127.0.0.1:44639 | Workers: 4 |
Dashboard: http://127.0.0.1:8787/status | Total threads: 4 |
Started: Just now | Total memory: 15.61 GiB |
Workers
Worker: 0
Comm: tcp://127.0.0.1:44047 | Total threads: 1 |
Dashboard: http://127.0.0.1:33137/status | Memory: 3.90 GiB |
Nanny: tcp://127.0.0.1:36517 | |
Local directory: /tmp/dask-scratch-space/worker-ocvrw9yp |
Worker: 1
Comm: tcp://127.0.0.1:44099 | Total threads: 1 |
Dashboard: http://127.0.0.1:38989/status | Memory: 3.90 GiB |
Nanny: tcp://127.0.0.1:41367 | |
Local directory: /tmp/dask-scratch-space/worker-ezhc_xdq |
Worker: 2
Comm: tcp://127.0.0.1:45259 | Total threads: 1 |
Dashboard: http://127.0.0.1:43153/status | Memory: 3.90 GiB |
Nanny: tcp://127.0.0.1:42955 | |
Local directory: /tmp/dask-scratch-space/worker-4457cvjz |
Worker: 3
Comm: tcp://127.0.0.1:39681 | Total threads: 1 |
Dashboard: http://127.0.0.1:44951/status | Memory: 3.90 GiB |
Nanny: tcp://127.0.0.1:41481 | |
Local directory: /tmp/dask-scratch-space/worker-ax10eyf7 |
☝️ Click the Dashboard link above.
👈 Or click the “Search” 🔍 button in the dask-labextension dashboard.
Dask DataFrame `read_csv` to read multiple files
The `dask.dataframe.read_csv` function can be used in conjunction with `glob` to read multiple CSV files at the same time.
Remember, we can read one file with `pandas.read_csv`. For reading multiple files with pandas, we have to concatenate them with `pd.concat`. However, we can read many files at once just by using `dask.dataframe.read_csv`.
Overall, Dask is designed to perform I/O in parallel and is more performant than pandas for operations with multiple files or large files.
%%time
import dask
import dask.dataframe as dd
ddf = dd.read_csv(co_sites, parse_dates=['DATE'])
ddf
CPU times: user 401 ms, sys: 12.4 ms, total: 413 ms
Wall time: 408 ms
Dask DataFrame structure: npartitions=66; columns DATE (datetime64[ns]), ID (string), YEAR/MONTH/DAY (int64), the observation columns such as PRCP, SNWD, SNOW, TMAX, TMIN (float64) with their *_FLAGS (string), LATITUDE/LONGITUDE/ELEVATION (float64), and STATE/STATION (string). Only the schema is shown; the repr contains no data values.
ddf.TMAX.mean()
<dask_expr.expr.Scalar: expr=ReadCSV(f124e8d)['TMAX'].mean(), dtype=float64>
Notice that the representation of the DataFrame object contains no data, just headers and datatypes. Why?
Lazy Evaluation
Similar to Dask Arrays, Dask DataFrames are lazy. The data has not actually been read into the DataFrame yet (a.k.a. lazy evaluation).
Dask just constructs the task graph of the computation and "evaluates" it only when necessary.
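For example, nothing is read from disk until we explicitly ask for a concrete result:

```python
mean_tmax = ddf.TMAX.mean()        # still lazy: just an expression and task graph
mean_value = mean_tmax.compute()   # now the CSVs are read and the mean is computed
```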
So how does Dask know the name and dtype of each column?
Dask reads just the start of the first file and infers the column names and dtypes from it.
Unlike `pandas.read_csv`, which reads in all files before inferring data types, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or the first file, if using a glob). The inferred column names and dtypes are then enforced when reading each partition (so Dask can make mistakes on these inferences if there is missing or misleading data in the early rows).
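If the inference does go wrong, one common remedy (a hedged sketch; these particular columns and values are just examples) is to pass explicit dtypes and/or a larger sample to `dd.read_csv`:

```python
# Explicit dtypes override inference; `sample` sets how many bytes are inspected
ddf_typed = dd.read_csv(
    co_sites,
    parse_dates=['DATE'],
    dtype={'SNOW': 'float64', 'TMAX': 'float64'},
    sample=1_000_000,
)
```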
Let’s take a look at the start of our dataframe:
ddf.head()
DATE | ID | YEAR | MONTH | DAY | PRCP | PRCP_FLAGS | SNWD | SNWD_FLAGS | SNOW | ... | MNPN_FLAGS | MXPN | MXPN_FLAGS | WT10 | WT10_FLAGS | LATITUDE | LONGITUDE | ELEVATION | STATE | STATION | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1903-09-13 | USC00048758 | 1903 | 9 | 13 | 0.0 | TX6 | NaN | XXX | NaN | ... | XXX | NaN | XXX | NaN | XXX | 39.1678 | -120.1428 | 1898.9 | CA | TAHOE CITY |
1 | 1903-10-10 | USC00048758 | 1903 | 10 | 10 | 97.0 | XX6 | NaN | XXX | NaN | ... | XXX | NaN | XXX | NaN | XXX | 39.1678 | -120.1428 | 1898.9 | CA | TAHOE CITY |
2 | 1903-11-04 | USC00048758 | 1903 | 11 | 4 | 1778.0 | XS6 | NaN | XXX | NaN | ... | XXX | NaN | XXX | NaN | XXX | 39.1678 | -120.1428 | 1898.9 | CA | TAHOE CITY |
3 | 1903-11-07 | USC00048758 | 1903 | 11 | 7 | 0.0 | TX6 | NaN | XXX | NaN | ... | XXX | NaN | XXX | NaN | XXX | 39.1678 | -120.1428 | 1898.9 | CA | TAHOE CITY |
4 | 1903-11-14 | USC00048758 | 1903 | 11 | 14 | 406.0 | XX6 | NaN | XXX | NaN | ... | XXX | NaN | XXX | NaN | XXX | 39.1678 | -120.1428 | 1898.9 | CA | TAHOE CITY |
5 rows × 70 columns
NOTE: Whenever we operate on our dataframe, we read through all of our CSV data, and Dask deletes intermediate results (like the full pandas DataFrame for each file) as soon as possible so that we don't fill up RAM. This enables you to handle datasets that are larger than memory, but repeated computations will have to load all of the data each time.
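If you plan to run several computations on the same data and it fits in your cluster's memory, `persist()` keeps the loaded partitions in distributed memory so later operations skip the CSV parsing; a hedged sketch:

```python
# Load once into distributed memory; subsequent computations reuse the partitions
ddf_cached = ddf.persist()
```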
The same data manipulations that work on a `pandas.DataFrame` can be applied to a `dask.dataframe`.
For example, let’s find the highest number of snow days in Colorado:
%%time
print ('This site has the highest number of snow days in a year : ')
snowy_days = ddf[ddf['SNOW']>0].groupby(['ID','YEAR']).size()
snowy_days.compute().agg(['idxmax','max'])
This site has the highest number of snow days in a year :
idxmax    (USC00052281, 1983)
max                       102
dtype: object
Nice, but what did Dask do?
# Requires ipywidgets
snowy_days.dask
{('size-tree-4e04cecba34e538ed8ba4cb1dabb1079', 1, 0): (<function dask.utils.apply(func, args, kwargs=None)>, <bound method SingleAggregation.aggregate of <class 'dask_expr._groupby.Size'>>, [[('chunk-488cb44e5d626ef61bd1f992e6ed8d92', 0), ('chunk-488cb44e5d626ef61bd1f992e6ed8d92', 1), ..., ('chunk-488cb44e5d626ef61bd1f992e6ed8d92', 7)]], {'aggfunc': <methodcaller: sum>, 'levels': [0, 1], 'sort': None, 'observed': False}),
 ...}
(Output truncated: the full graph contains getitem, gt, and chunk tasks for each of the 66 partitions, plus a tree of size-tree aggregation tasks that combine the per-partition results.)
('getitem-63332a7872aa6e75aef5f39c6770c679',
5): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 5) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
6): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 6) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
7): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 7) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
8): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 8) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
9): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 9) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
10): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 10) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
11): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 11) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
12): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 12) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
13): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 13) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
14): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 14) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
15): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 15) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
16): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 16) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
17): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 17) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
18): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 18) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
19): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 19) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
20): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 20) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
21): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 21) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
22): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 22) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
23): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 23) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
24): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 24) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
25): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 25) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
26): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 26) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
27): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 27) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
28): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 28) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
29): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 29) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
30): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 30) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
31): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 31) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
32): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 32) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
33): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 33) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
34): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 34) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
35): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 35) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
36): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 36) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
37): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 37) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
38): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 38) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
39): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 39) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
40): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 40) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
41): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 41) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
42): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 42) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
43): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 43) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
44): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 44) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
45): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 45) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
46): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 46) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
47): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 47) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
48): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 48) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
49): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 49) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
50): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 50) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
51): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 51) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
52): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 52) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
53): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 53) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
54): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 54) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
55): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 55) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
56): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 56) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
57): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 57) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
58): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 58) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
59): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 59) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
60): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 60) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
61): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 61) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
62): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 62) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
63): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 63) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
64): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 64) getitem(...)>,
('getitem-63332a7872aa6e75aef5f39c6770c679',
65): <Task ('getitem-63332a7872aa6e75aef5f39c6770c679', 65) getitem(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
0): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 0) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
1): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 1) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
2): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 2) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
3): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 3) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
4): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 4) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
5): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 5) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
6): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 6) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
7): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 7) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
8): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 8) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
9): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 9) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
10): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 10) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
11): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 11) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
12): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 12) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
13): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 13) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
14): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 14) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
15): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 15) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
16): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 16) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
17): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 17) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
18): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 18) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
19): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 19) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
20): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 20) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
21): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 21) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
22): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 22) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
23): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 23) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
24): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 24) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
25): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 25) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
26): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 26) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
27): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 27) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
28): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 28) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
29): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 29) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
30): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 30) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
31): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 31) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
32): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 32) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
33): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 33) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
34): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 34) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
35): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 35) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
36): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 36) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
37): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 37) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
38): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 38) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
39): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 39) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
40): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 40) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
41): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 41) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
42): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 42) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
43): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 43) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
44): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 44) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
45): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 45) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
46): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 46) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
47): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 47) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
48): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 48) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
49): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 49) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
50): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 50) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
51): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 51) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
52): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 52) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
53): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 53) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
54): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 54) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
55): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 55) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
56): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 56) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
57): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 57) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
58): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 58) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
59): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 59) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
60): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 60) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
61): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 61) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
62): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 62) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
63): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 63) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
64): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 64) lambda(...)>,
('read_csv-c01028db9af9be8fab6d8a8aef124e8d',
65): <Task ('read_csv-c01028db9af9be8fab6d8a8aef124e8d', 65) lambda(...)>}
You can also view the underlying task graph using .visualize():
# graph is too large
snowy_days.visualize()
Use .compute wisely!
.persist or caching
Sometimes you might want your computations to keep intermediate results in memory, if they fit. The .persist() method can be used to “cache” data and tell Dask which results to keep around. Only use .persist() with data or computations that fit in memory.
For example, if we want to only do analysis on a subset of data (for example snow days at Boulder site):
boulder_snow = ddf[(ddf['SNOW']>0)&(ddf['ID']=='USC00050848')]
%%time
tmax = boulder_snow.TMAX.mean().compute()
tmin = boulder_snow.TMIN.mean().compute()
print(tmin, tmax)
boulder_snow = ddf[(ddf['SNOW']>0)&(ddf['ID']=='USC00050848')].persist()
%%time
tmax = boulder_snow.TMAX.mean().compute()
tmin = boulder_snow.TMIN.mean().compute()
print(tmin, tmax)
-74.82074711099168 37.419103836866114
CPU times: user 771 ms, sys: 40.8 ms, total: 812 ms
Wall time: 4.91 s
As you can see, the analysis on the persisted data is fast because the loading and filtering work is done once up front and kept in memory, rather than being repeated for each .compute() call.
Dask DataFrames Best Practices
Use pandas (when you can)
For data that fits into RAM, pandas can often be easier and more efficient to use than Dask DataFrame. However, Dask DataFrame is a powerful tool for larger-than-memory datasets.
When the data is larger than memory, Dask DataFrame can be used to reduce it (for example, by filtering or aggregating) to a size that pandas can handle, and then you can switch to pandas for the rest of the analysis, as sketched below.
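A minimal sketch of this pattern (the file pattern is illustrative; the SNOW column follows this tutorial's dataset):
import dask.dataframe as dd

# Reduce the data in parallel with Dask ...
ddf = dd.read_csv("data/*.csv")
subset = ddf[ddf["SNOW"] > 0]

# ... then bring the (now small) result into pandas and continue there
pdf = subset.compute()
print(pdf.describe())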
Avoid Full-Data Shuffling
Some operations are more expensive to compute in a parallel setting than they are in memory on a single machine (for example, set_index or merge). In particular, shuffling operations that rearrange data across partitions can become very communication intensive.
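If you must set an index and you know the data is already sorted on that column, you can tell Dask so and avoid a full shuffle. A minimal sketch (sorted=True is only safe if the data really is globally sorted by that column):
# Full shuffle: Dask must repartition all rows by the new index
ddf_by_date = ddf.set_index("DATE")

# No shuffle: hint that the data is already sorted by DATE
ddf_by_date = ddf.set_index("DATE", sorted=True)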
pandas performance tips
pandas performance tips, such as using vectorized operations, also apply to Dask DataFrames (see the sketch below). See the Modern Pandas notebook for more tips on better performance with pandas.
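For example, prefer vectorized column arithmetic over a row-wise apply (a minimal sketch using columns from this tutorial's dataset):
# Slow: apply runs interpreted Python on every row of every partition
trange = ddf.apply(lambda row: row["TMAX"] - row["TMIN"], axis=1, meta=(None, "f8"))

# Fast: vectorized arithmetic operates on whole columns at once
trange = ddf["TMAX"] - ddf["TMIN"]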
Check Partition Size
Similar to chunks, partitions should be small enough that they fit in memory, but large enough that communication and scheduling overhead does not dominate.
blocksize
The partition size can be set using the blocksize argument. If none is given, the blocksize is calculated from the available memory and the number of cores on the machine, up to a maximum of 64 MB. As we increase the blocksize, the number of partitions (calculated by Dask) decreases. This is especially important when reading one large CSV file.
As a good rule of thumb, you should aim for partitions that hold around 100 MB of data each, for example:
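A minimal sketch (the file pattern is illustrative):
import dask.dataframe as dd

# Larger blocks -> fewer, bigger partitions; aim for roughly 100 MB each
ddf = dd.read_csv("data/*.csv", blocksize="100MB")
print(ddf.npartitions)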
Smart use of .compute()
Try to avoid calling .compute() for as long as possible. Dask works best when users defer computation until results are actually needed; calling .compute() tells Dask to trigger execution of the task graph.
As shown in the example above, intermediate results can also be shared by triggering computation only once, as sketched below.
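A minimal sketch of computing several results in one pass: handing multiple lazy objects to a single dask.compute call lets Dask reuse the shared parts of the graph (here, the reading and filtering) instead of executing them twice:
import dask

tmax_lazy = boulder_snow.TMAX.mean()
tmin_lazy = boulder_snow.TMIN.mean()

# One call, one traversal of the shared intermediate results
tmax, tmin = dask.compute(tmax_lazy, tmin_lazy)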
Close your local Dask Cluster
It is always good practice to close the Dask cluster you created when you are done.
client.shutdown()
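Alternatively (a minimal sketch, assuming a local cluster), context managers shut everything down for you, even if an error occurs mid-analysis:
from dask.distributed import Client, LocalCluster

with LocalCluster() as cluster, Client(cluster) as client:
    result = ddf["SNOW"].max().compute()
# the client and cluster are closed automatically here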
Summary
In this notebook, we have learned about:
Dask DataFrame concepts and components.
When to use Dask DataFrames and when to avoid them.
How to interact with Dask DataFrames.
Some best practices around Dask DataFrames.
Resources and references
Reference
Ask for help
dask tag on Stack Overflow, for usage questions
github discussions: dask, for general, non-bug discussion and usage questions
github issues: dask, for bug reports and feature requests