
Complex Searching with intake-esgf

Overview

In this tutorial we will present an interface under design to facilitate complex searching using intake-esgf. intake-esgf is a small, intake- and intake-esm-inspired package under development in ESGF2. Please note that there is a name collision with an existing package on PyPI and conda; you will need to install the package from source.
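A hedged sketch of a source install from a notebook cell follows; the repository location (https://github.com/esgf2-us/intake-esgf) is an assumption on our part and may differ from where the development version is actually hosted:

# Assumed development repository; substitute the correct source location if it differs
!pip install git+https://github.com/esgf2-us/intake-esgf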

Prerequisites

| Concepts | Importance | Notes |
| --- | --- | --- |
| Install Package | Necessary | |
| Understanding of NetCDF | Helpful | Familiarity with metadata structure |
| Familiar with intake-esm | Helpful | Similar interface |
| Transient climate response | Helpful | Background |

  • Time to learn: 30 minutes

Imports

from intake_esgf import ESGFCatalog

Initializing the Catalog

As with intake-esm, we first instantiate the catalog. However, since we will populate the catalog with search results, it starts empty. Internally, intake-esgf queries different ESGF index nodes for information about the datasets you wish to include in your analysis. As ESGF2 is actively working on an index redesign, our catalogs by default point to a Globus (ElasticSearch) based index at ALCF (Argonne Leadership Computing Facility).

cat = ESGFCatalog()
print(cat)
for ind in cat.indices: # Which indices are included?
    print(ind)
Perform a search() to populate the catalog.
GlobusESGFIndex('anl-dev')

We also provide support for connecting to the ESGF1 Solr-based indices. You may specify a single server, a list of servers, or simply pass True to include all of the federated index nodes.

cat = ESGFCatalog(esgf1_indices="esgf-node.llnl.gov")  # include LLNL
cat = ESGFCatalog(esgf1_indices=["esgf-node.ornl.gov", "esgf.ceda.ac.uk"])  # ORNL & CEDA
cat = ESGFCatalog(esgf1_indices=True)  # all federated indices
for ind in cat.indices:
    print(ind)
GlobusESGFIndex('anl-dev')
SolrESGFIndex('esgf.ceda.ac.uk')
SolrESGFIndex('esgf-data.dkrz.de')
SolrESGFIndex('esgf-node.ipsl.upmc.fr')
SolrESGFIndex('esg-dn1.nsc.liu.se')
SolrESGFIndex('esgf-node.llnl.gov')
SolrESGFIndex('esgf.nci.org.au')
SolrESGFIndex('esgf-node.ornl.gov')

Populate the catalog

Many times, an analysis will require several variables across multiple experiments. For example, to compute the transient climate response to cumulative emissions (TCRE), you would need temperature (tas) and carbon fluxes from land (nbp) and ocean (fgco2) for the 1% CO2 increase experiment (1pctCO2) as well as the control experiment (piControl). If TCRE is not part of your particular science, that is fine; it is a motivating example, and the specifics are less important than the search concepts. First, we perform a search using a familiar syntax.

cat.search(
    experiment_id=["piControl", "1pctCO2"],
    variable_id=["tas", "fgco2", "nbp"],
    table_id=["Amon", "Omon", "Lmon"],
)
print(cat)
   Searching indices: 100%|███████████████████████████████|8/8 [    1.36s/index]
Summary information for 399 results:
mip_era                                                     [CMIP6]
activity_id                                                  [CMIP]
institution_id    [CNRM-CERFACS, IPSL, MOHC, MRI, MPI-M, NCAR, N...
source_id         [CNRM-ESM2-1, CNRM-CM6-1, IPSL-CM6A-LR, CNRM-C...
experiment_id                                  [piControl, 1pctCO2]
member_id         [r1i1p1f2, r2i1p1f2, r3i1p1f2, r1i1p1f1, r4i1p...
table_id                                         [Omon, Amon, Lmon]
variable_id                                       [fgco2, tas, nbp]
grid_label                                            [gn, gr, gr1]
dtype: object

Internally, this launches simultaneous searches that are combined locally to provide a global view of which datasets are available. While the Solr indices themselves can be searched in a distributed fashion, they will not report when an index fails to return a response. As index nodes go down from time to time, this can leave you with the false impression that you have found all the datasets of interest. By managing the searches locally, intake-esgf can report back to you that an index has failed and that your results may be incomplete.

If you would like details about what intake-esgf is doing, look in the local cache directory (${HOME}/.esgf/) for an esgf.log file. This is a full history of everything that intake-esgf has searched, downloaded, or accessed. You can also look at just this session by calling session_log(). In this case you will see how long each index took to return a response and whether any failed.

print(cat.session_log())
2023-12-07 09:37:01 search begin experiment_id=['piControl', '1pctCO2'], variable_id=['tas', 'fgco2', 'nbp'], table_id=['Amon', 'Omon', 'Lmon']
2023-12-07 09:37:02 └─SolrESGFIndex('esgf-node.ipsl.upmc.fr') response_time=1.42 total_time=1.89
2023-12-07 09:37:03 └─GlobusESGFIndex('anl-dev') results=329 response_time=2.19 total_time=2.19
2023-12-07 09:37:03 └─SolrESGFIndex('esg-dn1.nsc.liu.se') response_time=1.90 total_time=2.47
2023-12-07 09:37:06 └─SolrESGFIndex('esgf.ceda.ac.uk') response_time=2.40 total_time=5.31
2023-12-07 09:37:07 └─SolrESGFIndex('esgf.nci.org.au') response_time=3.26 total_time=6.56
2023-12-07 09:37:08 └─SolrESGFIndex('esgf-node.ornl.gov') response_time=1.51 total_time=6.92
2023-12-07 09:37:08 └─SolrESGFIndex('esgf-data.dkrz.de') response_time=2.80 total_time=7.63
2023-12-07 09:37:11 └─SolrESGFIndex('esgf-node.llnl.gov') response_time=1.49 total_time=10.89
2023-12-07 09:37:12 search end total_time=11.41
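If you want the full on-disk history mentioned above rather than just this session, a minimal sketch of reading the log file directly (assuming the default cache location) is:

from pathlib import Path

# Default cache location noted above; adjust if you have relocated the cache
log_file = Path.home() / ".esgf" / "esgf.log"
print(log_file.read_text()[-2000:])  # show roughly the last 2000 characters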

At this stage of the search you have a catalog full of possibly relevant datasets for your analysis, stored in a pandas dataframe. You are free to view and manipulate this dataframe to help narrow the results down; it is available to you as the df member of the ESGFCatalog (a filtering sketch follows the dataframe display below). Be careful to only remove rows, as any column could be used internally when downloading the data. Also note that we have removed the user-facing notion of where the data is hosted. The id column of this dataframe is a list of full dataset_ids, which includes the location information. When you are ready to download data, we will automatically choose the locations that are fastest for you.

cat.df
mip_era activity_id institution_id source_id experiment_id member_id table_id variable_id grid_label version id
0 CMIP6 CMIP CNRM-CERFACS CNRM-ESM2-1 piControl r1i1p1f2 Omon fgco2 gn v20181115 [CMIP6.CMIP.CNRM-CERFACS.CNRM-ESM2-1.piControl...
1 CMIP6 CMIP CNRM-CERFACS CNRM-CM6-1 piControl r1i1p1f2 Amon tas gr v20180814 [CMIP6.CMIP.CNRM-CERFACS.CNRM-CM6-1.piControl....
2 CMIP6 CMIP CNRM-CERFACS CNRM-ESM2-1 piControl r1i1p1f2 Amon tas gr v20181115 [CMIP6.CMIP.CNRM-CERFACS.CNRM-ESM2-1.piControl...
3 CMIP6 CMIP CNRM-CERFACS CNRM-ESM2-1 piControl r1i1p1f2 Lmon nbp gr v20181115 [CMIP6.CMIP.CNRM-CERFACS.CNRM-ESM2-1.piControl...
4 CMIP6 CMIP CNRM-CERFACS CNRM-CM6-1 1pctCO2 r1i1p1f2 Amon tas gr v20180626 [CMIP6.CMIP.CNRM-CERFACS.CNRM-CM6-1.1pctCO2.r1...
... ... ... ... ... ... ... ... ... ... ... ...
1304 CMIP6 CMIP NASA-GISS GISS-E2-1-G 1pctCO2 r102i1p1f1 Lmon nbp gn v20190815 [CMIP6.CMIP.NASA-GISS.GISS-E2-1-G.1pctCO2.r102...
1309 CMIP6 CMIP MRI MRI-ESM2-0 1pctCO2 r1i2p1f1 Amon tas gn v20191205 [CMIP6.CMIP.MRI.MRI-ESM2-0.1pctCO2.r1i2p1f1.Am...
2048 CMIP6 CMIP MIROC MIROC-ES2H piControl r1i1p4f2 Omon fgco2 gr1 v20230904 [CMIP6.CMIP.MIROC.MIROC-ES2H.piControl.r1i1p4f...
2050 CMIP6 CMIP E3SM-Project E3SM-2-0-NARRM 1pctCO2 r1i1p1f1 Amon tas gr v20230427 [CMIP6.CMIP.E3SM-Project.E3SM-2-0-NARRM.1pctCO...
2051 CMIP6 CMIP E3SM-Project E3SM-2-0-NARRM piControl r1i1p1f1 Amon tas gr v20230505 [CMIP6.CMIP.E3SM-Project.E3SM-2-0-NARRM.piCont...

399 rows × 11 columns
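As one hypothetical example of narrowing the results by hand, the sketch below keeps only rows on the native grid. It is illustrative only, is not applied in the remainder of this notebook, and assumes that assigning a filtered dataframe back to the df member is acceptable, which this notebook does not confirm:

# Keep only rows whose grid_label is the native grid ("gn"); removing rows is safe,
# but avoid dropping or renaming columns, which are used internally for downloads.
cat.df = cat.df[cat.df["grid_label"] == "gn"]
print(cat)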

Model Groups

However, intake-esgf also provides you with some tools to help locate relevant data for your analysis. When conducting these kinds of analyses, we are looking for unique combinations of a source_id, member_id, and grid_label that have all the variables we need. We call these model groups. In an ESGF search, it is common to find a model that has, for example, a tas for r1i1p1f1 but not a fgco2. Sorting this out is time-consuming and labor-intensive. So first, we provide a function that prints out all model groups.

cat.model_groups().to_frame()
variable_id
source_id member_id grid_label
ACCESS-CM2 r1i1p1f1 gn 2
ACCESS-ESM1-5 r1i1p1f1 gn 6
AWI-CM-1-1-MR r1i1p1f1 gn 2
AWI-ESM-1-1-LR r1i1p1f1 gn 2
BCC-CSM2-MR r1i1p1f1 gn 3
... ... ... ...
UKESM1-0-LL r1i1p1f2 gn 6
r2i1p1f2 gn 3
r3i1p1f2 gn 3
r4i1p1f2 gn 3
UKESM1-1-LL r1i1p1f2 gn 6

148 rows × 1 columns

The function model_groups() returns a pandas Series (converted to a dataframe here for printing) with all unique combinations of (source_id, member_id, grid_label) along with the dataset count for each. This helps illustrate why it can be so difficult to locate all the data relevant to a given analysis. At the time of this writing, there are 148 model groups, but relatively few of them have all 6 datasets (2 experiments × 3 variables) that we need. Furthermore, you cannot rely on a model group using r1i1p1f1 for its primary result. The results above show that UKESM does not use f1 at all, further complicating the process of finding results.
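Because the counts come back as an ordinary pandas Series, you can also tally the complete groups yourself; a minimal sketch (the variable names here are ours):

# Count how many model groups already contain all 6 datasets (2 experiments × 3 variables)
mgs = cat.model_groups()
complete = mgs[mgs == 6]
print(f"{len(complete)} of {len(mgs)} model groups are complete")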

In addition to this notion of model groups, intake-esgf provides a method remove_incomplete() for determining which model groups you wish to keep in the current search. Internally, we group the search results dataframe by model groups and apply a function of your design to each grouped portion of the dataframe. For example, for the current work, we could simply check that there are 6 datasets in the sub-dataframe.

def shall_i_keep_it(sub_df):
    # Keep only model groups whose sub-dataframe contains all 6 datasets
    # (2 experiments × 3 variables).
    return len(sub_df) == 6


cat.remove_incomplete(shall_i_keep_it)
cat.model_groups().to_frame()
variable_id
source_id member_id grid_label
ACCESS-ESM1-5 r1i1p1f1 gn 6
CanESM5 r1i1p1f1 gn 6
r1i1p2f1 gn 6
CanESM5-1 r1i1p1f1 gn 6
r1i1p2f1 gn 6
CanESM5-CanOE r1i1p2f1 gn 6
CESM2 r1i1p1f1 gn 6
CESM2-FV2 r1i1p1f1 gn 6
CESM2-WACCM r1i1p1f1 gn 6
CESM2-WACCM-FV2 r1i1p1f1 gn 6
CMCC-ESM2 r1i1p1f1 gn 6
GISS-E2-1-G r101i1p1f1 gn 6
r102i1p1f1 gn 6
INM-CM4-8 r1i1p1f1 gr1 6
INM-CM5-0 r1i1p1f1 gr1 6
MIROC-ES2L r1i1p1f2 gn 6
MPI-ESM-1-2-HAM r1i1p1f1 gn 6
MPI-ESM1-2-LR r1i1p1f1 gn 6
MRI-ESM2-0 r1i2p1f1 gn 6
NorCPM1 r1i1p1f1 gn 6
NorESM2-LM r1i1p1f1 gn 6
r1i1p4f1 gn 6
NorESM2-MM r1i1p1f1 gn 6
UKESM1-0-LL r1i1p1f2 gn 6
UKESM1-1-LL r1i1p1f2 gn 6

You could write a much more complex check; it depends on what is relevant to your analysis. The effect is that the list of possible models with consistent results is now much more manageable. This method has the added benefit of forcing the user to be concrete about which models were included in an analysis.
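As a sketch of what a more involved check might look like (the function name and the required-pair logic are ours, not part of intake-esgf), you could require that every experiment/variable pair appears in the sub-dataframe rather than simply counting rows:

def has_all_pairs(sub_df):
    # Require every (experiment_id, variable_id) pair we need, not just 6 rows of any kind.
    required = {
        ("piControl", "tas"), ("piControl", "nbp"), ("piControl", "fgco2"),
        ("1pctCO2", "tas"), ("1pctCO2", "nbp"), ("1pctCO2", "fgco2"),
    }
    found = set(zip(sub_df["experiment_id"], sub_df["variable_id"]))
    return required.issubset(found)


# cat.remove_incomplete(has_all_pairs)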

Removing Additional Variants

It may also be that you wish to include only a single member_id per model in your analysis. The search above shows that a few models have multiple variants with all 6 required datasets. To be fair to models that only have one, you may wish to keep only the smallest variant. We also provide this capability as part of the ESGFCatalog object.

cat.remove_ensembles()
cat.model_groups().to_frame()
variable_id
source_id member_id grid_label
ACCESS-ESM1-5 r1i1p1f1 gn 6
CanESM5 r1i1p1f1 gn 6
CanESM5-1 r1i1p1f1 gn 6
CanESM5-CanOE r1i1p2f1 gn 6
CESM2 r1i1p1f1 gn 6
CESM2-FV2 r1i1p1f1 gn 6
CESM2-WACCM r1i1p1f1 gn 6
CESM2-WACCM-FV2 r1i1p1f1 gn 6
CMCC-ESM2 r1i1p1f1 gn 6
GISS-E2-1-G r101i1p1f1 gn 6
INM-CM4-8 r1i1p1f1 gr1 6
INM-CM5-0 r1i1p1f1 gr1 6
MIROC-ES2L r1i1p1f2 gn 6
MPI-ESM-1-2-HAM r1i1p1f1 gn 6
MPI-ESM1-2-LR r1i1p1f1 gn 6
MRI-ESM2-0 r1i2p1f1 gn 6
NorCPM1 r1i1p1f1 gn 6
NorESM2-LM r1i1p1f1 gn 6
NorESM2-MM r1i1p1f1 gn 6
UKESM1-0-LL r1i1p1f2 gn 6
UKESM1-1-LL r1i1p1f2 gn 6

Summary

At this point, you would be ready to use to_dataset_dict() to download and load all of the datasets into a dictionary for analysis. The point of this notebook, however, is to expose the search capabilities. It is our goal to make annoying and time-consuming tasks easier by providing you with smart interfaces for common operations. Let us know what else is painful for you in locating relevant data for your science.
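For completeness, a minimal sketch of that final step (not run here); the structure of the returned dictionary keys is not specified in this notebook, so we simply print them:

# Download and load everything that remains in the catalog into a dictionary
dsd = cat.to_dataset_dict()
for key, ds in dsd.items():
    print(key, type(ds))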