Data Ingestion - Geospatial-Specific Tooling

[Banner image: PySTAC]


Overview

In this notebook, you will ingest Landsat data for use in machine learning. Machine learning tasks often involve a lot of data, and in Python, data is typically stored in memory as simple NumPy arrays. However, higher-level containers built on top of NumPy arrays provide more functionality for multidimensional gridded data (xarray) or out-of-core and distributed data (Dask). Our goal for data ingestion will be to load specific Landsat data of interest into one of these higher-level containers.

Microsoft Planetary Computer is one of several providers of Landsat data. We are using it together with pystac-client and odc-stac because, together, they provide a convenient Python API for searching and loading data that matches specific criteria such as spatial area, datetime, Landsat mission, and cloud coverage.

Earth science datasets are often stored on remote servers and may be too large to download locally. Therefore, in this cookbook, we will focus primarily on ingestion approaches that load small portions of data from a remote source, as needed. However, the right approach for your own work will depend not only on data size and location but also on the intended analysis, so in a follow-up notebook, you will see an alternative approach for generalized data access and management.

Prerequisites

| Concepts | Importance | Notes |
| --- | --- | --- |
| Intro to Landsat | Necessary | Background |
| About the Microsoft Planetary Computer | Helpful | Background |
| pystac-client Usage | Helpful | Consult as needed |
| odc.stac.load Reference | Helpful | Consult as needed |
| xarray | Necessary | |
| Intro to Dask Array | Helpful | |
| Panel Getting Started Guide | Helpful | |

  • Time to learn: 10 minutes

Imports

import odc.stac
import pandas as pd
import planetary_computer
import pystac_client
import xarray as xr
from pystac.extensions.eo import EOExtension as eo

# Viz
import hvplot.xarray
import panel as pn

pn.extension()

Open and read the root of the STAC catalog

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,
)
catalog.title
'Microsoft Planetary Computer STAC API'

Microsoft Planetary Computer hosts a public STAC metadata catalog, but the actual data assets live in private Azure Blob Storage containers and require authentication. pystac-client provides a modifier keyword that we can use to automatically sign each item as it is retrieved. Otherwise, we’d get an error when trying to access the assets.
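
As an aside, if you prefer not to sign everything at the catalog level, you can sign a single item explicitly instead. The sketch below assumes the same Landsat collection used later in this notebook; planetary_computer.sign() returns a copy of the item whose asset URLs carry short-lived SAS tokens.

# A sketch of signing one item explicitly instead of using the catalog-level modifier
unsigned_catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1"
)
item = next(unsigned_catalog.search(collections=["landsat-c2-l2"], max_items=1).items())
signed_item = planetary_computer.sign(item)  # copy of the item with signed asset hrefs
first_asset = next(iter(signed_item.assets.values()))
print(first_asset.href)  # the href now includes a SAS token query string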

Search for Landsat Data

Let’s say that an analysis we want to run requires Landsat data over a specific region and from a specific time period. We can use our catalog to search for assets that fit our search criteria.

First, let’s find the name of the Landsat collection. This page is a nice resource for browsing the available collections, but we can also just search the catalog for ‘landsat’:

all_collections = [i.id for i in catalog.get_collections()]
landsat_collections = [
    collection for collection in all_collections if "landsat" in collection
]
landsat_collections
['landsat-c2-l2', 'landsat-c2-l1']

We’ll use the landsat-c2-l2 dataset, which stands for Collection 2 Level-2. It contains data from several Landsat missions and has better data quality than Level-1 (landsat-c2-l1). Microsoft Planetary Computer has descriptions of Level-1 and Level-2, but a direct and succinct comparison can be found in this community post, and the information can be verified with USGS.
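
One quick way to double-check what a collection contains is to read its own metadata from the catalog. A small sketch, using pystac-client’s get_collection call:

# Inspect the collection's metadata directly from the catalog
landsat_c2_l2 = catalog.get_collection("landsat-c2-l2")
print(landsat_c2_l2.title)
print(landsat_c2_l2.description[:300])  # first few hundred characters of the description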

Now, let’s set our search parameters. You may already know the bounding box (region/area of interest) coordinates, but if you don’t, there are many useful tools like bboxfinder.com that can help.

bbox = [-118.89, 38.54, -118.57, 38.84]  # Region over a lake in Nevada, USA
datetime = "2017-06-01/2017-09-30"  # Summer months of 2017
collection = "landsat-c2-l2"

We can also specify other parameters in the query, such as a specific landsat mission and the max percent of cloud cover:

platform = "landsat-8"
cloudy_less_than = 1  # percent

Now we run the search and list the results:

search = catalog.search(
    collections=["landsat-c2-l2"],
    bbox=bbox,
    datetime=datetime,
    query={"eo:cloud_cover": {"lt": cloudy_less_than}, "platform": {"in": [platform]}},
)
items = search.item_collection()
print(f"Returned {len(items)} Items:")
item_id = {(i, item.id): i for i, item in enumerate(items)}
item_id
Returned 3 Items:
{(0, 'LC08_L2SP_042033_20170718_02_T1'): 0,
 (1, 'LC08_L2SP_042033_20170702_02_T1'): 1,
 (2, 'LC08_L2SP_042033_20170616_02_T1'): 2}

It looks like there were three image stacks taken by Landsat 8 over this spatial region during the summer months of 2017, each with less than 1 percent cloud cover.
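
As a quick sanity check, we can print the acquisition date and cloud cover recorded for each matching item; both values come straight from the STAC metadata returned by the search:

# Acquisition date and cloud cover for each matching item
for item in items:
    print(item.id, item.datetime.date(), item.properties["eo:cloud_cover"])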

Preview Results and Select a Dataset

Before loading one of the available image stacks, it would be useful to get a visual check of the results. Many datasets have a rendered preview or thumbnail image that can be accessed without having to load the full resolution data.

We can create a simple interactive application using the Panel library to access and display rendered PNG previews of our search results. Note that these pre-rendered images are of large tiles that span beyond our bounding box of interest. In the next steps, we will only be loading a small area around the lake.

item_sel = pn.widgets.Select(value=1, options=item_id, name="item")

def get_preview(i):
    return pn.panel(items[i].assets["rendered_preview"].href, height=300)


pn.Row(item_sel, pn.bind(get_preview, item_sel))
selected_item = items[1]
selected_item

Access the Data

Now that we have selected a dataset from our catalog, we can proceed to access the data. We want to be very selective about which data we read, and when we read it, because the amount of downloaded data can quickly get out of hand. Therefore, let’s select only a subset of images.

First, we’ll preview the different image assets (or Bands) available in the Landsat item.

assets = []
for _, asset in selected_item.assets.items():
    try:
        # Keep only assets that describe a spectral band (i.e. have "eo:bands" metadata)
        assets.append(asset.extra_fields["eo:bands"][0])
    except KeyError:
        pass

cols_ordered = [
    "common_name",
    "description",
    "name",
    "center_wavelength",
    "full_width_half_max",
]
bands = pd.DataFrame.from_dict(assets)[cols_ordered]
bands
  common_name  description          name      center_wavelength  full_width_half_max
0 red          Visible red          OLI_B4    0.65               0.04
1 blue         Visible blue         OLI_B2    0.48               0.06
2 green        Visible green        OLI_B3    0.56               0.06
3 nir08        Near infrared        OLI_B5    0.87               0.03
4 lwir11       Long-wave infrared   TIRS_B10  10.90              0.59
5 swir16       Short-wave infrared  OLI_B6    1.61               0.09
6 swir22       Short-wave infrared  OLI_B7    2.20               0.19
7 coastal      Coastal/Aerosol      OLI_B1    0.44               0.02

Then we will select a few bands (images) of interest:

bands_of_interest = ["red", "green", "blue"]

Finally, we lazily load the selected data. We will use the odc-stac package, which allows us to load only a specific region of interest (bounding box, or ‘bbox’) and specific bands (images) of interest. We will also use the chunks argument to load the data as Dask arrays; this loads only the metadata now and delays reading the actual values until we use the data, or until we force the data to be loaded by calling .compute().

ds = odc.stac.stac_load(
    [selected_item],
    bands=bands_of_interest,
    bbox=bbox,
    chunks={},  # <-- use Dask
).isel(time=0)
ds
<xarray.Dataset>
Dimensions:      (y: 1128, x: 950)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
Data variables:
    red          (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    green        (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    blue         (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
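
Here the bounding box is small enough that each band fits comfortably in a single Dask chunk. For a larger area of interest you could also pass explicit chunk sizes so that each band is split into several smaller chunks that Dask can read and process in parallel; a sketch follows, where the 512-pixel chunk size is just an illustrative choice:

# Illustrative only: request roughly 512x512-pixel Dask chunks per band
ds_chunked = odc.stac.stac_load(
    [selected_item],
    bands=bands_of_interest,
    bbox=bbox,
    chunks={"x": 512, "y": 512},
).isel(time=0)
print(ds_chunked.red.chunks)  # chunk sizes along the y and x dimensions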

Let’s combine the bands of the dataset into a single DataArray that has the band names as coordinates of a new ‘band’ dimension, and also call .compute() to finally load the data.

da = ds.to_array(dim="band").compute()
da
<xarray.DataArray (band: 3, y: 1128, x: 950)>
array([[[14691, 14914, 14988, ..., 16283, 16292, 16316],
        [14655, 14859, 14969, ..., 16272, 16185, 16079],
        [14531, 14699, 14972, ..., 15318, 15526, 14734],
        ...,
        [13804, 13561, 13601, ..., 18311, 18202, 17625],
        [13857, 13828, 13858, ..., 19400, 18942, 18551],
        [13840, 13786, 13867, ..., 17873, 17917, 18453]],

       [[13233, 13402, 13565, ..., 14553, 14658, 14657],
        [13291, 13428, 13585, ..., 14590, 14478, 14550],
        [13122, 13287, 13601, ..., 13987, 14220, 13571],
        ...,
        [12720, 12552, 12468, ..., 16580, 16411, 15899],
        [12704, 12644, 12658, ..., 17351, 16853, 16505],
        [12647, 12620, 12698, ..., 15990, 16211, 16686]],

       [[11572, 11629, 11723, ..., 12857, 12918, 12946],
        [11588, 11655, 11721, ..., 12848, 12792, 12715],
        [11510, 11608, 11781, ..., 12371, 12453, 12053],
        ...,
        [11195, 11104, 11045, ..., 14182, 14031, 13716],
        [11125, 11061, 11106, ..., 14652, 14284, 14062],
        [11059, 11050, 11134, ..., 13756, 13865, 14209]]], dtype=uint16)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
  * band         (band) object 'red' 'green' 'blue'

Visualize the data

Often, data ingestion involves quickly visualizing your raw data to get a sense that things are proceeding as expected. As we have created an array with red, green, and blue bands, we can quickly display a natural-color image of the lake using the .plot.imshow() function of xarray. We’ll use the robust=True argument because the data values are outside the range of typical RGB images.

da.plot.imshow(robust=True, size=3)
<matplotlib.image.AxesImage at 0x7fe2a8745210>
[Output: natural-color (RGB) image of the lake]

Now, let’s use hvPlot to provide an interactive visualization of the individual bands in our array.

ds
<xarray.Dataset>
Dimensions:      (y: 1128, x: 950)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
Data variables:
    red          (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    green        (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    blue         (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
da.hvplot.image(x="x", y="y", cmap="viridis", aspect=1)

Let’s plot the bands as separate columns by specifying a dimension to expand with col='band'. We can also set rasterize=True to use Datashader (another HoloViz tool) to render large data into a 2D histogram, where every array cell counts the data points falling into that pixel, as set by the resolution of your screen. This is especially important for large, high-resolution images that would otherwise cause issues when rendered in a browser.

da.hvplot.image(
    x="x", y="y", col="band", cmap="viridis", xaxis=False, yaxis=False, colorbar=False, rasterize=True
)

Select the zoom tool and zoom in on one of the plots to see that all the images are automatically linked!

Retain Attributes

When working with many image arrays, it’s critical to retain the data properties as xarray attributes:

da.attrs = selected_item.properties
da
<xarray.DataArray (band: 3, y: 1128, x: 950)>
array([[[14691, 14914, 14988, ..., 16283, 16292, 16316],
        [14655, 14859, 14969, ..., 16272, 16185, 16079],
        [14531, 14699, 14972, ..., 15318, 15526, 14734],
        ...,
        [13804, 13561, 13601, ..., 18311, 18202, 17625],
        [13857, 13828, 13858, ..., 19400, 18942, 18551],
        [13840, 13786, 13867, ..., 17873, 17917, 18453]],

       [[13233, 13402, 13565, ..., 14553, 14658, 14657],
        [13291, 13428, 13585, ..., 14590, 14478, 14550],
        [13122, 13287, 13601, ..., 13987, 14220, 13571],
        ...,
        [12720, 12552, 12468, ..., 16580, 16411, 15899],
        [12704, 12644, 12658, ..., 17351, 16853, 16505],
        [12647, 12620, 12698, ..., 15990, 16211, 16686]],

       [[11572, 11629, 11723, ..., 12857, 12918, 12946],
        [11588, 11655, 11721, ..., 12848, 12792, 12715],
        [11510, 11608, 11781, ..., 12371, 12453, 12053],
        ...,
        [11195, 11104, 11045, ..., 14182, 14031, 13716],
        [11125, 11061, 11106, ..., 14652, 14284, 14062],
        [11059, 11050, 11134, ..., 13756, 13865, 14209]]], dtype=uint16)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
  * band         (band) object 'red' 'green' 'blue'
Attributes: (12/22)
    gsd:                          30
    created:                      2022-05-06T17:46:34.110946Z
    sci:doi:                      10.5066/P9OGBGM6
    datetime:                     2017-07-02T18:33:06.200763Z
    platform:                     landsat-8
    proj:epsg:                    32611
    ...                           ...
    view:sun_azimuth:             125.03739105
    landsat:correction:           L2SP
    view:sun_elevation:           65.85380157
    landsat:cloud_cover_land:     0.53
    landsat:collection_number:    02
    landsat:collection_category:  T1

Notice that you can now expand the Attributes: dropdown to see the properties of this data.
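
The same metadata is also available programmatically from da.attrs, which is handy for labeling plots or filtering scenes later on; for example:

# A few of the retained properties, read back from the attributes
print(da.attrs["platform"], da.attrs["datetime"], da.attrs["proj:epsg"])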

Set the crs attribute

As the data is in ‘meter’ units from a reference point, we can plot in commonly used longitude and latitude coordinates with .hvplot(geo=True) if our array has a valid coordinate reference system (CRS) attribute. This value is provided by Microsoft Planetary Computer as the proj:epsg property, so we just need to copy it to a new crs attribute so that hvPlot can automatically find it, without us having to further specify anything in our plotting code.

Note, this CRS is referenced by an EPSG code that can be accessed from the metadata of our selected catalog search result. We can see more about this dataset’s specific code at EPSG.io/32611. You can also read more about EPSG codes in general in this Coordinate Reference Systems: EPSG codes online book chapter.

da.attrs["crs"] = f"epsg:{selected_item.properties['proj:epsg']}"
da.attrs["crs"]
'epsg:32611'

Now we can use .hvplot(geo=True) to plot in longitude and latitude coordinates. Informing hvPlot that this is geographic data also allows us to overlay data on aligned geographic tiles using the tiles parameter.

da.hvplot.image(
    x="x", y="y", cmap="viridis", geo=True, alpha=.9, tiles="ESRI", xlabel="Longitude", ylabel="Latitude", colorbar=False, aspect=1,
)

Summary

The data access approach should adapt to the features of the data and your intended analysis. As Landsat data is large and multidimensional, a good approach is to use Microsoft Planetary Computer, pystac-client, and odc-stac together for searching the metadata catalog and lazily loading specific data chunks. Once you have accessed the data, visualize it with hvPlot to ensure that it matches your expectations.

What’s next?

Before we proceed to workflow examples, we can explore an alternate way of accessing data using generalized tooling.

Resources and References

  • Authored by Demetris Roumis, circa January 2023

  • Guidance for parts of this notebook was provided by Microsoft in ‘Reading Data from the STAC API’

  • The image used in the banner is from an announcement about PySTAC from Azavea