Data Ingestion - Microsoft Planetary Computer


Overview

In this notebook, you will ingest Landsat data for use in machine learning. Machine learning tasks often involve a lot of data, and in Python, data is typically stored in memory as simple NumPy arrays. However, higher-level containers built on top of NumPy arrays provide more functionality for multidimensional gridded data (xarray) or out-of-core and distributed data (Dask). Our goal for data ingestion will be to load specific Landsat data of interest into one of these higher-level containers.
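
For a quick sense of these two container types, here is a minimal sketch using small arbitrary arrays (not Landsat data):

import numpy as np
import xarray as xr
import dask.array

# xarray adds labeled dimensions and coordinates on top of a NumPy array...
in_memory = xr.DataArray(np.zeros((3, 4)), dims=("y", "x"))

# ...and can instead wrap a chunked Dask array for lazy, out-of-core computation
lazy = xr.DataArray(dask.array.zeros((3, 4), chunks=(3, 2)), dims=("y", "x"))

print(type(in_memory.data), type(lazy.data))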

Microsoft Planetary Computer is one of several providers of Landsat data. We are using it together with pystac-client and odc-stac because, together, they provide a convenient Python API for searching and loading with specific criteria such as spatial area, datetime, Landsat mission, and cloud coverage.

Earth science datasets are often stored on remote servers and may be too large to download locally. Therefore, in this cookbook, we will focus primarily on ingestion approaches that load small portions of data from a remote source, as needed. However, the approach for your own work will depend not only on data size and location but also on the intended analysis, so in a follow-up notebook, you will see alternative approaches to loading data.

Prerequisites

Concepts                                | Importance | Notes
--------------------------------------- | ---------- | -----------------
Intro to Landsat                        | Necessary  | Background
About the Microsoft Planetary Computer  | Helpful    | Background
pystac-client Usage                     | Helpful    | Consult as needed
odc.stac.load Reference                 | Helpful    | Consult as needed
xarray                                  | Necessary  |
Intro to Dask Array                     | Helpful    |
Panel Getting Started Guide             | Helpful    |

  • Time to learn: 10 minutes

Imports

# Data
import odc.stac
import pandas as pd
import planetary_computer
import pystac_client
import xarray as xr
from pystac.extensions.eo import EOExtension as eo

# Viz
import hvplot.xarray
import panel as pn

pn.extension()

Open and read the root of the STAC catalog

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,
)
catalog.title
'Microsoft Planetary Computer STAC API'

Microsoft Planetary Computer has a public STAC metadata catalog, but the actual data assets are in private Azure Blob Storage containers and require authentication. pystac-client provides a modifier keyword that we can use to automatically sign each result as it is retrieved. Otherwise, we'd get an error when trying to access the assets.
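
If you ever work with a client that has no modifier attached, the planetary_computer package can also sign objects after the fact. A minimal sketch (here, item stands for any STAC Item returned from this catalog):

import planetary_computer

# Sign one item after retrieval instead of signing via the client modifier;
# the returned copy's asset hrefs carry a short-lived SAS token granting read access
signed_item = planetary_computer.sign(item)
print(signed_item.assets["rendered_preview"].href)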

Search for Landsat Data

Let’s say that an analysis we want to run requires Landsat data over a specific region and from a specific time period. We can use our catalog to search for assets that fit our search criteria.

First, let’s find the name of the Landsat collection. This page is a nice resource for browsing the available collections, but we can also just search the catalog for ‘landsat’:

all_collections = [i.id for i in catalog.get_collections()]
landsat_collections = [
    collection for collection in all_collections if "landsat" in collection
]
landsat_collections

We’ll use the landsat-c2-l2 dataset, which stands for Collection 2 Level-2. It contains data from several Landsat missions and is more fully processed (surface reflectance rather than top-of-atmosphere measurements) than Level-1 (landsat-c2-l1). Microsoft Planetary Computer has descriptions of Level-1 and Level-2, but a direct and succinct comparison can be found in this community post, and the information can be verified with USGS.

Now, let’s set our search parameters. You may already know the bounding box (region/area of interest) coordinates, but if you don’t, there are many useful tools like bboxfinder.com that can help.

bbox = [-118.89, 38.54, -118.57, 38.84]  # Region over a lake in Nevada, USA
datetime = "2017-06-01/2017-09-30"  # Summer months of 2017
collection = "landsat-c2-l2"

We can also specify other parameters in the query, such as a specific landsat mission and the max percent of cloud cover:

platform = "landsat-8"
cloudy_less_than = 1  # percent

Now we run the search and list the results:

search = catalog.search(
    collections=["landsat-c2-l2"],
    bbox=bbox,
    datetime=datetime,
    query={"eo:cloud_cover": {"lt": cloudy_less_than}, "platform": {"in": [platform]}},
)
items = search.item_collection()  # get_all_items() is deprecated
print(f"Returned {len(items)} Items:")
item_id = {(i, item.id): i for i, item in enumerate(items)}
item_id
Returned 3 Items:
{(0, 'LC08_L2SP_042033_20170718_02_T1'): 0,
 (1, 'LC08_L2SP_042033_20170702_02_T1'): 1,
 (2, 'LC08_L2SP_042033_20170616_02_T1'): 2}

It looks like there were three image stacks taken by Landsat 8 over this spatial region during the summer months of 2017 that have less than 1 percent cloud cover.
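
As a quick sanity check, we can print the acquisition date and cloud cover of each result (properties we queried on above):

# Confirm the date and cloud cover of each returned item
for item in items:
    print(item.id, item.datetime.date(), item.properties["eo:cloud_cover"])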

Preview Results and Select a Dataset

Before loading one of the available image stacks, it is useful to get a visual check of the results. Many datasets have a rendered preview or thumbnail image that can be accessed without having to load the full-resolution data.

We can create a simple interactive application using the Panel library to access and display rendered PNG previews of our search results. Note that these pre-rendered images are of large tiles that span well beyond our bounding box of interest. In the next steps, we will only load a small area around the lake.

item_sel = pn.widgets.Select(value=1, options=item_id, name="item")

def get_preview(i):
    return pn.panel(items[i].assets["rendered_preview"].href, height=300)


pn.Row(item_sel, pn.bind(get_preview, item_sel))

selected_item = items[1]
selected_item

Access the Data

Now that we have selected a dataset from our catalog, we can proceed to access the data. We want to be selective about which data we read and when, because the amount of downloaded data can quickly get out of hand. Therefore, let's select only a subset of images.

First, we'll preview the different image assets (or bands) available in the Landsat item.

assets = []
for _, asset in selected_item.assets.items():
    try:
        assets.append(asset.extra_fields["eo:bands"][0])
    except KeyError:
        pass  # skip assets without band metadata (e.g. thumbnails)

cols_ordered = [
    "common_name",
    "description",
    "name",
    "center_wavelength",
    "full_width_half_max",
]
bands = pd.DataFrame.from_dict(assets)[cols_ordered]
bands
  common_name          description      name  center_wavelength  full_width_half_max
0         red          Visible red    OLI_B4               0.65                 0.04
1        blue         Visible blue    OLI_B2               0.48                 0.06
2       green        Visible green    OLI_B3               0.56                 0.06
3       nir08        Near infrared    OLI_B5               0.87                 0.03
4      lwir11   Long-wave infrared  TIRS_B10              10.90                 0.59
5      swir16  Short-wave infrared    OLI_B6               1.61                 0.09
6      swir22  Short-wave infrared    OLI_B7               2.20                 0.19
7     coastal      Coastal/Aerosol    OLI_B1               0.44                 0.02

Then we will select a few bands (images) of interest:

bands_of_interest = ["red", "green", "blue"]

Finally, we lazily load the selected data. We will use the odc-stac package, which allows us to load only a specific region of interest (bounding box, or 'bbox') and specific bands (images) of interest. We will also use the chunks argument to load the data as Dask arrays; this loads only the metadata now and delays reading the pixel values until we actually use the data, or until we force loading with .compute().

ds = odc.stac.stac_load(
    [selected_item],
    bands=bands_of_interest,
    bbox=bbox,
    chunks={},  # <-- use Dask
).isel(time=0)
ds
<xarray.Dataset>
Dimensions:      (y: 1128, x: 950)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
Data variables:
    red          (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    green        (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
    blue         (y, x) uint16 dask.array<chunksize=(1128, 950), meta=np.ndarray>
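
At this point no pixels have actually been read; each variable is a placeholder Dask array. A small sketch to confirm this:

# Still lazy: the underlying data is a Dask array, and nbytes
# estimates how much data one band will occupy once loaded
print(type(ds.red.data))
print(f"{ds.red.data.nbytes / 1e6:.1f} MB per band when loaded")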

Let’s combine the bands of the dataset into a single DataArray that has the band names as coordinates of a new ‘band’ dimension, and also call .compute() to finally load the data.

da = ds.to_array(dim="band").compute()
da
<xarray.DataArray (band: 3, y: 1128, x: 950)>
array([[[14691, 14914, 14988, ..., 16283, 16292, 16316],
        [14655, 14859, 14969, ..., 16272, 16185, 16079],
        [14531, 14699, 14972, ..., 15318, 15526, 14734],
        ...,
        [13804, 13561, 13601, ..., 18311, 18202, 17625],
        [13857, 13828, 13858, ..., 19400, 18942, 18551],
        [13840, 13786, 13867, ..., 17873, 17917, 18453]],

       [[13233, 13402, 13565, ..., 14553, 14658, 14657],
        [13291, 13428, 13585, ..., 14590, 14478, 14550],
        [13122, 13287, 13601, ..., 13987, 14220, 13571],
        ...,
        [12720, 12552, 12468, ..., 16580, 16411, 15899],
        [12704, 12644, 12658, ..., 17351, 16853, 16505],
        [12647, 12620, 12698, ..., 15990, 16211, 16686]],

       [[11572, 11629, 11723, ..., 12857, 12918, 12946],
        [11588, 11655, 11721, ..., 12848, 12792, 12715],
        [11510, 11608, 11781, ..., 12371, 12453, 12053],
        ...,
        [11195, 11104, 11045, ..., 14182, 14031, 13716],
        [11125, 11061, 11106, ..., 14652, 14284, 14062],
        [11059, 11050, 11134, ..., 13756, 13865, 14209]]], dtype=uint16)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
  * band         (band) object 'red' 'green' 'blue'

Visualize the data

Often, data ingestion involves quickly visualizing your raw data to get a sense that things are proceeding as expected. As we have created an array with red, green, and blue bands, we can quickly display a natural-color image of the lake using the .plot.imshow() function of xarray. We'll use the robust=True argument because the data values fall outside the range of typical RGB images; it clips the color limits to the 2nd and 98th percentiles of the data.

da.plot.imshow(robust=True, size=3)
[Figure: natural-color (RGB) imshow rendering of the lake region]

Now, let’s use hvPlot to provide an interactive visualization of the individual bands in our array.

da.hvplot.image(x="x", y="y", cmap="viridis", aspect=1)

Let’s plot the bands as separate columns by specifying a dimension to expand with col='band'. We can also set rasterize=True to use Datashader (another HoloViz tool) to render large data into a 2D histogram, where every array cell counts the data points falling into that pixel, as set by the resolution of your screen. This is especially important for large, high-resolution images that would otherwise cause issues when rendered in a browser.

da.hvplot.image(
    x="x", y="y", col="band", cmap="viridis", xaxis=False, yaxis=False, colorbar=False, rasterize=True
)

Select the zoom tool and zoom in on one of the plots to see that all the images are automatically linked!

Retain Attributes

When working with many image arrays, it’s critical to retain the data properties as xarray attributes:

da.attrs = selected_item.properties
da
<xarray.DataArray (band: 3, y: 1128, x: 950)>
array([[[14691, 14914, 14988, ..., 16283, 16292, 16316],
        [14655, 14859, 14969, ..., 16272, 16185, 16079],
        [14531, 14699, 14972, ..., 15318, 15526, 14734],
        ...,
        [13804, 13561, 13601, ..., 18311, 18202, 17625],
        [13857, 13828, 13858, ..., 19400, 18942, 18551],
        [13840, 13786, 13867, ..., 17873, 17917, 18453]],

       [[13233, 13402, 13565, ..., 14553, 14658, 14657],
        [13291, 13428, 13585, ..., 14590, 14478, 14550],
        [13122, 13287, 13601, ..., 13987, 14220, 13571],
        ...,
        [12720, 12552, 12468, ..., 16580, 16411, 15899],
        [12704, 12644, 12658, ..., 17351, 16853, 16505],
        [12647, 12620, 12698, ..., 15990, 16211, 16686]],

       [[11572, 11629, 11723, ..., 12857, 12918, 12946],
        [11588, 11655, 11721, ..., 12848, 12792, 12715],
        [11510, 11608, 11781, ..., 12371, 12453, 12053],
        ...,
        [11195, 11104, 11045, ..., 14182, 14031, 13716],
        [11125, 11061, 11106, ..., 14652, 14284, 14062],
        [11059, 11050, 11134, ..., 13756, 13865, 14209]]], dtype=uint16)
Coordinates:
  * y            (y) float64 4.301e+06 4.301e+06 ... 4.267e+06 4.267e+06
  * x            (x) float64 3.353e+05 3.353e+05 ... 3.637e+05 3.638e+05
    spatial_ref  int32 32611
    time         datetime64[ns] 2017-07-02T18:33:06.200763
  * band         (band) object 'red' 'green' 'blue'
Attributes: (12/22)
    gsd:                          30
    created:                      2022-05-06T17:46:34.110946Z
    sci:doi:                      10.5066/P9OGBGM6
    datetime:                     2017-07-02T18:33:06.200763Z
    platform:                     landsat-8
    proj:epsg:                    32611
    ...                           ...
    view:sun_azimuth:             125.03739105
    landsat:correction:           L2SP
    view:sun_elevation:           65.85380157
    landsat:cloud_cover_land:     0.53
    landsat:collection_number:    02
    landsat:collection_category:  T1

Notice that you can now expand the Attributes: dropdown to see the properties of this data.
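
Since attributes are just dictionary entries on the array, downstream code can read them directly, for example:

# Metadata now travels with the array
print(da.attrs["platform"])   # 'landsat-8'
print(da.attrs["datetime"])   # acquisition timestamp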

Set the crs attribute

As the data is in meter-based projected coordinates from a reference point, we can plot in the more commonly used longitude and latitude coordinates with .hvplot(geo=True), provided our array has a valid coordinate reference system (CRS) attribute. This value is provided by Microsoft Planetary Computer as the proj:epsg property, so we just need to copy it to a new attribute, crs, where hvPlot can automatically find it, without us having to specify anything further in our plotting code.

Note, this CRS is referenced by an EPSG code that can be accessed from the metadata of our selected catalog search result. We can see more about this dataset’s specific code at EPSG.io/32611. You can also read more about EPSG codes in general in this Coordinate Reference Systems: EPSG codes online book chapter.

da.attrs["crs"] = f"epsg:{selected_item.properties['proj:epsg']}"
da.attrs["crs"]
'epsg:32611'
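
As a sanity check on what this code means, you could convert one corner of the array from UTM coordinates to longitude/latitude. This is a sketch that assumes pyproj is available in your environment:

from pyproj import Transformer

# EPSG:32611 is UTM zone 11N (meters); EPSG:4326 is lon/lat in degrees
transformer = Transformer.from_crs("epsg:32611", "epsg:4326", always_xy=True)
lon, lat = transformer.transform(float(da.x[0]), float(da.y[0]))
print(lon, lat)  # should land near our Nevada bounding box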

Now we can use .hvplot(geo=True) to plot in longitude and latitude coordinates. Informing hvPlot that this is geographic data also allows us to overlay data on aligned geographic tiles using the tiles parameter.

da.hvplot.image(
    x="x", y="y", cmap="viridis", geo=True, alpha=.9, tiles="ESRI", xlabel="Longitude", ylabel="Latitude", colorbar=False, aspect=1,
)

Summary

The data access approach should adapt to features of the data and your intended analysis. Because Landsat data is large and multidimensional, a good approach is to use Microsoft Planetary Computer, pystac-client, and odc-stac together for searching the metadata catalog and lazily loading specific data chunks. Once you have accessed data, visualize it with hvPlot to ensure that it matches your expectations.

What’s next?

Before we proceed to workflow examples, we will explore alternative ways of accessing data, as well as commonly used processing steps.

Resources and References