 

Section 5 - Level-1 Products

5.1 Overview

The geometric algorithms used by the Level-1 processing systems at the USGS EROS were originally developed for the Landsat 7 Image Assessment System (IAS). The overall purpose of the IAS geometric algorithms is to use Earth ellipsoid and terrain surface information in conjunction with spacecraft ephemeris and attitude data, and knowledge of the Enhanced Thematic Mapper Plus (ETM+) instrument and its satellite geometry to relate locations in ETM+ image space (band, scan, detector, sample) to geodetic object space (latitude, longitude, and elevation).

These algorithms are used for purposes of creating accurate Level-1 output products, characterizing the ETM+ absolute and relative geometric accuracy, and deriving improved estimates of geometric calibration parameters such as the sensor to spacecraft alignment.

Standard processing is included with every product ordered through EarthExplorer, GloVis, and LandsatLook Viewer. As of October 2008, all Landsat data are available for download at no charge.

5.2 Level-1 Algorithms

The Level-1 processing algorithms used at EROS include:

  • Payload Correction Data (PCD) processing
  • Mirror Scan Correction Data (MSCD) processing
  • ETM+/Landsat 7 sensor/platform geometric model creation
  • Sensor line-of-sight generation and projection
  • Output space/input space correction grid generation
  • Image resampling
  • Geometric model precision correction using ground control
  • Terrain correction

5.3 Ancillary Data

The Landsat 7 ETM+ geometric correction algorithms are applied to the wideband data (image plus supporting PCD and MSCD) contained in the raw (L0R) or radiometrically-corrected (Level-1R) products. Some of these algorithms also require additional ancillary input data sets and include the following:

  1. Precise ephemeris from the Flight Dynamics Facility (FDF) - used to minimize ephemeris error when performing sensor to spacecraft alignment calibration
  2. Ground control/reference images for geometric test sites - used in precision correction, geodetic accuracy assessment, and geometric calibration algorithms
  3. Digital elevation data for geometric test sites - used in terrain correction and geometric calibration
  4. Pre-launch ground calibration results including band/detector placement and timing, scan mirror profiles, and attitude sensor data transfer functions (gyro and Attitude Displacement Sensors (ADS)), to be used in the generation of the initial Calibration Parameter File (CPF)
  5. Earth parameters including: static Earth model parameters (e.g., ellipsoid axes, gravity constants) and dynamic Earth model parameters (e.g., polar wander offsets, UT1-UTC time corrections) - used in systematic model creation and incorporated into the CPF

5.4 Image Assessment System (IAS)

The IAS (see Section 3.5) is responsible for the off-line assessment of image quality to ensure compliance with the radiometric and geometric requirements of the spacecraft and the ETM+ sensor throughout the life of the Landsat 7 mission. In this role, the IAS orders up to 10 Level-0R scenes daily from the Landsat Archive Manager (LAM) and processes them to Level-1s, performing additional quality checks and trending that are not part of the standard Level-1 product generation. In addition to its assessment functions, the IAS is responsible for the radiometric and geometric calibration of the ETM+ instrument. The IAS maintains a database of calibration and quality information, and trends many performance parameters and measurements. On a quarterly basis, the IAS updates the CPF which contains the radiometric and geometric correction parameters required during Level-1 processing to create products of uniform consistency. This file is stamped with applicability dates and sent to the Landsat Product Generation System (LPGS) for use in Level-1 processing. The CPF is also made available to the International Ground Stations (IGS) for their use in processing data received at their sites. Operational IAS activities occur at the EROS Center while less frequent assessments and calibration certification are the responsibility of the Landsat 7 Project Science Office (LPSO) at NASA’s Goddard Space Flight Center (GSFC).

Detailed algorithm descriptions for the high-level processing flows of the IAS Level-1 processing algorithms can be found in the Landsat 7 IAS Geometric Algorithm Theoretical Basis Document. This document includes:

  • the supporting theoretical concepts and mathematics of the IAS geometric algorithms
  • a review of the Landsat 7 ETM+ viewing geometry
  • a discussion of the coordinate and time systems used by the algorithms and the relationships between them
  • the mathematical development of, and solution procedures for, the Level-1 processing, geometric calibration, and geometric characterization algorithms
  • an examination of the estimates of uncertainty (error analysis) associated with each of the algorithms

5.5 Landsat Product Generation System (LPGS)

The LPGS performs radiometric and geometric processing on data to generate a standard Landsat 7 Level-1 product with the following parameters:

  • Standard terrain correction (L1TP—precision and terrain correction) if possible; otherwise, best available (L1GS - systematic or L1GT - systematic terrain)
  • Geographic Tagged Information File Format (GeoTIFF) output format
  • Cubic Convolution (CC) resampling method (see Figure 5-1)
  • 15-meter (Band 8), 30-meter (Bands 1-5, and Band 7), and 60-meter (Band 6) pixel sizes
  • Universal Transverse Mercator (UTM) map projection (Polar Stereographic (PS) projection for scenes with a center latitude greater than or equal to 63.0 degrees)
  • World Geodetic System 1984 (WGS84) datum
  • North-up (NUP) image orientation

All newly acquired Landsat data are processed to a Level-1 product immediately after receipt and posted to online disk storage, from which they can be downloaded, at no charge, via the EarthExplorer, GloVis, or LandsatLook Viewer web interfaces.

Figure 5-1. Illustration of Landsat 7 Cubic Convolution (CC) Resampling Versus Other Methods

5.6 Level-1 Products

All Landsat scenes from USGS EROS are processed to Level-1 products. The Level-1 product available to users is a high-quality, radiometrically, and geometrically corrected image. Each product is created using the best available processing level for each scene. The processing level used is determined by the existence of ground control points (GCP), elevation data provided by a Digital Elevation Model (DEM), and/or data collected by the spacecraft and sensor PCD. The LPGS uses one of three processing levels to create the Level-1 products:

  • L1TP: Precision and Terrain Correction provides radiometric and geometric accuracy by incorporating GCPs while employing a DEM for topographic displacement. Geodetic accuracy of the product depends on the image quality and the accuracy, number, and distribution of the GCPs.
  • L1GT: Systematic Terrain Correction provides systematic, radiometric, and geometric accuracy, while employing a DEM to correct for relief displacement.
  • L1GS: Systematic Correction provides systematic radiometric and geometric corrections, which are derived from PCD.

The Level-1 image is delivered in Digital Numbers (DNs). These units can easily be rescaled to spectral radiance or Top-of-the-Atmosphere (TOA) reflectance.

5.6.1 Product Components

A complete Landsat 7 Level-1 product consists of 21 files, broken down as follows:

  • 9 individual band images - (_B#.TIF): Seven spectral bands and two thermal bands are included with the Level-1 product. Spectral bands 1-5 and 7 have a spatial resolution of 30 meters while spectral band 8 (panchromatic, grey-scale image) has a 15 meter spatial resolution. The thermal bands are collected in both low and high gain (Bands 61 and 62, respectively) for increased radiometric sensitivity and dynamic range. The band images are delivered as 8-bit images with 256 grey levels.
  • 1 metadata file (_MTL.txt): Level-1 metadata are stored in a .txt file. Metadata files contain beneficial information for the systematic searching and archiving practices of data while also providing the essential characteristics of the Level-1 product.
  • 1 GCP file (GCP.txt): GCPs are defined as points on the surface of the earth of known location used to geo-reference Landsat Level-1 imagery. This file contains the GCPs used during image processing.
  • 1 README file (README.GTF): The README file contains a summary and brief description of the Level-1 product contents and naming conventions for the files included.
  • 9 gap mask files in individual directory (SLC-off data only) (GM_B#.TIF): Landsat 7 ETM+ scenes acquired after May 2003 include gap mask files for each band. These files identify the location of all pixels affected by the original data gaps caused by the SLC failure in May 2003.

5.6.2 Product Naming Convention

The Landsat scene identifier provides valuable information about each scene. A Landsat scene ID details which satellite and sensor acquired the image, the WRS path/row location, date of acquisition, which facility obtained the transmitted data, and the archive version number. Figure 5-2 shows the scene ID naming convention along with examples of Landsat scene IDs.

Figure 5-2. Landsat Scene ID Naming Convention
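As a sketch of how a pre-Collection Landsat scene ID decomposes into the fields described above, the parser below assumes the 21-character layout LXSPPPRRRYYYYDDDGSIVV (satellite/sensor, WRS path and row, acquisition year and day of year, ground station, version); the example ID is hypothetical. Verify the exact field positions against Figure 5-2 before relying on them.

```python
# Sketch: splitting a pre-Collection Landsat scene ID into its fields.
# The field layout (LXSPPPRRRYYYYDDDGSIVV) is an assumption based on the
# convention Figure 5-2 documents; the example ID below is hypothetical.

def parse_scene_id(scene_id: str) -> dict:
    """Split a 21-character pre-Collection Landsat scene ID into its fields."""
    assert len(scene_id) == 21, "expected a 21-character scene ID"
    return {
        "sensor": scene_id[1],           # E = ETM+
        "satellite": scene_id[2],        # 7 = Landsat 7
        "wrs_path": int(scene_id[3:6]),
        "wrs_row": int(scene_id[6:9]),
        "year": int(scene_id[9:13]),
        "day_of_year": int(scene_id[13:16]),
        "ground_station": scene_id[16:19],
        "version": scene_id[19:21],
    }

fields = parse_scene_id("LE71810402005212ASN00")  # hypothetical example ID
```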

5.6.3 Product Size

All available Landsat 7 Level-1 products are spatially defined by Worldwide Reference System-2 (WRS-2) boundaries. Standard WRS-2 defined scenes can be downloaded from USGS EROS. Landsat 4-5 TM, Landsat 7 ETM+, and Landsat 8 OLI/TIRS follow the WRS-2. Landsat 1, 2, and 3 followed WRS-1. Figure 5-3 shows the size of an example WRS-2 path/row for Landsat 7.

 

Figure 5-3. Example Landsat 7 ETM+ WRS-2 Path/Row

5.6.4 Product Format

The Landsat 7 data product is packaged as Geographic Tagged Image File Format (GeoTIFF). GeoTIFF is based on Adobe's TIFF - a self-describing format developed to exchange raster images such as clipart, logotypes, and scanned images between applications and computer platforms. Today, the TIFF is the only full-featured format in the public domain capable of supporting compression, tiling, and extension to include geographic metadata.

The TIFF file consists of a number of labels (tags), which describe certain properties of the file (such as gray levels, color table, byte format, and compression size). After the initial tags come the image data, which may be interrupted by more descriptive tags. GeoTIFF refers to TIFF files, which have geographic (or cartographic) data embedded as tags within the TIFF file. The geographic data can then be used to position the image in the correct location and geometry on the screen of a geographic information display.

Each individual Landsat 7 band is delivered as its own 8-bit greyscale GeoTIFF image. A standard WRS-2 scene possessing the full band complement is comprised of nine separate GeoTIFF images or files. For detailed information regarding the Landsat 7 GeoTIFF implementation please refer to LSDS-272 Landsat 7 ETM+ Level-1 Product DFCB. For details on GeoTIFF format, please download the GeoTIFF Format Specification or visit this GeoTIFF description webpage.

5.6.5 Conversion to Radiance

During Level-1 product rendering, image pixels are converted from DN to units of absolute radiance using 32-bit floating-point calculations. Pixel values are then scaled to byte values prior to media output. The following equation is used to convert DNs in a Level-1 product back to radiance units:

Lλ=Grescale ⋅QCAL+Brescale

which is also expressed as:

Lλ = ((LMAXλ - LMINλ) / (QCALMAX - QCALMIN)) × (QCAL - QCALMIN) + LMINλ

where:

Lλ = Spectral radiance at the sensor’s aperture in (Watts/(m² * sr * µm))
Grescale = Rescaled gain (the data product "gain" contained in the Level-1 product header or ancillary data record) in (Watts/(m² * sr * µm))/DN
Brescale = Rescaled bias (the data product "offset" contained in the Level-1 product header or ancillary data record) in (Watts/(m² * sr * µm))
QCAL = Quantized calibrated pixel value in DN
LMINλ = Spectral radiance scaled to QCALMIN in (Watts/(m² * sr * µm))
LMAXλ = Spectral radiance scaled to QCALMAX in (Watts/(m² * sr * µm))
QCALMIN = Minimum quantized calibrated pixel value (corresponding to LMINλ) in DN (1 for LPGS products; 1 for NLAPS** products processed after 4/4/2004; 0 for NLAPS** products processed before 4/5/2004)
QCALMAX = Maximum quantized calibrated pixel value (corresponding to LMAXλ) in DN = 255
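A minimal sketch of the conversion above in the LMIN/LMAX form, where the rescaled gain is Grescale = (LMAXλ − LMINλ) / (QCALMAX − QCALMIN). The example values are the post-July 2000 Band 1 high-gain radiances from Table 5-1; for real products, take the gain and bias (or LMIN/LMAX) from the product metadata.

```python
# Sketch of the DN-to-radiance conversion, using the LMIN/LMAX form above.
# Example values are Band 1, high gain, processed after July 1, 2000 (Table 5-1).

def dn_to_radiance(qcal, lmin, lmax, qcalmin=1, qcalmax=255):
    """L = (LMAX - LMIN) / (QCALMAX - QCALMIN) * (QCAL - QCALMIN) + LMIN."""
    grescale = (lmax - lmin) / (qcalmax - qcalmin)  # data product "gain"
    return grescale * (qcal - qcalmin) + lmin

radiance = dn_to_radiance(qcal=128, lmin=6.2, lmax=191.6)
```

Note that QCALMIN is 1 for LPGS products, so DN 1 maps exactly to LMIN and DN 255 to LMAX.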

**Some Landsat 4-5 TM data were processed through the National Land Archive Production System (NLAPS). Products generated by the LPGS and NLAPS systems are similar, but not identical. The LPGS-NLAPS comparison webpage describes the product differences. Landsat 7 data were never processed using NLAPS.

The LMINs and LMAXs are the spectral radiances for each band at DN 0 or 1 and 255 (i.e., QCALMIN and QCALMAX), respectively; LPGS uses 1 for QCALMIN. One LMIN/LMAX set exists for each gain state. These values change slowly over time as the ETM+ detectors lose responsivity. Table 5-1 lists two sets of LMINs and LMAXs. The first set should be used for LPGS Level-1 products created before July 1, 2000, and the second set for Level-1 products created after July 1, 2000. Please note the distinction between acquisition and processing dates. Use of the appropriate LMINs and LMAXs ensures accurate conversion to radiance units.

Note for Band 6: A bias was found in the pre-launch calibration by two Landsat Science Team investigator groups post-launch. For data processed before December 20, 2000, the image radiances given by the above transform were 0.31 (Watts/(m² * sr * µm)) too high. See the official announcement for more details. This was corrected in the LPGS processing system. Note for MSS, TM, and Advanced Land Imager (ALI) sensors: the required radiometry constants are tabulated in this paper.

             Processed Before July 1, 2000        Processed After July 1, 2000
             Low Gain          High Gain          Low Gain          High Gain
Band Number  LMIN    LMAX     LMIN    LMAX       LMIN    LMAX     LMIN    LMAX
1 6.2 297.5 6.2 194.3 6.2 293.7 6.2 191.6
2 6.0 303.4 6.0 202.4 6.4 300.9 6.4 169.5
3 4.5 235.5 4.5 158.6 5.0 234.4 5.0 152.9
4 4.5 235.0 4.5 157.5 5.1 241.1 5.1 157.4
5 1.0 47.70 1.0 31.76 1.0 47.57 1.0 31.06
6 0.0 17.04 3.2 12.65 0.0 17.04 3.2 12.65
7 0.35 16.60 0.35 10.932 0.35 16.54 0.35 10.80
8 5.0 244.00 5.0 158.40 4.7 243.1 4.7 158.3

Table 5-1. ETM+ Spectral Radiance Range (Watts/(m² * sr * µm))

5.6.6 Radiance to Reflectance

For relatively clear Landsat scenes, a reduction in between-scene variability can be achieved through a normalization for solar irradiance by converting spectral radiance, as calculated above, to planetary reflectance or albedo. This combined surface and atmospheric reflectance of the Earth is computed with the following formula:

ρp = (π × Lλ × d²) / (ESUNλ × cos θs)

Where:
ρp = Unitless planetary reflectance
π = Mathematical constant approximately equal to 3.14159
Lλ = Spectral radiance at the sensor's aperture
d = Earth-Sun distance in astronomical units interpolated from values listed in Table 5-2
ESUNλ = Mean solar exo-atmospheric irradiances from Table 5-3
θs = Solar zenith angle in degrees
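A minimal sketch of this reflectance conversion under the definitions above; the numeric inputs (a radiance of 100, the Band 4 ESUN of 1044 from Table 5-3, the day-182 distance from Table 5-2, and a 30-degree solar zenith angle) are illustrative only.

```python
import math

# Sketch of the TOA reflectance formula: rho_p = (pi * L * d^2) / (ESUN * cos(theta_s)).
# Inputs: radiance from the DN conversion, Earth-Sun distance d (Table 5-2),
# ESUN (Table 5-3), and the solar zenith angle (90 minus the sun elevation).

def radiance_to_toa_reflectance(l_lambda, d, esun, sun_zenith_deg):
    """Convert spectral radiance to unitless TOA planetary reflectance."""
    return (math.pi * l_lambda * d * d) / (esun * math.cos(math.radians(sun_zenith_deg)))

# Illustrative Band 4 example: ESUN = 1044, d near day 182, 30-degree zenith.
rho = radiance_to_toa_reflectance(l_lambda=100.0, d=1.01667, esun=1044.0,
                                  sun_zenith_deg=30.0)
```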

 

Day of Year  Distance   Day of Year  Distance   Day of Year  Distance   Day of Year  Distance   Day of Year  Distance
1 .98331 74 .99446 152 1.01403 227 1.01281 305 .99253
15 .98365 91 .99926 166 1.01577 242 1.00969 319 .98916
32 .98509 106 1.00353 182 1.01667 258 1.00566 335 .98608
46 .98774 121 1.00756 196 1.01646 274 1.00119 349 .98426
60 .99084 135 1.01087 213 1.01497 288 .99718 365 .98331

Table 5-2. Earth-Sun Distance in Astronomical Units
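Since Table 5-2 tabulates the distance only at 25 days of the year, intermediate days require interpolation, as noted in the variable definitions above. A minimal linear-interpolation sketch over the tabulated values:

```python
# Sketch: linear interpolation of Earth-Sun distance (AU) from Table 5-2
# for an arbitrary day of year, using only the standard library.
import bisect

DOY = [1, 15, 32, 46, 60, 74, 91, 106, 121, 135, 152, 166, 182,
       196, 213, 227, 242, 258, 274, 288, 305, 319, 335, 349, 365]
DIST = [0.98331, 0.98365, 0.98509, 0.98774, 0.99084, 0.99446, 0.99926,
        1.00353, 1.00756, 1.01087, 1.01403, 1.01577, 1.01667, 1.01646,
        1.01497, 1.01281, 1.00969, 1.00566, 1.00119, 0.99718, 0.99253,
        0.98916, 0.98608, 0.98426, 0.98331]

def earth_sun_distance(day_of_year: int) -> float:
    """Linearly interpolate d (AU) between the tabulated days of Table 5-2."""
    i = bisect.bisect_left(DOY, day_of_year)
    if DOY[i] == day_of_year:
        return DIST[i]
    x0, x1 = DOY[i - 1], DOY[i]
    y0, y1 = DIST[i - 1], DIST[i]
    return y0 + (y1 - y0) * (day_of_year - x0) / (x1 - x0)
```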

 

Band    ESUNλ (Watts/(m² * µm))
1 1970
2 1842
3 1547
4 1044
5 225.7
7 82.06
8 1369

Table 5-3. ETM+ Solar Spectral Irradiances (generated using ChKur** solar spectrum)

**ChKur is the combined Chance-Kurucz solar spectrum within MODTRAN 5 (Berk et al., 2011). This spectrum has been used for the validation of the Landsat 7 ETM+ reflective band calibration and is therefore recommended for use with Landsat 7. The Thuillier spectrum (2003), previously used to calculate the ESUN values presented in this document and elsewhere, has been recommended by the Committee on Earth Observation Satellites (CEOS) as a standard where possible. However, the Thuillier data have not been used for these validations, and the integrated ESUN values from the two spectra differ by up to 3.5 percent, with the largest difference in ETM+ Band 7.

5.6.7 Band 6 Conversion to Temperature

ETM+ Band 6 imagery can also be converted from spectral radiance to a more physically useful variable. This is the effective at-satellite temperatures of the viewed Earth-atmosphere system under an assumption of unity emissivity and using pre-launch calibration constants listed in Table 5-4. The conversion formula is:

T = K2 / ln((K1 / Lλ) + 1)

 

Where:

T = Effective at-satellite temperature in Kelvin
K2 = Calibration constant 2 in Kelvin
K1 = Calibration constant 1 in (Watts/(m² * sr * µm))
Lλ = Spectral radiance in (Watts/(m² * sr * µm))

 

Sensor           Constant 1 - K1 (Watts/(m² * sr * µm))   Constant 2 - K2 (Kelvin)
Landsat 7 ETM+ 666.09 1282.71
Landsat 5 TM 607.76 1260.56
Landsat 4 TM 671.62 1284.30

Table 5-4. ETM+ and TM Thermal Band Calibration Constants
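The conversion above can be sketched directly from the formula T = K2 / ln((K1 / Lλ) + 1), with the Landsat 7 ETM+ constants from Table 5-4 as defaults; the input radiance in the example is illustrative.

```python
import math

# Sketch of the Band 6 radiance-to-temperature conversion, using the
# Landsat 7 ETM+ constants from Table 5-4 (K1 = 666.09, K2 = 1282.71).

def band6_temperature(l_lambda, k1=666.09, k2=1282.71):
    """T = K2 / ln((K1 / L) + 1), effective at-satellite temperature in Kelvin."""
    return k2 / math.log(k1 / l_lambda + 1.0)

t = band6_temperature(10.0)  # an illustrative mid-range Band 6 radiance
```

For Landsat 4 or 5 TM data, substitute the corresponding constants from Table 5-4.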

5.6.8 Radiometric Scaling Parameters for Landsat 7 ETM+ Level-1 Products

The LMINs and LMAXs are a representation of how the output Landsat ETM+ Level-1 data products are scaled in radiance units. The LMIN corresponds to the radiance at the minimum quantized and calibrated data DN (QCALMIN), which is typically "1" or "0" and LMAX corresponds to the radiance at the maximum quantized and calibrated data DN (QCALMAX), typically "255".

5.6.8.1 Reflective Bands

The LMINs are set so that a "zero radiance" scene is still on scale in the 8-bit output product, even with sensor noise included. LMIN should result in "zero" radiance being about 5 DN in low gain and 7.5 DN in high gain. The LMAXs are set so that LMAX corresponds to slightly less than the saturation radiance of the most sensitive detector. This is done so that in the output product all pixels saturate at the same radiance. Currently, the LMAX is set to 0.99 of the pre-launch saturation radiance of the most sensitive detector in each band.

Normally, there is no need to change the LMINs or LMAXs, unless something changes drastically on the instrument. If the sensitivity of the instrument increases, which is not expected, there is no need to change the LMIN and LMAX values. If the sensitivity decreases, the LMAX values can be increased which in turn increases the usable dynamic range of the product (this does not occur unless the change is large). The changes that have taken place to date have been mostly due to the adoption of "improved" pre-launch gains for the instrument that have, in effect, "increased" its sensitivity. The LPSO also detected a few errors in the original numbers, which were corrected.

5.6.9 Definitive Ephemeris

An ephemeris is a set of data that provides the assigned places of a celestial body (including a manmade satellite) for regular intervals. In the context of Landsat 7, ephemeris data shows the position and velocity of the spacecraft at the time imagery is collected. The position and velocity information are used during product generation.

If available, the Landsat 7 definitive ephemeris is used for geometrically correcting ETM+ data. Definitive ephemeris substantially improves the positional accuracy of the Level-1 product over predicted ephemeris.

The Landsat 7 Mission Operations Center (MOC) receives tracking data on a daily basis that shows the position and velocity of the Landsat 7 spacecraft. This information comes from the three U.S. operated ground-receiving stations and is augmented by similar data from NASA's Tracking and Data Relay Satellites (TDRS). The Flight Operations Team (FOT) processes this information to produce a refined or "definitive" ephemeris that shows the position and velocity of Landsat 7 in one-minute intervals. Tracking data are used to compute the actual spacecraft position and velocity for the last 61 hours and to predict these values for the next 72 hours. The predicted ephemeris data are uploaded to the spacecraft daily. On-board software interpolates from this data to generate the positional information contained in the PCD.

Engineers with the Landsat Project have completed a predicted-versus-definitive ephemeris analysis. Comparisons to GCPs demonstrate that the definitive ephemeris is reliably more accurate than the predicted ephemeris. Geometric accuracy on the order of 30-50 meters (1 sigma), excluding terrain effects, can be achieved when the definitive ephemeris is used to process the data. Level-1 products produced after March 29, 2000 use definitive ephemeris when available. The metadata (MTL.txt) field "ephemeris type" identifies whether a product was created with definitive or predicted ephemeris. Daily definitive ephemeris profiles have been archived since June 29, 1999 and are available for download from the Landsat Missions Website.

5.6.10 Automated Cloud Cover Assessment (ACCA)

5.6.10.1 ACCA Overview

As part of standard LPGS processing, a cloud cover assessment is provided as part of the EarthExplorer, GloVis, and LandsatLook Viewer scene information. The Landsat 7 ACCA algorithm recognizes clouds by passing through the scene data twice (Irish, 2000). This two-pass approach differs from the single-pass algorithm employed for Landsat 5. The algorithm is based on the premise that clouds are colder than Earth surface features. While true in most cases, temperature inversions do occur, and unexpected cloud cover calculations occur in these situations; this is not unusual in the Polar Regions, for example.

The first pass through the data is designed to trap clouds and only clouds. Twenty-six different filters are deployed for this purpose. Omission errors are expected. The pass one goal is to develop a reliable cloud signature for use in pass two where the remaining clouds are identified. Commission errors, however, create algorithm havoc and must be minimized. Three class categories result from pass one processing - clouds, non-clouds, and an ambiguous group that is revisited in pass two.

After pass one processing, descriptive statistics are calculated from the cloud category using Band 6. These include mean temperature, standard deviation, and distribution skew. New Band 6 thresholds are developed from these statistics for use during pass two. Only the thermal band is examined during this pass to capture the remaining clouds. Image pixels that fall below the new threshold qualify. After processing, the pass one and two cloud cover scores are compared. Extreme differences are indicative of cloud signature corruption. When this occurs, the pass two results are ignored and the cloud score reverts to the pass one result.
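The pass-two thresholds are derived from descriptive statistics of the pass-one cloud temperatures. A minimal sketch of those statistics (mean, standard deviation, and distribution skew) follows; the exact formula by which ACCA turns them into thresholds is not given here, and the sample temperatures are illustrative.

```python
import math

# Sketch: the descriptive Band 6 statistics that the pass-two thresholds
# are built from (mean, standard deviation, and skew of the pass-one cloud
# temperatures). The sample values below are illustrative only.

def cloud_stats(band6_kelvin):
    """Return (mean, std, skew) of a list of Band 6 cloud temperatures."""
    n = len(band6_kelvin)
    mean = sum(band6_kelvin) / n
    var = sum((t - mean) ** 2 for t in band6_kelvin) / n
    std = math.sqrt(var)
    skew = sum(((t - mean) / std) ** 3 for t in band6_kelvin) / n
    return mean, std, skew

mean, std, skew = cloud_stats([260.0, 262.0, 265.0, 270.0, 281.0])
```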

During processing, a cloud cover mask is created. After two passes through the data, a filter is applied to the mask to fill in cloud holes. This filtering operation works by examining each non-cloud pixel in the mask. If five of the eight neighbors are clouds then the pixel is reclassified as cloud. The final cloud cover percentage for the image is calculated based on the filtered cloud mask.
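The hole-filling step above can be sketched as a neighborhood filter: a non-cloud pixel becomes cloud when at least five of its eight neighbors are cloud. This pure-Python version leaves border pixels unchanged for simplicity, which the operational implementation may handle differently.

```python
# Sketch of the cloud-mask hole-filling filter: a non-cloud pixel is
# reclassified as cloud when five or more of its eight neighbors are cloud.
# Border pixels are left unchanged here for simplicity.

def fill_cloud_holes(mask):
    """mask: 2-D list of 0 (non-cloud) / 1 (cloud). Returns a filtered copy."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if mask[r][c] == 0:
                neighbors = sum(mask[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                if (dr, dc) != (0, 0))
                if neighbors >= 5:
                    out[r][c] = 1  # fill the cloud hole
    return out
```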

The images in Figure 5-4 demonstrate an example of the improved power the Landsat 7 ACCA algorithm has over the Landsat 5 algorithm. In this case, clouds are successfully separated from the desert terrain below.

Figure 5-4. Landsat 5 and 7 Cloud Cover Mask Comparison

This desert area in Sudan (left: Bands 5-4-2) represents terrain where the Landsat 5 algorithm misclassified rocks as clouds due to their high reflectance in Band 5 (right). In the middle example, the Landsat 7 algorithm captures clouds while recognizing rocks for what they are.

5.6.10.2 Landsat 7 ETM+ ACCA Algorithm

This section describes the operational Landsat 7 ETM+ ACCA algorithm. It has been divided into five processes:

  1. Pass 1 Spectral Cloud Identification (Filters 1-11)

  2. Band 6 Cloud Signature Development (Filters 12-16)

  3. Pass 2 Thermal Band Cloud Separation (Filters 17-20)

  4. Image-Based Cloud-Cover Assignments and Aggregation (Filters 21-25)

  5. Nearest-Neighbor Cloud-Filling (Filter 26)

Most of the computer-intensive part of processing is in Pass 1 and therefore most of the following ACCA algorithm documentation addresses the processes within Pass 1.

A. Pass 1 – Spectral Cloud Identification

The algorithm handles the cloud population in each scene uniquely by examining the raw uncalibrated Level-0R image data twice. Data preparation involves converting Band 2 through Band 5 to planetary reflectance and Band 6 to apparent at-satellite radiance temperature.
For each spectral band (λ), the 8-bit raw uncalibrated image quantized level (Qi), in units of DN, is related to the TOA spectral radiance (Lλ) (in Watts/(m² * sr * µm)) by:

Qi = Gi Lλ + Q0i

Where:

Gi = Sensor responsivity (in DN per unit spectral radiance) for each detector in the band
Q0i = Average zero-radiance shutter background (in DN) from the CPF.

Sensor responsivity and the zero-radiance bias for each detector are maintained by the IAS and recorded in the CPF at the U.S. ground processing system at EROS. Radiometric detector normalization is applied in each spectral band. Bias-corrected image values are then given by:

∆Qi = Qi - Q0i = Gi Lλ

 

Thus, TOA spectral radiances (Lλ) are related to image data by:

Lλ = ∆Qi / Gi

TOA planetary albedo or reflectance (ρλ) for Band 2 through Band 5 is related to TOA spectral radiance (Lλ) by:

ρλ = (π × Lλ × d²) / (ESUNλ × cos θs)

Where:

π = Mathematical constant approximately equal to 3.14159
d = Earth-Sun distance in Astronomical Units interpolated from Table 5-2
ESUNλ = Exo-atmospheric solar irradiance in each spectral band in Watts/(m² * µm), which are referenced in Table 5-3
θs = Solar zenith angle (degrees)

At-satellite temperature for Band 6 (T) is related to TOA spectral radiance (Lλ) by:

T = K2 / ln((K1 / Lλ) + 1)

Where:

T = The at-sensor temperature in Kelvin
K2 = The calibration constant, 1282.71 Kelvin
K1 = The calibration constant 666.09 in Watts/(m² * sr * µm)
Lλ = The spectral radiance from equation 3

The first pass through the data is designed to capture pixels that are unambiguously clouds and not something on the ground. Eight different filters are used to isolate clouds and to eliminate cloudless areas, including problem land surface features such as snow and sand. The cloud pixels from Pass 1 are used to develop a Band 6 thermal profile and cloud thresholds for use in Pass 2, where the remaining clouds are identified. Five categories result from Pass 1 processing: warm clouds, cold clouds, desert, non-clouds, and an ambiguous group of image pixels that are reexamined in Pass 2.

The Band 6 temperature profile is formulated from the observed Pass 1 cloud population, if one exists. The profile is defined by the cloud population's mean, variance, and skewness, and undergoes modulation if snow or desert features are present in a scene.

A description of each filter, presented in the order implemented, follows:

  • Filter 1 - Brightness Threshold
    Each Band 3 pixel in the scene is first compared to a brightness threshold. Pixel values that exceed the Band 3 threshold, which is set at 0.08, are passed to Filter 2. Pixels that fall below this threshold are identified as non-clouds and flagged as such in the cloud mask.
  • Filter 2 - Non-cloud/Ambiguous Discriminator, Band 3
    Comparing each pixel entering this filter to a Band 3 threshold set at 0.07 identifies potential low-reflectance clouds. Pixels that exceed this threshold are labeled as ambiguous and are re-examined in Pass 2. Those pixels falling below 0.07 are identified as non-clouds and flagged as such in the cloud mask.
  • Filter 3 - Normalized Difference Snow Index
    The Normalized Difference Snow Index (NDSI) is used to detect snow (Hall et al., 1995). The NDSI filter is expressed as:

    NDSI = ((Band 2 – Band 5) / (Band 2 + Band 5))

    This filter is designed to eliminate snow. The reflectance of clouds and snow are similar in Band 2. However, in Band 5, reflectance for clouds is high while snow is low. Hall discovered that NDSI values greater than 0.4 represent snow cover quite well. This value was initially tried for ACCA to eliminate snow but clouds composed of ice crystals (e.g., cirrostratus) were also eliminated. The threshold was raised to 0.7 to capture potential clouds of this type. Pixels that fall between an NDSI range of -0.25 and 0.7 qualify as potential clouds and are passed to Filter 5. Pixels outside this NDSI range are labeled as non-cloud and passed to Filter 4. Snow pixels that slip through are generally trapped later.

  • Filter 4 - Snow Threshold
    Knowledge of snow in a scene is important for Pass 2 processing; therefore, a tally of snow pixels is retained. NDSI values above a 0.8 threshold qualify as snow and are recorded as non-cloud in the cloud mask.
  • Filter 5 - Temperature Threshold
    The Band 6 temperature (T) values are used to identify potential clouds. If a pixel value exceeds 300K, a realistic cloud temperature maximum, it is labeled as non-cloud. Pixels with a temperature value less than 300K are passed to Filter 6.
  • Filter 6 - Band 5/6 Composite
    Low values of the product of (1 – Band 5) reflectance and the Band 6 temperature are sensitive to the presence of clouds. The Band 5/6 Composite is expressed as:

    Band 5/6 Composite = (1 – Band 5) * Band 6

    This filter works well because clouds have cold temperatures (< 300K) and are highly reflective in Band 5 and therefore have low Band 5/6 Composite values. It is particularly useful for eliminating cold land surface features that have low Band 5 reflectance such as snow and tundra. Sensitivity analysis demonstrated that a threshold setting of 225 works optimally. Pixels below this threshold are passed to Filter 8 as possible clouds. Pixel values above this threshold are examined using Filter 7.

  • Filter 7 - Non-cloud/Ambiguous Discriminator, Band 5
    Comparing each pixel entering this filter to a Band 5 threshold set at 0.08 identifies potential low-reflectance clouds. Pixels that exceed this threshold are labeled as ambiguous and are re-examined in Pass 2. Those falling below 0.08 are identified as non-clouds (probably water) and flagged as such in the cloud mask.
  • Filter 8 - Band 4/3 Ratio for Growing Vegetation
    This filter eliminates highly reflective vegetation and is simply Band 4 reflectance divided by Band 3 reflectance. In the near-infrared (Band 4), reflectance for green leaves is high because very little energy is absorbed. In the red region (Band 3), the chlorophyll in green leaves absorbs energy so reflectance is low. The 4/3 ratio results in higher values for vegetation than for other scene features, including clouds. A threshold setting of 2.0 is used. Pixels that exceed this threshold are labeled ambiguous and are revisited in Pass 2. Pixels with ratios below this threshold are passed to Filter 9.
  • Filter 9 - Band 4/2 Ratio for Senescing Vegetation
    This filter eliminates highly reflective senescing vegetation and is formed by dividing the Band 4 reflectance by the Band 2 reflectance. In the near-infrared (Band 4), green leaves that are dead or dying absorb even less energy and are thus highly reflective. In the green region (Band 2), the leaves absorb less energy because of chlorophyll loss and exhibit increased reflectivity. The 4/2 ratio values are higher for vegetation than other scene features, including clouds. A threshold setting of 2.16248 works effectively. The at-launch setting was 2.16 but was changed in May of 2001 when the operational decision was made to image Band 4 in low gain mode. Pixels that exceed this number are ambiguous and revisited in Pass 2. Pixels with ratios below this threshold are passed to Filter 10.
  • Filter 10 - Band 4/5 Ratio for Soil
    This filter eliminates highly reflective rocks and sands in desert landscapes and is formed by dividing the Band 4 reflectance by the Band 5 reflectance. Rocks and sand tend to exhibit higher reflectance in Band 5 than in Band 4, whereas the reverse is true for clouds. A threshold setting of 1.0 works effectively. Pixels that fall below this threshold are labeled ambiguous and are revisited in Pass 2. Knowledge of desert pixels in a scene is important for Pass 2 processing. Therefore, a desert pixel tally is retained. Pixels with ratios that exceed this threshold are passed to Filter 11.
  • Filter 11 - Band 5/6 Composite for Warm and Cold Clouds
    All pixels reaching this filtering level are classified as clouds. A further separation into two classes is achieved by again using the Band 5/6 Composite filter. For each cloud pixel, the Band 5/6 Composite is compared against a threshold setting of 210. Pixels above and below this threshold are classified as warm and cold clouds, respectively. These two cloud classes are recorded in the cloud mask and used to develop two cloud signatures, one for the cold clouds and the other for the conjoined (cold plus warm) cloud classes.
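The Pass 1 cascade above (Filters 6 through 11) can be sketched as a per-pixel decision function. This is an illustrative Python reading of the text, not the operational ACCA code; the function name, argument order, and class labels are assumptions, while the thresholds are the values quoted above (reflectances are unitless fractions, Band 6 is in kelvin).

```python
def pass1_filters_6_to_11(b2, b3, b4, b5, b6):
    """Classify one pixel with ACCA Pass 1 Filters 6-11 (illustrative sketch).

    b2..b5 are reflectances (0-1); b6 is Band 6 temperature in kelvin.
    Returns 'cold cloud', 'warm cloud', 'ambiguous', or 'non-cloud'.
    """
    composite = (1.0 - b5) * b6              # Filter 6: Band 5/6 Composite
    if composite >= 225.0:
        # Filter 7: Band 5 check; low reflectance is probably water
        return 'ambiguous' if b5 > 0.08 else 'non-cloud'
    if b4 / b3 > 2.0:                        # Filter 8: growing vegetation
        return 'ambiguous'
    if b4 / b2 > 2.16248:                    # Filter 9: senescing vegetation
        return 'ambiguous'
    if b4 / b5 < 1.0:                        # Filter 10: rocks and sand
        return 'ambiguous'
    # Filter 11: warm/cold separation of the surviving cloud pixels
    return 'warm cloud' if composite > 210.0 else 'cold cloud'
```

In this sketch a cold, bright pixel survives all ratio tests and is labeled a cloud, while a dark, warm pixel exits at Filter 7 as probable water.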

 

B. Band 6 Cloud Signature Development

Pass 2 processing requires two new Band 6 thresholds against which all ambiguous pixels are compared. These thresholds are computed using the Pass 1 cloud temperature statistics. Only the cold clouds are used if snow or desert soil is present; otherwise, the cold and warm clouds are combined and treated as a single population. The cloud thermal profile developed includes key statistics: the maximum cloud temperature, the mean, the standard deviation, and the histogram skewness.

  • Filter 12 - Snow and Desert Indicator
    An infrared/short-wave infrared ratio is used to identify highly reflective rocks and sands in desert landscapes; snow was accounted for earlier in Pass 1. If snow or desert rocks are present, the warm cloud class is eliminated. The desert indicator employed is simply the ratio of potential cloud pixels exiting Filter 10 to those entering it, compared against a threshold value of 0.5. If fewer than 50 percent of the pixels remain, the warm clouds are removed. The snow percentage for the scene is also computed and compared against a threshold of 1 percent. If the scene is more than 1 percent snow, the warm clouds are removed.
  • Filter 13 - Pass 1 Cloud-free Indicator
    The Pass 1 cloud tally is compared against zero. If no clouds were tallied processing ends and the scene is declared cloud-free.
  • Filter 14 - Pass 1 Cold Cloud, Desert, and Mean Cloud Temperature Indicator
    Three conditions have to exist to continue Pass 2 processing: the cold cloud scene percentage must be greater than 0.4 percent, the Pass 1 cloud temperature mean must be less than 295K, and desert conditions must not exist. If any of these tests fail, processing passes to Filter 22. If all three tests are met, Pass 2 processing continues at Filter 15.
  • Filter 15 - Temperature Histogram Negative Skewness
    Prior to computing the Band 6 thresholds, a skew factor is computed from the skewness of the cloud temperature histogram. If the histogram skewness is negative, the cloud population is biased towards the left, or colder, tail of the distribution. No threshold shift is necessary because the thresholds are already set at appropriate levels for identifying clouds that skew colder; the skew factor is set to 0.0.
  • Filter 16 - Temperature Histogram Positive Skewness
    If the histogram skewness is positive, the cloud population is biased towards the right or warmer tail of the cloud temperature distribution. Because of the upward bias in temperature, an upward threshold adjustment is necessary. The skewness becomes the skew factor if it is less than 1.0, otherwise it is set to 1.0.
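The rule in Filters 15 and 16 reduces to a small clamp. A minimal sketch (the function name is an assumption):

```python
def skew_factor(histogram_skewness):
    """Filters 15-16: derive the threshold skew factor from the Pass 1
    cloud temperature histogram skewness."""
    if histogram_skewness <= 0.0:
        return 0.0                       # Filter 15: cold-skewed, no shift
    return min(histogram_skewness, 1.0)  # Filter 16: warm-skewed, capped at 1.0
```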

 

C. Pass 2 – Thermal Band Cloud Separation

One of the thresholds is set low and used to generate a conservative estimate of the clouds in a scene. The other is set high and used to compute a less restrictive estimate of cloud cover. The thresholds are determined from the Pass 1 cloud temperature histogram: its 97.5 and 83.5 percentiles are the starting points for the two temperature thresholds, with adjustments made if necessary. All ambiguous pixels are tested against the two thresholds.

If a pixel temperature falls below the upper threshold, the cloud mask is flagged with a unique number that identifies a class of warmer clouds. If the pixel temperature also falls below the lower threshold, the cloud-mask value is changed to a colder cloud-class identifier.

  • Filter 17 - Threshold Shift Deployment
    If the skew factor is positive, upward adjustments are made to compensate for the warm cloud bias. The threshold shift is the product of the skew factor and cloud temperature standard deviation. Both thresholds are adjusted by this amount. If the skew factor is 0.0, processing continues at Filter 19.
  • Filter 18 - Band 6 Maximum Threshold
    A final check is made to see if the new upper threshold exceeds the histogram's 98.75 percentile (a threshold above or near the cloud temperature maximum is unwanted). If so, the 98.75 percentile becomes the new upper threshold and the lower threshold is adjusted by the amount of skewness compensation allowed.
  • Filter 19 - Band 6 Warm Cloud Indicator
    Each ambiguous pixel’s Band 6 temperature is tested against the higher threshold; the pixel is labeled a warm cloud if its temperature is lower, and Pass 2 processing continues at Filter 20. If it is higher, the pixel is skipped and the next ambiguous pixel is examined. The process continues until all ambiguous pixels are accounted for.
  • Filter 20 - Band 6 Cold Cloud Indicator
    Each Pass 2 warm cloud pixel is tested against the lower threshold and is re-labeled cold cloud if its temperature is lower. Processing returns to Filter 19 until all ambiguous pixels are accounted for.
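Filters 17 through 20 can be sketched as follows, assuming the Pass 1 cloud temperatures are available as an array. This is an illustrative reading of the text, not the operational code; the names, the use of NumPy percentiles for the histogram percentiles, and the exact handling of the Filter 18 adjustment are assumptions.

```python
import numpy as np

def pass2_thresholds(cloud_temps, skew_factor):
    """Filters 17-18: compute the Pass 2 upper and lower Band 6 thresholds
    from the Pass 1 cloud temperature population (kelvin)."""
    temps = np.asarray(cloud_temps, dtype=float)
    upper = np.percentile(temps, 97.5)     # less restrictive estimate
    lower = np.percentile(temps, 83.5)     # conservative estimate
    shift = skew_factor * temps.std()      # Filter 17: skew compensation
    upper += shift
    lower += shift
    cap = np.percentile(temps, 98.75)      # Filter 18: maximum threshold
    if upper > cap:
        lower -= upper - cap               # keep only the allowed compensation
        upper = cap
    return upper, lower

def label_ambiguous(t, upper, lower):
    """Filters 19-20: label one ambiguous pixel by its Band 6 temperature."""
    if t >= upper:
        return 'unlabeled'                 # skipped; remains non-cloud
    return 'cold cloud' if t < lower else 'warm cloud'
```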

 

D. Image-Based Cloud Cover Assignments and Aggregation

After Band 6 is processed, the scene percentages for the warm and cold cloud classes are computed. The integrity of the two additional cloud classes is then appraised. The presence of snow or desert features, and the magnitude of the two new cloud classes are used to accept or reject one or both classes. Cloud classes that qualify as legitimate are combined with the Pass 1 clouds to form a single unified cloud class in the mask.

  • Filter 21 - Pass 2 Cloud-free Indicator
    The Pass 2 cloud tally is compared against zero. If no clouds are tallied (an unlikely event), the final scene cloud cover score reported is the cold cloud percent from Pass 1. Processing resumes at Filter 26.
  • Filter 22 - Pass 1 Cloud Temperature Mean
    This filter is used when the three Pass 1 criteria in Filter 14 are not met: the cold cloud scene percentage might be less than 0.4 percent, the Pass 1 cloud temperature mean might be greater than 295K, or desert conditions may exist. The mean of the Pass 1 cold cloud population is again tested against the 295K limit. If it is less, the clouds are accepted as real, although with less certainty; the final scene cloud cover score reported is the cold cloud percent from Pass 1, and processing resumes at Filter 26. If the mean is greater than 295K, the cloud identification is considered too uncertain; the final scene cloud cover score is set to zero and processing ends.
  • Filter 23 - Pass 1 Cloud Acceptance Indicator
    If the snow or desert conditions determined earlier exist, the Pass 1 cloud percentage is determined using the Pass 1 cold clouds only. If the scene is free from snow and desert soil the Pass 1 cloud percentage is determined using both the cold and warm Pass 1 clouds.
  • Filter 24 - Pass 2 Cold and Warm Cloud Acceptance
    The temperature means and maximums are computed for the Pass 2 cold and warm cloud populations. Additionally, the percentages of the scene represented by the Pass 2 cold class and combined classes (cold and warm) are computed. The Pass 2 cold and warm clouds are united with the Pass 1 clouds if four conditions are met: the Pass 2 cold and warm cloud contribution cannot be more than 35 percent of the scene, snow cannot be greater than 1 percent of the scene, the mean temperature for the combined cloud population cannot be greater than 295K, and the difference between the combined cloud maximum temperature and the upper threshold cannot be less than 2 degrees. If these four conditions are met, all clouds identified in Pass 2 are united with the Pass 1 clouds and processing proceeds to Filter 26. If any one condition is breached, processing passes to Filter 25.
  • Filter 25 - Pass-2 Cold Cloud Acceptance
    The Pass 2 cold clouds are used if their contribution to the scene cloud percentage is less than 25 percent and their mean temperature is less than 295K. If these two conditions are satisfied, the cold clouds are united with the Pass 1 clouds and processing advances to Filter 26. If either rule is breached, the Pass 2 analysis is considered invalid and the Pass 1 cold clouds are used in computing the final scene score. Processing continues at Filter 26.
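The acceptance rules in Filters 24 and 25 can be condensed into one decision function. This is a sketch of the stated conditions; the parameter names and return labels are assumptions (percentages are scene fractions in percent, temperatures in kelvin).

```python
def accept_pass2_clouds(cold_pct, combined_pct, snow_pct,
                        combined_mean_temp, combined_max_temp,
                        upper_threshold, cold_mean_temp):
    """Filters 24-25: decide which Pass 2 cloud classes join the Pass 1
    clouds. Returns 'cold and warm', 'cold only', or 'none'."""
    # Filter 24: all four conditions must hold to accept both classes
    if (combined_pct <= 35.0 and snow_pct <= 1.0
            and combined_mean_temp <= 295.0
            and upper_threshold - combined_max_temp >= 2.0):
        return 'cold and warm'
    # Filter 25: fall back to the cold clouds alone
    if cold_pct < 25.0 and cold_mean_temp < 295.0:
        return 'cold only'
    return 'none'   # Pass 2 rejected; Pass 1 cold clouds are used
```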

E. Nearest-Neighbor Cloud Filling

A final step involves identifying and filling cloud mask holes. This operation boosts the cloud-cover content to more accurately reflect the amount of unusable image data in a scene. Afterwards, the cloud pixels in the mask are tabulated and a final cloud cover percentage score for the scene is computed.

  • Filter 26 - Nearest-Neighbor Cloud Filling
    Each non-cloud image pixel is examined and converted to cloud if at least five of its eight neighbors are clouds. Filled pixels qualify as cloudy neighbors in subsequent tests.
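The hole-filling rule can be sketched as a single pass over the mask. Because the text states that filled pixels qualify as cloudy neighbors in subsequent tests, the sketch updates the mask in place as it scans; the row-major scan order is an assumption.

```python
def fill_cloud_holes(mask):
    """Filter 26 sketch: convert a non-cloud pixel (False) to cloud (True)
    when at least five of its eight neighbors are cloud. The mask is a list
    of lists of booleans and is modified in place."""
    rows, cols = len(mask), len(mask[0])
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                continue
            # Count cloudy neighbors in the 8-connected neighborhood,
            # clipping at the image edges.
            cloudy = sum(
                mask[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c))
            if cloudy >= 5:
                mask[r][c] = True          # filled pixels count for later tests
    return mask
```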

5.6.11 Algorithm for Calculation of Scene Quality

Besides the cloud-cover assessment that is part of the standard LPGS processing, a scene quality assessment is provided as part of the EarthExplorer, GloVis, or LandsatLook Viewer scene information. A two-digit number that separates image and PCD quality is used by the Landsat Processing System (LPS) for Landsat 7. The first digit represents image data quality and can range in value from 0 to 9. The second digit represents PCD quality and can also range in value from 0 to 9. The formula for the combined score is:

image score * 10 + PCD score

The following describes how the image quality and PCD quality scores are assigned.

5.6.11.1 Image Quality Component

The image quality digit is based on the number and distribution of bad scans or equivalent bad scans in a scene. It is computed by dividing the total number of filled minor frames for a scene by 6313 (the nominal number of image data minor frames in a major frame for 30 meter bands). This gives the number of equivalent bad scans. The distribution of filled minor frames is characterized as being either clustered or scattered. A cluster of 128 bad scans still yields a scene with a cluster of 246 good scans which is almost 2/3 of a scene. A scattering of 128 bad scans may make the entire image worthless.

It is proposed that bad scan lines are clustered if they occur within a grouping of 128 contiguous scans (approximately 1/3 of a scene). Errors are characterized as scattered if they occur outside the bounds of 128 contiguous scans. The image score is assigned according to the rules in Table 5-5.

Score Image Quality
9 No errors detected, a perfect scene
8 ≤ 4 equivalent bad scans, clustered
7 ≤ 4 equivalent bad scans, scattered
6 ≤ 16 equivalent bad scans, clustered
5 ≤ 16 equivalent bad scans, scattered
4 ≤ 64 equivalent bad scans, clustered
3 ≤ 64 equivalent bad scans, scattered
2 ≤ 128 equivalent bad scans, clustered
1 ≤ 128 equivalent bad scans, scattered
0 > 128 equivalent bad scans, scattered (> 33% of the scene is bad)

Table 5-5. Scene Quality Score - Image Quality Component
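The rules in Table 5-5 can be sketched as a small scoring function. This is an illustrative reading of the table, not the operational LPGS code; treating a score of 9 as requiring zero equivalent bad scans is an assumption.

```python
def image_quality_score(equivalent_bad_scans, clustered):
    """Assign the image quality digit per Table 5-5.

    equivalent_bad_scans: total filled minor frames / 6313.
    clustered: True if the bad scans fall within 128 contiguous scans.
    """
    if equivalent_bad_scans <= 0:
        return 9                                  # perfect scene
    # (limit, score-if-clustered); scattered errors score one point lower
    for limit, cluster_score in ((4, 8), (16, 6), (64, 4), (128, 2)):
        if equivalent_bad_scans <= limit:
            return cluster_score if clustered else cluster_score - 1
    return 0                                      # > 128 equivalent bad scans
```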

5.6.11.2 PCD Quality Component

The PCD quality digit is based on the number and distribution of filled PCD minor frames. There are approximately 7 PCD major frames for a standard WRS scene comprised of 375 scans. Each PCD major frame consists of 128 minor frames or 16,384 bytes. Clustering of filled PCD minor frames indicates that errors are localized whereas scattering indicates that numerous or all major frames may be affected.

Each PCD minor frame has 16 jitter measurements and corresponds to 30 milliseconds or approximately 1/2 of a scan. Two minor frames correspond to a single scan while 256 minor frames (i.e., 2 PCD major frames) correspond to 128 scans or approximately 1/3 of a scene.

Like the image data, it is proposed that bad PCD minor frames are clustered if they occur within a grouping of two contiguous PCD major frames (1/3 of a scene). Errors are characterized as scattered if they occur outside the bounds of contiguous PCD major frames. The PCD score is assigned according to the rules in Table 5-6.

Score PCD Quality
9 No PCD errors detected
8 ≤ 8 bad minor frames, clustered
7 ≤ 8 bad minor frames, scattered
6 ≤ 32 bad minor frames, clustered
5 ≤ 32 bad minor frames, scattered
4 ≤ 128 bad minor frames, clustered
3 ≤ 128 bad minor frames, scattered
2 ≤ 256 bad minor frames, clustered
1 ≤ 256 bad minor frames, scattered
0 > 256 bad minor frames, scattered (> 33% of the scene is bad)

Table 5-6 Scene Quality Score - PCD Quality Component

5.6.11.3. Scene Quality

The score calculated using the methods described is recorded in the scene level metadata under the keyword “SCENE_QUALITY”. Using this scoring system, the highest possible rating for an image would be 99, and the lowest would be 00. The score treats missing image data more critically than missing or filled PCD data. For example, an image with 16 filled scans that are scattered and with errorless PCD would have a 59 score whereas an image with intact image data and 32 filled PCD minor frames that are scattered would receive a score of 95. The rationale is that PCD is less important because missing values can always be extrapolated or interpolated to enable Level-1 processing. Missing image data cannot be retrieved and thus impacts the user more severely than missing PCD. The score construct unambiguously alerts the user to image data deterioration.
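The combined score and the two worked examples above can be checked directly:

```python
def scene_quality(image_score, pcd_score):
    """Combine the image and PCD quality digits into the SCENE_QUALITY value."""
    return image_score * 10 + pcd_score

# 16 scattered filled scans (image score 5) with errorless PCD (9) gives 59;
# intact imagery (9) with 32 scattered filled PCD minor frames (5) gives 95.
```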

5.7 Data Properties

5.7.1 Scientific Theory of Measurements

When solar energy strikes an object, five types of interactions are possible and in most cases, multiple interactions occur at the same time. The energy can be:

  1. Transmitted - The energy passes through with a change in velocity as determined by the index of refraction for the two media in question.
  2. Absorbed - The energy is given up to the object through electron or molecular reactions and the object heats up as a result of this process.
  3. Reflected - The energy is returned unchanged with the angle of incidence equal to the angle of reflection. Reflectance is the ratio of reflected energy to the total amount incident on a body. The wavelength of the reflected energy (not absorbed) determines the color of an object.
  4. Scattered - The direction of energy propagation is randomly changed. Rayleigh and Mie scatter are the two most important types of scatter in the atmosphere.
  5. Emitted - In this case, the incident solar energy is first absorbed, and then re-emitted, usually at longer wavelengths. The object also heats up due to this process.

The Landsat 7 system is designed to collect seven bands or channels of reflected energy and one channel of emitted energy (see Table 5-7). A well-calibrated ETM+ data set enables the conversion of the raw solar energy collected by the sensor to absolute units of radiance. Radiance refers to the flux of energy (primarily irradiant or incident energy) per solid angle leaving a unit surface area in a given direction. Radiance corresponds to brightness in a given direction toward the sensor, and is often confused with reflectance, which is the ratio of reflected versus total incoming energy. Radiance is what is measured at the sensor and is somewhat dependent on reflectance as well as environmental conditions.
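As a concrete illustration of this conversion, at-sensor spectral radiance is commonly recovered from the quantized pixel values by a linear rescaling using band- and gain-specific limits published in the product metadata. The sketch below assumes an 8-bit Level-1 product; the LMIN/LMAX values in the test are illustrative placeholders, not authoritative calibration constants.

```python
def dn_to_radiance(qcal, lmin, lmax, qcalmin=1, qcalmax=255):
    """Rescale a quantized, calibrated pixel value (DN) to at-sensor
    spectral radiance in W/(m^2 sr um).

    lmin/lmax are the band- and gain-specific radiance limits from the
    product metadata; the default QCAL range assumes 8-bit data.
    """
    return (lmax - lmin) / (qcalmax - qcalmin) * (qcal - qcalmin) + lmin
```

By construction, a DN at the top of the quantization range maps to lmax and a DN at the bottom maps to lmin.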

 

Band Wavelength (µm) Resolution (meters)
1 0.450 – 0.515 30
2 0.525 – 0.605 30
3 0.630 – 0.690 30
4 0.775 – 0.900 30
5 1.550 – 1.750 30
6 10.40 – 12.50 60*
7 2.080 – 2.350 30
8 0.520 – 0.900 15
*Band 6 is collected at 60m then resampled to 30m to match the ETM+ reflective bands

Table 5-7. Landsat 7 ETM+ Band Wavelengths and Resolution

The seven spectral bands of ETM+ data are designed to distinguish Earth surface materials through the development of spectral signatures. For any given material, the amount of emitted and reflected radiation varies by wavelength. These variations are used to establish the signature reflectance fingerprint for that material. The basic premise of using spectral signatures is that similar objects or classes of objects have similar interactive properties with electromagnetic radiation at any given wavelength. Conversely, different objects have different interactive properties.

A plot of the collective response of scattered, emitted, reflected, and absorbed radiation at specific wavelengths of the electromagnetic spectrum should, according to the basic premise, result in a unique curve, or spectral signature that is diagnostic of the object or class of objects. A signature on such a graph can be defined as reflectance as a function of wavelength. Four such signatures are illustrated in Figure 5-5.

Figure 5-5 A plot of the spectral reflectance curves of four different Earth surface targets
Figure 5-5. A Plot of the Spectral Reflectance Curves of Four Different Earth Surface Targets

ETM+ data can be used to plot spectral signatures although the data are limited to eight data points corresponding to each of the sensor's bands, which cover the spectral range of 0.45 to 12.5 µm. It is often more useful to plot the ETM+ spectral signatures in multi-dimensional feature space. The four Earth surface materials shown in Figure 5-5 are plotted in Figure 5-6 using just two of the ETM+ spectral bands (in this case, Band 2 versus Band 4). Using this technique, the four test surface targets (grasslands, pinewoods, red sand, and silty water) may be characterized as distinctly different.

Each of the materials has been plotted according to its percent reflectance for two of the wavelengths or spectral bands. Multi-dimensional plots, which use more than two wavelengths, are more involved; plotting in multi-dimensional space tends to increase the separability among different materials. This technique of spectral separation forms the basis for multispectral analysis, where the goal is to accurately define the bounds of spectrally identifiable data point clusters.
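The idea of separability in a two-band feature space can be mimicked with a toy example. The Band 2 / Band 4 percent-reflectance pairs below are purely illustrative placeholders, not measured values; they only show how separability can be quantified as distance in feature space.

```python
import math

# Hypothetical (Band 2 %, Band 4 %) reflectance means for the four targets
# of Figure 5-6; the numbers are invented for illustration only.
targets = {
    'silty water': (12.0, 6.0),
    'pinewoods':   (6.0, 28.0),
    'grasslands':  (10.0, 45.0),
    'red sand':    (20.0, 30.0),
}

def separation(a, b):
    """Euclidean distance between two targets in the two-band feature space."""
    (x1, y1), (x2, y2) = targets[a], targets[b]
    return math.hypot(x2 - x1, y2 - y1)
```

With these placeholder values, water and grasslands plot far apart while pinewoods and red sand are closer together, so the former pair would be easier to separate with these two bands alone.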

Figure 5-6 Spectral separability of surface targets using two ETM+ bands
Figure 5-6. Spectral Separability of Surface Targets Using Two ETM+ Bands

5.7.2 Spatial Characteristics

Spatial resolution is the resolving power of an instrument needed for the discrimination of features and is based on detector size, focal length, and sensor altitude. More commonly used descriptive terms for spatial resolution are Ground Sample Distance (GSD) and Instantaneous Field of View (IFOV). The IFOV, a synonym for pixel size, is the area of terrain or ocean covered by the field of view of a single detector. The ETM+ is designed to sample the ground at three different resolutions: 30 meters for Bands 1-5 and Band 7, 60 meters for Band 6, and 15 meters for Band 8 (panchromatic) data (see Table 5-8). Figure 5-7 shows an example of a 30-meter pixel; each 30-meter pixel is about the size of a baseball diamond.

A standard WRS-2 scene covers a land area approximately 185 km (across-track) by about 170 km (along-track). A more precise estimate for actual scene size can be calculated from the Level-0R product image dimensions (see Appendix D).

Band Number   Resolution (m)   Samples (Columns)   Data Lines (Rows)   Bits per Sample
1-5, 7        30               6,600               6,000               8
6             60               3,300               3,000               8
8             15               13,200              12,000              8

Table 5-8. Image Dimensions for a Landsat 7 Level 0R Product

Figure 5-7 Landsat pixel size relative to a baseball diamond
Figure 5-7. Landsat Pixel Size Relative to a Baseball Diamond

A Landsat scene's spatial extent cannot be determined simply by multiplying the rows and columns of a scene by the IFOV. This would lead to a scene width of 198 km (6600 samples * 30 meters) and a scene length of 180 kilometers (6000 lines * 30 meters). While this calculation applies to scene length, the scene width calculation is more complicated due to the presence of image buffers and the staggered image bands in the Level-0R product. Left and right image buffers were placed in the Level-0R product to accommodate a possible increase in scan line length over the mission's life. The staggered image bands result from the focal plane design (see Section 2.2), which LPS accounts for by registering the bands during Level-0R processing. The end result is an increasing amount of zero-fill preamble according to the band order on the ground projected focal plane array.

The detector offsets determine the amount of zero-fill preamble for each band. These are listed in Table 5-9 and can also be found in the CPF (see Section 3.10). Coincident imagery for all 8 bands starts at pixel location 247 for the 30-meter bands. This is shown by the reverse scan odd detector offset for Band 6. This number, 116, is in 60-meter IFOVs, which translates to 232 30-meter pixels. Another 14 pixels must be added to this number to account for the seven minor frames of image data pre-empted by time code. Coincident imagery for all 8 bands ends at pixel location 6333 for the 30-meter bands. This number is determined by looking at the reverse even detector offset for Band 8. Add to this number the value 12,626 that represents the number of Band 8 pixels per line (6313 minor frames multiplied by 2). The total, 12,666, is halved to put the ending pixel number into 30-meter units. The number of coincident image pixels in a scan is therefore 6087 (6333 - 247 + 1). The nominal width for a scene is therefore 182.61 km (6087 * 30 meters).
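The arithmetic in this paragraph can be reproduced step by step (all constants come from the text and Table 5-9):

```python
# Start of coincident imagery: Band 6 reverse-scan odd-detector offset,
# converted from 60-m IFOVs to 30-m pixels, plus the 7 minor frames
# (14 pixels) pre-empted by time code; pixel numbering is 1-based.
band6_reverse_odd = 116
start = band6_reverse_odd * 2 + 14 + 1            # = 247

# End of coincident imagery: Band 8 reverse-scan even-detector offset plus
# the Band 8 pixels per line (6313 minor frames * 2), halved into 30-m units.
band8_reverse_even = 40
end = (band8_reverse_even + 6313 * 2) // 2        # = 6333

coincident_pixels = end - start + 1               # = 6087
scene_width_km = coincident_pixels * 30 / 1000    # = 182.61 km
```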

Band Number   Forward Scan     Forward Scan    Reverse Scan     Reverse Scan
              Even Detectors   Odd Detectors   Even Detectors   Odd Detectors
1             49.0             51.0            45.0             48.0
2             74.0             76.0            70.0             73.0
3             99.0             101.0           95.0             98.0
4             124.0            126.0           120.0            123.0
5             195.0            197.0           191.0            194.0
6             110.0            113.0           114.0            116.0
7             169.0            171.0           165.0            168.0
8             50.0             54.0            40.0             44.0

Table 5-9. Landsat 7 ETM+ Detector Shifts

5.7.3 Temporal Characteristics

5.7.3.1 Sun Elevation Effects

While the orbit of Landsat 7 allows the spacecraft to pass over the same point on the Earth at essentially the same local time every 16 days, changes in sun elevation angle, as defined in Figure 5-8, cause variations in the illumination conditions under which imagery is obtained.

Figure 5-8 Illustration of the sun's variable elevation angle relative to Landsat 7's nadir view
Figure 5-8. Illustration of the Sun's Variable Elevation Angle Relative to Landsat 7's Nadir View

These changes are due primarily to the north-south seasonal position of the sun relative to the Earth (see Figure 5-9). The actual effects of variations in solar elevation angle on a given scene are very dependent on the scene area itself. The reflectance of sand, for example, is significantly more sensitive to variations in sun elevation angle than most types of vegetation.

Figure 5-9 Effects of seasonal changes on solar elevation angle for a particular latitude
Figure 5-9. Effects of Seasonal Changes on Solar Elevation Angle for a Particular Latitude

Atmospheric effects also affect the amount of radiant energy reaching the Landsat sensor, and these too can vary with time of year. Because of such factors, each general type of scene area must be evaluated individually to determine the range of sun elevation angles over which useful imagery can be realized (see Section 4). Depending on the scene area, it may or may not be possible to obtain useful imagery at lower sun elevation angles. At sun elevation angles greater than 30 degrees, typically all image data can be used. For the Landsat 7 mission, a minimum solar elevation angle of 5 degrees is generally accepted as the lowest angle at which some amount of ETM+ imagery can be acquired over most land cover types. However, data acquired over land ice (e.g., Antarctica, Greenland) at solar elevation angles as low as 2 degrees can still yield useful information on surface characteristics through cast shadows, but this is a rather specialized application.

Apart from the variability of scene effects, sun elevation angle is affected by a number of perturbing forces on the Landsat orbit. These include forces such as atmospheric drag and the sun's gravity. These forces have the effect of shifting the time of descending node throughout the year, which results in changes to the nominal sun elevation angle. The effects of orbit perturbations, however, can be considered minor for most applications, and are compensated for by periodic orbit maneuvers (see Section 2.1.6).

5.7.3.2 Revisit Opportunities

Repeat imaging opportunities for a given scene occur every 16 days. This does not mean every scene is collected every 16 days (see Section 4 for details). Duty cycle constraints, limited onboard recorder storage, the use of cloud cover predictions, and adherence to the LTAP make this impossible. The goal, however, is to collect as much imagery as possible over dynamically changing landscapes. Deserts do not meet this definition and thus are imaged once or twice per year. Temperate forests and agricultural regions qualify as dynamic and are imaged more frequently. Figure 5-10 illustrates the more than 4 million archived ETM+ scenes acquired during the mission's first 18 years (through 2017).

More than 17,000 unique scenes are now in the Landsat 7 portion of the archive at EROS. These numbers are only from descending or daylight passes and tens of thousands of additional scenes have been acquired during night-time passes over volcanoes and calibration/validation sites, as well as for instrument assessments.

Due to Landsat 7’s maturity, coverage trends can be observed, as Figure 5-10 illustrates. The U.S., including Alaska, has been extensively covered because every imaging opportunity was exploited. South America and Australia are similarly well covered. North Africa is mostly desert and appears brown to yellow as do large portions of Earth’s taiga / tundra regions. Northern Asia is mostly green and yellow due to recorder and downlink opportunity constraints.

Figure 5-10 A map showing all Landsat 7 ETM+ data acquired and archived up through June 30, 2017 (descending/daytime only)
Figure 5-10. A Map Showing all Landsat 7 ETM+ Data Acquired and Archived Through June 30, 2017 (descending/daytime only)

The importance of frequently imaging dynamically changing landscapes is illustrated in Figure 5-11. The dramatic color changes in the mountains east of Salt Lake City indicate the montane growing season is over by October. Multi-temporal analysis using images such as these allows key landscape components, such as biomass, species distribution, and phenological growth patterns, to be resolved with greater accuracy.

Figure 5-11 August 14, 1999 (left) and October 17, 1999 (right) images of the Salt Lake City area shown in band combination 5-4-2
Figure 5-11. August 14, 1999 (left) and October 17, 1999 (right) Images of the Salt Lake City Area Shown in Band Combination 5-4-2.

 

