Title | Imagery and GIS |
---|---|
Author | Kass Green
Genre | Geography
ISBN | 9781589484894
Figure 4.1. Comparison of required spatial and temporal resolutions of different mapping applications
What types of features need to be mapped?
The types of features to be mapped will affect the spectral and temporal resolutions required. Mapping general land-use/land-cover classes (e.g., urban versus agricultural versus water versus forests) can be accomplished with one date of panchromatic imagery. Identifying tree species usually requires multispectral imagery and is greatly enhanced if lidar is also available to measure tree height. Adding multitemporal imagery helps distinguish deciduous from evergreen tree species, or separate deciduous species from one another if they change colors differently during the fall or spring. Mapping crop types also requires multispectral imagery taken when the crops are established and growing. Mapping crop yield requires multispectral and multitemporal imagery. Coastal wetland mapping often requires careful coordination of the imagery collection with tidal and weather conditions, often necessitating collection of imagery at low tide on calm days so that the maximum amount of wetland vegetation is exposed and wave action does not interfere with the vegetation’s spectral response. Mapping evapotranspiration (the transfer of water from land and plants to the atmosphere) requires thermal imagery.
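As a rough illustration of why multispectral and multitemporal data matter for species-level mapping, the sketch below computes NDVI from red and near-infrared reflectance for a leaf-on and a leaf-off date and separates deciduous from evergreen pixels by the seasonal drop in NDVI. The reflectance values and thresholds are invented for illustration and are not tied to any particular sensor.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-infrared reflectance."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-10)

# Illustrative reflectance values for three pixels: evergreen, deciduous, bare soil.
red_summer = np.array([0.04, 0.05, 0.20])
nir_summer = np.array([0.35, 0.45, 0.25])
red_winter = np.array([0.05, 0.15, 0.22])
nir_winter = np.array([0.33, 0.20, 0.26])

summer_ndvi = ndvi(red_summer, nir_summer)
winter_ndvi = ndvi(red_winter, nir_winter)

# Evergreen canopies stay green year-round, so their NDVI changes little between dates;
# deciduous canopies lose their leaves, so their NDVI falls sharply in the leaf-off image.
seasonal_drop = summer_ndvi - winter_ndvi
is_vegetated = summer_ndvi > 0.5                # thresholds are illustrative, not prescriptive
is_deciduous = is_vegetated & (seasonal_drop > 0.3)
is_evergreen = is_vegetated & (seasonal_drop <= 0.3)
print(is_deciduous, is_evergreen)
```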
Change detection requires multitemporal imagery. Detecting obvious changes such as flooding, forest harvesting, or urban expansion can often be accomplished with multiple dates of panchromatic imagery because the spectral differences between the “from” and “to” classes are very distinct. For example, clear-cuts do not look like forests, wildlands and croplands do not look like subdivisions, and the spectral response of water versus other land-cover types makes flood mapping fairly straightforward. Mapping subtle changes such as tree growth or crop production requires multitemporal, multispectral imagery and can be greatly aided by lidar.
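The simplest version of the approach described here is to difference two co-registered dates and flag pixels whose brightness changes substantially. A minimal sketch, assuming two already co-registered panchromatic arrays with invented digital numbers:

```python
import numpy as np

# Illustrative 3x3 panchromatic scenes (digital numbers) from two dates.
# Forest is dark, bare ground after harvest is bright, water is very dark.
date1 = np.array([[60, 62, 58],
                  [61, 59, 63],
                  [20, 22, 21]], dtype=float)
date2 = np.array([[61, 131, 128],
                  [60, 129, 62],
                  [21, 23, 22]], dtype=float)

# Simple image differencing: pixels whose brightness changes by more than a
# threshold are flagged as candidate change (e.g., forest converted to clear-cut).
diff = date2 - date1
change_threshold = 40      # illustrative; operational thresholds are derived from the data
change_mask = np.abs(diff) > change_threshold
print(change_mask)
```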
Also important is the quality of the imagery’s calibration. Calibration compensates for radiometric variation caused by sensor defects, system noise, and scan angle. Landsat sensors are methodically calibrated using preflight, postlaunch onboard, and ground reference data. As a result, Landsat data is considered the gold standard of radiometric quality, and many other satellite systems calibrate their imagery to Landsat.
What is the size, shape, and accessibility of the project area?
The size, shape, and accessibility of a project area often determine the platform used to collect the imagery. Utility corridors, coastlines, and sinewy river corridors are often mapped best with airborne platforms instead of satellite imagery because, unlike satellites, aircraft can closely follow the shape of the project area. Large statewide, regional, or country-sized areas can often best be mapped from satellites with large image footprints. Areas inaccessible to aircraft because of government restrictions are best imaged with satellites. Small areas inaccessible to aircraft because the infrastructure does not exist to support aircraft operations might be best imaged by a UAS.
What are the requirements for spatial and spectral accuracy?
All imagery used to make a map or measure distances on the landscape needs to be registered to the ground and have the effect of terrain displacement removed (see chapter 6 for more detail on these topics). While once a cumbersome and difficult task, georeferencing and terrain correction have become much easier with the development of worldwide digital elevation models, control points, and image matching algorithms. However, not all remote sensing systems have the same quality of instrumentation, and not all remote sensing companies have the same quality of processing systems, access to high-quality digital elevation models, existing accurate orthoimagery, or ground control points. Additionally, many remote sensing companies sell products with different levels of spatial accuracy.
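As an illustration of how routine terrain correction has become, the sketch below orthorectifies a hypothetical scene with the gdalwarp command-line tool, assuming the scene is delivered with rational polynomial coefficients (RPCs) and that a digital elevation model is available. The file names, DEM, and target projection are placeholders, not part of any specific workflow described in this chapter.

```python
import subprocess

# Minimal sketch: terrain-correct (orthorectify) a raw scene that ships with RPCs.
# "scene_with_rpcs.tif", "dem.tif", and "ortho.tif" are hypothetical file names.
subprocess.run(
    [
        "gdalwarp",
        "-rpc",                      # use the sensor's RPC model for georeferencing
        "-to", "RPC_DEM=dem.tif",    # supply a DEM so terrain displacement is removed
        "-t_srs", "EPSG:32610",      # example target projection (UTM zone 10N)
        "scene_with_rpcs.tif",
        "ortho.tif",
    ],
    check=True,
)
```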
When acquiring imagery, the analyst should always understand its stated spatial accuracy, which is usually expressed as the maximum circular error in meters at a 90 percent confidence level, termed “CE90” (see chapter 12 for a detailed discussion of how spatial accuracy is determined). Additionally, the accuracy of the imagery should be checked against ground control points or a GIS dataset known to be more accurate than the imagery. It is not unusual for actual spatial accuracy to be worse than stated, especially in areas with little ground control or high terrain relief. Spatial accuracy is also affected by the viewing angle of the collection: usually, the more off-nadir the collection, the lower the spatial accuracy.
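CE90 can be estimated empirically from independent check points as the radial error that 90 percent of the points do not exceed. A minimal sketch with invented check-point offsets:

```python
import numpy as np

# Illustrative offsets (meters) between image-derived coordinates and surveyed
# check points: dx and dy for eight hypothetical points.
dx = np.array([ 1.2, -0.8,  2.1,  0.5, -1.6,  0.9, -2.4,  1.1])
dy = np.array([-0.6,  1.4,  0.3, -1.9,  0.7, -1.1,  1.8,  0.2])

# Radial (circular) error of each check point.
radial_error = np.sqrt(dx**2 + dy**2)

# CE90: the radial error not exceeded by 90 percent of the check points.
ce90 = np.percentile(radial_error, 90)
print(f"CE90 ≈ {ce90:.2f} m")
```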
Equally important but less frequently discussed is the spectral accuracy of the sensor: do the values the sensor records match the values it is expected to record? Atmospheric interference, sensor defects, system noise, and variations in scan angle can all cause a sensor to record values for an object that differ from the object’s true spectral reflectance or emission. There are two important questions to ask regarding a sensor’s spectral accuracy. First, is the sensor regularly tested to determine whether it records data precisely and accurately? Second, does the sensor operator either calibrate the imagery to correct for sensor errors or provide calibration statistics and algorithms so that the user can correct for those errors?
For example, the spectral accuracy of Landsat data is continually tested using preflight, postlaunch onboard, and ground reference data. USGS also provides calibration parameter files containing the geometric and radiometric coefficients needed to correct raw Landsat image data. The calibration of Landsat imagery is considered the gold standard and is so good that other satellite image providers often calibrate their imagery against Landsat imagery rather than collecting their own ground reference data. Most, but not all, sensor operators include some metadata with their image data that can support calibration. Calibrating imagery is a common preprocessing step that is discussed in more detail in chapter 6.
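As an example of what such calibration coefficients are used for, the sketch below applies a published Landsat 8 gain and offset (the REFLECTANCE_MULT_BAND_x and REFLECTANCE_ADD_BAND_x values from a scene’s MTL metadata file) to convert digital numbers to top-of-atmosphere reflectance. The coefficient values, digital numbers, and sun elevation shown here are illustrative; the real values must be read from the metadata delivered with each scene.

```python
import numpy as np

def dn_to_toa_reflectance(dn, mult, add, sun_elevation_deg):
    """Convert quantized digital numbers to top-of-atmosphere reflectance using the
    gain (REFLECTANCE_MULT_BAND_x) and offset (REFLECTANCE_ADD_BAND_x) from the
    scene's MTL metadata, corrected for sun elevation."""
    rho = mult * np.asarray(dn, dtype=float) + add        # planetary reflectance, uncorrected
    return rho / np.sin(np.radians(sun_elevation_deg))    # correct for solar angle

# Illustrative values only: real coefficients come from each scene's MTL file.
dn = np.array([[7800, 8120],
               [9033, 21500]])
reflectance = dn_to_toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elevation_deg=55.0)
print(reflectance)
```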
Will the imagery be shared with other organizations?
The need to share imagery with others will affect the type of imagery license chosen. High- and very-high-spatial-resolution satellite imagery often carries some sort of license restriction, although licenses that allow some sharing (e.g., within an agency, or across federal agencies) are common. If the imagery is to be shared with multiple users both inside and outside of your organization, it might be best to focus on nonlicensed, unrestricted imagery available in the public domain, or to acquire new imagery from an organization that does not place license restrictions on its products. For example, a great deal of the high-spatial-resolution imagery captured over the United States is in the public domain and is not license restricted. This includes NAIP 1-m multispectral imagery and even higher-spatial-resolution imagery funded and collected for many local and regional government agencies. Moderate-resolution imagery is also available in the public domain worldwide from either the Sentinel (10 to 20 m) (https://sentinel.esa.int/web/sentinel/sentinel-data-access) or Landsat (30 m) (http://landsat.usgs.gov/Landsat_Search_and_Download.php) programs, including Landsat’s archive of more than 40 years. NOAA weather data and NASA earth science data are also freely accessible and shareable. Aside from military systems or imagery captured by UASs, access to high- or very-high-resolution imagery outside of the United States is usually available only from commercial satellite companies that restrict sharing of their imagery through licensing.
Is the imagery accessible?