Understanding and Evaluating Satellite Remote Sensing Technology in Agriculture

The use of satellites as a remote sensing tool for land observation is nothing new – NASA launched Landsat 1 in 1972. NASA now operates Landsat 8, and more than 300 other Earth observation satellites are in orbit, almost half of which provide optical imaging.

In order to best evaluate the different offerings, it is important to understand the variability in remote sensing processes. In the end, different insights can be produced with access to the same data source.

So, how does remote sensing work?

Optical satellites acquire images from solar radiation reflected by the Earth. Because satellites utilize different sensors and technology, the information available from each satellite varies.

While using satellite remote sensing data doesn’t require an in-depth understanding of how the technology works, having a working knowledge of a few key concepts helps you better evaluate what is feasible with the technology and how it can meet your agribusiness needs.

Pixels and Resolution

Understanding the multiple forms of resolution is important when evaluating satellite imagery. It is common to oversimplify imagery by referring to it as high, medium or low resolution. This creates confusion and misconceptions rooted in general photography, where resolution is simply measured by the number of pixels in an image, whereas in satellite imagery the term usually refers to the spatial resolution.

Distinguishing between pixel size and spatial resolution is important, as they are often confused, and not interchangeable.

As with any digital image, satellite imagery is made up of pixels. The image is acquired by sampling the light reflected by the surface and recording the measurements in a matrix of pixels, which forms a grid. The ground sampling distance (GSD) is the distance on the ground between adjacent pixels of this predefined grid.

When talking about resolution, satellite data providers are often referring to the spatial resolution, which is the size of the smallest object that can be identified on the ground. This varies with the position of the sensor relative to its target, atmospheric diffraction and other factors, meaning spatial resolution – and therefore the quality of the information provided – varies from the image center to the swath edge.

For example, MODIS images have a GSD of 250 meters. Each pixel therefore represents an area of 250 meters x 250 meters, or 6.25 hectares. The spatial resolution of MODIS at NADIR (the point directly below the satellite) equals 250 meters, but off-NADIR it can exceed 500 meters even though the data is still recorded on the 250 meters x 250 meters pixel grid.
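
As a quick sanity check on numbers like these, the area covered by one pixel follows directly from the GSD. A minimal sketch in Python (the 250-meter figure is the MODIS example above):

```python
def pixel_area_hectares(gsd_m: float) -> float:
    """Ground area covered by a single square pixel, in hectares."""
    return (gsd_m * gsd_m) / 10_000  # 1 hectare = 10,000 square meters

print(pixel_area_hectares(250))  # MODIS example above: 6.25 ha per pixel
```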

The same is true for higher resolution satellites – and even more so when a satellite points its sensor at a wide angle, off-NADIR. For example, RapidEye satellites (Planet) can point up to 20° off-NADIR to observe target areas farther from their direct path, which degrades their nominal spatial resolution of 6 m.

In this illustration, the ground sampling distance is 250 meters and, as such, the spatial resolution at NADIR is also 250 meters. However, for pixels captured away from NADIR, the spatial resolution of the data decreases.

While spatial resolution is important, temporal resolution is just as critical and often misunderstood.

An Earth observation satellite provides the greatest remote sensing value through its ability to record a sequence of images over time. The amount of time between image acquisitions, or revisit time, represents the temporal resolution. The closer the intervals, the higher the temporal resolution; and the higher the temporal resolution, the greater the monitoring capabilities.

However, the revisit time is not the only component of temporal resolution, because it only reflects the potential imagery acquisitions. It is important that the data actually gets acquired and downloaded to the ground segment – this is the difference between systematic acquisition and tasking. Many providers claim to offer daily revisit even though they are not acquiring data systematically. Instead, they task their acquisitions by:

1. Choosing where to acquire imagery by pointing the sensor at a target; or

2. Limiting the number of pixels acquired to their download capacity.

This makes it impossible to get daily information everywhere and serve all customers equally.

In agriculture, true monitoring requires daily image acquisition, or high temporal resolution. This helps ensure the delivery of sufficient cloud-free images. If images have low temporal resolution, such as every five days, there is a greater risk of missing information, as cloud coverage would create a ten-day information gap (assuming it is cloudy 50% of the time). Alternatively, intrafield analysis can be done with less frequent reviews – such as weekly – or low temporal resolution.
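
A simple way to see why revisit frequency matters under cloud cover is to compute the average wait between usable images. This sketch assumes each pass is independently cloud-free with a fixed probability, which is a simplification of real weather patterns:

```python
def expected_gap_days(revisit_days: float, clear_probability: float) -> float:
    """Average wait between usable (cloud-free) images, assuming each pass is
    independently cloud-free with the given probability (a simplification)."""
    return revisit_days / clear_probability

print(expected_gap_days(5, 0.5))  # 10.0 days, matching the example above
print(expected_gap_days(1, 0.5))  # 2.0 days with daily acquisition
```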

Tradeoffs are often required between spatial and temporal resolution, since both are constrained by the volume of pixels that can be downloaded to the ground segment. As users of remote sensing data, understanding how the various resolutions affect the information being provided is important when comparing offerings against your data needs.

Considering the various optical earth-observation satellite constellations commercially available today, none are able to deliver true global monitoring, which means the use of several constellations is required in order to deliver a consistent service to all customers.

Field Monitoring versus Intrafield Analysis

The difference between field monitoring and intrafield analysis is similar to the difference between watching a movie and viewing snapshots. You cannot tell a complete story with the snapshots, but they allow you to review details from that moment in time more carefully. Knowing what you are trying to achieve with the data will help you better identify your needs. Variable rate application of most inputs can be achieved with snapshots of a field. Understanding how a field is performing compared to other fields or seasons requires true field monitoring.

Multispectral vs. Hyperspectral

The difference between multispectral and hyperspectral sensors is the number and width of their spectral bands. Hyperspectral sensors have hundreds of narrow bands, which capture more detail about plant characteristics yet require significant calibration effort. This limits the capacity to detect phenomena, and to detect them repeatably. Few satellites use these sensors, and most have a narrow swath, making the frequency and capacity of image acquisition very limited. The same is true for drones that utilize hyperspectral sensors.

Multispectral sensors have three to 20 broader bands. If you think of the two in terms of charts, hyperspectral would produce a line chart whereas multispectral would produce a bar chart. Multispectral sensors offer very valuable information for monitoring crop health. Due to the volume of information available from the numerous satellites carrying multispectral sensors, the frequency, swath and capacity of image acquisition are much more abundant and reliable.

The Spectrum

Radiometers on satellites measure wavelengths of electromagnetic radiation being reflected by targets on Earth – known as the spectral response – based on the electromagnetic spectrum. Different objects have different spectral signatures, which provide information on what is being observed.

Each satellite has its own set of spectral bands corresponding to the wavelength domains (i.e. blue, green, red) with which it measures the spectral response. It is important to note that wavelength domains are ranges: each band is precisely defined for a given satellite but varies from satellite to satellite.

For example, the red channel for Landsat 8 covers wavelengths between 640 and 670 nanometers, while the red channel for Sentinel-2A covers wavelengths between 650 and 679 nanometers. The image below illustrates how two satellites with similar GSD capture data differently for the same area.

Due to this, satellites do not see the same exact colors for a given surface. This means data being collected from multiple sources needs to be carefully cross-calibrated to maintain consistent monitoring and analysis.
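
To make that concrete, the two red-band ranges quoted above can be written side by side. The linear adjustment at the end is one common way cross-calibration is expressed, but the gain and offset shown are placeholders rather than real published coefficients:

```python
# Red band ranges quoted above, in nanometers.
RED_BAND_NM = {
    "Landsat-8": (640, 670),
    "Sentinel-2A": (650, 679),
}

def overlap_nm(band_a, band_b):
    """Width of the spectral overlap between two band definitions, in nanometers."""
    return max(0, min(band_a[1], band_b[1]) - max(band_a[0], band_b[0]))

print(overlap_nm(RED_BAND_NM["Landsat-8"], RED_BAND_NM["Sentinel-2A"]))  # 20 nm

# Hypothetical linear cross-calibration of one sensor's red reflectance toward
# another's: the gain and offset below are placeholders, not published values;
# real coefficients are derived empirically from matched observations.
def cross_calibrate(reflectance, gain=0.98, offset=0.004):
    return gain * reflectance + offset
```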

From Wavelengths to Crop Health

The wavelengths and intensity of the spectral response provide the information needed for vegetation indices calculations. Vegetation Indices (VI) provide an indication for the relative density and health of vegetation for each pixel in a satellite image.

Over the past 20 years, a wide array of VIs has been described in scientific publications on remote sensing. The primary index for crop monitoring is the Normalized Difference Vegetation Index (NDVI), published in 1979. Due to its robustness and the small number of input bands required, this index can be calculated from data acquired by essentially all satellites, enabling cross-calibration and comparisons across satellites and locations.

NDVI is sensitive to both biomass and chlorophyll activity and produces values ranging from -1 to +1. Healthy vegetation reflects far more radiation in the near-infrared (NIR) wavelengths than in the visible wavelengths, so the closer a value is to +1, the healthier the vegetation. If there is little difference between the two, the vegetation may be stressed or dead – or the pixel may cover bare soil.
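
The article does not spell out the formula, so for reference: NDVI is the normalized difference of the NIR and red reflectances, which is what bounds it between -1 and +1. A minimal NumPy sketch (the sample reflectance values are illustrative):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Inputs are reflectance arrays for the same pixels; output is bounded in [-1, +1].
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)  # small epsilon avoids division by zero

# Healthy vegetation: high NIR, low red reflectance -> NDVI close to +1
print(ndvi(np.array([0.45]), np.array([0.05])))  # ~[0.8]
# Bare soil: similar NIR and red reflectance -> NDVI close to 0
print(ndvi(np.array([0.25]), np.array([0.20])))  # ~[0.11]
```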

Plants use the energy in sunlight to grow through photosynthesis, which is performed by chlorophyll. Red light is absorbed mainly by chlorophyll, whereas blue light is also absorbed by other leaf components such as carotenoids (making blue less correlated with chlorophyll content). So, the more red light that is being absorbed, the more chlorophyll is active in the leaves.

Plant cells, which are full of water, act as a mirror to infrared light. When osmotic pressure is high, most of the infrared is reflected by the plant. Therefore, the more infrared that is reflected by the plant, the more biomass is in good health.

Another index, known as the Enhanced Vegetation Index (EVI), was optimized to improve variability assessment in high-biomass regions, making it more responsive to canopy structure variations.

In addition to the data required for NDVI calculation, satellites also capture shortwave infrared (SWIR) data, which provides information on vegetation water content. This data is used in calculating the Normalized Difference Water Index (NDWI) and helps monitor changes in the water content of plants.
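
For completeness, both indices mentioned here can be written in the same normalized-difference style as NDVI. The EVI coefficients below are the commonly quoted defaults, and the NDWI shown is the NIR/SWIR formulation used for vegetation water content (sample values are illustrative):

```python
def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced Vegetation Index, using the commonly quoted default coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

def ndwi(nir, swir):
    """Normalized Difference Water Index (NIR/SWIR form), tracking vegetation water content."""
    return (nir - swir) / (nir + swir)

print(evi(nir=0.45, red=0.05, blue=0.03))  # dense, healthy canopy -> high EVI (~0.66)
print(ndwi(nir=0.45, swir=0.25))           # well-watered vegetation -> positive NDWI (~0.29)
```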

In the end, each pixel captured by the satellite provides a wealth of information that can be used in a variety of calculations to provide multiple data points. The more cloud-free pixels you are able to capture, the more information you have available and the better the resulting data will be.

The Power of Processing for Remote Sensing Data

When it comes to agricultural applications, satellite imagery is not suitable in its raw form. The information captured by the satellite sensors must be processed before it can be used for analyses such as NDVI.

There are several factors affecting imagery captured by a single satellite that must be corrected, including (but not limited to) the items below; a simplified example of one such correction is sketched after the list:

  • The atmosphere
  • Cloud cover
  • Shadows due to sun blockage
  • Varying spatial resolutions based on the distance from the NADIR
  • Varying angles of sunlight based on the curvature of the Earth
  • Viewing angle of the sensor
  • Topographical distortions
  • Environmental effects
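
As a very rough illustration of just one item on this list (cloud cover), the sketch below discards cloudy pixels before computing an index. The cloud mask itself, and the atmospheric and geometric corrections that precede it, are the hard part and are not shown:

```python
import numpy as np

def masked_ndvi(nir, red, cloud_mask):
    """Very simplified illustration of one correction step: drop cloudy pixels,
    then compute NDVI on the rest. Real processing chains also apply atmospheric,
    radiometric and geometric corrections before any index is computed."""
    result = np.full(nir.shape, np.nan)  # NaN marks unusable pixels
    valid = ~cloud_mask                  # True where the pixel is cloud-free
    result[valid] = (nir[valid] - red[valid]) / (nir[valid] + red[valid] + 1e-10)
    return result

nir = np.array([0.45, 0.40, 0.30])
red = np.array([0.05, 0.30, 0.06])
clouds = np.array([False, True, False])  # second pixel is cloudy
print(masked_ndvi(nir, red, clouds))     # [0.8, nan, ~0.67]
```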

Free Satellite Images

Satellite imagery is readily available today through the internet – so why pay anyone for the service?  While some satellite imagery can be accessed free of charge, the data is raw. The ability to properly process the data is where providers add value.

For example, when a RapidEye satellite points at an object observed 20° off-NADIR, the thickness of the atmosphere the light travels through changes, which affects how the reflected light passes through it and has an impact on the signal measured. A similar effect can be noticed when looking at a sunset – the sun appears larger and redder at sunset than it does at noon.

This is important because reflectance measurements need to be corrected for acquisition conditions in order to allow comparison within a single field over time, or between multiple fields in the vicinity, and to properly detect changes in crop health.
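
To put a rough number on the 20° off-NADIR example above, a plane-parallel (secant) approximation gives the relative increase in atmospheric path length with viewing angle; it ignores Earth curvature and refraction, so it is only indicative:

```python
import math

def relative_atmospheric_path(view_angle_deg: float) -> float:
    """Relative atmospheric path length compared to NADIR, using a plane-parallel
    (secant) approximation that ignores Earth curvature and refraction."""
    return 1.0 / math.cos(math.radians(view_angle_deg))

print(relative_atmospheric_path(0))   # 1.0 at NADIR
print(relative_atmospheric_path(20))  # ~1.06, i.e. about 6% more atmosphere to traverse
```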

Most of these distortions can be removed automatically, given the right algorithms and metadata, but others can be a rather labor-intensive process. This is what makes the difference between a pretty, colorful map and actual remote sensing data that users can capitalize on.

More Satellites = More Pixels = Better Data

Each satellite provides access to a unique set of information through the pixels it collects. The more satellites used, the more pixels of information available. More information available means more inputs for the data being produced, resulting in better data.

In order to obtain this better data, you need to be able to cross-calibrate the information collected from different satellites and process it correctly – otherwise you will be comparing apples to oranges. While you might be evaluating hundreds of thousands of hectares, each pixel impacts the data output you receive.

Conclusion

As more companies enter the agriculture market as satellite imagery providers, understanding the basic functionality of the remote sensing technology becomes increasingly important. We want to empower agribusinesses to best evaluate the different services available and assess them against their individual business needs.

Determining the need for monitoring versus analysis support – or both – is an important first step as this will help in evaluating these key takeaways:

  • Pixel size and time of revisit are not an adequate representation of resolutions in satellite imagery – simple descriptions such as high, medium and low resolution or daily revisit are too vague. Ask questions to understand the spatial and temporal resolution being offered.
  • The more satellites a provider uses, the more pixels they have available – and this translates to better data. Ask questions to understand how many satellites are available and how they are being used by the provider to best assess the quality of the data they will be able to provide.
  • Processing capabilities are major differentiating factors – and experience counts. Ask questions to understand how a provider is processing images in order to supply you with the best possible results.

In making more informed decisions, you will be able to better support your long-term business goals with remote sensing technology.
