Thursday, March 27, 2014

Remote Sensing Lab 4

Goal and Background:
The goal of this lab exercise was to gain a better understanding of 5 key methods essential in image analysis in the field of remote sensing. These include:

1. Assigning and isolating an area of interest (AOI) using image subsetting to focus on a more specific region of a larger aerial image.
2. Making adjustments to the spatial resolution of aerial images to improve the analyst's ability to interpret features within the image.
3. Utilizing radiometric enhancement techniques to improve the spectral and radiometric resolution of aerial photographs.
4. Using Google Earth as a source of ancillary information by pairing it with the satellite image being analyzed.
5. Introducing methods of resampling satellite images.

By the end of the lab, the analyst will have gained skills in enhancing satellite images to better collect information and to improve visual interpretation while processing the image.

Methods:
ERDAS Imagine 2013 was used to achieve each goal in this lab exercise. Each method taught in the lab is broken down into specific sections, which are presented below.

Isolating an AOI:
In image analysis, the original satellite image you start with often covers a larger area than you wish to study. If this is the case, it is very beneficial to use image subsetting to isolate a more specific area of interest (AOI) so that you analyze only the area pertinent to your project. For example, we were given a satellite image covering a large area that includes various counties (Fig. 1). If we wish to focus our study on particular counties, image subsetting is the technique used to achieve this goal.


(Fig. 1) This image shows the AOI of a much larger original satellite image. Using image subsetting, the analyst is able to isolate specific counties and produce an image more focused on the area they are studying.
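For readers who want to try the same subsetting step outside ERDAS Imagine, a minimal Python sketch is shown below. It assumes the rasterio library is available, and the file names and pixel window are hypothetical placeholders rather than the actual lab data.

    import rasterio
    from rasterio.windows import Window

    with rasterio.open("original_scene.tif") as src:
        # Define the AOI as a pixel window: (column offset, row offset, width, height).
        aoi = Window(1000, 1500, 2000, 2000)          # hypothetical AOI location and size
        subset = src.read(window=aoi)                 # bands x rows x cols array
        profile = src.profile.copy()
        profile.update(width=aoi.width, height=aoi.height,
                       transform=src.window_transform(aoi))  # keep georeferencing

    with rasterio.open("aoi_subset.tif", "w", **profile) as dst:
        dst.write(subset)

Updating the transform keeps the subset registered to the same coordinate system as the original scene, so the AOI can still be overlaid with other data.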

Improving Spatial Resolution by Pan Sharpening:
The resolution merge technique was used to combine a panchromatic image with the reflective image in order to produce a pan-sharpened image with better spatial resolution. This image fusion uses the panchromatic image as the high-resolution input file and the reflective image as the multispectral input file. The multiplicative method was selected for this process, which applies a simple multiplicative algorithm to integrate the two raster images. The resampling technique used was nearest neighbor.
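As a rough illustration of the multiplicative idea (not the exact ERDAS resolution merge routine), the sketch below multiplies each multispectral band by the panchromatic band with NumPy; the arrays are assumed to be co-registered and already resampled to the pan pixel grid.

    import numpy as np

    def multiplicative_merge(ms, pan):
        """ms: (bands, rows, cols) multispectral array; pan: (rows, cols) panchromatic array."""
        ms = ms.astype(np.float64)
        pan = pan.astype(np.float64)
        fused = ms * pan[np.newaxis, :, :]            # simple multiplicative algorithm
        # Rescale each fused band back to the range of the original band for display.
        scale = ms.max(axis=(1, 2), keepdims=True) / fused.max(axis=(1, 2), keepdims=True)
        return fused * scale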

Radiometric Enhancement by Haze Reduction:
As can be seen in Fig. 2, the original image has haze concentrated mostly in the lower right-hand corner. This haze is important to remove because it obscures part of the image and makes visual interpretation more difficult. To remove the haze, use the Haze Reduction tool under the Raster section of tools in ERDAS Imagine 2013.



(Fig. 2) The image on the left represents the original reflective image and the image on the right represents the corrected image after haze was removed. 
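The ERDAS haze reduction tool is a built-in routine; as a simple stand-in for the general idea, the sketch below applies dark object subtraction, a related (but not identical) correction that treats the darkest value in each band as haze/path radiance and subtracts it out.

    import numpy as np

    def dark_object_subtraction(image):
        """image: (bands, rows, cols) array of digital numbers."""
        corrected = image.astype(np.int32)
        for b in range(corrected.shape[0]):
            dark_value = corrected[b].min()           # assumed haze contribution in band b
            corrected[b] -= dark_value
        return np.clip(corrected, 0, None).astype(image.dtype)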

Pairing of Google Earth and Image Viewer:
Using Google Earth as a source of ancillary information allows the image analyst to view the same satellite information from two sources. Once you match Google Earth to the view in your image viewer, you can pair the Google Earth image with the image in your viewer in ERDAS Imagine 2013 (Fig. 3). You can link and synchronize your reflective image and the data collected by Google Earth so that both display the same area at the same time. The benefits of this process include a more detailed, higher resolution satellite image from Google Earth (because its imagery is more recent) that is specifically synced to the area you are focusing on in your reflective image (Fig. 4). This serves as a type of selective image interpretation key.


(Fig. 3) This image displays the commands in ERDAS Imagine 2013 that allow the user to connect to Google Earth, match the GE (Google Earth) image to the view in ERDAS, and link and sync the GE image to the view.



(Fig. 4) The reflective image is displayed on the left in ERDAS Imagine 2013. Once the image was linked and synchronized with Google Earth, the image on the right was produced by Google Earth, displaying the same area as the ERDAS image.

Resampling of Satellite Images:
The process of adjusting the pixel size is known as resampling, and it can be done to increase or decrease the size of the pixels. This can be accomplished in ERDAS Imagine 2013 using the Resample Pixel Size tool in the Spatial group of the Raster tools. Through this tool, you can change the output image's pixel size using the nearest neighbor or bilinear interpolation method. The resulting changes in pixel size are displayed below in Fig. 5. By adjusting the pixel size, the image gains greater clarity, allowing the image analyst to better utilize the image (Fig. 6).


(Fig. 5) The far left and central metadata displays represent the nearest neighbor and bilinear interpolation methods of adjusting the pixel size. The metadata on the far right shows the information from the original image prior to the pixel size being adjusted via resampling.


(Fig. 6) The reflective image with the original pixel size is displayed on the left, and the image on the right shows the result of the bilinear interpolation method used to adjust the pixel size. As you can see, the image on the right, with its pixel size adjusted to 20 x 20 (compared to the original 30 x 30), is more detailed.
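For comparison with the ERDAS Resample Pixel Size tool, the sketch below resamples a hypothetical band from a 30 to a 20 pixel size with both nearest neighbor and bilinear interpolation, using SciPy; the band values and dimensions are placeholders.

    import numpy as np
    from scipy import ndimage

    # Hypothetical single band with 30 m pixels (values and size are placeholders).
    band_30m = np.random.randint(0, 256, size=(500, 500)).astype(np.float32)

    zoom_factor = 30.0 / 20.0                                 # 30 m pixels resampled to 20 m
    nearest = ndimage.zoom(band_30m, zoom_factor, order=0)    # nearest neighbor
    bilinear = ndimage.zoom(band_30m, zoom_factor, order=1)   # bilinear interpolation

    print(band_30m.shape, nearest.shape, bilinear.shape)      # (500, 500) -> (750, 750)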

Results:
The results of this lab exercise are shown in the images presented above. Through this process we are able to understand various image functions, including image subsetting using an AOI, improving spatial resolution by pan sharpening, radiometric enhancement by haze reduction, pairing with Google Earth, and resampling of satellite images.

Sources:
Data utilized in this lab exercise was provided by Dr. Cyril Wilson. Satellite images are from the Earth Resources Observation and Science Center, United States Geological Survey, and the shapefile used is from Price Data. Google Earth images are from GeoEye.