Wednesday, December 7, 2016
GIS4035 - Final Project
For our final project in Remote Sensing we were tasked with applying the concepts, data types, processes, and techniques we had learned over the semester in lecture and lab assignments. I chose to apply remote sensing data and image processing techniques to the suggested project topic: a land use/land cover (LULC) analysis of Lake Tahoe. The question I posed was whether a comparison of the provided 1992 and 1999 data would show a dramatic change in LULC, specifically a conversion of permeable to impermeable surfaces, during that period. A rapid increase in impermeable surfaces could potentially be a cause of the increase in fine sediment particles in the Lake Tahoe Basin.
Lake Tahoe is the largest alpine lake in North America. The Lake Tahoe region has undergone rapid urbanization over the last two to three decades, and as a result the lake is becoming increasingly eutrophic.
For my LULC comparison I used two provided images. The first, from the MRLC consortium, was an unsupervised classification of circa-1990s Landsat Thematic Mapper satellite data (NLCD 1992). The second image was provided by the USGS. Metadata provided for the final project identified the image as a Landsat 7 ETM+ image from April 1999. However, after performing a multispectral analysis of the image I noticed some anomalies. Displaying the image in RGB 642 and 653 band combinations revealed an area north of the lake that was on fire. The provided image had six layers, but layer 6 did not appear to be a true thermal band 6 but rather band 7 from the ETM+ sensor. If layer 6 were a thermal band, the fire feature would not have appeared in the multispectral analysis, and areas of snow and ice on the southwest side of the lake would appear darker than other objects emitting more energy when viewing that layer in greyscale. Next, I compared the provided image with the original image from the USGS GloVis visualization viewer. I could not find a Landsat 7 ETM+ image for April 1999. I then discovered through a Google search that the satellite had been launched in April 1999, so the earliest available images began in June 1999. However, the LandsatLook JPEG image from June 1999 did not visually match the provided image for the project. I then compared the July 1999 LandsatLook image, and it visually appeared to match our provided image. This sidestep in analysis made me realize how important accurate metadata is to an end user working with images downloaded from different data sources. Of course, to positively confirm the date of the provided image I would need to download the original and compare brightness values between the two.
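Since the band-combination check above hinged on composing different layers into RGB displays, here is a minimal sketch of how such a composite could be built outside ERDAS. The filename is hypothetical, and rasterio and matplotlib are assumed to be available:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def stretch(band, low=2, high=98):
    """Percentile-stretch a band to the 0-1 range for display."""
    lo, hi = np.percentile(band, (low, high))
    return np.clip((band - lo) / (hi - lo), 0, 1)

# "tahoe_1999.tif" is a hypothetical filename; layer indices are 1-based
# in rasterio. Layers 6, 4, and 2 are shown as R, G, B (a 642-style combo).
with rasterio.open("tahoe_1999.tif") as src:
    rgb = np.dstack([stretch(src.read(i)) for i in (6, 4, 2)])

plt.imshow(rgb)
plt.title("RGB 642 composite")
plt.show()
```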
I performed an NDVI on the 1999 image to aid in classification. Because I would be directly comparing the two images, I chose to use the same classification legend provided for the NLCD image: 21 Level II classes. A majority of these classes were vegetative features, so having the NDVI would enable me to more easily identify those classes when classifying the 1999 image. I first performed an unsupervised classification using 50 classes, which I then reclassified using the NLCD 1992 LULC legend. I recoded an additional time to ensure I had at least one wetland class to distinguish from the open water class. In an unsupervised classification, groupings of pixels are created based on their brightness values. In the Lake Tahoe image many of the features appear mingled; low- and high-intensity residential classes were interspersed with forest, transportation, and water classes. I then performed a supervised classification to see if there would be a difference between the classification methods. The supervised classification used signatures I had created from seeds and polygons within the image.
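For reference, the NDVI mentioned above is computed as (NIR - Red) / (NIR + Red). A minimal sketch, assuming the Landsat TM/ETM+ convention of red in layer 3 and near infrared in layer 4, and the same hypothetical filename as above:

```python
import numpy as np
import rasterio

# "tahoe_1999.tif" is a hypothetical filename; layer 3 = red and
# layer 4 = near infrared follow the Landsat TM/ETM+ convention.
with rasterio.open("tahoe_1999.tif") as src:
    red = src.read(3).astype("float32")
    nir = src.read(4).astype("float32")

# NDVI ranges from -1 to 1: dense vegetation trends toward 1,
# bare surfaces sit near 0, and water trends negative.
with np.errstate(divide="ignore", invalid="ignore"):
    ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
```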
When comparing the two classified 1999 images with the 1992 unsupervised image, an increase in impermeable surfaces was seen, and likewise a decrease in permeable surfaces over the decade in both classified 1999 images. It was difficult to make precise judgements based on the comparison of the two images. The total coverage area differed between the 1992 and 1999 images even though the 1999 image was clipped to the 1992 image. I think the difference in pixel size between the two images (28.5 m vs. 30 m) may have caused this difference in coverage area. The 1992 image had 17 classes while my unsupervised classified 1999 image had only 13 of the same classes. Because of this four-class difference, I may have misclassified or lumped the missing classes into other classes. My supervised classification of the 1999 image clearly exhibits signs of spectral confusion between classes. The region southwest of Lake Tahoe is known as the Desolation Wilderness Area, but in my supervised classified image the Commercial/Industrial/Transportation class appears spectrally confused with the Bare Rock, Shrubland, and Grasslands classes. Creating the signatures for the supervised classification was difficult; as I previously mentioned, features were in close proximity to other classes, making it extremely difficult to get signatures that contained enough pixels. When using my signature file I also should have used an RGB 543 band combination as the setting for Approximate True Colors, as that combination may have made classes more distinguishable than the 742 combination I chose.
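A quick back-of-the-envelope check supports the pixel-size explanation: class area is pixel count times pixel size squared, so the same number of pixels covers a different ground area at 28.5 m than at 30 m. The counts below are made up purely for illustration:

```python
# Hypothetical pixel counts; only the pixel sizes come from the images.
count_1992, size_1992 = 1_000_000, 28.5   # metres per pixel side
count_1999, size_1999 = 1_000_000, 30.0

area_1992_km2 = count_1992 * size_1992**2 / 1e6
area_1999_km2 = count_1999 * size_1999**2 / 1e6
print(area_1992_km2, area_1999_km2)  # 812.25 vs. 900.0 km^2
```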
This project was challenging but enabled the individual course assignments from the semester to be used collectively to answer a question. This course has by far been the most difficult. The sheer volume of information about remote sensing devices and techniques at first seemed daunting, as I had no significant prior experience in remote sensing or photo interpretation. However, as each lab progressed I realized that I was learning and applying that knowledge to the various assignments. The course ended up being very rewarding.
Monday, November 7, 2016
GIS4035 - Lab 10 - Supervised Image Classification
Well, I finished off the kids' Halloween chocolate while working on this final lab assignment; new proof that eating chocolate can reduce daily stress. Module 10 was a continuation of last week's digital classification of images using ERDAS Imagine. We performed three exercises prior to creating our final deliverable, which was a supervised classification of land use consumption in Germantown, Maryland.
First we learned how to create or append signature files by drawing polygons around clusters of like pixels or by growing a signature from a "seed" using the Drawing tab/Growing Properties dialog. Adjusting the Spectral Euclidean Distance (SED) and/or changing the Neighborhood setting from 4-way to 8-way adjusts how a "seed" grows. Next, we learned how to evaluate the signatures: using histograms and mean plots we could view the bands where overlap occurred between two or more signatures. Determining which bands showed the greatest difference between signatures was important because those bands were chosen to Set Signature Colors in the Signature Editor. Once colors were set, we could perform a supervised classification of our image, using our signatures to "train" the classification. Under the Maximum Likelihood Parametric Rule, pixels are classed based on the probability that each pixel belongs to a particular signature/class. A distance output file was also created as another analysis tool: areas that appear very bright in the distance file are spectrally far from their assigned signature, which may indicate misclassification. Additional signatures could be added, or existing signatures modified or replaced to include a larger pixel count. Once the final supervised image was obtained, the classes could be recoded to merge "like" classes together. The recoded image requires class names to be recreated; area calculations can then be performed.
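To make the Maximum Likelihood Parametric Rule concrete, here is a minimal sketch of the underlying idea, assuming each class signature is summarized by the mean vector and covariance matrix of its training pixels. All names and data are hypothetical, and scipy is assumed to be available; each pixel goes to the class whose Gaussian model gives it the highest log-likelihood:

```python
import numpy as np
from scipy.stats import multivariate_normal

def max_likelihood_classify(pixels, signatures):
    """pixels: (n, bands) array; signatures: list of (mean, cov) pairs.
    Returns the index of the most probable class for each pixel."""
    log_likes = np.column_stack([
        multivariate_normal(mean=m, cov=c, allow_singular=True).logpdf(pixels)
        for m, c in signatures
    ])
    return np.argmax(log_likes, axis=1)

# Hypothetical two-class demo with 3-band pixels.
rng = np.random.default_rng(0)
water = (np.array([20.0, 15.0, 10.0]), np.eye(3) * 4.0)
forest = (np.array([60.0, 80.0, 40.0]), np.eye(3) * 9.0)
pixels = rng.normal([20.0, 15.0, 10.0], 2.0, size=(5, 3))
print(max_likelihood_classify(pixels, [water, forest]))  # expect class 0 (water)
```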
For our final deliverable we had to create a signature file using a new AOI layer and the Inquire button to either draw or "seed" polygons around AOIs. I primarily used the drawing tool but a few times used the seed method. I created three water body signatures and two road signatures; using an 8-way neighborhood setting and SED values of 7 and 9, respectively, I was able to create my road signatures. After viewing and comparing histograms and mean plots for all signatures, I determined that bands 4, 5, and 3 had the greatest separation between the signatures, so I used this R4, G5, B3 band combination to set the signature colors. The first supervised classification I performed used the Maximum Likelihood Parametric Rule to determine which pixels were assigned to each specific class. The classified image appeared fine in the viewer, but the distance image showed several bright areas indicating some pixels were misclassified. I went back to my signature file and noticed that some signatures had pixel counts of 10 or less. I modified these polygons, replaced them in the Signature Editor, performed the supervised classification again, and then recoded the image.

I noticed my colors did not appear correctly in my recoded image. I went back again to my signature file and noticed my Deciduous Forest signature was out of order: the Order and Value column values did not match in the Signature Editor. I wasn't sure how I had messed up the Deciduous Forest signature, but I edited the Order values to match the Value column and then sorted the signatures based on their Order value. This made things appear as they should, except my Class # was still incorrect for Deciduous Forest (Class 8 instead of 5), and I could not figure out how to fix it. I recoded the image but again noticed my colors did not match the 453 combination of the original supervised image. I recoded the image again, but this time in the Setup Recode dialog I assigned New Values based on the Old Values of each class category. Instead of assigning new values of 1-8 as I had previously done, the new values were 3-Urban, 4-Grasses, 5-Deciduous Forest, 6-Mixed Forest, 7-Fallow, 10-Agriculture, 13-Water, and 16-Roads. This recode setup did the trick, and my image appeared with the classes distinguishable in 453 band combination colors. Once I added the image to ArcMap, I chose to change the symbology of each class to more clearly identify the type of land use/land cover. This was a challenging lab, as it was important to know the purpose of each step and how it affected the image output, as well as to stay on top of where files were being saved. This lab was most useful in learning about and understanding the Signature Editor.
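The recode step that finally worked amounts to a value lookup table. A sketch in numpy terms, with a made-up mapping that merges several "like" clusters, not the lab's actual class values:

```python
import numpy as np

# Hypothetical classed image with values 1-16.
rng = np.random.default_rng(0)
classified = rng.integers(1, 17, size=(400, 400))

# Made-up recode table merging several "like" clusters into one class,
# keyed by the existing (old) class values.
recode = {1: 13, 2: 13,   # two water clusters  -> 13 Water
          3: 3, 4: 3,     # two urban clusters  -> 3 Urban
          5: 5, 6: 5}     # two forest clusters -> 5 Deciduous Forest

lut = np.arange(classified.max() + 1)  # identity for untouched values
for old, new in recode.items():
    lut[old] = new
recoded = lut[classified]
```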
Supervised Classification of Germantown, Maryland using ERDAS Imagine
Tuesday, November 1, 2016
GIS4035 - Lab 9 - Unsupervised Classification
In Module 9 we learned about using ERDAS Imagine and ArcMap to perform unsupervised automated classifications. In previous labs (Modules 3 and 4) we had learned about manual land use/land cover classification. Exercise 1 had us use the ArcMap Iso Cluster and Maximum Likelihood Classification tools, which are part of the Spatial Analyst toolset.
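As a rough stand-in for what the Iso Cluster step does, here is a minimal unsupervised-classification sketch using k-means (scikit-learn assumed available; the image array is synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 4-band image standing in for the lab data.
rng = np.random.default_rng(0)
image = rng.random((200, 200, 4))            # rows x cols x bands
pixels = image.reshape(-1, image.shape[-1])  # one row per pixel

# Cluster pixels by spectral similarity, then reshape the labels back
# into an image; the 50 clusters would then be recoded into classes.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(pixels)
classed = kmeans.labels_.reshape(image.shape[:2])
```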
Exercise 2 required us to use ERDAS Imagine to classify a high-resolution image of UWF's campus. The goal was to reclassify all fifty categories of the original image into five feature classes. Using the ERDAS Imagine attribute table, we zoomed into the image to view pixels of several features that were clearly visible. Next, we changed the color of each feature in the attribute table to one of the four specified categories: Trees/Dark Green, Grass/Green, Buildings/Grey, Shadows/Black. Using the Swipe/Toggle/Blend tools under the Home tab/View group we could compare our classification category assignments to the original image. Changing the color of one feature to make it stand out helped tremendously in assigning class categories. Some features were difficult to classify because their pixels were split between impermeable and permeable objects. For some pixels it was difficult to tell whether they represented a roadway, barren land, or grass; the assigned pixel color might indicate any of those items. Shadows on building rooftops were really part of the roof, but the shadow created a darker pixel value than the other rooftop pixels. Creating a "mixed" category class was needed for those pixels that were split among the impermeable and permeable classes of buildings/roads, grass, trees, and shadows.
It was necessary to "recode" the image using the Raster tab/Thematic button and selecting Recode; this is different from the Thematic tab/Recode button. The Recode dialog box enables you to sort columns in alphabetical order and create a New Value for each feature row. After the recode we had to go back, re-add the Class Names column, and add an Area column.
Lastly, we calculated the total surface area of the image by adding the individual classification area values. We had to determine the percentage of land surface that was permeable and impermeable. In dividing the Shadow and Mixed categories between permeable and impermeable, I chose to split their acreage 50/50. Some building shadows, for example, were fairly large in comparison to tree shadows; however, there were more trees with shadows than buildings with shadows. It was hard to determine the exact split, so I took the conservative approach and split it evenly. I used the same assumption for the Mixed class.
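The permeable/impermeable tally with this 50/50 split can be expressed in a few lines; the acreages below are placeholders, not the lab's actual figures:

```python
# Placeholder class areas (acres); only the 50/50 split rule is from the lab.
areas = {"Buildings": 120.0, "Roads": 80.0, "Grass": 200.0,
         "Trees": 300.0, "Shadows": 40.0, "Mixed": 60.0}

impermeable = areas["Buildings"] + areas["Roads"] \
    + 0.5 * (areas["Shadows"] + areas["Mixed"])
permeable = areas["Grass"] + areas["Trees"] \
    + 0.5 * (areas["Shadows"] + areas["Mixed"])

total = sum(areas.values())
print(f"impermeable: {impermeable / total:.1%}, permeable: {permeable / total:.1%}")
```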
Unsupervised Classification of the University of West Florida Campus
Tuesday, October 25, 2016
GIS4035 - Lab 8 - Thermal & Multispectral Analysis
For the deliverable assignment I scanned both composite images from the lab to look for unique areas of interest. Ultimately, I chose to use the composite image of Northwest Florida as I am more familiar with that area. I opened two views in ERDAS with a greyscale band 6 image and a multispectral image of the same location. I adjusted the breakpoints of the greyscale histogram using the Discrete DRA button. This removed the "whitewashed" appearance of the image and made features appear more distinct by providing more contrast. I chose to change the band combination of the multispectral image to a false natural color, or thermal IR, composite of 647 as we used in Exercise 3 of the lab.
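The breakpoint adjustment has a simple numpy analog: clipping the histogram tails and rescaling removes the washed-out look, much like the Discrete DRA stretch (the array below is synthetic):

```python
import numpy as np

# Synthetic washed-out greyscale band standing in for the band 6 image.
rng = np.random.default_rng(0)
band6 = rng.integers(0, 256, size=(512, 512)).astype("float32")

# Move the breakpoints in from the histogram tails (2nd/98th percentile),
# then rescale: values outside the breakpoints saturate, everything in
# between gains contrast.
lo, hi = np.percentile(band6, (2, 98))
stretched = np.clip((band6 - lo) / (hi - lo), 0, 1) * 255
```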
After beginning with the setup described above, I began to scan the synced images. I honestly used size, shape, and texture to notice my area of interest. It appeared very bright in both images but had a distinct, unusual shape that was highlighted in the multispectral image. The overall feature was a square with a Star of David-like symbol at its center: a square with a second square rotated 45° on top. The details of this feature could not be seen in the greyscale image.

I began displaying the image in various multispectral band combinations to see which could display the feature best. Ultimately, the RGB 742 band combination made the feature stand out the most from its surroundings. I used the Portland State University webpage (PDX) mentioned in the lecture to help me understand how this combination could provide more detail about the feature. Urban areas in this band combination appear in different shades of magenta, and the symbol at the center of this feature had two distinct shades of magenta. The shape of the feature and the width of the sides of the square and diamond comprising the center symbol resembled an airport runway. I was confused by the shape of the symbol, as most runways appear as an X, a cross, or long parallel lines. The PDX website described differences in reflectance response depending on the terrain feature: if this feature were a runway, the square and the diamond might appear as different shades of magenta because one is concrete and the other asphalt. Another explanation for the color difference could be the age of the surface; maybe the square and diamond are the same material but one was resurfaced or is older than the other. To confirm my suspicion that this was some type of airport, I used Google Earth. The feature is a Naval Outlying Field used for helicopter training, which explained the difference in runway layout versus a traditional airport.
I feel less sure of my understanding of the relationship between bands and layers and how to know which band combination covers which wavelengths. I think some of my confusion is related to semantics and the interchangeable use of the words bands and layers.
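One habit that might help with that confusion: treat a file's layers as storage slots and the sensor's bands as physical wavelength ranges, and keep an explicit lookup between them. The sketch below uses the standard Landsat TM band definitions, but the exact layer order always depends on how a given file was stacked:

```python
# Storage layers vs. sensor bands, made explicit. This mapping reflects a
# common six-layer Landsat TM stack in which the thermal band is omitted
# or stored separately, so file layer 6 can hold sensor band 7.
LAYER_TO_BAND = {
    1: "TM Band 1 (blue,    0.45-0.52 um)",
    2: "TM Band 2 (green,   0.52-0.60 um)",
    3: "TM Band 3 (red,     0.63-0.69 um)",
    4: "TM Band 4 (near IR, 0.76-0.90 um)",
    5: "TM Band 5 (mid IR,  1.55-1.75 um)",
    6: "TM Band 7 (mid IR,  2.08-2.35 um)",  # thermal Band 6 kept separate
}
```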
Spencer Naval Outlying Field Used for Helicopter Training, Displayed Best in R-7, G-4, B-2 Band Combination