Thursday, January 29, 2026

Lab 3 Elevation and Hillshade

 

This week's lab had an emphasis on hillshading and visualizing 3D elements of maps. In the first part of this lab we learned how to visualize contour lines in a mountain range. We smoothed out the more jagged contour lines using the "Focal Statistics" geoprocessing tool to make the contours easier to interpret.
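Roughly, that step might look like the arcpy sketch below, assuming the smoothing is a focal mean applied to the elevation raster before contours are generated; the raster name, neighborhood size, and contour interval are placeholders, not the values from the lab.

import arcpy
from arcpy.sa import FocalStatistics, NbrRectangle, Contour

arcpy.CheckOutExtension("Spatial")

# Placeholder DEM name; 5x5 focal mean smooths the surface before contouring.
dem = arcpy.Raster("elevation_dem")
smoothed = FocalStatistics(dem, NbrRectangle(5, 5, "CELL"), "MEAN")
Contour(smoothed, "contours_smoothed", 50)   # 50 m interval is assumed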

In the second and third parts of this lab I learned what hillshading means and how to apply it. I learned the importance of azimuth and how different times of day can change the look of your hillshade.
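To illustrate the azimuth idea, here is a minimal arcpy sketch comparing two sun positions; the raster name and the specific azimuth and altitude values are illustrative, not the ones used in the lab.

import arcpy
from arcpy.sa import Hillshade

arcpy.CheckOutExtension("Spatial")

# Same DEM, two hypothetical sun positions: morning in the southeast vs the classic NW default.
morning = Hillshade("elevation_dem", 135, 45)
afternoon = Hillshade("elevation_dem", 315, 45)
morning.save("hillshade_am")
afternoon.save("hillshade_pm")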

In the final part of this lab I had a short look at local scenes in ArcGIS Pro. I used TIN data to visualize a 3D scene of a valley.

Friday, January 23, 2026

Lab 2 Coordinate Systems

This week the lab was focused on coordinate systems. I really enjoyed this lab because projections have always left me a little confused. I feel like this lab gave me much more clarity on choosing appropriate projections for study areas.

The map I created was an accurately projected map of Massachusetts.


I chose Massachusetts for my area of interest because this is where I grew up. When comparing the UTM zones and the state plane zones, the clear answer was state plane. The UTM zones cut Massachusetts in half, so no single zone would fit all of Massachusetts well, while state plane puts all of mainland Massachusetts in one zone. This allows all of Massachusetts to be projected accurately. I chose NAD 1983 (2011) State Plane Massachusetts FIPS 2001 (Meters).
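For reference, a small arcpy sketch of reprojecting a layer into that state plane zone. The feature class names are placeholders, and I am using WKID 26986 (the original NAD 1983 Massachusetts Mainland code) rather than guessing the WKID of the 2011 realization from memory.

import arcpy

# Placeholder input/output names; 26986 = NAD 1983 StatePlane Massachusetts Mainland FIPS 2001 (Meters).
# The NAD 1983 (2011) realization used in the lab has its own WKID.
target_sr = arcpy.SpatialReference(26986)
arcpy.management.Project("massachusetts_towns", "massachusetts_towns_sp", target_sr)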


Friday, January 16, 2026

Lab 1 Communicating GIS

This week I learned a lot about the 5 basic map design principles:

▪ Visual contrast

▪ Legibility

▪ Figure-ground organization

▪ Hierarchical organization

▪ Balance

I created 5 maps looking into different symbology and labeling features.
This map highlights different areas in San Francisco.
In the map I kept all of my text in Calibri so the map would remain consistent. For water I made the font italic and chose a dark blue to contrast against the lighter blue water. For parks I chose a dark green with a white halo to increase visibility against the green park symbology. When labeling the general areas I chose black with a white halo to improve visibility and contrast. For the smaller islands I used a line callout to avoid shrinking the font to the point of illegibility. For the neighborhoods I used a black font with a balloon callout, because these are very small areas and I wanted them to stand out and not look like the general areas. For Twin Peaks and the hills I created curved labels with a light gray font, since these are geographic landmarks.


Thursday, October 16, 2025

Module 6


Scale can have a large effect on vector data. When studying hydrographic data at varying scales it is clear that the water lines change depending on scale, with a larger scale giving much more detailed and accurate data. Resolution, just like scale, has a large effect on data: the finer the resolution, the smoother and more accurate the data will be.

Gerrymandering is the manipulation of district boundaries to favor one party over another. One way to measure it is the Polsby-Popper score, which quantifies the compactness of a district; the lower the score, the worse the 'offender'. Below is a screenshot of the worst offender in the data.
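For reference, the Polsby-Popper score is 4π times the district's area divided by the square of its perimeter, so a perfect circle scores 1 and sprawling shapes score near 0. A small plain-Python sketch with made-up numbers:

import math

def polsby_popper(area, perimeter):
    """Compactness score: 1.0 for a circle, approaching 0 for contorted shapes."""
    return 4 * math.pi * area / perimeter ** 2

# Hypothetical district: 2,500 sq km of area wrapped in 600 km of boundary.
print(round(polsby_popper(2500, 600), 3))   # ~0.087, i.e. very non-compact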

Tuesday, October 7, 2025

Module 5

 


Image of Spline interpolation method using tension. 
In this week's lab we used different methods of interpolation to map data from water quality samples.

The first method was non-spatial, which simply summarized the data from the actual sample points. The next method was Thiessen polygons: this method draws boundaries between neighboring points to create polygons, with each resulting polygon taking the value of its center point. The next method was IDW (inverse distance weighting), which estimates unknown values from nearby points; the further away a point is, the less weight it has in determining the unsampled value. The last method was the spline method, in which the data is used to fit smooth curves from sample to sample.
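As a rough illustration of the IDW idea (not the exact tool settings from the lab), here is a bare-bones inverse distance weighting estimate in plain Python; the power parameter and sample points are made up.

import math

def idw(samples, x, y, power=2):
    """Estimate a value at (x, y) from (sx, sy, value) samples, weighting by 1/distance**power."""
    num, den = 0.0, 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value            # exactly on a sample point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical water-quality samples: (x, y, measurement)
samples = [(0, 0, 4.0), (10, 0, 6.0), (0, 10, 5.0)]
print(round(idw(samples, 3, 3), 2))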

This week's lab was pretty straightforward, and I now feel like I have a good understanding of interpolation methods.

Wednesday, September 24, 2025

Module 4

The purpose of this week's lab was to explore TIN and DEM datasets. I had to create 3D visualizations of elevation models. I learned the purpose of TIN data and how it can be used as an elevation source for creating these 3D models. Using different symbologies and the Reclassify tool allowed me to make the models more presentable. In the image provided I used a DEM to develop a Ski Run Suitability Map.
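For context, here is a minimal arcpy sketch of the kind of slope-based reclassification that could feed a ski-run suitability surface; the slope breaks, class values, and layer names are assumptions, not the criteria from the lab.

import arcpy
from arcpy.sa import Slope, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Hypothetical breaks: gentle slopes rank low for ski runs, moderate-to-steep slopes rank higher.
slope_deg = Slope("ski_area_dem", "DEGREE")
suitability = Reclassify(slope_deg, "VALUE",
                         RemapRange([[0, 10, 1], [10, 20, 3], [20, 35, 5], [35, 90, 2]]))
suitability.save("ski_run_suitability")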



Wednesday, September 17, 2025

Module 3

In this blog I did data analysis to see which road dataset was more complete within a grid. The first thing I did was use the Project tool to project the TIGER roads data into the same coordinate system as the centerlines.

Next I used the Summarize Within tool. When using this tool I set the input polygons to the grid and the input summary features to the TIGER roads. For the summary field I used the LengthinKM field I created to hold the road length in kilometers, and for the statistic I chose Sum; for the shape unit I used kilometers. This left me with a table of the total road length within each grid cell. I did the same analysis for the centerline roads data. When both were complete I created 2 new fields in the grid: Tiger, for the TIGER roads sum, and Centerline, for the centerline roads sum.
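A hedged arcpy sketch of that step; the layer and output names are placeholders, and the parameter values are my best guess at the tool settings described above.

import arcpy

# Sum the LengthinKM field of the TIGER roads inside each grid cell.
arcpy.analysis.SummarizeWithin(
    "grid",                     # input polygons
    "tiger_roads",              # input summary features
    "grid_tiger_summary",       # output feature class
    "KEEP_ALL",                 # keep grid cells that contain no roads
    [["LengthinKM", "SUM"]],    # summary field and statistic
    "ADD_SHAPE_SUM",            # also summarize shape length
    "KILOMETERS")               # shape unit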

I joined the summary table for each roads dataset to the grid and used the summary data to calculate the new fields. I then created another field for my percent difference calculation and used the Calculate Field tool to run the formula.
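For reference, a small Calculate Field sketch assuming a symmetric percent-difference formula; I don't know the exact expression used in the lab, and the field names are placeholders.

import arcpy

# Percent difference between the two road-length sums, relative to their average
# (assumes both sums are non-zero).
arcpy.management.CalculateField(
    "grid", "PctDiff",
    "abs(!Tiger! - !Centerline!) / ((!Tiger! + !Centerline!) / 2) * 100",
    "PYTHON3")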

For symbology I wanted a graduated color ramp to show the percent difference. I chose to use Natural Breaks with 7 classes.



Wednesday, September 10, 2025

Module 2

 

Location of 20 test points

In this lab I used the Positional Accuracy Handbook to find the horizontal accuracy of 2 street datasets for Albuquerque: the city's street data and StreetMap USA's data. To complete my analysis I chose 20 test points, looking for right-angle intersections. I placed a point for each dataset at all 20 locations, then, using the aerial imagery provided, I placed another point where each intersection should actually be. I got the latitude and longitude of all the points and exported the 3 new tables as .csv files.

I used the worksheet provided in the Positional Accuracy Handbook to find the RMSE and the National Standard for Spatial Data Accuracy (NSSDA) statistic.
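The worksheet boils down to a root-mean-square error over the test points, multiplied by the NSSDA factor of 1.7308 to get the 95% confidence statistic (assuming the x and y errors are roughly equal). A plain-Python sketch with hypothetical coordinates in meters:

import math

# Each pair is (tested_point, reference_point) in projected meters; the values are made up.
pairs = [((100.0, 200.0), (103.0, 198.0)),
         ((250.0, 400.0), (247.5, 404.0)),
         ((500.0, 650.0), (505.0, 646.0))]

sq_errors = [(tx - rx) ** 2 + (ty - ry) ** 2 for (tx, ty), (rx, ry) in pairs]
rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
nssda_95 = 1.7308 * rmse    # NSSDA horizontal accuracy at the 95% confidence level

print(round(rmse, 2), round(nssda_95, 2))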

Formal Accuracy Statements:

Using the National Standard for Spatial Data Accuracy, the street data from the city of Albuquerque tested 28.93 meters horizontal accuracy at 95% confidence level.

Using the National Standard for Spatial Data Accuracy, the street data from StreetMap USA tested 206.30 meters horizontal accuracy at 95% confidence level.

Wednesday, September 3, 2025

Module 1

 


The distance between the average location and the reference point is 3.18 meters. This lands in the 68% buffer, which tells me that the average of the GPS data falls within the 68th-percentile precision buffer of the actual point location.


For horizontal accuracy, the distance from the average to the reference was 3.14 meters. For horizontal precision, the 68% buffer value was 4.4 meters. These 2 values are fairly close, with a difference of 1.26 meters. This could mean that the GPS data is more accurate horizontally than it is precise.

Horizontal accuracy and precision are measured by first averaging the longitudes and latitudes of all of the GPS points, which leaves you with one average waypoint. To get the horizontal precision, you find the distance between the average waypoint and each of the GPS points, then section those distances into percentiles (50th, 68th, and 95th) and use the percentiles to create buffers around the average point. To get the horizontal accuracy, you find the distance between the actual (reference) point location and the average of all the GPS points.
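A small plain-Python sketch of that procedure, assuming the points are already in a projected coordinate system so that straight-line distances in meters make sense; all coordinates here are made up, and the percentile is a crude nearest-rank version.

import math

# Hypothetical GPS fixes and reference location, in projected meters.
gps_points = [(100.2, 200.5), (101.1, 199.8), (99.6, 201.0), (100.8, 200.1)]
reference = (100.0, 200.0)

# Average waypoint.
avg_x = sum(x for x, _ in gps_points) / len(gps_points)
avg_y = sum(y for _, y in gps_points) / len(gps_points)

# Precision: distances from each fix to the average, then percentile buffer sizes.
dists = sorted(math.hypot(x - avg_x, y - avg_y) for x, y in gps_points)
def percentile(values, p):
    return values[min(len(values) - 1, int(round(p / 100 * (len(values) - 1))))]
buffers = {p: percentile(dists, p) for p in (50, 68, 95)}

# Accuracy: distance from the average waypoint to the reference point.
accuracy = math.hypot(avg_x - reference[0], avg_y - reference[1])

print(buffers, round(accuracy, 2))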

Friday, August 8, 2025

Module 6 part 2

In this lab I made a least-cost corridor for black bears in the Coronado National Forest. For my analysis I built a ModelBuilder model. I realized I did not need to convert my elevation to a slope: the elevation DEM is already in meters according to its metadata, so no transformation was needed before running it through the Reclassify tool. After using the Weighted Overlay tool I used the Cost Distance tool twice, using each of the two Coronado polygons as a source. To create my corridor I used the same calculations we used in the last scenario. For my final layout I combined the first 2 classes into one class to represent the corridor.
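A hedged arcpy sketch of the cost-distance and corridor step; the cost surface and source layer names are placeholders, and the threshold used to slice out the corridor is an assumption since the post above only describes reusing "the same calculations".

import arcpy
from arcpy.sa import CostDistance, Corridor, Con

arcpy.CheckOutExtension("Spatial")

# Accumulated cost away from each Coronado source polygon over a shared cost surface.
cost_a = CostDistance("coronado_north", "cost_surface")
cost_b = CostDistance("coronado_south", "cost_surface")

# Sum of the two accumulated-cost rasters; low values mark the cheapest paths between sources.
corridor = Corridor(cost_a, cost_b)

# Keep only the cheapest slice of the corridor (the threshold value is a placeholder).
corridor_slice = Con(corridor <= 1200000, 1)
corridor_slice.save("bear_corridor")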

My final map ended with this corridor:



Wednesday, August 6, 2025

Module 6 Part 1

In this lab I did a suitability analysis for land development. First I had to plan how I was going to achieve this. I started by defining the problem (find suitable land for development) and then determined what my criteria would be:

  1. Land - ranked from 5 (best) to 1 (worst): 5 = meadow, grass, agriculture; 4 = barren; 2 = forest; 1 = urban, water, wetland

  2. Soils - ranked from 5 (best) to 1 (worst) according to soil class

  3. Slope - ranked from 5 (best) to 1 (worst): less than 2 degrees is best, over 12 degrees is worst

  4. Rivers - must be 1,000 feet away from the river

  5. Roads - ranked from 5 (best) to 1 (worst): the closer to roads, the better

I used ModelBuilder to plan out my analysis. In the end I didn't use the Raster Calculator and used Weighted Overlay instead.
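As a rough stand-in for that overlay step, here is the equivalent weighted sum written in map algebra rather than the Weighted Overlay tool's own table syntax; the weights and layer names are assumptions.

import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# Each input is assumed to already be reclassified to the 1-5 suitability scale.
# The weights are hypothetical and should sum to 1.
suitability = (0.25 * Raster("land_rank") +
               0.25 * Raster("soil_rank") +
               0.20 * Raster("slope_rank") +
               0.15 * Raster("river_rank") +
               0.15 * Raster("road_rank"))
suitability.save("development_suitability")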
The map I ended with was this:



Sunday, August 3, 2025

Module 5

In this lab I created feature classes: a line representing the coastline and points representing land parcels. I used the mosaic dataset feature to create two different mosaics of rasters, one pre-hurricane and one post-hurricane, and used these 2 mosaics to assess the damage in a study area.

I used the Select By Location tool, selecting parcels within a distance of 100 meters from the coastline. For 200 m, I selected by location within 200 m and then removed the 100 m selection from the current selection. I then did a new selection for 300 m and removed the 200 m selection from it.
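A hedged arcpy sketch of those ring selections; the layer names are placeholders.

import arcpy

# 0-100 m ring: everything within 100 m of the coastline.
arcpy.management.SelectLayerByLocation(
    "parcels", "WITHIN_A_DISTANCE", "coastline", "100 Meters", "NEW_SELECTION")

# 100-200 m ring: select within 200 m, then drop the inner 100 m.
arcpy.management.SelectLayerByLocation(
    "parcels", "WITHIN_A_DISTANCE", "coastline", "200 Meters", "NEW_SELECTION")
arcpy.management.SelectLayerByLocation(
    "parcels", "WITHIN_A_DISTANCE", "coastline", "100 Meters", "REMOVE_FROM_SELECTION")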

Within 0-100 meters: 8% had minor damage, 33% had major damage, and 58% were destroyed.

Within 100-200 meters: 71% had no damage, 10% had major damage, and 18% were destroyed.

Within 200-300 meters: 95% had no damage and 5% had minor damage. I don't think these numbers are a reliable guide for nearby areas, since a parking lot within 100 meters of the coastline had only minor damage, which throws off the minor-damage rate.



Sunday, July 27, 2025

Module 4-Coastal Flooding Lab

 

In this week's lab we used tools to analyze LiDAR data to find areas that will likely be flooded due to storm surges. Using the Reclassify tool we were able to find the cells that would be flooded at 2 meters and at 1 meter. We then used building data to see which buildings would be flooded and what kind of buildings they are. I had a hard time with this lab, but I am proud of myself for getting through it.
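A minimal arcpy sketch of the flood-extent idea, using map algebra as a stand-in for the Reclassify step described above; the DEM name and the 1 m surge level are placeholders.

import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")

# Cells at or below the hypothetical 1 m storm-surge level are flagged as flooded.
dem = Raster("lidar_dem")
flooded_1m = Con(dem <= 1, 1)
flooded_1m.save("flood_extent_1m")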