Wednesday, September 17, 2025

Module 3

In this lab I did a data analysis to see which of two road datasets was more complete within a grid. The first thing I did was use the Project tool to put the TIGER roads data in the same coordinate system as the centerline data.

Next I used the Summarize Within tool. I set the input polygons to the grid and the input summary features to the TIGER roads layer. For the summary field I used the LengthinKM field I had created, with Sum as the statistic, and set the shape unit to kilometers. This left me with a table holding the total length of roads within each grid cell. I ran the same analysis for the centerline roads data. When both were complete I created two new fields in the grid: Tiger, for the TIGER roads sum, and Centerline, for the centerline roads sum.
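
As a rough sketch, the projection and Summarize Within steps could be scripted with arcpy along these lines; the layer and output names (grid, tiger_roads, centerline_roads, grid_tiger_sum) are placeholders I made up, not the actual dataset names from the lab.

```python
import arcpy

# Project the TIGER roads into the centerline data's coordinate system.
sr = arcpy.Describe("centerline_roads").spatialReference
arcpy.management.Project("tiger_roads", "tiger_roads_proj", sr)

# Summarize the LengthinKM field (Sum) of the TIGER roads within each grid cell.
arcpy.analysis.SummarizeWithin(
    "grid", "tiger_roads_proj", "grid_tiger_sum",
    "KEEP_ALL", [["LengthinKM", "Sum"]], "ADD_SHAPE_SUM", "KILOMETERS")
```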

I joined each summary table to the grid and calculated the new fields from the summary data. I then created another field for my percent difference and used the Calculate Field tool to run the formula.
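
The post doesn't record the exact formula, but a common percent-difference expression in the Field Calculator, assuming the Tiger and Centerline fields created above and a new field named pct_diff, would look something like this:

```python
import arcpy

# Hedged sketch: percent difference relative to the average of the two sums.
# Field and layer names are placeholders; cells where both sums are zero
# would need special handling.
arcpy.management.CalculateField(
    "grid", "pct_diff",
    "abs(!Tiger! - !Centerline!) / ((!Tiger! + !Centerline!) / 2) * 100",
    "PYTHON3")
```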

For symbology I wanted a graduated color ramp to show the percent difference. I chose Natural Breaks with 7 classes.



Wednesday, September 10, 2025

Module 2

 

Location of 20 test points

In this lab I used the Positional Accuracy Handbook to find the horizontal accuracy of two street datasets for Albuquerque: the city's street data and StreetMap USA's data. To complete my analysis I chose 20 test points, looking for right-angle intersections. I placed a point for each dataset at all 20 locations, then, using the aerial imagery provided, placed another point where each intersection actually is. I recorded the latitude and longitude of all points and exported the three new tables as .csv files.

I used the worksheet provided in the Positional Accuracy Handbook to find the RMSE and the National Standard for Spatial Data Accuracy (NSSDA) statistic.
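
As a minimal sketch of what the worksheet computes, assuming lists of the per-point differences between each test point and its reference location (in meters), the RMSE and the NSSDA 95% statistic can be calculated like this:

```python
import math

def nssda_horizontal(dx, dy):
    """dx, dy: per-test-point differences (test minus reference), in meters."""
    n = len(dx)
    rmse = math.sqrt(sum(x**2 + y**2 for x, y in zip(dx, dy)) / n)
    # NSSDA horizontal accuracy at the 95% confidence level, assuming
    # RMSE_x and RMSE_y are roughly equal.
    accuracy_95 = 1.7308 * rmse
    return rmse, accuracy_95
```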

Formal Accuracy Statements:

Using the National Standard for Spatial Data Accuracy, the street data from the city of Albuquerque tested 28.93 meters horizontal accuracy at the 95% confidence level.

Using the National Standard for Spatial Data Accuracy, the street data from StreetMap USA tested 206.30 meters horizontal accuracy at the 95% confidence level.

Wednesday, September 3, 2025

Module 1

 


The distance between the average location and the reference point is 3.18 meters, which lands inside the 68% buffer. This tells me that the averaged GPS position falls within the distance that contains 68% of the individual fixes.


For horizontal accuracy, the distance between the average and the reference was 3.14 meters. For horizontal precision, the 68th-percentile buffer value was 4.4 meters. These two values are close, with a difference of 1.26 meters. This could mean the GPS data is slightly more accurate than it is precise.

Horizontal accuracy and precision are measured by first averaging the longitudes and latitudes of all the GPS points, which leaves you with one average waypoint. To get horizontal precision, you find the distance between the average waypoint and each GPS point, then summarize those distances at the 50th, 68th, and 95th percentiles and use the percentile values to create buffers around the average waypoint. To get horizontal accuracy, you find the distance between the actual point location and the average of all the GPS points.
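
A minimal sketch of that workflow in Python, assuming the points are already in a projected coordinate system with units of meters (the function and variable names are my own):

```python
import math

def horizontal_metrics(points, reference):
    """points: list of (x, y) GPS fixes; reference: (x, y) of the true location."""
    avg_x = sum(p[0] for p in points) / len(points)
    avg_y = sum(p[1] for p in points) / len(points)

    # Precision: distances from each fix to the average waypoint,
    # summarized at the 50th, 68th, and 95th percentiles.
    dists = sorted(math.hypot(x - avg_x, y - avg_y) for x, y in points)
    def pct(p):
        return dists[min(len(dists) - 1, round(p / 100 * (len(dists) - 1)))]
    precision = {p: pct(p) for p in (50, 68, 95)}

    # Accuracy: distance from the average waypoint to the actual location.
    accuracy = math.hypot(avg_x - reference[0], avg_y - reference[1])
    return (avg_x, avg_y), precision, accuracy
```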

Friday, August 8, 2025

Module 6 part 2

In this lab I made a least-cost corridor for black bears in Coronado National Forest. For my analysis I built a ModelBuilder model. I realized I did not need to convert my elevation to slope; the elevation DEM is already in meters according to its metadata, so no transformation was needed before running it through the Reclassify tool. After running the Weighted Overlay tool, I used the Cost Distance tool twice, using each of the two Coronado polygons as a source. To create my corridor I used the same calculations we used in the last scenario. For my final layout I combined the first two classes into one class to represent the corridor.
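
A hedged sketch of the two cost distance runs and the corridor step with arcpy's Spatial Analyst, assuming a cost surface from the weighted overlay and two source polygon layers (all names are placeholders):

```python
import arcpy
from arcpy.sa import CostDistance, Corridor

arcpy.CheckOutExtension("Spatial")

cost = "cost_surface"                         # output of the weighted overlay
cd_a = CostDistance("coronado_area_1", cost)  # cost distance from source 1
cd_b = CostDistance("coronado_area_2", cost)  # cost distance from source 2

# The corridor raster sums the two cost distance surfaces; the lowest-value
# cells form the least-cost corridor between the two sources.
corridor = Corridor(cd_a, cd_b)
corridor.save("bear_corridor")
```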

My final map ended with this corridor:



Wednesday, August 6, 2025

Module 6 Part 1

In this lab I did a suitability analysis for land development. First I had to plan how I was going to approach it: the problem was to find suitable land for development. Next I determined what my criteria would be:

  1. Land cover - ranked from 5 (best) to 1 (worst): 5 - meadow, grass, agriculture; 4 - barren; 2 - forest; 1 - urban, water, wetland

  2. Soils - ranked from 5 (best) to 1 (worst) according to soil class

  3. Slope - ranked from 5 (best) to 1 (worst): less than 2 degrees is best, over 12 degrees is worst (see the reclassify sketch after this list)

  4. Rivers - must be at least 1,000 feet from the river

  5. Roads - ranked from 5 (best) to 1 (worst): closer to roads is better
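
As mentioned in the slope criterion, here is a minimal sketch of how one criterion could be reclassified into the 5-1 ranking with arcpy. Only the less-than-2-degrees and over-12-degrees breaks come from the lab; the middle ranges and the raster names are my own assumptions.

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Rank slope from 5 (best, flattest) to 1 (worst, steepest).
# Only the <2 and >12 degree breaks are from the lab; the middle ones are guesses.
slope_rank = Reclassify(
    "slope_degrees", "VALUE",
    RemapRange([[0, 2, 5], [2, 5, 4], [5, 8, 3], [8, 12, 2], [12, 90, 1]]))
slope_rank.save("slope_rank")
```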

I used ModelBuilder to plan out my analysis. In the end I didn't use the Raster Calculator and instead used the Weighted Overlay tool.

The map I ended with was this:



Sunday, August 3, 2025

Module 5

In this lab I created feature classes for a line representing the coastline and points representing land parcels. I used mosaic datasets to create two different mosaics, one of pre-hurricane rasters and one of post-hurricane rasters, and used these two mosaics to assess the damage in a study area.

I used the Select By Location tool, selecting parcels within a distance of 100 meters from the coastline. Then, for the 100-200 m ring, I selected by location within 200 m and removed the 100 m selection from the current selection. I then made a new selection for 300 m and removed the 200 m selection from it.
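
A sketch of those ring selections scripted with arcpy, assuming a parcels layer and a coastline layer (the layer names are placeholders):

```python
import arcpy

parcels, coast = "parcels_lyr", "coastline"

# 0-100 m ring
arcpy.management.SelectLayerByLocation(
    parcels, "WITHIN_A_DISTANCE", coast, "100 Meters", "NEW_SELECTION")

# 100-200 m ring: select within 200 m, then drop everything within 100 m
arcpy.management.SelectLayerByLocation(
    parcels, "WITHIN_A_DISTANCE", coast, "200 Meters", "NEW_SELECTION")
arcpy.management.SelectLayerByLocation(
    parcels, "WITHIN_A_DISTANCE", coast, "100 Meters", "REMOVE_FROM_SELECTION")

# 200-300 m ring: same pattern with 300 m and 200 m
arcpy.management.SelectLayerByLocation(
    parcels, "WITHIN_A_DISTANCE", coast, "300 Meters", "NEW_SELECTION")
arcpy.management.SelectLayerByLocation(
    parcels, "WITHIN_A_DISTANCE", coast, "200 Meters", "REMOVE_FROM_SELECTION")
```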

Within 0-100 meters: 8% had minor damage, 33% had major damage, and 58% were destroyed.

Within 100-200 meters: 71% had no damage, 10% had major damage, and 18% were destroyed.

Within 200-300 meters: 95% had no damage and 5% had minor damage. I don't think these rates are reliable enough to extend to nearby areas, since the parking lot within 100 meters of the coastline had only minor damage, which throws off the minor-damage rate.



Sunday, July 27, 2025

Module 4-Coastal Flooding Lab

 

In this week's lab we used tools to analyze LiDAR data and find areas likely to be flooded by storm surges. Using the Reclassify tool we were able to find cells that would be flooded at 2 meters and at 1 meter of surge. We then used building data to see which buildings would be flooded and what kinds of buildings they are. I had a hard time with this lab, but I am proud of myself for getting through it.
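
The lab used the Reclassify tool; an equivalent raster-algebra sketch with arcpy, assuming a DEM named "lidar_dem" with elevations in meters, would be:

```python
import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")

dem = Raster("lidar_dem")      # placeholder name, elevation in meters
flood_1m = Con(dem <= 1, 1)    # cells inundated by a 1 m surge
flood_2m = Con(dem <= 2, 1)    # cells inundated by a 2 m surge
flood_2m.save("flood_2m")
```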

Friday, July 18, 2025

Module 3- LIDAR Visibility Analysis

This week we had to take an Esri web course on visibility analysis. We completed four courses within the activity: Introduction to 3D Visualization, Performing Line of Sight Analysis, Performing Viewshed Analysis in ArcGIS Pro, and Sharing 3D Content Using Scene Layer Packages.

In the first course, Introduction to 3D Visualization, we learned how z-values can represent elevation and allow us to see a 3D image. We used the provided data to navigate and investigate a 3D scene of Crater Lake in Oregon.

In Performing Line of Sight Analysis we learned about line of sight. We used the Construct Sight Lines tool to create sight lines from an observer point, then used the Line of Sight tool, which reports visibility along the sight lines we had just created.
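
A hedged arcpy sketch of those two steps, assuming observer and target point layers and an elevation surface (all names are placeholders):

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Build sight lines from each observer to each target, then test visibility
# of each line against the surface.
arcpy.ddd.ConstructSightLines("observer_pts", "target_pts", "sight_lines")
arcpy.ddd.LineOfSight("elevation_surface", "sight_lines", "los_results")
```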

In Performing Viewshed Analysis in ArcGIS Pro we used the Viewshed tool, adjusting the refractivity coefficient to account for how light refracts over the terrain. This tool let us see streetlight coverage and how visibility of buildings can be affected by terrain.
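
A minimal Viewshed sketch with arcpy's Spatial Analyst; the raster and observer layer names are mine, and 0.13 is just the tool's usual default refractivity coefficient:

```python
import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("Spatial")

# Viewshed from streetlight points over an elevation raster, with earth
# curvature and refraction corrections turned on.
vshed = Viewshed("elevation", "streetlights", 1, "CURVED_EARTH", 0.13)
vshed.save("streetlight_viewshed")
```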

In Sharing 3D Content Using Scene Layer Packages we learned how to upload our data to AGOL to share it with the public or with organizations.

Sunday, July 13, 2025

Module 2, LiDAR and forestry

In this week's lab I had to download LiDAR data for Virginia from the VGIL website. With the LiDAR data I used many geoprocessing tools to extract information. In part 1 of the lab I used tools to separate the portion of the LiDAR that represents the ground from the portion that is above the ground. Using the Minus tool and these two new datasets I was able to find the height of the canopy in Virginia.
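
A minimal sketch of that canopy height step with arcpy, assuming a first-return surface and a bare-earth surface rasterized from the LiDAR (the raster names are placeholders):

```python
import arcpy
from arcpy.sa import Raster, Minus

arcpy.CheckOutExtension("Spatial")

# Canopy height = first-return surface minus bare-earth surface.
canopy_height = Minus(Raster("dsm_first_returns"), Raster("dtm_ground"))
canopy_height.save("canopy_height")
```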

In part 2 of the lab I used the Count, Is Null, Plus, Float, and Divide tools to create a density map of the canopy in Virginia. In the density map you can see areas with dense vegetation, where values near 1 show up darker; the lighter values near 0 show ground areas with little vegetation. This helps foresters see the areas with higher density. You can see in this map that roads have very low density.
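
A hedged sketch of the density calculation using those tools, assuming per-cell return-count rasters for ground and vegetation returns (the names and exact ordering of steps are my own):

```python
import arcpy
from arcpy.sa import Raster, IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("Spatial")

# Replace nulls with zero, then compute vegetation returns / total returns.
ground = Con(IsNull("ground_count"), 0, Raster("ground_count"))
veg = Con(IsNull("veg_count"), 0, Raster("veg_count"))
density = Divide(Float(veg), Float(Plus(ground, veg)))  # ~0 bare, ~1 dense canopy
density.save("canopy_density")                          # no-return cells become NoData
```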

The lab this week was very interesting but difficult because of the slow speed of the remote desktop. Creating presentable maps was a challenge due to the slow network speed.





Wednesday, July 9, 2025

Module 1 Crime Analysis

The purpose of this lab was to use ArcGIS Pro to analyze crime data. Different techniques for showing crime hotspots can be extremely helpful to police forces for predicting future crime areas. In this lab I created three different homicide hotspot maps using kernel density, grid-based thematic mapping, and Local Moran's I analysis.

Grid-based: I ran a Spatial Join of the Chicago grid and the 2017 total homicides, keeping all other parameters at their defaults. I opened the table for this feature, selected by attributes where Join_Count is greater than 0, and exported the selection as a new feature class. To get the top 20 percent I took the total number of objects (311) and multiplied it by 0.2, which gave me 62.2. Rounding to 62, I sorted Join_Count in descending order, selected the top 62 records, and exported them.
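
The top-20% cutoff is just arithmetic; as a quick check in Python:

```python
total_cells = 311               # grid cells with at least one homicide
top_n = round(total_cells * 0.2)
print(top_n)                    # 62 (rounded from 62.2)
```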


Kernel Density: Using the Kernel Density tool I made the KD raster from the 2017 total homicides, with the Chicago boundary as the barrier. Under symbology statistics, the mean was 1.18, which times 3 gives 3.54, and the maximum was 38.86. I used those two numbers as the two breaks. I then used the Reclassify tool, followed by the Raster to Polygon tool. I opened the polygon's table and selected by attribute where gridcode is equal to 2.


Local Moran's I: I used a Spatial Join between the census tracts and the 2017 total homicides, keeping all other parameters at their defaults. I added the field "crime_rate" and used the Field Calculator with the expression (!Join_Count! / !total_households!) * 1000. I then ran the Cluster and Outlier Analysis (Anselin Local Moran's I) tool on the new feature class, with the crime-rate field as the input field. In the output feature class I selected by attribute only the high-high crime areas and exported them. I then used the Dissolve tool to create my final feature class.
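
A hedged sketch of the crime-rate field step with arcpy, using the expression from the lab (the table name is a placeholder, and tracts with zero households would need separate handling):

```python
import arcpy

# Add the rate field and populate it: homicides per 1,000 households.
arcpy.management.AddField("tracts_homicides", "crime_rate", "DOUBLE")
arcpy.management.CalculateField(
    "tracts_homicides", "crime_rate",
    "(!Join_Count! / !total_households!) * 1000", "PYTHON3")
```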


If I were given a limited policing budget and had to choose which of these three maps to use to allocate my officers, I would choose the grid overlay. According to the crime density, it has the highest homicide rate at 11 homicides per square mile. It is also the map with the smallest total area, so it would require covering the least land of all the maps.

Wednesday, May 28, 2025

Module 2

This week I learned how to use methods, functions, strings, and loops. This was my first attempt at creating a script and I did feel intimidated, but after doing the readings I felt more confident. I printed my last name from a string, made fixes to the provided dice game script, and created a list of 20 random numbers from 1-10. I then chose an unlucky number and wrote code to remove that unlucky number from the list and report how many times it was removed. I did have trouble with formatting and indenting in the scripts.
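
A minimal sketch of the list exercise as I understood it (the unlucky number and variable names here are just examples):

```python
import random

# 20 random integers between 1 and 10.
numbers = [random.randint(1, 10) for _ in range(20)]
print(numbers)

unlucky = 8                         # example unlucky number
removed = numbers.count(unlucky)    # how many times it appears
while unlucky in numbers:
    numbers.remove(unlucky)

print(f"Removed {unlucky} from the list {removed} time(s).")
print(numbers)
```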






Monday, May 19, 2025

Module 1 Python Environments & Flowcharts

In this week's lab we used IDLE to run a script provided in the R:\GisProgramming folder. IDLE opens two windows: a shell window and an editor window. To run the script I selected File, then Open, and after selecting the necessary script I pressed Run. The result of this code was that all of the folders I will need for this course were made instantly.

I learned the basics of pseudocode and flowcharts in the readings this week and created a flowchart illustrating how to convert 3 radians to degrees and print the result.
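
The conversion the flowchart describes is a one-liner in Python:

```python
import math

radians = 3
degrees = math.degrees(radians)   # same as radians * 180 / math.pi
print(degrees)                    # about 171.89
```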



Using IDLE I typed "import this" and pressed Run to get the poem "The Zen of Python." This poem mixes references to Python with real-life scenarios and vice versa. The line "Errors should never pass silently" would mean that in Python, errors in your code will not be silent; they will be called out by error messages. The poem is almost like a set of guidelines a coder can look to for the correct ways to write a script. "Readability counts" means that making the script readable is important. "In the face of ambiguity, refuse the temptation to guess" could mean don't just guess at how to solve issues in your code; get a definite answer.


Friday, May 2, 2025

Module 7



 
In this lab I learned how to use Google Earth Pro to create maps and build a tour of South Florida. First I converted South Florida surface water data into a KMZ file so the data could be used in Google Earth Pro. Then I used that data and the other data provided to create a dot density map of South Florida. I also learned how to overlay a legend in Google Earth Pro to create the map shown above.
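
For reference, the KMZ conversion step could be scripted with arcpy's Layer To KML tool; the layer and output names here are placeholders:

```python
import arcpy

# Convert the surface water layer to a KMZ file for use in Google Earth Pro.
arcpy.conversion.LayerToKML("south_florida_surface_water", "surface_water.kmz")
```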

In the second part of this lab I created a tour of South Florida by dropping pins at different South Florida locations. I recorded a tour of the pins by clicking on the different points of the map.

I enjoyed exploring the different ways you can display data in Google Earth Pro. I felt like this lab was relatively easy, and Google Earth Pro's functions were very user-friendly and straightforward.