Appendix B

LiDAR Data Collection

This section was developed as data was processed for the University of Washington Forest Engineering Capstone course, using data provided by the Washington State Department of Natural Resources. The LiDAR data covered the Tahoma State Forest and was flown by TerraPoint, LLP in the spring of 2003.

An algorithm developed by Haugerud and Harding (2001) was chosen for several reasons. First, the process is published and publicly available. Second, the authors' connection with the University of Washington made access to assistance easier. Finally, the algorithm was originally developed using data provided by TerraPoint, LLP, so it had been applied to TerraPoint data before and some of the issues related to processing TerraPoint data were documented.

The data was provided in the Washington State Plane NAD83 coordinate system as ASCII files and contained up to four returns per pulse. Before beginning the process of removing the vegetation points, Hans Anderson of Precision Forestry ran the data through a script that discarded all but the last return for each pulse. The process described below is based on this last-return-only data.
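The last-return extraction itself was done by Hans Anderson's script, which is not reproduced here. As a hedged illustration of the idea, the sketch below keeps only the final return of each pulse, assuming each input line carries x, y, z, the return number, and the total number of returns for that pulse; the actual TerraPoint column layout may differ.

```python
# Illustrative last-return filter (not the original script). Assumes
# space-delimited lines of: x y z return_number num_returns.
import sys

def keep_last_returns(in_path, out_path):
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) < 5:
                continue  # skip malformed or header lines
            return_num, num_returns = int(fields[3]), int(fields[4])
            if return_num == num_returns:  # last return of this pulse
                dst.write(" ".join(fields[:3]) + "\n")  # keep X Y Z only

if __name__ == "__main__":
    keep_last_returns(sys.argv[1], sys.argv[2])
```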

Process

  1. The last return data files received from Hans were text files in which the first three numbers were [X Y Z], space delimited. Using the Java application asc2gen.exe, the data was converted into Arc Info Generate format (see Appendix B); a conversion sketch also follows this list. This creates new files in the same directories, with the same names as the input files but with the extension TXT_L.
  2. Copy the file TXT2TIN.AML (see Appendix C) to the folder containing the Generate and last-return text files. Run this script. No arguments are needed.
  3. A folder named [name]_l will appear for each of the TXT_L files. These are the last-return TINs. Copy these folders to a new folder for the deforestation step. The folder name GROUND was used here, and that name is used for the rest of the process.
  4. Into this new GROUND folder, copy the AML scripts VDF.AML, DESPIKE3.AML, ELAPSEDTIME.AML, and TIN2DEM.AML (see Appendix D).
  5. Change the working directory in Arc to the GROUND folder and run VDF.AML. The script will look for all the TINs in the directory and process them all at once. This is a very long process. See the discussion below for issues and things to watch for when running the VDF scripts.
  6. The final stage of the processing is to run the script TIN2DEM.AML. This script combines the TINs into one and then creates the grid and DEM files to be used in other applications. See the discussion below for issues and things to watch for when running the TIN2DEM.AML script.
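Step 1's conversion was done with the Java tool asc2gen.exe, which is not reproduced here. The following is a minimal sketch of the same transformation, assuming the Arc Info Generate point convention of id,x,y,z records terminated by an END line; the exact format asc2gen produced was not documented, so verify against what TXT2TIN.AML expects.

```python
# Hedged sketch of step 1 (ASCII XYZ -> Arc Info Generate format).
# Assumes "id,x,y,z" records closed by an END line; adjust if your
# Generate workflow expects a different layout.
import sys

def asc_to_generate(in_path, out_path):
    with open(in_path) as src, open(out_path, "w") as dst:
        for point_id, line in enumerate(src, start=1):
            x, y, z = line.split()[:3]  # first three numbers are X Y Z
            dst.write(f"{point_id},{x},{y},{z}\n")
        dst.write("END\n")

if __name__ == "__main__":
    # Mirrors the TXT_L naming used in step 1 (an assumption).
    asc_to_generate(sys.argv[1], sys.argv[1].rsplit(".", 1)[0] + ".TXT_L")
```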

Discussion

The first time this process was run, a few things came up that are worth discussing. The first issue was processing time: it took about 12 days to process 270 files totaling 2 GB of ASCII data. Part of that time went to modifying scripts to automate the processing of the files, but most of it went to creating and re-creating all of the TIN files, which Arc Info is very slow at. Because of a problem noticed with the DZ2 setting (discussed next), it was necessary to rerun the data. The second run used scripts created by Hans Anderson in IDL, and the processing time dropped to about four days. Unfortunately, IDL can only run on a computer with the interpreter installed, so the scripts cannot be used standalone. This suggests programming the process in a standard language such as Java, C++, or Visual Basic, either to create the ground data set directly or at least to reduce the data size that Arc Info needs to process.
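As one hedged illustration of that idea, the sketch below implements a simple block-minimum pre-filter: it keeps only the lowest last return in each grid cell, shrinking the point set before Arc Info ever sees it. This is in the spirit of the Haugerud and Harding approach, not the authors' IDL code, and the cell size is a free parameter.

```python
# Block-minimum pre-filter sketch: keep the lowest return per grid
# cell to reduce the data Arc Info must triangulate. Illustrative
# only; cell_size = 2.0 ft matches the processing grid used here.
from math import floor

def block_minimum(points, cell_size=2.0):
    """points: iterable of (x, y, z); returns the lowest point per cell."""
    lowest = {}
    for x, y, z in points:
        cell = (floor(x / cell_size), floor(y / cell_size))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return list(lowest.values())
```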

The next problem noticed was that in areas that appeared to be dense stands, and in particular younger stands, the scripts were not removing enough points, so those areas showed up as raised surfaces. The problem was traced to the DZ2 setting in the VDF.AML file. The default setting from the website was 8 feet, described as four times the grid size used in the processing; since the default two-foot grid setting was left alone, it was assumed this would stay the same. The DZ2 setting exists to remove false minimum points from the data: any point more than eight feet below the average for its grid cell is discarded. Unfortunately, with very dense vegetation the majority of the points get caught high in the canopy, and the few points that do make it to the ground are treated as false lows. To fix this error, the DZ2 setting was changed to 100. This still allows an initial check for extreme false lows while not removing points that are approximately a tree height below the average.
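The test itself, as described above, can be summarized in a short sketch: a return is treated as a false low when it falls more than DZ2 below the average elevation of its grid cell. The grid bookkeeping below is an assumption for illustration, not code taken from VDF.AML; with dz2 = 8 the sparse ground hits under dense canopy were being discarded, while dz2 = 100 keeps them and still catches extreme blunders.

```python
# Sketch of the DZ2 false-low test described above (not VDF.AML
# itself). The averaging cell size is an assumed parameter.
from collections import defaultdict
from math import floor

def remove_false_lows(points, dz2=100.0, cell_size=8.0):
    """points: list of (x, y, z); drops points more than dz2 below
    the mean elevation of their grid cell."""
    def cell_of(x, y):
        return (floor(x / cell_size), floor(y / cell_size))

    totals = defaultdict(lambda: [0.0, 0])  # cell -> [sum of z, count]
    for x, y, z in points:
        t = totals[cell_of(x, y)]
        t[0] += z
        t[1] += 1
    kept = []
    for x, y, z in points:
        z_sum, count = totals[cell_of(x, y)]
        if (z_sum / count) - z <= dz2:  # not too far below the cell mean
            kept.append((x, y, z))
    return kept
```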

One problem that may not have a direct solution is areas of large triangle surfaces. These were particularly noticeable in dense stands where many of the points never make it to the ground. The large triangles are visible in the surface but have not been shown to harm analysis procedures such as hydrology or viewshed modeling. They do serve as a reminder that in those areas the data is of lower quality at the ground level.

Managing the scripts and running them at the correct time was inconvenient. With a little work it would be possible to create a master AML that handles moving the data to the correct locations and running the correct AML at each stage. Because of the amount of data for Tahoma, it was broken into four blocks for processing, which meant continually monitoring the processes to see where they stood.
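A hedged sketch of such a driver is below, written in Python rather than AML for illustration. The block names, folder layout, and the way Arc is invoked are all hypothetical placeholders, not the actual RTI setup.

```python
# Hypothetical master driver: stage each block's last-return TINs into
# a GROUND folder, then run the AMLs in order. The "arc" invocation is
# a placeholder; launching AMLs varies by Arc Info installation.
import shutil
import subprocess
from pathlib import Path

BLOCKS = ["block1", "block2", "block3", "block4"]  # hypothetical names
AMLS = ["vdf.aml", "tin2dem.aml"]

def process_block(block_dir: Path) -> None:
    ground = block_dir / "ground"
    ground.mkdir(exist_ok=True)
    for tin in block_dir.glob("*_l"):  # last-return TIN workspaces
        shutil.copytree(tin, ground / tin.name, dirs_exist_ok=True)
    for aml in AMLS:
        subprocess.run(["arc", f"&run {aml}"], cwd=ground, check=True)

for block in BLOCKS:
    process_block(Path(block))
```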

Most of the variables in VDF.AML were not changed. The one constant in DESPIKE3 worth a brief note is the two-foot grid setting in the Arc TINLATTICE command. The setting was left at two feet since the expected data density was one point per square meter. The effect of increasing this value is not known, but a larger grid size might speed up the processing. It would most likely decrease the ground-surface quality to some degree, but is worth looking into.

A final caution when processing files in Arc Info: Arc limits the length of file names. If the scripts stop unexpectedly, the first thing to check is the file names; if they are too long, the files must be renamed. A simple DOS script was created to automate renaming all of the files being managed.
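The original renaming script was a DOS batch file and is not reproduced here; the sketch below is an equivalent pass in Python. The 13-character limit is an assumption based on Arc Info's short-name restrictions and should be checked against your version.

```python
# Rename anything whose base name exceeds Arc Info's assumed name
# limit, replacing it with a short sequential name to avoid collisions.
from pathlib import Path

MAX_LEN = 13  # assumed Arc Info name limit; verify for your version

def shorten_names(folder):
    for i, path in enumerate(sorted(Path(folder).iterdir())):
        if len(path.stem) > MAX_LEN:
            path.rename(path.with_name(f"t{i:05d}{path.suffix}"))
```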

File size matters when processing the entire LiDAR data set with the TIN2DEM script. The processing limit in Arc Info is 2.1 GB, and since the combined LiDAR set for the Tahoma State Forest was about 2.8 GB, Arc Info could process only about 75% of the data set. Arc Info would give an error message and exit the script during the creation of the MASTER_TIN.

Arc Info would usually allow the files to be appended prior to the creation of the MASTER_TIN, but at times it gave error messages indicating that it could not append the desired files.

The way to correct the problem was to run the TIN2DEM script for the north and south halves of the Tahoma State Forest separately. To merge the sides back together, it is necessary to also process an overlap section between the north and south, giving MASTER files for the north, south, and middle sections. Load the MASTER_GRD for the middle section into ArcMap and clip off the edge of the grid to remove areas that were interpolated from only a few points. Use the Merge function in the raster calculator to merge the middle, north, and south grids, making sure to list the middle section first so that the boundary between the north and south is ignored. Run the Hillshade process in ArcMap to visualize whether any error occurred in the merge; a visible error appears as a line along the boundary of the north, south, and middle sections. A minor error may not be avoidable.
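The reason for listing the middle grid first can be sketched simply: the merge takes the value from the first grid that has data at each cell. The NumPy version below assumes the three grids have already been snapped to a common extent, with NaN marking NoData, which is a simplification of what the raster calculator does.

```python
# Sketch of the priority merge: the first grid with data at a cell
# wins, which is why the middle (overlap) grid is listed first.
# Assumes float arrays of identical shape with NaN as NoData.
import numpy as np

def priority_merge(*grids):
    merged = np.full_like(grids[0], np.nan)
    for grid in grids:
        mask = np.isnan(merged) & ~np.isnan(grid)
        merged[mask] = grid[mask]
    return merged

# dem = priority_merge(middle_grd, north_grd, south_grd)
```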

 