Index: sparse_sample/damkjer_sparse_sample.tex
===================================================================
--- sparse_sample/damkjer_sparse_sample.tex	(revision 4)
+++ sparse_sample/damkjer_sparse_sample.tex	(revision 5)
@@ -19,4 +19,6 @@
 %%       Marked up sections in work. Added Prior work section for discussion on
 %%       similar approaches and foundational material. Cleaned up equations.
+%%    2013-NOV-15  K. Damkjer
+%%       Provided initial draft of Prior Work section.
 %%=============================================================================
 
@@ -111,16 +113,21 @@
 An industry standard binary exchange format---LASER File Format (LAS)---was introduced by the American Society for Photogrammetry and Remote Sensing (ASPRS) to facilitate data exchange and minimize overhead.\cite{ASPRS:2012} A recently developed compressed version of LAS---LASzip---has gained widespread adoption; it offers lossless, non-progressive, streaming, order-preserving compression, random access, and typical compression ratios between 10 and 20 percent of original file size.\cite{Isenburg:2011} 
 
-The compression achieved by the LASzip format can be significant, however achieving optimal results requires that the uncompressed input data be aliased to a regular point spacing. This operation may not be an appropriate modification of the data in all scenarios. Even with an effective compression strategy, data reduction may be necessary to support users in bandwidth-limited and mobile device environments or to support efficient querying and comparison of data holdings in processing and archival systems.
-
-An ideal data reduction algorithm should remove elements from a data set in an information-preserving manner. Several approaches have been developed to identify salient elements in dense scenes. Basic features were proposed based on structure-tensor--eigenvalue analysis of local point neighborhoods.\cite{West:2004} These feature sets have been enhanced to extract strong spatially linear features to support scene modeling applications.\cite{Gross:2006} Methods have also been developed to direct optimal neighborhood scale selection for feature attribution.\cite{Demantke:2011}
-
-In this paper, we present a method for extending the previously mentioned metrics and methods to higher dimensional spaces support unsupervised sparse sampling of LiDAR point data while preserving scene information content.
+The compression achieved by the LASzip format can be significant; however, achieving optimal results requires that the uncompressed input data be aliased to a regular point spacing. This operation may not be an appropriate modification of the data in all scenarios. Even with an effective compression strategy, data reduction may be necessary to support users in bandwidth-limited and mobile device environments or to support efficient querying and comparison of data holdings in processing and archival systems. It is therefore necessary to establish approaches that intelligently thin point data, simplifying a scene without diminishing its information content.
+
+An ideal data reduction algorithm should remove elements from a data set in an information-preserving manner. While mechanisms for efficiently pruning insignificant points merit consideration, it is more important to establish approaches for efficiently and effectively identifying salient points for retention. Several approaches have been developed to identify salient points based solely on spatial coordinates. Basic features were proposed based on structure-tensor--eigenvalue analysis of local point neighborhoods.\cite{West:2004} These feature sets have been enhanced to extract strong spatially linear features to support scene modeling applications.\cite{Gross:2006} Methods have also been developed to direct optimal neighborhood scale selection for feature attribution.\cite{Demantke:2011}
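+
+To ground the eigenvalue analysis referenced above, one common formulation (the notation here is illustrative and may differ from that of the cited works) computes, for a point $p$ with local neighborhood $\mathcal{N}(p)$, the structure tensor as the covariance of the neighboring positions,
+\begin{equation}
+\mathbf{C}(p) = \frac{1}{|\mathcal{N}(p)|} \sum_{q \in \mathcal{N}(p)} \left(q - \bar{p}\right)\left(q - \bar{p}\right)^{\mathsf{T}},
+\qquad
+\bar{p} = \frac{1}{|\mathcal{N}(p)|} \sum_{q \in \mathcal{N}(p)} q.
+\end{equation}
+With eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq 0$, normalized combinations such as linearity $(\lambda_1 - \lambda_2)/\lambda_1$, planarity $(\lambda_2 - \lambda_3)/\lambda_1$, and sphericity $\lambda_3/\lambda_1$ discriminate points lying on linear, planar, and isotropic structures, respectively.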
+
+In this paper, we present a method that extends the previously described metrics and methods to higher dimensional spaces to support unsupervised sparse sampling of LiDAR point data while preserving scene information content.
 
 %TODO Provide overview of paper structure.
-
 \section{Prior Work}
-{\color{brickred}
-Cover point cloud simplification approaches by Dyn, Moenning, and Yu. Discuss commonalities like avoiding intermediate mesh representations and segmentation, identification of salient sections, spatial-only consideration, \textit{etc.} Discuss differences like adaptability in the case of Yu, progressive simplification in the case of Moenning, \textit{etc.}
-} \cite{Dyn:2008} \cite{Moenning:2003} \cite{Yu:2010}
+Point cloud thinning and model simplification are not new areas of research in computer graphics and related fields that rely on point-based model representations. Mesh-based approaches to model simplification have long been used in 3D graphics to support efficient rendering and representation of complex models in real-time applications. Mesh-free approaches have largely been applied to surface reconstruction from unorganized point data, usually acquired by laser scanning. {\color{brickred} TODO: Add references. Beware weasel words and blind assertions.}
+
+Moenning and Dodgson present an approach to model simplification using Fast Marching farthest point sampling for implicit surfaces and point clouds. Their approach operates in a coarse-to-fine manner subject to user-controlled density guarantees. They also present options for either uniform or feature-driven point selection.\cite{Moenning:2003}
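+
+The farthest point sampling strategy at the core of this class of methods admits a compact statement (a generic formulation, not necessarily the notation of the cited work): given a partial sample $S_k \subset P$, the next point selected is the one farthest from the current sample,
+\begin{equation}
+p_{k+1} = \arg\max_{p \in P \setminus S_k} \; \min_{q \in S_k} d(p, q),
+\end{equation}
+where $d$ may be Euclidean distance or, in the Fast Marching setting, a geodesic distance propagated across the sampled surface; feature-driven variants weight $d$ by a local saliency measure.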
+
+Dyn et al.\ present a related approach using recursive sub-sampling driven by local surface approximation. Their approach operates in a fine-to-coarse manner driven by a desired terminal point set size. Their point selection metric is based solely on a significance criterion and the input point cloud geometry.\cite{Dyn:2008}
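+
+One plausible formalization of such a significance criterion (an illustrative sketch, not the precise construction of the cited work) measures the error introduced by discarding a point: with $\mathcal{S}_{p}$ a local surface approximation fit to the neighbors of $p$ excluding $p$ itself,
+\begin{equation}
+\sigma(p) = \mathrm{dist}\left(p,\, \mathcal{S}_{p}\right),
+\end{equation}
+and each recursion removes the point of least significance, so that the retained set best supports reconstruction of the original geometry.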
+ 
+Like Dyn et al., Yu et al.\ present an approach that enforces a terminal point set size as a post-condition. Their approach differs from those previously discussed by operating adaptively, driven by point clustering together with a user-specified simplification criterion and optimization process.\cite{Yu:2010}
+
+While all of these approaches operate without generating an explicit mesh surface, they implicitly carry forward the legacy of mesh-based approaches by limiting their analysis to spatial coordinates. They also largely operate under the assumption that a significant point is one participating in a local surface. When considering remotely sensed LiDAR data, several of these assumptions break down. Scenes imaged by LiDAR sensors are complex and contain significant points belonging to linear, planar, and isotropic structures. LiDAR data also frequently carries additional intensity or color data. These additional dimensions may contain content that is salient to end-user applications but is not discoverable through analysis of the spatial dimensions alone. Points may also be attributed with any number of features that should be preserved through the simplification process, suggesting the need for a multi-dimensional approach.
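+
+One natural way to carry the spatial metrics forward (a sketch of the general idea, using illustrative notation) is to augment each point with its radiometric attributes, $\mathbf{x} = (x, y, z, I, r, g, b)^{\mathsf{T}} \in \mathbb{R}^{d}$, and to form the local covariance over the augmented vectors,
+\begin{equation}
+\mathbf{C}(\mathbf{x}) = \frac{1}{|\mathcal{N}(\mathbf{x})|} \sum_{\mathbf{y} \in \mathcal{N}(\mathbf{x})} \left(\mathbf{y} - \bar{\mathbf{x}}\right)\left(\mathbf{y} - \bar{\mathbf{x}}\right)^{\mathsf{T}} \in \mathbb{R}^{d \times d},
+\end{equation}
+so that eigen-analysis captures structure spanning both the spatial and attribute dimensions; appropriate normalization of the attribute channels is required to make the dimensions commensurable.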
 
 \section{Local Statistic Attribution}
