start exp chapter
@@ -440,7 +440,7 @@ Each instance a lidar emits and receives a laser pulse, it can use the ray's dir
%In subterranean and disaster environments—such as collapsed tunnels or earthquake‐damaged structures—LiDAR has become the de facto sensing modality for mapping and navigation. Its ability to rapidly generate accurate 3D point clouds enables simultaneous localization and mapping (SLAM) even in GPS‐denied conditions. Yet, the airborne particles prevalent in these scenarios—dust stirred by collapse, smoke from fires—introduce significant noise: early returns from nearby aerosols obscure real obstacles, and missing returns behind particle clouds can conceal hazards. While many SLAM and perception algorithms assume clean, high‐quality data, real‐world rescue deployments must contend with degraded point clouds. This mismatch motivates our work: rather than simply removing noise, we aim to quantify the degree of degradation so that downstream mapping and decision‐making algorithms can adapt to and compensate for varying data quality.
LiDAR’s high accuracy, long range, and full-circle field of view make it indispensable for tasks like obstacle detection, simultaneous localization and mapping (SLAM), and terrain modeling in autonomous driving and mobile robotics. While complementary sensors—such as time-of-flight cameras, ultrasonic sensors, and RGB cameras—have their strengths at short range or in particular lighting, only LiDAR delivers the combination of precise 3D measurements over medium to long distances, consistent performance regardless of illumination, and the point-cloud density needed for safe navigation. LiDAR systems do exhibit intrinsic noise (e.g., range quantization or occasional multi-return ambiguities), but in most robotic applications these effects are minor compared to environmental degradation.
In subterranean and rescue scenarios, the dominant challenge is airborne particles: dust kicked up by debris or smoke from fires. These aerosols create early returns that can mask real obstacles and cause missing data behind particle clouds, undermining SLAM and perception algorithms designed for cleaner data. This degradation is a form of atmospheric scattering, which can be caused by any kind of airborne particulate (e.g., snowflakes) or liquid (e.g., water droplets). Other kinds of environmental noise exist as well, such as specular reflections from smooth surfaces, beam occlusion when close objects block the sensor's field of view, or thermal drift, where temperature changes affect the sensor's circuits and mechanics and introduce biases into the measurements.
@@ -898,6 +898,20 @@ By evaluating and comparing both approaches, we hope to demonstrate a more thoro
{codebase, hardware description overview of training setup, details of deepsad setup}
{overview of chapter given $\rightarrow$ give sequential setup overview}
%The authors of \citetitle{deepsad} share the code to their implementation and testing methodologies online at \url{https://github.com/lukasruff/Deep-SAD-PyTorch}, which we used in our thesis as a base to work from. The codebase implements a framework for loading predefined datasets, training and testing DeepSAD as well as some baselines that were used in the original paper. In the coming sections we will describe how we adapted this code to our needs, how we loaded the data from \citetitle{subter}, how we set up DeepSAD's network, and how we trained, tested, and compared DeepSAD and two baseline algorithms, namely an isolation forest and a one-class support vector machine. We also briefly go over the hardware and software used during experiments and give guidelines for training and inference times.
We built our experiments on the official DeepSAD PyTorch implementation and evaluation framework, available at \url{https://github.com/lukasruff/Deep-SAD-PyTorch}. This codebase provides routines for loading standard datasets, training DeepSAD and several baseline models, and evaluating their performance.
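For orientation, the central quantity this framework optimizes is the DeepSAD objective: encoded samples are pulled toward a fixed hypersphere center $c$ in latent space, while labeled anomalies are pushed away from it. The snippet below is a minimal PyTorch sketch of that loss for illustration only; the tensor names (\texttt{z}, \texttt{c}, \texttt{semi\_labels}) and the helper function itself are our own and do not mirror the repository's exact code.

\begin{lstlisting}[language=Python]
import torch

def deepsad_loss(z, c, semi_labels, eta=1.0, eps=1e-6):
    # z: (batch, rep_dim) encoder outputs; c: (rep_dim,) fixed center
    # semi_labels: 0 = unlabeled, +1 = labeled normal, -1 = labeled anomaly
    dist_sq = torch.sum((z - c) ** 2, dim=1)

    # Unlabeled samples: plain squared distance to the center.
    unlabeled_term = dist_sq

    # Labeled samples: dist^{+1} for labeled normals, dist^{-1} for labeled
    # anomalies, weighted by eta, so anomalies are pushed away from c.
    labeled_term = eta * torch.pow(dist_sq + eps, semi_labels.float())

    return torch.mean(torch.where(semi_labels == 0, unlabeled_term, labeled_term))
\end{lstlisting}

At test time, the anomaly score of a sample is then essentially the squared distance $\|\phi(x) - c\|^2$ of its embedding to this center.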
In the following sections, we detail our adaptations to this framework:
\begin{itemize}
\item Data integration: preprocessing and loading the dataset from \citetitle{subter}.
\item Model architecture: configuring DeepSAD’s encoder to match our point-cloud input format.
\item Training \& evaluation: training DeepSAD alongside two classical baselines—Isolation Forest and one-class SVM—and comparing their degradation-quantification performance; a minimal sketch of these two baselines follows this list.
\item Experimental environment: the hardware and software stack used, with typical training and inference runtimes.
\end{itemize}
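As referenced in the training and evaluation item above, the sketch below illustrates how the two classical baselines can be fit and queried for continuous anomaly scores using their \texttt{scikit-learn} implementations; the feature matrices \texttt{X\_train} and \texttt{X\_test} are random placeholders for whatever scan representation is ultimately used, not part of the actual pipeline.

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# Placeholder features: one row per scan (random data for illustration).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 64))   # training scans (assumed mostly clean)
X_test = rng.normal(size=(100, 64))    # mixed clean and degraded scans

# Isolation Forest: score_samples is high for inliers, so negate it
# to obtain an anomaly (degradation) score.
iso = IsolationForest(n_estimators=100, random_state=0).fit(X_train)
iso_scores = -iso.score_samples(X_test)

# One-class SVM: decision_function is positive inside the learned
# region of normality and negative outside, so negate it as well.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)
ocsvm_scores = -ocsvm.decision_function(X_test)
\end{lstlisting}

Both models therefore produce one continuous score per scan, which is what makes them directly comparable to DeepSAD's distance-based score.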
%\todo[inline]{codebase}
\newsection{setup_overview}{Experimental Setup Overview}
@@ -909,6 +923,10 @@ By evaluating and comparing both approaches, we hope to demonstrate a more thoro
{codebase, github, dataloading, training, testing, baselines}
{codebase understood $\rightarrow$ how was it adapted}
The DeepSAD PyTorch implementation and framework was originally developed for Python 3.7 but could be adapted to Python 3.12 without many changes. It originally included the MNIST, Fashion-MNIST, and CIFAR-10 image datasets, as well as the arrhythmia, cardio, satellite, satimage-2, shuttle, and thyroid datasets from \citetitle{odds}~\cite{odds}, together with suitable autoencoder and DeepSAD network architectures for the corresponding data types.
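Because none of these bundled loaders match our lidar data, the natural adaptation is a custom \texttt{torch.utils.data.Dataset} that serves the 2D-projected, range-normalized scans together with their semi-supervised labels. The class below is only an illustrative sketch of that idea; the file layout (one \texttt{.npy} range image per scan), the normalization constant, and the name \texttt{SubterDataset} are hypothetical and not taken from our actual code.

\begin{lstlisting}[language=Python]
import numpy as np
import torch
from torch.utils.data import Dataset

class SubterDataset(Dataset):
    """Hypothetical loader for 2D range-image projections of lidar scans."""

    def __init__(self, projection_files, semi_labels, max_range=100.0):
        self.files = list(projection_files)              # one .npy file per scan
        self.semi_labels = torch.as_tensor(semi_labels)  # 0 / +1 / -1 per scan
        self.max_range = max_range

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        img = np.load(self.files[idx]).astype(np.float32)
        img = np.clip(img / self.max_range, 0.0, 1.0)  # normalize ranges to [0, 1]
        x = torch.from_numpy(img).unsqueeze(0)         # add channel dim: (1, H, W)
        return x, self.semi_labels[idx], idx
\end{lstlisting}

Such a dataset can then be wrapped in a standard \texttt{DataLoader} and plugged into the framework's existing training and evaluation loops.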
%\todo[inline]{data preprocessed (2d projections, normalized range)}
\threadtodo
{explain how dataloading was adapted}
@@ -529,7 +529,15 @@
  year = {2017},
  month = feb,
  pages = {985–1009},
}

@misc{odds,
  author = {Shebuti Rayana},
  year = {2016},
  title = {ODDS Library},
  url = {https://odds.cs.stonybrook.edu},
  institution = {Stony Brook University, Department of Computer Sciences},
}