diff --git a/thesis/Main.pdf b/thesis/Main.pdf
index 6e71807..ccfa7ad 100644
Binary files a/thesis/Main.pdf and b/thesis/Main.pdf differ
diff --git a/thesis/Main.tex b/thesis/Main.tex
index 8e14a74..5d331c4 100755
--- a/thesis/Main.tex
+++ b/thesis/Main.tex
@@ -1672,12 +1672,10 @@ Representative precision–recall curves illustrate how methods differ in their
 
 \newsection{results_latent}{Effect of latent space dimensionality}
 Figure~\ref{fig:latent_dim_ap} plots AP versus latent dimension under the experiment-based evaluation. DeepSAD benefits from compact latent spaces (e.g., 32–128), with diminishing or negative returns at larger codes. Baseline methods are largely flat across dimensions, reflecting their reliance on fixed embeddings. (Hand-labeled results saturate and are shown in the appendix.)
-\fig{latent_dim_ap}{figures/latent_dim_ap.png}{AP as a function of latent dimension (experiment-based evaluation). DeepSAD benefits from smaller codes; baselines remain flat.}
+\fig{latent_dim_ap}{figures/results_ap_over_latent.png}{AP as a function of latent dimension (experiment-based evaluation). DeepSAD shows an inverse correlation between AP and latent space size.}
 
-\newsection{results_semi_labeling}{Effect of semi-supervised labeling regime}
-Figure~\ref{fig:labeling_regime_ap} compares AP across labeling regimes (0/0, 50/10, 500/100). Surprisingly, the unsupervised regime (0/0) often performs best; adding labels does not consistently help, likely due to label noise and the scarcity/ambiguity of anomalous labels. Baselines (which do not use labels) are stable across regimes.
-
-\fig{labeling_regime_ap}{figures/labeling_regime_ap.png}{AP across semi-supervised labeling regimes. Unsupervised training often performs best; added labels do not yield consistent gains under noisy conditions.}
+\newsection{results_semi}{Effect of semi-supervised labeling regime}
+Referring back to Table~\ref{tab:results_ap}, we compare AP across labeling regimes (0/0, 50/10, 500/100). Surprisingly, the unsupervised regime (0/0) often performs best; adding labels does not consistently help, likely due to label noise and the scarcity/ambiguity of anomalous labels. Baselines (which do not use labels) are stable across regimes.
 
 % --- Section: Autoencoder Pretraining Results ---
 \newsection{results_inference}{Autoencoder Pretraining Results}
@@ -1771,9 +1769,9 @@ This work has shown that the DeepSAD principle is applicable to lidar degradatio
 
 We also observed that the choice of encoder architecture is critical. As discussed in Section~\ref{sec:results_deepsad}, the Efficient architecture consistently outperformed the LeNet-inspired baseline in pretraining and contributed to stronger downstream performance. The influence of encoder design on DeepSAD training merits further study under cleaner evaluation conditions. In particular, benchmarking different encoder architectures on datasets with high-quality ground truth could clarify how much of DeepSAD’s performance gain stems from representation quality versus optimization.
 
-Future work could also explore per-sample weighting of semi-supervised targets. If analog ground truth becomes available, this would allow DeepSAD to better capture varying degrees of degradation by treating supervision as a graded signal rather than a binary label.
+Future work could also explore per-sample weighting of semi-supervised targets. If analog ground truth becomes available, this may allow DeepSAD to better capture varying degrees of degradation by treating supervision as a graded signal rather than a binary label.
 
-\newsection{conclusion_open_questions}{Open Questions and Future Directions}
+\newsection{conclusion_open_questions}{Open Questions and Future Work}
 Several promising avenues remain open for future exploration:
 
 \begin{itemize}
diff --git a/thesis/figures/results_ap_over_latent.png b/thesis/figures/results_ap_over_latent.png
new file mode 100644
index 0000000..fce89f1
Binary files /dev/null and b/thesis/figures/results_ap_over_latent.png differ
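To make the per-sample weighting mentioned in the revised conclusion concrete, here is a minimal sketch of how a graded supervision signal could enter the objective, assuming the notation of the original DeepSAD formulation (network $\phi(\,\cdot\,;\mathcal{W})$, hypersphere center $\mathbf{c}$, $n$ unlabeled samples $\mathbf{x}_i$, $m$ labeled samples $(\tilde{\mathbf{x}}_j,\tilde{y}_j)$ with $\tilde{y}_j\in\{-1,+1\}$). The weights $w_j\in(0,1]$ are an assumption introduced here for illustration; they would have to be derived from the analog ground truth, e.g.\ as a normalized degradation severity.

\[
\min_{\mathcal{W}}\;
\frac{1}{n+m}\sum_{i=1}^{n}\bigl\|\phi(\mathbf{x}_i;\mathcal{W})-\mathbf{c}\bigr\|^{2}
\;+\;\frac{\eta}{n+m}\sum_{j=1}^{m} w_j\,\bigl(\|\phi(\tilde{\mathbf{x}}_j;\mathcal{W})-\mathbf{c}\|^{2}\bigr)^{\tilde{y}_j}
\;+\;\frac{\lambda}{2}\sum_{\ell=1}^{L}\bigl\|\mathbf{W}^{\ell}\bigr\|_{F}^{2}
\]

Setting $w_j\equiv 1$ recovers the binary-label objective; graded weights would let clearly degraded scans ($\tilde{y}_j=-1$ with large $w_j$) repel the hypersphere center more strongly than borderline ones, while leaving the unlabeled term and the weight-decay regularizer unchanged.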