some more work on background
@@ -316,6 +316,8 @@ Chapter~\ref{chp:deepsad} describes DeepSAD in more detail, which shows that it
%\todo[inline, color=green!40]{data availability leading into semi-supervised learning algorithms}
There is a wide array of problems in domains similar to the one we research in this paper for which modeling them as anomaly detection problems has proven successful. The degradation of point clouds produced by an industrial 3D sensor is modeled as an anomaly detection task in~\cite{bg_ad_pointclouds_scans}. \citeauthor{bg_ad_pointclouds_scans} propose a student-teacher model capable of inferring a pointwise anomaly score for degradation in point clouds. The teacher network is trained on an anomaly-free dataset to extract dense features of the point clouds' local geometries, after which an identical student network is trained to emulate the teacher network's outputs. For degraded point clouds, the regression error between the teacher's and the student's outputs is calculated and interpreted as the anomaly score, the rationale being that the student network has not observed features produced by anomalous geometries during training, leaving it incapable of reproducing the teacher's output for those regions. Another example is~\cite{bg_ad_pointclouds_poles}, which proposes a method to detect and classify pole-like objects in urban point cloud data, differentiating between natural and man-made objects such as street signs for autonomous driving purposes. An anomaly detection method is used to identify vertical pole-like objects in the point clouds, and a clustering algorithm then groups similar objects and classifies them as either trees or poles.
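To make the student-teacher principle more concrete, the following minimal sketch illustrates how a pointwise anomaly score can be derived as the regression error between the teacher's and the student's dense feature descriptors. The array shapes and placeholder features are assumptions made purely for illustration and are not taken from~\cite{bg_ad_pointclouds_scans}.

\begin{verbatim}
import numpy as np

def pointwise_anomaly_scores(teacher_feats, student_feats):
    """Per-point anomaly score as the squared regression error between
    the teacher's and the student's dense feature descriptors.
    Both arrays have shape (num_points, feature_dim)."""
    diff = teacher_feats - student_feats
    return np.sum(diff ** 2, axis=1)  # shape: (num_points,)

# Illustrative usage with random placeholder features
rng = np.random.default_rng(0)
teacher = rng.normal(size=(1024, 64))
student = teacher + 0.01 * rng.normal(size=(1024, 64))  # student mimics teacher
scores = pointwise_anomaly_scores(teacher, student)
print(scores.mean())
\end{verbatim}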
As briefly mentioned at the beginning of this section, anomaly detection methods are often challenged by the limited availability of anomalous data, owing to the very nature of anomalies as rare occurrences. Oftentimes the intended use case is even to find unknown anomalies in a given dataset which have not yet been identified. In addition, it can be challenging to classify anomalies correctly for complex data, since the very definition of an anomaly depends on many factors such as the type of data, the intended use case, or even how the data evolves over time. For these reasons most anomaly detection approaches limit their reliance on anomalous data during training, and many of them do not differentiate between normal and anomalous data at all. DeepSAD is a semi-supervised method which is characterized by using a mixture of labeled and unlabeled data.
@@ -349,7 +351,11 @@ As already shortly mentioned at the beginning of this section, anomaly detection
{explain what ML is, how the different approaches work, why to use semi-supervised}
{autoencoder special case (un-/self-supervised) used in DeepSAD $\rightarrow$ explain autoencoder}
Machine learning defines types of algorithms capable of learning from existing data to perform tasks on previously unseen data without being explicitly programmed to do so~\cite{machine_learning_first_definition}. They are oftentimes categorized by the underlying technique employed, by the type of task they are trained to achieve, or by the feedback provided to the algorithm during training. For the latter, the most prominent categories are supervised learning, unsupervised learning and reinforcement learning.
Machine learning defines types of algorithms capable of learning from existing data to perform tasks on previously unseen data without being explicitly programmed to do so~\cite{machine_learning_first_definition}. Many kinds of machine learning methods exist, but neural networks are among the most commonly used and researched, due to their versatility and domain-independent success over the last decades. They consist of connected artificial neurons, modeled roughly after neurons and synapses in the brain.
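As a minimal illustration of this abstraction, a single artificial neuron computes a weighted sum of its inputs plus a bias and passes the result through a non-linear activation; the weights, bias and activation function below are arbitrary choices for demonstration only.

\begin{verbatim}
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a non-linear activation (here a sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Illustrative values only
x = np.array([0.5, -1.2, 3.0])   # inputs from connected neurons
w = np.array([0.8, 0.1, -0.4])   # connection weights
print(neuron(x, w, b=0.2))
\end{verbatim}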
\todo[inline, color=green!40]{talk about neural networks, deep learning, backwards propagation, optimization goals, iterative process, then transition to the categories}
One way to categorize machine learning algorithms is by the nature of the feedback provided for the algorithm to learn. The most prominent of those categories are supervised learning, unsupervised learning and reinforcement learning.
\todo[inline, color=green!40]{rewrite last paragraph to be more generally about ML first, talk about neural networks, deep learning, backwards propagation, optimization goals, iterative process, then transition to the categories}
For supervised learning, each data sample is augmented with a label depicting the ideal output the algorithm should produce for the given input. During the learning step these algorithms can compare their generated output with the one provided by an expert and calculate the error between them, minimizing this error to improve performance. Such labels are typically either categorical or continuous targets, most commonly used for classification and regression tasks respectively.
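The following sketch illustrates this principle for a regression task with a continuous target: a linear model's outputs are compared against the provided labels and the resulting mean squared error is reduced iteratively by gradient descent. The data and hyperparameters are synthetic and chosen only for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                     # input samples
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)   # expert labels

w = np.zeros(3)
for _ in range(200):                     # iterative learning process
    error = X @ w - y                    # compare output with the labels
    grad = X.T @ error / len(y)          # gradient of the mean squared error
    w -= 0.1 * grad                      # step that reduces the error
print(w)                                 # approaches the true coefficients
\end{verbatim}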
@@ -357,7 +363,7 @@ For supervised learning each data sample is augmented by including a label depic
Unsupervised learning algorithms use raw data without a target label that could guide the learning process. These types of algorithms are often utilized to identify underlying patterns in data which may be hard to discover using classical data analysis, for example due to large data size or high data complexity. Cluster analysis is one common use case, in which data is grouped into clusters such that data from one cluster resembles other data from the same cluster more closely than data from other clusters, according to some predesignated criteria. Another important use case is dimensionality reduction, which transforms high-dimensional data into a lower-dimensional subspace while retaining meaningful information of the original data.
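A canonical example of the first use case is k-means clustering, sketched below on synthetic, unlabeled data; the data and the number of clusters are arbitrary choices for illustration.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data: two groups of points, no labels are provided to the algorithm
data = np.vstack([rng.normal(0.0, 0.5, size=(100, 2)),
                  rng.normal(3.0, 0.5, size=(100, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(np.bincount(clusters))  # sizes of the discovered groups
\end{verbatim}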
\todo[inline, color=green!40]{illustration unsupervised learning}
\fig{ml_unsupervised_learning}{figures/ml_unsupervised_learning_placeholder.png}{PLACEHOLDER - An illustration of unsupervised learning: the training data does not contain any additional information such as a label. The algorithm learns to group similar input data together.}
A more interactive approach is taken by reinforcement learning, which provides the algorithm with an environment and an interpreter of the environment's state. During training the algorithm explores new possible actions and their impact on the provided environment. The interpreter can then reward or punish the algorithm based on the outcome of its actions. To improve its capability, the algorithm tries to maximize the rewards received from the interpreter, while retaining some randomness to enable the exploration of different actions and their outcomes. Reinforcement learning is usually used in cases where an algorithm has to make sequences of decisions in complex environments, e.g., autonomous driving tasks.
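A minimal sketch of this reward-driven loop is the tabular Q-learning update shown below. The toy environment, reward values and hyperparameters are placeholders chosen only to illustrate the interplay of exploration and reward maximization.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))      # the algorithm's current value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

def step(state, action):
    """Placeholder environment/interpreter: returns next state and reward."""
    next_state = (state + 1) % n_states if action == 1 else state
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # retain some randomness to keep exploring new actions
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # move the estimate toward the observed reward plus discounted future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
print(Q)
\end{verbatim}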
@@ -366,7 +372,6 @@ A more interactive approach to learning is taken by reinforcement learning, whic
Semi-supervised learning algorithms are an intermediate category between supervised and unsupervised algorithms, in that they use a mixture of labeled and unlabeled data. Typically, vastly more unlabeled than labeled data is used during training of such algorithms, due to the effort and expertise required to label large quantities of data correctly. Semi-supervised methods are oftentimes an effort to improve a machine learning algorithm belonging to either the supervised or the unsupervised category. Supervised methods such as classification tasks are enhanced by using large amounts of unlabeled data to augment the supervised training without additional labeling work. Alternatively, unsupervised methods like clustering algorithms may not only use unlabeled data but improve their performance by considering some hand-labeled data during training.
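A simple instance of the first flavour is self-training, sketched below: a classifier trained on the small labeled portion assigns pseudo-labels to unlabeled samples it is confident about, and is then retrained on the enlarged set. The classifier, confidence threshold and data are illustrative assumptions and not a method used later in this work.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2))                  # small labeled portion
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(500, 2))               # much larger unlabeled portion

clf = LogisticRegression().fit(X_lab, y_lab)      # train on labels only
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9               # keep only confident pseudo-labels
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
clf = LogisticRegression().fit(X_aug, y_aug)      # retrain on the enlarged set
\end{verbatim}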
%Semi-Supervised learning algorithms are an inbetween category of supervised and unsupervised algorithms, in that they use a mixture of labeled and unlabeled data. Typically vastly more unlabeled data is used during training of such algorithms than labeled data, due to the effort and expertise required to label large quantities of data correctly. The type of task performed by semi-supervised methods can originate from either supervised learningor unsupervised learning domain. For classification tasks which are oftentimes achieved using supervised learning the additional unsupervised data is added during training with the hope to achieve a better outcome than when training only with the supervised portion of the data. In contrast for unsupervised learning use cases such as clustering algorithms, the addition of labeled samples can help guide the learning algorithm to improve performance over fully unsupervised training.
\todo[inline, color=green!40]{Talk about deep learning, backwards propagation, optimization goals, iterative process?}
For anomaly detection methods, the underlying techniques can belong to any of these or other categories of machine learning algorithms. As described in section~\ref{sec:anomaly_detection}, they may not even use any machine learning at all. While supervised anomaly detection methods exist, their suitability depends mostly on the availability of labeled training data and on a reasonable proportion between normal and anomalous data. Both requirements can be challenging to meet, since labeling is often labour intensive and anomalies intrinsically occur rarely compared to normal data. DeepSAD is a semi-supervised method which extends its unsupervised predecessor Deep SVDD by including some labeled samples during training with the intention of improving the algorithm's performance. Both DeepSAD and Deep SVDD include the training of an autoencoder as a pre-training step, an architecture frequently grouped with unsupervised algorithms, even though that classification can be contested when scrutinized in more detail, which we will look at next.
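To give a first intuition ahead of chapter~\ref{chp:deepsad}, the following sketch shows a DeepSAD-style loss over latent representations: unlabeled and labeled normal samples are pulled towards a center, while labeled anomalies are pushed away through an inverse distance term. The label convention ($0$ unlabeled, $+1$ normal, $-1$ anomalous) follows the original formulation; the code itself is only an illustrative sketch.

\begin{verbatim}
import torch

def deepsad_style_loss(z, c, labels, eta=1.0, eps=1e-6):
    """z: latent representations of a batch, c: hypersphere center,
    labels: 0 = unlabeled, +1 = labeled normal, -1 = labeled anomalous."""
    dist = torch.sum((z - c) ** 2, dim=1)
    unlabeled = labels == 0
    labeled = ~unlabeled
    loss = dist[unlabeled].sum()
    # dist^(+1) pulls labeled normals towards c, dist^(-1) pushes anomalies away
    loss = loss + eta * ((dist[labeled] + eps) ** labels[labeled].float()).sum()
    return loss / z.shape[0]
\end{verbatim}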
@@ -382,19 +387,12 @@ Autoencoders are a type of neural network architecture, whose main goal is learn
\fig{autoencoder_general}{figures/autoencoder_principle_placeholder.png}{PLACEHOLDER - An illustration of autoencoders' general architecture and reconstruction task.}
One key use case of autoencoders is to employ them as a dimensionality reduction technique. In that case, the latent space between the encoder and decoder is of a lower dimensionality than the input data itself. Due to the aforementioned reconstruction goal, the shared information between the input data and its latent space representation is maximized, which is known as following the infomax principle. After training such an autoencoder, it may be used to generate lower-dimensional representations of the given data type, enabling more performant computations which may have been infeasible on the original data. DeepSAD, which we employ in this paper, uses an autoencoder in a pre-training step to achieve this goal among others.
\todo[inline, color=green!40]{VAEs?}
%Another way to employ autoencoders is to use them as a generative technique. The decoder in autoencoders is trained to reproduce the input state from its encoded representation, which can also be interpreted as the decoder being able to generate data of the input type, from an encoded representation. A classic autoencoder trains the encoder to map its input to a single point in the latent space-a distriminative modeling approach, which can succesfully learn a predictor given enough data. In generative modeling on the other hand, the goal is to learn the distribution the data originates from, which is the idea behind variational autoencoders (VAE). VAEs have the encoder produce an distribution instead of a point representation, samples from which are then fed to the decoder to reconstruct the original input. The result is the encoder learning to model the generative distribution of the input data, which enables new usecases, due to the latent representation
\todo[inline]{autoencoder explanation}
\todo[inline, color=green!40]{autoencoders are a neural network architecture archetype (words) whose training target is to reproduce the input data itself - hence the name. the architecture is most commonly a mirrored one consisting of an encoder which transforms input data into a hyperspace represantation in a latent space and a decoder which transforms the latent space into the same data format as the input data (phrasing), this method typically results in the encoder learning to extract the most robust and critical information of the data and the (todo maybe something about the decoder + citation for both). it is used in many domains translations, LLMs, something with images (search example + citations)}
\todo[inline, color=green!40]{typical encoder decoder mirrored figure}
\todo[inline, color=green!40]{explain figure}
\todo[inline, color=green!40]{our chosen method Deep SAD uses an autoencoder to translate input data into a latent space, in which it can more easily differentiate between normal and anomalous data}
\todo[inline, color=green!40]{Paragraph about Variational Autoencoders? generative models vs discriminative models, enables other common use cases such as generating new data by changing parameterized generative distribution in latent space - VAES are not really relevant, maybe leave them out and just mention them shortly, with the hint that they are important but too much to explain since they are not key knowledge for this thesis}
One key use case of autoencoders is to employ them as a dimensionality reduction technique. In that case, the latent space between the encoder and decoder is of a lower dimensionality than the input data itself. Due to the aforementioned reconstruction goal, the shared information between the input data and its latent space representation is maximized, which is known as following the infomax principle. After training such an autoencoder, it may be used to generate lower-dimensional representations of the given data type, enabling more performant computations which may have been infeasible on the original data. DeepSAD uses an autoencoder in a pre-training step to achieve this goal among others. This is especially useful for our use case, since point clouds produced by lidar sensors such as those used in robotics are usually very high-dimensional, owing to the need to map the whole scene in enough detail to navigate it.
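A minimal sketch of such a bottleneck autoencoder, trained with a reconstruction loss, is shown below; the layer sizes and the input dimensionality are arbitrary placeholders and do not correspond to the architecture used later in this work.

\begin{verbatim}
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=32):
        super().__init__()
        # encoder: input -> lower-dimensional latent space
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        # decoder: latent space -> reconstruction in the input format
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(8, 1024)                       # placeholder input batch
loss = nn.functional.mse_loss(model(x), x)     # reconstruction objective
loss.backward()                                # backpropagation step
\end{verbatim}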
%Another way to employ autoencoders is to use them as a generative technique. The decoder in autoencoders is trained to reproduce the input state from its encoded representation, which can also be interpreted as the decoder being able to generate data of the input type, from an encoded representation. A classic autoencoder trains the encoder to map its input to a single point in the latent space-a distriminative modeling approach, which can succesfully learn a predictor given enough data. In generative modeling on the other hand, the goal is to learn the distribution the data originates from, which is the idea behind variational autoencoders (VAE). VAEs have the encoder produce an distribution instead of a point representation, samples from which are then fed to the decoder to reconstruct the original input. The result is the encoder learning to model the generative distribution of the input data, which enables new usecases, due to the latent representation
\newsection{lidar_related_work}{Lidar - Light Detection and Ranging}
@@ -543,7 +541,7 @@ The neural network architecture of DeepSAD is not fixed but rather dependent on
%Fortunately situations like earthquakes, structural failures and other circumstances where rescue robots need to be employed are uncommon occurences. When such an operation is conducted, the main focus lies on the fast and safe rescue of any survivors from the hazardous environment, therefore it makes sense that data collection is not a priority. Paired with the rare occurences this leads to a lack of publicly available data of such situations. To improve any method, a large enough, diversified and high quality dataset is always necessary to provide a comprehensive evaluation. Additionally, in this work we evaluate a training based method, which increases the requirements on the data manifold, which makes it all the more complex to find a suitable dataset. In this chapter we will state the requirements we defined for the data, talk about the dataset that was chosen for this task, including some statistics and points of interest, as well as how it was preprocessed for the training and evaluation of the methods.
Situations such as earthquakes, structural failures, and other emergencies that require rescue robots are fortunately rare. When these operations do occur, the primary focus is on the rapid and safe rescue of survivors rather than on data collection. Consequently, there is a scarcity of publicly available data from such scenarios. To improve any method, however, a large, diverse, and high-quality dataset is essential for comprehensive evaluation. This challenge is further compounded in our work, as we evaluate a training-based approach that imposes even higher requirements on the data to enable training, making it difficult to find a suitable dataset.
Situations such as earthquakes, structural failures, and other emergencies that require rescue robots are fortunately rare. When these operations do occur, the primary focus is on the rapid and safe rescue of survivors rather than on data collection. Consequently, there is a scarcity of publicly available data from such scenarios. To improve any method, however, a large, diverse, and high-quality dataset is essential for comprehensive evaluation. This challenge is further compounded in our work, as we evaluate a training-based approach that imposes even higher demands on the data to enable training, making it difficult to find a suitable dataset.
In this chapter, we outline the specific requirements we established for the data, describe the dataset selected for this task—including key statistics and notable features—and explain the preprocessing steps applied for training and evaluating the methods.
@@ -576,9 +574,9 @@ Additionally, the dataset must be sufficiently large for training learning-based
To evaluate how effectively a method can quantify LiDAR data degradation, we require a degradation label for each scan. Ideally, each scan would be assigned an analog value that correlates with the degree of degradation, but even a binary label—indicating whether a scan is degraded or not—would be useful.
Before identifying available options for labeling, it is essential to define what “degradation” means in the context of LiDAR scans and the resulting point clouds. LiDAR sensors combine multiple range measurements, taken nearly simultaneously, into a single point cloud with the sensor’s location as the reference point. In an ideal scenario, each measurement produces one point; however, in practice, various factors cause some measurements to be incomplete, resulting in missing points even under good conditions. Additionally, some measurements may return incorrect ranges. For example, when a measurement ray strikes an aerosol particle, it may register a shorter range than the distance to the next solid object. The combined effect of missing and erroneous measurements constitutes degradation. One could also argue that degradation includes the type or structure of errors and missing points, which in turn affects how the point cloud can be further processed. For instance, if aerosol particles are densely concentrated in a small region, they might be interpreted as a solid object which could indicate a high level of degradation, even if the overall number of erroneous measurements is lower when compared to a scan where aerosol particles are evenly distributed. In the latter case, outlier detection algorithms might easily remove the erroneous points, minimizing their impact on subsequent processing. Thus, defining data degradation for LiDAR scans is not straightforward.
Before identifying available options for labeling, it is essential to define what “degradation” means in the context of LiDAR scans and the resulting point clouds. LiDAR sensors combine multiple range measurements, taken nearly simultaneously, into a single point cloud with the sensor’s location as the reference point. In an ideal scenario, each measurement produces one point; however, in practice, various factors cause some measurements to be incomplete, resulting in missing points even under good conditions. Additionally, some measurements may return incorrect ranges. For example, when a measurement ray strikes an aerosol particle, it may register a shorter range than the distance to the next solid object. The combined effect of missing and erroneous measurements can be argued to constitute the scan's degradation. On the other hand, degradation could also include the type or structure of errors and missing points, which in turn affects how the point cloud can be processed further. For instance, if aerosol particles are densely concentrated in a small region, they might be interpreted as a solid object which could indicate a high level of degradation, even if the overall number of erroneous measurements is lower when compared to a scan where aerosol particles are evenly distributed. In the latter case, outlier detection algorithms might easily remove the erroneous points, minimizing their impact on subsequent processing. Thus, defining data degradation for LiDAR scans is not straightforward.
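Under the first, purely count-based reading, a per-scan degradation score could be computed as sketched below from the fractions of missing and known-erroneous returns. The range threshold and the array layout are assumptions made only for illustration and do not reflect the labeling used later in this work.

\begin{verbatim}
import numpy as np

def naive_degradation_score(ranges, min_valid_range=0.5):
    """ranges: 1-D array of per-measurement ranges in metres,
    with NaN for measurements that returned no point."""
    total = ranges.size
    missing = np.count_nonzero(np.isnan(ranges))
    valid = ~np.isnan(ranges)
    erroneous = np.count_nonzero(ranges[valid] < min_valid_range)
    return (missing + erroneous) / total   # fraction of degraded measurements

scan = np.array([4.2, np.nan, 0.3, 7.8, np.nan, 12.1])  # toy scan
print(naive_degradation_score(scan))                     # 0.5 for this toy scan
\end{verbatim}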
An alternative approach would be to establish an objective measurement of degradation. Since the degradation in our use case primarily arises from airborne particles, one might assume that directly measuring their concentration would allow us to assign an analog score that correlates with degradation. However, this approach is challenging to implement in practice. Sensors that measure airborne particle concentration and size typically do so only at the sensor’s immediate location, whereas the LiDAR emits measurement rays that traverse a wide field of view. This localized measurement might be sufficient if the aerosol distribution is uniform, but it does not capture variations in degradation across the entire point cloud. To our knowledge, no public dataset exists that meets our requirements while also including detailed data on aerosol particle density and size.
An alternative approach would be to establish an objective measurement of degradation. Since the degradation in our use case primarily arises from airborne particles, one might assume that directly measuring their concentration would allow us to assign an analog score that correlates with degradation. However, this approach is challenging to implement in practice. Sensors that measure airborne particle concentration and size typically do so only at the sensor’s immediate location, whereas lidar sensors emit measurement rays that traverse a wide field of view and distance. This localized measurement might be sufficient if the aerosol distribution is uniform, but it does not capture variations in degradation across the entire point cloud. To our knowledge, no public dataset exists that meets our requirements while also including detailed data on aerosol particle density and size.
%For training purposes we generally do not require labels since the semi-supervised method may fall back to a unsupervised one if no labels are provided. To improve the method's performance it is possible to provide binary labels i.e., normal and anomalous-correlating to non-degraded and degraded respectively-but the amount of the provided training labels does not have to be large and can be handlabelled as is typical for semi-supervised methods, since they often work on mostly unlabeled data which is difficult or even impossible to fully label.
@@ -619,7 +617,7 @@ We use data from the \emph{Ouster OS1-32} LiDAR sensor, which was configured to
%During the measurement campaign 14 experiments were conducted, of which 10 did not contain the utilization of the artifical smoke machine and 4 which did contain the artifical degradation, henceforth refered to as normal and anomalous experiments respectively. During 13 of the experiments the sensor platform was in near constant movement (sometimes translation - sometimes rotation) with only 1 anomalous experiment having the sensor platform stationary. This means we do not have 2 stationary experiments to directly compare the data from a normal and an anomalous experiment, where the sensor platform was not moved, nonetheless the genereal experiments are similar enough for direct comparisons. During anomalous experiments the artifical smoke machine appears to have been running for some time before data collection, since in camera images and lidar data alike, the water vapor appears to be distributed quite evenly throughout the closer perimeter of the smoke machine. The stationary experiment is also unique in that the smoke machine is quite close to the sensor platform and actively produces new smoke, which is dense enough for the lidar data to see the surface of the newly produced water vapor as a solid object.
During the measurement campaign, 14 experiments were conducted—10 without the artificial smoke machine (hereafter referred to as normal experiments) and 4 with it (anomalous experiments). In 13 of these experiments, the sensor platform was in near-constant motion (either translating or rotating), with only one anomalous experiment conducted while the platform remained stationary. Although this means we do not have two stationary experiments for a direct comparison between normal and anomalous conditions, the overall experiments are similar enough to allow for meaningful comparisons.
During the measurement campaign, 14 experiments were conducted—10 without the artificial smoke machine (hereafter referred to as normal experiments) and 4 with it (anomalous experiments). In 13 of these experiments, the sensor platform was in near-constant motion (either translating or rotating), with only one anomalous experiment conducted while the platform remained stationary. Although this means we do not have two stationary experiments from the same exact position for a direct comparison between normal and anomalous conditions, the overall experiments are similar enough to allow for meaningful comparisons.
In the anomalous experiments, the artificial smoke machine appears to have been running for some time before data collection began, as evidenced by both camera images and LiDAR data showing an even distribution of water vapor around the machine. The stationary experiment is particularly unique: the smoke machine was positioned very close to the sensor platform and was actively generating new, dense smoke, to the extent that the LiDAR registered the surface of the fresh water vapor as if it were a solid object.
@@ -653,7 +651,7 @@ As we can see in figure~\ref{fig:data_missing_points}, the artifical smoke intro
% In experiments with artifical smoke present, we observe many points in the point cloud very close to the sensor where there are no solid objects and therefore the points have to be produced by airborne particles from the artifical smoke. The phenomenon can be explained, in that the closer to the sensor an airborne particle is hit, the higher the chance of it reflecting the ray in a way the lidar can measure. In \ref{fig:particles_near_sensor} we see a box diagram depicting how significantly more measurements of the anomaly expirements produce a range smaller than 50 centimeters. Due to the sensor platform's setup and its paths taken during experiments we can conclude that any measurement with a range smaller than 50 centimeters has to be erroneous. While the amount of these returns near the sensor could most likely be used to estimate the sensor data quality while the sensor itself is located inside an environment containing airborne particles, this method would not allow to anticipate sensor data degradation before the sensor itself enters the affected area. Since lidar is used to sense the visible geometry from a distance, it would be desireable to quantify the data degradation of an area before the sensor itself enters it. Due to these reasons we did not use this phenomenon in our work.
In experiments with artificial smoke, we observe numerous points in the point cloud very close to the sensor, even though no solid objects exist at that range. These points are therefore generated by airborne particles in the artificial smoke. This phenomenon occurs because the closer an airborne particle is to the sensor, the higher the probability it reflects the laser beam in a measurable way. As shown in Figure~\ref{fig:particles_near_sensor}, a box diagram illustrates that significantly more measurements during these experiments report ranges shorter than 50 centimeters. Given the sensor platform's setup and its experimental trajectory, we conclude that any measurement with a range under 50 centimeters is erroneous.
In experiments with artificial smoke, we observe numerous points in the point cloud very close to the sensor, even though no solid objects exist at that range. These points are therefore generated by airborne particles in the artificial smoke. This phenomenon likely occurs because the closer an airborne particle is to the sensor, the higher the probability it reflects the laser beam in a measurable way. As shown in Figure~\ref{fig:particles_near_sensor}, a box diagram illustrates that significantly more measurements during these experiments report ranges shorter than 50 centimeters. Given the sensor platform's setup and its experimental trajectory, we conclude that any measurement with a range under 50 centimeters is erroneous.
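A naive in-situ indicator based on this observation could simply report the fraction of near-sensor returns per scan, as sketched below; the 50-centimeter threshold follows from the platform setup described above, while the point cloud format is an assumption made for illustration.

\begin{verbatim}
import numpy as np

def near_sensor_fraction(points, max_range=0.5):
    """points: (N, 3) array of x, y, z coordinates relative to the sensor.
    Returns the fraction of returns closer than max_range metres, which,
    given the platform setup, must stem from airborne particles."""
    ranges = np.linalg.norm(points, axis=1)
    return np.count_nonzero(ranges < max_range) / len(points)

cloud = np.random.default_rng(0).uniform(-10, 10, size=(2048, 3))  # placeholder cloud
print(near_sensor_fraction(cloud))
\end{verbatim}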
While the density of these near-sensor returns might be used to estimate data quality when the sensor is already in an environment with airborne particles, this method cannot anticipate data degradation before the sensor enters such an area. Since LiDAR is intended to capture visible geometry from a distance, it is preferable to quantify potential degradation of an area in advance. For these reasons, we did not incorporate this phenomenon into our subsequent analysis.
@@ -339,5 +339,34 @@
    year = {1959},
    month = jul,
    pages = {210–229},
},

@inproceedings{bg_ad_pointclouds_scans,
    title = {Anomaly Detection in 3D Point Clouds using Deep Geometric Descriptors},
    url = {http://dx.doi.org/10.1109/WACV56688.2023.00264},
    DOI = {10.1109/wacv56688.2023.00264},
    booktitle = {2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    publisher = {IEEE},
    author = {Bergmann, Paul and Sattlegger, David},
    year = {2023},
    month = jan,
    pages = {2612–2622},
},

@article{bg_ad_pointclouds_poles,
    title = {Automatic Detection and Classification of Pole-Like Objects in Urban Point Cloud Data Using an Anomaly Detection Algorithm},
    volume = {7},
    ISSN = {2072-4292},
    url = {http://dx.doi.org/10.3390/rs71012680},
    DOI = {10.3390/rs71012680},
    number = {10},
    journal = {Remote Sensing},
    publisher = {MDPI AG},
    author = {Rodríguez-Cuenca, Borja and García-Cortés, Silverio and Ordóñez, Celestino and Alonso, Maria},
    year = {2015},
    month = sep,
    pages = {12680–12703},
}
BIN thesis/figures/ml_unsupervised_learning_placeholder.png (new file, 157 KiB, binary file not shown)