Article

A Saliency-Based Sparse Representation Method for Point Cloud Simplification

1 Facultad de Ingenierías, Universidad Autónoma del Caribe, Barranquilla 080001, Colombia
2 Facultad de Ingenierías, Universidad del Magdalena, Santa Marta 470004, Colombia
3 Facultad de Minas, Universidad Nacional de Colombia-Sede Medellín, Medellín 050041, Colombia
4 Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Sensors 2021, 21(13), 4279; https://doi.org/10.3390/s21134279
Submission received: 17 May 2021 / Revised: 6 June 2021 / Accepted: 8 June 2021 / Published: 23 June 2021
(This article belongs to the Section Remote Sensors)

Abstract:
High-resolution 3D scanning devices produce high-density point clouds, which require a large capacity of storage and time-consuming processing algorithms. In order to reduce both needs, it is common to apply surface simplification algorithms as a preprocessing stage. The goal of point cloud simplification algorithms is to reduce the volume of data while preserving the most relevant features of the original point cloud. In this paper, we present a new point cloud feature-preserving simplification algorithm. We use a global approach to detect saliencies on a given point cloud. Our method estimates a feature vector for each point in the cloud. The components of the feature vector are the normal vector coordinates, the point coordinates, and the surface curvature at each point. Feature vectors are used as basis signals to carry out a dictionary learning process, producing a trained dictionary. We perform the corresponding sparse coding process to produce a sparse matrix. To detect the saliencies, the proposed method uses two measures, the first of which takes into account the quantity of nonzero elements in each column vector of the sparse matrix and the second the reconstruction error of each signal. These measures are then combined to produce the final saliency value for each point in the cloud. Next, we proceed with the simplification of the point cloud, guided by the detected saliency and using the saliency values of each point as a dynamic clusterization radius. We validate the proposed method by comparing it with a set of state-of-the-art methods, demonstrating the effectiveness of the simplification method.

1. Introduction

Point clouds have become a standard data input tool for many fields, including scientific visualization, photogrammetry, and medical applications. For data acquisition of 3D shapes, modern 3D scanning devices can produce a vast amount of data, reaching millions of points [1]. This amount of data creates challenges on several fronts, like large storage requirements and increased data transmission and rendering times. To reduce the complexity of such point clouds and make the subsequent geometric processing algorithms more efficient, it is common to simplify the point cloud.
The main requirement for point cloud simplification algorithms is that they should maintain the global shape, the sharp features, and the curvatures of the original cloud. For the last of these, transitions between planar and curved areas should be preserved [2]. It is important to preserve the representative points and the sampling density in order to approximate faithfully the original point cloud both geometrically and topologically. The simplified point cloud must be dense around the sharp features (corners, edges, and curvatures) to preserve the global topology and sparse in flattened regions (low or zero curvature).
Some of the limitations of current simplification algorithms are nonuniformity in the simplified point clouds [3,4], problems in keeping the balance between preserved and lost features [5], reduced accuracy, and high computational cost [6]. Some of the proposed algorithms solve those shortcomings using parameters for tuning the final metric by means of weights of scales, but the burden is on the user to obtain satisfactory results [7]. Other methods present high computational cost because they use clustering algorithms in their initial stages [5,8] and some use only one feature (e.g., normal or curvature) for the simplification [9,10].
In this paper, we propose a reliable, robust, and simple solution for the above problems. Our method uses the normal vector, the surface variation (curvature), and the point coordinates, integrated into a unique feature vector, as input to train a dictionary. There are two advantages of using this approach: on one hand, it is possible to unify different descriptors in a unique feature vector, and on the other hand, it is possible to capture the local and the global structure of the point cloud using dictionary learning and sparse coding representation.
Since sharp features are often sparse, the use of sparsity-based modeling to describe and preserve sharp features is an attractive tool for point cloud simplification.
The main contribution of our work is to use the sparse matrix to analyze the structure of point sets, gathering evidence from local geometry to infer global properties of the objects. When the sparse matrix representation of the point cloud is very sparse, the model has captured the intrinsic structure of the input point cloud. In the context of point cloud simplification, this means that the model can properly represent the sampling points, preserving the sharp features while maintaining the uniformity of the point cloud.
The original point cloud data only contains the coordinates of the points with no topological information. To extract the implicit geometric information (normal vectors, surface variation, curvatures), the point-based simplification algorithms use the local information around each point in the cloud.
Usually, the k-nearest neighbor algorithm is used to estimate such geometric information. For each point in the cloud, the proposed method uses the coordinates of the normal vector, the coordinates of the point, and the curvature as a feature vector to identify potential saliency points. The feature vectors of each data point are the training signals for a dictionary learning process. With the dictionary trained, a sparse coding process is carried out to identify the most salient regions in the point cloud. Finally, the proposed method simplifies the point cloud by using the sparse vectors as a clusterization radius.
Formally, the problem of point cloud simplification is defined as follows: Given a surface $S$ defined by a point cloud $P$ and a target sampling rate $N < |P|$, the goal is to find a point cloud $P'$ with $|P'| = N$ such that the distance $\varepsilon$ of the simplified surface $S'$ to the original surface $S$ is minimal [6]. Symbolically, we write the above as follows:
$$P \rightarrow P',$$
where $|P'| = N < |P|$ and $\|P - P'\| < \varepsilon$, with $|\cdot|$ the point cloud cardinality and $\|\cdot\|$ the Euclidean distance. The error limit $\varepsilon$ enforces that no point in the simplified cloud $P'$ is farther than $\varepsilon$ from the original model.
To the best of our knowledge, no existing method uses dictionary learning and sparse coding as a basis for point cloud simplification. The proposed method does not introduce a new technique or modification to the classic dictionary learning and sparse coding algorithms.
The contributions of this paper are as follows:
1.
The proposed point cloud simplification method based on dictionary learning and sparse coding maintains a balance between sharp features and the density of point distribution.
2.
The proposed method reduces the cardinality of the point cloud very efficiently due to its inherent perceptual nature, which selects important points based on their saliency.
3.
The saliency-based simplification provides an importance criterion to preserve the most important geometric features.
4.
The analysis of the sparsity term $\|\alpha\|_1$ together with the approximation error $\|x - D\alpha\|_2^2$ (Equation (3)) can be used to determine whether a point is salient or not.

2. Related Work

In recent decades, a considerable amount of research has been conducted on point cloud simplification. Point cloud simplification algorithms can be roughly divided into four categories: particle simulation-based methods, iteration-based methods, formulation-based methods, and clustering-based methods.

2.1. Particle Simulation-Based Methods

Pauly et al. [9] presented a particle simulation method. The proposed algorithm distributes a set of points called particles evenly onto a surface, producing point clouds with low approximation error to the original point cloud. Collections of particle simulation-based methods are called local optimal projection (LOP)-based methods [3]. These methods project a set of points over an underlying surface using a localized version of the L1 median filter regularized by a repulsion potential. Huang et al. [5] proposed a correction over the original LOP algorithm, distributing the points evenly over the underlying surface. Huang et al. [6] and Liao et al. [11] aimed to integrate the vector normal to each projected point in order to preserve sharp features in the point cloud. These methods produce good results for surface simplification but are computationally expensive. Furthermore, the original points are replaced by the particles, changing their location in the process.

2.2. Clustering-Based Methods

These methods divide the point cloud into clusters according to some criteria and then replace the points of each cluster with a centroid. Pauly et al. [9] presented two algorithms: uniform incremental clustering and hierarchical clustering. These methods are memory- and time-efficient but produce high average approximation errors with respect to the original surface. Shi et al. [10] presented an adaptive method for simplifying point clouds. They applied a recursive subdivision scheme in which the algorithm selects representative points and removes redundant ones. They used k-means clustering to group similar spatial points and applied the maximum normal vector deviation measure to subdivide the clusters. The algorithm can handle boundaries and produces uniform density in flat regions and high density in curved regions. Mahdaoui et al. [12] presented a comparison between two simplification algorithms using the k-means and fuzzy c-means algorithms; their method uses a metric based on entropy estimation for clustering the point cloud. Liu et al. [13] presented an edge-sensitive, feature-detail-preserving algorithm; they used two clustering schemes to split the point cloud into the geometric and spatial domains. These methods can preserve the global structure of the point clouds, and some of them preserve sharp features; however, because of the clustering process, they are computationally expensive.

2.3. Formulation-Based Methods

These methods are based on mathematically modeled optimality. Leal et al. [8] proposed a three-step method. In the first step, they apply a clusterization algorithm. The second step involves the identification of points with high curvature to be preserved. The last step uses a linear programming model to simplify the point cloud, maintaining a density equivalent to the original point cloud. Chen et al. [14] employed a resampling strategy based on a graph that selects representative points while preserving features. The minimization of the point cloud is carried out by a proposed reconstruction error based on a feature extraction operator. Qi et al. [15] proposed an optimization strategy for maintaining the balance between finding the sharp features and preserving the density in the point cloud. The optimization is represented using a graph filter. The results of this method are superior to some other state-of-the-art methods, but it is computationally expensive.

2.4. Iteration-Based Methods

Pauly et al. [9] proposed an iterative simplification method using quadric error metrics. The algorithm produces point clouds with low approximation errors, but they are expensive to compute. Alexa et al. [4] proposed a decimation process based on the moving least squares (MLS) method. The proposed method removes redundant information using a surface error metric. The global result of the algorithm is good, but it can produce uneven sampling because the subsampling unnecessarily restricts the potential sampling positions. Zang et al. [16] presented a method based on a multilevel strategy for point cloud simplification, which adaptively determines the optimal level of each point. For each level, the method extracts the points based on a measure of importance given by a 3D Gaussian method. Zhu et al. [17] proposed a multiview method for point cloud simplification, projecting the points onto the three orthographic planes in order to identify the model edges. The edges are merged to produce the 3D edges of the model, and the points with less importance are separated from the point cloud. Shoaib et al. [18] proposed a method called fractal bubble to simplify point clouds, selecting important data points through the recursive generation of self-similar 2D bubbles expanded until contact is made with a point. Ji et al. [7] presented a detailed feature points simplified algorithm (DFPSA). They proposed estimating the importance of each point using four characteristic operators, involving the normal, the curvature, the distance between points, and the projection distance of each point in the point cloud. Finally, a threshold is used to decide whether each point should be classified as a feature point or not. The nonfeature points are simplified using an octree structure to avoid creating regions with holes. Zhang et al. [19] presented a feature-preserved point cloud simplification (FPPS) method.
For the simplification, an entropy measure is defined, which quantifies the geometric features hidden in the point cloud. Then, the key points are selected based on the entropy.

3. Dictionary Learning and Sparse Coding

Dictionary learning is a technique whose goal is to learn an overcomplete basis (dictionary) in order to model data vectors as a sparse linear combination of basis elements (atoms of the dictionary) [20].
Formally, the dictionary learning problem can be formulated as follows:
Given a set of training data vectors $X = \{x_i \in \mathbb{R}^n\}_{i=1,\dots,K}$, the aim is to find a dictionary $D = \{d_i \in \mathbb{R}^n\}_{i=1,\dots,S}$ that can sparsely represent the training data vectors in the set $X$, with $\alpha$ being their sparsest representation. The goal is to minimize Equation (1):
$$\{\hat{D}, \hat{\alpha}\} = \arg\min_{D, \alpha} \|X - D\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_0 \le L \tag{1}$$
$L$ controls the sparsity of $X$ in $D$. Equation (1) is minimized using the K-SVD algorithm proposed by Aharon et al. [21].
The purpose of sparse coding [22,23] is to approximate a feature input vector as a linear combination of basis vectors, which are selected from a dictionary that has been learned from the data directly.
Formally, let $x$ be a signal of dimension $n$; sparse coding aims to find a dictionary $D = \{d_1, d_2, \dots, d_S\}$ such that $x$ may be approximated by a linear combination of atoms $\{d_i\}_{i=1}^{S}$, that is, $x \approx D\alpha = \sum_{j=1}^{S} \alpha_j d_j$, where most of the coefficients $\alpha_j$ are zero or close to zero [20]. Thus, the sparse coding problem can typically be formulated as an optimization problem:
$$\hat{\alpha} = \arg\min_{\alpha} \|x - D\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_0 \le L \tag{2}$$
In this formulation, the dictionary $D$ is given, and $L$ once again controls the sparsity of $x$ in $D$. The term $\|\alpha\|_0$ measures the dispersion of the decomposition and can be understood as the number of nonzero coefficients in $\alpha$ needed to approximate the signal $x$ as sparsely as possible. Alternatively,
$$\hat{\alpha} = \arg\min_{\alpha} \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_1 \tag{3}$$
Equation (3) is an optimization problem in which the $\ell_0$ norm ($\|\cdot\|_0$) is replaced by the $\ell_1$ norm ($\|\cdot\|_1$), with $\lambda$ the regularization parameter. The solution of Equation (2) with the $\ell_0$ norm is an NP-hard problem; fortunately, under certain conditions, it is possible to relax the problem using the $\ell_1$ norm and find an approximate solution using Equation (3).
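Equation (2) is commonly solved with greedy pursuit algorithms. As an illustration only (not the authors' code), the sketch below is a minimal NumPy implementation of orthogonal matching pursuit (OMP), which selects at most $L$ atoms from a given dictionary $D$:

```python
import numpy as np

def omp(D, x, L):
    """Greedy orthogonal matching pursuit: approximate
    argmin ||x - D a||_2 subject to ||a||_0 <= L,
    where D is an n x S dictionary with (ideally) unit-norm columns."""
    n, S = D.shape
    residual = x.copy()
    support = []
    a = np.zeros(S)
    for _ in range(L):
        # pick the atom most correlated with the current residual
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0          # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        # refit coefficients by least squares on the chosen support
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    a[support] = coeffs
    return a
```

With an orthonormal dictionary and a signal built from two atoms, two iterations recover the coefficients exactly; in practice, a pursuit of this kind is the sparse coding step typically paired with K-SVD.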

4. Proposed Method

Our proposed method is based on dictionary learning and sparse coding. The input point set is analyzed using the covariance matrix to extract the local features; then, using the dictionary and the sparse representation matrix, the point set is analyzed globally to identify saliency features. Finally, we use the saliencies to sample the point cloud, keeping the most representative points. Figure 1 shows the pipeline of the proposed method.

4.1. Selecting the Features

To characterize the point set, we define a descriptor for each point p i . The point descriptor is composed of the normal vector, the total variation of surface (curvature), and the point coordinate. With these features, we build a feature vector for each point to measure its importance with respect to the entire set.
The normal vector is used for two reasons. The first is because it can help to identify feature points. A large difference between the normals around a point means that the surface at the point is not planar; that is, it is likely to be a feature point. The second reason is related to the problem of obtaining a simplified point cloud that, when rendered, looks like or mimics the original point cloud from which it was derived. The normals are used in the rendering process to estimate shading and lighting. Therefore, when a point is in a sharp feature, it is considered an important point, and its normal vector must be retained in the simplified point cloud. We use the normal coordinates as components of the feature vector.
The surface curvature captures the surface variation at a point. The curvature is used in several algorithms of point cloud simplification because it is an intrinsic property that intuitively reflects the sharpness of a point in a surface. High curvatures reflect large variations of the surface at the point and hence pinpoint a sharp feature. Therefore, we use the surface variation at the point as a curvature measure, and we include it as a component of the feature vector.
In addition to the normal vector and the surface variation or curvature, the position of each point is also considered in order to guarantee a minimum sample density in every region of the cloud. Without this information, low-saliency areas could be heavily decimated, creating holes in the point cloud and thus compromising the continuity of the surface when the cloud is rendered. Hence, the coordinates of each point are also used as components of its feature vector.

4.2. Low-Level Feature Estimation

A common way to estimate low-level features in a point set is to apply the principal component analysis (PCA) method locally to each neighborhood around each point p i [9]. Specifically, we use a weighted version of PCA [24,25] with a covariance matrix C m i , as defined in Equation (4).
$$C_{m_i} = \frac{1}{k_i - 1} \sum_{j=1}^{k_i} w_j (p_j - \bar{p})(p_j - \bar{p})^T \tag{4}$$
$$\bar{p} = \frac{1}{k_i} \sum_{j=1}^{k_i} p_j \tag{5}$$
where $k_i = |N_g(p_i)|$ is the cardinality of the neighborhood $N_g(p_i)$ around $p_i$; $w_j$ is a weight estimated as $w_j = \exp(-d^2 / k_i^2)$, with $d = \|p_j - \bar{p}\|$ the Euclidean distance. Next, we analyze the eigenvalues $\lambda_0 \le \lambda_1 \le \lambda_2$ and eigenvectors $v_0, v_1, v_2$ of the covariance matrix $C_{m_i}$.
The eigenvector v 0 corresponding to the smallest eigenvalue λ 0 is the normal vector n i at point p i . Pauly et al. [9,26] proved that the surface variation is equivalent to the surface curvature, as defined in Equation (6).
$$\sigma(p_i) = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2} \tag{6}$$
$$n_i = (n_x, n_y, n_z) \tag{7}$$
$$p_i = (p_x, p_y, p_z) \tag{8}$$
Once the low-level features are defined, we build a seven-dimensional feature vector F i for each point p i P , where
$$F_i = (n_x, n_y, n_z, \sigma, p_x, p_y, p_z) \tag{9}$$
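To make Equations (4)–(9) concrete, the following Python sketch builds the seven-dimensional descriptor for one point. It is an illustration under our reading of the equations, not the paper's implementation: the distance used in the weight and the brute-force neighbor search (standing in for a k-d tree) are our assumptions.

```python
import numpy as np

def feature_vector(points, i, k=8):
    """Build the 7-D descriptor F_i = (n_x, n_y, n_z, sigma, p_x, p_y, p_z)
    for point i from its k nearest neighbors (cf. Equations (4)-(9))."""
    p_i = points[i]
    d2 = np.sum((points - p_i) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]]             # k nearest neighbors (incl. p_i)
    p_bar = nbrs.mean(axis=0)                     # neighborhood centroid, Eq. (5)
    diff = nbrs - p_bar
    w = np.exp(-np.sum(diff ** 2, axis=1) / k**2) # Gaussian-style weights w_j
    C = (w[:, None] * diff).T @ diff / (k - 1)    # weighted covariance, Eq. (4)
    eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    normal = eigvecs[:, 0]                        # eigenvector of smallest eigenvalue
    sigma = eigvals[0] / eigvals.sum()            # surface variation, Eq. (6)
    return np.concatenate([normal, [sigma], p_i])
```

For a perfectly planar neighborhood the smallest eigenvalue is zero, so the surface variation vanishes and the estimated normal aligns with the plane normal.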

4.3. Dictionary Construction and Sparse Model

Using the feature vectors defined in the above section as data vectors $F_i \in \mathbb{R}^{n \times 1}$, with $n = 7$ (the number of low-level features), we construct the data matrix $F = \{F_1, F_2, \dots, F_K\} \in \mathbb{R}^{n \times K}$, where $K = |P|$ is the number of feature vectors. A sparse coding matrix $\alpha \in \mathbb{R}^{S \times K}$ and a dictionary $D \in \mathbb{R}^{n \times S}$ are defined using sparse coding theory, where $S$ is the number of atoms of the dictionary. In our experiments, we set $S = 200$ for all the models; this fixed value was selected by analyzing the mean square error variation, since for more than 200 atoms the MSE tends to converge, as verified in Section 5. The dictionary learning problem is solved using the K-SVD algorithm of Aharon et al. [21], obtaining estimates of $\alpha$ and $D$. Now $F$ can be reconstructed as $F = D\alpha$, obtaining the sparse representation of the data matrix $F$ in the dictionary $D$. The saliency points can be found by analyzing the sparse matrix $\alpha$.
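K-SVD alternates a sparse coding step and an atom-by-atom dictionary update. As a compact stand-in for illustration, the sketch below alternates OMP-based sparse coding with the simpler least-squares dictionary update of the method of optimal directions (MOD); it shows the shape of the training loop, not the exact K-SVD update used in the paper.

```python
import numpy as np

def sparse_code(D, X, L):
    """Column-wise OMP: code each signal with at most L atoms (see Section 3)."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        x, residual, support = X[:, j], X[:, j].copy(), []
        for _ in range(L):
            corr = np.abs(D.T @ residual)
            corr[support] = 0.0
            support.append(int(np.argmax(corr)))
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        A[support, j] = coeffs
    return A

def train_dictionary(X, S=20, L=3, iters=10, seed=0):
    """Alternate sparse coding and a least-squares dictionary update (MOD).
    The paper itself uses K-SVD [21]; this is a simplified stand-in."""
    rng = np.random.default_rng(seed)
    # initialize atoms from random data columns, normalized to unit length
    D = X[:, rng.choice(X.shape[1], S, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(iters):
        A = sparse_code(D, X, L)
        D = X @ np.linalg.pinv(A)            # MOD update: D = X A^+
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    A = sparse_code(D, X, L)                 # final coding with the trained D
    return D, A
```

After training, the feature matrix is reconstructed as $F \approx D\alpha$, and the sparse matrix is what the saliency analysis of the next section operates on.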

4.4. Detecting Saliency Points

Once the sparse coding matrix $\alpha$ has been obtained, we analyze which vectors correspond to saliencies. Let $\alpha_j$ and $F_j$ be column vectors of the matrices $\alpha$ and $F$, respectively. A feature vector is considered salient if its sparse representation $\alpha_j$ has many nonzero elements, implying that a linear combination of many atoms is required to represent the point correctly, and if its sparse reconstruction error $\|F_j - D\alpha_j\|_2$ produces a high residual. Conversely, a feature vector is not considered salient if its sparse representation $\alpha_j$ has few nonzero elements, i.e., if it can be represented by a linear combination of only a few atoms, and its sparse reconstruction error $\|F_j - D\alpha_j\|_2$ produces a low residual.
On this basis, we sum the nonzero elements of each column of the matrix α . A score vector with these sums is built as follows:
$$f(\alpha_j) = \sum_{p=1}^{S} h(\alpha_{p,j}), \quad j = 1, 2, \dots, K \tag{10}$$
$$h(\alpha_{p,j}) = \begin{cases} 1, & \alpha_{p,j} \neq 0 \\ 0, & \text{otherwise} \end{cases} \tag{11}$$
The sparse reconstruction error is computed from the residual between each signal $F_j$ and its respective reconstruction $D\alpha_j$, i.e., $r_j = \|F_j - D\alpha_j\|_2$. The corresponding score vector is defined as follows:
$$g(F_j) = r_j, \quad j = 1, 2, \dots, K \tag{12}$$
Now we normalize the score vectors $f(\alpha_j)$ and $g(F_j)$, dividing each vector by its largest component:
$$f(\alpha_j) = f(\alpha_j)/\max(f), \qquad g(F_j) = g(F_j)/\max(g), \qquad j = 1, 2, \dots, K \tag{13}$$
Next, both score vectors are integrated into a unique score vector as follows:
$$S_f(i) = f(\alpha_i) \cdot g(F_i), \quad i = 1, 2, \dots, K \tag{14}$$
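Given the matrices of Section 4.3, Equations (10)–(14) reduce to a few array operations. The sketch below computes the final score vector; the small epsilon guard against division by zero is our addition for robustness, not part of the paper's formulation.

```python
import numpy as np

def saliency_scores(D, alpha, F):
    """Combine the nonzero count per column (Eqs. (10)-(11)) with the
    reconstruction residual (Eq. (12)) into the final score S_f (Eq. (14))."""
    f = np.count_nonzero(alpha, axis=0).astype(float)  # nonzeros per column
    g = np.linalg.norm(F - D @ alpha, axis=0)          # residual per signal
    f = f / (f.max() + 1e-12)                          # Eq. (13) normalization
    g = g / (g.max() + 1e-12)
    return f * g                                       # Eq. (14)
```

A column that needs many atoms and still reconstructs poorly gets a score near 1 (salient); a column coded by few atoms with a small residual scores near 0.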
We use the score vector $S_f$ as a metric for the simplification process. Figure 2 shows the saliency levels found in the vector $S_f$; to visualize them, we use a threshold $T$ with different values. Equation (14) was proposed by [27] in a local context; the present work generalizes it to a global one.

4.5. Simplification-Based Saliency

The saliency points characterize the most relevant features in the point cloud. These points must be retained in the simplification process. On the other hand, points with low saliency are redundant and less important for representing the original surface. Using the score vector defined by Equation (14), we establish a dynamic radius of influence that depends on the saliency of each point within the entire cloud. If point $p_i$ is salient, the radius of influence will be small, and few points will be removed. If, however, it is not salient, the radius of influence will be large, and more points will be removed (see Figure 3).
To proceed with the simplification, as a first step, the score vector $S_f(i)$ is sorted by the absolute value of its components. In the second step, we calculate the radius of influence as follows:
$$\rho_i = \delta \cdot \frac{1}{S_f(i)} \tag{15}$$
According to Equation (15), the dynamic radius $\rho_i$ is determined by $1/S_f(i)$. Therefore, at points with high saliency the radius is small, while at points with low saliency the radius is large, as shown in Figure 3. Here, $\delta$ is a user-defined scale parameter that controls the number of points to be simplified.
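A minimal greedy reading of this procedure: visit points in decreasing saliency order and, each time a point is kept, suppress all remaining points inside its influence radius $\rho_i = \delta / S_f(i)$. The brute-force distance computation (in place of a spatial index) and the exact visiting order are our assumptions; this is a sketch, not the authors' implementation.

```python
import numpy as np

def simplify(points, S_f, delta):
    """Greedy saliency-guided decimation (cf. Section 4.5).
    Keeping point i suppresses all remaining points within
    its influence radius rho_i = delta / S_f(i), Equation (15)."""
    order = np.argsort(-S_f)                 # high saliency first
    alive = np.ones(len(points), dtype=bool)
    kept = []
    for i in order:
        if not alive[i]:
            continue
        kept.append(int(i))
        rho = delta / max(S_f[i], 1e-12)     # dynamic radius, guarded
        d = np.linalg.norm(points - points[i], axis=1)
        alive &= d > rho                     # drop neighbors inside rho
        alive[i] = False                     # point i is already kept
    return np.array(kept)
```

On a line of evenly spaced points, a single high-saliency point survives with a tight radius while its low-saliency neighbors are thinned aggressively.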

5. Results and Discussion

We evaluated the proposed method using a set of models, namely the Max Planck data set (50,112 points, few detail features), the Fandisk data set (6475 points; high, sharp features), the Asian dragon data set (3,609,600 points, many detail features), the Bunny data set (35,947 points, few detail features), the Elephant data set (24,955 points, many detail features), the Horse data set (48,485 points, few detail features), the Gargoyle data set (25,038 points, many detail features), and the Nicolo data set (50,419 points, few detail features).
We also compared the results of our method to other approaches. For quantitative comparison, our method, which we named saliency dictionary-based simplification (SDBS), is compared to three point-based methods, namely the curvature-based method (CV), implemented using Geomagic Studio; simplification on graph (FPUC) [15]; and fast resampling via graphs (FRGR) [14], as well as one mesh-based method, namely Poisson disk sampling (PSD), implemented using MeshLab. For visual comparison, we replicated the experiment carried out in [7] and used the results to compare our method with six state-of-the-art simplification methods: grid simplification (GRID) from the CGAL library, hierarchical clustering simplification (HCS) [9], weighted LOP (WLOP) [5], simplification on graph (FPUC) [15], fast resampling via graphs (FRGR) [14], and the detailed feature points simplified algorithm (DFPSA) [7].
All the experiments were run on a PC with Intel Core i7-2670QM CPU 2.20 GHz and 8 GB RAM. For implementing the proposed method, we used the MATLAB R2016b programming environment.
Figure 4, Figure 5, Figure 6 and Figure 7 are examples of the effectiveness of the proposed simplification method in different types of point clouds (free-form surfaces and surfaces with sharp edges and corners). It is clear that the proposed method is capable of preserving the global structure of the clouds as the simplification rate increases in all cloud types, since the needed information is integrated into the dictionary training.
Figure 4 shows the Fandisk model. The edges and corners are preserved as the simplification rate increases, and in flat regions, the method tries to distribute the points evenly.
Figure 5 shows how the Asian dragon model is simplified from millions of points (3,609,600) to thousands (1502). The proposed method preserves the global structure and the most relevant details of the original point cloud.
In Figure 6, it can be appreciated how the Max Planck model is simplified from 50,112 to 1502 points. The proposed method preserves the global structure and some of the details of the original point set. The Max Planck model is a free-form surface, showing that our method operates efficiently on these types of models.
Figure 7 shows the Elephant model simplified from 24,955 to 167 points. The renderings of the simplified and original models are shown from different points of view, showing how the global structure is preserved even with a low sampling rate.

5.1. Parameter Selection

There are three parameters in our method: the regularization parameter $\lambda$ in Equation (3), the dictionary size $S$, and the scale parameter $\delta$, which controls the fraction of points to be simplified. The parameter $\lambda$ balances the data fidelity and the regularization term. Small values can produce a simplification with few details, points, and features, while large values can result in more details, points, and features (see Figure 8). In all our tests, we set $\lambda = 0.5$, which gave the best results, since this value maintains the balance between the number of points and the features.
We established the size of the dictionary, $S$, based on Figure 9, which shows the mean square error (MSE) variation as the dictionary size increases. As the size of the dictionary increases, the MSE decreases, but the processing time increases; conversely, when the dictionary size is reduced, the MSE increases, but the processing time decreases. Our goal was to find a balance between a suitable dictionary size and a low processing time.
Figure 9 shows that for values between 200 and 400, the MSEs are low and the effect of the dictionary size is not significant. In all the experiments, we set the dictionary size $S = 200$, which produced good results.
The scale parameter δ is the only free user-defined parameter, and it is used for tuning the number of points to be removed.

5.2. Quantitative Analysis

We chose the geometric error between the original and the simplified point cloud as a metric to evaluate the quality of the proposed simplification method, following Pauly et al. [9]. Specifically, we measured the maximum error distance and the average error distance between the original point cloud, $P$, and the simplified point cloud, $P'$. We denote the surface of $P$ as $S$ and the surface of $P'$ as $S'$. The simplification error is estimated using the maximum error (16) and the average error (17) as follows:
$$\Delta_{\max}(S, S') = \max_{p_i \in S} |d(p_i, S')| \tag{16}$$
$$\Delta_{avg}(S, S') = \frac{1}{|S|} \sum_{p_i \in S} |d(p_i, S')| \tag{17}$$
For each point $p_i \in S$, the geometric error $d(p_i, S')$ is defined as the Euclidean distance between the sampled point $p_i$ and its projection $\bar{p}_i$ on the simplified surface approximation $S'$. Since our method is mesh-free, we approximate the simplified surface $S'$ using a least squares plane (LSP). To estimate the LSP, we select the set of neighboring points $NH_i$ in $P'$ closest to $p_i$ using a Kd-tree data structure and perform a PCA to obtain a regression plane $L_{NH_i}$, which represents the local approximation of $S'$, i.e., $d(p_i, S') \approx d(p_i, L_{NH_i})$ (Figure 10).
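As a simplified, hedged version of Equations (16) and (17), the sketch below replaces the least squares plane projection used in the paper with a plain nearest-neighbor (point-to-point) distance, which is a coarser but assumption-free approximation:

```python
import numpy as np

def simplification_errors(P, P_simplified):
    """Approximate Delta_max (Eq. (16)) and Delta_avg (Eq. (17)) using
    nearest-neighbor distances instead of the paper's least squares plane."""
    # distance from every original point to its closest simplified point
    diffs = P[:, None, :] - P_simplified[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return d.max(), d.mean()
```

For large clouds, the pairwise distance matrix should be replaced by a Kd-tree query, as the paper does; the brute-force form above is only meant to make the two metrics explicit.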
Table 1 shows the test models with the original sizes and the sampled points with different sampling rates (the value shown is the arithmetic average of the number of points resulting from the different methods for each simplification rate).
Figure 11 shows the Gargoyle, Horse, and Nicolo models, as examples of Table 1; the originals are shown in the left column, the models simplified at 5% are shown in the middle column, and the models simplified at 50% are shown in the right column.
Table 2 shows the values of the parameter $\delta$ for different simplification rates. The variation of $\delta$ does not reveal a clear relationship between its value and the number of points to be simplified. This indicates that the algorithm is sensitive to changes in $\delta$ across different simplification rates, a weakness that could be addressed by relating $\delta$ to the density of and distance between the points of the cloud to be simplified.
Table 3 shows the quantitative comparison between our method and the state-of-the-art methods at four simplification rates, i.e., 5%, 10%, 20%, and 50%. All five methods reduce the original number of points to a similar number of simplified points. Our method provides the most accurate simplification result of the five algorithms with respect to the average error metric $\Delta_{avg}$. However, considering the maximum error metric $\Delta_{max}$, the Poisson disk mesh-based method is the best, closely followed by our method.
As shown in Table 3, the CV and PSD methods produce similar results in terms of average surface error. The PSD method achieves relatively better results in terms of maximum surface error; however, it requires a mesh structure for the simplification. In some practical applications, only the 3D coordinate information is available, which limits the applicability of the PSD sampling method. The graph-based method and our SDBS method achieved the best results in terms of average surface error, with SDBS outperforming all the other methods.
We compared the SDBS method with the other methods in terms of accuracy and running time. Table 4 shows the running times and the number of preserved points of the proposed approach against six state-of-the-art methods. We simplified all the point clouds at a similar simplification rate with all the algorithms, ran each method 10 times on each point cloud, and report the average execution time and the implementation language in Table 4. It is worth noting that our method preserves the fewest points in the study (the Bunny model was simplified from 35,947 points to 4517 and the Elephant model from 24,955 points to 2154), while the SDBS still keeps the balance between the sharp features and the point density of the data set.

5.3. Visual Comparison

To validate our method with respect to the visual quality of its results, we performed two experiments. The first experiment shows how the point cloud is affected in two scenarios: (1) when the normal coordinates are excluded from the feature vector and (2) when the coordinates of the point are excluded (Figure 12). The second experiment compares our results with different state-of-the-art methods (Figure 13 and Figure 14). For rendering purposes, our point clouds were meshed using the Geomagic Studio software.
Figure 12b shows the result of simplifying the Elephant model using only the normal and the curvature, excluding the point coordinates from the feature vector. Compared with the original model (Figure 12a), the simplification has overdecimated some areas (ears, tusks, and trunk), producing holes in the reconstructed model. On the other hand, the lighting in the simplified model mimics that of the original (red arrows). Figure 12c shows the simplification result using the point coordinates and the curvature, excluding the normal from the feature vector. Compared with the original, the point density is maintained, producing a better reconstruction of the model surface, but the lighting does not improve over that of Figure 12b (see the highlighted details). Finally, Figure 12d shows the simplification result using the normal, the point coordinates, and the surface variation (curvature). The combination of the three features improves the results, as shown by the lighting and the preservation of details such as the elephant's eye.
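The feature vector compared in this experiment can be assembled as follows. This is a minimal sketch assuming PCA over the k nearest neighbors for both the normal and the surface variation (σ = λ0/(λ0+λ1+λ2), as in Pauly et al. [26]); the paper's exact estimators and neighborhood sizes may differ.

```python
import numpy as np

def feature_vectors(points, k=8):
    """Per-point 7D feature vector: 3D position, unit normal, and surface
    variation, all estimated from a PCA of the k-nearest-neighbor patch."""
    P = np.asarray(points, dtype=float)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    feats = []
    for i in range(len(P)):
        nbrs = P[np.argsort(d2[i])[:k]]        # k nearest neighbors (incl. self)
        C = np.cov((nbrs - nbrs.mean(0)).T)    # 3x3 covariance of the patch
        w, V = np.linalg.eigh(C)               # eigenvalues in ascending order
        normal = V[:, 0]                       # smallest-eigenvalue direction
        sigma = w[0] / max(w.sum(), 1e-12)     # surface variation (curvature proxy)
        feats.append(np.concatenate([P[i], normal, [sigma]]))
    return np.asarray(feats)                   # shape (n, 7)
```

On a planar patch this yields a surface variation near zero and a normal aligned with the plane's axis, which is why dropping any one component changes the density/lighting balance seen in Figure 12.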
To visually compare the results of the studied algorithms, we simplified the models to approximately the same number of points with all methods. Figure 13 shows the results of applying the different algorithms to the Bunny data set. Figure 13b,c,e–g shows how more points are retained in curved parts, while fewer points are kept in smooth parts; the simplification result in Figure 13d is uniform. All the methods produce good reconstructions but fail on narrow features such as the ears, except for the DFPSA method, which shows only a small hole. The proposed method (Figure 13h) retains the most relevant features and details of the model, and its reconstruction does not present the problems observed with the other algorithms. The zoomed regions (nose commissure and paw) highlight how our approach preserves the geometric details of the original point cloud better than the previous methods, even though the simplification rate of our method is lower than that of the others.
Figure 14 shows the simplification result for the Elephant data set with a high simplification rate. Figure 14c,d,g shows how the GRID, WLOP, and DFPSA simplification methods preserve few points in smooth regions and more points in feature regions such as the legs, ears, trunk, and tusks. The HCS, FRGR, and FPUC simplification methods, as shown in Figure 14b,e,f, have problems retaining the global structure of the point cloud. Our method also preserves more points in feature areas, but it distributes the points evenly in smooth regions. Due to the high simplification rate, all algorithms present failures, but our method best preserves the overall structure of the data set, as shown in the zoomed regions (mouth and chest), even though the simplification rate of our method is lower than that of the others.

6. Conclusions and Future Work

In this paper, we have presented a new method for point cloud simplification based on dictionary learning and sparse coding. The proposed method preserves sharp features and produces evenly distributed points. Our method uses the normal vector, the curvature, and the position of each point as the components of a feature vector. The feature vectors of all points of the cloud are the input to a dictionary learning and sparse coding process for saliency detection. We use the sparse representation of a signal to establish whether a point is salient with respect to the entire point cloud: points are considered salient if their feature vectors are reconstructed with many atoms from the dictionary, and non-salient if they are reconstructed with few atoms. The simplification is guided by the global saliency computed from the sparse vectors resulting from the sparse coding process; their sparsity acts as an adaptive simplification ratio in different regions. The proposed method applies low simplification rates in salient regions (borders, corners, high curvatures, valleys) and high simplification rates in relatively planar regions, while maintaining an appropriate density through an even distribution of points.
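The saliency measure summarized above can be sketched as follows. We assume the dictionary D is already available (e.g., trained with K-SVD [21]); using Orthogonal Matching Pursuit as the sparse coder and an equal-weight combination (w = 0.5) of the two measures are our illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def omp(D, x, tol=1e-6, max_atoms=None):
    """Orthogonal Matching Pursuit: greedily add the atom most correlated
    with the residual, re-fit the coefficients by least squares, and stop
    when the residual is small; returns the sparse coefficient vector."""
    max_atoms = max_atoms or D.shape[1]
    coef = np.zeros(D.shape[1])
    support = []
    r = np.asarray(x, dtype=float).copy()
    while len(support) < max_atoms and np.linalg.norm(r) > tol:
        j = int(np.argmax(np.abs(D.T @ r)))
        if j in support:                 # no further progress possible
            break
        support.append(j)
        c, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ c
    if support:
        coef[support] = c
    return coef

def saliency(D, X, w=0.5):
    """Per-point saliency for feature vectors stored as columns of X:
    points whose vectors need many atoms, or reconstruct poorly, score
    higher. Both terms are normalized to [0, 1] and mixed with weight w."""
    nnz, err = [], []
    for x in X.T:
        a = omp(D, x)
        nnz.append(np.count_nonzero(a))          # number of atoms used
        err.append(np.linalg.norm(x - D @ a))    # reconstruction error
    nnz = np.asarray(nnz, dtype=float) / max(max(nnz), 1)
    err = np.asarray(err, dtype=float) / max(max(err), 1e-12)
    return w * nnz + (1 - w) * err
```

With D = I3, the vector (1, 0, 0) needs one atom and (0, 1, 1) needs two, so the second point receives the higher saliency, mirroring how feature-rich points require more atoms than points on flat regions.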
The robustness and efficiency of our approach are demonstrated by experimental results showing that our method reduces the size of point clouds and retains their shape features without creating surface holes. Finally, the proposed method was compared with different state-of-the-art approaches, producing good simplification results and outperforming competing methods. As future work, we propose examining ways to automatically determine the regularization parameter λ and the dictionary size S. Other lines of future work are a formal mathematical characterization of when a point should be considered salient, and relating δ directly to the number of points to be simplified.

Author Contributions

Conceptualization, E.L.; methodology, E.L., G.S.-T., J.W.B.-B., F.A. and N.L.; software, E.L.; validation, E.L., G.S.-T., J.W.B.-B., F.A. and N.L.; formal analysis, E.L.; investigation, E.L.; writing—original draft preparation, E.L.; writing—review and editing, F.A., G.S.-T. and N.L.; supervision, G.S.-T. and J.W.B.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Administrative Department of Science and Technology of Colombia (COLCIENCIAS) under the doctoral scholarship program COLCIENCIAS 2015-727 and by the Universidad Nacional de Colombia, Medellín campus.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Levoy, M.; Ginsberg, J.; Shade, J.; Fulk, D.; Pulli, K.; Curless, B.; Rusinkiewicz, S.; Koller, D.; Pereira, L.; Ginzton, M.; et al. The digital Michelangelo project. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 131–144.
2. Chen, Y.; Yue, L. A method for dynamic simplification of massive point cloud. In Proceedings of the 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, Taiwan, 14–17 March 2016; pp. 1690–1693.
3. Lipman, Y.; Cohen-Or, D.; Levin, D.; Tal-Ezer, H. Parameterization-free projection for geometry reconstruction. ACM Trans. Graph. 2007, 26, 22.
4. Alexa, M.; Behr, J.; Cohen-Or, D.; Fleishman, S.; Levin, D.; Silva, C. Computing and rendering point set surfaces. IEEE Trans. Vis. Comput. Graph. 2003, 9, 3–15.
5. Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph. 2009, 28, 1–7.
6. Huang, H.; Wu, S.; Gong, M.; Cohen-Or, D.; Ascher, U.; Zhang, H. Edge-aware point set resampling. ACM Trans. Graph. 2013, 32, 1–12.
7. Ji, C.; Li, Y.; Fan, J.; Lan, S. A Novel Simplification Method for 3D Geometric Point Cloud Based on the Importance of Point. IEEE Access 2019, 7, 129029–129042.
8. Leal, N.; Leal, E.; Sanchez-Torres, G. A Linear Programming Approach for 3D Point Cloud Simplification. IAENG Int. J. Comput. Sci. 2017, 44, 60–67.
9. Pauly, M.; Gross, M.; Kobbelt, L.P. Efficient simplification of point-sampled surfaces. In IEEE Visualization; IEEE: Piscataway, NJ, USA, 2003; pp. 163–170.
10. Shi, B.-Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput. Des. 2011, 43, 910–922.
11. Liao, B.; Xiao, C.; Jin, L.; Fu, H. Efficient feature-preserving local projection operator for geometry reconstruction. Comput. Des. 2013, 45, 861–874.
12. Mahdaoui, A.; Bouazi, A.; Hsaini, A.M.; Sbai, E.H. Comparison of K-Means and Fuzzy C-Means Algorithms on Simplification of 3D Point Cloud Based on Entropy Estimation. Adv. Sci. Technol. Eng. Syst. J. 2017, 2, 38–44.
13. Liu, S.; Liang, J.; Ren, M.; He, J.; Gong, C.; Lu, W.; Miao, Z. An edge-sensitive simplification method for scanned point clouds. Meas. Sci. Technol. 2019, 31, 045203.
14. Chen, S.; Tian, D.; Feng, C.; Vetro, A.; Kovacevic, J. Fast Resampling of Three-Dimensional Point Clouds via Graphs. IEEE Trans. Signal Process. 2018, 66, 666–681.
15. Qi, J.; Hu, W.; Guo, Z. Feature Preserving and Uniformity-Controllable Point Cloud Simplification on Graph. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 284–289.
16. Zang, Y.; Yang, B.; Liang, F.; Xiao, X. Novel Adaptive Laser Scanning Method for Point Clouds of Free-Form Objects. Sensors 2018, 18, 2239.
17. Zhu, L.; Kukko, A.; Virtanen, J.-P.; Hyyppä, J.; Kaartinen, H.; Turppa, T. Multisource Point Clouds, Point Simplification and Surface Reconstruction. Remote Sens. 2019, 11, 2659.
18. Shoaib, M.; Cheong, J.; Kim, Y.; Cho, H. Fractal bubble algorithm for simplification of 3D point cloud data. J. Intell. Fuzzy Syst. 2019, 37, 7815–7830.
19. Zhang, K.; Qiao, S.; Wang, X.; Yang, Y.; Zhang, Y. Feature-Preserved Point Cloud Simplification Based on Natural Quadric Shape Models. Appl. Sci. 2019, 9, 2130.
20. Bao, C.; Ji, H.; Quan, Y.; Shen, Z. Dictionary Learning for Sparse Coding: Algorithms and Convergence Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1356–1369.
21. Aharon, M.; Elad, M.; Bruckstein, A.M. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
22. Olshausen, B.A.; Field, D.J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vis. Res. 1997, 37, 3311–3325.
23. Lee, H.; Battle, A.; Raina, R.; Ng, A.Y. Efficient sparse coding algorithms. In Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS), 2006; pp. 801–808.
24. Fan, Z.; Liu, E.; Xu, B. Weighted Principal Component Analysis. In Artificial Intelligence and Computational Intelligence; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7004, pp. 569–574.
25. Narváez, E.A.L.; Narváez, N.E.L. Point cloud denoising using robust principal component analysis. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Madeira, Portugal, 27–29 January 2018.
26. Pauly, M.; Keiser, R.; Gross, M.H. Multi-scale Feature Extraction on Point-Sampled Surfaces. Comput. Graph. Forum 2003, 22, 281–289.
27. Narváez, E.A.L.; Torres, G.S.; Bedoya, J.W.B. Point cloud saliency detection via local sparse coding. Dyna 2019, 86, 238–247.
Figure 1. Illustration of the steps involved in the proposed method to simplify a point cloud.
Figure 2. Different levels of saliency produced by thresholding the vector Sf with different values: (a) T = 0.9, (b) T = 0.8, (c) T = 0.7, and (d) T = 0.6.
Figure 3. Dynamic ratio.
Figure 4. The Fandisk model: (a) original 6475 points; (b) simplified to 1465 points; (c) simplified to 738 points.
Figure 5. The Asian dragon model: (a) original 3,609,600 points; simplified to (b) 410,208 points, (c) 78,268 points, (d) 30,487 points, (e) 12,621 points, (f) 8196 points, (g) 5758 points, (h) 3307 points, and (i) 1502 points.
Figure 6. The Max Plank model: (a) original 50,112 points; simplified to (b) 40,108 points, (c) 26,387 points, (d) 20,105 points, (e) 12,761 points, (f) 8898 points, (g) 6588 points, (h) 5108 points, and (i) 4100 points.
Figure 7. The Elephant model: (a) original, reconstructed with 24,955 points; (b) simplified, reconstructed with 167 points.
Figure 8. Variation of the parameter λ with the parameter δ = 0.36 fixed.
Figure 9. MSE variation vs. dictionary size.
Figure 10. Local surface approximation and error computation as the distance from pi to LNHi.
Figure 11. The Gargoyle, Horse, and Nicolo models: (a) original models; (b) models simplified at 50%; (c) models simplified at 10%.
Figure 12. Effect on the lighting and density in (a) the original Elephant model when (b) the point coordinates are not included, (c) the normal coordinates are not included, and (d) all three features are included. The arrows show some of the lighting zones.
Figure 13. Point cloud simplification of the Bunny model. (a) The original data set, number of points = 35,947; (b) HCS method, number of points = 4644; (c) GRID method, number of points = 4562; (d) WLOP method, number of points = 4572; (e) FRGR method, number of points = 4638; (f) FPUC method, number of points = 4644; (g) DFPSA method, number of points = 4566; (h) proposed SDBS method, number of points = 4517. The image (g) is taken from [7].
Figure 14. Point cloud simplification by different algorithms for the Elephant model. (a) The original data set, number of points = 24,955; (b) HCS method, number of points = 2184; (c) GRID method, number of points = 2684; (d) WLOP method, number of points = 2438; (e) FRGR method, number of points = 2164; (f) FPUC method, number of points = 2165; (g) DFPSA method, number of points = 2872; (h) proposed SDBS method, number of points = 2154. The image (g) is taken from [7].
Table 1. Test models with the original number of points and the sampling results at different simplification rates.
Model     | Original Points | Sampled 5% | Sampled 10% | Sampled 20% | Sampled 50%
Bunny     | 35,947 | 1797 | 3610 | 7186 | 17,976
Elephant  | 24,955 | 1246 | 2489 | 4991 | 12,478
Gargoyle  | 25,038 | 1253 | 2496 | 5008 | 12,522
Horse     | 48,485 | 2428 | 4872 | 9693 | 24,247
Max Plank | 50,112 | 2459 | 4892 | 9826 | 24,569
Nicolo    | 50,419 | 2519 | 5053 | 10,082 | 25,213
Fandisk   | 25,894 | 1249 | 2480 | 4974 | 12,437
Table 2. Values of δ used for different models, when they were simplified at 5%, 10%, 20% and 50%.
Model     | δ value at 5% | δ value at 10% | δ value at 20% | δ value at 50%
Bunny     | 0.00140 | 0.000968 | 0.000627 | 0.000348
Elephant  | 0.01233 | 0.008330 | 0.005700 | 0.003460
Gargoyle  | 0.08000 | 0.046700 | 0.030200 | 0.015000
Horse     | 0.00133 | 0.000900 | 0.000600 | 0.000333
Max Plank | 0.15330 | 0.076700 | 0.040000 | 0.015670
Nicolo    | 0.02533 | 0.018000 | 0.001180 | 0.006533
Fandisk   | 0.00053 | 0.000365 | 0.000266 | 0.000176
Table 3. Simplification results: comparison at different sampling rates (SRs) (5%, 10%, 20%, and 50%) using the Δmax and Δavg metrics between the proposed method and the state-of-the-art methods.
Methods: PSD, CV, FRGR, FPUC, SDBS (mesh-based and point-based; see text). Each cell reports Δmax / Δavg.

SR 5%     | PSD                 | CV                  | FRGR                | FPUC                | SDBS
Bunny     | 0.005065 / 0.000535 | 0.012529 / 0.000786 | 0.010989 / 0.000781 | 0.023727 / 0.001267 | 0.006019 / 0.000517
Elephant  | 0.029524 / 0.004453 | 0.071016 / 0.006473 | 0.079300 / 0.007481 | 0.076502 / 0.006723 | 0.032534 / 0.004307
Gargoyle  | 0.520473 / 0.096518 | 1.920498 / 0.129355 | 2.191607 / 0.130839 | 1.334782 / 0.222911 | 0.653588 / 0.090725
Horse     | 0.003544 / 0.000343 | 0.008435 / 0.000487 | 0.009263 / 0.000490 | 0.017894 / 0.001056 | 0.004340 / 0.000322
Max Plank | 1.301519 / 0.099681 | 3.702109 / 0.145190 | 2.802459 / 0.165132 | 4.725397 / 0.283958 | 1.618001 / 0.087707
Nicolo    | 0.134143 / 0.011021 | 0.370415 / 0.015898 | 0.331987 / 0.016816 | 0.292358 / 0.015014 | 0.149977 / 0.010250
Fandisk   | 0.206912 / 0.017887 | 1.251090 / 0.079158 | 0.441557 / 0.032766 | 0.627595 / 0.044314 | 0.215029 / 0.016890

SR 10%    | PSD                 | CV                  | FRGR                | FPUC                | SDBS
Bunny     | 0.004430 / 0.000308 | 0.008944 / 0.000426 | 0.007164 / 0.000416 | 0.012347 / 0.000496 | 0.005212 / 0.000287
Elephant  | 0.022318 / 0.002414 | 0.048641 / 0.003695 | 0.057984 / 0.003880 | 0.051137 / 0.003412 | 0.029524 / 0.002312
Gargoyle  | 0.038192 / 0.006216 | 1.283042 / 0.082879 | 2.162895 / 0.084856 | 0.968352 / 0.012522 | 0.054968 / 0.005627
Horse     | 0.002506 / 0.000190 | 0.006470 / 0.000261 | 0.005413 / 0.000245 | 0.008680 / 0.000369 | 0.003544 / 0.000171
Max Plank | 0.961236 / 0.059349 | 2.856882 / 0.084547 | 1.798308 / 0.085617 | 4.404936 / 0.135728 | 1.240950 / 0.049349
Nicolo    | 0.106050 / 0.006580 | 0.246437 / 0.009023 | 0.280581 / 0.009329 | 0.177455 / 0.008315 | 0.116171 / 0.005710
Fandisk   | 0.149206 / 0.009414 | 1.240782 / 0.050776 | 0.319061 / 0.020614 | 0.432045 / 0.016975 | 0.154839 / 0.009010

SR 20%    | PSD                 | CV                  | FRGR                | FPUC                | SDBS
Bunny     | 0.003009 / 0.000183 | 0.008241 / 0.000223 | 0.005630 / 0.000226 | 0.006616 / 0.000227 | 0.003475 / 0.000151
Elephant  | 0.017644 / 0.001367 | 0.039453 / 0.001812 | 0.046601 / 0.002064 | 0.035288 / 0.001663 | 0.019328 / 0.001171
Gargoyle  | 0.306220 / 0.040987 | 0.924313 / 0.047888 | 2.148394 / 0.050902 | 0.603874 / 0.069340 | 0.467759 / 0.033819
Horse     | 0.002046 / 0.000109 | 0.004798 / 0.000134 | 0.004340 / 0.000132 | 0.005011 / 0.000139 | 0.002506 / 0.000087
Max Plank | 0.679693 / 0.033580 | 2.000972 / 0.042923 | 1.301519 / 0.044349 | 1.359393 / 0.048901 | 0.961236 / 0.024317
Nicolo    | 0.094854 / 0.003902 | 0.217337 / 0.004568 | 0.189707 / 0.004759 | 0.157297 / 0.004514 | 0.094854 / 0.003155
Fandisk   | 0.101366 / 0.005030 | 1.240782 / 0.035560 | 0.242986 / 0.014598 | 0.411750 / 0.009057 | 0.130863 / 0.004398

SR 50%    | PSD                 | CV                  | FRGR                | FPUC                | SDBS
Bunny     | 0.002128 / 0.000094 | 0.006143 / 0.000072 | 0.003885 / 0.000077 | 0.002747 / 0.000067 | 0.002747 / 0.000060
Elephant  | 0.006863 / 0.000717 | 0.028450 / 0.000592 | 0.024952 / 0.000624 | 0.017644 / 0.000516 | 0.017644 / 0.000526
Gargoyle  | 0.228243 / 0.024144 | 0.653588 / 0.017368 | 1.436299 / 0.017210 | 0.368031 / 0.020228 | 0.250028 / 0.012773
Horse     | 0.002046 / 0.000054 | 0.003544 / 0.000043 | 0.002506 / 0.000041 | 0.004340 / 0.000038 | 0.002046 / 0.000033
Max Plank | 0.554970 / 0.016812 | 1.359393 / 0.015499 | 0.877484 / 0.013778 | 0.784846 / 0.012912 | 0.554970 / 0.009639
Nicolo    | 0.067072 / 0.001986 | 0.142280 / 0.001626 | 0.106050 / 0.001638 | 0.067072 / 0.001366 | 0.067072 / 0.001191
Fandisk   | 0.071676 / 0.002220 | 1.237327 / 0.030615 | 0.155924 / 0.009728 | 0.383764 / 0.005845 | 0.092534 / 0.001499
Table 4. Comparison of simplification time and preserved number of points.
Method | Preserved Points (Bunny) | Preserved Points (Elephant) | Bunny Running Time (s) | Elephant Running Time (s) | Language
SDBS   | 4517 | 2154 | 21.223 | 15.186 | MATLAB
DFPSA  | 4566 | 2872 | 56.156 | 26.220 | ---
FPUC   | 4644 | 2165 | 38.094 | 29.503 | MATLAB
FRGR   | 4638 | 2164 | 9.5740 | 1.0030 | MATLAB
WLOP   | 4572 | 2438 | 16.678 | 10.879 | C/C++
GRID   | 4562 | 2154 | 0.6920 | 0.5170 | C/C++
HCS    | 4644 | 2184 | 4.4590 | 3.1470 | C/C++
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
