Abstract

Indoor localization has continued to garner interest over the last decade or so because its realization remains a challenge. Fingerprinting-based systems are attractive because they intrinsically embody signal propagation-related information, unlike radio propagation models. Wi-Fi, an RF technology, is particularly well suited for indoor localization because it is so widely deployed that virtually no additional infrastructure is required. Since location-based services depend on the fingerprints acquired through the underlying technology, smart mechanisms such as machine learning are increasingly being incorporated to extract intelligible information from them. We propose CEnsLoc, a new, easy-to-train-and-deploy Wi-Fi localization methodology based on GMM clustering and Random Forest Ensembles (RFEs). Principal component analysis was applied for dimension reduction of the raw data. The conducted experimentation demonstrates that CEnsLoc provides 97% accuracy for room prediction, whereas artificial neural network-, k-nearest neighbor-, K*-, FURIA-, and DeepLearning4J-based localization solutions provided mean accuracies of 85%, 91%, 90%, 92%, and 73%, respectively, on our collected real-world dataset. CEnsLoc delivers high room-level accuracy with negligible response time, making it viable and well suited for real-time applications.

1. Introduction

Positioning systems, also known as localization systems, for both outdoor and indoor environments remain an ever-exciting area of research and development due to increasing market share in smart buildings, assistive and assisted living, safer metropolitan areas using geographical information systems, and tracking of IoT objects for commercial purposes. User localization is poised to soon reach a market share worth 2.6 billion dollars [1], especially involving indoor localization solutions. Localization for the outdoors has been successfully commercialized in the form of satellite-based technologies such as GPS, BeiDou, GLONASS, COMPASS, and GALILEO [2]. Indoor positioning cannot be performed using the same technologies because of non-line-of-sight (NLOS) conditions and occlusion. Radio frequency (RF) signals, on the contrary, do not require explicit LOS for operation.

The received signal strength indicator (RSSI), as a measure of RF signal quality for Wi-Fi, RFID, Bluetooth [3], ZigBee [4], and ultrawideband [5], has been used in several indoor positioning systems. Moreover, several kinds of sensory input such as images [6], video, ambient sound [7], accelerometer [8], magnetometer [9], pedometer, and gyroscope readings, and their amalgamation with the various aforesaid RF signals [10], have also been explored.

Several approaches and their combinations, such as time of arrival (TOA), time difference of arrival (TDOA), pedestrian dead reckoning (PDR), and angle of arrival (AOA) [11], have also been utilized indoors. Each of these has been shown to have serious limitations. For instance, PDR suffers from error propagation across successive predictions, precise clock synchronization between sender and receiver is a requirement for TOA- and TDOA-based systems, and specialized antennas are needed for AOA-based systems.

RSSI-based systems rely on two widely adopted mechanisms: location estimation using formalized propagation models, or fingerprints. Localization systems based on the former suffer from low precision at run time because of variability in channel behavior, including fading and shadowing, and also due to the heterogeneity of device types and form factors [12].

Wi-Fi networks are deployed as Access Points (APs) that are prevalent everywhere. Utilizing these to capture RSSI fingerprints (FPs) is conveniently possible with devices as simple as smartphones or phablets. Wi-Fi fingerprint-based localization has the following benefits: no extra hardware is required at either the sender or the receiver side; already existing infrastructure is utilized; it is easily implementable; and there is no essential need to build a propagation model, which may or may not depict real signal propagation at run time [13].

The infrastructure of APs allows us to collect a dataset of FPs at selected Reference Points (RPs), which essentially becomes a cue to the physical layout of the building, like a map. This dataset is then utilized to prepare the localization system in the training phase. Once trained, the system is ready to be used: for an unseen FP captured anew, the system returns a room number or an associated label as the location estimate.

In this paper, a new localization methodology is presented that uses a combination of a data reduction technique, principal component analysis (PCA); a soft clustering technique, the Gaussian mixture model (GMM); and bootstrap-aggregated (bagged) ensembles of decision trees, commonly referred to as Random Forest Ensembles (RFEs). We aspire to provide an infrastructure-less indoor localization methodology that is scalable, easy to deploy, and provides real-time responses with high room-level accuracy instead of explicit coordinates. First, PCA is employed for dimension reduction of the raw data. Then, clustering is performed to split the data into groups of similar observations to help the classifier better learn the data dynamics. Finally, a separate RFE is trained for every single cluster.

The remainder of the paper has been organized as follows. Section 2 presents related work. Preliminary experimentation results are summarized in Section 3. Section 4 provides details of the localization methodology that we have proposed. Section 5 delves into experimental setup, layout and results for validation. Conclusion along with possible future directions is showcased in Section 6.

1.1. Scope of the Study

It is important to declare the scope of the study here as a prelude to the paper that follows. The paper is aimed at proposing a new methodology that is put to application in a practical and particular environment. The resulting constraints and limitations of our experimental regime are natural consequences of that setting, specifically in terms of spatiotemporal aspects such as building size and type, the device used to collect fingerprints, and the time of the year. The fingerprints were collected at the Software Engineering Centre of our own university. The building is double-storey and has architectural diversity in terms of rooms, corridors, and an inlaid garden. Nonetheless, a more extensive dataset spanning multiple buildings could be obtained to ensure wider spatial diversity. Our dataset was collected using a single Android phone; however, with a large team using a variety of devices, data collection could be performed over different times of the year to obtain data for both training and testing of our approach.

2. Related Work

We summarize here the work on IPS based on Wi-Fi, i.e., the WLAN standard IEEE 802.11 (b, a, g, ac, or any other variant), or a combination of Wi-Fi with another wireless or sensory input. RADAR [14] from Microsoft® labs is the pioneering research work to employ Wi-Fi signals. Wi-Fi signals received at base stations (Access Points) from a laptop were used for predicting the user's coordinates using a k-NN-based method and triangulation, reporting a median error of 2-3 m. Li et al. [12] combined affinity propagation, a message-passing-based clustering algorithm, with a PSA-based artificial neural network (ANN) for Cartesian coordinate prediction. They reported a mean error as low as 1.89 m, with 90% of estimates within 2.9 m.

Song et al. [13] analyzed FP collection as an AP relevancy problem. Hidden Naïve Bayes (HNB) was used as a mechanism to infer the most relevant APs, and it was suggested that redundant APs may be eliminated for each RP through a variant of ReliefF combined with the Pearson product-moment correlation coefficient (PPMCC). Moreover, clustering was performed on the RPs, and one HNB was trained per cluster to approximate user coordinates. Cooper et al. [15] employed a combination of Wi-Fi and Bluetooth Low Energy radio signal FPs with a boosting technique targeting room-level prediction. They trained one classifier per room based on a variant of AdaBoost that conveniently harnessed decision stumps in a one-vs-all fashion. Using the combination of Wi-Fi + BLE, they achieved 96% accuracy with a 4.3E−03 second response time. Wang et al. [16], similar to Li et al., presented training of an ANN with back propagation based on PSA for RSSI measurements of RFID tags. They preprocessed the data by normalizing the dataset to the [0, 1] range and applying a Gaussian filter. They estimated x and y coordinates, reporting a mean error of 0.34 m.

Xu et al. [17] utilized a multilayer neural network (MLNN) for Wi-Fi signals along with network boosting. They trained the MLNN in the two stages commonly followed in deep learning, namely, pretraining using autoencoders and fine-tuning using the back propagation algorithm, reporting a mean error of 1.09 m. Zhang et al. [18] proposed a coarse localizer composed of a four-layered deep neural network using stacked denoising autoencoders, succeeded by a hidden Markov model-based fine localizer, reporting a mean error of 0.39 m.

Calderoni et al. [19] utilized RFID tags' RSSI values targeting room-level accuracy in a hospital environment. They divided the total area into macroregions using a k-means variant, followed by a Random Forest trained per macroregion. The final prediction was determined by the multiple random forests whose cluster matching score was greater than a particular threshold, with 83% reported accuracy. Jedari et al. [20] investigated room-level prediction using k-NN, the rule-based JRip, and a Random Forest classifier based on Wi-Fi signals. They concluded that Random Forest, with 91.3% accuracy, produced much better results than k-NN (77.4%) and JRip (72.2%).

Mo et al. [21] proposed the usage of the kernel PCA (KPCA) algorithm for coarse-level prediction of manually labelled clusters using Random Forest. They derived trained matrices from the extracted KPCA features and prepared subradio maps. For prediction, the features extracted during coarse positioning, refined by the trained KPCA matrices, were fed to weighted k-NN (WK-NN) for final coordinate estimation. They reported an accuracy of 93% with an error distance of 2 m. Górak et al. [22] employed Random Forest for finding important APs and applied threshold-based elimination. Based on these important APs, they determined malfunctioning APs during operation. They were able to report error rates as low as 4% for floor detection and a 2 m error for horizontal detection, compared with 30% and 7 m, respectively, when malfunctioning APs remained undetected. In a follow-up work [23], they divided the FP dataset into overlapping and/or nonoverlapping subsets according to the presence of RSSI readings from every AP, and one Random Forest was trained for each such subset. They compared the results against a baseline Random Forest, reporting around 5% to 9% improvement in the average reported error at the floor level, while the performance in terms of floor detection was unchanged.

The aforementioned are some recent efforts in the field of indoor localization using RF signals. k-NN works by storing all data samples along with their marked ground-truth labels. An unseen sample is compared with the complete dataset using a similarity measure/distance to determine its nearest neighbors, where k is the number of neighbors considered (possibly with per-neighbor weighting). The final decision on the sample's class is based on the majority of the labels of the k nearest neighbors. Several IPS such as Oussalah et al. [11] and Niu et al. [24], based on k-NN and its variants, do not scale well when the dataset grows because they require FP matching against the whole dataset. Moreover, artificial neural networks and deep learning have recently gained a great deal of attention for indoor localization. Artificial neural networks try to mimic the human brain. They form multiple layers of neurons, namely, a single input layer, a single output layer, and one or more hidden layers. At every layer, several neurons are connected to one another according to a specific configuration and triggering function. Every layer affects and triggers neurons in the next layer following the rules of the learning function, which eventually evolves into the final output. Heavy resource utilization is required during the training phase; however, their response time is negligible due to the minimal computation required at prediction time. IPS employing ANNs and deep learning, such as Li et al. [12], Ding et al. [25], León et al. [26], Zhang et al. [18], and Tuncer and Tuncer [27], face the challenge of tuning several parameters such as the optimal architecture, the no. of layers, the learning function, and the no. of neurons at every layer. Moreover, the convergence rate and the final accuracy of various configurations do not follow any specific trend. Sometimes, a simpler 2-layer configuration takes more time to converge than a 4-layer network, and the accuracy achieved by a 6-layer network may be lower than that obtained by a 3-layer ANN. Hence, heuristics are predominantly used for proposing ANN-based architectures because exhaustive (Monte Carlo) configuration testing is infeasible. An AP location change, or the addition/removal of APs, will necessitate retraining of the IPS, putting ANN and deep learning at a disadvantage.
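
To make the scaling concern concrete, the following minimal k-NN sketch (written in Python/NumPy purely for illustration; it is not part of any of the cited systems, and all names are hypothetical) shows that a single prediction must compute a distance to every stored fingerprint, so query time grows with the dataset size:

import numpy as np
from collections import Counter

def knn_predict(train_fps, train_labels, query_fp, k=3):
    # train_fps: (N, A) array of stored RSSI fingerprints; train_labels: length-N room labels.
    # Every query is compared against ALL N stored fingerprints, hence O(N * A) work per query.
    dists = np.linalg.norm(train_fps - query_fp, axis=1)   # Euclidean distance to every stored FP
    nearest = np.argsort(dists)[:k]                        # indices of the k closest fingerprints
    votes = Counter(train_labels[i] for i in nearest)      # majority vote among the k neighbours
    return votes.most_common(1)[0][0]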

Inspired by existing works, we suggest a clustering-based multiclass classifier approach for room-level prediction which is easy to train and deploy in terms of computational complexity, provides suitable accuracy, and offers response times appropriate for real-time applications. Clustering follows a divide-and-conquer approach to help the classifier better learn a group of similar observations instead of the whole dataset. We perform soft classification (clustering) of FPs, rather than of RPs, which has mostly been done in existing works. PCA-based dimension reduction helps us reduce the response time of the system by decreasing the number of predictors.

This approach scales well in terms of response time with an increase in the number of rooms, since a single classifier is invoked for each location prediction, while providing the highest accuracy reported so far to the best of our knowledge.

3. Preliminary Experiments

Preliminary experiments were performed on a sample dataset collected from our departmental building to evaluate classifier suitability [28]. The results presented here are for the sample dataset; detailed experimentation results on the complete dataset covering all locations in the building are presented in Section 5.

3.1. Dataset Acquisition

A customized app was developed for an Android phone, built to record the RSSI vectors coming from Wi-Fi APs. The Wi-Fi FPs of all observable APs, in both the 2.4 and 5 GHz bands, were scanned at each RP using a commercial off-the-shelf Samsung Galaxy J5 phone. FPs were obtained at each RP while rotating the smartphone from 0° up to a right angle (90°) with respect to the floor, as shown in Figure 1, so as to make the phone face N, NW, W, SW, S, SE, E, and NE, with an effort to keep occlusion due to the human torso to a minimum [5, 9]. The user stood at the centre of the RP, held the phone used for FP collection in hand, and captured multiple FPs in each of the 8 directions shown in Figure 1. A total of A APs were present in the premises, continuously emitting radio signals. These FPs, each with its respective room label, were then stored into a database.

3.2. Preprocessing

The resultant dataset was found to be sparsely populated with AP identifiers because of the presence/absence of particular APs at the various RPs labelled in the rooms and corridors of the building. The measured RSSI values varied between −98 dBm and −15 dBm (from weak to strong, depending on the distance from the APs). Consistent with the well-known practice of keeping missing values slightly weaker than the weakest signal detected in the dataset [12, 14, 18, 19], the missing values were replaced with −100 dBm.
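
A minimal sketch of this preprocessing step is shown below (Python/pandas, for illustration only; the fingerprint table, AP column names, and the MV_R constant are hypothetical stand-ins for our actual data handling):

import numpy as np
import pandas as pd

MV_R = -100.0  # replacement value, slightly weaker than the weakest observed RSSI (-98 dBm)

def preprocess(fp_table: pd.DataFrame) -> pd.DataFrame:
    # APs that were not heard at a reference point appear as NaN and are replaced with MV_R.
    return fp_table.fillna(MV_R)

# Hypothetical example: two fingerprints over three APs, with AP3 unheard in the first scan.
raw = pd.DataFrame({"AP1": [-45, -51], "AP2": [-70, -68], "AP3": [np.nan, -90]})
print(preprocess(raw))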

3.3. Classifier Evaluation

We evaluated the performance of over 60 classifiers in WEKA for room-level prediction on the sample dataset, out of which the performances of the top ten classifiers are summarized in Table 1.

Taking into account all the performance measures, from accuracy to the receiver operating characteristic (ROC) area, the best overall performance (in descending order) was attained by K*, k-nearest neighbors (k-NN), the Random Forest Ensemble (RFE), the Fuzzy Unordered Rule Induction Algorithm (FURIA), the multilayer perceptron, deep learning for Java (Dl4jMlpClassifier), support vector machines, the Naive Bayes classifier, and finally AdaBoost using decision stumps. K* and k-NN are both instance-based classifiers and produced similar performance results. They were followed by RFE, FURIA, and the multilayer perceptron (ANN); FURIA and the ANN show an almost similar performance trend. We selected k-NN and ANN for comparison because many existing works have utilized them, which makes comparison with other works easier. Moreover, we also chose the K*, FURIA, and DeepLearning4J classifiers for comparison, as they are state-of-the-art and relatively new machine learning methods. RFE is suited to large datasets, offering high accuracy and time efficiency. It is resilient to noise in the data and is also capable of dealing with missing values. RFE utilizes bootstrapping, which reduces variance and keeps the bias in check, because creating different subsets of the training dataset with replacement ensures that the trees have little or no correlation. Therefore, overfitting is avoided, making it more generalizable. Both the training and prediction times of RFE benefit from the parallel computation supported by bagging, making it suited for real-time implementation of an IPS. Hence, we selected RFE [29] as the classifier module in our proposed methodology.

4. Proposed Localization Methodology

4.1. Problem Formulation

We formulate localization/positioning as a combination of clustering and a multiclass classification problem, where each room is considered to be a class. A two-dimensional indoor area is partitioned into R square grids of dimensions C × D m². The centre of each square grid is a reference point (RP). A device equipped with a wireless adapter card can sense wireless signals from a total of A APs at a certain RP at a given time, which forms the fingerprint FPi = {RVi, Li}, with RVi = {rssi_i1, rssi_i2, rssi_i3, …, rssi_iA}, where rssi_ij symbolizes the RSSI value (in dBm) from the jth AP in the ith collected FP sample and Li is the respective class/room label. Let N such FPs constitute the dataset. The localization function LF is learnt from the FP dataset to map an observed FP to a certain room label Lx, as described by the following equation:

Lx = LF(RVx).    (1)

There are two phases in the proposed methodology (CEnsLoc), namely, a training phase and a prediction phase, as shown in Figure 2. First of all, a sparse Wi-Fi FP dataset with many missing values is collected. In the training phase, the collected FPs reserved for training the system are preprocessed for missing value replacement, followed by the application of PCA for dimension reduction. GMM-based hard clustering is then used to generate nonoverlapping/disjoint data subsets. For each such subset, a separate RFE is trained for location prediction and stored in the database. In the location prediction phase, the same steps of missing value replacement and PCA computation are performed on the captured FP. The FP is then matched to a single cluster using the stored GMM. The final prediction Lx is generated by invoking the pretrained RFE of the best matched cluster/subset. These phases are formally elaborated in detail in Sections 4.2 and 4.3, respectively.
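
The following sketch outlines both phases using Python and scikit-learn purely for illustration; our implementation was actually developed in MATLAB (Section 5), the class name CEnsLocSketch is hypothetical, and max_leaf_nodes is only an approximate stand-in for the maximum-splits cap Smax:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

class CEnsLocSketch:
    # Illustrative pipeline: missing value replacement -> PCA -> GMM clustering -> one RFE per cluster.

    def __init__(self, n_pcs=23, n_clusters=2, n_trees=132, max_features=8, max_splits=1024, mv_r=-100.0):
        self.mv_r = mv_r
        self.pca = PCA(n_components=n_pcs)
        self.gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag", random_state=0)
        self.n_trees, self.max_features, self.max_splits = n_trees, max_features, max_splits
        self.forests = {}

    def _prep(self, fps):
        fps = np.asarray(fps, dtype=float)
        return np.where(np.isnan(fps), self.mv_r, fps)        # missing value replacement (MVr)

    def fit(self, fps, labels):
        labels = np.asarray(labels)
        reduced = self.pca.fit_transform(self._prep(fps))      # dimension reduction
        clusters = self.gmm.fit_predict(reduced)               # hard assignment to GMM components
        for c in np.unique(clusters):                          # one Random Forest Ensemble per data subset
            idx = clusters == c
            rfe = RandomForestClassifier(n_estimators=self.n_trees,
                                         max_features=self.max_features,
                                         max_leaf_nodes=self.max_splits)
            self.forests[c] = rfe.fit(reduced[idx], labels[idx])
        return self

    def predict(self, fp):
        reduced = self.pca.transform(self._prep(np.atleast_2d(fp)))
        c = int(self.gmm.predict(reduced)[0])                  # match the FP to a single cluster
        return self.forests[c].predict(reduced)[0]             # room label from that cluster's RFE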

4.2. Training Phase

During training, the training dataset is fed to the preprocessing module, which replaces empty readings with the missing value replacement (MVr). PCA is then performed on the dataset for dimension reduction: an orthogonal transformation is applied by PCA to remove redundant information and decrease the number of predictors. The principal components (PCs) obtained by applying PCA are a set of linearly uncorrelated variables such that the maximum variance of some projection of the data is captured by the first principal component, and so on. Choosing the smaller of the number of predictors/APs and the number of samples minus one, A PCs are generated {PC1, PC2, …, PCA}. For computation of the PCs, the mean RSSI value of each AP is first subtracted from the ith sample using the following equation:

rssi′_ij = rssi_ij − (1/N) ∑_{i=1}^{N} rssi_ij,    (2)

where N is the total no. of samples (rows) in the dataset and R is the total no. of RPs. The PC matrix is then computed by the following equation:

PC = X′ W,    (3)

where X′ is the mean-centred RSSI matrix and W is the eigenvector matrix of its covariance matrix. The resulting dataset is then divided into K data subsets using GMM clustering. Equation (4) is a 2D Gaussian distribution whose mean is represented by µ and whose covariance matrix is ∑:

N(x | µ, ∑) = (1 / (2π |∑|^(1/2))) exp(−(1/2) (x − µ)^T ∑^(−1) (x − µ)).    (4)

A GMM with K overlapping distributions is described by the following equations:

p(x) = ∑_{k=1}^{K} π_k N(x | µ_k, ∑_k),    (5)

π_1 + π_2 + … + π_K = 1,    (6)

where π_k defines the mixing coefficient expressing the weight of each mixture component (the weights sum to 1). The resultant shape of the mixture is the weighted combination of the individual distributions in terms of their means, covariances, and mixing coefficients. By taking such a linear mix of weighted coefficients for each distribution's mean and covariance, and by incorporating sufficiently many distributions, a final density function of the desired shape may be obtained. The reason behind choosing GMM clustering was the similarity between the Gaussian distribution and the radio propagation characteristics of a Wi-Fi AP [30], which makes GMM a highly suitable candidate for clustering Wi-Fi RSSI vectors.
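
For illustration, the mean-centring and projection of Equations (2) and (3) can be sketched as follows (Python/NumPy; an illustrative sketch rather than our MATLAB code, with hypothetical function and variable names):

import numpy as np

def principal_components(rssi, n_pcs):
    # rssi: (N, A) matrix of preprocessed RSSI values; returns its first n_pcs principal components.
    centred = rssi - rssi.mean(axis=0)          # Equation (2): subtract each AP's mean RSSI
    cov = np.cov(centred, rowvar=False)         # A x A covariance matrix of the centred data
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition of the symmetric covariance matrix
    order = np.argsort(eigvals)[::-1]           # order components by decreasing captured variance
    W = eigvecs[:, order[:n_pcs]]               # eigenvector matrix W restricted to n_pcs columns
    return centred @ W                          # Equation (3): projection onto the principal components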

Furthermore, each data subset is used to train an RFE as a multiclass classifier that predicts the room label. The trained GMM model as well as all RFEs are stored for later use in the prediction phase.

The algorithm for training and prediction phases of CEnsLoc is given in Algorithm 1.

Input: training dataset with total A predictors
 Missing value replacement MVr
 Maximum number of clusters Kmax
Output: predicted location Lx
For training:
Replace empty values with MVr
Apply PCA on dataset to generate A’ predictors
For k = 1 to Kmax
 Generate k clusters using GMM
 Generate and save the k data subsets
 For each p ∈ the k data subsets
  Train an RFE for subset p using Algorithm 2 (training)
  Calculate performance measures
 End for
End for
Choose optimal configuration
Save respective models for GMM, all RFEs
For prediction at a new point x:
Replace missing values with MVr
Apply PCA on the FP
Match one cluster Cmatch
Invoke RFE of Cmatch using Algorithm 2 (prediction)

Let the total number of samples in the training set be Nsample, the number of trees be Ntree, the maximum number of splits allowed be Smax, the number of predictors for the classifier be A’, f be the number of input predictors utilized for splitting at a tree node, and tc the total number of classes in the dataset. For finding the best split, RFE uses the Gini Index, given in the following equation, where Pj is the relative frequency of class j among the Nsample samples:

Gini = 1 − ∑_{j=1}^{tc} Pj².    (7)
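
A direct transcription of Equation (7) is sketched below (Python/NumPy, for illustration only; the function name is hypothetical):

import numpy as np

def gini_index(labels):
    # Gini impurity of a node: 1 - sum over classes j of Pj^2, as in Equation (7).
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()                 # relative frequency Pj of each class j
    return 1.0 - np.sum(p ** 2)

# Hypothetical node holding samples from three rooms; a lower value indicates a purer node.
print(gini_index(["L1", "L1", "L2", "L3"]))   # 0.625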

RFE is trained using the method presented in Algorithm 2.

Input: data subset with total A’ predictors for training
 No. of trees Ntree
 Allowed maximum no. of splits Smax
 Random no. of predictors/features f
Output: estimated location Lx
For system training:
Step 1: For l = 1 to Ntree
 (i) Choose a bootstrap sample set (SS) of size Nsample, with replacement, from the training data subset
 (ii) Grow a Random Forest tree (Tl) on SS by recursively repeating steps (a)–(c) for every terminal node of the tree, until the maximum no. of splits (Smax) is reached
  (a) Randomly select f features/variables from the A’ predictors (f << A’)
  (b) Choose the best feature/split-point among the f using the Gini Index
  (c) Split the node into two children nodes
Step 2: Output the resulting ensemble of trees {Tl}, l = 1, …, Ntree
For location prediction at a new point x from the RFE:
 Let Lm(x) be the room/class prediction of the mth tree of the RFE
 Lrf(x) = majority vote of {Lm(x)}, m = 1, …, Ntree
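
The bootstrap-and-vote structure of Algorithm 2 can be sketched as follows (an illustrative Python approximation using scikit-learn decision trees; max_leaf_nodes and max_features serve as stand-ins for Smax and f, and the function names are hypothetical):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_rfe(X, y, n_trees=132, s_max=1024, f=8, seed=0):
    # Algorithm 2 (training): grow n_trees decision trees on bootstrap samples of the data subset.
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))            # bootstrap sample SS drawn with replacement
        tree = DecisionTreeClassifier(max_features=f,          # f random features examined per split
                                      max_leaf_nodes=s_max)    # caps tree growth, analogous to Smax
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def predict_rfe(trees, x):
    # Algorithm 2 (prediction): majority vote over the per-tree room predictions.
    votes = [t.predict(np.asarray(x).reshape(1, -1))[0] for t in trees]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]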
4.3. Prediction Phase

The collected RSSI FPs are averaged and fed to CEnsLoc, and A PCs are computed by the same method as in the training phase. The saved GMM model is invoked for cluster matching. The matched cluster's trained RFE is then invoked, where the final decision is computed by the majority vote described in Algorithm 2.

4.4. Time Complexity of Training and Prediction Phases for CEnsLoc

Ceteris paribus, the time complexity of the training phase and the prediction phase for CEnsLoc is essentially dependent upon the size of the experimental area, our acquisition regimen, the resultant dataset of FPs, and how the dataset is manipulated by the tandem of schemes we employ.

4.4.1. Time Complexity of Training

In the training phase, the time complexity of PCA is governed by the following equation:

O(A² · N + A³),    (8)

where A = no. of predictors and N = no. of observations.

For training a decision tree (DT) that has not been pruned, the expression is as follows:

O(A · N² · log N).    (9)

As an RFE consists of numerous DTs and merely a small no. f of the total A predictors is considered at each split, the complexity for a single DT in the RFE is represented by Equation (10) and the complexity of Ntree trees by Equation (11):

O(f · N² · log N),    (10)

O(Ntree · f · N² · log N),    (11)

where Ntree = no. of trees in the RFE and f = no. of random features that are chosen to find the best split.

When the depth to which trees are grown is controlled using Smax, the training complexity of one RFE is as follows:

O(Ntree · f · N · log Smax).    (12)

Since K ensembles are grown for predicting the room level, the time complexity to train them is represented by the following equation:

O(K · Ntree · f · N · log Smax).    (13)

For GMM, the complexity is expressed by the following equation:

O(N · K · D²),    (14)

where N = no. of samples, K = no. of components, and D = no. of dimensions.

Incorporating all of the above, the time complexity to train CEnsLoc is as follows:

O(A² · N + A³ + N · K · D² + K · Ntree · f · N · log Smax).    (15)

4.4.2. Time Complexity of Prediction

For prediction, the time complexity of PCA (projecting a single FP onto the retained PCs) is given by the following equation:

O(A · A’).    (16)

The complexities of a DT and of an ensemble in terms of prediction time are shown by Equations (17) and (18), respectively:

O(log N),    (17)

O(Ntree · log N).    (18)

Since Smax controls the depth of the trees, the prediction complexity of an ensemble is given by the following equation:

O(Ntree · log Smax).    (19)

Only a single RFE out of K ensembles gets invoked for predicting a room.

The time complexity of GMM cluster matching for a single FP is therefore given by the following equation:

O(K · D²).    (20)

Finally, the overall complexity of CEnsLoc in terms of prediction is as follows:

O(A · A’ + K · D² + Ntree · log Smax).    (21)

5. Experimental Results and Discussion

This section describes the hardware equipment, the software used, and the particulars of the experiments that were used to evaluate the performance of CEnsLoc in terms of accuracy, precision, recall, training time, and response time.

An Intel machine (64-bit Xeon X5650) with a master clock at 2.67 GHz, 24 GB RAM, and 64-bit Windows 10 Education was used for experimentation in MATLAB. The real dataset was developed through FP collection on the ground floor of the Software Engineering (SE) Centre, University of Engineering and Technology (UET), Lahore, Pakistan. The building's dimensions are 39 m × 31 m (1209 m²), and it contains offices, class rooms, laboratories, and open corridors. Figures 3 and 4 depict the building's floor plan, the room labels (L1–L10 closed rooms, L12 an open corridor, and L11 a semiopen room), the total of 180 RPs, and the number of samples collected per room. The RPs are represented by small coloured dots in the rooms in Figures 3 and 4, which also list the total number of samples collected at each location L1–L12. This notion of rooms was used because walls play a crucial role in the fluctuation of Wi-Fi RSSI values [31, 32]. The area was planned as a grid of 1.5 × 1.5 m² cells, and each cell centre was marked as a Reference Point (RP) for FP collection.

The complete dataset consisted of 20087 Wi-Fi RSSI FPs, in which a total of 40 APs were detected. Figure 4 depicts the number of FPs captured in each room/location marked L1–L12. It must be noted that all these APs belong to the university infrastructure comprising the SE centre and its immediate neighboring buildings. The FPs were preprocessed following the same process described in Sections 3.1 and 3.2. PCA-based dimension reduction was employed, with optimal results found using 23 PCs. The resultant dataset was divided into training and testing subsets with a stratified 70:30 ratio. The experimental results are discussed using both 10-fold cross validation (10-CV) on the training subset and the unseen 30% test subset. GMM clustering then partitioned the training data into further subsets; the optimal configuration was two clusters with shared and diagonal covariance. Furthermore, one RFE was trained for each subset with 132 trees, 1024 maximum splits, and 8 random features. CEnsLoc was hosted on a machine, and the mean of the observed RSSI FPs was used for run-time location estimation after applying the same PCA model and GMM cluster matching finalized in the training phase. The best matched cluster's respective RFE was invoked for room prediction. Various companion apps can query the location of subscribed users using the IPS configured as a server. Response time was computed as the average difference between the localization query time and the prediction generation time. The following formulae were used to compute the performance parameters:

Accuracy = (TP + TN) / (TP + TN + FP + FN),

Precision = TP / (TP + FP),

Recall = TP / (TP + FN),

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
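
The evaluation protocol can be sketched as follows (illustrative Python/scikit-learn code run on synthetic stand-in data; it reuses the hypothetical CEnsLocSketch class sketched in Section 4 and is not the MATLAB code that produced the reported results):

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1200 fingerprints over 40 APs with 12 room labels (L1-L12).
rng = np.random.default_rng(0)
fps = rng.uniform(-98, -15, size=(1200, 40))
labels = np.array(["L%d" % (i % 12 + 1) for i in range(1200)])

# 70:30 stratified split, as in the experiments described above.
X_tr, X_te, y_tr, y_te = train_test_split(fps, labels, test_size=0.30,
                                          stratify=labels, random_state=0)

model = CEnsLocSketch(n_pcs=23, n_clusters=2, n_trees=132, max_features=8).fit(X_tr, y_tr)
y_pred = np.array([model.predict(fp) for fp in X_te])

print("accuracy :", accuracy_score(y_te, y_pred))
print("precision:", precision_score(y_te, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_te, y_pred, average="macro", zero_division=0))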

5.1. Classification Effectiveness and Efficiency

Tables 2 and 3 summarize the 10-CV performance evaluation of CEnsLoc, comparing it with k-NN [33], artificial neural network (ANN) [34], K* [35], FURIA [36], and DeepLearning4J [36]. The best results are highlighted throughout in boldface. The k-NN results were averaged over six different configurations based on the number of neighbors, the similarity measure employed, and the neighbor weighting. Similarly, six different configurations of ANN, namely, 2-, 3-, and 4-layer ANNs, each with a varying number of neurons per layer and employing the two learning algorithms SCG and RBP, were averaged to obtain the results. For K*, the entropic blend percentage was varied from 10 to 90 percent. The FURIA results were obtained by varying the number of folds for growth and pruning as well as the minimum instance weight for each split. The DeepLearning4J results were obtained by varying the number of neurons per layer, the number of dense layers in the network, and the training algorithm, out of the many possible variations of the hyperparameters. The CEnsLoc results were generated with the optimal configuration of our proposed approach found earlier. For all approaches, the same set of configurations as used during the 10-CV performance evaluation was used to generate results on the 30% unseen stratified test data subset, which are presented in Table 4. The best performance on both the 10-CV results (Table 2) and the test dataset (Table 4) was obtained by CEnsLoc with 97% and 95% accuracy, followed by FURIA (92%, 90%), k-NN (91%, 89%), K* (90%, 87%), ANN (85%, 82%), and DeepLearning4J (73%, 71%), respectively. The results are presented on both the 10-CV and the unseen test data subset to show that, for a small dataset, 10-CV can provide a good performance estimate, as the results on the test data subset showed trends similar to those of the 10-CV results. Although deep learning and ANN provide good performance results, finding an optimal configuration among a huge number of tunable parameters is a tricky task. Moreover, during the experiments it was found that, despite generic guidelines for parameter tuning, the performance measures and convergence rate of ANN and deep learning schemes fluctuate greatly with even a slight variation in the number of neurons per layer, the training algorithm, and the number of hidden layers, which makes it harder to find optimal configurations for ANN and deep learning models. Lazy, instance-based approaches such as k-NN and K* are also good candidates for indoor localization; however, they have a very limited number of tunable parameters and were unable to surpass CEnsLoc's performance despite different parameter combinations being tried. They do not generalize as well as RFE, since the final prediction depends heavily on the majority vote of the k closest matching samples in the dataset. They also need template matching against the entire dataset for a single location prediction, so the response time grows with the number of samples in the dataset, which is highly likely in practical real-world scenarios: in a typical building there are quite a large number of visible APs, and a sufficiently large number of samples is also required for the classifiers to work properly.

A minimum response time of 2.05E−05 seconds was obtained by FURIA. DeepLearning4J stood second with a 6.82E−05 second response time. ANN, k-NN, and CEnsLoc all had response times on the order of E−04 seconds, which is 10 times slower than the two aforementioned approaches, but the difference is trifling and cannot be detected by any human user of the system, whereas the accuracy, precision, and recall provided by CEnsLoc were much greater than those of all other approaches.

FURIA, an upgraded version of the RIPPER algorithm, stands out as the second best performer for indoor localization, indicating that rule-based algorithms, especially fuzzified versions with good generalization capabilities, perform well for location estimation as well. However, it lagged behind CEnsLoc in terms of accuracy, precision, and recall by 5%, 10%, and 15%, respectively.

The details of both the 10-CV and test dataset results for all compared IPS and CEnsLoc are depicted in Figure 5 for side-by-side visual comparison.

5.2. Out-of-Bag (OOB) Error Results

Out-of-bag (OOB) error is a performance measure peculiar to RFE. During training of an RFE, data subsets are generated with replacement, resulting in some repeated observations and some left-out (OOB) observations for each tree; a particular tree is not trained on its left-out OOB observations. The prediction capability of an RFE can therefore be measured using the "OOB loss", which is the error made on the unseen OOB observations during training. The OOB loss measure has been investigated and shown to provide an upper bound on the testing error [37], and it is specifically useful for small-sized datasets. Hence, the OOB error can be used in place of an unseen test data subset if the available dataset is small or an unseen test dataset is unavailable; it provides a very good estimate of the trained classifier's generalization capability. Table 5 summarizes the OOB loss compared with the averaged 10-CV loss, indicating that it indeed bounds it.
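
For illustration, the sketch below (Python/scikit-learn on synthetic stand-in data, not our MATLAB implementation) obtains the OOB estimate by enabling oob_score on a Random Forest and compares it with a 10-fold cross-validation error:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-98, -15, size=(600, 40))                  # synthetic stand-in fingerprints
y = np.array(["L%d" % (i % 12 + 1) for i in range(600)])   # synthetic room labels

rfe = RandomForestClassifier(n_estimators=132, oob_score=True, random_state=0).fit(X, y)
oob_error = 1.0 - rfe.oob_score_                           # error on the left-out (OOB) observations
cv_error = 1.0 - cross_val_score(rfe, X, y, cv=10).mean()  # 10-fold cross-validation error

print("OOB error: %.3f, 10-CV error: %.3f" % (oob_error, cv_error))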

6. Conclusion and Future Work

Location prediction/estimation provides meaningful context for a broad range of services and applications. Indoor localization can open up altogether a plethora of new opportunities because humans spend most of their time indoors. CEnsLoc offers a short response time and an overall improvement in accuracy, precision, and recall. With only a few parameters to be tuned, it is well suited for FP-based localization, which requires frequent recollection of data and retraining. CEnsLoc was able to attain 97% accuracy in comparison with other IPS averaged over 6 different configurations, namely, FURIA, k-NN, K*, ANN, and DeepLearning4J, with 92%, 91%, 90%, 85%, and 73% accuracy, respectively. It can be utilized for elderly assistance, navigation, smart buildings, and smart transportation, to name a few potential applications.

Our future work includes deployment of CEnsLoc across a wide range of civil infrastructures including different floors and/or buildings to understand its performance in more detail as well as its scalability using crowdsourcing. We also aim to build safety, security, and evacuation guide applications with CEnsLoc at their core for users in offices, universities, and retail. Furthermore, integration with GPS, Bluetooth, and PDR to further enhance accuracy, utilizing available hybrid technologies at a given time, is also part of the plan.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07048697).