Abstract

This paper presents a novel hybrid simulation method based on the combination of an in-house developed 3D ray launching algorithm and a collaborative filtering (CF) technique, which is used to analyze the performance of ZigBee-based wireless sensor networks (WSNs) that enable ambient assisted living (AAL). The combination of Low Definition results obtained by means of a deterministic ray launching method and the application of a CF technique leads to a drastic reduction of the time and computational cost required to obtain accurate simulation results. The paper also shows that this kind of complex AAL indoor scenario with multiple wireless devices requires a thorough and personalized radioplanning analysis, as radiopropagation depends strongly on the network topology and the specific morphology of the scenario. The wireless channel analysis performed by our hybrid method provides valuable insight into the network design phases of complex wireless systems, typical of AAL-oriented environments. This insight helps to optimize network deployment, reducing overall interference levels and increasing overall system performance in terms of cost reduction, transmission rates, and energy efficiency.

1. Introduction

The wide adoption of diverse technological elements, in particular those related to Information and Communication Technologies, is a key driver in the transformation, provision, and delivery of healthcare services. Traditionally, healthcare services have required large amounts of resources and, in many cases, direct contact between patients and health specialists, both for diagnosis and for treatment. In the past decade, with the steady adoption of software solutions and seamless connectivity, electronic health (e-Health) and mobile health (m-Health) have enabled ambient assisted living (AAL) and context-aware scenarios. In this context, real-time monitoring and interaction of patients and users with health specialists can be performed remotely [1, 2], decreasing overall costs, increasing quality of life, reducing patients’ displacements, and allowing them to live in their own homes [3, 4]. Parameters such as biomedical signals, drug distribution, patients’ behavior, interactions, and alarm signals can be readily collected and analyzed. The interaction of these localized AAL solutions within a smart city or smart region has given rise to the Smart Health concept [5].

The implementation of context-aware environments relies on the use of a wide variety of communication systems, very particularly wireless communication solutions, which allow seamless connectivity by means of different wireless infrastructures [6]. The sustained use of wireless communication systems has led to the adoption of adaptive modulation and coding and spectrum allocation schemes in order to optimize coverage/capacity requirements. Interference control, in particular, plays a vital role in the performance of wireless systems, especially in 4G and 5G communication systems. The overall power spectral density of interference depends on the network topology, the spatial concentration of users, and the intrinsic characteristics of network terminals and access points/base stations. When considering the implementation of AAL environments and Smart Health solutions, we need to face scenarios that are complex in terms of wireless propagation, owing to the presence of multiple elements, such as furniture, the building structure, or people, which can give rise to strong fading effects in a nonuniform manner. Moreover, user density also affects interference levels, with density values that can exhibit strong variations as a function of scenario location or time of analysis, particularly in the case of interaction with wearable devices, wireless body area networks, or device-to-device connections. Accurate estimation of these interference levels will help to avoid possible communication errors, which in some cases, such as e-Health applications in which medical sensors take part, could be critical [7].

Wireless channel characterization, in terms of useful received signal levels for a given set of connections as well as for potential interfering connections, can be a challenging task due to the large size of scenarios, the existence of multiple frequency-dependent materials, and the inherent variability of mobile connections caused by the movement of potential scatterers. Several approaches can be followed, from empirical estimations, which provide results at low computational cost but with low precision (requiring site-specific calibration measurements), to full-wave simulation techniques, which provide high accuracy at very large computational cost. As a midpoint, deterministic techniques, such as ray launching, can provide adequate accuracy while reducing computational cost. However, as the scenario size grows, the computational cost of ray launching techniques also increases. Approaches such as source definition based on the Huygens box approximation or the combination of 3D ray launching with neural networks reduce the computational effort when the size of the scenario grows [8, 9].

In this paper, a novel approach based on the combination of an in-house developed 3D ray launching code and a collaborative filtering (CF) technique applied to bidimensional calculation points is used to analyze the performance of wireless channels emulating AAL scenarios. The studied scenarios have an inherent complexity and a large number of users and interferers. The aim of the proposed method is to provide optimal deployment strategies for massive wireless systems and wireless sensor networks. The application of the presented method, in combination with new optimization algorithms [10–12], will enhance WSN performance in AAL and context-aware environments.

2. Hybrid Simulation Technique

In this section, the description of the in-house developed 3D ray launching code combined with a collaborative filtering approach is presented. This hybrid method will be used to analyze the behavior of ZigBee wireless sensor networks (WSNs) in AAL environments with the aim of drastically reducing the computational time required to obtain accurate simulation results.

2.1. 3D Ray Launching

The 3D ray launching code is based on Geometrical Optics (GO) and the Uniform Theory of Diffraction (UTD). The principle of the algorithm is that rays are launched within a solid angle, with an angular resolution given by the input parameters Δθ and Δφ. Electromagnetic phenomena such as reflection, refraction, and diffraction are taken into account. We also consider the properties of all the obstacles within the environment, that is, the conductivity and permittivity of their different materials at the frequency of the system under analysis. It is worth noticing that a grid is defined in the space and the ray launching parameters are stored at each cuboid during the propagation of each ray. The configuration parameters of the system are frequency, power, gain, polarization and directivity of the transceivers, bit rate, angular resolution of the launched and diffracted rays, and number of reflections. The received power is calculated as the sum of the electric field vectors inside each cuboid of the defined mesh. When a ray hits an obstacle, new reflected and transmitted rays are generated with new angles given by Snell's law. When a ray hits an edge, a new family of diffracted rays is generated, as can be seen in Figure 1, with the diffraction coefficients given by the UTD, shown in (1) [13, 14]:

$$D = \frac{-e^{-j\pi/4}}{2n\sqrt{2\pi k}}\left[\cot\!\left(\frac{\pi+(\phi-\phi')}{2n}\right)F\!\left(kLa^{+}(\phi-\phi')\right)+\cot\!\left(\frac{\pi-(\phi-\phi')}{2n}\right)F\!\left(kLa^{-}(\phi-\phi')\right)\right.$$
$$\left.+\,R_{0}\cot\!\left(\frac{\pi-(\phi+\phi')}{2n}\right)F\!\left(kLa^{-}(\phi+\phi')\right)+R_{n}\cot\!\left(\frac{\pi+(\phi+\phi')}{2n}\right)F\!\left(kLa^{+}(\phi+\phi')\right)\right] \tag{1}$$

where $n\pi$ is the wedge angle, $F$, $L$, and $a^{\pm}$ are defined in [13], and $R_{0}$ and $R_{n}$ are the reflection coefficients for the appropriate polarization for the 0 face or $n$ face, respectively. The complete approach has been explained in detail in [15], and it has been used successfully in different complex indoor environments [16–22].
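To make the received power computation concrete, the following minimal sketch (not the authors' implementation; the cuboid size, grid dimensions, and effective aperture a_eff are illustrative assumptions) sums the complex E-field phasors that rays deposit in a cuboid of the mesh and converts the phasor sum to power:

```python
import numpy as np

# Minimal sketch (not the authors' code): rays deposit complex E-field phasors
# in the cuboids of the mesh; the phasor sum of each cuboid is converted to
# received power. Cuboid size, grid dimensions and the effective aperture
# a_eff are illustrative assumptions.

CUBOID = 0.25                                   # cuboid edge in metres (assumed)
grid = np.zeros((37, 30, 11), dtype=complex)    # 9.05 m x 7.255 m x 2.625 m scenario

def deposit(ray_field, i, j, k):
    """Add the complex E-field phasor (V/m) carried by a ray to cuboid (i, j, k)."""
    grid[i, j, k] += ray_field                  # phasor sum keeps multipath phase

def received_power_dbm(e_field, impedance=377.0, a_eff=1e-3):
    """Convert the summed E-field of a cuboid to received power in dBm.
    a_eff is an assumed effective aperture; the real tool derives it from the
    receiver antenna gain and the operating frequency."""
    p_watt = (abs(e_field) ** 2 / (2 * impedance)) * a_eff
    return 10 * np.log10(p_watt / 1e-3)

# two multipath contributions with different phases reaching the same cuboid
deposit(0.02 * np.exp(1j * 0.0), 12, 8, 6)
deposit(0.01 * np.exp(1j * 2.1), 12, 8, 6)
print(round(received_power_dbm(grid[12, 8, 6]), 1))   # about -64 dBm for these values
```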

Two simulation configurations have been used to fit the CF method with results: High Definition (HD) simulations and Low Definition (LD) simulations. HD simulations use parameters usually set to obtain accurate estimations in indoor environments. By contrast, LD simulations provide less accurate estimations, but their computational cost is much lower. The main parameters that have been used for both LD and HD simulations are summarized in Table 1.

2.2. Recommender Systems and Collaborative Filtering

Recommender Systems (RS) are a family of techniques used to manage and understand the information created by users in Web 2.0 websites and to allow them to obtain recommendations about products and services [23]. RS help users to distinguish between noise and useful information, hence achieving their goals more efficiently. Moreover, thanks to RS, companies increase their revenue, reduce their costs, and provide better services to their customers.

In this paper, we use a special kind of RS called collaborative filtering [24]. The aim of CF is to make suggestions on a set of items I (e.g., books, music, films, or routes), based on the preferences of a set of users U that have already acquired and/or rated some of those items. In order to make recommendations (i.e., to predict whether an item would please a given user), CF methods rely on large databases with information on the relationships between sets of users and items.

These data take the form of matrices of users and items, where each cell stores the evaluation (rating) given by a user u to an item i. Recommendations provided by CF methods rely on the assumption that similar users are interested in the same items. Hence, the items highly rated by a user v could be recommended to a user u if u and v are similar. CF methods are classified into three main categories according to the data they use: (i) memory-based methods, which use the data matrix with all entries, ratings, and relationships, (ii) model-based methods, which estimate statistical models and functions based on the data matrix, and (iii) hybrid methods, which combine the previous methods with content-based recommendation [25].

In this paper, we use a memory-based approach to finely tune (recommend) the results obtained by LD simulations and produce better values that resemble HD simulations in much less time. Our CF approach is divided into two steps: neighborhood search and recommendation/prediction computation. In the neighborhood search phase, given a user u, we use similarity functions to determine the users that are most similar to u (i.e., the neighborhood of u). In the recommendation/prediction computation phase, we use the neighborhood of u to make a recommendation, applying well-known methods such as the ones proposed in [25]. The interested reader may refer to [26–29] to delve into CF’s state of the art.
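As a brief illustration of generic memory-based CF (a toy example, not the radio-specific variant developed below; the rating matrix and the choice of cosine similarity are assumptions), a missing rating can be predicted from the most similar users:

```python
import numpy as np

# Generic toy example of memory-based CF: predict a missing rating as the
# similarity-weighted average of the k most similar users. The rating matrix
# and cosine similarity are illustrative assumptions.

def predict(ratings, user, item, k=3):
    target = ratings[user]
    candidates = []
    for v, other in enumerate(ratings):
        if v == user or np.isnan(other[item]):
            continue
        common = ~np.isnan(target) & ~np.isnan(other)   # co-rated items
        if common.sum() < 2:
            continue
        a, b = target[common], other[common]
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        candidates.append((sim, other[item]))
    candidates.sort(reverse=True)                        # neighborhood search
    top = candidates[:k]
    if not top:
        return np.nan
    w = np.array([s for s, _ in top])
    r = np.array([x for _, x in top])
    return float(np.dot(w, r) / w.sum())                 # prediction computation

ratings = np.array([[5, 4, np.nan, 1],
                    [4, 5, 3, 1],
                    [1, 1, np.nan, 5],
                    [5, 4, 4, np.nan]], dtype=float)
print(round(predict(ratings, user=0, item=2), 2))        # ~3.51 from users 1 and 3
```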

2.3. Applying Collaborative Filtering on 3D Ray Launching Simulations

We represent simulation scenarios, obtained by 3D ray launching techniques, as matrices whose rows and columns correspond to the planar dimensions of the scenario. Without loss of generality, we represent each matrix as the row vector that results from the concatenation of all of its rows. Each scenario may contain different sets of obstacles and materials. Our goal is to predict (recommend) values that LD simulations were unable to compute properly due to their low resolution. To do so, we follow the idea proposed in [30], where a memory-based CF method with two stages was suggested. Those stages are (i) knowledge database creation and (ii) values prediction.

Knowledge Database Creation. This stage consists in the creation of a database that will later be used to predict missing values in LD simulations (in the second stage). For each scenario, represented by a vector of a given length, we create a collection of subvectors of fixed length λ, where λ is not larger than the length of the scenario vector. Figure 2 shows how fixed-length subvectors are generated from a given scenario vector. This process results in a database comprising fixed-length subvectors that are used as patterns representing a variety of scenarios. Note that we can create several databases with different subvector lengths containing as many scenarios as we want. Also, it is worth noticing that this procedure is applied to the vectors of LD simulations and to their corresponding HD counterparts. As a result, the databases contain LD patterns/vectors that are correlated/associated with their corresponding HD counterparts.
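A minimal sketch of this database-creation stage could look as follows (the function names, the non-overlapping cutting step, and the dropping of the trailing remainder are assumptions, not the authors' exact procedure):

```python
import numpy as np

# Sketch of the knowledge-database creation stage: each LD layer and its HD
# counterpart are flattened row-wise and cut into paired subvectors of length lam.
# Names, the non-overlapping cut and the remainder handling are assumptions.

def to_vector(matrix):
    """Concatenate the rows of a scenario matrix into a single row vector."""
    return np.asarray(matrix, dtype=float).reshape(-1)

def make_patterns(ld_matrix, hd_matrix, lam):
    """Return a list of (LD subvector, HD subvector) pairs of fixed length lam."""
    ld, hd = to_vector(ld_matrix), to_vector(hd_matrix)
    pairs = []
    for start in range(0, len(ld) - lam + 1, lam):
        pairs.append((ld[start:start + lam], hd[start:start + lam]))
    return pairs

# a 4 x 4 layer simulated in LD and HD yields five paired patterns for lam = 3
ld_layer = np.arange(16, dtype=float).reshape(4, 4)
hd_layer = ld_layer + 0.5
print(len(make_patterns(ld_layer, hd_layer, lam=3)))   # -> 5
```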

With the aim of increasing the quality of the database, we select LD subvectors that have similar/highly correlated values to their HD counterparts; in this way, we reduce the matching error between LD and HD in the database. To do so, we fix a maximum distance threshold between LD and HD values for each subvector and we discard those that surpass this value (for each LD simulation in the knowledge database, we have its corresponding HD simulation and we can compare their results). Moreover, we have observed that LD measurements in the vicinity of cells that have not been properly simulated (e.g., due to the simulation lack of accuracy in LD) and have null or error values do not correlate well with their HD counterparts, even when they fall within the aforementioned threshold. Hence, in order to solve this problem, we discard values in the vicinity of those cells. To do so, we compute the Manhattan distance between the cells containing null/error values and the others. Next, we set a minimum distance (i.e., in our case 3, due to the minimum subvector length) and we discard those LD values/cells whose distance is lower. So, in essence, we discard all cells that are located at Manhattan distances equal to or lower than 2 from a null/error cell. Figure 3 shows an example of how Manhattan distances are computed. After applying these noise-reducing techniques, the result is a more reliable and robust database.
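The two noise-reducing filters described above can be sketched as follows (the threshold value and function names are illustrative; only the Manhattan distance rule of discarding cells at distance 2 or less from null cells follows the text directly):

```python
import numpy as np

# Sketch of the two noise-reducing filters: reject LD/HD pattern pairs that
# disagree too much, and mask LD cells lying at Manhattan distance 2 or less
# from a null/error cell. Threshold value and names are assumptions.

def keep_pair(ld_sub, hd_sub, max_dist=3.0):
    """Keep a pattern pair only if the mean LD-HD disagreement is below the threshold."""
    return float(np.nanmean(np.abs(ld_sub - hd_sub))) <= max_dist

def far_from_nulls(ld_layer, min_dist=3):
    """Boolean mask: True for cells at Manhattan distance >= min_dist from every null cell."""
    rows, cols = ld_layer.shape
    nulls = np.argwhere(np.isnan(ld_layer))
    mask = np.ones((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            for nr, nc in nulls:
                if abs(r - nr) + abs(c - nc) < min_dist:
                    mask[r, c] = False
                    break
    return mask

layer = np.array([[1.0, 2.0, 3.0, 4.0],
                  [5.0, np.nan, 7.0, 8.0],
                  [9.0, 10.0, 11.0, 12.0]])
print(far_from_nulls(layer))   # only cells far enough from the null cell survive
```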

Values Prediction. Given an LD simulation with missing/null values, our goal is to predict them so that they are as similar as possible to the values that would have been obtained in an HD simulation. To do so, the values of the LD simulation are normalized to be comparable with those in the LD knowledge database. Next, the simulation vector is divided into subvectors of a chosen length λ (as in the previous step). For each subvector containing missing values, the most similar subvectors in the LD knowledge database (created in the previous step) are found. Then, their corresponding HD subvectors are retrieved and the missing values are replaced by the average of those HD values.

To compute the similarity between subvectors, we use the Euclidean distance over the nonmissing values. A graphical representation of this procedure is depicted in Figure 4. With the aim of increasing the quality of the predictions, we only consider subvectors having, at most, one missing value. Moreover, for each vector, we compute the percentage of empty cells and avoid computations if this percentage is higher than 50% (it would be useless to predict any value without enough information to do so; in such a case, the choice would be mostly random), thus increasing the performance of our method.

The prediction procedure is applied first by rows and then by columns, and the average of the two outputs is computed; predicted values are only kept if they have been predicted both by rows and by columns, otherwise they are discarded. Finally, we compute the mean of the obtained predictions in order to fill those cells that could not be inferred using the aforementioned approach. It is worth mentioning that the prediction procedure is applied for each subvector length; hence, the outcome differs depending on the chosen length λ.
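Putting the prediction rules together, a sketch of the per-subvector prediction could be as follows (it assumes the paired-pattern database of the earlier sketch; function names and the k value are illustrative):

```python
import numpy as np

# Sketch of the per-subvector prediction: fill a single missing value with the
# average of the HD values of the k closest LD patterns. The database is a list
# of (LD, HD) pairs as in the earlier sketch; names and k are assumptions.

def predict_subvector(sub, database, k=5):
    miss = np.isnan(sub)
    if miss.sum() != 1:                        # only subvectors with one hole are treated
        return sub
    scored = []
    for ld_pat, hd_pat in database:
        d = np.linalg.norm(sub[~miss] - ld_pat[~miss])   # Euclidean distance, nonmissing cells
        scored.append((d, hd_pat[miss][0]))
    scored.sort(key=lambda t: t[0])
    filled = sub.copy()
    filled[miss] = np.mean([v for _, v in scored[:k]])   # average of the k HD values
    return filled

# In the full method this is applied first along rows and then along columns of
# every layer, and a cell is only kept if both passes produced a prediction.
```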

3. ZigBee Network Performance Analysis

In this section, we describe the methodology followed to analyze the performance of a dense ZigBee wireless network in AAL environments and to validate our hybrid method based on CF. First, the description of the scenario under analysis is presented. Then, with the aim of validating the simulation results obtained by the in-house 3D ray launching algorithm, a measurement campaign was carried out to compare real measurements with the estimated values. Finally, once the results obtained by the 3D ray launching method have been validated, the performance analysis of a dense ZigBee network deployed within the scenario is presented. For that purpose, the estimations obtained by both the HD ray launching method and the hybrid LD + CF method are shown.

3.1. Description of the Scenario under Analysis

The scenario considered in this paper is a common apartment located in the neighborhood of “La Milagrosa” in the city of Pamplona, Navarre (Spain). The apartment has an area of approximately 65 m² and it consists of 2 bedrooms, 1 kitchen, 1 bathroom, 1 study room, 1 living room, and 1 small box room, as can be seen in Figure 5. The dimensions of the scenario are 9.05 m × 7.255 m × 2.625 m. In order to obtain accurate results with the 3D ray launching simulation tool, the real size and material properties (dielectric constant as well as conductivity) of all the elements such as chairs, tables, doors, beds, wardrobes, the bath, and the walls have been taken into account. In Table 2, the properties of the materials with the greatest presence in the scenario are listed.

Due to the increasing number of wireless systems and applications for AAL, scenarios like the apartment studied in this paper are expected to need a large number of wireless devices deployed in quite small areas. In order to emulate a dense wireless network, a 56-device ZigBee network has been distributed throughout the apartment. Those devices could be either static or mobile/wearable devices. Figure 5 shows the distribution of the devices: ZigBee End Devices (ZEDs) are represented by red dots and ZigBee Routers (ZRs) by green dots. In order to make the AAL environment more realistic, non-ZigBee wireless devices have also been placed within the scenario so as to analyze their coexistence in terms of intersystem interference. For that purpose, a WiFi access point (black dot in Figure 5) and a WiFi device that could be a laptop or a smartphone (blue dot in Figure 5) have been placed in the living room and in the study room, respectively. The characteristics of the antennas and devices used in the scenario, such as the transmission power level and the radiation pattern, have been defined as those typical of real devices. Table 3 shows the main parameters used in the simulations for both ZigBee and WiFi devices.

3.2. Validation of the Ray Launching Simulation Results

Once the simulation scenario has been defined, a measurement campaign has been carried out in the real scenario in order to validate the results obtained by the 3D ray launching algorithm. For that purpose, a ZigBee-compliant XBee Pro module connected to a computer through a USB cable has been used as the transmitter (cf. Figure 6(a)). The transmission power level of the wireless device can be adjusted from 0 dBm to 18 dBm. For the validation, the 0 dBm level has been set, which is the level used for the simulations of the ZigBee network reported in the following sections. Received power values at different points within the scenario have been measured by means of an omnidirectional antenna centered at 2.4 GHz connected to a portable N9912A FieldFox spectrum analyzer (cf. Figure 6(b)). In order to minimize the spectrum analyzer’s measurement time, the transmitter has been configured to send one data packet per millisecond. The measurement points and the position of the transmitter, depicted in the schematic view shown in Figure 7, are represented by green points and a red rectangle, respectively. The transmitter has been placed at a height of 0.8 m and the measurements have been taken at a height of 0.7 m. The obtained power levels for each measurement point are depicted in Figure 8, where a comparison between measurements and 3D ray launching simulation results is shown. It may be observed that the 3D ray launching simulation tool provides accurate estimations: in this case, the outcome is a mean error of 1.38 dB with a standard deviation of 2.55 dB.

3.3. ZigBee Network Performance Analysis

Following the validation of the 3D ray launching simulation algorithm, the feasibility and performance analysis of the ZigBee network deployed in the scenario is presented. A ZigBee-based wireless network has been deployed within the scenario (cf. Figure 5). The network consists of 50 ZEDs, which emulate different kinds of sensors distributed throughout the apartment, and 6 ZRs, which emulate the devices that receive the information transmitted by the ZEDs of their own network. Finally, a WiFi access point has been placed in the living room and a WiFi device in the study room. In the real scenario, there is a WiFi access point in the same place. These wireless elements, which are very common in real scenarios, could interfere with a ZigBee network. In order to gain insight into this issue, two spectrograms have been measured in the aisle of the real scenario. First, a spectrogram with only the WiFi access point operating has been measured. Then, an operating XBee Pro mote has been measured, with the WiFi access point off. Figure 9 shows both spectrograms. The XBee Pro module is operating at the lowest frequency band allowed for these devices, and it can be seen how its spectrum overlaps the WiFi signal. Although WiFi and ZigBee can be configured to operate in other frequency bands, the bandwidth of the WiFi signal is wider than that of the ZigBee signal. Thus, since many ZigBee-based networks may be deployed, each one operating in a different frequency band, overlapping between such wireless systems is likely to occur.

Therefore, in addition to the performance of a WSN in AAL environments in terms of received power distribution, the assessment of undesired interferences is a major issue, especially in complex indoor environments where many wireless networks coexist and where radiopropagation phenomena like diffraction and fast fading are very strong. The effect of these phenomena, as well as that of the topology and morphology of the scenario under analysis, can be studied with the previously presented simulation method. The deployment of ZigBee-based WSNs is versatile and allows several topology configurations such as star, mesh, or tree. Moreover, due to the inherent properties of wireless links and the small size of the motes, their positions and the network topology itself can be easily modified and reconfigured. Hence, very different networks and subnetworks may be found in a single AAL environment. The 3D ray launching algorithm is an adequate tool to analyze the performance of this kind of wireless network, as shown in previous sections. The main result provided by the simulations is the received power level for the whole volume of the scenario. Figure 10 shows an example of the received power distribution in a plane at a height of 1.5 m for the transmitting ZED placed within the box room. It can be observed that the morphology of the scenario has a great impact on the results. In this example, the signal propagates more strongly through the box room’s door and along the aisle than towards farther rooms because there are fewer obstacles and walls in the ray paths.

The main propagation phenomenon in indoor scenarios with topological complexity is multipath propagation, which appears due to the strong presence of diffractions, reflections, and refractions of the transmitted radio signals. Multipath propagation typically produces short-term signal strength variations. This behavior can be observed in Figure 11, where the estimated received power versus linear distance is depicted for 3 different heights corresponding to the white dashed line of Figure 10. The relevance of multipath propagation within the scenario can be assessed by estimating all the multipath components arriving at a specific point, usually the receiver. For this purpose, Power Delay Profiles are used. Figure 12 shows the Power Delay Profiles for 3 different positions within the scenario when the ZED of the box room is emitting. As may be observed, the complex environment creates a large number of multipath components, which are very strong in the box room because the transmitter is located there. As expected, these components become fewer and weaker (i.e., lower power level) as the distance to the transmitter grows, as shown in the Power Delay Profiles of randomly chosen points within the aisle and the living room. Note that the arrival time (x-axis) of the first component of each graph clearly reflects the increasing distance, growing as the distance to the transmitter increases. The dashed red lines depicted in the Power Delay Profiles correspond to the sensitivity of the XBee Pro modules, indicating which of the received components can be read by the receiver: those with a power level higher than −100 dBm. An alternative approach to show the impact of multipath propagation is the use of Delay Spread graphs, which provide information for a whole plane of the scenario in a single graph, instead of the “point-like” information provided by Power Delay Profiles. The Delay Spread is the timespan from the arrival of the first ray to the arrival of the last ray at any position of the scenario, that is, the timespan between the first and the last components depicted in the Power Delay Profile figures. Figure 13 shows the Delay Spread for a plane at a height of 2 m when the ZED placed in the box room transmits. As expected, larger timespan values are found in the vicinity of the emitting ZED, since this is the area where the valid reflected rays (i.e., rays with a power level higher than the sensitivity value) arrive sooner. In other areas, Delay Spread values vary depending on the topology of the scenario.
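As a small worked example of the Delay Spread computation described above (arrival times and power levels are illustrative, not measured data, assuming only components above the −100 dBm sensitivity are counted):

```python
import numpy as np

# Delay Spread sketch: the timespan between the first and the last multipath
# component above the receiver sensitivity (-100 dBm for the XBee Pro modules).
# Arrival times and powers below are illustrative values.

def delay_spread(arrival_ns, power_dbm, sensitivity_dbm=-100.0):
    t = np.asarray(arrival_ns, dtype=float)
    p = np.asarray(power_dbm, dtype=float)
    valid = t[p > sensitivity_dbm]             # components the receiver can actually read
    return float(valid.max() - valid.min()) if valid.size else 0.0

# power delay profile of one position (times in ns, powers in dBm)
print(delay_spread([5, 12, 31, 60, 95], [-55, -70, -92, -104, -120]))   # -> 26.0
```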

The Power Delay Profile and Delay Spread results show the complexity of this kind of scenario in terms of radiopropagation and the number of multipath components. Given that the number of wireless devices deployed in an AAL WSN can be high, radioplanning becomes essential to optimize the performance of WSNs in terms of achievable data rate, coexistence with other wireless systems, and overall energy consumption. One of the main factors limiting WSN performance is the receiver sensitivity, which is basically determined by the hardware itself. Other limiting factors are the modulation and coding schemes, which determine the maximum interference levels tolerated for a successful communication. The interference level in AAL environments with dense WSNs can be very high, negatively affecting the communication between wireless devices in terms of increased packet losses. As an illustrative example, the interference and performance assessment of the ZR deployed in the kitchen (see Figure 5 for its position) is shown in Figure 14. Typically, the ZED nodes of the network do not transmit simultaneously, as they are programmed to send information (e.g., temperature) at specific intervals or when an event occurs (e.g., detection of smoke). Thus, in this example, a ZED located in the kitchen ( m,  m, and  m) sends information to the ZR. Figure 14 shows the received power plane at a height of 2.4 m, where the ZR is deployed. Note that the ZED is placed at a height of 1 m. The power level received by the ZR is −50.26 dBm. Considering that the sensitivity of the XBee Pro modules is −100 dBm, the received power level is sufficient for a good communication channel. Nevertheless, the feasibility of a good communication between the ZED and the ZR will depend mainly on the interference level received by the ZR device.

The undesired interference received by a wireless receiver can have different sources, such as electric appliances that generate electromagnetic noise. However, interference is mainly generated by other wireless communication systems and devices. With the aim of showing the usefulness of the presented method for the analysis of interferences, we analyze 3 different situations for the wireless communication between a ZED and a ZR deployed in the scenario under analysis: an intrasystem interference case, a ZigBee intersystem case, and a WiFi intersystem case. The interference level (in dBm) produced by the corresponding sources of the 3 case studies is shown in Figures 15, 16, and 17, respectively. Note that, in all cases, the interference occurs only when the valid ZED and the interference sources transmit simultaneously and their frequency bands overlap.

In order to show a graphical comparison between the HD results obtained by the 3D ray launching algorithm and the estimations of our hybrid LD + CF approach, both are shown in each figure. For the intrasystem interference case, a ZED belonging to the same network as the valid ZED-ZR pair has been chosen to act as the interference source (cf. Figure 15). For the intersystem interference analysis, two different configurations have been studied. On the one hand, 4 ZigBee modules of another network deployed within the scenario act as interference sources. In Figure 16, we show the interference level when the 4 ZEDs (at a height of 1 m) transmit simultaneously. On the other hand, the 2 WiFi devices, placed in the living room and in the study room (cf. Figure 5), act as interference sources. It can be observed in Figure 17 that the interference level is significantly higher than in the previous cases. This is due to the transmission power of the WiFi devices, which has been set to 20 dBm (i.e., the maximum value for 802.11 b/g/n at 2.4 GHz), while that of the ZED has been set to 0 dBm. In all cases, we confirm that the morphology of the scenario has a great impact on the interference level that reaches the ZR. The number of interfering devices and their transmission power levels are also key issues in terms of received interference power. Therefore, an adequate and exhaustive radioplanning analysis is fundamental to obtain optimized WSN deployments, minimizing the density of wireless nodes and the overall power consumption of the network. For that purpose, the method presented in this paper is a very useful tool.

Based on the previous results, SNR (Signal-to-Noise Ratio) maps can be obtained. The SNR maps provide very useful information about how the interfering sources affect the communication between two elements, making it easier to identify zones where the SNR is higher and, hence, better suited to deploy a potential receiver. Figure 18 shows SNR maps for the 3 interference cases previously analyzed. For some applications, it might not be possible to choose the receiver’s position, nor the position of other wireless emitters. In those cases, instead of using SNR maps, estimations of the SNR level at the specific point where the receiver is placed are calculated. Figure 19 shows the SNR value at the position of the ZR for the previously studied cases. Note that the x-axis represents possible transmission power levels of the valid ZED. Again, the estimations obtained by HD ray launching as well as those obtained by LD ray launching + collaborative filtering (LD + CF) are shown, illustrating the similarity between both methods in terms of predicted values. The red dashed lines in the figure represent the minimum required SNR value for a correct transmission between the valid ZED and the ZR at 250 Kbps and at 32 Kbps, which have been calculated with the aid of the well-known Shannon capacity formula shown in (2):

$$C = \mathrm{BW}\,\log_{2}(1 + \mathrm{SNR}) \tag{2}$$

where C is the channel capacity (250 Kbps, the maximum value for ZigBee devices, and 32 Kbps) and BW is the bandwidth of the channel (3 MHz for ZigBee). As can be seen in Figure 19, for the analyzed ZigBee intersystem interference case (green curve), estimations show that the communication is likely to encounter no problems and the transmission can be done at the highest data rate (250 Kbps), even for a transmission power level of −10 dBm. For the intrasystem case (black curve), the communication could fail when the transmitted power decays to −10 dBm, which means that the SNR value is not enough to transmit at 250 Kbps, but it is still enough to successfully achieve lower data rate transmissions. Finally, under the conditions of the 2 WiFi devices interfering with the ZED-ZR link, the communication viability depends strongly on the transmission power level of the valid ZED, even for quite low data rates, as the noise level produced by the WiFi devices transmitting at 20 dBm is high. It is important to note that these SNR results have been calculated for a specific case with specific transmission power levels for the involved devices. Due to the huge number of possible combinations of the previous factors (number, position, and transmission power of interfering sources, valid communicating devices, required data rates, morphology of the scenario, etc.), the design procedure is site specific, and thus the estimated SNR will vary if we change the configuration of the scenario’s wireless networks. Therefore, the presented simulation method can help in estimating the SNR values at each point of the scenario for any WSN configuration, allowing the designer to make the correct decisions in order to deploy and configure the wireless devices in an energy-efficient way.
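As a quick numerical check of the thresholds implied by (2), assuming the 3 MHz bandwidth stated above, the minimum SNR is about −12.3 dB for 250 Kbps and about −21.3 dB for 32 Kbps:

```python
import math

# Quick check of the SNR thresholds implied by C = BW * log2(1 + SNR),
# assuming the 3 MHz ZigBee channel bandwidth stated in the text.

def min_snr_db(c_bps, bw_hz=3e6):
    snr_linear = 2 ** (c_bps / bw_hz) - 1
    return 10 * math.log10(snr_linear)

print(round(min_snr_db(250e3), 1))   # about -12.3 dB for the maximum ZigBee rate
print(round(min_snr_db(32e3), 1))    # about -21.3 dB for the 32 Kbps rate
```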

4. LD + CF Prediction Quality Analysis

In previous sections, we have described our hybrid LD + CF proposal and we have shown that the results obtained with this method are similar to those obtained with an HD simulation. Next, we summarize how these results have been obtained and we evaluate, in detail, the difference between the computational costs of HD simulations and the LD + CF approach.

4.1. Knowledge Database Creation

To create the knowledge databases used by the LD + CF method, 16 different scenarios have been simulated, with 30 to 40 layers each and containing a variety of features (i.e., corridors, columns, walls, doors, and furniture). Each scenario has been simulated in LD and in HD. With these simulations, we have created five LD knowledge databases with subvector lengths λ = 11, 9, 7, 5, and 3, together with their five HD counterparts. Each knowledge database contains approximately one million patterns/subvectors. The scenario under analysis in this study has characteristics similar to those of the scenarios used for the creation of the databases. Its dimensions and density are summarized in Table 4. Rows and Columns refer to the planar dimensions of the scenario (i.e., the dimensions of the matrices analyzed/used by the CF method). The Layers row indicates the number of matrices, that is, the height of the scenario. Finally, the Density row shows the percentage of the volume of the scenario occupied by obstacles.

4.2. Accuracy and Benefits of the Collaborative Filtering Approach

With the aim of analyzing the accuracy and performance of our approach, different prediction strategies have been applied to the scenario under analysis (cf. Table 5). For instance, Strategy 1 uses the knowledge database created with one of the subvector lengths to compute predictions, Strategy 2 uses a different subvector length, and so on. In all cases, a fixed aggregator value is used; hence, for each missing value of an LD simulation, the CF approach finds the corresponding number of most similar subvectors and computes their average to predict the missing value. Although the CF approach significantly helps to improve the quality of the LD simulation, it does not always predict exactly the same value as the HD simulation. In order to quantify this discrepancy, we use the well-known mean absolute error (MAE), defined in (3):

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|p_{i}-r_{i}\right| \tag{3}$$

where $n$ is the number of missing values predicted, $p_{i}$ is the predicted value for a missing element $i$, and $r_{i}$ is the real value of $i$ in the HD simulation. Note that the HD simulation is only used to compute the error; it is not involved in the prediction process of the CF method.
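A minimal sketch of this error computation, restricted to the cells that were actually predicted (function and argument names are illustrative):

```python
import numpy as np

# Minimal computation of the MAE in (3), evaluated only over the cells that
# the CF step actually predicted. Names are illustrative.

def mae(predicted, hd_reference, predicted_mask):
    p = np.asarray(predicted, dtype=float)[predicted_mask]
    r = np.asarray(hd_reference, dtype=float)[predicted_mask]
    return float(np.mean(np.abs(p - r)))

pred = np.array([[-60.0, -63.5], [-71.0, -80.0]])
hd   = np.array([[-58.0, -64.0], [-70.0, -82.5]])
mask = np.array([[True, True], [False, True]])
print(mae(pred, hd, mask))   # mean of |2.0|, |0.5|, |2.5| -> about 1.67
```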

Without loss of generality and for the sake of brevity, we have randomly selected 24 sensors from the studied scenario and compared the simulation results obtained by HD simulations and by our hybrid LD + CF approach. Table 6 reports the obtained results. Specifically, it lists the Sparseness, that is, the percentage of empty values in the LD simulation; Time HD, Time LD, Time CF, and Time LD + CF, the times in seconds required by each method (for LD + CF, we report the worst time, not the average); Best Strategy, the strategy that performed best; the mean absolute error obtained when the null cells are kept empty (i.e., LD versus HD); the mean absolute error obtained when the null cells are replaced by the values predicted by the CF method (i.e., LD + CF versus HD); and the standard deviation of the latter error.

It is worth noting the difference between these two mean absolute errors. Note that, for the mean absolute error, the lower the value, the better the result. Hence, it is apparent that the results obtained by our hybrid method (LD + CF versus HD) clearly outperform those of the LD simulation alone (LD versus HD). Moreover, this significant improvement requires very little time and, in addition, the low values of the standard deviation indicate that the predictions are stable and reliable. It may also be observed that our hybrid approach requires 10 to 20 times less time than HD simulations and that the cost of the CF method is almost negligible (cf. Table 6, column “Time LD + CF,” and Figure 20). Also, we observe that sparseness has an adverse effect on the prediction accuracy of all methods. This result is not surprising since, with less data, it is more difficult to make good predictions (however, the knowledge database is also an important factor, as can be observed in Sim21 and Sim23: Sim21 has a higher sparseness value, yet its prediction accuracy is far better than that of Sim23). Since the best prediction is not always obtained by the same CF strategy, if more accurate results were to be obtained, parameters such as the aggregator value, the subvector length, and its corresponding knowledge database might be tuned depending on each simulation’s features.

The relationship between the MAE of each approach and the simulation/prediction time is depicted in Figure 21. LD predictions (in red) correspond to the LD-versus-HD error values in Table 6. LD + CF results (in green) show that the computational time is only slightly increased with respect to LD simulations; notwithstanding, the MAE is reduced by a factor of 8 to 10. Finally, the values of HD simulations (in blue) clearly show that the time required is 10 to 20 times higher than with the other approaches. Clearly, the best tradeoff between MAE and time is obtained by our proposed hybrid LD + CF method. Hence, we may conclude that our method outperforms the others when both accuracy and computational cost are considered.

5. Conclusions

In this paper, context-aware scenarios for ambient assisted living have been analyzed in the framework of the deployment of wireless communication systems (mainly wearable transceivers and wireless sensor networks) and the impact on coverage/capacity relations.

We have presented and discussed the use of ray launching simulations in High Definition (HD) and Low Definition (LD) and collaborative filtering (CF) techniques. A complex indoor scenario with multiple transceiver elements has been analyzed with those techniques and the obtained results show that the proposed hybrid LD + CF approach outperforms LD and HD approaches in terms of error/time ratio.

We have shown that radiopropagation in complex indoor AAL environments has a strong dependence on the network topology, the indoor scenario configuration, and the density of users/devices within it. Results also show that the presented hybrid calculation approach enables us to enlarge the scenario size without increasing computational complexity or, in other words, to reduce the simulation time drastically while the error of the estimations remains low. Our proposal provides valuable insight into the network design phases of complex wireless systems, typical of AAL, which leads to optimal network deployment, reducing overall interference levels and increasing overall system performance in terms of cost reduction, transmission rates, and energy efficiency.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to acknowledge the support provided under Grant TEC2013-45585-C2-1-R, funded by the Ministry of Economy and Competitiveness, Spain. URV authors are partly supported by La Caixa Foundation through Project SIMPATIC and the Spanish Ministry of Economy and Competitiveness under Project “Co-Privacy, TIN2011-27076-C03-01” and by the Government of Catalonia under Grant 2014-SGR-537.