Computational Intelligence and Neuroscience

Volume 2017 (2017), Article ID 2157852, 9 pages

https://doi.org/10.1155/2017/2157852

## Application of the Intuitionistic Fuzzy InterCriteria Analysis Method with Triples to a Neural Network Preprocessing Procedure

^{1}Laboratory of Intelligent Systems, University “Prof. Dr. Assen Zlatarov”, 1 “Prof. Yakimov” Blvd., 8010 Burgas, Bulgaria

^{2}Department of Bioinformatics and Mathematical Modelling, Institute of Biophysics and Biomedical Engineering, 105 “Acad. G. Bonchev” Str., 1113 Sofia, Bulgaria

^{3}Department of Intelligent Systems, Institute of Information and Communication Technologies, 2 “Acad. G. Bonchev” Str., 1113 Sofia, Bulgaria

Correspondence should be addressed to Sotir Sotirov; ssotirov@btu.bg

Received 9 February 2017; Accepted 14 June 2017; Published 10 August 2017

Academic Editor: George A. Papakostas

Copyright © 2017 Sotir Sotirov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The InterCriteria Analysis (ICA) approach was applied with the aim of reducing the set of variables at the input of a neural network, taking into account the fact that a large number of inputs increases the number of neurons in the network, thus making it unsuitable for hardware implementation. Here, for the first time, the ICA method was used to obtain correlations between triples of the input parameters used for training the neural networks. In this case, we use ICA for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time for the network’s processing of data and images.

#### 1. Introduction

Working with neural networks presents many difficulties; for example, the number of neurons needed to perceive the individual input values can be too large, and since a proportionally larger amount of memory and computing power is necessary to train such networks, this leads to longer training periods. Therefore, researchers are forced to look for better methods for training neural networks. Backpropagation, in which the output error is propagated backward through the network (applied to a Multilayer Perceptron), is the most widely used such method. There are, however, many other methods that accelerate the training of neural networks [1–3] by reducing memory usage, which in turn lowers the needed amount of computing power.
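As a minimal illustration of the gradient-based training that backpropagation generalizes to multilayer networks, the sketch below fits a single sigmoid neuron with the delta rule on the logical AND function. All names and the learning-rate/epoch settings are illustrative and not taken from the cited works.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, lr=0.5):
    """Gradient-descent training of one sigmoid neuron (2 inputs + bias)."""
    w = [0.0, 0.0, 0.0]  # weights; w[2] is the bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            # delta rule: error times the derivative of the sigmoid
            delta = (target - y) * y * (1.0 - y)
            w[0] += lr * delta * x1
            w[1] += lr * delta * x2
            w[2] += lr * delta
    return w

# Learn logical AND from its four input/output pairs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_neuron(data)
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + w[2])) for (x1, x2), _ in data]
```

In a full Multilayer Perceptron the same delta is propagated backward layer by layer, which is exactly where the memory and computing costs discussed above arise.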

In the preprocessing stage, a constant threshold value can be applied to the data at the input of the neural network to distinguish static from dynamic activities, as was done in [4]. This way, the amount of incidental values due to unforeseen circumstances is reduced.
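A minimal sketch of such threshold-based preprocessing, assuming a hypothetical one-dimensional signal and an illustrative threshold value (neither is taken from [4]):

```python
def split_static_dynamic(signal, threshold=0.05):
    """Label each sample as static (change below threshold) or dynamic,
    using the absolute change between consecutive readings."""
    labels = []
    for prev, curr in zip(signal, signal[1:]):
        labels.append("dynamic" if abs(curr - prev) > threshold else "static")
    return labels

readings = [0.10, 0.11, 0.10, 0.55, 0.90, 0.91]
labels = split_static_dynamic(readings)
```

Samples labeled "static" can then be thinned out or averaged before being fed to the network, reducing the incidental values mentioned above.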

Another approach is to use a wavelet-based neural network classifier to reduce the effect of power interference, or of randomly corrupted measurements, on the training of the neural network [5]. Here the discrete wavelet transform (DWT) technique is integrated with the neural network to build a classifier.
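As an illustration of the DWT step, a single level of the Haar transform can be sketched in a few lines; this is a simplified stand-in for the wavelet machinery actually used in [5], and the input samples are hypothetical:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    assert len(signal) % 2 == 0, "needs an even number of samples"
    s = math.sqrt(2.0)
    evens, odds = signal[0::2], signal[1::2]
    approx = [(a + b) / s for a, b in zip(evens, odds)]  # low-pass part
    detail = [(a - b) / s for a, b in zip(evens, odds)]  # high-pass part
    return approx, detail

approx, detail = haar_dwt([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0])
```

The approximation coefficients keep the smooth trend of the signal, while the detail coefficients isolate abrupt changes, which is what makes the decomposition useful for filtering interference before classification.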

Particle Swarm Optimization (PSO) is an established method for parameter optimization. It represents a population-based adaptive optimization technique that is influenced by several “strategy parameters.” Choosing reasonable parameter values for PSO is crucial for its convergence behavior and depends on the optimization task. In [6] a method is presented for parameter metaoptimization based on PSO and it is applied to neural network training. The idea of Optimized Particle Swarm Optimization (OPSO) is to optimize the free parameters of PSO by having swarms within a swarm.
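A minimal PSO sketch minimizing the sphere function may clarify the role of the strategy parameters mentioned above (inertia weight w and acceleration coefficients c1, c2); the settings here are common illustrative defaults, not the OPSO scheme of [6]:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal Particle Swarm Optimization minimizing f over [-5, 5]^dim.
    w, c1, c2 are the 'strategy parameters' discussed in the text."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In OPSO, an outer swarm of this same form would search over (w, c1, c2) themselves, with each outer particle's fitness given by the performance of an inner swarm run with those parameters.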

When working with neural networks it is essential to reduce the number of neurons in the hidden layer, which also reduces the number of weight coefficients of the neural network as a whole. This leads to a smaller dimension of the weight matrices and hence to a smaller amount of used memory. An additional consequence is the decreased usage of computing power and the shortened training time [7].
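A quick back-of-the-envelope count shows the effect: for a one-hidden-layer Multilayer Perceptron, the number of weights and biases is n_in·n_hidden + n_hidden + n_hidden·n_out + n_out. The layer sizes below are hypothetical, chosen only to show how dropping a few inputs shrinks the weight matrices:

```python
def mlp_weight_count(n_in, n_hidden, n_out):
    """Weights + biases of a one-hidden-layer Multilayer Perceptron."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

full    = mlp_weight_count(n_in=10, n_hidden=20, n_out=1)  # 241 parameters
reduced = mlp_weight_count(n_in=7,  n_hidden=20, n_out=1)  # 181 parameters
```

Removing three of ten inputs here eliminates 60 of the 241 parameters, roughly a quarter of the memory and of the per-step training cost.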

Multilayer Perceptrons are often used to model complex relationships between sets of data. The removal of nonessential components of the data can lead to smaller sizes of the neural networks and, respectively, to lower requirements for the input data. In [8] it is described how this can be achieved by analyzing the common interference of the network outputs, which is caused by distortions in the data that is passed to the neural network’s inputs. The attempt to find superfluous data is based on the concept of sensitivity of linear neural networks. In [9] a neural network is developed in which the outputs of the neurons of some of the layers are not connected to the next layer. The structure thus created is called a “Network in Network.” In this way part of the inputs of the neural network is reduced, which removes part of the information, and along with it part of the error accumulated during training and data transfer. The improved local connection method given in [9] performs global average pooling over the feature maps in the classification layer. This layer is easier to interpret and less prone to overfitting than the traditional fully connected layers.
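The global-average-pooling idea from [9] can be sketched as follows: each feature map is collapsed to its spatial mean, yielding one score per class with no fully connected weights at all. The toy 2×2 maps below are, of course, illustrative:

```python
def global_average_pooling(feature_maps):
    """Collapse each feature map (a 2D list) to its mean value,
    giving one score per class, as in the Network in Network design."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

maps = [[[1.0, 3.0], [5.0, 7.0]],   # feature map for class 0
        [[0.0, 0.0], [2.0, 2.0]]]   # feature map for class 1
scores = global_average_pooling(maps)
```

Because the layer has no trainable parameters, it cannot overfit on its own, which is the property the text refers to.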

In this paper, we apply the intuitionistic fuzzy sets-based method of InterCriteria Analysis to reduce the number of input parameters of a Multilayer Perceptron. This will allow the reduction of the weight matrices, as well as the implementation of the neural network in limited hardware, and will save time and resources in training.

The neural network is tested after reducing the data (effectively the number of inputs), so as to obtain an acceptable relation between the input and output values, as well as the average deviation (or match) of the result.

#### 2. Presentation of the InterCriteria Analysis

The InterCriteria Analysis (ICA) method was introduced in [10] by Atanassov et al. It can be applied to multiobject, multicriteria problems where measurements according to some of the criteria are slower or more expensive, which delays the overall decision-making process or raises its cost. When solving such problems, it may be necessary to adopt an approach for the reasonable elimination of these criteria, in order to achieve economy and efficiency.

The ICA method is based on two fundamental concepts: intuitionistic fuzzy sets and index matrices. Intuitionistic fuzzy sets were first defined by Atanassov [11–13] as an extension of the concept of fuzzy sets defined by Zadeh [14]. The second concept on which the proposed method relies is the concept of index matrix, a matrix which features two index sets. The theory behind the index matrices is described in [15].

According to the ICA method, a set of objects is evaluated or measured against a set of criteria, and the table with these evaluations is the input for the method. The number of criteria can be reduced by calculating the correlations (classified in ICA as positive consonance, negative consonance, and dissonance) between each pair of criteria in the form of intuitionistic fuzzy pairs of values, that is, pairs of numbers in the interval [0, 1] whose sum is also a number in this interval. If some (slow, expensive, etc.) criteria exhibit positive consonance with some of the remaining criteria (that are faster, cheaper, etc.), and this degree of consonance is considered high enough with respect to some predefined thresholds, then with this degree of precision the decision maker may decide to omit them in the further decision-making process. The higher the number of objects involved in the measurement, the more precise the evaluation of the intercriteria consonances (correlations). This makes the approach completely data-driven, and ongoing applications to various problems and datasets are helping us better assess its reliability and practical applicability.

Let us consider a number of criteria, *m*, and a number of objects, *n*; that is, we use the following sets: a set of criteria {*C*_{1}, …, *C*_{m}} and a set of objects {*O*_{1}, …, *O*_{n}}.

We obtain an index matrix *M* that contains two sets of indices, one for rows and another for columns. For every *p*, *q* (1 ≤ *p* ≤ *n*, 1 ≤ *q* ≤ *m*), *O*_{p} is an evaluated object, *C*_{q} is an evaluation criterion, and *a*_{p,q} is the evaluation of the *p*th object against the *q*th criterion, defined as a real number or another object that is comparable according to a relation *R* with all the other elements of the index matrix *M*.

The next step is to apply the InterCriteria Analysis for calculating the evaluations. The result is a new index matrix with intuitionistic fuzzy pairs that represents an intuitionistic fuzzy evaluation of the relations between every pair of criteria *C*_{k} and *C*_{l}. In this way the index matrix *M* that relates the evaluated objects with the evaluating criteria can be transformed to another index matrix that gives the relations among the criteria.
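The core pairwise computation can be sketched as follows: for two criteria, count the object pairs that both criteria rank the same way (contributing to μ) and the pairs they rank in opposite ways (contributing to ν); tied pairs contribute to neither. This is a minimal reading of the ICA step, applied to hypothetical data:

```python
from itertools import combinations

def intercriteria_pair(col_k, col_l):
    """Intuitionistic fuzzy pair (mu, nu) for two criteria, given the
    evaluations of the same objects against each of them.
    mu counts object pairs ranked the same way by both criteria,
    nu counts pairs ranked in opposite ways; ties count for neither."""
    n = len(col_k)
    total = n * (n - 1) // 2          # number of object pairs
    agree = disagree = 0
    for i, j in combinations(range(n), 2):
        dk = col_k[i] - col_k[j]
        dl = col_l[i] - col_l[j]
        if dk * dl > 0:
            agree += 1
        elif dk * dl < 0:
            disagree += 1
    return agree / total, disagree / total

# Five objects evaluated against two criteria (hypothetical values)
c1 = [1.0, 2.0, 3.0, 4.0, 5.0]
c2 = [1.2, 2.1, 2.9, 4.4, 4.8]
mu, nu = intercriteria_pair(c1, c2)
```

Since the two columns here rank every object pair identically, the result is the pair ⟨1, 0⟩, a fully positive consonance; running the function over all criteria pairs fills the new index matrix described above.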

The last step of the algorithm is to determine the degrees of correlation between groups of indicators, depending on the thresholds chosen by the user. The correlations between the criteria are called “positive consonance,” “negative consonance,” or “dissonance.” Here we use one of the possible approaches to defining these thresholds, namely, the scale shown in Box 1 [16].
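A simple classification of a computed pair ⟨μ, ν⟩ against user-chosen thresholds can be sketched as below; the threshold values α = 0.75 and β = 0.25 are illustrative only and do not reproduce the finer-grained scale of Box 1 [16]:

```python
def classify_consonance(mu, nu, alpha=0.75, beta=0.25):
    """Label an intercriteria pair using illustrative thresholds
    alpha and beta (hypothetical values, not the Box 1 scale)."""
    if mu >= alpha and nu <= beta:
        return "positive consonance"
    if nu >= alpha and mu <= beta:
        return "negative consonance"
    return "dissonance"

label = classify_consonance(0.9, 0.05)
```

Criteria pairs labeled "positive consonance" are the candidates for elimination: one criterion of the pair can stand in for the other with the stated degree of precision.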