The Scientific World Journal: Computer Science. The latest articles from Hindawi Publishing Corporation. © 2016 Hindawi Publishing Corporation. All rights reserved. Recent Trends and Techniques in Computing Information Intelligence Sun, 05 Jun 2016 09:30:49 +0000 Venkatesh Jaganathan, Balasubramanie Palanisamy, and Mariofanna Milanova Copyright © 2016 Venkatesh Jaganathan et al. All rights reserved. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud Thu, 28 Apr 2016 09:43:54 +0000 Cloud computing is a technology that supports worldwide resource sharing on a “pay as you go” basis. It provides various services such as SaaS, IaaS, and PaaS. Computation is part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Various algorithms have recently been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme has also been applied for this purpose. In this paper we devise a methodology that analyzes the behavior of a given cloud request and identifies the type of algorithm it involves. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. A best-fit strategy selects the appropriate host, and the incoming job is allocated to that host. From the measured time complexity, the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. A. Paulin Florence, V. Shanthi, and C. B. Sunil Simon Copyright © 2016 A. Paulin Florence et al. All rights reserved. Retracted: Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier Thu, 21 Apr 2016 14:26:10 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. 
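The frequency-selection step described in the energy-conservation abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the complexity classes, the discrete frequency steps, and the ops-per-cycle model are all assumptions.

```python
# Hypothetical sketch of the DVFS idea from the abstract: estimate the work a
# job needs from its asymptotic complexity class, then pick the lowest CPU
# frequency (from an assumed discrete set of P-states) that still meets the
# job's deadline. All numbers and the complexity model are illustrative.
import math

FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed available P-states

def estimated_ops(n, complexity):
    """Rough operation count for an input of size n (assumed cost model)."""
    return {
        "O(n)": n,
        "O(n log n)": n * max(1, math.log2(n)),
        "O(n^2)": n ** 2,
    }[complexity]

def pick_frequency(n, complexity, deadline_s, ops_per_cycle=1.0):
    """Lowest frequency whose throughput finishes the job before the deadline."""
    needed_hz = estimated_ops(n, complexity) / (deadline_s * ops_per_cycle)
    for f in FREQ_STEPS_GHZ:
        if f * 1e9 >= needed_hz:
            return f           # scale down to the cheapest sufficient step
    return FREQ_STEPS_GHZ[-1]  # otherwise run at maximum frequency

print(pick_frequency(n=10**6, complexity="O(n log n)", deadline_s=0.05))
```

A light job is assigned the lowest frequency step, while a job whose estimated work exceeds every step's capacity simply runs at maximum frequency; the energy saving comes from the jobs in between.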
Retracted: An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm Thu, 21 Apr 2016 14:25:46 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. Retracted: Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch Thu, 21 Apr 2016 14:25:19 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. Retracted: An Improved Differential Evolution Solution for Software Project Scheduling Problem Thu, 21 Apr 2016 14:24:50 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. Retracted: Differential Evolution Algorithm with Diversified Vicinity Operator for Optimal Routing and Clustering of Energy Efficient Wireless Sensor Networks Thu, 21 Apr 2016 14:24:22 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. Multiagent Systems Based Modeling and Implementation of Dynamic Energy Management of Smart Microgrid Using MACSimJX Mon, 04 Apr 2016 13:19:29 +0000 The objective of this paper is the implementation of a multiagent system (MAS) for advanced distributed energy management and demand-side management of a solar microgrid. Initially, the Java Agent Development Framework (JADE) is used to implement MAS-based dynamic energy management of the solar microgrid. Because MATLAB is unstable in multithreaded environments, the MAS operating in JADE is linked with MATLAB using middleware called Multiagent Control Using Simulink with Jade Extension (MACSimJX). MACSimJX allows the solar microgrid components designed in MATLAB to be controlled by the corresponding agents of the MAS. 
The microgrid environment variables are captured through sensors and passed to the agents through MATLAB/Simulink; after the agent operations in JADE, the results are returned to the actuators through MATLAB to implement dynamic operation in the solar microgrid. The MAS operating in JADE maximizes the operational efficiency of the solar microgrid through its decentralized approach and increases runtime efficiency. Autonomous demand-side management is implemented to optimize the power exchange between the main grid and the microgrid under the intermittency of solar power, randomness of load, and variation of noncritical load and grid price. These dynamics are considered at every time step, and a complex environment simulation is designed to emulate the distributed microgrid operations and evaluate the impact of agent operations. Leo Raju, R. S. Milton, and Senthilkumaran Mahadevan Copyright © 2016 Leo Raju et al. All rights reserved. Retracted: An Improved Ant Colony Optimization Approach for Optimization of Process Planning Thu, 24 Mar 2016 08:28:44 +0000 The Scientific World Journal Copyright © 2016 The Scientific World Journal. All rights reserved. Evaluation of the Parameters and Conditions of Process in the Ethylbenzene Dehydrogenation with Application of Permselective Membranes to Enhance Styrene Yield Wed, 16 Mar 2016 07:26:47 +0000 Styrene is an important monomer in the manufacture of thermoplastics. Most of it is produced by the catalytic dehydrogenation of ethylbenzene. Because this process depends on reversible reactions, the yield is usually limited by the establishment of thermodynamic equilibrium in the reactor. The styrene yield can be increased by using a hybrid process that carries out reaction and separation simultaneously. We propose using a permselective composite membrane to remove hydrogen and thus suppress the reverse and secondary reactions. 
This paper describes the simulation of a dehydrogenation process carried out in a tubular fixed-bed reactor wrapped in a permselective composite membrane. A mathematical model was developed, incorporating the various mass transport mechanisms found in each of the membrane layers and in the catalytic fixed bed. The effects on the styrene yield of the reactor feed conditions (temperature, steam-to-oil ratio, and weight hourly space velocity), the fixed-bed geometry (length, diameter, and volume), and the membrane geometry (thickness of the layers) were analyzed. These variables were used to determine experimental conditions that favour the production of styrene. The simulation showed that an increase of 40.98% in the styrene yield, compared to a conventional fixed-bed process, could be obtained by wrapping the reactor in a permselective composite membrane. Paulo Jardel P. Araújo, Manuela Souza Leite, and Teresa M. Kakuta Ravagnani Copyright © 2016 Paulo Jardel P. Araújo et al. All rights reserved. An Example-Based Super-Resolution Algorithm for Selfie Images Tue, 15 Mar 2016 13:33:47 +0000 A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is effectively used to super-resolve LR selfie images. 
The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. Jino Hans William, N. Venkateswaran, Srinath Narayanan, and Sandeep Ramachandran Copyright © 2016 Jino Hans William et al. All rights reserved. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems Tue, 01 Mar 2016 09:49:47 +0000 Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems; this paper aims to avoid both. The number of hidden neurons is selected employing 102 criteria, and these evolved criteria are verified against the various computed error values. The proposed criteria for fixing hidden neurons are validated using the convergence theorem. 
The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model minimizes the error value and enhances accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. V. Ranganayaki and S. N. Deepa Copyright © 2016 V. Ranganayaki and S. N. Deepa. All rights reserved. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy Mon, 29 Feb 2016 07:58:43 +0000 Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp concept combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered structure, which yields a more coherent summary, and extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words and obtain word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm, and the results are compared with the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. 
Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. Nedunchelian Ramanujam and Manivannan Kaliappan Copyright © 2016 Nedunchelian Ramanujam and Manivannan Kaliappan. All rights reserved. A Dynamic Probabilistic Based Broadcasting Scheme for MANETs Thu, 25 Feb 2016 14:08:19 +0000 A Mobile Ad Hoc Network (MANET) is a cluster of mobile nodes that can communicate with each other without any fixed infrastructure. The basic characteristic of a MANET is its dynamic topology: because of this dynamic behavior, the topology of the network changes very frequently, which repeatedly invalidates established routes. The process of finding a new valid route thus leads to a notable drop in the throughput of the network. To identify a new valid path to the targeted mobile node, available reactive routing protocols use a simple broadcasting method known as simple flooding, which broadcasts the RREQ packet from the source to the rest of the nodes in the mobile network. The problem with this method is the disproportionate, repetitive retransmission of RREQ packets, which can result in high contention on the available channel and packet collisions due to extreme traffic in the network. A reasonable number of routing algorithms have been suggested for reducing the lethal impact of flooding the RREQ packets. However, most of these algorithms introduce a considerable amount of complexity and reduce throughput by depending on special hardware components and maintaining complex information that is used infrequently. Considering routing complexity with the goal of increasing the throughput of the network, in this paper we introduce a new approach called Dynamic Probabilistic Route (DPR) discovery. 
The Node’s Forwarding Probability (NFP) is dynamically calculated by the DPR mobile nodes using a Probability Function (PF) that depends on the density of local neighbor nodes and the cumulative number of broadcast-covered neighbors. Kannan Shanmugam, Karthik Subburathinam, and Arunachalam Velayuthampalayam Palanisamy Copyright © 2016 Kannan Shanmugam et al. All rights reserved. Access to Network Login by Three-Factor Authentication for Effective Information Security Wed, 24 Feb 2016 16:14:34 +0000 Today’s developments in computing and the Internet of Things have made a huge difference in the transformation of our lives. Ordinary computer and web users must log in to access email, social networking, internet banking, ticket booking, online newspapers, and so forth. The mapping of login user name to secret key validates whether the logging-in user is the intended client. The secret key plays an indispensable part in security. The objective of multifactor authentication (MFA) is to create a layered defense and make it more difficult for an unauthenticated entity to reach an objective such as a physical location, computing device, network, or database. If one factor is compromised or broken, the attacker still has two more barriers to breach before successfully breaking into the objective. An attempt has been made here utilizing three types of authentication factors; managing an additional secret key in this way adds an additional layer of security. S. Vaithyasubramanian, A. Christy, and D. Saravanan Copyright © 2016 S. Vaithyasubramanian et al. All rights reserved. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks Tue, 23 Feb 2016 13:12:05 +0000 Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. 
In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator’s configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance. Yoshiaki Taniguchi, Hiroaki Tsutsumi, Nobukazu Iguchi, and Kenzi Watanabe Copyright © 2016 Yoshiaki Taniguchi et al. All rights reserved. Soft Computational Approaches for Prediction and Estimation of Software Development Thu, 18 Feb 2016 06:28:05 +0000 Xiao-Zhi Gao, Arun Kumar Sangaiah, and Muthu Ramachandran Copyright © 2016 Xiao-Zhi Gao et al. All rights reserved. An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment Mon, 15 Feb 2016 13:29:08 +0000 Cloud computing requires improved security in its data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure secure data transfer between devices. Symmetric key mechanisms (pseudorandom functions) provide a lower protection level than asymmetric key schemes (e.g., RSA and ECC). The presence of expired content and irrelevant resources can lead to unauthorized data access. This paper investigates how integrity and secure data transfer are improved with the Elliptic Curve based Schnorr scheme. 
This paper proposes a virtual machine based cloud model with a Hybrid Cloud Security Algorithm (HCSA) to remove expired content. HCSA-based auditing improves the prediction of malicious activity during data transfer. Duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes, so this paper utilizes the Bloom filter concept to avoid cloud server duplication. The combination of EC-Schnorr and the Bloom filter efficiently improves security performance. A comparative analysis between the proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time under varying auditing requests and servers confirms the effectiveness of HCSA in building the cloud security model. Vinothkumar Muthurajan and Balaji Narayanasamy Copyright © 2016 Vinothkumar Muthurajan and Balaji Narayanasamy. All rights reserved. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network Mon, 08 Feb 2016 10:02:18 +0000 This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network combining a standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. 
They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the k-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making bad decisions in the decision-making process. Lukas Falat, Dusan Marcek, and Maria Durisova Copyright © 2016 Lukas Falat et al. All rights reserved. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks Wed, 03 Feb 2016 13:08:03 +0000 Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effective resource sharing. The scheduling of nonpreemptive tasks in the cloud computing environment is an irreversible constraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks in multiple VMs or in multiple cores of the same VM. Jobs also arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with existing methods. D. Chitra Devi and V. Rhymend Uthariaraj Copyright © 2016 D. Chitra Devi and V. Rhymend Uthariaraj. 
All rights reserved. Comparative Study on Various Authentication Protocols in Wireless Sensor Networks Wed, 13 Jan 2016 07:13:48 +0000 Wireless sensor networks (WSNs) consist of lightweight devices with low cost, low power, and short-range wireless communication. The sensors communicate with each other to form a network. Broadcast transmission is widely used in WSNs and their applications, so it has become crucial to authenticate broadcast messages. Key management is also an active research topic in WSNs: several key management schemes have been introduced, but their benefits are not recognized in every specific WSN application. Security services are vital for ensuring the integrity, authenticity, and confidentiality of critical information; therefore, authentication mechanisms are required to support these security services and to be resilient to distinct attacks. Various authentication protocols, such as key management protocols, lightweight authentication protocols, and broadcast authentication protocols, are compared and analyzed for all secure transmission applications. The major goal of this survey is to compare these protocols and find the appropriate one for further research. Comparisons between the various authentication techniques are also illustrated. S. Raja Rajeswari and V. Seenivasagam Copyright © 2016 S. Raja Rajeswari and V. Seenivasagam. All rights reserved. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network Wed, 06 Jan 2016 11:30:35 +0000 A Wireless Sensor Network (WSN) is formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example, temperature, water level, pressure, health care, and various military applications. 
Sensor nodes are mostly equipped with self-contained battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of wireless sensor networks, energy conservation measures are essential for improving their performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks in which clustering and cluster-head selection are performed using the Particle Swarm Optimization (PSO) algorithm so as to minimize power consumption in the WSN. Performance metrics are evaluated and the results are compared with a competing clustering algorithm to validate the reduction in energy consumption. C. Vimalarani, R. Subramanian, and S. N. Sivanandam Copyright © 2016 C. Vimalarani et al. All rights reserved. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach Mon, 04 Jan 2016 05:40:29 +0000 Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in WSNs is a significant issue for improving network lifetime, so it is essential to develop an energy-aware clustering protocol that reduces energy consumption. In this paper, a neuro-fuzzy energy-aware clustering scheme (NFEACS) is proposed to form optimal, energy-aware clusters. NFEACS consists of two parts, a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and cluster heads in the WSN. NFEACS uses a neural network that provides an effective training set, based on the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy-aware cluster heads. Fuzzy rules are used in the fuzzy logic part to form the clusters. 
NFEACS is designed for WSNs that handle node mobility. The proposed scheme is compared with related clustering schemes, a cluster-head election mechanism using fuzzy logic, and energy-aware fuzzy unequal clustering. The experimental results show that NFEACS performs better than the other related schemes. E. Golden Julie and S. Tamil Selvi Copyright © 2016 E. Golden Julie and S. Tamil Selvi. All rights reserved. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks Sun, 03 Jan 2016 11:18:58 +0000 The software development life cycle has been characterized by destructive disconnects between activities like planning, analysis, design, and programming, and software built around prediction results is a particular challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather reports has been an area of extensive research for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods; over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve these problems and to accurately predict future trends and prices. This paper addresses some architectural design issues for performance improvement by combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive approach, a hybrid methodology, for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used. Narayanan Manikandan and Srinivasan Subha Copyright © 2016 Narayanan Manikandan and Srinivasan Subha. All rights reserved. 
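The hybrid idea running through the forecasting abstracts in this feed, a base predictor whose output is corrected using its own recent errors (as in the RBF-plus-moving-average Forex model), can be sketched minimally as follows. The naive base model, the window size, and the sample series are illustrative assumptions, not any paper's actual model.

```python
# Minimal sketch of a hybrid time-series forecaster: a simple base predictor
# plus a moving-average correction built from its recent one-step errors.
# The naive base model (predict the last observed value) and the window size
# are illustrative assumptions.

def naive_forecast(history):
    """Assumed base model: predict the last observed value."""
    return history[-1]

def hybrid_forecast(history, window=3):
    """Base forecast corrected by the moving average of recent 1-step errors."""
    errors = [history[i] - naive_forecast(history[:i])
              for i in range(len(history) - window, len(history))]
    return naive_forecast(history) + sum(errors) / window

series = [1.30, 1.31, 1.33, 1.36, 1.40]  # e.g. daily closes (made-up data)
print(round(hybrid_forecast(series), 4))
```

In a real hybrid system the base model would be an econometric or neural forecaster; the point of the sketch is only the error-feedback structure, where the correction term captures systematic bias the base model keeps making.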
Semantic Clustering of Search Engine Results Thu, 31 Dec 2015 07:52:19 +0000 This paper presents a novel approach to search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies an activation spreading technique in order to generate semantically meaningful clusters. This allows documents that are semantically similar to be clustered together, rather than clustering documents based on similar terms. A prototype is implemented, and several experiments are conducted to test the proposed solution. The experimental results confirm that the proposed solution achieves remarkable precision. Sara Saad Soliman, Maged F. El-Sayed, and Yasser F. Hassan Copyright © 2015 Sara Saad Soliman et al. All rights reserved. Optimal Decomposition of Service Level Objectives into Policy Assertions Tue, 29 Dec 2015 11:38:16 +0000 A WS-Agreement specifies the quality objectives that each partner is obligated to provide. To meet these objectives, the corresponding partner should apply appropriate policy assertions to its web services and adjust their parameters accordingly. Transformation of WS-CDL to WS-BPEL is addressed in some related works, but none of them considers the quality aspects of the transformation or run-time adaptation. Here, in conformance with web services standards, we propose an optimal decomposition method that produces a set of WS-Policy assertions. The assertions can be applied to WS-BPEL elements and affect their run-time behavior. The decomposition method achieves the best outcome for a performance indicator and also guarantees the lowest adaptation overhead by reducing the number of service reselections. We considered a securities settlement case study to prototype and evaluate the decomposition method. 
The results show an acceptable trade-off between customer satisfaction (the targeted performance indicator in our case study) and adaptation overhead. Yousef Rastegari and Fereidoon Shams Copyright © 2015 Yousef Rastegari and Fereidoon Shams. All rights reserved. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks Thu, 24 Dec 2015 15:11:34 +0000 A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade network performance, and multipath selection is a challenging task for improving network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free, link-disjoint paths in a MANET; it is used as an optimum path selection technique that produces a set of optimal paths between source and destination. Particle swarm optimization (PSO), which optimizes a problem by iteratively improving candidate solutions with regard to a measure of quality, is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, an energy factor, and the optimal traffic ratio between source and destination to increase routing performance, and in the route discovery phase it uses PSO to seek nodes with better link quality and discover multiple loop-free paths. Y. Harold Robinson and M. Rajaram Copyright © 2015 Y. Harold Robinson and M. Rajaram. All rights reserved. Retracted: Intelligent Advisory Speed Limit Dedication in Highway Using VANET Thu, 17 Dec 2015 09:21:51 +0000 The Scientific World Journal Copyright © 2015 The Scientific World Journal. All rights reserved. 
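The particle swarm optimization used by EMPSO above (and by the clustering schemes earlier in this feed) rests on a standard velocity/position update rule. The sketch below applies it to a toy one-dimensional minimization; the coefficients and the objective are illustrative assumptions, not the papers' routing or clustering formulations.

```python
# Sketch of the standard PSO update rule: each particle's velocity blends its
# inertia, a pull toward its personal best, and a pull toward the swarm best.
# Coefficients (w, c1, c2) and the toy objective are illustrative assumptions.
import random

def pso_minimize(f, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)  # fixed seed for reproducibility of the sketch
    pos = [random.uniform(-10, 10) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's best-seen position
    gbest = min(pos, key=f)              # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])   # cognitive pull
                      + c2 * r2 * (gbest - pos[i]))     # social pull
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda x: (x - 3) ** 2)
print(round(best, 3))  # converges near the minimum at x = 3
```

In EMPSO the "position" would encode a candidate path and the objective would combine transmission cost, energy, and traffic ratio, but the update rule is the same.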
Game Theory Based Security in Wireless Body Area Network with Stackelberg Security Equilibrium Wed, 02 Dec 2015 08:03:15 +0000 Wireless Body Area Networks (WBANs) are effectively used in healthcare to increase the quality of patients’ lives and the value of healthcare services. The biosensor-based approach in medical care systems makes it difficult to respond to patients with minimal response time, and because the medical care unit does not have full-time access to ubiquitous broadband connections, the level of security is not always high. Security issues also arise in monitoring records of the user’s body functions. Most systems on the Wireless Body Area Network are not effective in facing security deployment issues. To access patient information with higher security on a WBAN, Game Theory with Stackelberg Security Equilibrium (GTSSE) is proposed in this paper. The GTSSE mechanism takes all players into account. Patients are monitored by initially placing the power position authority; the position authority in GTSSE is the organizer, and all other players react to the organizer’s decision. Based on the proposed approach, experiments have been conducted on factors such as the security ratio based on patients’ health information, system flexibility level, energy consumption rate, and information loss rate. Stackelberg Security considerably improves the strength of the solution, yielding higher security. M. Somasundaram and R. Sivakumar Copyright © 2015 M. Somasundaram and R. Sivakumar. All rights reserved. Palm-Print Pattern Matching Based on Features Using Rabin-Karp for Person Identification Tue, 01 Dec 2015 06:59:14 +0000 Palm-print based individual identification is regarded as an effective method for identifying persons with high confidence. The palm print, the large inner surface of the hand, contains many features such as principal lines, ridges, minutiae points, singular points, and textures. 
Feature based pattern matching has faced the challenge that the spatial positional variations occur between the training and test samples. To perform effective palm-print features matching, Rabin-Karp Palm-Print Pattern Matching (RPPM) method is proposed in this paper. With the objective of improving the accuracy of pattern matching, double hashing is employed in RPPM method. Multiple patterns of features are matched using the Aho-Corasick Multiple Feature matching procedure by locating the position of the features with finite set of bit values as an input text, improving the cumulative accuracy on hashing. Finally, a time efficient bit parallel ordering presents an efficient variation on matching the palm-print features of test and training samples with minimal time. Experiment is conducted on the factors such as pattern matching efficiency rate, time taken on multiple palm-print feature matching efficiency, and cumulative accuracy on hashing. S. Kanchana and G. Balakrishnan Copyright © 2015 S. Kanchana and G. Balakrishnan. All rights reserved. Assigning Priorities for Fixed Priority Preemption Threshold Scheduling Wed, 25 Nov 2015 09:44:49 +0000 Preemption threshold scheduling (PTS) enhances real-time schedulability by controlling preemptiveness of tasks. This benefit of PTS highly depends on a proper algorithm that assigns each task feasible scheduling attributes, which are priority and preemption threshold. Due to the existence of an efficient optimal preemption threshold assignment algorithm that works with fully assigned priority orderings, we need an optimal priority assignment algorithm for PTS. This paper analyzes the inefficiency or nonoptimality of the previously proposed optimal priority assignment algorithms for PTS. We develop theorems for exhaustively but safely pruning infeasible priority orderings while assigning priorities to tasks for PTS. 
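The shape of such a pruned search over priority orderings can be sketched generically: a depth-first enumeration that abandons any partial ordering a feasibility predicate rejects. The predicate below is a toy stand-in assumed for illustration, not the paper's schedulability-based pruning theorems.

```python
def search_orderings(tasks, feasible_prefix, assigned=()):
    """Depth-first enumeration of full priority orderings with safe pruning:
    a partial ordering is abandoned as soon as `feasible_prefix` rejects it,
    so no completion of an infeasible prefix is ever visited."""
    if len(assigned) == len(tasks):
        yield assigned
        return
    for t in tasks:
        if t in assigned:
            continue
        candidate = assigned + (t,)
        if feasible_prefix(candidate):        # prune infeasible prefixes early
            yield from search_orderings(tasks, feasible_prefix, candidate)

# Toy pruning rule: task "c" must receive the highest priority.
orders = list(search_orderings(("a", "b", "c"),
                               lambda prefix: prefix[0] == "c"))
```

With a rule that rejects prefixes not starting with "c", only the two orderings beginning with "c" are ever completed; the four others are cut off at depth one.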
Based on the developed theorems, we correct the previously proposed optimal priority assignment algorithm for PTS. We also propose a performance improved optimal priority assignment algorithm for PTS proving its optimality. The empirical evaluation results clearly show the effectiveness of the proposed algorithm. Saehwa Kim Copyright © 2015 Saehwa Kim. All rights reserved. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes Tue, 24 Nov 2015 14:21:27 +0000 The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multidiscriminative cues (i.e., prior domain information, intensity model, and regional characteristics of liver in a graph-cut image segmentation framework). The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publically available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations and a mean score of 80.8% and 82.5% on MICCAI and IRCAD dataset, respectively, is obtained. The experimental result illustrates the efficiency and effectiveness of the proposed method. Maya Eapen, Reeba Korah, and G. Geetha Copyright © 2015 Maya Eapen et al. All rights reserved. 
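Under the hood, every graph-cut segmenter of the kind described above reduces energy minimization to a minimum s-t cut, typically computed via max-flow. A compact, generic Edmonds-Karp sketch (stdlib-only; the toy capacities stand in for the paper's edge-adaptive weights, which are not reproduced here):

```python
from collections import deque

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow on a directed graph given as (u, v, capacity)
    arcs. By the max-flow/min-cut theorem, the value returned equals the
    minimum s-t cut, which is what graph-cut segmentation minimizes when
    labeling pixels as object (source side) or background (sink side)."""
    cap = {}
    for u, v, c in edges:                       # build residual graph
        cap.setdefault(u, {})
        cap.setdefault(v, {})
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)                 # reverse arc for residual flow
    total = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:        # BFS: shortest augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                        # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            cap[u][v] -= aug                    # push flow forward
            cap[v][u] += aug                    # open the residual arc
        total += aug

# Tiny toy network; in segmentation the nodes would be pixels plus two terminals.
cut_value = max_flow([("s", "a", 3), ("s", "b", 2), ("a", "t", 2),
                      ("b", "t", 3), ("a", "b", 1)], "s", "t")
```

In a segmentation graph, terminal-link capacities encode the intensity/prior models and neighbor-link capacities encode the edge-adaptive weights.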
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification Tue, 10 Nov 2015 14:16:59 +0000 This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level fusion are raw biometric data, which contain rich information when compared to decision and matching score level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. Gayathri Rajagopal and Ramamoorthy Palaniswamy Copyright © 2015 Gayathri Rajagopal and Ramamoorthy Palaniswamy. All rights reserved. Identifying User Interaction Patterns in E-Textbooks Thu, 29 Oct 2015 13:52:53 +0000 We introduce a new architecture for e-textbooks which contains two navigational aids: an index and a concept map. We report results from an evaluation in a university setting with 99 students. The interaction sequences of the users were captured during the user study. We found several clusters of user interaction types in our data. 
Three separate user types were identified based on the interaction sequences: passive user, term clicker, and concept map user. We also discovered that with the concept map interface users started to interact with the application significantly sooner than with the index interface. Overall, our findings suggest that analysis of interaction patterns allows deeper insights into the use of e-textbooks than is afforded by summative evaluation. Santeri Saarinen, Tomi Heimonen, Markku Turunen, Mirjamaija Mikkilä-Erdmann, Roope Raisamo, Norbert Erdmann, Sari Yrjänäinen, and Tuuli Keskinen Copyright © 2015 Santeri Saarinen et al. All rights reserved. An Efficient Framework for Large Scale Multimedia Content Distribution in P2P Network: I2NC Thu, 29 Oct 2015 11:33:19 +0000 Network coding (NC) makes content distribution more effective and easier in P2P content distribution network and reduces the burden of the original seeder. It generalizes traditional network routing by allowing the intermediate nodes to generate new coded packet by combining the received packets. The randomization introduced by network coding makes all packets equally important and resolves the problem of locating the rarest block. Further, it reduces traffic in the network. In this paper, we analyze the performance of traditional network coding in P2P content distribution network by using a mathematical model and it is proved that traffic reduction has not been fully achieved in P2P network using traditional network coding. It happens due to the redundant transmission of noninnovative information block among the peers in the network. Hence, we propose a new framework, called I2NC (intelligent-peer selection and incremental-network coding), to eliminate the unnecessary flooding of noninnovative coded packets and thereby to improve the performance of network coding in P2P content distribution further. 
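"Noninnovative" has a precise linear-algebra meaning in network coding: a coded packet whose coding-coefficient vector already lies in the span of what the peer holds. A minimal innovation check over GF(2), with coefficient vectors packed into Python ints (an illustrative sketch; practical network coding implementations usually work over GF(2^8)):

```python
def rank_gf2(vectors):
    """Rank over GF(2) via Gaussian elimination; each vector is an int
    whose bits are the coding coefficients of one received packet."""
    pivots = {}                         # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v        # new independent direction
                break
            v ^= pivots[lead]           # eliminate the leading bit
    return len(pivots)

def is_innovative(held, incoming):
    """A coded packet is innovative iff it increases the rank of the
    coefficient matrix, i.e., it is not a GF(2) combination of `held`."""
    return rank_gf2(held + [incoming]) > rank_gf2(held)

held = [0b101, 0b011]                   # coefficients of packets already held
is_innovative(held, 0b110)              # 0b110 = 0b101 XOR 0b011 -> redundant
is_innovative(held, 0b100)              # adds a new dimension -> innovative
```

Suppressing transmissions that fail this check is exactly what eliminates the redundant flooding the abstract describes.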
A comparative study and analysis of the proposed system is made through various related implementations, and the results show that traffic is reduced by 10–15% and that the average and maximum download times improve by reducing the original seeder’s workload. M. Anandaraj, P. Ganeshkumar, K. P. Vijayakumar, and K. Selvaraj Copyright © 2015 M. Anandaraj et al. All rights reserved. Application of Artificial Intelligence for Bridge Deterioration Model Thu, 22 Oct 2015 13:27:19 +0000 The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. This paper presents an artificial-intelligence-based approach to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed to describe the integrated result of the historical information and the newly gained information according to Bayes' theorem, and this distribution is used to update the model parameters. This AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the parameter updating problem without manual intervention. Zhang Chen, Yangyang Wu, Li Li, and Lijun Sun Copyright © 2015 Zhang Chen et al. All rights reserved. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition Wed, 21 Oct 2015 13:20:35 +0000 In real-world applications, the images of faces vary with illumination, facial expression, and pose. It seems that more training samples are able to reveal more of the possible images of the faces. 
Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification. Rong Wang Copyright © 2015 Rong Wang. All rights reserved. A Study on Students Acquisition of IT Knowledge and Its Implication on M-Learning Wed, 21 Oct 2015 06:51:54 +0000 The boom in mobile technology has seen a dramatic rise in its usage. This has led to usage of mobiles even in the academic context for further learning. Although the advantages of m-learning (mobile learning) are visible, studies are required to address the aspects that shape its virtual expectations. The acceptance of mobile technology relies mostly on how the students feel about mobile technology fitting into their requirements. Yet, in spite of the significance in the potential of m-learning, research studies have only inadequate data to identify the factors that influence their decision to adapt the mobile technology for the purpose of learning. To deal with this space, the present study was undertaken to correlate the IT skills of students with their impact on their acceptance of m-learning. The research study found that the perceived usability along with the usefulness of m-learning impacts the association between IT expertise and the objective of learners’ acceptance of m-learning. A survey of 892 students from Engineering, Arts, and Science Colleges found that IT skills influence student’s acquisition of m-learning technology. Specialized and advanced skills in mobile technology along with basic skills play a significant role in influencing a student to accept m-learning. 
But no specific substantiation has been established to support the statement that highly developed IT skills have influenced the students to accept m-learning. A. Balavivekanandhan and S. Arulchelvan Copyright © 2015 A. Balavivekanandhan and S. Arulchelvan. All rights reserved. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption Tue, 13 Oct 2015 09:15:15 +0000 Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. Jeyamala Chandrasekaran and S. J. 
Thiruvengadam Copyright © 2015 Jeyamala Chandrasekaran and S. J. Thiruvengadam. All rights reserved. Hybrid Scheduling Model for Independent Grid Tasks Mon, 12 Oct 2015 14:05:36 +0000 Grid computing facilitates resource sharing across geographically distributed administrative domains. Scheduling in a distributed heterogeneous environment is intrinsically very hard because of the heterogeneous nature of the resource collection. Makespan and tardiness are two different measures of scheduling, and much of the previous research concentrated on the reduction of makespan, which measures machine utilization. In this paper, we propose a hybrid scheduling algorithm for scheduling independent grid tasks with the objective of reducing their total weighted tardiness. Tardiness measures due date performance, which has a direct impact on the cost of executing the jobs. We propose the BG_ATC algorithm, a combination of best gap (BG) search and the Apparent Tardiness Cost (ATC) indexing algorithm, and we apply these two algorithms in two different phases of the scheduling process. In addition, the results were compared with various benchmark algorithms, and the experimental results show that our algorithm outperforms them. J. Shanthini, T. Kalaikumaran, and S. Karthik Copyright © 2015 J. Shanthini et al. All rights reserved. Convalescing Cluster Configuration Using a Superlative Framework Mon, 12 Oct 2015 11:40:43 +0000 Competent data mining methods are vital for discovering knowledge from the databases built as a result of the enormous growth of data. Various techniques of data mining are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique, which guides the partitioning of data objects into disjoint segments. The K-means algorithm is a versatile algorithm among the various approaches used in data clustering. 
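For reference, the plain Lloyd's K-means iteration that such clustering methods build on looks like this on 1-D data (a generic sketch; the sample points and seed centroids are made-up values, and the dataset discretization and binary search initialization discussed in the abstract are omitted):

```python
def kmeans(points, centroids, iters=20):
    """Plain Lloyd's K-means on 1-D data. Alternates an assignment step
    (each point joins its nearest centroid) with an update step (each
    centroid moves to the mean of its cluster)."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:                        # assignment step
            j = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]   # update step
    return centroids, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers, groups = kmeans(points, centroids=[0.0, 10.0])   # seeds are illustrative
```

The quality of the final centroids depends heavily on the seeds, which is precisely the step the proposed binary search initialization targets.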
The algorithm and its diverse adaptations suffer certain performance problems. To overcome these issues, a superlative algorithm for data clustering is proposed in this paper. The specific features of the proposed algorithm are discretizing the dataset, thereby improving the accuracy of clustering, and adopting a binary search initialization method to generate the cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into their respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository evidently show that the accuracy and validity measures are higher than those of the other two approaches, namely, simple K-means and the Binary Search method. Thus, the proposed approach shows that the discretization process improves the efficacy of descriptive data mining tasks. R. Sabitha and S. Karthik Copyright © 2015 R. Sabitha and S. Karthik. All rights reserved. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model Mon, 12 Oct 2015 09:30:54 +0000 Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenges. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals regarding their opponents. Existing research focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. 
To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing different types of negotiation conflict that occurs during negotiation process as a multistage Markov decision problem. At each stage of negotiation process, the proposed decision model generates the heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework. Rajkumar Rajavel and Mala Thangarathinam Copyright © 2015 Rajkumar Rajavel and Mala Thangarathinam. All rights reserved. A Push on Job Anxiety for Employees on Managing Recent Difficult to Understand Computing Equipment in the Modern Issues in Indian Banking Quarter Wed, 07 Oct 2015 07:10:55 +0000 Stress management can be defined as intervention planned to decrease the force of stressors in the administrative center. These can have a human being focus, aimed at raising an individual’s ability to cope with stressors and the implementation of the CRM is essential to establish a better performance of the banking sector. Since managing stress and customer relationship management are becoming crucial in the field of management the work has forecasted them in a wide range of dimensions. This paper organizes few preliminary concepts of stress and critically analyzes the CRM strategy implemented by banking sector. Hence the employees of the Banking Industry have been asked to give their opinion about the CRM strategy adopted by banks. In order to provide the background of the employees, the profile of the employees has been discussed initially. 
The profile of the employees, along with their opinion on the CRM practices adopted in the banking industry, has been discussed. As the work progresses, two main parameters are taken into consideration to determine the areas in which stress mainly arises, and the paper also covers certain valuable stress management tactics and techniques that are particularly helpful for people working in the banking sector. An attempt has also been made to diagnose how the underlying stress of day-to-day life mounts into a higher level of stress upon the employees. Further, a detailed parametric analysis of employee stress has been conducted across a wide range of key parameters, with several rounds of experiments using techniques such as the Kolmogorov-Smirnov test, Garrett ranking, and ANOVA; the work thus paves the way for an accurate measure of customer handling. The questionnaire is planned to be distributed to 175 employees in banks of the Madurai district. Ragunathan Gopalakrishnan and Chellapa Swarnalatha Copyright © 2015 Ragunathan Gopalakrishnan and Chellapa Swarnalatha. All rights reserved. Hybrid RGSA and Support Vector Machine Framework for Three-Dimensional Magnetic Resonance Brain Tumor Classification Mon, 05 Oct 2015 07:10:04 +0000 A novel hybrid approach for identifying, from magnetic resonance images, the brain regions accountable for brain tumor is presented in this paper. Classification of medical images is substantial in both clinical and research areas. The magnetic resonance imaging (MRI) modality excels at diagnosing brain abnormalities like brain tumor, multiple sclerosis, hemorrhage, and many more. The primary objective of this work is to propose a novel three-dimensional (3D) brain tumor classification model using MRI images with both micro- and macroscale textures, designed to differentiate the MRI of the brain under two classes of lesion, benign and malignant. 
The images were initially preprocessed using a 3D Gaussian filter. Based on the VOI (volume of interest) of the image, features were extracted using the 3D volumetric Square Centroid Lines Gray Level Distribution Method (SCLGM) along with 3D run length and cooccurrence matrices. The optimal features are selected using the proposed refined gravitational search algorithm (RGSA). Support vector machines, backpropagation networks, and k-nearest neighbor classifiers are used to evaluate the goodness of the classification approach. The preliminary evaluation of the system is performed using 320 real-time brain MRI images. The system is trained and tested by using a leave-one-case-out method. The performance of the classifier is tested using the receiver operating characteristic curve, reaching 0.986 (±0.002). The experimental results demonstrate the contribution of the systematic and efficient feature extraction and feature selection algorithms to the performance of state-of-the-art feature classification methods. R. Rajesh Sharma and P. Marikkannu Copyright © 2015 R. Rajesh Sharma and P. Marikkannu. All rights reserved. Priority Based Congestion Control Dynamic Clustering Protocol in Mobile Wireless Sensor Networks Sun, 04 Oct 2015 15:15:32 +0000 Wireless sensor networks are widely used to monitor natural phenomena, because natural disasters have increased globally, causing significant loss of life, economic setbacks, and impaired social development. Saving energy in a wireless sensor network (WSN) is a critical factor to be considered. The sensor nodes are deployed to sense, compute, and communicate alerts in a WSN, which are used to prevent natural hazards. Generally, communication consumes more energy than sensing and computing; hence a cluster-based protocol is preferred. Even with clustering, multiclass traffic creates congested hotspots in the cluster, thereby causing packet loss and delay. 
In order to conserve energy and to avoid congestion during multiclass traffic a novel Priority Based Congestion Control Dynamic Clustering (PCCDC) protocol is developed. PCCDC is designed with mobile nodes which are organized dynamically into clusters to provide complete coverage and connectivity. PCCDC computes congestion at intra- and intercluster level using linear and binary feedback method. Each mobile node within the cluster has an appropriate queue model for scheduling prioritized packet during congestion without drop or delay. Simulation results have proven that packet drop, control overhead, and end-to-end delay are much lower in PCCDC which in turn significantly increases packet delivery ratio, network lifetime, and residual energy when compared with PASCC protocol. R. Beulah Jayakumari and V. Jawahar Senthilkumar Copyright © 2015 R. Beulah Jayakumari and V. Jawahar Senthilkumar. All rights reserved. Framing a Knowledge Base for a Legal Expert System Dealing with Indeterminate Concepts Thu, 01 Oct 2015 13:36:14 +0000 Despite decades of development of formal tools for modelling legal knowledge and reasoning, the creation of a fully fledged legal decision support system remains challenging. Among those challenges, such system requires an enormous amount of commonsense knowledge to derive legal expertise. This paper describes the development of a negotiation decision support system (the Parenting Plan Support System or PPSS) to support parents in drafting an agreement (the parenting plan) for the exercise of parental custody of minor children after a divorce is granted. The main objective here is to discuss problems of framing an intuitively appealing and computationally efficient knowledge base that can adequately represent the indeterminate legal concept of the well-being of the child in the context of continental legal culture and of Polish law in particular. 
In addition to commonsense reasoning, interpretation of such a concept demands both legal expertise and significant professional knowledge from other domains. Michał Araszkiewicz, Agata Łopatkiewicz, Adam Zienkiewicz, and Tomasz Zurek Copyright © 2015 Michał Araszkiewicz et al. All rights reserved. MATLAB Simulation of UPQC for Power Quality Mitigation Using an Ant Colony Based Fuzzy Control Technique Thu, 01 Oct 2015 12:59:32 +0000 This paper proposes a fuzzy logic based new control scheme for the Unified Power Quality Conditioner (UPQC) for minimizing the voltage sag and total harmonic distortion in the distribution system consequently to improve the power quality. UPQC is a recent power electronic module which guarantees better power quality mitigation as it has both series-active and shunt-active power filters (APFs). The fuzzy logic controller has recently attracted a great deal of attention and possesses conceptually the quality of the simplicity by tackling complex systems with vagueness and ambiguity. In this research, the fuzzy logic controller is utilized for the generation of reference signal controlling the UPQC. To enable this, a systematic approach for creating the fuzzy membership functions is carried out by using an ant colony optimization technique for optimal fuzzy logic control. An exhaustive simulation study using the MATLAB/Simulink is carried out to investigate and demonstrate the performance of the proposed fuzzy logic controller and the simulation results are compared with the PI controller in terms of its performance in improving the power quality by minimizing the voltage sag and total harmonic distortion. N. Kumarasabapathy and P. S. Manoharan Copyright © 2015 N. Kumarasabapathy and P. S. Manoharan. All rights reserved. Distilling Big Data: Refining Quality Information in the Era of Yottabytes Thu, 01 Oct 2015 12:29:26 +0000 Big Data is the buzzword of the modern century. 
With the invasion of pervasive computing, we live in a data centric environment, where we always leave a track of data related to our day to day activities. Be it a visit to a shopping mall or hospital or surfing Internet, we create voluminous data related to credit card transactions, user details, location information, and so on. These trails of data simply define an individual and form the backbone for user-profiling. With the mobile phones and their easy access to online social networks on the go, sensor data such as geo-taggings and events and sentiments around them contribute to the already overwhelming data containers. With reductions in the cost of storage and computational devices and with increasing proliferation of Cloud, we never felt any constraints in storing or processing such data. Eventually we end up having several exabytes of data and analysing them for their usefulness has introduced new frontiers of research. Effective distillation of these data is the need of the hour to improve the veracity of the Big Data. This research targets the utilization of the Fuzzy Bayesian process model to improve the quality of information in Big Data. Sivaraman Ramachandramurthy, Srinivasan Subramaniam, and Chandrasekeran Ramasamy Copyright © 2015 Sivaraman Ramachandramurthy et al. All rights reserved. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators Thu, 01 Oct 2015 12:27:09 +0000 The key objective of wind turbine development is to ensure that output power is continuously increased. It is authenticated that wind turbines (WTs) supply the necessary reactive power to the grid at the time of fault and after fault to aid the flowing grid voltage. 
At this juncture, this paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through rate of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were basically applied to secure the rotor-side converter during the occurrence of grid faults. This traditional controller is found not to satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed in this paper by introducing heuristic controllers that remove the usage of the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter during the grid fault and handles the fault ride-through without employing any other hardware modules. The paper introduces a double wavelet neural network controller which is appropriately tuned employing differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km 25 kV feeder is carried out by simulation. N. Manonmani, V. Subbiah, and L. Sivakumar Copyright © 2015 N. Manonmani et al. All rights reserved. Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search Thu, 01 Oct 2015 12:26:01 +0000 A hybrid computational intelligent algorithm, built by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is always a tricky problem: determining an optimum route that satisfies a variety of necessary constraints in a MANET. 
The problem is also declared NP-hard due to the constant topology variation of MANETs. Thus a solution technique that takes on the challenges of the QoSR problem needs to be developed. This paper proposes a hybrid algorithm by modifying the Cuckoo Search Algorithm (CSA) with a new position updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. Thus the CSA acts as the main search procedure guided by the updating mechanism derived from DE, called tuned CSA (TCSA). Numerical simulations on MANETs are performed to demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some of the existing techniques in the literature, and the superiority of the proposed method is thereby established. S. Rajalakshmi and R. Maguteeswaran Copyright © 2015 S. Rajalakshmi and R. Maguteeswaran. All rights reserved. Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version Thu, 01 Oct 2015 12:04:20 +0000 The world is moving towards a new realm of computing, the Internet of Things. The Internet of Things envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, which include wired, wireless, and ad hoc networks, should be utilized. Moreover, more than other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, among which the black hole attack and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases when the compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. 
Therefore this paper proposes an alleviation procedure, consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure, to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS by assuring resource availability, thus making the MANET appropriate for the Internet of Things. M. Rajesh Babu, S. Moses Dian, Siva Chelladurai, and Mathiyalagan Palaniappan Copyright © 2015 M. Rajesh Babu et al. All rights reserved. Modeling and Simulation of a Novel Relay Node Based Secure Routing Protocol Using Multiple Mobile Sink for Wireless Sensor Networks Thu, 01 Oct 2015 11:11:56 +0000 Data gathering and optimal path selection for wireless sensor networks (WSN) using existing protocols result in collisions, and an increase in collisions further increases the possibility of packet drops. Thus there is a need to eliminate collisions during data aggregation and to increase efficiency while maintaining maximum security. This paper presents a reliable, secure, and energy-efficient WSN routing protocol with minimum delay, named the relay node based secure routing protocol for multiple mobile sinks (RSRPMS). This protocol finds the rendezvous point for optimal transmission of data using a “splitting tree” technique in a tree-shaped network topology; the “Biased Random Walk” model is then used to determine all subsequent positions of a sink. In case of an event, the sink gathers the data from all sources when they are within the sensing range of the rendezvous point. Otherwise a relay node is selected from its neighbors to transfer packets from the rendezvous point to the sink. Symmetric-key cryptography is used for secure transmission.
The proposed relay node based secure routing protocol for multiple mobile sinks (RSRPMS) is evaluated through simulation, and the results are compared with the Intelligent Agent-Based Routing (IAR) protocol, showing an increase in network lifetime over other routing protocols. Madhumathy Perumal and Sivakumar Dhandapani Copyright © 2015 Madhumathy Perumal and Sivakumar Dhandapani. All rights reserved. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm Thu, 01 Oct 2015 11:00:04 +0000 Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth of web services makes “quality of service” an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user’s request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, a QoS-aware automatic web service composition (QAWSC) algorithm based on the QoS aspects of the web services and user preferences is also proposed. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services. Deivamani Mallayya, Baskaran Ramachandran, and Suganya Viswanathan Copyright © 2015 Deivamani Mallayya et al. All rights reserved.
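A preference-weighted QoS ranking of candidate services, in the spirit of the UPWSR idea above, can be sketched as follows. The service names, QoS attributes, and weights are invented for illustration; the actual UPWSR scoring function is not reproduced here.

```python
def rank_services(candidates, weights):
    """Score each candidate by preference-weighted, min-max-normalized QoS values.
    'benefit' attributes (e.g. reliability) are better when higher;
    'cost' attributes (e.g. response time, price) are better when lower."""
    scores = {}
    for attr, (weight, kind) in weights.items():
        vals = [c[attr] for c in candidates.values()]
        lo, hi = min(vals), max(vals)
        for name, c in candidates.items():
            if hi == lo:
                norm = 1.0  # all candidates tie on this attribute
            elif kind == "benefit":
                norm = (c[attr] - lo) / (hi - lo)
            else:  # cost attribute: invert so lower raw values score higher
                norm = (hi - c[attr]) / (hi - lo)
            scores[name] = scores.get(name, 0.0) + weight * norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidate services for one task, with user preference weights
services = {
    "s1": {"response_ms": 120, "reliability": 0.99, "price": 3.0},
    "s2": {"response_ms": 80,  "reliability": 0.95, "price": 5.0},
    "s3": {"response_ms": 200, "reliability": 0.90, "price": 1.0},
}
prefs = {"response_ms": (0.5, "cost"), "reliability": (0.3, "benefit"), "price": (0.2, "cost")}
ranking = rank_services(services, prefs)
```

Picking the top few entries of `ranking` per task is one way to limit the candidate services considered during composition planning.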
Differential Evolution Algorithm with Diversified Vicinity Operator for Optimal Routing and Clustering of Energy Efficient Wireless Sensor Networks Thu, 01 Oct 2015 09:46:00 +0000 Due to the large number of clusters and the increasing number of sensor nodes, finding the optimal route and clustering for large wireless sensor networks (WSN) is highly complex and cumbersome. This paper proposes a new method to determine a reasonably better solution of the clustering and routing problem, with efficient energy consumption of the sensor nodes as the primary concern, for extending network lifetime. The proposed method is based on the Differential Evolution (DE) algorithm with an improved search operator called the Diversified Vicinity Procedure (DVP), which models a trade-off between energy consumption of the cluster heads and delay in forwarding the data packets. The route obtained by the proposed method from all the gateways to the base station is shorter in overall distance and requires fewer data forwards. Extensive numerical experiments demonstrate the superiority of the proposed method in managing energy consumption of the WSN, and the results are compared with other algorithms reported in the literature. Subramaniam Sumithra and T. Aruldoss Albert Victoire Copyright © 2015 Subramaniam Sumithra and T. Aruldoss Albert Victoire. All rights reserved. Predicting Defects Using Information Intelligence Process Models in the Software Technology Project Thu, 01 Oct 2015 09:26:42 +0000 A key differentiator in a competitive marketplace is customer satisfaction. As per a 2012 Gartner report, only 75%–80% of IT projects are successful. Customer satisfaction should be considered part of business strategy. The associated project parameters should be proactively managed and the project outcome needs to be predicted by a technical manager. There is a lot of focus on the end state and on minimizing defect leakage as much as possible.
Focus should instead be on proactive management and on shifting left in the software life cycle engineering model: identify problems upfront in the project cycle rather than waiting for lessons to be learnt and taking reactive steps. This paper demonstrates the practical applicability of predictive models and illustrates their use in a project to predict system testing defects, thus helping to reduce residual defects. Manjula Gandhi Selvaraj, Devi Shree Jayabal, Thenmozhi Srinivasan, and Palanisamy Balasubramanie Copyright © 2015 Manjula Gandhi Selvaraj et al. All rights reserved. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence Thu, 01 Oct 2015 09:25:32 +0000 Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy and poor-quality data. This paper clusters data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the number of clusters from the user. The PCM becomes similarity based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using the ant colony algorithm with swarm intelligence. The resulting scalable clustering technique is evaluated on synthetic datasets. Thenmozhi Srinivasan and Balasubramanie Palanisamy Copyright © 2015 Thenmozhi Srinivasan and Balasubramanie Palanisamy. All rights reserved. An Improved Differential Evolution Solution for Software Project Scheduling Problem Thu, 01 Oct 2015 09:23:12 +0000 This paper proposes a differential evolution (DE) method for the software project scheduling problem (SPSP). Finding a more efficient solution technique for the SPSP remains a topic of interest due to the ever-growing challenges faced by the software industry.
The curse of dimensionality is introduced into the scheduling problem by the ever-increasing number of software assignments and the staff who handle them. Thus the SPSP is an NP-hard problem, which requires a rigorous solution procedure that yields a reasonably good solution. Differential evolution is a direct-search stochastic optimization technique that is fairly fast and reasonably robust. It is also capable of handling nondifferentiable, nonlinear, and multimodal objective functions like the SPSP. This paper proposes a refined DE in which a new mutation mechanism is introduced. The superiority of the proposed method is demonstrated by solving the SPSP on 50 random instances, and the results are compared with techniques in the literature. A. C. Biju, T. Aruldoss Albert Victoire, and Kumaresan Mohanasundaram Copyright © 2015 A. C. Biju et al. All rights reserved. Energy Efficient Cluster Based Scheduling Scheme for Wireless Sensor Networks Thu, 01 Oct 2015 08:16:40 +0000 The energy utilization of sensor nodes in large-scale wireless sensor networks points to the crucial need for scalable and energy-efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of the network greatly depends on ideal usage of the energy remaining in these sensor nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances sensor network lifetime and energy efficiency. In the first phase of our proposed scheme, the cluster topology is discovered and the cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, a scheduling algorithm allocates time slots to cluster member data packets, so that congestion is avoided entirely.
In the third phase, an energy consumption model is proposed to maintain the maximum residual energy level across the network. Moreover, we also propose a new packet format which is given to all cluster member nodes. The simulation results show that the proposed scheme greatly contributes to maximum network lifetime, high residual energy, reduced overhead, and maximum delivery ratio. E. Srie Vidhya Janani and P. Ganesh Kumar Copyright © 2015 E. Srie Vidhya Janani and P. Ganesh Kumar. All rights reserved. Synchronous Firefly Algorithm for Cluster Head Selection in WSN Thu, 01 Oct 2015 06:27:25 +0000 A Wireless Sensor Network (WSN) consists of small, low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to a sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain, leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating the cluster head role among nodes with energy above a set threshold. CH selection in WSN is NP-hard, as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, the synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows that the proposed technique performs well compared to LEACH and energy-efficient hierarchical clustering (EEHC), decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network. Madhusudhanan Baskaran and Chitra Sadagopan Copyright © 2015 Madhusudhanan Baskaran and Chitra Sadagopan. All rights reserved.
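The LEACH-style probabilistic rotation mentioned above can be sketched with the standard LEACH threshold T(n) = p / (1 − p·(r mod 1/p)). The node model is a bare minimum; the bookkeeping that makes nodes ineligible after serving as CH within an epoch is omitted.

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r with desired CH fraction p."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elect_cluster_heads(nodes, p, r, rng):
    """Each node becomes CH for this round if a uniform draw falls below T(n)."""
    t = leach_threshold(p, r)
    return [n for n in nodes if rng.random() < t]

# 100 hypothetical nodes, 5% desired cluster heads, round 0
rng = random.Random(42)
heads = elect_cluster_heads(list(range(100)), p=0.05, r=0, rng=rng)
```

Note how the threshold rises as the epoch progresses (e.g. T = 0.05 at r = 0 but 0.1 at r = 10 for p = 0.05), which is what forces the CH role to rotate among the remaining eligible nodes.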
Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier Wed, 30 Sep 2015 14:24:45 +0000 Medical data classification is a prime data mining problem that has been discussed for a decade and has attracted several researchers around the world. Most classifiers are designed to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm. This paradigm integrates the successful exploration mechanism called the self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent offline learning method, the ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden-layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden-layer neurons and further improving the network generalization performance. The proposed method is evaluated on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance compared to the results of other classifiers. C. V. Subbulakshmi and S. N. Deepa Copyright © 2015 C. V. Subbulakshmi and S. N. Deepa. All rights reserved. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch Wed, 30 Sep 2015 14:23:52 +0000 The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear with equality and inequality constraints, which makes it hard to solve efficiently.
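The ELM training step that PSO would tune, as described in the PSO-ELM abstract above, fits in a few lines: hidden-layer weights are drawn at random and never trained, and only the output weights are solved for analytically. This pure-Python sketch uses ridge-regularized normal equations in place of the Moore-Penrose pseudo-inverse and a toy XOR-like task; the PSO outer loop that selects the ELM parameters is not shown.

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for the square system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_train(X, y, hidden=25, ridge=1e-6, seed=0):
    """ELM: random fixed hidden weights; output weights from
    ridge-regularized least squares, beta = (H'H + rI)^-1 H'y."""
    rng = random.Random(seed)
    W = [[rng.gauss(0, 1) for _ in range(hidden)] for _ in range(len(X[0]))]
    b = [rng.gauss(0, 1) for _ in range(hidden)]
    def hidden_out(x):
        return [math.tanh(sum(x[d] * W[d][j] for d in range(len(x))) + b[j])
                for j in range(hidden)]
    H = [hidden_out(x) for x in X]
    HtH = [[sum(row[i] * row[j] for row in H) + (ridge if i == j else 0.0)
            for j in range(hidden)] for i in range(hidden)]
    Hty = [sum(H[k][i] * y[k] for k in range(len(y))) for i in range(hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(h * bt for h, bt in zip(hidden_out(x), beta))

# Toy task: label is 1 when the two features agree in sign (XOR-like)
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [1.0 if x1 * x2 > 0 else 0.0 for x1, x2 in X]
predict = elm_train(X, y)
acc = sum((predict(x) > 0.5) == (t > 0.5) for x, t in zip(X, y)) / len(X)
```

A PSO wrapper would treat `hidden` (and possibly the random seed or input weights) as particle coordinates and use validation accuracy as the fitness.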
This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to balance exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. M. Karthikeyan and T. Sree Ranga Raja Copyright © 2015 M. Karthikeyan and T. Sree Ranga Raja. All rights reserved. Information Retrieval and Graph Analysis Approaches for Book Recommendation Wed, 30 Sep 2015 13:34:58 +0000 A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network composed of social links. We call a network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track.
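The DHSPM improvisation loop described above can be sketched as follows. The linear HMCR/PAR schedules and the polynomial-mutation distribution index are illustrative guesses, not the paper's settings, and a simple sphere function stands in for the valve-point ELD cost.

```python
import random

def dhspm_minimize(f, dim, bounds=(-5.0, 5.0), hms=10, iters=500, seed=3):
    """Harmony search with dynamically varied HMCR/PAR and polynomial mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in mem]
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters   # dynamic instead of fixed
        par = 0.45 - 0.35 * t / iters
        new = []
        for d in range(dim):
            if rng.random() < hmcr:     # pick from harmony memory...
                v = mem[rng.randrange(hms)][d]
                if rng.random() < par:  # ...then pitch-adjust by polynomial mutation
                    eta, u = 20.0, rng.random()
                    if u < 0.5:
                        delta = (2 * u) ** (1 / (eta + 1)) - 1
                    else:
                        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
                    v = min(hi, max(lo, v + delta * (hi - lo)))
            else:                       # otherwise improvise a fresh value
                v = rng.uniform(lo, hi)
            new.append(v)
        worst = max(range(hms), key=lambda j: fit[j])
        fn = f(new)
        if fn < fit[worst]:             # replace the worst harmony if improved
            mem[worst], fit[worst] = new, fn
    best = min(range(hms), key=lambda j: fit[j])
    return mem[best], fit[best]

best, cost = dhspm_minimize(lambda x: sum(v * v for v in x), dim=3)
```

For a real ELD case the decision vector would hold the unit power outputs and `f` would add penalty terms for the demand-balance and generation-limit constraints.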
A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments. Chahinez Benkoussas and Patrice Bellot Copyright © 2015 Chahinez Benkoussas and Patrice Bellot. All rights reserved. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method Wed, 30 Sep 2015 13:33:41 +0000 Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. A modified fuzzy VIKOR is then proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of results by the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is efficient. Secondly, it offers a clear advantage in acceptability over the fuzzy VIKOR method. Rasim M. Alguliyev, Ramiz M. Aliguliyev, and Rasmiyya S. Mahmudova Copyright © 2015 Rasim M. Alguliyev et al. All rights reserved.
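The VIKOR core that the modified fuzzy method builds on computes, for each alternative, the group utility S, the individual regret R, and the compromise index Q. A crisp sketch (fuzzy scores assumed already defuzzified; the candidate data and weights are invented) is:

```python
def vikor_rank(matrix, weights, v=0.5):
    """Crisp VIKOR: rank alternatives by the compromise index Q.
    Rows are alternatives, columns are benefit criteria (higher is better);
    assumes the alternatives are not all identical on any criterion."""
    ncrit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(ncrit)]
    worst = [min(row[j] for row in matrix) for j in range(ncrit)]
    S, R = [], []
    for row in matrix:
        # weighted normalized distance from the ideal value, per criterion
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(ncrit)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [v * (S[i] - s_min) / (s_max - s_min)
         + (1 - v) * (R[i] - r_min) / (r_max - r_min)
         for i in range(len(matrix))]
    return sorted(range(len(matrix)), key=lambda i: Q[i])  # lowest Q ranks first

# Three hypothetical candidates scored on three benefit criteria
scores = [[7.0, 8.0, 6.0],
          [9.0, 6.0, 7.0],
          [5.0, 7.0, 9.0]]
order = vikor_rank(scores, weights=[0.4, 0.35, 0.25])
```

The weight `v` balances majority utility against worst-case regret; the fuzzy variants replace the crisp scores with triangular fuzzy numbers before this step.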
Traffic and Driving Simulator Based on Architecture of Interactive Motion Wed, 30 Sep 2015 13:32:29 +0000 This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. Alexander Paz, Naveen Veeramisti, Romesh Khaddar, Hanns de la Fuente-Mella, and Luiza Modorcea Copyright © 2015 Alexander Paz et al. All rights reserved. 
Investigating IT Faculty Resistance to Learning Management System Adoption Using Latent Variables in an Acceptance Technology Model Wed, 30 Sep 2015 09:44:26 +0000 To enhance instruction in higher education, many universities in the Middle East have chosen to introduce learning management systems (LMS) to their institutions. However, this new educational technology is not being used at its full potential and faces resistance from faculty members. To investigate this phenomenon, we conducted an empirical research study to uncover factors influencing faculty members’ acceptance of LMS. Thus, in the Fall semester of 2014, Information Technology faculty members were surveyed to better understand their perceptions of the incorporation of LMS into their courses. The results showed that personal factors such as motivation, load anxiety, and organizational support play important roles in the perception of the usefulness of LMS among IT faculty members. These findings suggest adding these constructs to extend the Technology Acceptance Model (TAM) for LMS acceptance, which can help stakeholders of the university to implement the use of this system. This may assist in planning and evaluating the use of e-learning. Fatiha Bousbahi and Muna Saleh Alrazgan Copyright © 2015 Fatiha Bousbahi and Muna Saleh Alrazgan. All rights reserved. A New Arbiter PUF for Enhancing Unpredictability on FPGA Wed, 30 Sep 2015 07:13:43 +0000 In general, conventional Arbiter-based Physically Unclonable Functions (PUFs) generate responses with low unpredictability. The n-XOR Arbiter PUF, proposed in 2007, is a well-known technique for improving this unpredictability. In this paper, we propose a novel design for the Arbiter PUF, called the Double Arbiter PUF, to enhance unpredictability on field programmable gate arrays (FPGAs), and we compare our design to conventional n-XOR Arbiter PUFs.
One metric for judging the unpredictability of responses is their tolerance to machine-learning attacks. Although our previous work showed the superiority of Double Arbiter PUFs regarding unpredictability, the details were not clarified. We evaluate the dependency on the number of training samples for machine learning, and we discuss why Double Arbiter PUFs are more tolerant than n-XOR Arbiter PUFs by evaluating intrachip variation. Further, the conventional Arbiter PUFs and proposed Double Arbiter PUFs are evaluated according to other metrics, namely, their uniqueness, randomness, and steadiness. We demonstrate that the 3-1 Double Arbiter PUF achieves the best performance overall. Takanori Machida, Dai Yamamoto, Mitsugu Iwamoto, and Kazuo Sakiyama Copyright © 2015 Takanori Machida et al. All rights reserved. Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol Tue, 29 Sep 2015 13:50:05 +0000 Passive radio-frequency identification (RFID) tags are used in many applications. While the RFID market is expected to grow, concerns about the security and privacy of RFID tags must be overcome for future use. To overcome these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of the whole tag running such authentication protocols, including an antenna, an analog front end, and a digital processing block, has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength.
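The additive delay model commonly used to analyze Arbiter PUFs, and the n-XOR composition discussed in the PUF abstract above, can be simulated in software. This sketch is one common formulation, not the authors' FPGA design: stage delay differences are drawn from a Gaussian, and the Double Arbiter structure itself is not modeled.

```python
import random

def make_arbiter_puf(stages=64, seed=7):
    """Additive delay model: each stage contributes a signed delay difference;
    the arbiter outputs 1 when the accumulated difference is positive."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(stages)]
    def respond(challenge):
        total = 0.0
        for i in range(stages):
            # the contribution of stage i is flipped by every later crossing switch
            sign = 1
            for bit in challenge[i:]:
                sign = -sign if bit else sign
            total += sign * w[i]
        return 1 if total > 0 else 0
    return respond

def xor_puf(chains, challenge):
    """n-XOR Arbiter PUF: XOR the responses of n independent arbiter chains."""
    r = 0
    for chain in chains:
        r ^= chain(challenge)
    return r

# Three independent chains form a 3-XOR Arbiter PUF
chains = [make_arbiter_puf(seed=s) for s in (1, 2, 3)]
rng = random.Random(0)
challenge = [rng.randrange(2) for _ in range(64)]
response = xor_puf(chains, challenge)
```

Because each chain is a linear threshold function of the transformed challenge, a single chain is easy to learn; XORing chains is what raises the machine-learning attack cost.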
We show that when the lightweight hash function is used, the tag completes the protocol with a reader-tag distance of 10 cm. Similarly, when the standard hash function is used, the tag completes the protocol at a distance of 8.5 cm. We discuss the impact of the tag’s peak power consumption, which depends on the hash function, on the communication distance. Shugo Mikami, Dai Watanabe, Yang Li, and Kazuo Sakiyama Copyright © 2015 Shugo Mikami et al. All rights reserved. An Efficient Feature Subset Selection Algorithm for Classification of Multidimensional Dataset Mon, 28 Sep 2015 09:19:00 +0000 Multidimensional medical data classification has recently received increased attention from researchers working on machine learning and data mining. In a multidimensional dataset (MDD) each instance is associated with multiple class values. Due to its complex nature, feature selection and building a classifier from the MDD are typically more expensive and time-consuming. Therefore, we need a robust feature selection technique for selecting the optimum single subset of the features of the MDD for further analysis or to design a classifier. In this paper, an efficient feature selection algorithm is proposed for the classification of MDD. The proposed multidimensional feature subset selection (MFSS) algorithm yields a unique feature subset for further analysis or to build a classifier, and it has a computational advantage on MDD compared with existing feature selection algorithms. The proposed work is applied to benchmark multidimensional datasets. The proposed MFSS reduced the number of features to between 3% and 30% of the original. In conclusion, the study results show that MFSS is an efficient feature selection algorithm that does not affect classification accuracy even with the reduced number of features.
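A hash-based mutual authentication exchange of the general kind the RFID paper above evaluates can be sketched as follows. SHA-256 stands in for the paper's lightweight/standard hash functions, and the message layout, nonce sizes, and database-scan identification step are illustrative assumptions rather than the paper's protocol.

```python
import hashlib, os

def h(*parts):
    """Concatenating hash; a stand-in for the tag's hash primitive."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

class Tag:
    def __init__(self, secret):
        self.secret = secret
    def respond(self, reader_nonce):
        # prove knowledge of the secret without revealing a static identifier
        self.nonce = os.urandom(8)
        return self.nonce, h(self.secret, reader_nonce, self.nonce)

class Reader:
    def __init__(self, db):
        self.db = db  # set of known tag secrets
    def authenticate(self, tag):
        nr = os.urandom(8)
        nt, proof = tag.respond(nr)
        for secret in self.db:  # identify the tag by scanning known secrets
            if h(secret, nr, nt) == proof:
                return h(nt, secret)  # reader's proof back to the tag (mutual auth)
        return None  # unknown or forged tag

secret = os.urandom(16)
reader = Reader({secret})
tag = Tag(secret)
reader_proof = reader.authenticate(tag)
```

The fresh nonces on both sides prevent replay, and because each response depends on both nonces, an eavesdropper cannot link two sessions of the same tag, which is the privacy property such protocols target.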
The proposed MFSS algorithm is also suitable for both problem transformation and algorithm adaptation, and it has great potential in applications that generate multidimensional datasets. Senthilkumar Devaraj and S. Paulraj Copyright © 2015 Senthilkumar Devaraj and S. Paulraj. All rights reserved. Exploiting Small Leakages in Masks to Turn a Second-Order Attack into a First-Order Attack and Improved Rotating Substitution Box Masking with Linear Code Cosets Mon, 28 Sep 2015 09:05:03 +0000 Masking countermeasures, used to thwart side-channel attacks, have been shown to be vulnerable to mask-extraction attacks. State-of-the-art mask-extraction attacks on the Advanced Encryption Standard (AES) algorithm target S-Box recomputation schemes but have not been applied to scenarios where S-Boxes are precomputed offline. We propose an attack targeting precomputed S-Boxes stored in nonvolatile memory. Our attack targets AES implemented in software protected by a low-entropy masking scheme and recovers the masks with a 91% success rate. Recovering the secret key requires fewer power traces (by at least two orders of magnitude) compared to a classical second-order attack. Moreover, we show that this attack remains viable in a noisy environment or with a reduced number of leakage points. Finally, we specify a method to enhance the countermeasure by selecting a suitable coset of the masks set. Alexander DeTrano, Naghmeh Karimi, Ramesh Karri, Xiaofei Guo, Claude Carlet, and Sylvain Guilley Copyright © 2015 Alexander DeTrano et al. All rights reserved. Privacy Preserved and Secured Reliable Routing Protocol for Wireless Mesh Networks Mon, 21 Sep 2015 09:28:37 +0000 Privacy preservation and security provision against internal attacks in wireless mesh networks (WMNs) are more demanding than in wired networks due to the open nature and mobility of certain nodes in the network. Several schemes have been proposed to preserve privacy and provide security in WMNs.
To provide complete privacy protection in WMNs, the properties of unobservability, unlinkability, and anonymity must be ensured during route discovery. These properties can be achieved by implementing group signature and ID-based encryption schemes during route discovery. Due to their characteristics, WMNs are more vulnerable to many network layer attacks. Hence strong protection is needed to avoid these attacks; this can be achieved by introducing a new Cross-Layer and Subject Logic based Dynamic Reputation (CLSL-DR) mechanism during route discovery. In this paper, we propose a new Privacy preserved and Secured Reliable Routing (PSRR) protocol for WMNs. This protocol incorporates group signature, ID-based encryption schemes, and the CLSL-DR mechanism to ensure strong privacy, security, and reliability in WMNs. Simulation results confirm this, showing better performance than the existing protocols on most of the chosen parameters. Navamani Thandava Meganathan and Yogesh Palanichamy Copyright © 2015 Navamani Thandava Meganathan and Yogesh Palanichamy. All rights reserved. Biologically Motivated Novel Localization Paradigm by High-Level Multiple Object Recognition in Panoramic Images Thu, 17 Sep 2015 12:07:19 +0000 This paper presents a novel paradigm for a global localization method motivated by human visual systems (HVSs). HVSs actively use object recognition results for self-position localization and viewing-direction estimation. The proposed localization paradigm consists of three parts: panoramic image acquisition, multiple object recognition, and grid-based localization. Multiple object recognition information from panoramic images is utilized in the localization part. High-level object information is useful not only for global localization but also for robot-object interactions.
Metric global localization (position and viewing direction) is conducted based on the bearing information of recognized objects from just one panoramic image. The feasibility of the novel localization paradigm was validated experimentally. Sungho Kim and Min-Sheob Shim Copyright © 2015 Sungho Kim and Min-Sheob Shim. All rights reserved. On Constructing Dynamic and Forward Secure Authenticated Group Key Agreement Scheme from Multikey Encapsulation Mechanism Wed, 16 Sep 2015 13:45:08 +0000 The approach of instantiating an authenticated group key exchange (GAKE) protocol from the multikey encapsulation mechanism (mKEM) has the important advantage of achieving the classical requirements of GAKE security in one communication round. In spite of the limitations of this approach, for example, lack of forward secrecy, it is very useful in group environments when maximum communication efficiency is desirable. To enrich this mKEM-based GAKE construction, we suggest an efficient solution to convert this static GAKE framework into a partially dynamic scheme. Furthermore, to address the associated lack of forward secrecy, we propose two variants of this generic construction which can also provide forward secrecy at the cost of an extra communication round. In addition, concerning the implementation cost of deploying this generic GAKE construction in an elliptic curve cryptosystem, we compare the possible instantiations of this model from existing mKEM algorithms in terms of the number of elliptic curve scalar multiplications. Iraj Fathirad and John Devlin Copyright © 2015 Iraj Fathirad and John Devlin. All rights reserved. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter Tue, 15 Sep 2015 09:09:03 +0000 Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet.
Being an adaptable technology, cloud computing is an excellent option for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and giving guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with existing methods, like the V-MCT and priority scheduling algorithms. Shyamala Loganathan and Saswati Mukherjee Copyright © 2015 Shyamala Loganathan and Saswati Mukherjee. All rights reserved. Lightweight Adaptation of Classifiers to Users and Contexts: Trends of the Emerging Domain Thu, 10 Sep 2015 13:28:24 +0000 Intelligent computer applications need to adapt their behaviour to contexts and users, but conventional classifier adaptation methods require long data collection and/or training times. Therefore classifier adaptation is often performed as follows: at design time application developers define typical usage contexts and provide reasoning models for each of these contexts, and then at runtime an appropriate model is selected from the available ones. Typically, the definition of usage contexts and reasoning models relies heavily on domain knowledge. However, in practice many applications are used in such diverse situations that no developer can predict them all and collect adequate training and test databases for each situation.
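A scheduling decision that considers both the job type and resource availability, as in the cloud scheduling abstract above, can be sketched with a best-fit rule. The host table, job fields, and capacity units are invented for illustration; the authors' data structure and algorithm are not reproduced.

```python
def best_fit_host(hosts, job):
    """Pick the host whose free capacity most tightly fits the job (best-fit),
    restricted to hosts that support the job's type."""
    feasible = [(name, free) for name, (free, kinds) in hosts.items()
                if job["cpu"] <= free and job["type"] in kinds]
    if not feasible:
        return None  # no host can serve this job right now
    # smallest leftover capacity wins, keeping large hosts free for large jobs
    return min(feasible, key=lambda nf: nf[1] - job["cpu"])[0]

# Hypothetical hosts: (free CPU units, supported job types)
hosts = {
    "h1": (8, {"batch", "interactive"}),
    "h2": (4, {"batch"}),
    "h3": (16, {"interactive"}),
}
chosen = best_fit_host(hosts, {"cpu": 4, "type": "batch"})
```

In a full scheduler the host table would be refreshed by the resource monitor, and the chosen host's free capacity would be decremented when the VM is placed.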
Such applications have to adapt to a new user or unknown context at runtime just from interaction with the user, preferably in fairly lightweight ways, that is, requiring limited user effort to collect training data and limited time to perform the adaptation. This paper analyses adaptation trends in several emerging domains and outlines promising ideas proposed for making multimodal classifiers user-specific and context-specific without significant user effort, detailed domain knowledge, and/or complete retraining of the classifiers. Based on this analysis, the paper identifies important application characteristics and presents guidelines to consider these characteristics in adaptation design. Elena Vildjiounaite, Georgy Gimel’farb, Vesa Kyllönen, and Johannes Peltola Copyright © 2015 Elena Vildjiounaite et al. All rights reserved. Novel Strategy to Improve the Performance of Localization in WSN Mon, 07 Sep 2015 11:33:06 +0000 A novel discrete energy consumption model for WSNs, based on the quasi Monte Carlo and crude Monte Carlo methods, is developed. In our model the discrete hidden Markov process plays a major role in analyzing node locations in heterogeneous media. In this energy consumption model we use both static and dynamic sensor nodes to monitor the optimized energy of all sensor nodes, in which every sensor state can be considered a dynamic Bayesian network. Using this method, power is assigned dynamically to each sensor over discrete time steps to control the graphical structure of our network. Simulation and experimental results show that the proposed methods achieve better localization accuracy and lower computational time than existing localization methods. M. Vasim Babu and A. V. Ramprasad Copyright © 2015 M. Vasim Babu and A. V. Ramprasad. All rights reserved.
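As an illustration of the crude Monte Carlo estimation that underlies localization schemes such as the one above, the following minimal sketch estimates a node position from noisy range measurements to anchor nodes. It is a sketch under assumed conditions, not the authors' algorithm: the anchors, the 10x10 deployment area, and the Gaussian range-noise model are all hypothetical.

```python
import math
import random

def monte_carlo_localize(anchors, measured_dists, n_samples=20000, sigma=0.5, seed=1):
    """Crude Monte Carlo localization: sample candidate positions uniformly
    over a 10x10 deployment area, weight each by the Gaussian likelihood of
    the measured anchor ranges, and return the weighted mean position."""
    rng = random.Random(seed)
    est_x = est_y = total_w = 0.0
    for _ in range(n_samples):
        x, y = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
        w = 1.0
        for (ax, ay), d in zip(anchors, measured_dists):
            err = math.hypot(x - ax, y - ay) - d
            w *= math.exp(-(err * err) / (2.0 * sigma * sigma))
        est_x += w * x
        est_y += w * y
        total_w += w
    return est_x / total_w, est_y / total_w

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical anchor nodes
true_pos = (3.0, 4.0)
dists = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
est = monte_carlo_localize(anchors, dists)
```

A quasi Monte Carlo variant would replace the uniform random samples with a low-discrepancy sequence; the likelihood-weighting step stays the same.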
Game Theory Based Trust Model for Cloud Environment Tue, 25 Aug 2015 12:44:30 +0000 The aim of this work is to propose a method to establish trust at the bootstrap level in a cloud computing environment. This work proposes a game-theoretic approach for achieving trust at the bootstrap level from the perspective of both resources and users. Nash equilibrium (NE) enhances the trust evaluation of first-time users and providers. It also restrains service providers and users from violating the service level agreement (SLA). Notably, the cold start and whitewashing problems are addressed by the proposed method. In addition, appropriate mapping of cloud users’ applications to cloud service providers for segregating trust levels is achieved as part of the mapping. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics such as execution time, accuracy, error identification, and undecidability of the resources were considered. K. Gokulnath and Rhymend Uthariaraj Copyright © 2015 K. Gokulnath and Rhymend Uthariaraj. All rights reserved. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud Tue, 25 Aug 2015 08:53:37 +0000 Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, facilitating third-party infrastructure and applications. While customers have no visibility into how their data is stored on the service provider’s premises, cloud computing offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns.
In the pursuit of increasing data utilization on public cloud storage, the key is to enable effective data access through fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a B-tree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization. Shyamala Devi Munisamy and Arun Chokkalingam Copyright © 2015 Shyamala Devi Munisamy and Arun Chokkalingam. All rights reserved. Recent Advances in General Game Playing Mon, 24 Aug 2015 11:47:34 +0000 The goal of General Game Playing (GGP) has been to develop computer programs that can perform well across various game types. It is natural for human game players to transfer knowledge from games they already know how to play to other similar games. GGP research attempts to design systems that work well across different game types, including unknown new games. In this review, we present a survey of recent advances (2011 to 2014) in GGP for both traditional games and video games. It is notable that research on GGP has been expanding into modern video games. Monte-Carlo Tree Search and its enhancements have been the most influential techniques in GGP for both research domains. Additionally, international competitions have become important events that promote and increase GGP research. Recently, a video GGP competition was launched. In this survey, we review recent progress in the most challenging research areas of Artificial Intelligence (AI) related to universal game playing.
Maciej Świechowski, HyunSoo Park, Jacek Mańdziuk, and Kyung-Joong Kim Copyright © 2015 Maciej Świechowski et al. All rights reserved. A Dynamic Intrusion Detection System Based on Multivariate Hotelling’s T² Statistics Approach for Network Environments Tue, 18 Aug 2015 10:05:13 +0000 The ever-expanding communication requirements in today’s world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment therefore assumes paramount importance. Attempts are continuously made to design more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling’s T² method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling’s T² statistical model, and the necessary profiles have been generated based on the T² distance metric. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified as either normal or attack types. The performance of the model, as evaluated through validation and testing using the KDD Cup’99 dataset, has shown very high detection rates for all classes with low false alarm rates. The accuracy of the model presented in this work has been found to be much better than that of existing models. Aneetha Avalappampatty Sivasamy and Bose Sundan Copyright © 2015 Aneetha Avalappampatty Sivasamy and Bose Sundan. All rights reserved.
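The core of such a detector is the T² distance itself: the squared Mahalanobis-type distance of an observation from a baseline traffic profile. A minimal sketch follows; the baseline mean, covariance, and threshold are illustrative numbers, not values from the paper, which derives its threshold range via the central limit theorem.

```python
def hotelling_t2(x, mean, cov):
    """T² distance of a 2-D observation x from a baseline traffic profile:
    T² = (x - mean)^T * inv(cov) * (x - mean), with cov a 2x2 matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # closed-form 2x2 inverse
    dx = (x[0] - mean[0], x[1] - mean[1])
    y = (inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1])
    return dx[0] * y[0] + dx[1] * y[1]

# Hypothetical baseline profile of normal traffic (e.g. packets/s, connections/s)
mean = (100.0, 50.0)
cov = ((25.0, 5.0), (5.0, 16.0))
threshold = 13.8  # in practice obtained from the statistic's distribution

normal_score = hotelling_t2((103.0, 52.0), mean, cov)   # close to the profile
attack_score = hotelling_t2((160.0, 120.0), mean, cov)  # far from the profile
```

An observation is flagged as an attack when its T² score exceeds the threshold; here the near-baseline observation scores well below it and the anomalous one far above.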
Prediction of Domain Behavior through Dynamic Well-Being Domain Model Analysis Mon, 17 Aug 2015 12:00:57 +0000 As the concept of context-awareness becomes more popular, the demand for improved quality of context-aware systems increases too. Due to the inherent challenges posed by context-awareness, it is harder to predict what the behavior of such systems and their context will be once provided to the end user than is the case for non-context-aware systems. A domain where such upfront knowledge is highly important is that of well-being. In this paper, we introduce a method to model the well-being domain and to predict the effects the system will have on its context when implemented. This analysis can be performed at design time. Using these predictions, the design can be fine-tuned to increase the chance that systems will have the desired effect. The method has been tested using three existing well-being applications. For these applications, domain models were created in the Dynamic Well-being Domain Model language. This language allows for causal reasoning over the application domain. The models created were used to perform the analysis and behavior prediction. The analysis results were compared to existing application end-user evaluation studies. Results showed that our analysis could accurately predict success and possible problems in the focus of the systems, although certain limitations regarding the predictions should be taken into consideration. Steven Bosems and Marten van Sinderen Copyright © 2015 Steven Bosems and Marten van Sinderen. All rights reserved. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow Mon, 17 Aug 2015 06:59:40 +0000 Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted much attention since its appearance.
Nowadays, improving the embedding capacity and eliminating the underflow and overflow situations, which are awkward and difficult to deal with, have become hot topics for researchers. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced to some extent. Motivated by these concerns, we propose a novel scheme in which we use differential coding, Huffman coding, and data conversion to compress the secret image before embedding it, to further improve the embedding capacity, and the pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme can further improve the embedding capacity and eliminate the underflow and overflow situations at the same time. Liaojun Pang, Deyu Miao, Huixian Li, and Qiong Wang Copyright © 2015 Liaojun Pang et al. All rights reserved. A Customizable Quantum-Dot Cellular Automata Building Block for the Synthesis of Classical and Reversible Circuits Sun, 09 Aug 2015 13:48:28 +0000 Quantum-dot cellular automata (QCA) are nanoscale digital logic constructs that use electrons in arrays of quantum dots to carry out binary operations. In this paper, a basic building block for QCA is proposed. The proposed basic building block can be customized to implement classical gates, such as XOR and XNOR gates, and reversible gates, such as CNOT and Toffoli gates, with lower cell count and/or better latency than other proposed designs. Ahmed Moustafa, Ahmed Younes, and Yasser F. Hassan Copyright © 2015 Ahmed Moustafa et al. All rights reserved.
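In QCA, the fundamental logic primitive is the three-input majority gate, and classical gates are conventionally obtained by fixing one of its inputs. The sketch below models only this logic-level behavior, not the cell layouts, cell counts, or latency that the paper's building block optimizes; the XOR decomposition shown is one standard construction, not necessarily the paper's.

```python
def majority(a, b, c):
    """Three-input majority vote, the basic QCA logic primitive."""
    return (a & b) | (b & c) | (a & c)

def and_gate(a, b):
    return majority(a, b, 0)  # fixing one input to 0 turns majority into AND

def or_gate(a, b):
    return majority(a, b, 1)  # fixing one input to 1 turns majority into OR

def xor_gate(a, b):
    # XOR composed from majority gates plus inversion
    return or_gate(and_gate(a, b ^ 1), and_gate(a ^ 1, b))

# Full truth table of the composed XOR gate
table = [(a, b, xor_gate(a, b)) for a in (0, 1) for b in (0, 1)]
```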
Computer Intelligence in Modeling, Prediction, and Analysis of Complex Dynamical Systems Wed, 05 Aug 2015 08:23:29 +0000 Ivan Zelinka, Ajith Abraham, Otto Rossler, Mohammed Chadli, and Rene Lozi Copyright © 2015 Ivan Zelinka et al. All rights reserved. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System Tue, 04 Aug 2015 11:32:17 +0000 The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the computational complexity of the system. This step is necessary especially for systems that will be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide usability in today’s automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. The classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and groups of features for stress detection in human speech. The research contribution lies in the design of the speech emotion recognition system with respect to its accuracy and efficiency. Pavol Partila, Miroslav Voznak, and Jaromir Tovarek Copyright © 2015 Pavol Partila et al. All rights reserved. Time Evolution of Initial Errors in Lorenz’s 05 Chaotic Model Tue, 04 Aug 2015 11:24:10 +0000 Initial errors in weather prediction grow in time and, as they become larger, their growth slows down and then stops at an asymptotic value. The time of reaching this saturation point represents the limit of predictability.
This paper studies the asymptotic values and time limits in a chaotic atmospheric model for five initial errors, using the ensemble prediction method (model data) as well as error approximation by the quadratic and logarithmic hypotheses and their modifications. We show that the modified hypotheses approximate the model’s time limits better, but not without serious disadvantages. We demonstrate how the hypotheses can be further improved to achieve a better match of time limits with the model. We also show that the quadratic hypothesis approximates the model’s asymptotic value best and that, after improvement, it also approximates the model’s time limits better for almost all initial errors and time lengths. Hynek Bednář, Aleš Raidl, and Jiří Mikšovský Copyright © 2015 Hynek Bednář et al. All rights reserved. Advanced Approach of Multiagent Based Buoy Communication Tue, 04 Aug 2015 11:23:35 +0000 Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of an inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information.
Gediminas Gricius, Darius Drungilas, Arunas Andziulis, Dale Dzemydiene, Miroslav Voznak, Mindaugas Kurmis, and Sergej Jakovlev Copyright © 2015 Gediminas Gricius et al. All rights reserved. Nonlinear versus Ordinary Adaptive Control of Continuous Stirred-Tank Reactor Tue, 04 Aug 2015 11:13:21 +0000 Unfortunately, most systems in industry exhibit nonlinear behavior, and controlling such processes with conventional fixed-parameter control approaches causes problems and suboptimal or unstable control results. Adaptive control is one way to cope with the nonlinearity of the system. This contribution compares classic adaptive control and its modification with a Wiener system. This configuration divides the nonlinear controller into a dynamic linear part and a static nonlinear part. The dynamic linear part is constructed with the use of polynomial synthesis together with the pole-placement method and spectral factorization. The static nonlinear part uses static analysis of the controlled plant to introduce a mathematical nonlinear description of the relation between the controlled output and the change of the control input. The proposed controller is tested by simulations on the mathematical model of a continuous stirred-tank reactor with cooling in the jacket as a typical nonlinear system. Jiri Vojtesek and Petr Dostal Copyright © 2015 Jiri Vojtesek and Petr Dostal. All rights reserved. ASM Based Synthesis of Handwritten Arabic Text Pages Thu, 30 Jul 2015 15:57:49 +0000 Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development.
This is especially true for the case of Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available. Laslo Dinges, Ayoub Al-Hamadi, Moftah Elzobi, Sherif El-etriby, and Ahmed Ghoneim Copyright © 2015 Laslo Dinges et al. All rights reserved. Fusion of Heterogeneous Intrusion Detection Systems for Network Attack Detection Wed, 29 Jul 2015 16:08:59 +0000 An intrusion detection system (IDS) helps to identify different types of attacks in general, and the detection rate will be higher for some specific categories of attacks. This paper is designed on the idea that each IDS is efficient in detecting a specific type of attack. In the proposed Multiple IDS Unit (MIU), there are five IDS units, and each IDS follows a unique algorithm to detect attacks. Feature selection is done with the help of a genetic algorithm. The selected features of the input traffic are passed on to the MIU for processing. The decision from each IDS is termed a local decision.
The fusion unit inside the MIU processes all the local decisions with the help of the majority voting rule and makes the final decision. The proposed system shows a very good improvement in detection rate and reduces the false alarm rate. Jayakumar Kaliappan, Revathi Thiagarajan, and Karpagam Sundararajan Copyright © 2015 Jayakumar Kaliappan et al. All rights reserved. An Adaptive Multilevel Security Framework for the Data Stored in Cloud Environment Thu, 16 Jul 2015 10:35:10 +0000 Cloud computing is renowned for delivering information technology services based on the internet. Nowadays, organizations are interested in moving their massive data and computations into the cloud to reap the significant benefits of on-demand service, resource pooling, and rapid elasticity, which help to satisfy dynamically changing infrastructure demand without the burden of owning, managing, and maintaining it. Since the data needs to be secured throughout its life cycle, security of the data in the cloud is a major challenge to concentrate on, because the data resides on a third party’s premises. Any uniform simple or high-level security method for all the data either compromises the sensitive data or proves to be too costly with increased overhead. Any common multiple method for all data becomes vulnerable when the common security pattern is identified in the event of a successful attack on any information, and it also encourages more attacks on all the other data. This paper suggests an adaptive multilevel security framework based on cryptography techniques that provides adequate security for the classified data stored in the cloud. The proposed security system acclimates well to the cloud environment and is also customizable and more reliable in meeting the required level of security for data with different sensitivities, which changes with business needs and commercial conditions. Sudha Devi Dorairaj and Thilagavathy Kaliannan Copyright © 2015 Sudha Devi Dorairaj and Thilagavathy Kaliannan. All rights reserved.
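The majority-voting fusion step described for the MIU above reduces, in essence, to counting the per-IDS labels and taking the most frequent one. A minimal sketch follows; the five local decisions and their labels are hypothetical, and tie-breaking is left to whatever order the counter reports.

```python
from collections import Counter

def fuse_decisions(local_decisions):
    """Majority-voting fusion: each IDS unit contributes one local decision
    (a label for the observed traffic); the most frequent label wins."""
    label, _count = Counter(local_decisions).most_common(1)[0]
    return label

# Hypothetical local decisions from five IDS units
final_decision = fuse_decisions(["dos", "dos", "normal", "dos", "probe"])
```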
Theory and Application on Rough Set, Fuzzy Logic, and Granular Computing Thu, 16 Jul 2015 07:49:58 +0000 Xibei Yang, Weihua Xu, and Yanhong She Copyright © 2015 Xibei Yang et al. All rights reserved. Detecting and Preventing Sybil Attacks in Wireless Sensor Networks Using Message Authentication and Passing Method Sun, 05 Jul 2015 10:26:14 +0000 Security is highly indispensable for protecting wireless sensor networks. Highly critical attacks of various kinds have been documented in wireless sensor networks by many researchers. The Sybil attack is a massively destructive attack against the sensor network in which numerous forged identities are used alongside genuine ones to gain illegal entry into the network. Discerning Sybil, sinkhole, and wormhole attacks while multicasting is a tremendous job in a wireless sensor network. Basically, a Sybil attack involves a node that fakes its identity to other nodes. Communication with an illegal node results in data loss and becomes dangerous in the network. The existing Random Password Comparison method merely verifies node identities by analyzing the neighbors. A survey was done on the Sybil attack with the objective of resolving this problem. The survey has proposed a combination of CAM-PVM (compare and match-position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes into the network. We propose a scheme assuring security for wireless sensor networks, to deal with attacks of these kinds in unicasting and multicasting. Udaya Suriya Raj Kumar Dhamodharan and Rajamani Vayanaperumal Copyright © 2015 Udaya Suriya Raj Kumar Dhamodharan and Rajamani Vayanaperumal. All rights reserved.
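The message-authentication half of such a MAP scheme rests on a standard keyed-MAC check: a node that does not hold the key bound to its claimed identity cannot produce a valid tag for its messages. A minimal HMAC sketch follows; the key and message are hypothetical, and the paper's CAM-PVM position-verification side is not modeled here.

```python
import hashlib
import hmac

def sign(node_key, message):
    """Tag a message so neighbours can verify the sender actually holds
    the key bound to its claimed identity."""
    return hmac.new(node_key, message, hashlib.sha256).hexdigest()

def verify(node_key, message, tag):
    # constant-time comparison to avoid leaking tag prefixes
    return hmac.compare_digest(sign(node_key, message), tag)

legit_key = b"node-17-secret"          # assumed pre-shared key of node 17
msg = b"sensor-reading:23.5"
tag = sign(legit_key, msg)

genuine_ok = verify(legit_key, msg, tag)      # genuine node: tag checks out
sybil_ok = verify(b"sybil-guess", msg, tag)   # forged identity: tag rejected
```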
A Dynamic Recommender System for Improved Web Usage Mining and CRM Using Swarm Intelligence Wed, 01 Jul 2015 12:30:42 +0000 Nowadays, to enrich e-business, websites are personalized for each user by understanding their interests and behavior. The main challenges of online usage data are information overload and their dynamic nature. In this paper, to address these issues, we propose WebBluegillRecom-annealing, a dynamic recommender system that uses web usage mining techniques in tandem with software agents to provide dynamic recommendations to users that can be used for customizing a website. The proposed WebBluegillRecom-annealing dynamic recommender uses swarm intelligence derived from the foraging behavior of the bluegill fish. It overcomes information overload by handling the dynamic behaviors of users. Our dynamic recommender system was compared against traditional collaborative filtering systems. The results show that the proposed system has higher precision, coverage, F-measure, and scalability than the traditional collaborative filtering systems. Moreover, the recommendations given by our system overcome the overspecialization problem by including variety in recommendations. Anna Alphy and S. Prabakaran Copyright © 2015 Anna Alphy and S. Prabakaran. All rights reserved. ECG Prediction Based on Classification via Neural Networks and Linguistic Fuzzy Logic Forecaster Mon, 29 Jun 2015 08:46:57 +0000 The paper deals with ECG prediction based on neural network classification of different types of time courses of ECG signals. The main objective is to recognise normal cycles and arrhythmias and perform further diagnosis. We propose two detection systems created with the use of neural networks. The experimental part makes it possible to load ECG signals, preprocess them, and classify them into given classes. Outputs from the classifiers carry a predictive character.
All experimental results from both of the proposed classifiers are mutually compared in the conclusion. We also experimented with a new method of transparent time series prediction based on the fuzzy transform with linguistic IF-THEN rules. Preliminary results are interesting owing to the unique capability of this approach to bring natural language interpretation to a particular prediction, that is, to the properties of the time series. Eva Volna, Martin Kotyrba, and Hashim Habiballa Copyright © 2015 Eva Volna et al. All rights reserved. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision Sun, 28 Jun 2015 07:11:03 +0000 H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, for which users demand various scalings of the same content. The various scalings include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method. L. Balaji and K. K. Thyagharajan Copyright © 2015 L. Balaji and K. K. Thyagharajan. All rights reserved. Fuzzy Number Addition with the Application of Horizontal Membership Functions Tue, 23 Jun 2015 12:27:33 +0000 The paper presents the addition of fuzzy numbers realised with the application of multidimensional RDM arithmetic and horizontal membership functions (MFs).
Fuzzy arithmetic (FA) is a very difficult task because operations must be performed on multidimensional information granules. Instead, many FA methods use α-cuts in connection with 1-dimensional classical interval arithmetic, which operates not on multidimensional granules but on 1-dimensional intervals. Such an approach causes difficulties in calculations and is a source of arithmetical paradoxes. The multidimensional approach allows for removing the drawbacks and weaknesses of FA. This is possible thanks to the application of horizontal membership functions, which considerably facilitate calculations because uncertain values can now be inserted directly into equations without using the extension principle. The paper shows how the addition operation can be realised on independent fuzzy numbers and on partly or fully dependent fuzzy numbers, taking into account the order relation, and how to solve equations, which can be a difficult task for 1-dimensional FAs. Andrzej Piegat and Marcin Pluciński Copyright © 2015 Andrzej Piegat and Marcin Pluciński. All rights reserved. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features Mon, 22 Jun 2015 08:47:53 +0000 Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method.
The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup’99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different. P. Amudha, S. Karthik, and S. Sivakumari Copyright © 2015 P. Amudha et al. All rights reserved. Optimization of Processing Parameters in ECM of Die Tool Steel Using Nanofluid by Multiobjective Genetic Algorithm Thu, 18 Jun 2015 11:29:37 +0000 The formation of spikes prevents the achievement of better material removal rate (MRR) and surface finish when using plain NaNO3 aqueous electrolyte in electrochemical machining (ECM) of die tool steel. Hence this research work attempts to minimize the formation of spikes in the selected workpiece of high carbon high chromium die tool steel using copper nanoparticles suspended in NaNO3 aqueous electrolyte, that is, nanofluid. The selected influencing parameters are applied voltage and electrolyte discharge rate with three levels and tool feed rate with four levels. Thirty-six experiments were designed using Design Expert 7.0 software, and optimization was done using a multiobjective genetic algorithm (MOGA). This tool identified the best possible combination for achieving better MRR and surface roughness. The results reveal that a voltage of 18 V, a tool feed rate of 0.54 mm/min, and a nanofluid discharge rate of 12 lit/min would be the optimum values in ECM of HCHCr die tool steel.
To check the optimality obtained from the MOGA in MATLAB software, a maximum MRR of 375.78277 mm³/min and a respective surface roughness Ra of 2.339779 μm were predicted at an applied voltage of 17.688986 V, a tool feed rate of 0.5399705 mm/min, and a nanofluid discharge rate of 11.998816 lit/min. Confirmatory tests showed that the actual performance at the optimum conditions was 361.214 mm³/min and 2.41 μm; the deviation from the predicted performance is less than 4%, which proves the composite desirability of the developed models. V. Sathiyamoorthy, T. Sekar, and N. Elango Copyright © 2015 V. Sathiyamoorthy et al. All rights reserved. Erratum to “N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering” Wed, 17 Jun 2015 07:20:09 +0000 Farman Ullah, Ghulam Sarwar, and Sungchang Lee Copyright © 2015 Farman Ullah et al. All rights reserved. Usage of Probabilistic and General Regression Neural Network for Early Detection and Prevention of Oral Cancer Mon, 15 Jun 2015 13:47:42 +0000 In India, oral cancers usually present at an advanced stage of malignancy. It is critical to ascertain the diagnosis in order to initiate the most advantageous treatment of the suspicious lesions. The main hurdle in the appropriate treatment and control of oral cancer is the identification and risk assessment of early disease in the community in a cost-effective fashion. The objective of this research is to design a data mining model using a probabilistic neural network and general regression neural network (PNN/GRNN) for early detection and prevention of oral malignancy. The model is built using an oral cancer database which has 35 attributes and 1025 records. All the attributes pertaining to clinical symptoms and history are considered to classify malignant and non-malignant cases. Subsequently, the model attempts to predict the particular type of cancer and its stage and extent with the help of attributes pertaining to symptoms, gross examination, and investigations.
The model also anticipates the survivability of a patient on the basis of treatment and follow-up details. Finally, the performance of the PNN/GRNN model is compared with that of other classification models. The classification accuracy of the PNN/GRNN model is 80%, making it better suited for early detection and prevention of oral cancer. Neha Sharma and Hari Om Copyright © 2015 Neha Sharma and Hari Om. All rights reserved. FoodWiki: Ontology-Driven Mobile Safe Food Consumption System Mon, 15 Jun 2015 13:44:43 +0000 An ontology-driven safe food consumption mobile system is considered. Over 3,000 compounds are added to processed food, with numerous effects on the food: to add color, stabilize, texturize, preserve, sweeten, thicken, add flavor, soften, emulsify, and so forth. According to the World Health Organization, governments have lately focused on legislation to reduce such ingredients or compounds in manufactured foods, as they may have side effects causing health risks such as heart disease, cancer, diabetes, allergies, and obesity. By supervising what and how much to eat, as well as what not to eat, we can maximize a patient’s quality of life through avoidance of unhealthy ingredients. Smart e-health systems with powerful knowledge bases can suggest appropriate foods to individuals. Next-generation smart knowledgebase systems will not only include traditional syntactic-based search, which limits the utility of the search results, but will also provide semantics for rich searching. In this paper, concept matching of food ingredients is semantic-based, meaning that the system runs its own semantic rule set to infer meaningful results through the proposed Ontology-Driven Mobile Safe Food Consumption System (FoodWiki). Duygu Çelik Copyright © 2015 Duygu Çelik. All rights reserved.
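The FoodWiki abstract above describes matching food ingredients against a rule set to flag health risks for an individual. A minimal sketch of that idea follows; the taxonomy, rule set, and condition names are illustrative assumptions, not the system's actual ontology.

```python
# Hypothetical sketch of FoodWiki-style semantic rule matching.
# The taxonomy, rules, and condition names below are made-up assumptions.

# A tiny ingredient taxonomy: ingredient -> broader semantic class.
TAXONOMY = {
    "aspartame": "artificial_sweetener",
    "saccharin": "artificial_sweetener",
    "tartrazine": "artificial_color",
}

# Rules link semantic classes of ingredients to health conditions they may affect.
RULES = [
    {"class": "artificial_sweetener", "risk_for": {"diabetes", "obesity"}},
    {"class": "artificial_color", "risk_for": {"allergies"}},
]

def flagged_ingredients(ingredients, patient_conditions):
    """Return ingredients whose semantic class matches a rule relevant
    to any of the patient's health conditions."""
    flagged = []
    for ing in ingredients:
        sem_class = TAXONOMY.get(ing)
        for rule in RULES:
            if rule["class"] == sem_class and rule["risk_for"] & patient_conditions:
                flagged.append(ing)
                break
    return flagged

print(flagged_ingredients(["aspartame", "salt", "tartrazine"], {"diabetes"}))
# ['aspartame']
```

A real system would resolve the semantic class through an ontology reasoner rather than a flat dictionary, but the matching step reduces to the same class-and-condition intersection shown here.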
Hybrid Modified K-Means with C4.5 for Intrusion Detection Systems in Multiagent Systems Mon, 15 Jun 2015 11:54:40 +0000 Presently, the processing time and performance of intrusion detection systems are of great importance due to the increased speed of traffic data networks and a growing number of attacks on networks and computers. Several approaches have been proposed to address this issue, including hybridization of several algorithms. This paper proposes a hybrid of modified K-means with C4.5 for an intrusion detection system in a multiagent system (MAS-IDS). The MAS-IDS consists of three agents, namely, the coordinator, analysis, and communication agents. The basic concept underpinning the utilized MAS is dividing the large captured network dataset into a number of subsets and distributing these to a number of agents depending on the network data size and core CPU availability. The KDD Cup 1999 dataset is used for evaluation. The proposed hybrid modified K-means with C4.5 classification in MAS is developed on the JADE platform. The results show that, compared to current methods, the MAS-IDS reduces the IDS processing time by up to 70% while improving the detection accuracy. Wathiq Laftah Al-Yaseen, Zulaiha Ali Othman, and Mohd Zakree Ahmad Nazri Copyright © 2015 Wathiq Laftah Al-Yaseen et al. All rights reserved. Personal Authentication Using Multifeatures Multispectral Palm Print Traits Sun, 14 Jun 2015 12:48:39 +0000 Biometric authentication is an effective method for automatically recognizing a person’s identity with high confidence. The multispectral palm print biometric system is a relatively new biometric technology that is continually being refined and developed. It is a promising technology for use in various applications including banking solutions, access control, hospitals, construction, and forensic applications.
This paper proposes a multispectral palm print recognition method with extraction of multiple features using kernel principal component analysis and a modified finite Radon transform. Finally, the images are classified using the Local Mean K-Nearest Centroid Neighbor algorithm. The proposed method efficiently accommodates rotational and translational changes and potential deformations by encoding orientation-conserving features. The proposed system analyses hand vascular authentication using two databases acquired with touch-based and contactless imaging setups: the multispectral PolyU palm print database and the CASIA database. The experimental results clearly demonstrate that the proposed multispectral palm print authentication obtains better results than the other methods discussed in the literature. Gayathri Rajagopal and Senthil Kumar Manoharan Copyright © 2015 Gayathri Rajagopal and Senthil Kumar Manoharan. All rights reserved. Effective Filtering of Query Results on Updated User Behavioral Profiles in Web Mining Wed, 10 Jun 2015 13:38:13 +0000 The web, with its tremendous volume of information, retrieves results for user-related queries. Despite the rapid growth of web page recommendation, results retrieved by data mining techniques have not offered a high filtering rate, because relationships between user profiles and queries were not analyzed extensively. At the same time, existing user-profile-based prediction in web data mining is not exhaustive in producing personalized results. To improve query results under the dynamics of user behavior over time, the Hamilton Filtered Regime Switching User Query Probability (HFRS-UQP) framework is proposed. The HFRS-UQP framework is split into two processes, filtering and switching.
The data-mining-based filtering in our research work uses the Hamilton filtering framework to filter user results based on personalized information from automatically updated profiles through the search engine. A maximized result set is fetched, that is, filtered with respect to user behavior profiles. The switching process accurately filters updated profiles using regime switching. The profile-change (i.e., switching) regime in the HFRS-UQP framework identifies the second- and higher-order associations of query results on the updated profiles. Experiments are conducted on factors such as personalized information search retrieval rate, filtering efficiency, and precision ratio. S. Sadesh and R. C. Suganthe Copyright © 2015 S. Sadesh and R. C. Suganthe. All rights reserved. Brain Computer Interface on Track to Home Mon, 08 Jun 2015 09:08:54 +0000 The novel BackHome system offers individuals with disabilities a range of useful services available via brain-computer interfaces (BCIs), to help restore their independence. This is the first time such technology is ready to be deployed in the real world, that is, at the target end users’ home. This has been achieved by the development of practical electrodes and easy-to-use software, and by delivering telemonitoring and home support capabilities which have been conceived, implemented, and tested within a user-centred design approach. The final BackHome system is the result of a 3-year-long process involving extensive user engagement to maximize the effectiveness, reliability, robustness, and ease of use of a home-based BCI system. The system comprises ergonomic and hassle-free BCI equipment; one-click software services for Smart Home control, cognitive stimulation, and web browsing; and remote telemonitoring and home support tools to enable independent home use for nonexpert caregivers and users.
BackHome aims to successfully bring BCIs to the homes of people with limited mobility to restore their independence and ultimately improve their quality of life. Felip Miralles, Eloisa Vargiu, Stefan Dauwalder, Marc Solà, Gernot Müller-Putz, Selina C. Wriessnegger, Andreas Pinegger, Andrea Kübler, Sebastian Halder, Ivo Käthner, Suzanne Martin, Jean Daly, Elaine Armstrong, Christoph Guger, Christoph Hintermüller, and Hannah Lowish Copyright © 2015 Felip Miralles et al. All rights reserved. Energy Efficient Link Aware Routing with Power Control in Wireless Ad Hoc Networks Mon, 08 Jun 2015 08:34:33 +0000 In wireless ad hoc networks, traditional routing protocols make route selections based on the minimum distance between nodes and the minimum hop count. Most routing decisions do not consider the condition of the network, such as link quality and the residual energy of the nodes. Also, when a link failure occurs, a route discovery mechanism is initiated, which incurs high routing overhead. If the broadcast nature and the spatial diversity of wireless communication are utilized efficiently, it becomes possible to improve the performance of wireless networks. In contrast to the traditional routing scheme, which uses a predetermined route for packet transmission, an opportunistic routing scheme defines a forwarding candidate list formed using a single network metric. In this paper, a protocol is proposed which uses multiple metrics, such as residual energy and link quality, for route selection and also includes a monitoring mechanism which initiates route discovery for a poor link, thereby reducing the overhead involved and improving the throughput of the network while maintaining network connectivity. Power control is also implemented, not only to save energy but also to improve network performance.
Using simulations, we show the performance improvement attained in the network in terms of packet delivery ratio, routing overhead, and residual energy of the network. Jeevaa Katiravan, D. Sylvia, and D. Srinivasa Rao Copyright © 2015 Jeevaa Katiravan et al. All rights reserved. Development of a Comprehensive Database System for Safety Analyst Mon, 08 Jun 2015 07:44:12 +0000 This study addressed barriers associated with the use of Safety Analyst, a state-of-the-art tool that has been developed to assist during the entire Traffic Safety Management process but that is not widely used due to a number of challenges as described in this paper. As part of this study, a comprehensive database system and tools to provide data to multiple traffic safety applications, with a focus on Safety Analyst, were developed. A number of data management tools were developed to extract, collect, transform, integrate, and load the data. The system includes consistency-checking capabilities to ensure the adequate insertion and update of data into the database. This system focused on data from roadways, ramps, intersections, and traffic characteristics for Safety Analyst. To test the proposed system and tools, data from Clark County, which is the largest county in Nevada and includes the cities of Las Vegas, Henderson, Boulder City, and North Las Vegas, was used. The database and Safety Analyst together help identify the sites with the potential for safety improvements. Specifically, this study examined the results from two case studies. The first case study, which identified sites having a potential for safety improvements with respect to fatal and all injury crashes, included all roadway elements and used default and calibrated Safety Performance Functions (SPFs). The second case study identified sites having a potential for safety improvements with respect to fatal and all injury crashes, specifically regarding intersections; it used default and calibrated SPFs as well. 
Conclusions were developed for the calibration of safety performance functions and the classification of site subtypes. Guidelines were provided for the selection of a particular network screening type or performance measure for network screening. Alexander Paz, Naveen Veeramisti, Indira Khanal, Justin Baker, and Hanns de la Fuente-Mella Copyright © 2015 Alexander Paz et al. All rights reserved. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling Sun, 07 Jun 2015 14:19:48 +0000 Web crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web search engines face new challenges due to the availability of vast amounts of web documents, thus making the retrieved results less applicable to the analysers. Recently, however, web crawling has focused solely on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web, and these links have to be further processed for future use, thereby increasing the load on the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. In this paper, firstly the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code, and their metadata, such as title, keywords, and description, are extracted. This content is essential for any type of analysis to be carried out on the Big Data obtained as a result of web crawling. R. Suganya Devi, D. Manjula, and R. K. Siddharth Copyright © 2015 R. Suganya Devi et al. All rights reserved.
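The web-indexing abstract above describes two steps: depth-first traversal of links and extraction of each page's metadata (title, keywords, description). The sketch below illustrates both over an in-memory "site" instead of live URLs; the page contents and link graph are made-up assumptions, not data from the paper.

```python
# Illustrative sketch of DFS link traversal plus metadata extraction.
# PAGES stands in for fetched documents; a real crawler would fetch by URL.
from html.parser import HTMLParser

PAGES = {
    "/index": '<html><head><title>Home</title>'
              '<meta name="keywords" content="big data, crawling">'
              '</head><body><a href="/a">A</a></body></html>',
    "/a": '<html><head><title>Page A</title></head>'
          '<body><a href="/index">back</a></body></html>',
}

class MetaExtractor(HTMLParser):
    """Collects title text, meta keywords/description, and outgoing links."""
    def __init__(self):
        super().__init__()
        self.meta = {"title": "", "links": []}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "a" and "href" in attrs:
            self.meta["links"].append(attrs["href"])
        elif tag == "meta" and attrs.get("name") in ("keywords", "description"):
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] += data

def crawl(url, visited=None):
    """Depth-first traversal that records each page's metadata exactly once."""
    if visited is None:
        visited = {}
    if url in visited or url not in PAGES:
        return visited
    parser = MetaExtractor()
    parser.feed(PAGES[url])
    visited[url] = parser.meta
    for link in parser.meta["links"]:
        crawl(link, visited)
    return visited

result = crawl("/index")
print(result["/index"]["title"])   # Home
print(result["/a"]["title"])       # Page A
```

The `visited` dictionary is what makes the depth-first scan hierarchical yet cycle-safe: the back-link from `/a` to `/index` is ignored because the page was already recorded.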
A Strategic Study about Quality Characteristics in e-Health Systems Based on a Systematic Literature Review Thu, 04 Jun 2015 10:08:14 +0000 e-Health Systems quality management is an expensive and difficult process that entails performing several tasks such as analysis, evaluation, and quality control. Furthermore, the development of an e-Health System involves great responsibility, since people’s health and quality of life depend on the system and the services offered. The focus of the following study is to identify the gap in Quality Characteristics for e-Health Systems by detecting not only which are the most studied but also which are the most used Quality Characteristics in these systems. A strategic study is conducted in this paper by means of a Systematic Literature Review so as to identify Quality Characteristics in e-Health. Such a study helps information and communication technology organizations reflect and act strategically to manage quality in e-Health Systems efficiently and effectively. As a result, this paper proposes the bases of a Quality Model and focuses on a set of Quality Characteristics to enable e-Health Systems quality management. Thus, we can conclude that this paper contributes knowledge with regard to the mission and vision of e-Health (Systems) quality management and helps in understanding how current research evaluates quality in e-Health Systems. F. J. Domínguez-Mayo, M. J. Escalona, M. Mejías, G. Aragón, J. A. García-García, J. Torres, and J. G. Enríquez Copyright © 2015 F. J. Domínguez-Mayo et al. All rights reserved. A Multiconstrained Grid Scheduling Algorithm with Load Balancing and Fault Tolerance Wed, 03 Jun 2015 07:41:42 +0000 A grid environment consists of millions of dynamic and heterogeneous resources. A grid environment that deals with computing resources is a computational grid, meant for applications that involve large computations.
A scheduling algorithm is said to be efficient if and only if it performs better resource allocation even in the case of resource failure. Allocation of resources is a tedious issue, since it has to consider several requirements such as system load, processing cost and time, the user’s deadline, and resource failure. This work attempts to design a resource allocation algorithm which is budget-constrained and also targets load balancing, fault tolerance, and user satisfaction by considering the above requirements. The proposed Multiconstrained Load Balancing Fault Tolerant algorithm (MLFT) reduces the schedule makespan, schedule cost, and task failure rate and improves resource utilization. The proposed MLFT algorithm is evaluated using the GridSim toolkit, and the results are compared with recent algorithms which separately concentrate on these factors. The comparison results show that the proposed algorithm works better than its counterparts. P. Keerthika and P. Suresh Copyright © 2015 P. Keerthika and P. Suresh. All rights reserved. A Framework and Improvements of the Korea Cloud Services Certification System Mon, 01 Jun 2015 11:30:15 +0000 Cloud computing service is an evolving paradigm that affects a large part of the ICT industry and provides new opportunities for ICT service providers, such as the deployment of new business models and the realization of economies of scale by increasing the efficiency of resource utilization. However, despite the benefits of cloud services, there are some obstacles to adoption, such as the lack of means for assessing and comparing the service quality of cloud services regarding availability, security, and reliability. In order to adopt cloud services successfully and promote their use, it is necessary to establish a cloud service certification system to ensure the service quality and performance of cloud services. This paper proposes a framework for, and improvements to, the Korea cloud service certification system.
In order to develop it, the critical issues related to service quality, performance, and certification of cloud services are identified, and a systematic framework for the certification system of cloud services and service provider domains is developed. Improvements to the developed Korea cloud service certification system are also proposed. Hangoo Jeon and Kwang-Kyu Seo Copyright © 2015 Hangoo Jeon and Kwang-Kyu Seo. All rights reserved. Educational Applications for Blind and Partially Sighted Pupils Based on Speech Technologies for Serbian Mon, 01 Jun 2015 06:39:20 +0000 The inclusion of persons with disabilities has always represented an important issue. Advancements within the field of computer science have enabled the development of different types of aids, which have significantly improved the quality of life of the disabled. However, for some disabilities, such as visual impairment, the purpose of these aids is to establish an alternative communication channel and thus overcome the user’s disability. Speech technologies play a crucial role in this process. This paper presents the ongoing efforts to create a set of educational applications based on speech technologies for Serbian for the early stages of education of blind and partially sighted children. Two educational applications dealing with memory exercises and comprehension of geometrical shapes are presented, along with the initial test results obtained from research including visually impaired pupils. Branko Lučić, Stevan Ostrogonac, Nataša Vujnović Sedlar, and Milan Sečujski Copyright © 2015 Branko Lučić et al. All rights reserved. Electronic Voting Protocol Using Identity-Based Cryptography Sun, 24 May 2015 14:30:56 +0000 Electronic voting protocols proposed to date meet their properties based on Public Key Cryptography (PKC), which offers high flexibility through key agreement protocols and authentication mechanisms.
However, when PKC is used, it is necessary to implement a Certification Authority (CA) to provide certificates which bind public keys to entities and enable verification of such public key bindings. Consequently, the components of the protocol increase notably. An alternative is to use Identity-Based Encryption (IBE). With this kind of cryptography, it is possible to have all the benefits offered by PKC without the need for certificates or all the core components of a Public Key Infrastructure (PKI). Considering the aforementioned, in this paper we propose an electronic voting protocol which meets the privacy and robustness properties by using bilinear maps. Gina Gallegos-Garcia and Horacio Tapia-Recillas Copyright © 2015 Gina Gallegos-Garcia and Horacio Tapia-Recillas. All rights reserved. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation Wed, 20 May 2015 12:25:26 +0000 Software companies are now keen to provide secure software with respect to the accuracy and reliability of their products, especially in relation to software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporate quality metrics, a reliability factor, and a security factor with fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, performance metrics are added to the effort estimation for accuracy. Experimentation is performed with different project datasets on the hybrid tool, and the results are compared with those of existing models.
It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product. Senthil Kumar Murugesan and Chidhambara Rajan Balasubramanian Copyright © 2015 Senthil Kumar Murugesan and Chidhambara Rajan Balasubramanian. All rights reserved. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids Thu, 14 May 2015 07:20:48 +0000 One of the most significant parameters in real-world computing environments is energy. Minimizing energy yields benefits such as reduced power consumption, lower cooling rates of the computing processors, a greener environment, and so forth. In fact, computation time and energy are directly proportional to each other, and minimizing computation time may yield cost-effective energy consumption. Proficient scheduling of Bags-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle’s best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm. M. Christobel, S. Tamil Selvi, and Shajulin Benedict Copyright © 2015 M. Christobel et al. All rights reserved.
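The DPSO scheduling abstract above can be illustrated with a minimal discrete-PSO sketch: each particle is an assignment of tasks to processors, and fitness is the makespan. The update rule (probabilistically copying positions from the personal and global bests) is a common DPSO simplification assumed here, not the paper's exact pbDPSO/gbDPSO formulation, and the task lengths are made up.

```python
# Minimal discrete PSO for Bag-of-Tasks scheduling (illustrative assumptions).
import random

TASK_LENGTHS = [4, 7, 2, 9, 5, 3]   # work units per task (made up)
NUM_PROCS = 2

def makespan(assignment):
    """Completion time of the most loaded processor."""
    loads = [0] * NUM_PROCS
    for task, proc in enumerate(assignment):
        loads[proc] += TASK_LENGTHS[task]
    return max(loads)

def dpso(iterations=50, swarm=8, c1=0.4, c2=0.4, seed=1):
    rng = random.Random(seed)
    # Each particle is a task->processor assignment vector.
    positions = [[rng.randrange(NUM_PROCS) for _ in TASK_LENGTHS] for _ in range(swarm)]
    pbest = [p[:] for p in positions]
    gbest = min(pbest, key=makespan)[:]
    for _ in range(iterations):
        for i, pos in enumerate(positions):
            for t in range(len(TASK_LENGTHS)):
                r = rng.random()
                if r < c1:
                    pos[t] = pbest[i][t]               # move toward personal best
                elif r < c1 + c2:
                    pos[t] = gbest[t]                  # move toward global best
                else:
                    pos[t] = rng.randrange(NUM_PROCS)  # random exploration
            if makespan(pos) < makespan(pbest[i]):
                pbest[i] = pos[:]
                if makespan(pos) < makespan(gbest):
                    gbest = pos[:]
    return gbest, makespan(gbest)

best, span = dpso()
print(span)  # the optimal makespan for this instance is 15 (30 work units on 2 processors)
```

Energy-aware variants such as the paper's MCER/DVS step would then scale processor frequency down on the less loaded processor without extending the makespan found here.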
Emergency Situation Prediction Mechanism: A Novel Approach for Intelligent Transportation System Using Vehicular Ad Hoc Networks Sun, 10 May 2015 11:16:24 +0000 On Indian four-lane express highways, millions of vehicles travel every day. Accidents occur frequently on these highways, causing deaths and damage to infrastructure. A mechanism is required to avoid such road accidents as far as possible and reduce the death toll. An Emergency Situation Prediction Mechanism (ESPM), a novel and proactive approach, is proposed in this paper for achieving the best of an Intelligent Transportation System using a Vehicular Ad Hoc Network. ESPM intends to predict the possibility of the occurrence of an accident on an Indian four-lane express highway. In ESPM, the emergency situation prediction is done by the Road Side Unit (RSU) based on (i) the Status Report sent by the vehicles in the range of the RSU and (ii) the road traffic flow analysis done by the RSU. Once an emergency situation or accident is predicted in advance, an Emergency Warning Message is constructed and disseminated to all vehicles in the area of the RSU to alert them and prevent accidents. ESPM performs well in predicting emergency situations in advance of the occurrence of an accident. ESPM predicts the emergency situation within 0.20 seconds, which is comparatively less than the statistical value. The prediction accuracy of ESPM against vehicle density is found to be better in different traffic scenarios. P. Ganeshkumar and P. Gokulakrishnan Copyright © 2015 P. Ganeshkumar and P. Gokulakrishnan. All rights reserved. Online Pedagogical Tutorial Tactics Optimization Using Genetic-Based Reinforcement Learning Thu, 07 May 2015 11:48:51 +0000 Tutorial tactics are policies for an Intelligent Tutoring System (ITS) to decide the next action when there are multiple actions available.
Recent research has demonstrated that, when the learning contents are controlled so as to be the same, different tutorial tactics make a difference in students’ learning gains. However, the Reinforcement Learning (RL) techniques that were used in previous studies to induce tutorial tactics are insufficient when encountering large problems and hence were used in an offline manner. Therefore, we introduce a Genetic-Based Reinforcement Learning (GBML) approach to induce tutorial tactics in an online-learning manner without relying on any preexisting dataset. The introduced method can learn a set of rules from the environment in a manner similar to RL. It includes a genetic-based optimizer for the rule discovery task, generating new rules from old ones. This increases the scalability of an RL learner for larger problems. The results support our hypothesis about the capability of the GBML method to induce tutorial tactics. This suggests that the GBML method should be favorable for developing real-world ITS applications in the domain of tutorial tactics induction. Hsuan-Ta Lin, Po-Ming Lee, and Tzu-Chien Hsiao Copyright © 2015 Hsuan-Ta Lin et al. All rights reserved. QRFXFreeze: Queryable Compressor for RFX Wed, 06 May 2015 13:31:39 +0000 The verbose nature of XML has been discussed again and again, and many compression techniques for XML data have been devised over the years. Some of the techniques incorporate support for querying the XML database in its compressed format, while others have to be decompressed before they can be queried. XML compression in which querying is directly supported with no compromise in time is forced to compromise in space. In this paper, we propose the compressor QRFXFreeze, which not only reduces storage space but also supports efficient querying. The compressor does this without decompressing the compressed XML file.
The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors. Radha Senthilkumar, Gomathi Nandagopal, and Daphne Ronald Copyright © 2015 Radha Senthilkumar et al. All rights reserved. Security of Information and Networks Mon, 04 May 2015 13:05:51 +0000 Iftikhar Ahmad, Aneel Rahim, Adeel Javed, and Hafiz Malik Copyright © 2015 Iftikhar Ahmad et al. All rights reserved. Method for Detecting Manipulated Compilation of Sensing Reports in Wireless Sensor Networks Sun, 03 May 2015 13:50:58 +0000 In cluster-based wireless sensor networks (WSNs), a few sensor nodes, including cluster heads (CHs), can be physically compromised by a malicious adversary. By using compromised CHs, the adversary can intentionally attach false message authentication codes into legitimate sensing reports in order to interrupt reporting of the real events. The existing solutions are vulnerable to such a type of security attacks, called manipulated compilation attacks (MCAs), since they assume that CHs are uncompromised. Thus, the reports manipulated by compromised CHs will be discarded by forwarding nodes or rejected at base stations, so that real events on the fields cannot be properly reported to the users. In this paper, the author proposes a method for the detection of MCAs in cluster-based WSNs. In the proposed method, every sensing report is collaboratively generated and verified by cluster nodes based on very loose synchronization. Once a cluster node has detected an MCA for a real event, it can reforward a legitimate report immediately. Therefore, the event can be properly reported to the users. The performance of the proposed method is shown with analytical and experimental results at the end of the paper. 
Hae Young Lee Copyright © 2015 Hae Young Lee. All rights reserved. Using a Prediction Model to Manage Cyber Security Threats Sun, 03 May 2015 10:30:00 +0000 Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. Venkatesh Jaganathan, Priyesh Cherurveettil, and Premapriya Muthu Sivashanmugam Copyright © 2015 Venkatesh Jaganathan et al. All rights reserved. A Novel Protective Framework for Defeating HTTP-Based Denial of Service and Distributed Denial of Service Attacks Sun, 03 May 2015 10:18:06 +0000 The growth of web technology has brought convenience to our lives, since the web has become the most important communication channel. However, this merit is now threatened by complicated network-based attacks, such as denial of service (DoS) and distributed denial of service (DDoS) attacks. Despite many researchers’ efforts, no optimal solution that addresses all sorts of HTTP DoS/DDoS attacks is on offer. Therefore, this research aims to fill this gap by designing an alternative solution called the flexible, collaborative, multilayer, DDoS prevention framework (FCMDPF). The innovative design of the FCMDPF framework handles all aspects of HTTP-based DoS/DDoS attacks through three successive schemes (layers). Firstly, an outer blocking (OB) scheme blocks the attacking IP source if it is listed in the blacklist table.
Secondly, the service traceback oriented architecture (STBOA) scheme validates whether the incoming request is launched by a human or by an automated tool and then traces back the true attacking IP source. Thirdly, the flexible advanced entropy based (FAEB) scheme eliminates high-rate DDoS (HR-DDoS) and flash crowd (FC) attacks. Compared to previous research, our framework’s design provides efficient protection for web applications against all sorts of DoS/DDoS attacks. Mohammed A. Saleh and Azizah Abdul Manaf Copyright © 2015 Mohammed A. Saleh and Azizah Abdul Manaf. All rights reserved. Using Fuzzy Logic Techniques for Assertion-Based Software Testing Metrics Tue, 28 Apr 2015 07:38:50 +0000 Software testing is a very labor-intensive and costly task. Therefore, many software testing techniques to automate the process of software testing have been reported in the literature. Assertion-based automated software testing has been shown to be effective in detecting program faults compared to traditional black-box and white-box software testing methods. However, the application of this approach in the presence of large numbers of assertions may be very costly. Therefore, software developers need assistance when deciding whether to apply assertion-based testing in order to get the benefits of this approach at an acceptable level of cost. In this paper, we present an assertion-based testing metrics technique that is based on fuzzy logic. The main goal of the proposed technique is to enhance the performance of assertion-based software testing in the presence of large numbers of assertions. To evaluate the proposed technique, an experimental study was performed in which the proposed technique was applied to programs with assertions. The results of this experiment show that the effectiveness and performance of assertion-based software testing improve when applying the proposed testing metrics technique. Ali M.
Alakeel Copyright © 2015 Ali M. Alakeel. All rights reserved. Generating Personalized Web Search Using Semantic Context Mon, 27 Apr 2015 09:25:17 +0000 The “one size fits all” criticism of search engines is that, when queries are submitted, the same results are returned to different users. Personalized search addresses this problem by providing different search results based on the preferences of users. However, existing methods concentrate on long-term and independent user profiles, which reduces the effectiveness of personalized search. In this paper, a method is proposed that captures user context to provide accurate user preferences for effective personalized search. First, a short-term query context is generated to identify concepts related to the query. Second, the user context is generated from the users’ click-through data. Finally, a forgetting factor is introduced to merge the independent user contexts within a user session, which tracks the evolution of user preferences. Experimental results confirm that our approach can successfully represent user context according to individual user information needs. Zheng Xu, Hai-Yan Chen, and Jie Yu Copyright © 2015 Zheng Xu et al. All rights reserved. A Novel Way to Relate Ontology Classes Tue, 21 Apr 2015 06:42:42 +0000 Existing ontologies in the semantic web typically contain anonymous union and intersection classes. Anonymous classes are limited in scope and may not take part in the whole inference process. Tools such as Pellet, Jena, and Protégé interpret collection classes as (a) equivalent to, or subclasses of, a union class and (b) superclasses of an intersection class. As a result, these tools may produce error-prone inference results for relations, namely, sub-, union, intersection, and equivalent relations, and for relations that depend on these, such as complement.
Verifying whether one class is the complement of another involves the sub- and equivalent relations. Motivated by this, we (i) refine the test data set of the conference ontology by adding named union and intersection classes and (ii) propose a match algorithm that (a) computes a corrected subclass list, (b) correctly relates intersection and union classes with their collection classes, and (c) matches union, intersection, sub-, complement, and equivalent classes in a proper sequence, to avoid error-prone match results. We compare the results of our algorithms with those of a candidate reasoner, the Pellet reasoner. To the best of our knowledge, ours is a unique attempt at establishing a novel way to relate ontology classes. Ami T. Choksi and Devesh C. Jinwala Copyright © 2015 Ami T. Choksi and Devesh C. Jinwala. All rights reserved. Smart TV-Smartphone Multiscreen Interactive Middleware for Public Displays Thu, 09 Apr 2015 12:18:25 +0000 A new generation of public displays demands highly interactive, multiscreen features to enrich people’s experience in new pervasive environments. Traditionally, research on public display interaction has cast mobile devices as the main actors, using personal area network technologies such as Bluetooth or NFC. However, the emerging Smart TV model arises as an interesting alternative for implementing a new generation of public displays, owing to its intrinsic connection capabilities with surrounding devices such as smartphones and tablets. Nonetheless, the approaches proposed by the most important vendors are still underdeveloped with respect to the multiscreen and interaction capabilities modern public displays require, because most of them are intended for domestic environments.
This research proposes multiscreen interactive middleware for public displays, developed from the principles of a loosely coupled interaction model, simplicity, stability, concurrency, low latency, and the use of open standards and technologies. Moreover, a validation prototype is proposed for one of the most interesting public display scenarios: advertising. Francisco Martinez-Pabon, Jaime Caicedo-Guerrero, Jhon Jairo Ibarra-Samboni, Gustavo Ramirez-Gonzalez, and Davinia Hernández-Leo Copyright © 2015 Francisco Martinez-Pabon et al. All rights reserved. Machine Learning in Intelligent Video and Automated Monitoring Thu, 09 Apr 2015 08:58:18 +0000 Yu-Bo Yuan, Gao Yang David, and Shan Zhao Copyright © 2015 Yu-Bo Yuan et al. All rights reserved. Corrigendum to “A Preliminary Investigation of User Perception and Behavioral Intention for Different Review Types: Customers and Designers Perspective” Tue, 07 Apr 2015 13:04:12 +0000 Atika Qazi, Ram Gopal Raj, Muhammad Tahir, Mehwish Waheed, Saif Ur Rehman Khan, and Ajith Abraham Copyright © 2015 Atika Qazi et al. All rights reserved. A Novel Rules Based Approach for Estimating Software Birthmark Mon, 06 Apr 2015 11:22:33 +0000 A software birthmark is a unique characteristic of a program that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy, the copying, stealing, and misuse of software without the permission specified in the license agreement, are rapidly growing problems. Estimating a birthmark plays a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, credibility and resilience.
For this purpose, soft computing concepts such as probabilistic and fuzzy computing are taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, indicates how much effort will be required to detect the originality of the software based on its birthmark. Shah Nazir, Sara Shahzad, Sher Afzal Khan, Norma Binti Alias, and Sajid Anwar Copyright © 2015 Shah Nazir et al. All rights reserved. Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System Wed, 01 Apr 2015 08:36:56 +0000 An ontology is a specification of the concepts of an information domain that is shared by a group of users. Incorporating ontologies into information retrieval is a common way to improve the retrieval of the relevant information users require. Matching keywords against historical or domain information is significant in recent approaches to finding the best match for a given input query. This research presents a better querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used to route the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the text when performing semantic matching. This research develops semantic matching between input queries and information in the ontology domain. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information domain.
Semantic matching between the queries and the information domain is used to discover the best match and to speed up the retrieval process. In conclusion, the hybrid ontology in the semantic web retrieves documents more effectively than a standard ontology. K. R. Uthayan and G. S. Anandha Mala Copyright © 2015 K. R. Uthayan and G. S. Anandha Mala. All rights reserved. Strategic Management Advanced Service for Sustainable Computing Environment Mon, 30 Mar 2015 12:34:54 +0000 Sang-Soo Yeo, Qun Jin, Vincenzo Loia, and Hangbae Chang Copyright © 2015 Sang-Soo Yeo et al. All rights reserved. ACOustic: A Nature-Inspired Exploration Indicator for Ant Colony Optimization Mon, 30 Mar 2015 11:53:06 +0000 A statistical machine learning indicator, ACOustic, is proposed to evaluate the exploration behavior in the iterations of ant colony optimization algorithms. The idea is inspired by the behavior of some parasites that mimic the acoustics of the queens of their ant hosts; the parasites’ reaction reflects their ability to indicate the state of penetration. The proposed indicator solves the robustness problem that results from differences of magnitude in the distance matrix, especially in combinatorial optimization problems with rugged fitness landscapes. The performance of the proposed indicator is evaluated against existing indicators in six variants of ant colony optimization algorithms. Instances of the travelling salesman problem and the quadratic assignment problem are used in the experimental evaluation. The analytical results show that the proposed indicator is more informative and more robust. Rafid Sagban, Ku Ruhana Ku-Mahamud, and Muhamad Shahbani Abu Bakar Copyright © 2015 Rafid Sagban et al. All rights reserved. Energy Aware Swarm Optimization with Intercluster Search for Wireless Sensor Network Mon, 30 Mar 2015 08:29:24 +0000 Wireless sensor networks (WSNs) are emerging as a popular low-cost solution for many real-world challenges.
The low cost allows the deployment of large sensor arrays to perform military and civilian tasks. WSNs are generally power-constrained because their deployment method makes replacing the battery source difficult, so a key challenge is a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovering the optimal number of clusters and cluster heads (CHs). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS. Shanmugasundaram Thilagavathi and Bhavani Gnanasambandan Geetha Copyright © 2015 Shanmugasundaram Thilagavathi and Bhavani Gnanasambandan Geetha. All rights reserved. Fast Image Search with Locality-Sensitive Hashing and Homogeneous Kernels Map Sun, 29 Mar 2015 13:50:51 +0000 Fast image search with efficient additive kernels and kernel locality-sensitive hashing is proposed. To support kernel functions, recent work has explored methods for constructing locality-sensitive hashing (LSH) schemes that guarantee linear query time; however, existing methods sacrifice accuracy in the search results in order to allow fast queries. To improve search accuracy, we show how to apply the explicit feature maps of homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate. Jun-yi Li and Jian-hua Li Copyright © 2015 Jun-yi Li and Jian-hua Li. All rights reserved.
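The hashing step behind such fast-image-search systems can be illustrated with a minimal sketch. The snippet below shows plain random-hyperplane LSH over real-valued feature vectors (approximating cosine similarity); it does not reproduce the homogeneous kernel feature map or the authors’ kernelized variant, and all names, dimensions, and parameters are illustrative assumptions.

```python
import numpy as np

def make_lsh_hasher(dim, n_bits, seed=0):
    """Random-hyperplane LSH: each bit is the sign of a random projection,
    so vectors with small cosine distance tend to share a hash key."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, dim))
    return lambda v: tuple(bool(b) for b in (planes @ v) > 0)

def build_index(features, hasher):
    """Group feature-vector indices into hash buckets."""
    index = {}
    for i, v in enumerate(features):
        index.setdefault(hasher(v), []).append(i)
    return index

# Illustrative data: 1000 random 64-dimensional feature vectors.
rng = np.random.default_rng(1)
feats = rng.standard_normal((1000, 64))
hasher = make_lsh_hasher(dim=64, n_bits=12)
index = build_index(feats, hasher)

# A query ranks only the candidates that fall into its own bucket,
# which is what makes the search fast in practice.
query = feats[0]
candidates = index.get(hasher(query), [])
```

Exact nearest-neighbor ranking is then applied only within `candidates`; accuracy versus speed is tuned by the number of bits and by querying several independent hash tables.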
Declarative Programming with Temporal Constraints, in the Language CG Sun, 29 Mar 2015 07:35:24 +0000 Specifying and interpreting temporal constraints are key elements of knowledge representation and reasoning, with applications in temporal databases, agent programming, and ambient intelligence. We present and formally characterize the language CG, which tackles this issue. In CG, users can develop time-dependent programs in a flexible and straightforward manner. Such programs can, in turn, be coupled with evolving environments, thus empowering users to control the environment’s evolution. CG relies on a structure for storing temporal information, together with a dedicated query mechanism, and we explore the computational complexity of the associated query satisfaction problem. We discuss previous implementation attempts of CG and introduce a novel prototype that relies on logic programming. Finally, we address the consistency and correctness of CG program execution using the Event-B modeling approach. Lorina Negreanu Copyright © 2015 Lorina Negreanu. All rights reserved. Benchmarking RCGAu on the Noiseless BBOB Testbed Sun, 29 Mar 2015 07:19:15 +0000 RCGAu is a hybrid real-coded genetic algorithm with a “uniform random direction” search mechanism, which enhances the local search capability of RCGA. In this paper, RCGAu was tested on the BBOB-2013 noiseless testbed using restarts until a maximum number of function evaluations (#FEs) of 10^5 × D was reached, where D is the dimension of the function search space. RCGAu was able to solve several test functions in the low search dimensions of 2 and 3 to the desired accuracy of 10^-8. Although RCGAu had difficulty obtaining solutions with the desired accuracy of 10^-8 for high-conditioning and multimodal functions within the specified maximum #FEs, it was able to solve most of the test functions with dimensions up to 40 at lower precisions. Babatunde A.
Sawyerr, Aderemi O. Adewumi, and M. Montaz Ali Copyright © 2015 Babatunde A. Sawyerr et al. All rights reserved. Developing R&D Portfolio Business Validity Simulation Model and System Sun, 29 Mar 2015 07:11:03 +0000 R&D has been recognized as a critical means for both companies and nations to gain competitiveness through value creation such as patents and new products. R&D is therefore a burden on decision makers, in that it is hard to decide how much money to invest, how much time to spend, and which technology to develop, since each choice consumes resources such as budget, time, and manpower. Although there is diverse research on R&D evaluation, business factors are not considered sufficiently, because almost all previous studies are technology-oriented evaluations based on a single R&D technology. Accordingly, we earlier proposed an R&D business-aspect evaluation model consisting of nine business model components. In this research, we develop a simulation model and system for evaluating a company’s or industry’s R&D portfolio from a business model point of view, and we clarify the default and control parameters, integrated into one screen, to facilitate the evaluator’s business validity work in each evaluation module. Hyun Jin Yeo and Kwang Hyuk Im Copyright © 2015 Hyun Jin Yeo and Kwang Hyuk Im. All rights reserved. Research and Development of Advanced Computing Technologies Thu, 26 Mar 2015 07:59:57 +0000 Shifei Ding, Zhongzhi Shi, and Ahmad Taher Azar Copyright © 2015 Shifei Ding et al. All rights reserved. Research and Application of Knowledge Resources Network for Product Innovation Wed, 25 Mar 2015 14:03:07 +0000 To enhance the knowledge service capabilities of a product innovation design service platform, a method is proposed for acquiring knowledge resources that support product innovation from the Internet and for actively pushing knowledge to users.
Through ontology-based knowledge modeling for product innovation, an integrated architecture for the knowledge resources network is put forward. The technology for acquiring network knowledge resources based on a focused crawler and web services is studied. Active knowledge push is provided to users through user behavior analysis and knowledge evaluation in order to increase users’ enthusiasm for participating in the platform. Finally, an application example is presented to demonstrate the effectiveness of the method. Chuan Li, Wen-qiang Li, Yan Li, Hui-zhen Na, and Qian Shi Copyright © 2015 Chuan Li et al. All rights reserved. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization Tue, 24 Mar 2015 11:14:05 +0000 We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired comparison-based mechanism naturally underlies the CE search scheme. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm.
The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and related matters are presented and discussed. Yan Pei Copyright © 2015 Yan Pei. All rights reserved. Unbiased Feature Selection in Learning Random Forests for High-Dimensional Data Tue, 24 Mar 2015 08:52:59 +0000 Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs poor accuracy on high-dimensional data. In addition, the feature selection process in RFs is biased in favor of multivalued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features when learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and a subset of unbiased features is then selected based on some statistical measures. This feature subset is partitioned into two subsets, and a feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach generates more accurate trees while reducing the dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forests in terms of accuracy and AUC. Thanh-Tung Nguyen, Joshua Zhexue Huang, and Thuy Thi Nguyen Copyright © 2015 Thanh-Tung Nguyen et al. All rights reserved.
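The feature-weighted sampling idea described in the xRF abstract can be sketched as follows. This is not the authors’ algorithm: the p-value assessment and statistical measures are replaced here by a crude standardized class-mean separation score, purely to illustrate weighted feature sampling for tree building, and all names and parameters are assumptions.

```python
import numpy as np

def feature_weights(X, y):
    """Score each feature by standardized class-mean separation
    (a stand-in for the paper's p-value assessment), then normalize
    the scores into a sampling distribution."""
    m0 = X[y == 0].mean(axis=0)
    m1 = X[y == 1].mean(axis=0)
    score = np.abs(m0 - m1) / (X.std(axis=0) + 1e-12)
    return score / score.sum()

def sample_features(weights, n_sub, rng):
    """Weighted sampling without replacement: informative features are
    offered to a tree far more often than uninformative ones."""
    return rng.choice(len(weights), size=n_sub, replace=False, p=weights)

# Illustrative data: 50 features, of which only 3 and 7 carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)

w = feature_weights(X, y)
subset = sample_features(w, n_sub=10, rng=rng)  # feature subset for one tree
```

Each tree in the forest would draw its own `subset`, so informative features appear in many trees while uninformative ones are rarely selected, which is the intuition behind the accuracy gains reported for high-dimensional data.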
Reverse Engineering of Free-Form Surface Based on the Closed-Loop Theory Tue, 24 Mar 2015 06:30:15 +0000 Seeking better measurement methods and more accurate reconstruction models has long been a focus of researchers in the field of reverse engineering. On this basis, a new method for adaptive measurement, real-time reconstruction, and online evaluation of free-form surfaces is presented in this paper. The coordinates and vectors of the prediction points are calculated according to a Bézier curve fitted to the measured points, so that the final distribution of the measured point cloud agrees with the geometric characteristics of the free-form surface. The point cloud is fitted to a surface model by the nonuniform B-spline method; check points are extracted from the surface model based on grids and surface features; the locations of these check points are verified on the surface with a CMM to evaluate the model; and the surface model is then updated to meet the required accuracy. Integrating measurement, reconstruction, and evaluation in a closed-loop reverse process establishes an accurate model. The results of an example show that the measuring points are distributed over the surface according to curvature and that the reconstructed model achieves micron-level accuracy, with measurement, reconstruction, and evaluation integrated into a closed-loop reverse system. Xue Ming He, Jun Fei He, Mei Ping Wu, Rong Zhang, and Xiao Gang Ji Copyright © 2015 Xue Ming He et al. All rights reserved. State of the Art of Fuzzy Methods for Gene Regulatory Networks Inference Mon, 23 Mar 2015 12:34:14 +0000 To address one of the most challenging issues at the cellular level, this paper surveys the fuzzy methods used in gene regulatory network (GRN) inference.
GRNs represent causal relationships between genes that have a direct influence, through protein production, on the life and development of living organisms; they contribute usefully to the understanding of cellular functions as well as the mechanisms of diseases. Fuzzy systems are built to handle imprecise knowledge, such as biological information, and provide viable computational tools for inferring GRNs from gene expression data, thus contributing to the discovery of the gene interactions responsible for specific diseases and of ad hoc corrective therapies. Increasing computational power and high-throughput technologies have provided powerful means to manage these challenging digital ecosystems at different levels, from the cell to society globally. The main aim of this paper is to report, present, and discuss the main contributions of this multidisciplinary field in a coherent and structured framework. Tuqyah Abdullah Al Qazlan, Aboubekeur Hamdi-Cherif, and Chafia Kara-Mohamed Copyright © 2015 Tuqyah Abdullah Al Qazlan et al. All rights reserved. Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks Mon, 23 Mar 2015 12:09:16 +0000 IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism of IEEE 802.11 DCF (distributed coordination function), random binary exponential backoff (BEB), is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval from [0, CW] uniformly at random. This randomness is incorporated to avoid collisions among the nodes, but a random backoff interval can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay.
In this paper, we propose an algorithm that schedules medium access in a fair and effective manner. The algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes contending nodes according to their queue lengths and waiting times. Each node computes its unique backoff interval using fuzzy logic based on input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that it achieves good performance. Radha Ranganathan and Kathiravan Kannan Copyright © 2015 Radha Ranganathan and Kathiravan Kannan. All rights reserved. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization Mon, 23 Mar 2015 09:58:08 +0000 A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is practically impossible to model, even with a hypothesis of what its definition might be. In this paper, we propose a method for establishing a human model in a projected high-dimensional search space by kernel classification to enhance IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is built on this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space; the model thereby predicts the human’s potential perceptual knowledge. With this human model, we design an evolution control method to enhance IEC search.
Experimental evaluations with a pseudo-IEC user show that the proposed model and method enhance IEC search significantly. Yan Pei, Qiangfu Zhao, and Yong Liu Copyright © 2015 Yan Pei et al. All rights reserved. A Double Herd Krill Based Algorithm for Location Area Optimization in Mobile Wireless Cellular Network Mon, 23 Mar 2015 09:47:19 +0000 In wireless communication systems, mobility tracking deals with locating a mobile subscriber (MS) within the area serviced by the wireless network. Tracking a mobile subscriber is governed by two fundamental components, location updating (LU) and paging. This paper presents a novel hybrid method using a krill herd algorithm designed to optimize the location area (LA) within the available spectrum such that the total network cost, comprising the location update (LU) cost and the paging cost, is minimized without compromise. Based on the various mobility patterns of users and the network architecture, the design of the location area is formulated as a combinatorial optimization problem. Numerical results indicate that the proposed model provides a more accurate update boundary in a real environment than that derived from a hexagonal cell configuration with a random walk movement pattern. The proposed model allows the network to maintain a better balance between the processing incurred due to location updates and the radio bandwidth utilized for paging between call arrivals. F. Vincylloyd and B. Anand Copyright © 2015 F. Vincylloyd and B. Anand. All rights reserved. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique Mon, 23 Mar 2015 09:43:02 +0000 This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment that takes into account the tie-line reliability of interconnected systems.
CBM is essential information used as a reference by load-serving entities (LSEs) to estimate a margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to determine CBM simultaneously for all areas of the interconnected system. The selection of CBM on the Pareto optimal front is performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) under various conditions. Eventually, the power-transfer-based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79, and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate CBM solution simultaneously for all areas. Muhammad Murtadha Othman, Nurulazmi Abd Rahman, Ismail Musirin, Mahmud Fotuhi-Firuzabad, and Abbas Rajabi-Ghahnavieh Copyright © 2015 Muhammad Murtadha Othman et al. All rights reserved. New Enhanced Artificial Bee Colony (JA-ABC5) Algorithm with Application for Reactive Power Optimization Mon, 23 Mar 2015 09:42:16 +0000 The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes that need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 that enhances convergence speed and improves the ability to reach the global optimum by balancing the exploration and exploitation processes. New stages are introduced early in the algorithm to increase the exploitation process.
In addition, modified mutation equations are introduced in the employed- and onlooker-bee phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and applied to the reactive power optimization problem. The results clearly show that the newly proposed algorithm outperforms the compared algorithms in terms of convergence speed and attainment of the global optimum. Noorazliza Sulaiman, Junita Mohamad-Saleh, and Abdul Ghani Abro Copyright © 2015 Noorazliza Sulaiman et al. All rights reserved. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift Mon, 23 Mar 2015 07:28:03 +0000 Handling large data streams in the presence of concept drift is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for a batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare the new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm, in both accuracy and runtime, in handling different types of concept drift.
Agustín Ortíz Díaz, José del Campo-Ávila, Gonzalo Ramos-Jiménez, Isvani Frías Blanco, Yailé Caballero Mota, Antonio Mustelier Hechavarría, and Rafael Morales-Bueno Copyright © 2015 Agustín Ortíz Díaz et al. All rights reserved. Negative Correlation Learning for Customer Churn Prediction: A Comparison Study Mon, 23 Mar 2015 07:12:52 +0000 Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known to service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that can predict customer churn effectively helps in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained using negative correlation learning (NCL) to predict customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble achieves better generalization performance (a higher churn identification rate) than an ensemble of MLPs without NCL (a flat ensemble) and other common data mining techniques used for churn analysis. Ali Rodan, Ayham Fayyoumi, Hossam Faris, Jamal Alsakran, and Omar Al-Kadi Copyright © 2015 Ali Rodan et al. All rights reserved. Unsupervised Spectral-Spatial Feature Selection-Based Camouflaged Object Detection Using VNIR Hyperspectral Camera Mon, 23 Mar 2015 06:18:05 +0000 The detection of camouflaged objects is important for industrial inspection, medical diagnoses, and military applications. Conventional supervised learning methods for hyperspectral images can be a feasible solution; such approaches, however, require a priori information about the camouflaged object and the background. This letter proposes a fully autonomous feature selection and camouflaged object detection method based on the online analysis of spectral and spatial features.
The statistical distance metric generates candidate feature bands, and further analysis of the entropy-based spatial grouping property trims the useless feature bands. Camouflaged objects can be detected better, with less computational complexity, by optical spectral-spatial feature analysis. Sungho Kim Copyright © 2015 Sungho Kim. All rights reserved. A Novel Clustering Algorithm Inspired by Membrane Computing Sun, 22 Mar 2015 13:09:19 +0000 P systems are a class of distributed parallel computing models. This paper presents a novel clustering algorithm, called the membrane clustering algorithm, inspired by the mechanism of a tissue-like P system with a loop structure of cells. The objects of the cells express the candidate cluster centers and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves their diversity in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or comparable to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature. Hong Peng, Xiaohui Luo, Zhisheng Gao, Jun Wang, and Zheng Pei Copyright © 2015 Hong Peng et al. All rights reserved. A Novel Psychovisual Threshold on Large DCT for Image Compression Sun, 22 Mar 2015 12:42:46 +0000 A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance to reduce the amount of encoded transform coefficients.
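To make the quantization-as-threshold idea concrete, here is a minimal sketch of DCT coefficient quantization: a naive DCT-II and a uniform quantization table. The uniform table is a placeholder assumption; the paper derives its tables from psychovisual thresholds, which are not reproduced here.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (illustrative, not optimized)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def quantize(coeffs, qtable):
    # Divide each coefficient by its quantization value and round: small
    # contributions at high frequency orders are thresholded to zero.
    return [[round(c / q) for c, q in zip(row, qrow)]
            for row, qrow in zip(coeffs, qtable)]
```

A flat image block keeps only its DC coefficient after quantization, which is exactly the coefficient-discarding effect the quantization threshold is designed to control.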
It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order is the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the much-needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images. The experimental results show that the large quantization tables derived from the psychovisual threshold produce output images that are largely free of artifacts. Moreover, the experiments show that the psychovisual threshold concept produces better image quality at higher compression rates than JPEG image compression. Nur Azman Abu and Ferda Ernawan Copyright © 2015 Nur Azman Abu and Ferda Ernawan. All rights reserved. Chaos Time Series Prediction Based on Membrane Optimization Algorithms Sun, 22 Mar 2015 12:39:39 +0000 This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction and of the least squares support vector machine (LS-SVM) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action.
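Phase space reconstruction, one of the two components the membrane algorithm tunes, is standard delay embedding. A minimal sketch follows; the embedding dimension and delay used here are arbitrary example values, and they are exactly the parameters such a model would optimize.

```python
def delay_embed(series, dim, tau):
    """Phase-space reconstruction by delay embedding: each reconstructed
    point is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + k * tau] for k in range(dim)) for t in range(n)]
```

The reconstructed points would then be fed to the regressor (here an LS-SVM) as input vectors, with the next sample of the series as the prediction target.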
Then, the model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). Meng Li, Liangzhong Yi, Zheng Pei, Zhisheng Gao, and Hong Peng Copyright © 2015 Meng Li et al. All rights reserved. Primary Path Reservation Using Enhanced Slot Assignment in TDMA for Session Admission Sun, 22 Mar 2015 12:39:33 +0000 A mobile ad hoc network (MANET) is a self-organized collection of nodes that communicate without any infrastructure. Providing quality of service (QoS) in such networks is a challenging task due to unreliable wireless links, mobility, lack of centralized coordination, and channel contention. The success of many real-time applications depends purely on QoS, which can be achieved by quality-aware routing (QAR) and admission control (AC). Recently proposed QoS mechanisms focus entirely on either reservation or admission control, and neither alone is sufficient. In MANETs, high mobility causes frequent path breaks, forcing the source node to rediscover the route each time; in such cases the QoS session is affected. To admit a QoS session, admission control protocols must ensure the bandwidth of the relaying path before transmission starts; reserving such bandwidth noticeably improves admission control performance. Many TDMA-based reservation mechanisms have been proposed, but their slot reservation procedures need improvement.
In order to overcome this specific issue, we propose a framework, PRAC (primary path reservation admission control protocol), which achieves improved QoS by combining a backup route with resource reservation. Simulations over a network topology show that our approach admits sessions effectively. Suresh Koneri Chandrasekaran, Prakash Savarimuthu, Priya Andi Elumalai, and Kathirvel Ayyaswamy Copyright © 2015 Suresh Koneri Chandrasekaran et al. All rights reserved. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis Sun, 22 Mar 2015 12:39:04 +0000 As is known, under some mild conditions the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper.
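The nondominated-sorting selection step used by MMEA-RA (and by NSGA-II) rests on Pareto dominance. A minimal sketch for minimization problems, extracting only the first front:

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """First Pareto front: the points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A full nondominated sort would peel fronts repeatedly (remove the first front, recompute, and so on), ranking the population for selection.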
Zhiming Song, Maocai Wang, Guangming Dai, and Massimiliano Vasile Copyright © 2015 Zhiming Song et al. All rights reserved. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing Sun, 22 Mar 2015 12:34:54 +0000 FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they can achieve, their reduced power consumption, and the ease and flexibility of a design process with fast iterations between consecutive versions are examples of the benefits of their use. However, some difficulties in using reconfigurable platforms as accelerators still need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are presented. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications while simplifying the development process. Julio Dondo Gazzano, Francisco Sanchez Molina, Fernando Rincon, and Juan Carlos López Copyright © 2015 Julio Dondo Gazzano et al. All rights reserved. An Approach to Model Based Testing of Multiagent Systems Sun, 22 Mar 2015 12:33:25 +0000 Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal.
Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems, and systematic, thorough testing of each interaction is necessary. This paper proposes a novel approach to testing multiagent systems based on Prometheus design artifacts. In the proposed approach, the different interactions between agents and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover the interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion. Shafiq Ur Rehman and Aamer Nadeem Copyright © 2015 Shafiq Ur Rehman and Aamer Nadeem. All rights reserved. Composition of Web Services Using Markov Decision Processes and Dynamic Programming Sun, 22 Mar 2015 12:28:44 +0000 We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes.
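The dynamic-programming machinery involved can be illustrated on a toy deterministic MDP. The state and action names below are invented for illustration; a real WSC model would have service-selection actions and QoS-derived rewards.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Value iteration on a small deterministic MDP: P[s][a] is the next
    state and R[s][a] the immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * V[P[s][a]] for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_policy(V, states, actions, P, R, gamma=0.9):
    # Once values converge, the optimal policy is greedy with respect to V.
    return {s: max(actions[s], key=lambda a: R[s][a] + gamma * V[P[s][a]])
            for s in states}
```

Policy iteration would instead alternate policy evaluation and improvement; both converge to the same optimal policy on a finite MDP.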
Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires selecting 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB of RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. Víctor Uc-Cetina, Francisco Moo-Mena, and Rafael Hernandez-Ucan Copyright © 2015 Víctor Uc-Cetina et al. All rights reserved. Ubiquitous Systems towards Green, Sustainable, and Secured Smart Environment Thu, 19 Mar 2015 08:09:36 +0000 Jong-Hyuk Park, Yi Pan, Han-Chieh Chao, and Neil Y. Yen Copyright © 2015 Jong-Hyuk Park et al. All rights reserved. Intelligent Topical Sentiment Analysis for the Classification of E-Learners and Their Topics of Interest Wed, 18 Mar 2015 10:22:36 +0000 Every day, huge numbers of instant tweets (messages) are published on Twitter, one of the principal social media for e-learner interaction. Learners and teachers discuss various topics of interest, which can be captured from suitable Twitter sources. The prevailing sentiment toward these topics emerges from the massive number of instant messages about them. In this paper, rather than using the opinion polarity of each message relevant to the topic, the authors focus on sentence-level opinion classification using the unsupervised algorithm named bigram item response theory (BIRT), which differs from traditional classification and document-level classification algorithms.
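The lexicon-based polarity and bigram steps of such a pipeline can be sketched as follows. The mini-lexicon is a placeholder assumption; real systems use resources such as SentiWordNet.

```python
# Hypothetical mini-lexicon (illustrative only).
LEXICON = {"good": 1, "great": 1, "useful": 1, "bad": -1, "boring": -1}

def tweet_polarity(text):
    """Sum word-level lexicon scores; the sign gives the tweet's polarity."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def bigrams(text):
    """Adjacent word pairs: the cooccurrence unit used by bigram models."""
    words = text.lower().split()
    return list(zip(words, words[1:]))
```

The bigram counts would then feed the cooccurrence and item-response stages; those statistical stages are not reproduced here.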
The investigation in this paper is threefold: lexicon-based sentiment polarity of tweet messages; the bigram cooccurrence relationship using naïve Bayes; and the bigram item response theory (BIRT) applied to various topics. A model based on item response theory is constructed for topical classification inference. Performance improves remarkably with this bigram item response theory compared with other supervised algorithms. Experiments were conducted on a real-life dataset containing different sets of tweets and topics. M. Ravichandran, G. Kulanthaivel, and T. Chellatamilan Copyright © 2015 M. Ravichandran et al. All rights reserved. Conceptual Framework for the Mapping of Management Process with Information Technology in a Business Process Thu, 12 Mar 2015 12:24:04 +0000 This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution but not necessarily an integration of all departments. Any business process can be classified as a management, operational, or supportive process. We examine the entire management process and identify the influencing components to be mapped onto technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components onto the ERP is clearly explained. We also suggest that implementing this framework may reduce ERP failures and, in particular, rectify ERP misfit. Vetrickarthick Rajarathinam, Swarnalatha Chellappa, and Asha Nagarajan Copyright © 2015 Vetrickarthick Rajarathinam et al. All rights reserved.
Moving Object Detection for Video Surveillance Wed, 11 Mar 2015 08:57:08 +0000 Video surveillance is a promising solution for people living independently in their homes, and several contributions to it have recently been proposed. However, robust video surveillance remains a challenging task because of illumination changes, rapid variations in target appearance, similar nontarget objects in the background, and occlusions. In this paper, a novel approach to object detection for video surveillance is presented. The proposed algorithm consists of several steps, including video compression, object detection, and object localization. In video compression, the input video frames are compressed with the help of the two-dimensional discrete cosine transform (2D DCT) to reduce storage requirements. In object detection, key feature points are detected by computing the statistical correlation, and the matching feature points are classified into foreground and background based on the Bayesian rule. Finally, the foreground feature points are localized in successive video frames by embedding the maximum likelihood feature points over the input video frames. Various frame-based surveillance metrics are employed to evaluate the proposed approach. Experimental results and a comparative study clearly depict the effectiveness of the proposed approach. K. Kalirajan and M. Sudha Copyright © 2015 K. Kalirajan and M. Sudha. All rights reserved. Mobility Based Key Management Technique for Multicast Security in Mobile Ad Hoc Networks Thu, 05 Mar 2015 09:28:11 +0000 In MANET multicasting, maintaining forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes a large message overhead, which increases energy consumption and end-to-end delay. In particular, prevailing group key management techniques cope poorly with frequent mobility and disconnections.
So there is a need to design a multicast key management technique that overcomes these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that every weak node has a strong parent node. A session key-based encryption technique is utilized to transmit multicast data. The rekeying process is performed periodically by the initiator node, with the rekeying interval fixed depending on the node category, so the technique greatly minimizes the rekeying overhead. Simulation results show that our proposed approach reduces the packet drop rate and improves data confidentiality. B. Madhusudhanan, S. Chitra, and C. Rajan Copyright © 2015 B. Madhusudhanan et al. All rights reserved. Abstract Computation in Schizophrenia Detection through Artificial Neural Network Based Systems Thu, 05 Mar 2015 08:17:41 +0000 Schizophrenia is a long-lasting state of mental uncertainty that may break down the relations among behavior, thought, and emotion; that is, it may lead to unreliable perception, unsuitable actions and feelings, and a sense of mental fragmentation. Indeed, its diagnosis is made over a long period of time: continuous signs of the disturbance must persist for at least 6 (six) months. Once detected, the psychiatric diagnosis is made through a clinical interview and a series of psychic tests, aimed mainly at ruling out other mental states or diseases. Undeniably, the main problem in identifying schizophrenia is the difficulty of distinguishing its symptoms from those associated with other disorders or conditions.
Therefore, this work focuses on the development of a diagnostic support system, in terms of its knowledge representation and reasoning procedures, based on a blend of Logic Programming and Artificial Neural Network approaches to computing, taking advantage of a novel approach to knowledge representation and reasoning aimed at solving the problems associated with handling (i.e., representing and reasoning about) defective information. L. Cardoso, F. Marins, R. Magalhães, N. Marins, T. Oliveira, H. Vicente, A. Abelha, J. Machado, and J. Neves Copyright © 2015 L. Cardoso et al. All rights reserved. HT-Paxos: High Throughput State-Machine Replication Protocol for Large Clustered Data Centers Wed, 04 Mar 2015 13:47:51 +0000 Paxos is a prominent approach to state-machine replication. Recent data-intensive systems that implement state-machine replication generally require high throughput. Earlier variants of Paxos, such as classical Paxos, Fast Paxos, and Generalized Paxos, focus mainly on fault tolerance and latency but are lacking in throughput and scalability. A major reason for this is the heavyweight leader; by offloading the leader, we can further increase the throughput of the system. Ring Paxos, Multi-Ring Paxos, and S-Paxos are prominent attempts in this direction for clustered data centers. In this paper, we propose HT-Paxos, a variant of Paxos that is well suited to large clustered data centers. HT-Paxos offloads the leader much more significantly and hence increases the throughput and scalability of the system, while at the same time providing reasonably low latency and response time among high-throughput state-machine replication protocols. Vinit Kumar and Ajay Agarwal Copyright © 2015 Vinit Kumar and Ajay Agarwal. All rights reserved.
A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects Wed, 04 Mar 2015 13:35:10 +0000 Environmental effects are not considered sufficiently in product design, and the reliability problems they cause are very prominent. This paper proposes a method to apply an ontology approach in product design so that environmental effects knowledge can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology describing the domain knowledge of environmental effects is designed, and the related concepts are formally defined using the ontology approach. This model can be applied to organize environmental effects knowledge for different environments. Finally, rubber seals used in a subhumid acid rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis. Bo Sun, Yu Li, Tianyuan Ye, and Yi Ren Copyright © 2015 Bo Sun et al. All rights reserved. A Seed-Based Plant Propagation Algorithm: The Feeding Station Model Mon, 02 Mar 2015 09:24:06 +0000 The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant: they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well-established algorithms, and the results are included.
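Viewing the feeding station as a queue suggests textbook queueing quantities. As an illustration only (the paper's exact queuing model is not specified in this summary), the steady-state M/M/1 formulas are:

```python
def mm1_stats(arrival_rate, service_rate):
    """Steady-state M/M/1 quantities: utilization rho = lambda/mu, mean
    number in system L = rho / (1 - rho), and mean time in system
    W = L / lambda (Little's law). Requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)
    W = L / arrival_rate
    return rho, L, W
```

For example, agents arriving at rate 2 and served at rate 4 give a half-utilized station with on average one agent present.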
Muhammad Sulaiman and Abdellah Salhi Copyright © 2015 Muhammad Sulaiman and Abdellah Salhi. All rights reserved. Using Shadow Page Cache to Improve Isolated Drivers Performance Sat, 28 Feb 2015 10:47:59 +0000 Thanks to the reusability afforded by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine to customize their application environment. To prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining write operation capture with a driver's private access control table. However, this method needs to keep the write permission of the shadow page table read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the driver's performance. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without overly impacting Chariot's reliability. Hao Zheng, Xiaoshe Dong, Endong Wang, Baoke Chen, Zhengdong Zhu, and Chengzhe Liu Copyright © 2015 Hao Zheng et al. All rights reserved. An Incremental High-Utility Mining Algorithm with Transaction Insertion Wed, 25 Feb 2015 09:07:56 +0000 Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets.
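High-utility mining, taken up in the next abstract, replaces this frequency-only view with a utility measure that weighs purchased quantity by unit profit. A minimal sketch of that measure (transaction layout and item names are assumptions for illustration):

```python
def itemset_utility(transactions, profit, itemset):
    """Utility of an itemset: over the transactions containing all of its
    items, sum quantity * unit profit. This is the quantity-and-profit
    measure high-utility mining maximizes, instead of raw frequency."""
    total = 0
    for t in transactions:            # each t maps item -> purchased quantity
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total
```

A high-utility mining algorithm then searches for all itemsets whose utility exceeds a user-given minimum, which is harder than frequent-itemset mining because utility is not antimonotone.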
Traditional association-rule mining is, however, not suitable for real-world applications, since the items a customer purchases carry additional factors, such as profit and quantity. High-utility mining was designed to address this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases; few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of pattern-growth mechanisms. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computation without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computation. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. Jerry Chun-Wei Lin, Wensheng Gan, Tzung-Pei Hong, and Binbin Zhang Copyright © 2015 Jerry Chun-Wei Lin et al. All rights reserved. An Energy-Efficient Cluster-Based Vehicle Detection on Road Network Using Intention Numeration Method Sun, 22 Feb 2015 06:51:46 +0000 Traffic in road networks is increasing progressively. Good knowledge of network traffic can minimize congestion using information about the road network obtained from communal callers, pavement detectors, and so on. With these methods, low-featured information is generated with respect to users in the road network. Although existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between the overhead and the quality of the routing protocol, which poses a great challenge.
Thus, an energy-efficient cluster-based vehicle detection method for road networks using intention numeration (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission, using a digital signature tree. The experimental performance is evaluated on the Dodgers loop sensor data set from the UCI repository, and the proposed method outperforms existing work on energy consumption, clustering efficiency, and node drain rate. Deepa Devasenapathy and Kathiravan Kannan Copyright © 2015 Deepa Devasenapathy and Kathiravan Kannan. All rights reserved. Power, Control, and Optimization Thu, 19 Feb 2015 08:30:01 +0000 Pandian Vasant, Gerhard-Wilhelm Weber, Nader Barsoum, and Vo Ngoc Dieu Copyright © 2015 Pandian Vasant et al. All rights reserved. Geometric Data Perturbation-Based Personal Health Record Transactions in Cloud Computing Thu, 12 Feb 2015 13:43:04 +0000 Cloud computing is a new delivery model for information technology services that typically involves the provision of dynamically scalable and often virtualized resources over the Internet. However, cloud computing raises concerns about how cloud service providers, user organizations, and governments should handle such information and interactions. Personal health records represent an emerging patient-centric model for health information exchange, and they are outsourced for storage to third parties, such as cloud providers. With these records, each patient must encrypt their own personal health data before uploading them to cloud servers. Current techniques for encryption primarily rely on conventional cryptographic approaches.
However, key management issues remain largely unsolved with these cryptographic encryption techniques. We propose that personal health record transactions be managed using geometric data perturbation in cloud computing. In our proposed scheme, the personal health record database is perturbed using geometric data perturbation and outsourced to the Amazon EC2 cloud. S. Balasubramaniam and V. Kavitha Copyright © 2015 S. Balasubramaniam and V. Kavitha. All rights reserved. Ensemble Classifier for Epileptic Seizure Detection for Imperfect EEG Data Wed, 04 Feb 2015 14:33:09 +0000 Brain status information is captured by physiological electroencephalogram (EEG) signals, which are extensively used to study different brain activities. This study investigates the use of a new ensemble classifier to detect an epileptic seizure from compressed and noisy EEG signals. This noise-aware signal combination (NSC) ensemble classifier combines four classification models based on their individual performance. The main objective of the proposed classifier is to enhance classification accuracy in the presence of noisy and incomplete information while preserving a reasonable amount of complexity. The experimental results show the effectiveness of the NSC technique, which yields a higher accuracy of 90% for noiseless data compared with 85%, 85.9%, and 89.5% in other experiments. The accuracy of the proposed method is 80%, 84%, and 88% at SNRs of  dB,  dB, and  dB, respectively, while the compression ratio (CR) is 85.35% for all of the datasets mentioned. Khalid Abualsaud, Massudi Mahmuddin, Mohammad Saleh, and Amr Mohamed Copyright © 2015 Khalid Abualsaud et al. All rights reserved. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks Wed, 04 Feb 2015 11:17:43 +0000 Data storage and its rapid growth have become a strategic concern in the world of networking.
In a wireless sensor network, data storage mainly involves producer sensor nodes, base stations, and consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding optimal data storage positions in wireless sensor networks. Earlier work did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes: a hybrid particle swarm optimization algorithm finds suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is realized by solving the clustering problem with the fuzzy C-means algorithm. This research also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. Ranganathan Mohanasundaram and Pappampalayam Sanmugam Periasamy Copyright © 2015 Ranganathan Mohanasundaram and Pappampalayam Sanmugam Periasamy. All rights reserved. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg Tue, 03 Feb 2015 06:28:49 +0000 Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data.
The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system. Hong Zhao, Chun-long Zhou, and Bao-zhao Jin Copyright © 2015 Hong Zhao et al. All rights reserved. CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU Mon, 02 Feb 2015 11:15:59 +0000 Ultra-high thread-level parallelism in modern GPUs usually generates numerous memory requests simultaneously, so plenty of memory requests are always waiting at each bank of the shared LLC (L2 in this paper) and global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but little work has focused on the service sequence at the shared LLC. We observed that requests in many GPU applications queue at the LLC banks for service, which provides an opportunity to optimize the service order at the LLC: by adjusting the GPU memory request service order, we can improve the schedulability of the SMs. We therefore propose a critical-aware shared LLC request scheduling algorithm (CaLRS) in this paper. How the priority of a memory request is represented is central to CaLRS: we use the number of memory requests that originate from the same warp but have not yet been serviced when they arrive at the shared LLC bank to represent the criticality of each warp. Experiments show that the proposed scheme can effectively boost SM schedulability by promoting the scheduling priority of memory requests with high criticality, indirectly improving GPU performance. Jianliang Ma, Jinglei Meng, Tianzhou Chen, and Minghui Wu Copyright © 2015 Jianliang Ma et al. All rights reserved. Workflow Modelling and Analysis Based on the Construction of Task Models Thu, 29 Jan 2015 12:47:28 +0000 In this paper we describe the structure of a workflow as a graph whose vertices represent tasks and whose arcs are associated with workflow transitions.
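Such a task graph can be sketched directly, with a simple reachability test standing in for the logical-termination property the abstract goes on to introduce (the real analysis also accounts for input/output logic operators and Boolean terms on transitions):

```python
def terminates(arcs, start, end):
    """Toy check on a workflow graph (vertices = tasks, arcs = transitions):
    every task reachable from `start` can still reach `end`. A simplified
    stand-in for logical termination, not the paper's full construction."""
    succ = {}
    for a, b in arcs:
        succ.setdefault(a, set()).add(b)

    def reach(src):
        seen, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(succ.get(v, ()))
        return seen

    return all(end in reach(v) for v in reach(start))

arcs = [("start", "a"), ("start", "b"), ("a", "end"), ("b", "end")]
assert terminates(arcs, "start", "end")
# A task with no route to the end vertex breaks the property:
assert not terminates(arcs + [("a", "dead")], "start", "end")
```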
An input/output logic operator is associated with each task, and a Boolean term is associated with each transition present in the workflow. We also identify the structure of workflows and describe their dynamism through the construction of new task models. This construction is simple and intuitive, since it is based on the analysis of all tasks present in the workflow, which allows us to describe the dynamism of the workflow very easily. We also introduce the concept of logical termination of workflows and provide conditions under which this property holds. Finally, we provide a counterexample showing that a conjecture presented in a previous article is false. Glória Cravo Copyright © 2015 Glória Cravo. All rights reserved. Constructing RBAC Based Security Model in u-Healthcare Service Platform Tue, 27 Jan 2015 08:57:03 +0000 In today's aging society, people want to manage their personal health care themselves in everyday life. In particular, the evolution of medical and IT convergence technology and mobile smart devices has made it possible for people to gather information on their health status anytime and anywhere using biometric information acquisition devices. Healthcare information systems can contribute to improving the nation's healthcare quality and reducing related costs. However, there are no perfect security models or mechanisms for healthcare service applications, and private information can therefore be leaked. In this paper, we examine security requirements related to privacy protection in u-healthcare services and propose an extended RBAC based security model. We propose and design a u-healthcare service integration platform (u-HCSIP) applying the RBAC security model.
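The core-RBAC check underlying such a model can be sketched as below; the role and permission names are hypothetical, and an extended healthcare model like u-HCSIP's would add constraints on top of this:

```python
class RBAC:
    """Minimal core-RBAC sketch: users are assigned roles, roles are
    granted permissions, and an access check walks
    user -> roles -> permissions."""

    def __init__(self):
        self.user_roles = {}
        self.role_perms = {}

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def check(self, user, perm):
        # Access is allowed if any of the user's roles carries the permission.
        return any(perm in self.role_perms.get(role, ())
                   for role in self.user_roles.get(user, ()))

acl = RBAC()
acl.grant("doctor", "write_phr")
acl.grant("patient", "read_own_phr")
acl.assign_role("alice", "doctor")
assert acl.check("alice", "write_phr")
assert not acl.check("alice", "read_own_phr")
assert not acl.check("bob", "write_phr")   # unknown users get nothing
```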
The proposed u-HCSIP performs four main functions: storing and exchanging personal health records (PHR), recommending meals and exercise, buying/selling private health information or experience, and managing personal health data using smart devices. Moon Sun Shin, Heung Seok Jeon, Yong Wan Ju, Bum Ju Lee, and Seon-Phil Jeong Copyright © 2015 Moon Sun Shin et al. All rights reserved. Twin-Schnorr: A Security Upgrade for the Schnorr Identity-Based Identification Scheme Tue, 27 Jan 2015 07:58:09 +0000 Most identity-based identification (IBI) schemes proposed in recent literature are built using pairing operations. This decreases efficiency due to the high operation costs of pairings. Furthermore, most of these IBI schemes are proven to be secure against impersonation under active and concurrent attacks using interactive assumptions such as the one-more RSA inversion assumption or the one-more discrete logarithm assumption, translating to weaker security guarantees due to the interactive nature of these assumptions. The Schnorr-IBI scheme was first proposed through the Kurosawa-Heng transformation from the Schnorr signature. It remains one of the fastest yet most secure IBI schemes under impersonation against passive attacks due to its pairing-free design. However, when required to be secure against impersonators under active and concurrent attacks, it deteriorates greatly in terms of efficiency due to the protocol having to be repeated multiple times. In this paper, we upgrade the Schnorr-IBI scheme to be secure against impersonation under active and concurrent attacks using only the classical discrete logarithm assumption. This translates to a higher degree of security guarantee with only some minor increments in operational costs. Furthermore, because the scheme operates without pairings, it still retains its efficiency and superiority when compared to other pairing-based IBI schemes. 
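The basic Schnorr identification protocol that Schnorr-IBI (and its twin variant) builds on works entirely with modular exponentiation, which is why it needs no pairings. A toy run over a tiny prime-order subgroup (p = 23, q = 11, g = 2 of order q mod p; real schemes use roughly 256-bit group orders):

```python
import random

p, q, g = 23, 11, 2                    # illustrative toy group parameters

def keygen(rng):
    x = rng.randrange(1, q)            # prover's secret key
    return x, pow(g, x, p)             # public key y = g^x mod p

def commit(rng):
    k = rng.randrange(1, q)
    return k, pow(g, k, p)             # commitment t = g^k mod p

def respond(x, k, c):
    return (k + c * x) % q             # response s = k + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), since g^(k+cx) = g^k * (g^x)^c.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

rng = random.Random(7)
x, y = keygen(rng)
k, t = commit(rng)
c = rng.randrange(1, q)                # verifier's random challenge
s = respond(x, k, c)
assert verify(y, t, c, s)              # honest prover is accepted
assert not verify(y, t, c, (s + 1) % q)  # a wrong response is rejected
```

This is the classic single-key protocol, not the twin-key upgrade the paper constructs; it is included only to make the pairing-free design concrete.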
Ji-Jian Chin, Syh-Yuan Tan, Swee-Huay Heng, and Raphael Chung-Wei Phan Copyright © 2015 Ji-Jian Chin et al. All rights reserved. Temporary Redundant Transmission Mechanism for SCTP Multihomed Hosts Sun, 18 Jan 2015 13:06:01 +0000 In SCTP's Concurrent Multipath Transfer, if data is sent to the destined IP(s) without knowledge of the path conditions, packets may be lost or delayed because of the bursty nature of IP traffic and physical damage to the network. To offset these problems, network path status is examined using our new mechanism, Multipath State Aware Concurrent Multipath Transfer using redundant transmission (MSACMT-RTv2). Here the status of multiple paths is analyzed initially and periodically thereafter. After examination, path priorities are assigned before transmission. One path is temporarily employed as a redundant path for the failure-expected path (FEP); this redundant path is used for transmitting redundant data. At the end of a predefined period, the reliability of the FEP is confirmed, and if the FEP is found to be reliable, the temporary path is transformed into a normal CMT path. The MSACMT-RTv2 algorithm is simulated using the Delaware University ns-2 SCTP/CMT module (ns-2; V2.29). We present and discuss MSACMT-RTv2 performance under asymmetric path delay and with finite receiver buffer (rbuf) size. We extended our experiments to test the robustness of this algorithm and obtained exhaustive results, which show that our algorithm outperforms the existing system in terms of increased throughput and reduced latency. D. Mohana Geetha, S. K. Muthusundar, M. Subramaniam, and Kathirvel Ayyaswamy Copyright © 2015 D. Mohana Geetha et al. All rights reserved. A Novel Cost Based Model for Energy Consumption in Cloud Computing Thu, 15 Jan 2015 14:03:22 +0000 Cloud data centers consume enormous amounts of electrical energy.
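The path-ranking and redundant-pairing idea in the SCTP mechanism above can be sketched as follows; the loss estimates and the pairing rule (best path doubles as the temporary redundant path for the worst) are illustrative assumptions, not MSACMT-RTv2 itself:

```python
def assign_paths(loss_est):
    """Rank paths by their estimated loss rate (from the periodic path
    status probes), mark the worst as the failure-expected path (FEP),
    and temporarily pair it with the most reliable path as a redundant
    path for duplicate transmission."""
    ranked = sorted(loss_est, key=loss_est.get)   # most reliable first
    fep, redundant = ranked[-1], ranked[0]
    # With duplication, a chunk is lost only if BOTH copies are lost:
    p_deliver = 1 - loss_est[fep] * loss_est[redundant]
    return ranked, fep, redundant, p_deliver

ranked, fep, red, p = assign_paths({"A": 0.01, "B": 0.30, "C": 0.05})
assert ranked == ["A", "C", "B"]
assert (fep, red) == ("B", "A")
assert abs(p - (1 - 0.30 * 0.01)) < 1e-12   # duplication lifts delivery to 99.7%
```

After the predefined probation period, the FEP either regains normal CMT status or keeps its redundant escort, mirroring the temporary nature of the redundancy.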
To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model takes into account cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. A. Horri and Gh. Dastghaibyfard Copyright © 2015 A. Horri and Gh. Dastghaibyfard. All rights reserved. Exploiting Semantic Annotations and -Learning for Constructing an Efficient Hierarchy/Graph Texts Organization Thu, 01 Jan 2015 09:34:27 +0000 Tremendous growth in the number of textual documents has created a daily need for effective tools to explore, analyze, and discover knowledge from them. Conventional text mining and management systems mainly use the presence or absence of keywords to discover and analyze useful information in textual documents. However, simple word counts and frequency distributions of term appearances do not capture the meaning behind the words, which limits the ability to mine the texts.
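The quantum-dependent energy behavior described in the cloud energy paper above can be illustrated with a toy time-shared schedule; the flat per-context-switch penalty stands in for the cache-interference cost (there, a function of data size), and the full-load power model is an illustrative assumption, not the paper's calibrated CloudSim model:

```python
def time_shared_energy(vm_bursts, quantum, p_busy, switch_cost):
    """Round-robin (time-shared) schedule over per-VM CPU bursts.
    Energy = busy time at full-load power plus a penalty per context
    switch (a stand-in for cache-interference cost)."""
    remaining = list(vm_bursts)
    busy, switches = 0.0, 0
    while any(r > 0 for r in remaining):
        for i in range(len(remaining)):
            if remaining[i] > 0:
                s = min(quantum, remaining[i])
                remaining[i] -= s
                busy += s
                switches += 1
    return p_busy * busy + switch_cost * switches

# A smaller quantum means more context switches and hence more energy,
# one facet of the energy/QoS tradeoff:
assert time_shared_energy([3, 2], 2, 100.0, 5.0) == 515.0
assert time_shared_energy([3, 2], 1, 100.0, 5.0) == 525.0
```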
This paper proposes an efficient methodology for constructing a hierarchy/graph-based text organization and representation scheme based on semantic annotation and -learning. The methodology uses semantic notions to represent the text in documents, to infer unknown dependencies and relationships among concepts in a text, to measure the relatedness between text documents, and to apply mining processes using this representation and relatedness measure. The representation scheme reflects the existing relationships among concepts and facilitates accurate relatedness measurements, which results in better mining performance. An extensive experimental evaluation conducted on real datasets from various domains indicates the value of the proposed approach. Asmaa M. El-Said, Ali I. Eldesoky, and Hesham A. Arafat Copyright © 2015 Asmaa M. El-Said et al. All rights reserved. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora Wed, 31 Dec 2014 07:32:48 +0000 Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into account their suitability for Arabic corpora.
While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance differed substantially with respect to support for the Arabic language and N-gram profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help developers of the evaluated systems to enhance them and will provide developers of new corpora processing systems with a reference framework. Abdulmohsen Al-Thubaity, Hend Al-Khalifa, Reem Alqifari, and Manal Almazrua Copyright © 2014 Abdulmohsen Al-Thubaity et al. All rights reserved. Computational Intelligence and Metaheuristic Algorithms with Applications Wed, 31 Dec 2014 07:25:02 +0000 Xin-She Yang, Su Fong Chien, and Tiew On Ting Copyright © 2014 Xin-She Yang et al. All rights reserved. Erratum to “A Network and Visual Quality Aware N-Screen Content Recommender System Using Joint Matrix Factorization” Mon, 29 Dec 2014 00:10:37 +0000 Farman Ullah, Ghulam Sarwar, and Sungchang Lee Copyright © 2014 Farman Ullah et al. All rights reserved. Recent Advances on Internet of Things Mon, 22 Dec 2014 10:47:08 +0000 Xiaoxuan Meng, Jaime Lloret, Xudong Zhu, and Zhongmei Zhou Copyright © 2014 Xiaoxuan Meng et al. All rights reserved. Development of Robust Behaviour Recognition for an at-Home Biomonitoring Robot with Assistance of Subject Localization and Enhanced Visual Tracking Sun, 21 Dec 2014 09:48:47 +0000 Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need. The main task of the robot is to detect and track a designated subject while recognizing his/her activity for analysis and to provide warnings in an emergency.
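The N-gram profile generation evaluated as a functionality criterion in the corpora-processing study above is itself a small computation; a character-level sketch (word-level profiles would split on whitespace first):

```python
from collections import Counter

def ngram_profile(text, n=2):
    """Character n-gram frequency profile of a text. Works on any
    Unicode string, including Arabic script."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

profile = ngram_profile("corpora", n=2)
assert profile["or"] == 2                        # "or" occurs twice
assert sum(profile.values()) == len("corpora") - 1
```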
In order to push the system towards real application, in this study we tested the robustness of the robot system against several major environment changes, control parameter changes, and subject variations. First, an improved color tracker was analyzed to find the limitations and constraints of robot visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, since the main objective of the system is to recognize different walking patterns for further analysis, we propose a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy. Nevrez Imamoglu, Enrique Dorronzoro, Zhixuan Wei, Huangjun Shi, Masashi Sekine, José González, Dongyun Gu, Weidong Chen, and Wenwei Yu Copyright © 2014 Nevrez Imamoglu et al. All rights reserved. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT Sun, 21 Dec 2014 08:17:18 +0000 Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth requirements. In this paper, a robust video multiple-watermarking technique based on image interlacing is proposed to solve this problem.
In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extraction domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of the technique is tested by applying different types of attacks, such as geometric, noise, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth. Mohamed M. Ibrahim, Neamat S. Abdel Kader, and M. Zorkany Copyright © 2014 Mohamed M. Ibrahim et al. All rights reserved. Robot Trajectories Comparison: A Statistical Approach Tue, 25 Nov 2014 13:02:43 +0000 The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph, which helps to better understand the obtained results, is provided. As an example, the proposed method has been applied to compare two different motion planners, and WaveFront, using different environments, robots, and local planners. A. Ansuategui, A. Arruti, L. Susperregi, Y. Yurramendi, E. Jauregi, E. Lazkano, and B. Sierra Copyright © 2014 A. Ansuategui et al. All rights reserved.
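The Arnold transform used for watermark encryption in the DWT watermarking paper above is a simple invertible pixel permutation; a sketch for a square watermark image (pixel values here are placeholders for real image data):

```python
def arnold(img, rounds=1):
    """Arnold (cat-map) scrambling of an N x N image: each round sends
    pixel (x, y) to ((x + y) mod N, (x + 2y) mod N). The map is periodic
    (period 3 for N = 4), so enough rounds would restore the image."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_inverse(img, rounds=1):
    """Exact inverse: (x, y) goes back to ((2x - y) mod N, (y - x) mod N)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = img[x][y]
        img = out
    return img

watermark = [[4 * i + j for j in range(4)] for i in range(4)]
scrambled = arnold(watermark, rounds=1)
assert scrambled != watermark
assert arnold_inverse(scrambled, rounds=1) == watermark
```

The round count acts as the decryption key: only a receiver who knows it can unscramble the extracted watermark.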
Critical Product Features’ Identification Using an Opinion Analyzer Mon, 24 Nov 2014 00:00:00 +0000 The increasing use and ubiquity of the Internet facilitate the dissemination of word-of-mouth through blogs, online forums, newsgroups, and consumers’ reviews. Online consumers’ reviews present tremendous opportunities and challenges for consumers and marketers. One of the challenges is to develop interactive marketing practices for making connections with target consumers that capitalize on consumer-to-consumer communications for generating product adoption. Opinion mining is employed in marketing to help consumers and enterprises analyze online consumers’ reviews by highlighting the strengths and weaknesses of products. This paper describes an opinion mining system based on novel review- and feature-ranking methods to empower consumers and enterprises to identify critical product features from enormous numbers of consumers’ reviews. Consumers and business analysts, who want to explore consumers’ feedback when determining purchase decisions and enterprise strategies, are the main target group for the proposed system. We evaluate the proposed system on a real dataset. Results show that the integration of review- and feature-ranking methods improves decision making processes significantly. Azra Shamim, Vimala Balakrishnan, Muhammad Tahir, and Muhammad Shiraz Copyright © 2014 Azra Shamim et al. All rights reserved. Development and Application of New Quality Model for Software Projects Sun, 16 Nov 2014 06:46:01 +0000 The IT industry employs a number of models to identify defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase quality without increasing cost and time. The computation time, cost, and effort required to predict residual defects are very high; this is overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM).
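The feature-ranking step in the opinion analyzer above can be sketched with a bag-of-words pass; the sentiment word lists and the net-score rule are illustrative assumptions, not the paper's ranking method:

```python
from collections import defaultdict

def rank_features(sentences, features, pos_words, neg_words):
    """Score each product feature by the net sentiment of the review
    sentences mentioning it, then rank features by that score."""
    score = defaultdict(int)
    for sentence in sentences:
        words = set(sentence.lower().split())
        for feature in features:
            if feature in words:
                score[feature] += len(words & pos_words) - len(words & neg_words)
    return sorted(features, key=lambda f: score[f], reverse=True)

reviews = ["The battery is great", "The battery is great and cheap",
           "The screen is poor"]
ranking = rank_features(reviews, ["battery", "screen"],
                        {"great", "cheap"}, {"poor"})
assert ranking == ["battery", "screen"]   # battery: +3, screen: -1
```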
The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted, and its application to software projects is explored. The implementation of the model is validated using statistical inference, which shows a significant improvement in the quality of the software projects. K. Karnavel and R. Dillibabu Copyright © 2014 K. Karnavel and R. Dillibabu. All rights reserved. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image Wed, 12 Nov 2014 09:38:17 +0000 Invoice printing uses only two colors, so the invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images with a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixels evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixels evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. Li Li, Qingzheng Hou, Jianfeng Lu, Qishuai Xu, Junping Dai, Xiaoyang Mao, and Chin-Chen Chang Copyright © 2014 Li Li et al. All rights reserved.
A Green Strategy for Federated and Heterogeneous Clouds with Communicating Workloads Tue, 11 Nov 2014 09:23:10 +0000 Providers of cloud environments must tackle the challenge of configuring their systems to provide maximal performance while minimizing the cost of the resources used. At the same time, however, they must guarantee an SLA (service-level agreement) to the users. The SLA is usually associated with a certain level of QoS (quality of service); as response time is perhaps the most widely used QoS metric, it is the one chosen in this work. This paper presents a green strategy (GS) model for heterogeneous cloud systems. We provide a solution for heterogeneous job-communicating tasks and heterogeneous VMs that make up the nodes of the cloud. In addition to guaranteeing the SLA, the main goal is to optimize energy savings. The solution results in an equation that must be solved by a solver with nonlinear capabilities. The results obtained from modelling the policies to be executed by a solver demonstrate the applicability of our proposal for saving energy and guaranteeing the SLA. Jordi Mateo, Jordi Vilaplana, Lluis M. Plà, Josep Ll. Lérida, and Francesc Solsona Copyright © 2014 Jordi Mateo et al. All rights reserved. The Approach for Action Recognition Based on the Reconstructed Phase Spaces Mon, 10 Nov 2014 06:28:53 +0000 This paper presents a novel method of human action recognition based on the reconstructed phase space. Firstly, the human body is divided into 15 key points, whose trajectories represent the human body behavior, and a modified particle filter is used to track these key points under self-occlusion. Secondly, we reconstruct the phase spaces to extract more useful information from the human action trajectories. Finally, we apply a semisupervised probability model and a Bayesian classification method for classification.
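The phase-space reconstruction step is typically a time-delay embedding; a minimal sketch (the embedding dimension and delay are tuning parameters, and the paper's method then classifies such reconstructed trajectories):

```python
def delay_embed(series, dim, tau):
    """Time-delay (Takens) embedding: map a 1-D trajectory signal into
    points (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) of a reconstructed
    phase space."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + i * tau] for i in range(dim)) for t in range(n)]

# A 6-sample signal embedded with dim=3, tau=2 yields two phase-space points:
assert delay_embed([0, 1, 2, 3, 4, 5], dim=3, tau=2) == [(0, 2, 4), (1, 3, 5)]
```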
Experiments are performed on the Weizmann, KTH, UCF sports, and our own action datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method was more effective than the compared methods. Hong-bin Tu and Li-min Xia Copyright © 2014 Hong-bin Tu and Li-min Xia. All rights reserved. Integrating SOMs and a Bayesian Classifier for Segmenting Diseased Plants in Uncontrolled Environments Tue, 04 Nov 2014 13:43:28 +0000 This work presents a methodology that integrates a nonsupervised learning approach (self-organizing map (SOM)) and a supervised one (a Bayesian classifier) for segmenting diseased plants that grow in uncontrolled environments such as greenhouses, wherein the lack of control of illumination and the presence of background bring about serious drawbacks. During the training phase two SOMs are used: one that creates color groups of images, which are classified into two groups using -means and labeled as vegetation and nonvegetation by using rules, and a second SOM that corrects classification errors made by the first SOM. Two color histograms are generated from the two color classes and used to estimate the conditional probabilities of the Bayesian classifier. During the testing phase an input image is segmented by the Bayesian classifier and then converted into a binary image, wherein contours are extracted and analyzed to recover diseased areas that were incorrectly classified as nonvegetation. The experimental results using the proposed methodology showed better performance than two of the most widely used color index methods. Deny Lizbeth Hernández-Rabadán, Fernando Ramos-Quintana, and Julian Guerrero Juk Copyright © 2014 Deny Lizbeth Hernández-Rabadán et al. All rights reserved.
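The histogram-fed Bayesian classification step above reduces to a maximum a posteriori decision per pixel; a toy sketch with pre-quantized color labels (a real pipeline would bin RGB values, and the class names here are illustrative):

```python
from collections import Counter, defaultdict

def fit(samples):
    """Build per-class color histograms and class priors from labeled
    pixels, mirroring the use of two color histograms to estimate the
    Bayesian classifier's conditional probabilities."""
    hist, prior = defaultdict(Counter), Counter()
    for color, label in samples:
        hist[label][color] += 1
        prior[label] += 1
    return hist, prior, sum(prior.values())

def classify(color, model):
    """Maximum a posteriori class: argmax P(color | class) * P(class)."""
    hist, prior, total = model
    return max(prior,
               key=lambda c: (hist[c][color] / prior[c]) * (prior[c] / total))

pixels = [("green", "vegetation"), ("green", "vegetation"),
          ("brown", "background"), ("green", "background")]
model = fit(pixels)
assert classify("green", model) == "vegetation"
assert classify("brown", model) == "background"
```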
Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps Mon, 03 Nov 2014 09:04:40 +0000 The authors report on a study using the concept mapping technique in computer engineering education for learning theoretical introductory database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map is also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 academic year are described. This study contributes to the field of concept mapping, as these kinds of cognitive tools have proved valid for supporting learning in computer engineering education, and to the field of computer engineering education itself, providing a technique that can be incorporated with several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above-mentioned objectives. Ana Arruarte, Iñaki Calvo, Jon A. Elorriaga, Mikel Larrañaga, and Angel Conde Copyright © 2014 Ana Arruarte et al. All rights reserved. An Evolved Wavelet Library Based on Genetic Algorithm Mon, 27 Oct 2014 11:55:06 +0000 As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of transmission channels and preserves image resolution without considerable loss of image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, while quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA): one for the whole image except the edge areas and the other for the portions near the edges (i.e., global and local filters).
Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods, with a 0.31 dB improvement in average PSNR and a 0.39 dB improvement in maximum PSNR. D. Vaithiyanathan, R. Seshasayanan, K. Kunaraj, and J. Keerthiga Copyright © 2014 D. Vaithiyanathan et al. All rights reserved. Cognitive Inference Device for Activity Supervision in the Elderly Mon, 27 Oct 2014 11:16:37 +0000 Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life at old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly. To frame the CI-device, we propose a device design framework along with an inference algorithm and implement the designs through an artificial neural model with different configurations, mapping the CI-device's functions to minimise the device's prediction error. An analysis and discussion are then provided to validate the feasibility of CI-device implementation for activity supervision in the elderly. Nilamadhab Mishra, Chung-Chih Lin, and Hsien-Tsung Chang Copyright © 2014 Nilamadhab Mishra et al. All rights reserved.
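The PSNR fitness maximized by the GA in the wavelet paper above is a standard measure; for a signal with peak value `peak` it is computed as:

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB: PSNR = 10 * log10(peak^2 / MSE),
    where MSE is the mean squared error between the two signals."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, distorted)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

assert psnr([7, 7], [7, 7]) == float("inf")      # identical signals
assert psnr([0, 0], [10, 10], peak=10.0) == 0.0  # MSE equals peak^2
```

A GA fitness of this form rewards filter coefficients whose reconstruction stays close to the original image, which is exactly the 0.31 dB / 0.39 dB improvement the abstract reports.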
Effects of Corporate Social Responsibility and Governance on Its Credit Ratings Mon, 27 Oct 2014 07:17:21 +0000 This study reviews the impact of corporate social responsibility (CSR) and corporate governance on corporate credit ratings. Regression analysis of credit ratings on the relevant primary independent variables shows that both factors have significant effects. As predicted, both regression coefficients are positive (+), showing that corporations with excellent CSR and corporate governance index (CGI) scores have higher credit ratings, and vice versa. The results show that nonfinancial information may also affect corporate credit ratings; investment in personal data protection is an example of a CSR/CGI activity that has positive effects on corporate credit ratings. Dong-young Kim and JeongYeon Kim Copyright © 2014 Dong-young Kim and JeongYeon Kim. All rights reserved.