Advances in Software Engineering
http://www.hindawi.com
The latest articles from Hindawi Publishing Corporation. © 2014, Hindawi Publishing Corporation. All rights reserved.

Tuning of Cost Drivers by Significance Occurrences and Their Calibration with Novel Software Effort Estimation Method (Tue, 31 Dec 2013 13:45:57 +0000)
http://www.hindawi.com/journals/ase/2013/351913/
Estimation is an important part of software engineering projects, and the ability to produce accurate effort estimates has an impact on key economic processes, including budgeting, bid proposals, and deciding the execution boundaries of the project. Work in this paper explores the interrelationship among different dimensions of software projects, namely, project size, effort, and effort-influencing factors. The study aims at providing better effort estimates based on the parameters of a modified COCOMO, along with the detailed use of a binary genetic algorithm as a novel optimization algorithm. The significance of the 15 cost drivers is shown by their impact on the MMRE of effort estimates for the original 63-project NASA dataset. The proposed method produces tuned values of the cost drivers that are effective enough to improve the productivity of the projects. Prediction at different levels of MRE for each project reflects the percentage of projects estimated with the desired accuracy. Furthermore, this model is validated on two different datasets and shows better estimation accuracy as compared to the COCOMO 81 based NASA 63 and NASA 93 datasets.
Brajesh Kumar Singh, Shailesh Tiwari, K. K. Mishra, and A. K. Misra. Copyright © 2013 Brajesh Kumar Singh et al. All rights reserved.
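As a rough illustration of the quantities discussed in the entry above, the sketch below computes an intermediate-COCOMO-style effort estimate from a size figure and 15 effort multipliers, then scores a set of estimates with MMRE and PRED. It is a minimal sketch, not the authors' tuning method: the (a, b) coefficients are the published COCOMO 81 semi-detached values, the multipliers are left at their nominal 1.0, the project figures are made-up toy data, and the binary genetic algorithm that would actually calibrate the multipliers is omitted.

```python
def cocomo_effort(kloc, multipliers, a=3.0, b=1.12):
    """Intermediate-COCOMO-style estimate: effort (person-months) = a * KLOC^b * EAF,
    where EAF is the product of the cost-driver effort multipliers. a and b are the
    COCOMO 81 semi-detached values; a calibration step (e.g. the paper's binary GA)
    would tune the multiplier values instead of leaving them nominal."""
    eaf = 1.0
    for m in multipliers:
        eaf *= m
    return a * (kloc ** b) * eaf

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error over a set of projects."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

def pred(actual, estimated, level=0.25):
    """PRED(level): fraction of projects whose MRE does not exceed the given level."""
    return sum(abs(a - e) / a <= level for a, e in zip(actual, estimated)) / len(actual)

# Toy project data: (KLOC, actual effort in person-months), with all 15 drivers nominal.
projects = [(25.9, 117.6), (7.7, 31.2), (9.7, 25.2)]
drivers = [1.0] * 15
actual = [effort for _, effort in projects]
estimates = [cocomo_effort(kloc, drivers) for kloc, _ in projects]
print("MMRE:", round(mmre(actual, estimates), 3))
print("PRED(0.25):", pred(actual, estimates))
```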
Thematic Review and Analysis of Grounded Theory Application in Software Engineering (Tue, 22 Oct 2013 11:33:15 +0000)
http://www.hindawi.com/journals/ase/2013/468021/
We present metacodes, a new concept to guide grounded theory (GT) research in software engineering. Metacodes are high-level codes that can help software engineering researchers guide the data coding process. Metacodes are constructed in the course of analyzing software engineering papers that use grounded theory as a research methodology. We performed a high-level analysis to discover common themes in such papers and discovered that GT had been applied primarily in three software engineering disciplines: agile development processes, geographically distributed software development, and requirements engineering. For each category, we collected and analyzed all grounded theory codes and created, following a GT analysis process, what we call metacodes that can be used to drive further theory building. This paper surveys the use of grounded theory in software engineering and presents an overview of successes and challenges of applying this research methodology.
Omar Badreddin. Copyright © 2013 Omar Badreddin. All rights reserved.

A Granular Hierarchical Multiview Metrics Suite for Statecharts Quality (Sun, 22 Sep 2013 10:41:48 +0000)
http://www.hindawi.com/journals/ase/2013/952178/
This paper presents a bottom-up approach for a multiview measurement of statechart size, topological properties, and internal structural complexity for understandability prediction and assurance purposes. It tackles the problem at different conceptual depths or, equivalently, at several abstraction levels. The main idea is to study and evaluate a statechart at different levels of granularity corresponding to different conceptual depth levels or levels of detail. The highest level corresponds to a flat process view diagram (depth = 0); the adequate upper depth limit, which corresponds to the all-states view, is determined by the modelers according to the inherent complexity of the problem under study and the level of detail required for the situation at hand. For measurement purposes, we proceed with a bottom-up strategy: starting with the all-states view diagram, we identify and measure its deepest composite states' constituent parts and then gradually collapse them to obtain the next intermediate view (decrementing the depth) while aggregating measures incrementally, until reaching the flat process view diagram. To this end, we first identify, define, and derive a relevant metrics suite useful for predicting the level of understandability and other quality aspects of a statechart, and then we propose a fuzzy rule-based system prototype for understandability prediction, assurance, and validation purposes.
Mokhtar Beldjehem. Copyright © 2013 Mokhtar Beldjehem. All rights reserved.
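To make the depth-indexed, bottom-up measurement idea above more concrete, here is a small sketch that represents nested composite states as a tree and reports a simple size metric for each abstraction level, from the flat view down to the all-states view. The statechart, the single metric (number of visible states), and the collapsing rule are illustrative assumptions; the paper's actual metrics suite and fuzzy rule-based predictor are not reproduced here.

```python
# Illustrative nested statechart: state name -> direct substates (empty list = simple state).
statechart = {
    "Idle": [],
    "Operating": ["Configuring", "Running"],
    "Configuring": [],
    "Running": ["Paused", "Active"],
    "Paused": [],
    "Active": [],
}
TOP_LEVEL = ["Idle", "Operating"]        # states of the flat process view (depth = 0)

def visible_states(depth_limit, states=TOP_LEVEL, depth=0):
    """States drawn when every composite state deeper than depth_limit is collapsed."""
    visible = []
    for s in states:
        visible.append(s)
        if depth < depth_limit:          # this composite state may still be opened
            visible += visible_states(depth_limit, statechart[s], depth + 1)
    return visible

# Aggregate the (toy) size metric per abstraction level, flat view first.
for d in range(3):
    view = visible_states(d)
    print(f"depth {d}: {len(view)} visible states -> {view}")
```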
A New Software Development Methodology for Clinical Trial Systems (Thu, 21 Mar 2013 17:38:01 +0000)
http://www.hindawi.com/journals/ase/2013/796505/
Clinical trials are crucial to modern healthcare industries, and information technologies have been employed to improve the quality of data collected in trials and reduce the overall cost of data processing. While developing software for clinical trials, one needs to take into account the similar patterns shared by all clinical trial software. Such patterns exist because of the unique properties of clinical trials and the rigorous regulations imposed by the government for reasons of subject safety. Among the existing software development methodologies, none, unfortunately, was built specifically upon these properties and patterns, and therefore none works sufficiently well. In this paper, the process of clinical trials is reviewed, and the unique properties of clinical trial system development are explained thoroughly. Based on these properties, a new software development methodology is then proposed specifically for developing electronic clinical trial systems. A case study shows that, by adopting the proposed methodology, high-quality software products can be delivered on schedule and within budget. With such high-quality software, data collection, management, and analysis can be more efficient, accurate, and inexpensive, which in turn will improve the overall quality of clinical trials.
Li-Min Liu. Copyright © 2013 Li-Min Liu. All rights reserved.

Gesture Recognition Using Neural Networks Based on HW/SW Cosimulation Platform (Sun, 24 Feb 2013 11:25:10 +0000)
http://www.hindawi.com/journals/ase/2013/707248/
Hardware/software (HW/SW) cosimulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW cosimulation platform is used to ease debugging and verification for very large-scale integration (VLSI) design. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor, (2) hardwiring the testing part of the neural network algorithm to improve speed and performance, and (3) a design that takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach. Several experiments were carried out on four databases of gestures (alphabet signs A to Z).
Priyanka Mekala, Jeffrey Fan, Wen-Cheng Lai, and Ching-Wen Hsue. Copyright © 2013 Priyanka Mekala et al. All rights reserved.

Accountability in Enterprise Mashup Services (Mon, 21 Jan 2013 09:38:55 +0000)
http://www.hindawi.com/journals/ase/2013/298037/
As a result of the proliferation of Web 2.0 style web sites, the practice of mashup services has become increasingly popular in the web development community. While mashup services bring flexibility and speed in delivering new valuable services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, realizing the great benefits of mashup services, industry leaders are eagerly pushing these solutions into the enterprise arena. Although enterprise mashup services hold great promise for delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render them ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments. Compared to the traditional method of using QoS or SLA monitoring to address accountability requirements, our approach addresses more fundamental aspects of accountability specification to facilitate machine interpretability, thereby enabling automation in monitoring.
Joe Zou and Chris Pavlovski. Copyright © 2013 Joe Zou and Chris Pavlovski. All rights reserved.

Combining Slicing and Constraint Solving for Better Debugging: The CONBAS Approach (Mon, 31 Dec 2012 16:40:08 +0000)
http://www.hindawi.com/journals/ase/2012/628571/
Although slices provide a good basis for analyzing programs during debugging, they lack the capability to provide precise information regarding the most likely root causes of faults. Hence, a lot of work is left to the programmer during fault localization. In this paper, we present an approach that combines an advanced dynamic slicing method with constraint solving in order to reduce the number of delivered fault candidates. The approach is called Constraints Based Slicing (CONBAS). The idea behind CONBAS is to convert an execution trace of a failing test case into its constraint representation and to check whether it is possible to find values for all variables in the execution trace so that there is no contradiction with the test case. For doing so, we make use of the correctness and incorrectness assumptions behind a diagnosis and the given failing test case. Besides the theoretical foundations and the algorithm, we present empirical results and discuss future research. The obtained empirical results indicate an improvement of about 28% for the single-fault case and 50% for the double-fault case compared to dynamic slicing approaches.
Birgit Hofer and Franz Wotawa. Copyright © 2012 Birgit Hofer and Franz Wotawa. All rights reserved.
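The constraint-based idea in the CONBAS entry above can be illustrated with a deliberately tiny sketch: each traced statement contributes a constraint, and a statement stays a fault candidate only if assuming it (and it alone) to be incorrect allows the failing test's expected output to be reproduced. Everything here is an assumption made for illustration: the four-statement trace, the integer search domain, and the brute-force check standing in for a real constraint solver. It is not the CONBAS algorithm itself.

```python
from itertools import product

INPUTS = {"a": 2, "b": 3}
EXPECTED_OUT = 10                      # the failing test expected 10; the program returned 9

STATEMENTS = {                         # one constraint per statement of a hypothetical trace
    "s1: x = a + b":   lambda v: v["x"] == v["a"] + v["b"],
    "s2: y = x + 1":   lambda v: v["y"] == v["x"] + 1,       # the actual fault (should be x + 2)
    "s3: t = a * b":   lambda v: v["t"] == v["a"] * v["b"],  # value never reaches the output
    "s4: out = y + b": lambda v: v["out"] == v["y"] + v["b"],
}
DOMAIN = range(16)                     # small integer search space standing in for a solver

def satisfiable_without(relaxed):
    """Can all constraints except `relaxed` hold while producing the expected output?"""
    for x, y, t, out in product(DOMAIN, repeat=4):
        v = dict(INPUTS, x=x, y=y, t=t, out=out)
        if v["out"] == EXPECTED_OUT and all(
                check(v) for name, check in STATEMENTS.items() if name != relaxed):
            return True
    return False

candidates = [s for s in STATEMENTS if satisfiable_without(s)]
print("remaining fault candidates:", candidates)   # s3 is exonerated
```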
Applying a Goal Programming Model to Support the Selection of Artifacts in a Testing Process (Sun, 30 Dec 2012 08:22:52 +0000)
http://www.hindawi.com/journals/ase/2012/765635/
This paper proposes the definition of a goal programming model for the selection of artifacts to be developed during a testing process, so that the set of selected artifacts is better suited to the reality of micro and small enterprises. The model is based on IEEE Standard 829, which establishes a set of artifacts that must be generated throughout the test activities. Several factors can influence the definition of this set of artifacts. Therefore, in order to consider such factors, we developed a multicriteria model that helps in determining the priority of artifacts according to the reality of micro and small enterprises.
Andreia Rodrigues da Silva, Fernando Rodrigues de Almeida Júnior, and Placido Rogerio Pinheiro. Copyright © 2012 Andreia Rodrigues da Silva et al. All rights reserved.
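In the spirit of the goal programming model described above, the sketch below selects a subset of IEEE 829-style test artifacts by minimizing weighted deviations from an effort goal and a value goal. The artifact list, effort and value figures, goals, and weights are all invented for illustration, and the exhaustive enumeration stands in for a proper goal programming solver; it is not the authors' model.

```python
from itertools import combinations

# Hypothetical artifacts: name -> (effort to produce, perceived value for a small company).
ARTIFACTS = {
    "Test plan":             (5, 9),
    "Test design spec":      (4, 6),
    "Test case spec":        (6, 8),
    "Test procedure spec":   (5, 4),
    "Test item transmittal": (2, 2),
    "Test log":              (3, 5),
    "Test incident report":  (3, 7),
    "Test summary report":   (4, 6),
}
EFFORT_GOAL, VALUE_GOAL = 15, 30       # goals, not hard constraints
W_EFFORT, W_VALUE = 1.0, 2.0           # penalty weights on the goal deviations

def deviation(selection):
    """Weighted sum of the deviations d+ (effort over goal) and d- (value under goal)."""
    effort = sum(ARTIFACTS[a][0] for a in selection)
    value = sum(ARTIFACTS[a][1] for a in selection)
    return W_EFFORT * max(0, effort - EFFORT_GOAL) + W_VALUE * max(0, VALUE_GOAL - value)

# Exhaustive enumeration of the 2^8 subsets stands in for a goal programming solver.
best = min((subset
            for r in range(len(ARTIFACTS) + 1)
            for subset in combinations(ARTIFACTS, r)),
           key=deviation)
print(sorted(best), "deviation:", deviation(best))
```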
An SOA-Based Model for the Integrated Provisioning of Cloud and Grid Resources (Tue, 20 Nov 2012 17:55:19 +0000)
http://www.hindawi.com/journals/ase/2012/212343/
In recent years, the availability and usage models of networked computing resources within reach of e-Science have been rapidly changing, with the coexistence of many disparate paradigms: high-performance computing, grid, and, recently, cloud. Unfortunately, none of these paradigms is recognized as the ultimate solution, and a convergence of them all should be pursued. At the same time, recent works have proposed a number of models and tools to address the growing needs and expectations in the field of e-Science. In particular, they have shown the advantages and the feasibility of modeling e-Science environments and infrastructures according to the service-oriented architecture. In this paper, we suggest a model to promote the convergence and the integration of the different computing paradigms and infrastructures for the dynamic on-demand provisioning of resources from multiple providers as a cohesive aggregate, leveraging the service-oriented architecture. In addition, we propose a design aimed at endorsing a flexible, modular, workflow-based computing model for e-Science. The model is supplemented by a working prototype implementation together with a case study in the applicative domain of bioinformatics, which is used to validate the presented approach and to carry out some performance and scalability measurements.
Andrea Bosin. Copyright © 2012 Andrea Bosin. All rights reserved.

Towards Self-Adaptive KPN Applications on NoC-Based MPSoCs (Mon, 19 Nov 2012 17:18:09 +0000)
http://www.hindawi.com/journals/ase/2012/172674/
Self-adaptivity is the ability of a system to adapt itself dynamically to internal and external changes. Such a capability helps systems to meet their performance and quality goals while judiciously using available resources. In this paper, we propose a framework to implement application-level self-adaptation capabilities in KPN applications running on NoC-based MPSoCs. The monitor-controller-adapter mechanism is used at the application level. The monitor measures various parameters to check whether the system meets the assigned goals. The controller takes decisions to steer the system towards the goal, and these decisions are applied by the adapters. The proposed framework requires minimal modifications to the application code and offers ease of integration. It incorporates a generic adaptation controller based on fuzzy logic. We present the MJPEG encoder as a case study to demonstrate the effectiveness of the approach. Our results show that, even if the parameters of the fuzzy controller are not tuned optimally, adaptation convergence is achieved within reasonable time and error limits. Moreover, the steady-state overhead incurred by the framework is 4% for average frame rate, 3.5% for average bit rate, and 0.5% for the additional control data introduced in the network.
Onur Derin, Prasanth Kuncheerath Ramankutty, Paolo Meloni, and Emanuele Cannella. Copyright © 2012 Onur Derin et al. All rights reserved.
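As a rough illustration of the fuzzy adaptation controller mentioned in the KPN entry above, the sketch below maps a frame-rate error onto a suggested change of an application-level quality knob, using triangular membership functions and three Sugeno-style rules with singleton outputs. The membership breakpoints, the rule outputs, and the quality knob itself are assumptions made for this example, not the controller described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_adjustment(target_fps, measured_fps):
    """Suggested change of the quality knob (-1 = lower quality, +1 = raise it)."""
    error = target_fps - measured_fps              # positive error: running too slow
    slow = tri(error, 0.5, 5.0, 50.0)              # membership of "too slow"
    ok   = tri(error, -5.0, 0.0, 5.0)              # membership of "close to the goal"
    fast = tri(error, -50.0, -5.0, -0.5)           # membership of "faster than needed"
    # One Sugeno-style rule per set, with singleton outputs, defuzzified by weighted average.
    num = slow * (-1.0) + ok * 0.0 + fast * (+1.0)
    den = slow + ok + fast
    return num / den if den else 0.0

for fps in (12, 22, 25, 31):
    print(f"measured {fps} fps -> adjust quality by {quality_adjustment(25, fps):+.2f}")
```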
Assessing the Open Source Development Processes Using OMM (Thu, 04 Oct 2012 12:47:09 +0000)
http://www.hindawi.com/journals/ase/2012/235392/
The assessment of development practices in Free Libre Open Source Software (FLOSS) projects can contribute to the improvement of the development process by identifying poor practices and providing a list of necessary practices. Available assessment methods (e.g., Capability Maturity Model Integration (CMMI)) do not sufficiently address FLOSS-specific aspects (e.g., geographically distributed development, importance of the contributions, reputation of the project, etc.). We present a FLOSS-focused, CMMI-like assessment/improvement model: the QualiPSo Open Source Maturity Model (OMM). OMM focuses on the development process. This makes it different from existing assessment models that focus on the assessment of the product. We have assessed six FLOSS projects using OMM. Three projects were started and led by a software company, and three are developed by three different FLOSS communities. We identified poorly addressed development activities, such as the handling of commit/bug reports, external contributions, and risk management. The results showed that FLOSS projects led by companies adopt standard project management approaches, such as product planning, design definition, and testing, which are less often addressed by community-led FLOSS projects. OMM is valuable both for the FLOSS community, by identifying critical development activities that need to be improved, and for potential users, who can better decide which product to adopt.
Etiel Petrinja and Giancarlo Succi. Copyright © 2012 Etiel Petrinja and Giancarlo Succi. All rights reserved.

A Multi-Layered Control Approach for Self-Adaptation in Automotive Embedded Systems (Thu, 04 Oct 2012 11:31:16 +0000)
http://www.hindawi.com/journals/ase/2012/971430/
We present an approach for self-adaptation in automotive embedded systems using a hierarchical, multi-layered control approach. We model automotive systems as a set of constraints and define a hierarchy of control loops based on different criteria. Adaptations are first performed locally, on a lower layer of the architecture. If this fails due to the restricted scope of the control cycle, the next higher layer is in charge of finding a suitable adaptation. We compare different options regarding the responsibility split in multi-layered control in a self-healing scenario with a setup adopted from automotive in-vehicle networks. We show that a multi-layer control approach has clear performance benefits over central control, even though all layers work on the same set of constraints. Furthermore, we show that a responsibility split with respect to network topology is preferable over a functional split.
Marc Zeller and Christian Prehofer. Copyright © 2012 Marc Zeller and Christian Prehofer. All rights reserved.

Improving Model Checking with Context Modelling (Mon, 24 Sep 2012 08:02:34 +0000)
http://www.hindawi.com/journals/ase/2012/547157/
This paper deals with the usage of formal techniques based on model checking, where models are large and formal verification techniques face the combinatorial explosion issue. The goal of the approach is to express and verify requirements relative to certain context situations. The idea is to unroll the context into several scenarios and successively compose each scenario with the system and verify the resulting composition. We propose to specify the context in which the behavior occurs using a language called CDL (Context Description Language), based on activity and message sequence diagrams. The properties to be verified are specified with textual patterns and attached to specific regions in the context. The central idea is to automatically split each identified context into a set of smaller subcontexts and to compose them with the model to be validated. For that, we have implemented a recursive splitting algorithm in our toolset OBP (Observer-based Prover). This paper shows how this combinatorial explosion can be reduced by specifying the environment of the system to be validated.
Philippe Dhaussy, Frédéric Boniol, Jean-Charles Roger, and Luka Leroux. Copyright © 2012 Philippe Dhaussy et al. All rights reserved.

Metadata for Approximate Query Answering Systems (Mon, 03 Sep 2012 08:23:42 +0000)
http://www.hindawi.com/journals/ase/2012/247592/
In business intelligence systems, data warehouse metadata management and representation are getting more and more attention from vendors and designers. The standard language for data warehouse metadata representation is the Common Warehouse Metamodel. However, business intelligence systems also include approximate query answering systems, since these software tools provide fast responses for decision making on the basis of approximate query processing. Currently, the standard metamodel does not allow the representation of the metadata needed by approximate query answering systems. In this paper, we propose an extension of the standard metamodel in order to define the metadata to be used in online approximate analytical processing. These metadata have been successfully adopted in ADAP, a web-based approximate query answering system that creates and uses statistical data profiles.
Francesco Di Tria, Ezio Lefons, and Filippo Tangorra. Copyright © 2012 Francesco Di Tria et al. All rights reserved.

Software Quality Assurance Methodologies and Techniques (Tue, 28 Aug 2012 13:15:25 +0000)
http://www.hindawi.com/journals/ase/2012/872619/
Chin-Yu Huang, Hareton Leung, Wu-Hon Francis Leung, and Osamu Mizuno. Copyright © 2012 Chin-Yu Huang et al. All rights reserved.

A Stateful Approach to Generate Synthetic Events from Kernel Traces (Wed, 15 Aug 2012 14:11:04 +0000)
http://www.hindawi.com/journals/ase/2012/140368/
We propose a generic synthetic event generator from kernel trace events. The proposed method makes use of patterns of system states and environment-independent semantic events rather than platform-specific raw events. This method can be applied to different kernel and user-level trace formats. We use a state model to store intermediate states and events. This stateful method supports partial trace abstraction and enables users to seek and navigate through the trace events and to abstract out the desired part. Since it uses the current and previous values of the system states and has more knowledge of the underlying system execution, it can generate a wide range of synthetic events. One obvious application of this method, the identification of system faults and problems, is presented later in the paper. We discuss the architecture of the method, its implementation, and the performance results.
Naser Ezzati-Jivan and Michel R. Dagenais. Copyright © 2012 Naser Ezzati-Jivan and Michel R. Dagenais. All rights reserved.
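A minimal sketch of the stateful abstraction idea from the kernel-trace entry above: per-process state is kept while raw events are scanned, and open/read/close sequences are folded into a single higher-level "file_access" synthetic event. The event names, fields, and the single pattern are invented for the example; they do not correspond to a real kernel trace format or to the authors' state model.

```python
# Hypothetical raw trace events (a real trace would come from a kernel tracer).
raw_events = [
    {"ts": 10.0, "pid": 42, "name": "sys_open",  "fd": 3, "path": "/etc/hosts"},
    {"ts": 10.1, "pid": 42, "name": "sys_read",  "fd": 3, "bytes": 512},
    {"ts": 10.2, "pid": 42, "name": "sys_read",  "fd": 3, "bytes": 256},
    {"ts": 10.3, "pid": 42, "name": "sys_close", "fd": 3},
]

def synthesize(events):
    """Fold raw events into synthetic events using per-(pid, fd) intermediate state."""
    state = {}                                   # (pid, fd) -> partial file-access record
    for e in events:
        key = (e["pid"], e.get("fd"))
        if e["name"] == "sys_open":
            state[key] = {"path": e["path"], "start": e["ts"], "bytes": 0}
        elif e["name"] == "sys_read" and key in state:
            state[key]["bytes"] += e["bytes"]
        elif e["name"] == "sys_close" and key in state:
            rec = state.pop(key)
            yield {"name": "file_access", "pid": e["pid"], "path": rec["path"],
                   "bytes": rec["bytes"], "duration": round(e["ts"] - rec["start"], 3)}

for synthetic in synthesize(raw_events):
    print(synthetic)
```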
Genetic Programming for Automating the Development of Data Management Algorithms in Information Technology Systems (Thu, 05 Jul 2012 10:29:54 +0000)
http://www.hindawi.com/journals/ase/2012/893701/
Information technology (IT) systems are present in almost all fields of human activity, with emphasis on processing, storage, and handling of datasets. Automated methods to provide access to data stored in databases have been proposed mainly for tasks related to knowledge discovery and data mining (KDD). However, for this purpose, the database is used only to query data in order to find relevant patterns associated with the records. Processes modelled in IT systems, in contrast, must manipulate the records to modify the state of the system. Linear genetic programming for databases (LGPDB) is a tool proposed here for the automatic generation of programs that can query, delete, insert, and update records in databases. The obtained results indicate that the LGPDB approach is able to generate programs that effectively model processes of IT systems, opening the possibility of automating relevant stages of data manipulation and thus allowing human programmers to focus on more complex tasks.
Gabriel A. Archanjo and Fernando J. Von Zuben. Copyright © 2012 Gabriel A. Archanjo and Fernando J. Von Zuben. All rights reserved.
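To give a feel for the linear genetic programming representation described above, the sketch below encodes a candidate program as a linear sequence of data-management instructions over an in-memory table and scores it against an expected table state. The instruction set, the fitness function, and the parsimony penalty are assumptions for illustration only, and the evolutionary loop (selection, crossover, mutation) that LGPDB would run is omitted.

```python
import copy

def run(program, table):
    """Execute a linear sequence of data-management instructions on a table (list of dicts)."""
    table = copy.deepcopy(table)
    for op, *args in program:
        if op == "INSERT":                        # ("INSERT", record)
            table.append(dict(args[0]))
        elif op == "DELETE_WHERE":                # ("DELETE_WHERE", field, value)
            field, value = args
            table = [r for r in table if r.get(field) != value]
        elif op == "UPDATE_WHERE":                # ("UPDATE_WHERE", field, value, target, new)
            field, value, target, new = args
            for r in table:
                if r.get(field) == value:
                    r[target] = new
    return table

def fitness(program, initial, expected):
    """Reward matching the expected records, with a small parsimony penalty per instruction."""
    result = run(program, initial)
    hits = sum(1 for r in expected if r in result)
    return hits / max(len(expected), 1) - 0.01 * len(program)

initial  = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
expected = [{"id": 1, "status": "closed"}, {"id": 2, "status": "open"},
            {"id": 3, "status": "open"}]
candidate = [("UPDATE_WHERE", "id", 1, "status", "closed"),
             ("INSERT", {"id": 3, "status": "open"})]
print(run(candidate, initial), fitness(candidate, initial, expected))
```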
Formal ESL Synthesis for Control-Intensive Applications (Wed, 27 Jun 2012 15:23:59 +0000)
http://www.hindawi.com/journals/ase/2012/156907/
Due to the massive complexity of contemporary embedded applications and integrated systems, long-standing effort has been invested in high-level synthesis (HLS) and electronic system level (ESL) methodologies to automatically produce correct implementations from high-level, abstract, and executable specifications written in program code. If the HLS transformations that are applied on the source code are formal, then the generated implementation is correct-by-construction. The focus of this work is on application-specific design, which can deliver optimal and customized implementations, as opposed to platform- or IP-based design, which is bound by the limits and constraints of the preexisting architecture. This work surveys and reviews past and current research in the area of ESL and HLS. Then, a prototype HLS compiler tool that has been developed by the author is presented, which utilizes compiler generators and logic programming to turn synthesis into a formal process. The scheduler PARCS and the formal compilation of the system are tested with a number of benchmarks and real-world applications. This demonstrates the usability and applicability of the presented method.
Michael F. Dossis. Copyright © 2012 Michael F. Dossis. All rights reserved.

Evaluating the Effect of Control Flow on the Unit Testing Effort of Classes: An Empirical Analysis (Thu, 14 Jun 2012 07:58:44 +0000)
http://www.hindawi.com/journals/ase/2012/964064/
The aim of this paper is to evaluate empirically the relationship between a new metric (Quality Assurance Indicator, Qi) and the testability of classes in object-oriented systems. The Qi metric captures the distribution of the control flow in a system. We addressed testability from the perspective of unit testing effort. We collected data from five open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used different metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required testing effort, into two categories: high and low. In order to evaluate the capability of the Qi metric to predict the testability of classes, we used the univariate logistic regression method. The performance of the prediction model was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that the univariate model based on the Qi metric is able to accurately predict the unit testing effort of classes.
Mourad Badri and Fadel Toure. Copyright © 2012 Mourad Badri and Fadel Toure. All rights reserved.
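A minimal sketch of the evaluation pipeline named in the entry above (univariate logistic regression followed by ROC analysis), assuming scikit-learn and NumPy are available. The Qi values and the high/low testing-effort labels are made-up toy data, the model is scored on its own training set for brevity, and no relationship between Qi and effort reported in the paper is implied by these numbers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical per-class data: Quality Assurance Indicator (Qi) and a binary label
# (1 = high unit-testing effort, 0 = low), e.g. derived from JUnit test-case metrics.
qi     = np.array([0.91, 0.85, 0.40, 0.77, 0.30, 0.65, 0.22, 0.58, 0.15, 0.70]).reshape(-1, 1)
effort = np.array([0,    0,    1,    0,    1,    0,    1,    1,    1,    0])

model = LogisticRegression().fit(qi, effort)     # univariate logistic regression on Qi
probs = model.predict_proba(qi)[:, 1]            # P(high testing effort | Qi)
print("AUC:", roc_auc_score(effort, probs))      # ROC analysis of the fitted model
```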
How to Safely Integrate Multiple Applications on Embedded Many-Core Systems by Applying the “Correctness by Construction” Principle (Tue, 12 Jun 2012 12:53:16 +0000)
http://www.hindawi.com/journals/ase/2012/354274/
Software-intensive embedded systems, especially cyber-physical systems, benefit from the additional performance and the small power envelope offered by many-core processors. Nevertheless, the adoption of a massively parallel processor architecture in the embedded domain is still challenging. The integration of multiple and potentially parallel functions on a chip, instead of just a single function, makes best use of the resources offered. However, this multifunction approach leads to new technical and nontechnical challenges during the integration. This is especially the case for a distributed system architecture, which is subject to specific safety considerations. In this paper, it is argued that these challenges cannot be effectively addressed with traditional engineering approaches. Instead, the application of the “correctness by construction” principle is proposed to improve the integration process.
Robert Hilbrich. Copyright © 2012 Robert Hilbrich. All rights reserved.

An Empirical Study on the Impact of Duplicate Code (Mon, 28 May 2012 10:45:35 +0000)
http://www.hindawi.com/journals/ase/2012/938296/
It is said that the presence of duplicate code is one of the factors that make software maintenance more difficult. Many research efforts have been performed on detecting, removing, or managing duplicate code on this basis. However, in recent years some researchers have come to doubt this basis and have conducted empirical studies to investigate the influence of the presence of duplicate code. In this study, we conduct an empirical study to investigate this matter from a standpoint different from those of previous studies. We define a new indicator, “modification frequency,” to measure the impact of duplicate code, and we compare its values between duplicate code and nonduplicate code. The features of this study are as follows: the indicator is based on modification places instead of the ratio of modified lines; we use multiple duplicate code detection tools to reduce the bias of any single detection tool; and we compare the results of the proposed method with two other investigation methods. The results show that duplicate code tends to be modified less frequently than nonduplicate code, and we found some instances where the proposed method can evaluate the influence of duplicate code more accurately than the existing investigation methods.
Keisuke Hotta, Yui Sasaki, Yukiko Sano, Yoshiki Higo, and Shinji Kusumoto. Copyright © 2012 Keisuke Hotta et al. All rights reserved.

A Comparative Study of Data Transformations for Wavelet Shrinkage Estimation with Application to Software Reliability Assessment (Sun, 13 May 2012 15:36:15 +0000)
http://www.hindawi.com/journals/ase/2012/524636/
In our previous work, we proposed wavelet shrinkage estimation (WSE) for nonhomogeneous Poisson process (NHPP)-based software reliability models (SRMs), where WSE is a data-transform-based nonparametric estimation method. Among many variance-stabilizing data transformations, the Anscombe transform and the Fisz transform were employed. We have shown that it can provide higher goodness-of-fit performance than the conventional maximum likelihood estimation (MLE) and least squares estimation (LSE) in many cases, in spite of its nonparametric nature, through numerical experiments with real software-fault count data. With the aim of improving the estimation accuracy of WSE, in this paper we introduce three other data transformations to preprocess the software-fault count data and investigate the influence of the different data transformations on the estimation accuracy of WSE through goodness-of-fit tests.
Xiao Xiao and Tadashi Dohi. Copyright © 2012 Xiao Xiao and Tadashi Dohi. All rights reserved.
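To illustrate the kind of data transformation the WSE entry above builds on, the sketch below applies the Anscombe variance-stabilizing transform to Poisson-like fault counts, performs one level of Haar wavelet soft-thresholding with the universal threshold, and maps the result back with the simple algebraic inverse. The fault counts are toy numbers, only a single decomposition level is used, and the biased algebraic inverse is a simplification; the authors' WSE procedure and the additional transforms studied in the paper are not reproduced here.

```python
import math

counts = [3, 5, 2, 7, 6, 9, 11, 8]            # toy per-interval software-fault counts (even length)

def anscombe(x):      return 2.0 * math.sqrt(x + 3.0 / 8.0)
def anscombe_inv(y):  return (y / 2.0) ** 2 - 3.0 / 8.0    # simple (biased) algebraic inverse

def haar(data):
    """One level of the orthonormal Haar transform: pairwise averages and differences."""
    s = math.sqrt(2.0)
    approx = [(data[i] + data[i + 1]) / s for i in range(0, len(data), 2)]
    detail = [(data[i] - data[i + 1]) / s for i in range(0, len(data), 2)]
    return approx, detail

def ihaar(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft(x, thr):     return math.copysign(max(abs(x) - thr, 0.0), x)

y = [anscombe(c) for c in counts]             # after the transform, noise variance is roughly 1
a, d = haar(y)
thr = math.sqrt(2.0 * math.log(len(y)))       # universal threshold with sigma = 1
d = [soft(v, thr) for v in d]                 # shrink the detail coefficients
smoothed = [anscombe_inv(v) for v in ihaar(a, d)]
print([round(v, 2) for v in smoothed])
```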
Can Faulty Modules Be Predicted by Warning Messages of Static Code Analyzer? (Thu, 10 May 2012 13:27:38 +0000)
http://www.hindawi.com/journals/ase/2012/924923/
We have proposed a method for detecting fault-prone modules based on the spam filtering technique, called “fault-prone filtering.” Fault-prone filtering is a method that uses a text classifier (spam filter) to classify source code modules in software. In this study, we propose an extension that uses the warning messages of a static code analyzer instead of the raw source code. Since such warnings include information that is useful for detecting faults, this is expected to improve the accuracy of fault-prone module prediction. The experimental results show that the warning messages of a static code analyzer are as good a source for fault-prone filtering as the original source code. Moreover, the approach is more effective than the conventional method (that is, without the static code analyzer) at raising the coverage rate of actual faulty modules.
Osamu Mizuno and Michi Nakai. Copyright © 2012 Osamu Mizuno and Michi Nakai. All rights reserved.
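A small sketch of the spam-filter idea described above, assuming scikit-learn is available: static-analyzer warning messages are turned into a bag-of-words representation and a multinomial naive Bayes classifier is trained to flag fault-prone modules. The warning texts, the labels, and the choice of naive Bayes are illustrative assumptions; the study's actual filter, corpus, and evaluation are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical static-analyzer warning messages, one string per module, with fault labels.
warnings = [
    "possible null dereference of variable buf",
    "unused local variable tmp",
    "array index out of bounds in loop",
    "missing break in switch statement",
    "possible null dereference of returned value",
    "unused import java.util.List",
]
faulty = [1, 0, 1, 1, 1, 0]                   # 1 = module later turned out to be faulty

vec = CountVectorizer()
X = vec.fit_transform(warnings)               # bag-of-words over the warning text
clf = MultinomialNB().fit(X, faulty)          # the spam-filter-style classifier

new = vec.transform(["possible null dereference in parser"])
print(clf.predict(new), clf.predict_proba(new))
```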
Clustering Methodologies for Software Engineering (Thu, 10 May 2012 09:09:05 +0000)
http://www.hindawi.com/journals/ase/2012/792024/
The size and complexity of industrial-strength software systems are constantly increasing. This means that the task of managing a large software project is becoming even more challenging, especially in light of the high turnover of experienced personnel. Software clustering approaches can help with the task of understanding large, complex software systems by automatically decomposing them into smaller, easier-to-manage subsystems. The main objective of this paper is to identify important research directions in the area of software clustering that require further attention in order to develop more effective and efficient clustering methodologies for software engineering. To that end, we first present the state of the art in software clustering research. We discuss the clustering methods that have received the most attention from the research community and outline their strengths and weaknesses. Our paper describes each phase of a clustering algorithm separately. We also present the most important approaches for evaluating the effectiveness of software clustering.
Mark Shtern and Vassilios Tzerpos. Copyright © 2012 Mark Shtern and Vassilios Tzerpos. All rights reserved.

A Simple Application Program Interface for Saving Java Program Data on a Wiki (Tue, 03 Apr 2012 09:58:32 +0000)
http://www.hindawi.com/journals/ase/2012/981783/
A simple application program interface (API) for Java programs running on a wiki is implemented experimentally. A Java program with the API can run on a wiki, and the program can save its data on the wiki. The system consists of PukiWiki, a popular wiki in Japan, and a plug-in that starts up Java programs and Java classes. A Java applet with default access privileges cannot save its data on a local host. We have constructed an API for applets that provides easy and unified data input and output at a remote host. We also combined the proposed API and the wiki system by introducing a wiki tag for starting Java applets. It is easy to introduce new types of applications using the proposed API. We have embedded programs such as a simple text editor, a simple music editor, a simple drawing program, and programming environments in a PukiWiki system using this API.
Takashi Yamanoue, Kentaro Oda, and Koichi Shimozono. Copyright © 2012 Takashi Yamanoue et al. All rights reserved.

Specifying Process Views for a Measurement, Evaluation, and Improvement Strategy (Sun, 19 Feb 2012 13:40:44 +0000)
http://www.hindawi.com/journals/ase/2012/949746/
Any organization that develops software strives to improve the quality of its products. Doing this first requires an understanding of the quality of the current product version. Then, by iteratively making changes, the software can be improved with subsequent versions. But this must be done in a systematic and methodical way, and, for this purpose, we have developed a specific strategy called SIQinU (Strategy for understanding and Improving Quality in Use). SIQinU recognizes problems of quality in use through the evaluation of a real system-in-use situation and proposes product improvements by understanding and making changes to the product’s attributes. Then, by reevaluating the quality in use of the new version, improvement gains can be gauged along with the changes that led to those improvements. SIQinU aligns with GOCAME (Goal-Oriented Context-Aware Measurement and Evaluation), a multipurpose generic strategy previously developed for measurement and evaluation, which utilizes a conceptual framework (with an ontological base), a process, and methods and tools. Since defining SIQinU relies on numerous phase and activity definitions, in this paper we model different process views, for example, taking into account activities, interdependencies, artifacts, and roles, while illustrating them with excerpts from a real case study.
Pablo Becker, Philip Lew, and Luis Olsina. Copyright © 2012 Pablo Becker et al. All rights reserved.

Program Spectra Analysis with Theory of Evidence (Wed, 15 Feb 2012 18:23:03 +0000)
http://www.hindawi.com/journals/ase/2012/642983/
This paper presents an approach to automatically analyzing program spectra, an execution profile of program testing results, for fault localization. Using a mathematical theory of evidence for uncertainty reasoning, the proposed approach estimates the likelihood of faulty locations based on evidence from program spectra. Our approach is theoretically grounded and can be computed online. Therefore, we can predict fault locations immediately after each test execution is completed. We evaluate the approach by comparing its performance with the top three performing fault localizers using a benchmark set of real-world programs. The results show that our approach is at least as effective as the others, with an average effectiveness (the reduction of the amount of code examined to locate a fault) of 85.6% over 119 versions of the programs. We also study the impacts of the quantity and quality of program spectra on our approach, where quality refers to the spectra’s support in identifying that a certain unit is faulty. The results show that the effectiveness of our approach improves slightly with a larger number of failed runs but not with a larger number of passed runs. Increasing the spectra support quality from 1% to 100% improves the approach’s effectiveness by 3.29%.
Rattikorn Hewett. Copyright © 2012 Rattikorn Hewett. All rights reserved.
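To illustrate the flavor of evidence-theoretic spectra analysis described above, the sketch below treats each test result as a mass function over a four-statement frame (a failed test supports "the fault is among the covered statements", a passed test weakly supports the complement), combines the mass functions with Dempster's rule, and ranks statements by belief. The toy spectra, the mass weights, and the single-fault framing are assumptions made for the example; this is not the paper's formulation or its evaluation.

```python
from functools import reduce

STATEMENTS = frozenset(["s1", "s2", "s3", "s4"])     # frame of discernment: possible fault sites

def mass_from_test(covered, failed):
    """Turn one test result into a mass function over subsets of STATEMENTS.
    A failed test supports 'the fault is among the covered statements'; a passed test
    weakly supports the complement; the remaining mass goes to total ignorance."""
    covered = frozenset(covered)
    focal = covered if failed else STATEMENTS - covered
    weight = 0.6 if failed else 0.3                  # assumed evidence strengths
    return {focal: weight, STATEMENTS: 1.0 - weight}

def combine(m1, m2):
    """Dempster's rule of combination (conflicting mass is renormalised away)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass of focal sets contained in H."""
    h = frozenset(hypothesis)
    return sum(w for s, w in m.items() if s <= h)

# Toy program spectra: (statements covered by the test, did the test fail?).
spectra = [({"s1", "s2", "s4"}, True),
           ({"s1", "s3", "s4"}, False),
           ({"s2", "s4"}, True),
           ({"s1", "s3"}, False)]

m = reduce(combine, (mass_from_test(c, f) for c, f in spectra))
for s in sorted(STATEMENTS, key=lambda s: belief(m, {s}), reverse=True):
    print(s, round(belief(m, {s}), 3))               # s2 accumulates the most belief
```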
Dynamic Context-Aware and Limited Resources-Aware Service Adaptation for Pervasive Computing (Thu, 02 Feb 2012 14:19:40 +0000)
http://www.hindawi.com/journals/ase/2011/649563/
A pervasive computing system (PCS) requires that devices be context aware in order to proactively provide adapted services according to the current context. Because of the highly dynamic environment of a PCS, the service adaptation task must be performed during device operation. Most of the proposed approaches do not deal with the problem in depth, because they are either not really context aware or do not treat the problem itself as dynamic. Devices in a PCS are generally handheld, that is, they have limited resources, and so, in the effort to make them more reliable, service adaptation must take this constraint into account. In this paper, we propose a dynamic service adaptation approach for a device operating in a PCS that is both context aware and limited-resources aware. The approach is then modeled using colored Petri nets and simulated using CPN Tools, an important step toward its validation.
Moeiz Miraoui, Chakib Tadj, Jaouhar Fattahi, and Chokri Ben Amar. Copyright © 2011 Moeiz Miraoui et al. All rights reserved.

The Study of Resource Allocation among Software Development Phases: An Economics-Based Approach (Thu, 12 Jan 2012 14:25:38 +0000)
http://www.hindawi.com/journals/ase/2011/579292/
This paper presents an economics-based approach for studying the problem of resource allocation among software development phases. Our approach is structured along two parallel axes: theoretical and empirical. We developed a general economic model for analyzing the allocation problem as a constrained profit maximization problem. The model, based on a novel concept of a software production function, considers the effects of different allocations of development resources on output measures of the resulting software product. An empirical environment for evaluating and refining the model is presented, and a first exploratory study for characterizing the model’s components and developers’ resource allocation decisions is described. The findings illustrate how the model can be applied and validate its underlying assumptions and usability. Future quantitative empirical studies can refine and substantiate various aspects of the proposed model and ultimately improve the productivity of software development processes.
Peleg Yiftachel, Irit Hadar, Dan Peled, Eitan Farchi, and Dan Goldwasser. Copyright © 2011 Peleg Yiftachel et al. All rights reserved.

Evaluation of Tools and Slicing Techniques for Efficient Verification of UML/OCL Class Diagrams (Tue, 27 Sep 2011 13:49:54 +0000)
http://www.hindawi.com/journals/ase/2011/370198/
UML/OCL class diagrams provide high-level descriptions of software systems. Currently, UML/OCL class diagrams are widely used for code generation through several transformations in order to save software developers’ time and effort. Therefore, verification of these class diagrams is essential in order to generate accurate transformations. Verification of UML/OCL class diagrams is quite a challenging task when the input is large (i.e., a complex UML/OCL class diagram). In this paper, we present (1) a benchmark for UML/OCL verification and validation tools, (2) an evaluation and analysis of tools available for the verification and validation of UML/OCL class diagrams, including the range of UML support for each tool, (3) the problems with the efficiency of the verification process for UML/OCL class diagrams, and (4) a solution for efficient verification of complex class diagrams.
Asadullah Shaikh, Uffe Kock Wiil, and Nasrullah Memon. Copyright © 2011 Asadullah Shaikh et al. All rights reserved.