Abstract

Software should be a product that users can operate easily and accurately, and that can be improved quickly and accurately when problems arise. In many software development projects, software requirements are frequently modified during the design or development phase, which tests the capabilities of the designers and developers involved. Software development environmental factors (SDEFs), such as differences in mutual work recognition among users, developers, and testers, or knowledge differences, can hinder communication, which may lead to faulty development owing to erroneous job definition. Because the exact size and scope of the software cannot be calculated, the risk of excessive requirements in terms of schedule, cost, and manpower may increase. This study investigates 32 SDEFs to examine their influence on the reliability of software developed by Korean companies, identifies the factors with the highest influence, and compares the differences with previous studies. Moreover, we examined whether any SDEFs entered the top 10 rankings for Korean companies only, without appearing in the rankings for the US companies covered in previous studies. A factor analysis revealed several potential factors and identified the mutually independent characteristics of the factors. Through statistical analysis, the differences between group means and the impact on improving software reliability were found for Korean companies. These findings can benefit software developers and managers working in countries with different or similar cultures and help increase working efficiency, i.e., work versus time investment, and improve software reliability.

1. Introduction

When software is developed for personal use, the schedule or quality of development is not a major concern. However, good software that is meant for sale to the general public should be user-friendly, error-free, efficient, reliable, and utilitarian to satisfy its users. Software systems that offer user convenience must provide improved solutions for immediate problems in multiple industries. Software development becomes a more complex process as the requirements for system quality increase. With more elaborate software projects and pressing deadlines, software applications have also become larger and more integrated. Projects and their applications require interaction between diverse individuals in different roles to deliver projects on time, within budget, and in adherence to feature and functional requirements [1]. Software testing is indispensable for ensuring quality, whereby the information technology (IT) unit makes critical contributions to the strategic objectives set by the firm [2]. However, the continuing mismatch among the goals of testers, developers, and the IT unit remains large and threatens the firm’s corporate success [3]. Paternoster et al. [4] conducted a systematic mapping study by developing a classification schema, ranking the selected primary studies according to their rigor and relevance, and analyzing the reported software development work practices in start-ups. Turk et al. [5] explained the integrated agile development environment to identify its underlying assumptions and limitations. Currently, many software companies are working hard to match the advantages of open-source software.

1.1. Motivation

As the era of the 4th Industrial Revolution arrives, the importance of software is increasing day by day, and it is a field of interest not only for global companies but also for all countries. Recently, many companies and countries have been investing in various fields to focus on software development. In addition, noting the importance of investing in research and development (R&D), the Organization for Economic Co-operation and Development (OECD) countries and other nations have been increasing support for R&D as part of their science, technology, and industrial policy. As technological development relies on R&D, governments have been leveraging institutions and policies to expand such investments in the public and private sectors to achieve qualitative improvements. Looking at the gross domestic product (GDP) by country in 2019 in Table 1, the GDP is 21.4, 14.3, 5.1, 3.9, and 2.9 trillion dollars for the USA, China, Japan, Germany, and India, respectively. In comparison, Korea’s GDP is relatively small at 1.6 trillion dollars. Likewise, for 2018, the GDP is 20.6, 13.9, 5.0, 4.0, and 2.9 trillion dollars for the USA, China, Japan, Germany, and the UK, respectively, while Korea’s GDP is again relatively small at 1.7 trillion dollars [6]. Table 2 shows the gross domestic spending on R&D [7]. It shows that the US has higher R&D investment than Korea and Japan, but its investment ratio to GDP is the lowest of the three. Korea has a lower total investment than the US and Japan, but its rate of investment and growth relative to GDP has been rising rapidly [8]. Many countries are making efforts to increase R&D investments in the private and public sectors.

According to Forbes magazine, Apple once again rose to be the world’s largest technology company in 2020. Despite Korea’s small GDP, there are a relatively large number of Korean companies on the world’s top companies list, such as Samsung, Hyundai Motor, SK Hynix, POSCO, and LG Electronics. This has been made possible not just because of hardware technology but because these companies realized the importance of software development and considered the factors affecting the software development process. Samsung is one such example. Samsung’s first smartphone represented a tremendous improvement in terms of hardware. However, at launch it gained the reputation of being one of the worst smartphones because of software optimization issues. It is important to ensure natural usability through software. Samsung is also attempting to develop new software to lead the global smartphone industry. More generally, various problems are encountered in software development. To overcome these problems, developers have used various approaches [9], and the development of communication technology is one such approach. In this study, we investigate cultural differences, human cooperation, and tactics aimed at reducing the temporal distance, rather than the technical aspects. Considering the current status of the mobile content industry, global app store consumer spending in 2019 was $120 billion, an increase of 2.1 times compared to 2016, and mature markets such as the US, Japan, Korea, and the UK have been driving the growth of app store consumer spending. The main contributing factors include increased consumer awareness in the mobile industry, increased numbers of mobile device users and time spent on mobile devices, advertising revenue from apps, the increased influence of mobile games, and increased subscription-based in-app spending.
Games account for 72% of the app store consumer spending, while subscription services drive the spending growth for nongame apps. In 2019, the number of global app store app downloads (Google Play) was 204 billion, an increase of 45% compared to 2016 and an increase of 6% compared to the previous year. The number of mobile subscribers worldwide is expected to increase by 1.9% per year from 5.1 billion in 2018 to 5.8 billion by 2025, and the growth of the mobile industry is expected to continue until 2025 [10].

1.2. Research Objectives

Alexy et al. [11] found, through a mixed-method research design, that support for organizational innovation allowing commercial engagement with open-source software has varying impacts on the technical and administrative dimensions of different job roles. They found that individual-level attributes can counterbalance job role changes that weaken support for adopting open-source software, while perceived organizational commitment has no effect. Ropponen and Lyytinen [12] examined how risk management and environmental factors, such as the development methods and managers’ experience, influence risk components. Their analysis shows that awareness of the importance of risk management, together with systematic practices to manage risks, influences risks related to scheduling, requirements management, and personnel management. Zhang and Pham [13] defined software development environmental factors (SDEFs) related to the software development process. In addition, they conducted a survey to rank the factors by their influence on software reliability assessment, performed exploratory analyses, and presented various opinions on improving software reliability. Opinions were presented through additional investigations and analysis results to examine the changes made by software development practitioners. In many software development projects, software requirements are frequently modified during design or development, which tests the capabilities of the designers and developers involved. SDEFs, such as differences in mutual work recognition among users, developers, and testers, or knowledge differences, can hinder communication, which may lead to faulty development owing to erroneous job definition. Because the exact size and scope of the software cannot be calculated, the risk of excessive requirements in terms of schedule, cost, and manpower may increase.

The objectives of this study are as follows:
(1) This study investigates 32 SDEFs to examine the influence of factors affecting the software reliability of Korean companies and to identify the factors with high influence.
(2) We find the most important SDEFs for Korean companies and the correlations between specific SDEFs.
(3) A factor analysis is conducted to identify the mutually independent characteristics of the factors by finding several potential (correlated) factors in the SDEFs, and the differences in group means are investigated through analysis of variance (ANOVA). This study also uses regression analysis to describe the impact of the component factors on improving software reliability and analyzes the stages of the development lifecycle.
(4) Finally, we compare the results with previous studies to find the characteristics that affect only Korean companies and discuss the causes of the differences.

Section 2 presents the details of the data collected. Section 3 presents the results of various analyses (factor, correlation, regression analysis, etc.) to examine the diversity of Korean companies. Section 4 compares the new findings related to Korean companies with the results of previous studies. It also explains the causes for the existence of differences and similarities in software development. Section 5 concludes the study and discusses the differences.

2. Data Collection

Likert [14] introduced the Likert scale and technique, wherein an individual is invited to express their attitude toward each statement by choosing among a number of r grades on the r-grade Likert scale. The most popular are the five-grade and seven-grade Likert scales. We used the survey by Zhang and Pham [13], which included 32 SDEFs and background information on the software developer. In the survey form [13], 1 indicates “not important,” i.e., it has almost no effect. Conversely, 7 indicates “most important,” which implies it has a significant impact on software reliability. The data were collected through a formal survey and an Internet (e-mail) survey administered directly to the managers, system engineers, and software developers of 11 organizations (mobile software development, general software development companies, etc.) in Korea; 89 survey responses were obtained, of which 75 were used, excluding a few surveys deemed unfit for analysis (written incorrectly or with all values the same). R, SPSS V.24, and Excel 2014 were used for the analysis. Table 3 summarizes the demographic data of the participants. The sample had a relatively good mixture of software development experience, size diversity, and program categories, and also represented a good mixture of software development forms. The average number of years of software development experience was 8.41, and the average percentage of reused code was 49.13%.

3. Results

3.1. Relative Importance of SDEFs
3.1.1. Importance of SDEFs Based on the Relative Weighted Method

The survey consisted of 32 SDEFs, with 7 points representing the most important factor for software reliability and 1 point indicating the least important. The relative weighting method was applied to each score to determine the ranking [13]. Table 4 shows the ranking of the 32 SDEFs obtained using the relative weighting method. Typically, an environmental factor with a higher normalized weight has a more significant impact on the software reliability assessment than a factor with a lower weight. Therefore, software developers should pay more attention to SDEFs with higher normalized weights. As can be seen in Table 4, in Korean companies, the normalized weight of factor f11 (requirement analysis) is the highest at 0.03667, followed by f15 (programmer skill) at 0.03634, f12 (relationship of detailed design to requirement) at 0.03578, f20 (human nature) at 0.03449, f22 (testing effort) at 0.03449, and f24 (testing methodologies) at 0.03449.
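The paper applies the relative weighting method of [13] without reproducing the formula. A minimal sketch of one plausible normalization, assuming a factor's weight is its total survey score divided by the grand total over all factors (the scores and the three-factor setup here are invented for illustration, not the paper's data):

```python
# Hypothetical responses: rows are respondents, columns are SDEFs, on a 1-7 scale.
scores = [
    [7, 5, 3],
    [6, 6, 2],
    [7, 4, 4],
]

# Assumed normalization: each factor's weight is its column total divided by the
# grand total, so the weights sum to 1 and order the factors by relative importance.
col_totals = [sum(row[j] for row in scores) for j in range(len(scores[0]))]
grand_total = sum(col_totals)
normalized_weights = [t / grand_total for t in col_totals]

# Factor indices sorted from most to least important.
ranking = sorted(range(len(normalized_weights)),
                 key=lambda j: normalized_weights[j], reverse=True)
```

With this construction, a factor scored highly by every respondent necessarily rises in the ranking, mirroring how Table 4 orders the 32 SDEFs.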

3.1.2. Correlation Analysis between SDEFs

This section describes a correlation analysis performed to examine the correlations between SDEFs. The Pearson correlation coefficient (Pearson’s r) is a measure of the linear dependence between two variables, where 1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative linear correlation [15]. Table 5 shows the correlations of the top 10 SDEFs with the highest normalized weights in Table 4. The correlation analysis was performed over all 32 SDEFs; however, only the SDEFs significant with respect to the 10 SDEFs in Table 5 were tabulated, and only factors with a strong correlation were retained (r > 0.4 or r < −0.4). As one example, f11 (requirements analysis) and f12 (relationship of detailed design to requirement), as well as f11 (requirements analysis) and f13 (work standards), have a strong positive linear relationship.
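Pearson's r as defined above can be computed directly; a minimal sketch on made-up respondent scores for two factors (the `f11`/`f12` values are hypothetical, not from the paper's data):

```python
import math

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of their
    standard deviations; ranges from -1 (total negative linear correlation)
    through 0 (none) to 1 (total positive linear correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-7 scores for two SDEFs over five respondents.
f11 = [7, 6, 5, 7, 4]
f12 = [6, 6, 4, 7, 3]
r = pearson_r(f11, f12)
```

A value of r above the paper's 0.4 cutoff would place the pair in Table 5's "strong correlation" set.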

3.2. Environmental Factor Analysis
3.2.1. Factor Analysis

Factor analysis was conducted to verify the construct validity of the questionnaire. Construct validity is determined by the suitability of the items included in the questionnaire for evaluating the set of hypothesized constructs. Factorial validity is a form of construct validity established through factor analysis. Therefore, factor analysis was conducted to verify the validity of the items related to the improvement of the software reliability assessment [16]. Factor analysis was performed using the principal component method, with varimax as the orthogonal rotation method. The eigenvalues of the components, proportion of variance, and cumulative proportion of variance are presented in Table 6; 70.973% of the variation can be explained by the first four components. The first to fourth eigenvalues are greater than 1, whereas the fifth and subsequent eigenvalues are less than 1. Components with eigenvalues less than 1 are not considered stable: they account for less variability than a single variable does and are therefore not retained in the analysis. In this sense, we end up with fewer factors than the original number of variables [17]; hence, the first four components are retained. As a result, the selected factors correspond to the top 13 most important SDEFs based on the ranking results obtained by the relative weighting method in Table 4. We can observe that these SDEFs cover many aspects of the development and testing of software, and that the majority of them have a similarly significant impact on software reliability.
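The retention rule described above (keep only components with eigenvalue greater than 1, the Kaiser criterion) can be sketched as follows. The eigenvalues here are invented so that exactly four exceed 1; they are not the values reported in Table 6:

```python
# Hypothetical eigenvalues of a 13-item correlation matrix (they sum to 13,
# the trace of the matrix); not the paper's reported values.
eigenvalues = [4.9, 2.2, 1.3, 1.1, 0.9, 0.7, 0.5, 0.4,
               0.3, 0.3, 0.2, 0.1, 0.1]

# Kaiser criterion: retain only components whose eigenvalue exceeds 1, i.e.,
# components that explain more variance than a single original variable would.
retained = [ev for ev in eigenvalues if ev > 1]

# Cumulative proportion of variance explained by the retained components
# (the paper reports 70.973% for its first four components).
cumulative_proportion = sum(retained) / sum(eigenvalues)
```

Under these assumed eigenvalues, four components survive and explain roughly 73% of the variance, analogous to the four-component solution in Table 6.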

As shown in Table 7, we retained four principal components: PCA1, PCA2, PCA3, and PCA4. The first component, PCA1, consists of factors f24, f25, and f22, which are related to testing ability, whereas PCA2 consists of factors f15, f19, f20, f7, and f18, which are related to programmer ability. PCA3 consists of factors f11, f12, and f13, and PCA4 consists of factors f2 and f6, which are related to requirements and standards, and program utility, respectively. All four components had high loading values, exceeding 0.5. Several measures of reliability have been proposed, but in this study, Cronbach’s α was used to test the internal consistency of the items. Cronbach’s α typically increases as the intercorrelations among test items increase and is thus known as an internal consistency estimate of the reliability of test scores. If the Cronbach’s α coefficient is more than 0.6, the scale is considered reliable [18]. As can be seen in Table 7, Cronbach’s α is 0.786, which is greater than 0.6.
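Cronbach's α as used above can be computed from the item scores directly; a minimal sketch on made-up item responses (not the paper's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the total score), where k is the number of items.
    Uses the sample variance throughout."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(sample_var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Hypothetical 1-7 responses for three scale items over four respondents.
items = [
    [7, 5, 6, 4],
    [6, 5, 7, 4],
    [7, 4, 6, 5],
]
alpha = cronbach_alpha(items)
```

Because these invented items move together across respondents, α comes out well above the 0.6 threshold the paper uses for acceptable internal consistency.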

3.2.2. Hypothesis Test

A statistical hypothesis test is a method of statistical inference that is testable based on observing a process modeled via a set of random variables [19]. Hypothesis tests are used to determine the outcomes of a study that would lead to a rejection of the null hypothesis at a pre-specified level of significance. The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by identifying two conceptual types of errors (type 1 and type 2) and by specifying parametric limits on, for instance, how much type 1 error will be permitted. The null hypothesis is the hypothesis that contradicts the theory to be proved, whereas the alternative hypothesis is the hypothesis consistent with the theory to be proved. The hypothesis tests reveal whether the groups’ opinions can be considered “the same.”

Hypothesis 1. People developing software for different applications have the same opinion on the importance of the four new principal components. The safety-critical, commercial, inside-used system, and other groups may not have the same opinion on the significance of all these factors.

Hypothesis 2. People with different types of software development experience have the same opinion on the significance of the four new principal components. Database, operating system, communication control, language processor, and other categories are considered.

Hypothesis 3. People with different numbers of years of software development experience have the same opinion on the significance of the four new principal components. Less than 5 years, 6–10 years, and more than 11 years are considered here.

Hypothesis 4. People playing different roles in software development have the same opinion on the significance of the four new principal components. Managers, system engineers, programmers, testers, administrators, and others are considered here.

Hypothesis 5. People with different numbers of years of software development experience have the same opinion on the significance of the percentage of reused code. Less than 5 years, 6–10 years, and more than 11 years are considered here.

We performed ANOVA on Hypotheses 1 to 5. ANOVA is useful for comparing three or more means (groups or variables) for statistical significance and is suitable for a wide range of practical problems. The results for the given hypotheses are presented in Tables 8–12. For Hypothesis 1, there was no significant difference between the groups, as shown in Table 8. For Hypothesis 2, there is a significant difference only in the program utility component factor group, as shown in Table 9. A post-hoc analysis was performed using Scheffe’s test; however, there was no significant difference between the groups. For Hypothesis 3, there is a significant difference only in the programmer ability component factor group, as shown in Table 10. In the post-hoc analysis, there was a significant difference between the groups with less than 5 years and more than 11 years of experience. For Hypothesis 4, there is a significant difference only in the program utility component factor group, as shown in Table 11. Post-hoc analysis showed no significant differences between the groups. For Hypothesis 5, there is a significant difference in the percentage of reused code, as shown in Table 12. In the post-hoc analysis, there were significant differences between the groups with less than 5 years and 6–10 years of experience, and between those with less than 5 years and more than 11 years. In Tables 8–12, N is the frequency, M is the mean value, SD is the standard deviation, F is the ratio of between-group variance to within-group variance, indicating the degree of difference between groups, and p is the p value. Figures 1–5 show the results of the ANOVA for Hypotheses 1–5.
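The one-way ANOVA used for these hypotheses compares between-group to within-group variability. A minimal sketch of the F statistic on made-up experience-group scores (the groups and values are invented, not the paper's data; real use would also compare F against the F distribution to obtain the p value):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: mean square between groups divided by
    mean square within groups. A large F suggests the group means differ
    by more than chance variation would explain."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    ms_between = ss_between / (k - 1)        # df between = k - 1
    ms_within = ss_within / (n_total - k)    # df within = N - k
    return ms_between / ms_within

# Hypothetical component-factor scores for three experience groups
# (< 5 years, 6-10 years, > 11 years).
under_5 = [4.2, 4.5, 4.1, 4.4]
years_6_10 = [5.0, 5.2, 4.9, 5.1]
over_11 = [5.8, 6.0, 5.7, 5.9]
f_stat = one_way_anova_f([under_5, years_6_10, over_11])
```

Here the invented group means clearly separate, so F is large; identical group means would drive F to zero.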

3.2.3. Regression Analysis

A regression analysis was performed to determine the effect of the four component factors obtained through factor analysis on the improvement of software reliability accuracy. Table 13 shows the correlations between the four component factors from the factor analysis. As can be seen in Table 13, testing ability is correlated with requirements and standards and with program utility, whereas programmer ability is correlated with requirements and standards. Furthermore, requirements and standards is correlated with all the other factors, and program utility is correlated with testing ability and with requirements and standards. The regression analysis was performed using a stepwise method to examine the influence of the constructs on the improvement of software reliability accuracy. Figure 6 shows the correlation results for the four new component factors.

Table 14 presents the results of the regression analysis. As a rule of thumb, a predictor’s coefficient is considered statistically significant if its p value is smaller than 0.05. As can be seen in Table 14, two of the four component factors were excluded because their p values were greater than 0.05. The coefficients indicate how much the dependent variable increases for a one-unit increase in each predictor; hence, a 1-point increase in testing ability corresponds to a 0.058-point increase in software reliability accuracy. The beta coefficients allow us to compare the relative strengths of the predictors.
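The coefficient interpretation above (a one-unit increase in a predictor corresponds to a coefficient-sized change in the response) can be illustrated with a minimal least-squares fit. This is a single-predictor sketch on made-up data, not the paper's stepwise multiple-regression model:

```python
def ols_fit(x, y):
    """Simple least-squares fit y ~ intercept + slope * x.
    The slope is the expected change in y per one-unit increase in x,
    which is exactly how the regression coefficients are read in the text."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical testing-ability scores and reliability-improvement ratings.
testing_ability = [3, 4, 5, 6, 7]
reliability = [4.1, 4.3, 4.2, 4.6, 4.5]
intercept, slope = ols_fit(testing_ability, reliability)
```

A positive fitted slope plays the role of the 0.058 coefficient quoted from Table 14: each extra point of testing ability predicts that many more points of the response.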

3.3. Analysis of Development Phase
3.3.1. Analysis between Environmental Factor Groups

We want to know how the four software development phases, excluding the hardware systems phase, among the five groups categorized by Zhang and Pham [13] differ from each other. There was a significant difference among the four groups, as shown in Table 15. Tukey grouping was applied in the post-hoc analysis to group the different development phases based on the mean values of the participants’ scores; however, there was no significant difference between the individual groups. The mean score of the analysis and design phase was the highest at 5.545, and the mean score of the coding phase was the lowest at 5.247. The mean scores of the general and testing phases were 5.497 and 5.265, respectively.

3.3.2. Significant Factor for Each Development Phase

As discussed in Section 1, software development projects are subject to frequent modifications, and SDEFs can hinder communication between users, developers, and testers. Thus, every development phase requires data on the SDEFs significant for software reliability assessment. This study applies a stepwise elimination method to eliminate nonsignificant SDEFs in each development phase; the variables in the stepwise elimination for each phase were the environmental factors belonging to that phase. Table 16 presents the significant environmental factors for each phase, together with their parameter estimates, t values, p values, and so on. The results show that the significant environmental factors are positively correlated with software reliability improvement. In the analysis and design phase, f14, development management, is one of the significant environmental factors. F18, program workload, and f20, human nature, are significant environmental factors in the coding phase. Moreover, f22, testing effort, is also a significant environmental factor. However, in the general phase, there were no significant environmental factors. F18, program workload; f20, human nature; and f22, testing effort, have high rankings in Table 4.

4. Comparison

Zhang and Pham [13] conducted a survey in the early 2000s to assess the importance of SDEFs. More recently, Zhu et al. [20] investigated the changes in the impact of SDEFs. This section compares the results of this study with those of previous studies. Through this comparison, we want to provide software developers working in different countries with the important environmental factors in software development so as to increase working efficiency (with respect to work versus time investment) in software reliability improvement.

4.1. Comparison of SDEFs Ranking

Since the three analyses use different population samples from different countries, and the times of the studies also differ, the ranking results differ. It is interesting to see how the ranking of SDEFs according to the relative weighting method changes over time and how it varies by country. The three sets of results for the top 10 ranks are listed in Table 17. As shown in Table 17, the results of Zhang and Pham [13] and Zhu et al. [20] are similar; this is discussed in detail in Zhu et al. [20]. The 10 new ranks also contain similar SDEFs. However, there are three different environmental factors: f20, human nature; f18, program workload; and f7, programming language. First, the working area, working time, vacation, holidays, salary, relationship with the boss, and culture are different. In Korea, software engineers are distributed across several categories: medium/large core product development departments, large IT companies, agencies, enterprises, national laboratories, online/mobile game developers, high-growth start-ups, and others. In the medium/large core product development departments, there are numerous high-class talents, and the working hours are long, but the treatment is good. The “IT department” of a company is not a core department but rather a department responsible for the computerization of the company. Moreover, since agencies compete on price rather than quality from the outset, it is not easy for their engineers to receive good treatment. In the case of enterprises and national laboratories, the quality of human resources is high, but the intensity of work is not; however, remuneration is low. In the case of online/mobile game developers, experienced workers receive good treatment, but the average remuneration is not high, as the percentage of junior engineers (1–3 years after graduation) is high.
In the case of high-growth start-ups in Korea, where competition for securing high-quality talent is fierce, treatment is good, and such companies strive to provide a pleasant working environment for developers. In comparison to the US Silicon Valley, the proportion of large IT companies, agencies, and online/mobile game developers is relatively low in Korea. IT/infrastructure management is outsourced to low-cost markets, such as India or China, or moved to cloud services. On the other hand, high value-added industries, corresponding to the medium/large core product development departments of large enterprises and to high-growth start-ups, are developing adequately. F11, requirements analysis, is the most important environmental factor in the software development process. The customer’s knowledge level has increased to the extent that the program specifications and other factors vary according to customer requirements. In Korea, consumer requirements are changing faster than in the past. In addition, Korea has superior skills in IT-related industries compared to other countries.

4.2. Comparison of Principal Components

As shown in Table 18, Zhang et al. [21] and Zhu et al. [20] selected the top 11 and top 10 SDEFs, respectively. In this study, the top 13 SDEFs were selected for factor analysis, and factor analysis was applied to identify the four common factors. The new principal components of this study are well structured by software development phase compared to other studies.

4.3. Comparison of Significant SDEFs within Each Development Phase

As shown in Table 19, this study and Zhu et al. [20] used the Tukey method to group the four development phases by mean score, whereas Zhang et al. [21] used the SNK multiple comparison test. This study and Zhang et al. [21] obtained only one final group; however, in Zhu et al. [20], the phases split into two separate groups: the testing and general phases. In this study, the analysis and design phase is the most important development phase in software development, whereas in the previous studies, the testing phase was the most important.

A comparison of significant environmental factors is shown in Table 20. F6, the percentage of reused modules, is the most significant factor in the general phase in both Zhang et al. [21] and Zhu et al. [20]; however, it is not among the significant factors in the new results in Table 20. In the analysis and design phase, f14, development management, is one of the significant environmental factors. F18, program workload, and f20, human nature, are significant environmental factors in the coding phase. Moreover, f22, testing effort, is also a significant environmental factor. The differences in the significant factors across the three papers are due to differences in working environment, time, and other aspects, as described in Section 4.1.

4.4. Comparison of the Percentage of the Time Allocation for Software Development Phase

In this study, the percentages of time allocation for the software development phases (analysis, design, coding, and testing) were 21.33%, 25.33%, 30.40%, and 22.94%, respectively. A comparison of the time allocation percentages is presented in Table 21. Compared to Zhang and Pham [13] and Zhu et al. [20], the percentages of time allocation were 3.7% and 0.7% lower in the analysis phase and 5.6% and 3.6% lower in the coding phase, respectively; however, they were 7.3% and 5.3% higher in the design phase. The time allocation for the testing phase had a similar percentage. Overall, the difference in each phase is small, but the design phase appears to be more important in Korean companies.

5. Conclusions and Remarks

Software systems that offer user convenience must provide improved solutions for immediate problems in multiple industries. When software is developed for personal use, the schedule or quality of development is not a major concern. However, good software for sale to the general public should be user-friendly, error-free, efficient, reliable, and utilitarian to satisfy its users. In many software development projects, software requirements are frequently modified during design or development, which tests the capabilities of the designers and developers involved. SDEFs, such as differences in mutual work recognition between users, developers, and testers, or knowledge differences, can hinder communication, which may lead to faulty development due to erroneous job definition. This study analyzes the degree of impact of each SDEF on the software reliability assessment in Korean companies. It investigates 32 SDEFs to examine the impact of factors affecting the software development environment of Korean companies. The results were then compared to those of Zhang and Pham [13], Zhang et al. [21], and Zhu et al. [20]. It is worth noting that three SDEFs, human nature (rank #4), program workload (rank #7), and programming language (rank #10), are in the top 10 rankings for the first time based on our survey data collected from Korean companies, but have never been among the top 10 in previous studies of US companies. These findings can benefit software developers and managers working in countries with different or similar cultures, and show how to allocate resources by identifying up-to-date significant SDEFs in software development to increase working efficiency, i.e., work versus time investment, in software reliability improvement.

Briefly summarizing the results:

(1) Data were collected in 11 organizations through formal survey questionnaires and Internet surveys completed directly by software developers or managers. Data from 75 surveys were used in the analysis.

(2) For Korean companies, the normalized weight value of factor f11, requirements analysis, is the highest, followed by f15, programmer skill; f12, relationship of detailed design to requirement; f20, human nature; and f22, testing effort.

(3) Factor analysis identified four principal components. The first component, PCA1, consists of factors f24, f25, and f22, which relate to testing ability, whereas PCA2 consists of factors f15, f19, f20, f7, and f18, which relate to programmer ability. PCA3 consists of factors f11, f12, and f13, which relate to requirements, and PCA4 consists of factors f2 and f6, which relate to standards and program utility, respectively. Cronbach’s α is 0.786, indicating acceptable reliability because it exceeds the 0.6 threshold.

(4) We performed ANOVA on Hypotheses 1 to 5. For Hypothesis 1, there was no significant difference across groups. For Hypothesis 2, there was a significant difference only in the program utility component factor group; however, post-hoc analysis using Scheffe’s method found no significant difference between groups. For Hypothesis 3, there was a significant difference only in the programmer ability component factor group, and post-hoc analysis showed a significant difference between the groups with less than 5 years and more than 11 years of experience. For Hypothesis 4, there was a significant difference only in the program utility component factor group, but post-hoc analysis again found no significant difference between groups. For Hypothesis 5, there was a significant difference in the percentage of reused code; post-hoc analysis showed significant differences between the groups with less than 5 years and 5 to 10 years, and between less than 5 years and more than 11 years.

(5) Regression analysis of whether the four component factors affect software reliability accuracy confirmed that testing ability and programmer ability influence the improvement of software reliability accuracy.

(6) In the analysis and design phases, f14, development management, is a significant environmental factor. In the coding phase, f18, program workload, and f20, human nature, are significant environmental factors, and f22, testing effort, is also significant. In the general phase, however, there were no significant environmental factors. Factors f18, program workload; f20, human nature; and f22, testing effort, rank highly under the relative weighted method described in Table 3.
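Two of the statistics summarized above, Cronbach's α and one-way ANOVA across experience groups, can be reproduced on toy data. The following is a minimal sketch using synthetic scores (not the study's survey data); the group means, sizes, and random seed are illustrative assumptions:

```python
# Hedged sketch: Cronbach's alpha and one-way ANOVA on synthetic survey-like data.
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Internal-consistency estimate for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(42)
trait = rng.normal(3.0, 1.0, size=(75, 1))           # shared latent trait
scores = trait + rng.normal(0.0, 0.7, size=(75, 4))  # 4 items loading on it
alpha = cronbach_alpha(scores)                       # > 0.6 is the study's cutoff

# One-way ANOVA across three hypothetical experience groups (<5, 5-10, >=11 years)
g1 = rng.normal(3.0, 1.0, 25)
g2 = rng.normal(3.2, 1.0, 25)
g3 = rng.normal(3.9, 1.0, 25)
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"alpha={alpha:.3f}, F={f_stat:.2f}, p={p_value:.4f}")
```

The study used Scheffe's method for post-hoc pairwise comparisons; `scipy` does not provide a Scheffe test directly, so in practice one would compare pairwise mean differences against the Scheffe critical value, or substitute a test such as Tukey's HSD from `statsmodels`.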

We compared the results of this study with those of previous studies. First, three environmental factors differ from the previous studies: f20, human nature; f18, program workload; and f7, programming language. These factors are influenced by working area, working hours, vacations, holidays, salary, relationships with superiors, and culture. Second, in the two earlier papers, testing played the more important role, whereas this study shows that analysis and design play a more important role than the other phases. Third, a comparison of the significant environmental factors suggests that the differences among the three papers' results stem from differences in working environment, working time, and other conditions. Finally, we compared the time allocation across the software development phases; the differences in each phase are small, but the design phase appears to be more important in Korean companies. These results provide general guidance on the important factors to consider for software developers and managers working in a culture or environment similar to that of Korean companies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant Nos. NRF-2015R1D1A1A01060050, NRF-2018R1D1A1B07045734, and NRF-2019R1A6A3A01091493).