ISRN Education
Volume 2012, Article ID 872196, 7 pages
Research Article

The Impact of IT Management on the Efficiency of Top US Liberal Arts Colleges

James E. Eckles

Information Services, Rhodes College, 2000 North Parkway, Memphis, TN 38112, USA

Received 6 December 2011; Accepted 12 February 2012

Academic Editors: K. Capps and I. Shibley

Copyright © 2012 James E. Eckles. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This study of 41 highly ranked liberal arts colleges in the United States attempts to determine whether differing information technology management practices have any observable impact on institutional efficiency, viewed in the context of a resource-based view of the colleges. The institutions are grouped into high- and low-efficiency groups according to their performance in graduating students. Five themes emerge from a review of the literature of information technology management in higher education. Ten independent variables representing IT management practices in those five areas are then compared across the high- and low-efficiency groups. No significant difference was found between the two groups on any of the ten variables. Several potential reasons for this finding are discussed.

1. Introduction

In the face of economic pressure, college and university administrators have a strong motivation to find innovative ways to make their institutions more efficient in order to survive [1]. As information-centric organizations [2], colleges and universities commonly seek efficiencies through the use of information technology (IT). Operational research suggests that, in other industries, information technology can indeed determine how effective organizations are in combining resources to produce goods or deliver services (e.g., [3, 4]).

Information technology does not by itself create graduates, research, or any other type of higher education outcome. IT is presumably an investment that extends or enhances an institution’s ability to produce these outcomes [5]. It is reasonable, therefore, to assume that IT investments and practices will have an impact on the efficiency of an organization in achieving outcomes, by increasing either efficiency or capacity. An example of increasing efficiency is electronic journal databases; being able to search through thousands of issues of journals electronically makes the discovery stage of the research process much more efficient and effective. An example of increasing capacity is online learning; with an online course management system, the number of students an institution may teach need not be limited by the number of classrooms available on campus. Melville et al. [4] pointed out that performance gains are often not realized in profitability but rather in improved quality or lowered costs for the end consumer, just as one would expect in a nonprofit industry like higher education.

Multiple studies of IT business value outside higher education have found significant and positive relationships between IT investment and organizational performance (e.g., [3, 4, 6]). Those findings may be surprising given that Information Technology represents approximately 5% of the budgets of the American liberal arts colleges studied here. While that is a seemingly small amount, 5% is important for two reasons. First, the 5% of budget spent on IT is what one may consider a strategic investment; the purpose of investing in IT is to provide resources for delivering educational and research outcomes like graduating students and publishing peer-reviewed research [5]. Second, researchers and practitioners spend much energy debating the best ways to use that 5% of the overall budget. They debate what core IT administrative choices, such as the centralization or decentralization of IT services or the relative levels of clerical and professional staffing, best serve higher education institutions [7, 8].

This paper attempts to make two contributions to our understanding of the impact of IT management on institutional performance of liberal arts colleges in the United States. A review of the literature of information technology management in higher education provides insight into the specific aspects of IT management that may impact institutional performance. Five themes emerge from the literature and their relationships are illustrated through interrelationship digraphing. In addition to identifying those IT management constructs, the study attempts to relate proxies for those constructs to institutional efficiency for a small group of selective private liberal arts colleges. The goal is to identify differences in IT management practices between relatively efficient and relatively inefficient colleges in the sample.

2. Literature

2.1. Resource-Based View

The theoretical basis for this study is the Resource-Based View of the Firm (RBV), drawn from operational research. The theory states that organizations (including higher education institutions) exist as a framework for combining bundles of resources in a unique way. In higher education, resource bundles might include such things as faculty expertise, facilities, finances, student ability, and information [9, 10]. Technology is a part of the framework by which those resources are combined to produce degreed students. Therefore, differences in the ways institutions manage technology and information may result in differences in the efficiency with which those institutions graduate students.

Multiple empirical studies of IT’s impact on institutional performance (e.g., [3, 4]) have utilized RBV as their foundational theoretical framework. In pursuing a similar line of research, Melville et al. [4] document their selection of RBV by comparing it to alternative methods used in 200 articles they reviewed related to IT business value.

RBV is appropriate for application to higher education as an industry. Conner [10] points out the industry-agnostic nature of RBV by observing that outstanding performance results “primarily from the acumen or luck of the firm in acquiring, combining, or deploying resources, rather than from the structure of the industry in which the firm finds itself” [10, page 132]. Powers and McDougall [11] used RBV to study such higher education phenomena as the ability of universities to transfer technology generated from faculty research activities. Dill [12] used RBV as the basis for understanding academic institutions as learning organizations. Further, the large number of institutions and the highly competitive nature of the higher education industry make it likely that higher education institutions obtain observable gains from administrative practices such as IT [4].

One of the difficulties in studying higher education (or any primarily nonprofit industry) through theoretical lenses developed in management or economics is determining the dependent variable that represents institutional performance. Obviously “profit” has limited meaning in the nonprofit higher education context. Outcomes studied specifically for higher education institutions include graduation rates and retention rates. However, measuring merely the levels of outputs introduces undesirable issues of scale and resource availability [13]. This can be addressed by studying the efficiency of organizations, that is, the level of output relative to levels of input. One of the core assumptions of RBV is that institutions seek to maximize the efficiency of their product (or service) production and distribution [10]. Consequently, we can look to the existing literature on efficiency in higher education for models with which to differentiate institutions.

Eckles [14] used an economic technique known as production frontier analysis to study the efficiency of outcomes, via graduation rates, for elite liberal arts colleges in the United States. Specifically, he employed the data envelopment analysis (DEA) model developed initially by Archibald and Feldman [13]. DEA is a nonparametric statistical technique that allows the researcher to identify the institutions that make the most of their available inputs and then measure all other institutions relative to those top performers. In lay terms, frontier analysis seeks a residual, not unlike more common linear regression models. Where frontier analysis differs is that instead of measuring the residual against the average performance of the sample (i.e., the best-fit line), it measures the residual against the very best performers at any given level of inputs. Those very best performers are said to be technically efficient and have a residual of zero. The nonzero residuals of the remaining institutions allow the researcher to construct a measure of efficiency relative to those top performers. Eckles [14] used a DEA model with graduation rate as the output or dependent variable; the input or independent variables included cost per undergraduate, percent of faculty who are full-time, 25th percentile SAT score of the entering class, and the percent of the entering class who were in the top 10 percent of their high school classes. This model is used in the current study to divide the sampled colleges into high-efficiency and low-efficiency groups based on the residual scores.
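The frontier-residual idea behind DEA can be illustrated in code. The following is a minimal sketch of an input-oriented CCR DEA model solved with SciPy's linear programming routine; it is not the Archibald-Feldman specification the paper employs, and the data passed to it below are purely hypothetical, but the mechanics of scoring each unit against the best performers are the same.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y):
    """Input-oriented CCR DEA efficiency scores.

    X: (n_units, n_inputs) array of inputs.
    Y: (n_units, n_outputs) array of outputs.
    Returns a score in (0, 1] per unit; 1.0 means technically efficient.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n].
        c = np.zeros(n + 1)
        c[0] = 1.0  # minimize theta, the input-contraction factor
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        for i in range(m):
            # Composite peer must use no more input than theta * x_io:
            # sum_j lambda_j * x_ij - theta * x_io <= 0
            A_ub[i, 0] = -X[o, i]
            A_ub[i, 1:] = X[:, i]
        for r in range(s):
            # Composite peer must produce at least the unit's output:
            # -sum_j lambda_j * y_rj <= -y_ro
            A_ub[m + r, 1:] = -Y[:, r]
            b_ub[m + r] = -Y[o, r]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)
```

With one input and one output, a unit producing the same output from twice the input scores 0.5, since the efficient peer could match its output with half its input.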

2.2. Information Technology Management in Higher Education

The Hoshin Planning technique of interrelationship digraphing can be used to help understand the relationships among the primary themes that emerge in the literature of IT management in higher education [15]. A review of that literature revealed five themes: alignment, centralization, governance, investment, and security. While the five areas addressed here do not necessarily represent a complete inventory of the issues faced by IT practitioners, they do represent a strong core around which more tangential topics are built.

Figure 1 illustrates the interrelationship digraph that is suggested by the literature reviewed. An arrow pointing from one item, such as Governance, to another item, such as Alignment/Strategic Planning, indicates that Governance “drives” Alignment. That is, some sort of causal or temporal relationship exists between the two. In this manner, the interrelationship digraph is a useful tool for tracking the relationships among the topics as they are revealed in the literature.

Figure 1: Interrelationship Digraph.

This digraph shows Governance as the primary driver among the five topics. It drives the four other topics, and no topics drive Governance. Governance “causes” Alignment in the sense that Governance decisions will dictate the degree of Alignment between a college or university’s IT organization and the overall institution’s goals [16]. Governance also influences the degree to which an institution Centralizes its IT resources [17] and approves and manages Investments in IT [18, 19]; Security breaches, in turn, are a potential cost of poor Governance [16].

Centralization and Investment are intermediary topics. Both are drivers for two other topics and both are the effects of two other topics. Centralization has been shown to foster Alignment [20] as well as facilitate improved information Security [21, 22]. But Governance decisions and Investments in infrastructure are what determine the degree of Centralization. Similarly, Alignment and Strategic Planning set the priorities that drive Investment that is approved through Governance. But without sufficient Investments in Security, information resources are left exposed.

Alignment and Security then are left as the primary effects. Security is primarily the result of decisions made through Governance in the form of Centralization and Investment. Likewise, Centralization, Governance, and Security are all shown to have an impact on an institution’s Alignment between organizational goals and information technology [23].

It appears, then, that the management of IT within a college or university has a complexity that mirrors the complexity of the prototypical organizational structure of higher education institutions. Though the literature helps one understand the primary topics of conversation that have emerged around higher education IT management, no grand unified theory exists for leveraging information resources to the betterment of higher education institutions.

2.3. Hypotheses

It is reasonable to expect that colleges that rank highly in overall efficiency would similarly rank high in IT efficiency. The number of employees and students supported per IT worker is used as a proxy of IT efficiency, giving us the first hypothesis.

H0: There is no difference in the number of individuals supported per IT worker at efficient and inefficient liberal arts colleges.

Literature on governance in higher education tends to advocate for the existence of a chief information officer or equivalent officer running the IT operation and for that officer to create a strategic plan for IT. Literature also advocates for broad-based decision making in IT. Assuming that those advocated positions will lead to more efficient institutions, we come to these three hypotheses.

H1a: Efficient liberal arts colleges are no more likely to have a standalone IT strategic plan.

H1b: Efficient liberal arts colleges are no more likely to have a higher ranked top IT officer.

H1c: Efficient liberal arts colleges are no more likely to have a greater number of constituencies advising their IT operations.

Arguments regarding the balance of centralization and decentralization tend to suggest that the trade-off is between efficiency and agility. Therefore it is reasonable to assume, for the sake of establishing a testable hypothesis, that greater levels of centralization will correlate with higher levels of efficiency.

H2: Efficient liberal arts colleges are no more likely to have a higher percentage of IT personnel centralized.

Funding per student and proportion of the campus budget spent on IT were selected as proxies of investment in IT, leading to two hypotheses.

H3a: Efficient liberal arts colleges spend no less on IT funding per FTE student than inefficient liberal arts colleges.

H3b: Efficient and inefficient liberal arts colleges do not differ in the percentage of overall campus budget spent on IT.

Security practices may relate with increased efficiency either because of a causal relationship or because security practices are associated with other superior IT management practices. Conversely, security practices are costly and could lead to a decrease in efficiency. The following hypotheses seek to determine if a relationship exists.

H4a: Efficient liberal arts colleges are no more likely to have an IT security risk assessment.

H4b: Efficient and inefficient liberal arts colleges do not differ in the number of security practices they employ.

Alignment is potentially the lynchpin to any possible link between IT management practices and the efficiency of an institution. The degree to which an organization’s IT operations align with its educational mission should relate to the ability of the institution to leverage IT as an asset, potentially resulting in efficiency. This is tested by the following hypothesis.

H5: Efficient liberal arts colleges are no more likely to have IT included in the institution’s overall strategic plan.

3. Methodology

3.1. Sample

The sample used for this study was initially based on the 93 highly ranked liberal arts colleges sampled in a recent study of the efficiency of liberal arts colleges [14]. Those institutions were contacted and asked for the responses they would have made to the EDUCAUSE Core Data Service survey as well as their written permission to use their data for published research. Eighty-five of those institutions responded in the affirmative. However, only 41 of them actually provided responses sufficient for the statistical analysis performed in this study.

Following the recommendation for future research in Eckles [14], the technical efficiency score was used to divide the sample into the high-efficiency and low-efficiency groups. The technical efficiency score in frontier analysis is analogous to the residual in linear regression. Instead of measuring the difference between the actual result and the average predicted result as in regression, a technical efficiency score is based on the difference between the actual result and the best possible result of the institution calculated by the analysis. The 20 institutions sampled with the highest technical efficiency scores constituted the high-efficiency group, and the 21 remaining institutions with the lowest technical efficiency scores constituted the low-efficiency group. The cut-off for inclusion in the high-efficiency group was 95% technical efficiency. The technical efficiency scores for the low-efficiency group ranged from 82.5% to 94.7%.
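The grouping step described above is straightforward to express in code. The following sketch uses hypothetical scores, not the study's data, to show the 95% cut-off at work:

```python
import numpy as np

# Hypothetical technical efficiency scores; the study's actual scores
# ranged from 82.5% to 100%.
scores = np.array([1.000, 0.970, 0.950, 0.947, 0.900, 0.825])

CUTOFF = 0.95  # the paper's threshold for the high-efficiency group
high_group = scores[scores >= CUTOFF]
low_group = scores[scores < CUTOFF]
```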

All 41 of the sampled institutions are liberal arts colleges ranked in the top 100 of the U.S. News and World Report’s National Liberal Arts Colleges list. All are categorized as Baccalaureate Colleges—Arts & Sciences under the Carnegie 2005 Basic classification scheme. The colleges are located in all geographic regions of the USA save the Southwest. Seventeen are located in cities, 20 in towns and suburbs, and four in rural areas. All institutions serve between 1,000 and 5,000 students except one, which enrolls fewer than 1,000 students. The mean graduation rate for these institutions in the academic year 2006-2007 was 82.7%, and the mean undergraduate enrollment in the same year was just over 2,000 students. All data reflected the 2006-2007 academic year.

3.2. Data

The variables used to represent Governance include “IT has standalone strategic plan,” “Rank of top IT officer,” and “Advisors total.” The variable used to represent Centralization is “% IT Personnel Centralized.” The variables that served as proxies of Investment were “Funding per FTE Student” and “% Campus Expenses spent on IT.” The concept of Security was represented by the variables “IT security risk assessment performed” and “Number of security practices.” Finally, the employed measure of Alignment was “Institution strategic plan includes IT.” Descriptive statistics of all variables used are presented in Table 1.

Table 1: Descriptive statistics.

3.3. Analysis

The hypotheses were tested by performing independent t-tests, comparing the two efficiency groups on each variable representing an IT management practice. This simple univariate method was chosen because a multivariate method is inappropriate with such a small sample size. Additionally, reliability analysis for scales of governance, investment, security, and alignment constructed from the available variables indicated no evidence of any type of coherence based on Cronbach's alpha scores (governance = 0.143, investment = 0.000, security = 0.097, alignment = 0.350).
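Both analyses are easy to reproduce. The sketch below uses simulated data standing in for one IT-management variable; SciPy provides the independent t-test directly, and Cronbach's alpha follows from the usual item-variance formula:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-ins for one IT-management variable in each group
# (20 high-efficiency and 21 low-efficiency colleges, as in the study).
high_grp = rng.normal(5.0, 1.0, 20)
low_grp = rng.normal(5.2, 1.0, 21)

t_stat, p_value = stats.ttest_ind(high_grp, low_grp)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

Perfectly correlated items yield an alpha of 1.0; the near-zero alphas the paper reports indicate the available variables do not cohere into scales.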

In those instances where there was a theoretical or intuitive basis for predicting the direction of a difference in means, the hypothesis was stated accordingly and a one-tailed test was used. Where no direction was hypothesized, a two-tailed test was used. An experiment-wise alpha of 0.15 was used in these t-tests. This decision is based on the small sample size and a reasonable expectation that the effects of information technology on efficiency, if any, are muted by the relatively small proportion of an institution’s budget typically devoted to information technology (about 5%). That experiment-wise alpha was adjusted for each independent test using the Bonferroni adjustment. With 10 tests, this results in an alpha of 0.015 for each individual test.
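The adjustment itself is a one-line division, and SciPy's `ttest_ind` supports the directional alternative used for the one-tailed tests. The groups below are simulated for illustration:

```python
import numpy as np
from scipy import stats

# Bonferroni: divide the experiment-wise alpha evenly across the 10 tests.
per_test_alpha = 0.15 / 10  # 0.015 per individual test

# One-tailed test when a direction is hypothesized (scipy >= 1.6);
# two-tailed otherwise.
rng = np.random.default_rng(1)
grp_a = rng.normal(0.0, 1.0, 20)  # hypothetical high-efficiency group
grp_b = rng.normal(0.5, 1.0, 21)  # hypothetical low-efficiency group
t_one, p_one = stats.ttest_ind(grp_a, grp_b, alternative='less')
t_two, p_two = stats.ttest_ind(grp_a, grp_b)
```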

The assumptions underlying the independent t-tests are, of course, normality of the population distributions, equality of the population variances, and independence of the observations. Normality is assumed in this case, though the independent t-test is very robust against violations of that assumption [24]. The assumption of equality of variances was evaluated for each t-test using Levene's test with an alpha of 0.05, and the t-statistic was adjusted appropriately in the two instances where the assumption was violated. The two variables where the assumption was violated were IT funding per FTE student (Levene F = 5.090, P = 0.030) and total number of outsourced areas (Levene F = 4.777, P = 0.035). The assumption of independence was met because the values of the variables for one institution do not depend on those of any other.
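This check-then-adjust procedure can be sketched as follows, with simulated groups whose spreads deliberately differ. When Levene's test rejects equal variances, Welch's t-test (SciPy's `equal_var=False`) provides the adjusted t-statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
grp_a = rng.normal(10.0, 1.0, 20)  # hypothetical group, small spread
grp_b = rng.normal(10.0, 4.0, 21)  # hypothetical group, large spread

# Levene's test for equality of variances (alpha = 0.05, as in the paper).
levene_stat, levene_p = stats.levene(grp_a, grp_b)
equal_var = levene_p >= 0.05

# Fall back to Welch's t-test when the equal-variance assumption fails.
t_stat, t_p = stats.ttest_ind(grp_a, grp_b, equal_var=equal_var)
```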

A regression model was considered using technical efficiency score as the dependent variable and the variables related to each hypothesis as the independent variables. There were several problems with this, however. Most importantly, only 18 institutions provided information on every variable being evaluated, so the sample was reduced to an essentially unusable size. Even with the 18 institutions, the model was unusable due to severe collinearity problems as measured by the variance inflation factor (VIF). IT funding per student was collinear with nearly everything, as was headcount supported per IT worker. Removing those variables from the regression model addressed the collinearity problems but left the model with a regression coefficient not significantly different from zero (F = 0.858, P = 0.620).
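The variance inflation factor for a predictor is 1/(1 - R²) from regressing that predictor on all the others, so collinearity checks of the kind described can be done with plain least squares. A minimal sketch:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X,
    computed as 1 / (1 - R^2) of that column regressed on the rest."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

A common rule of thumb flags VIF values above 10 as serious collinearity, which is the kind of problem described for the IT funding and headcount variables.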

4. Results

The result of the t-test for each hypothesis is included in Table 2. The results lead to the rejection of none of the null hypotheses. The test with the lowest P value, that for H1b (that the rank of the top IT officer does not differ between the higher- and lower-efficiency groups), yielded P = 0.065, far above the 0.015 criterion set for each test.

Table 2: Results of statistical tests for each hypothesis.

5. Discussion

What does it mean that there are no statistically significant findings in this study? Technically, it means that no observable relationship exists between the technical efficiency of the liberal arts colleges studied and the IT management practices evaluated. A number of reasons for this result are possible.

A first possibility is that there really is no relationship whatsoever between institutional efficiency and IT management practices. This possibility, though, seems unlikely. The literature on IT-institutional alignment indicates that having an IT unit capable of and ready to pursue institutional strategic goals improves overall institutional performance. It certainly stands to reason that differing management practices will lead to higher or lower performance of the IT unit, which should then impact institutional performance. It is possible that the IT units in the colleges studied are poorly aligned with their institutions’ strategies, and consequently no degree of effectiveness in IT management would impact institutional performance.

Another possibility within the same vein of thought is that the low investments in IT by these colleges relative to overall spending attenuate the ability of IT to impact institutional performance regardless of how that money is spent. Perhaps the average of just 5% of overall spending devoted to IT is insufficient, and it takes greater investment for IT to make a difference. That notion, however, is discounted by the fact that the median percentage of revenues spent on IT across all industries in 2007 was 1.8% [25]. Every college in this study devoted a higher percentage of its annual operating budget to IT. If anything, IT management practices should matter more in higher education than in other industries.

Perhaps a more likely scenario is that there is a problem with the data used in the study. At least three major limitations in the data could result in the lack of significant differences found in this study. One is that the variables used to measure the IT management practices are in fact poor measurements. The measurements for governance, investment, and security were simply salvaged from what was available in the ready-made data set. The measures of centralization have strong face validity and together make a reliable scale, but they only measure centralization in terms of personnel assignments, and not all institutions responded to every measure.

Interestingly, one potential cause of finding no significant differences may not be the data, but rather a lack of variation in the management practices at the institutions studied. Higher education is a tradition-bound industry, and the management of our institutions often takes common forms, at least within broad categories like liberal arts colleges. The most innovative practice among these institutions is likely the existence of merged information service organizations [26]. Out of the 41 institutions studied, 15 have merged information service organizations, defined for this purpose as a division in which the library reports to the top IT officer. Seven of the 21 inefficient colleges have merged information services organizations, and eight of the 20 efficient colleges have such arrangements. Given that such a dramatic structural difference in IT organizations does not differ across the high- and low-efficiency groups, perhaps it should be expected that no other governance, centralization, investment, security, or alignment IT management practices would differ.
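Though the paper reports no formal test of the merged-organization difference, a quick check (a sketch added here, not part of the original analysis) confirms that the two proportions are statistically indistinguishable:

```python
from scipy import stats

# 2x2 table of merged information service organizations by efficiency
# group, using the counts reported above.
table = [[8, 12],   # efficient colleges: merged, not merged
         [7, 14]]   # inefficient colleges: merged, not merged
odds_ratio, p_value = stats.fisher_exact(table)
```

With 8 of 20 versus 7 of 21, Fisher's exact test returns a P value nowhere near any conventional significance threshold.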

A final possible explanation for the observed lack of significant differences is that there may not have been enough variation in the efficiency of the institutions studied for any differences in IT management practices to be observable. The range of technical efficiency scores was 82.5% to 100%, with a standard deviation of 4%. That makes the entire range of observed efficiency scores about four standard deviations wide. Additionally, the average efficiency score was 95%, indicating that the sample was biased towards higher-efficiency institutions.

6. Conclusion

Admittedly it is possible that the chain of inference that this study attempted to build may just be too indirect. Technology is one part of the administrative framework that institutions use to bundle resources, one of which is information. That resource bundling is done in a way that is either highly efficient or not highly efficient. The management practices within the information technology department of an institution presumably impact the technology portion of the administrative framework, which presumably impacts the efficiency of the bundling. There are many other factors involved in this process—other parts of the administrative framework including people and policies, other resources included in the bundle, and potentially other factors that influence the IT portion of the administrative framework.

Given the findings in this study, what conclusions can one draw regarding the management practices of IT units within these 41 liberal arts colleges in the context of institutional efficiency? It is tempting to conclude that, as Carr [27] famously opined, IT does not matter. When it comes down to how well a top liberal arts college uses its available resources to graduate students, the various IT management practices may or may not have a measurable influence. More specifically, if one accepts the proposition that most US liberal arts colleges share a common organizational form, then it follows that their efficiency would not vary widely. In other words, the conclusion of this paper is that the nuanced differences in IT management practices among the sampled institutions do not seem to matter. These findings suggest that only gross differences in IT management practices, which would likely be accompanied by major differences in structural organization, can make a difference in institutional efficiency.

This study takes a step towards answering the call of EDUCAUSE president Oblinger and Hawkins to ask the right questions about technology when seeking significant differences [28]. A measurable institutional outcome, efficiency in graduating students, is the outcome institutions seek to improve. Rather than seeking the effects of “technology” vaguely defined, this study attempted to seek effects of specific information technology management practices. Finally, the Resource-Based View of the Firm and the existing literature on information technology management in higher education provide a theoretical basis for hypothesizing that those management practices indeed impact the institutional outcome. The magical significant difference, if it exists at all, will simply take some more hard work to find.


  1. W. H. Graves, “Virtual operations: challenges for traditional higher education,” EDUCAUSE Review, vol. 36, no. 2, pp. 46–56, 2001.
  2. R. Sabherwal and P. Kirs, “The alignment between organizational critical success factors and information technology capability in academic institutions,” Decision Sciences, vol. 25, no. 2, pp. 301–330, 1994.
  3. A. S. Bharadwaj, “A resource-based perspective on information technology capability and firm performance: an empirical investigation,” Management Information Systems Quarterly, vol. 24, no. 1, pp. 169–196, 2000.
  4. N. Melville, K. Kraemer, and V. Gurbaxani, “Review: information technology and organizational performance: an integrative model of IT business value,” Management Information Systems Quarterly, vol. 28, no. 2, pp. 283–322, 2004.
  5. M. A. McRobbie, “The place of information technology in planning for innovation,” EDUCAUSE Review, vol. 42, no. 6, pp. 12–13, 2007.
  6. W. Oh and A. Pinsonneault, “On the assessment of the strategic value of information technologies: conceptual and analytical approaches,” Management Information Systems Quarterly, vol. 31, no. 2, pp. 239–265, 2007.
  7. R. N. Katz, “Competitive strategies for higher education in the information age,” in Dancing with the Devil: Information Technology and the New Competition in Higher Education, R. N. Katz, Ed., pp. 27–49, Jossey-Bass, San Francisco, Calif, USA, 1999.
  8. L. Fernandez, “An antidote for the faculty-IT divide,” EDUCAUSE Quarterly, vol. 31, no. 1, pp. 7–9, 2008.
  9. J. B. Barney, “Firm resources and sustained competitive advantage,” Journal of Management, vol. 17, no. 1, pp. 99–120, 1991.
  10. K. R. Conner, “A historical comparison of resource-based theory and five schools of thought within industrial organization economics: do we have a new theory of the firm?” Journal of Management, vol. 17, no. 1, pp. 121–154, 1991.
  11. J. B. Powers and P. P. McDougall, “University start-up formation and technology licensing with firms that go public: a resource-based view of academic entrepreneurship,” Journal of Business Venturing, vol. 20, no. 3, pp. 291–311, 2005.
  12. D. Dill, “Academic accountability and university adaptation: the architecture of an academic learning organization,” Higher Education, vol. 38, no. 2, pp. 127–154, 1999.
  13. R. B. Archibald and D. H. Feldman, “Graduation rates and accountability: regressions versus production frontiers,” Research in Higher Education, vol. 49, no. 1, pp. 80–106, 2008.
  14. J. Eckles, “Evaluating the efficiency of top liberal arts colleges,” Research in Higher Education, vol. 51, no. 3, pp. 266–293, 2010.
  15. R. P. Anjard, “Management and planning tools,” Training for Quality, vol. 3, no. 2, pp. 34–37, 1995.
  16. J. McCredie, “Improving IT governance in higher education,” Research Bulletin 18, Educause Center for Applied Research, Boulder, Colo, USA, 2006.
  17. J. G. Neal and P. A. McClure, “Organizing information resources for effective management,” in Organizing and Managing Information Resources on Your Campus, P. A. McClure, Ed., pp. 29–44, Jossey-Bass, San Francisco, Calif, USA, 2003.
  18. J. I. Penrod and A. F. Harbor, “Designing and implementing a learning organization-oriented information technology planning and management process,” in Case Studies on Information Technology in Higher Education: Implications for Policy and Practice, L. A. Petrides, Ed., chapter 4, pp. 7–19, Idea Group Publishing, Hershey, Pa, USA, 2000.
  19. M. R. Nelson, The CIO in Higher Education: Leadership, Competencies, Effectiveness, Educause Center for Applied Research, Boulder, Colo, USA, 2003.
  20. D. Acemoglu, P. Aghion, C. Lelarge, J. Van Reenen, and F. Zilibotti, “Technology, information, and the decentralization of the firm,” Quarterly Journal of Economics, vol. 122, no. 4, pp. 1759–1800, 2007.
  21. R. A. Johnson, T. Mitrano, and R. D. Vernon, “Meeting the cybersecurity challenge,” in Organizing and Managing Information Resources on Your Campus, P. A. McClure, Ed., pp. 93–111, Jossey-Bass, San Francisco, Calif, USA, 2003.
  22. R. B. Kvavik and J. Voloudakis, Safeguarding the Tower: IT Security in Higher Education 2006, Research Study 6, Educause Center for Applied Research, Boulder, Colo, USA, 2006.
  23. B. Albrecht, B. Bender, R. N. Katz et al., Information Technology Alignment in Higher Education, Research Study 3, Educause Center for Applied Research, Boulder, Colo, USA, 2004.
  24. J. P. Stevens, Applied Multivariate Statistics for the Social Sciences, 4th edition, Lawrence Erlbaum Associates, Mahwah, NJ, USA, 2002.
  25. ITSPE, “IT Spending and Staffing Benchmarks 2007/2008,” Computer Economics, Irvine, Calif, USA, 2008.
  26. J. K. Stemmer, “The perception of effectiveness of merged information services organizations,” Reference Services Review, vol. 35, no. 3, pp. 344–359, 2007.
  27. N. G. Carr, “IT doesn't matter,” IEEE Engineering Management Review, vol. 32, no. 1, pp. 24–32, 2004.
  28. D. G. Oblinger and B. L. Hawkins, “The myth about no significant difference,” EDUCAUSE Review, vol. 41, no. 6, pp. 14–15, 2006.