Education Research International

Special Issue: Entrepreneurship Education with Impact: Opening the Black Box

Review Article | Open Access


Steven A. Gedeon, "Measuring Student Transformation in Entrepreneurship Education Programs", Education Research International, vol. 2017, Article ID 8475460, 12 pages, 2017.

Measuring Student Transformation in Entrepreneurship Education Programs

Academic Editor: Päivi Tynjälä
Received: 07 Jan 2017
Revised: 09 Mar 2017
Accepted: 05 Apr 2017
Published: 18 Apr 2017


This article describes how to measure student transformation, primarily within a university entrepreneurship degree program. Student transformation is defined as changes in knowledge (“Head”), skills (“Hand”), and attitudinal (“Heart”) learning outcomes. Following the institutional impact model, student transformation is the primary goal of education; all other program goals and aspects of quality desired by stakeholders are either input factors (professors, courses, facilities, support, etc.) or output performance (number of startups, average starting salary, % employment, etc.). This goal-setting framework allows competing stakeholder quality expectations to be incorporated into a continuous process improvement (CPI) model when establishing program goals, and the article shows how to measure these goals in order to implement total quality management (TQM) methods. Making student transformation the central focus of a program promotes harmony among competing stakeholders and provides a metric on which other program decisions (e.g., class size, assignments, and pedagogical technique) may be based. Different stakeholders hold surprisingly different views on what defines program quality; the proposed framework provides a useful way to bring these competing views into a CPI cycle that satisfies the TQM requirements of accreditation. The specific entrepreneurial learning outcome goals described in the tables in this article may also be used directly by educators in nonaccredited programs, single courses or workshops, or programs for other audiences.

1. Introduction

Entrepreneurship is now widely recognized as a driver of economic prosperity, and many governments generously fund the creation of new entrepreneurship degree programs worldwide [1]. However, others have criticized entrepreneurship education as lacking rigor [2], a common framework [3], and best practices [4, 5]. One of the most comprehensive assessments of entrepreneurship education programs, drawing on seven surveys since 1979, concluded that “there is little consensus on just what exactly entrepreneurship students should be taught” ([6], p. 169). In fact, the question “can entrepreneurship be taught?” continues to be raised (e.g., [7, 8]).

Many authors have pointed out that there is a lack of research on how to measure the success of entrepreneurship programs [6, 9–12]. In fact, there have been calls for a total reenvisioning of the way entrepreneurship education is designed, implemented, and assessed [13, 14].

Entrepreneurship education is a broad subject that may be applied to single classes, workshops, modules, courses, curricula, and degree programs. It can be delivered to children, youth, undergraduates, graduates, executives, professors, corporations, immigrants, refugees, and those in need. In the context of this article, I will focus primarily on university entrepreneurship degree programs. Readers interested primarily in single courses or different audiences may benefit most from Table 2, where they will find entrepreneurial learning outcomes they may wish to incorporate directly into their classes/courses/modules.

1.1. Best Practices and Total Quality Management (TQM)

The total quality management (TQM) revolution started with the influential books by TQM gurus such as Juran [15], Crosby [16], Deming [17], and Garvin [18]. Central to all of these systems are two concepts: that “quality” must be defined and measured in order to be managed, and that customer satisfaction is the ultimate goal [19]. TQM specialists must translate qualitative external customer quality expectations into quantitative internal goals. Central to the continuous improvement cycle is the ability to collect meaningful data that is compared against well-defined quality goals [20].

Higher education adopted the TQM wave: by 1996, over 160 US universities were implementing TQM [21], and many researchers had investigated the application of TQM principles within university business school programs (e.g., [22–25]). This resulted in significant improvements to the primary worldwide university accreditation agency standards [26], which continue to be updated following TQM measurement and continuous process improvement principles [27, 28]. Although much progress has been achieved in applying TQM to administrative processes, the core processes of teaching and research still lag behind [29].

Regardless of whether an entrepreneurship educator is interested in accreditation, they will still be interested in achieving high quality and setting goals. The fundamental issue is defining precisely what “quality” means to different stakeholder groups and how conflicting opinions are resolved into specific, measurable goals [30].

2. Centrality of Student Transformation

Kanji et al. [60] defined the customers in higher education as a broad stakeholder group including existing and prospective students, parents, employers, government, and university employees such as professors and staff. The resulting quality dimensions thus included all aspects of the total student experience including preenrollment, enrollment, in-class experiences, extracurricular activities, and institution-based resources and services.

As Tam [30] showed, however, these different stakeholders have contradictory definitions and models of quality as well as the purpose of education in general. These contradictory definitions go to the heart of what a university’s mission should be: to teach students, to conduct research and create new knowledge, or to contribute to society [61].

To resolve this, Tam [30] proposed that all quality aspects must focus on the student’s learning and educational development and that all other aspects of quality must be peripheral to this main objective. Quality is thus defined as the transformation in the student caused by education. This has been referred to as the value added or institutional impact approach where changes in students’ performance are measured in order to evaluate the performance of a university [62].

According to Astin [63], “true quality resides in the institution’s ability to affect its students favourably, to make a positive difference in their intellectual and personal development” (p. 11). While institutional dimensions of quality are necessary, they should be secondary to the student dimensions of quality related to student achievement [64].

Martin et al. [10] also adopted the student transformation viewpoint when doing a meta-analysis of the effect of entrepreneurship education and training on increasing the human capital of the students. They investigated the effect of entrepreneurship education on increasing learning outcomes related to knowledge and skills, attitudes, and intention as well as startup outcomes.

Gedeon [35] followed instructional design methods to create the Entrepreneurship Program Design Framework (EPDF), which expands Fayolle and Gailly’s [3] Teaching Model Framework into an institutional and environmental ecosystem. He defined entrepreneurship education as follows: “entrepreneurship education encompasses holistic personal growth and transformation that provides students with knowledge, skills and attitudinal learning outcomes. This empowers students with a philosophy of entrepreneurial thinking, passion, and action-orientation that they can apply to their lives, their jobs, their communities, and/or their own new ventures” ([35], p. 238).

Following the EPDF and defining student transformation as changes in knowledge, skills, and attitudes, all other program quality goals can be seen as either input factors (professors, courses, facilities, support, etc.) or output performance (number of startups, average starting salary, % employment, etc.). Placing quantifiable student transformation as the primary goal of an entrepreneurship education program provides a continuous process improvement (CPI) framework that may be used to analyze the literature to identify and categorize alternative stakeholder quality expectations in order to set program goals. The specific goal-setting framework comprises the following:

(1) Primary goals: central to student transformation (knowledge (“Head”), skills (“Hand”), and attitudes (“Heart”) learning outcomes).

(2) Input goals: factors that support student transformation (such as faculty qualifications, resources, facilities, assignments, courses, pedagogy used, etc.).

(3) Output goals: related to success of the program or external impact (such as number of students, number of awards won, or number of new companies launched).

With this CPI framework, the existing literature concerning stakeholder expectations of quality can be reviewed.
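The three-way categorization above can be sketched as a small data structure for CPI review; a minimal sketch, in which the goal names are hypothetical examples rather than goals drawn from any specific program.

```python
from collections import defaultdict
from enum import Enum

class GoalType(Enum):
    PRIMARY = "student transformation (Head/Hand/Heart)"
    INPUT = "factors that support transformation"
    OUTPUT = "program success or external impact"

# Hypothetical program goals, each tagged with its framework category.
PROGRAM_GOALS = [
    ("Improve opportunity-spotting skill scores", GoalType.PRIMARY),
    ("Raise entrepreneurial self-efficacy (Heart)", GoalType.PRIMARY),
    ("Hire faculty with startup experience", GoalType.INPUT),
    ("Fund a student business-plan competition", GoalType.INPUT),
    ("Increase alumni startups per cohort", GoalType.OUTPUT),
]

def group_goals(goals):
    """Group (name, GoalType) pairs by framework category."""
    grouped = defaultdict(list)
    for name, gtype in goals:
        grouped[gtype].append(name)
    return grouped

grouped = group_goals(PROGRAM_GOALS)
```

Grouping goals this way makes the framework's claim operational: primary goals are reviewed first in each CPI cycle, while input and output goals are examined only as supports for, or consequences of, student transformation.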

3. Quality Expectation Analysis of Alternative Stakeholder Groups

3.1. Quality Definition as Assessed by Accreditation Agencies

Accreditation agencies implement a comprehensive approach to articulating stakeholder quality metrics, documenting program objectives, and implementing a process of continuous improvement against these metrics [65]. The primary business school accreditation agency in North America is the American Assembly of Collegiate Schools of Business [27], whereas EQUIS is the quality assurance accreditation standard run by the European Foundation for Management Development [28]. These organizations provide a system of quality assessment and set quality standards in order to help universities measure where they are on the path to excellence, identify gaps, and stimulate solutions [66]. Entrepreneurship programs housed within business faculties within most large or highly ranked universities worldwide are accredited to the standards set forth by one of these organizations [67].

Table 1 categorizes the standard requirements of both agencies into the goal-setting framework and compares these with the quality expectations of the other stakeholder groups assessed in this article. As can be seen, the AACSB and EFMD’s EQUIS standards heavily focus on input factor goals related to institutional impact or value add. Virtually every aspect of a program is assessed from the quality of the strategy/mission to student services, faculty intellectual contributions, and social impact [27, 28].

Table 1: Stakeholder quality expectations categorized into the goal-setting framework (primary goals central to student transformation; input factor goals; output goals; goals not directly related to student transformation).

EQUIS
  Primary goals: Chapter 2: skills and assurances of learning.
  Input factor goals: Chapter 1: strategy and governance; Chapter 3: students & job placement services; Chapters 4-5: faculty quality & research output; Chapter 8: admin resources; Chapters 9-10: international and corporate connections.
  Output goals: Chapter 7: contribution to community.

AACSB
  Primary goals: Standards 16, 18, 19, 21: continuous improvement & assurances of learning.
  Input factor goals: Standards 1 & 4: mission; Standards 3, 6, 7, 14: student acceptance and retention; Standards 2, 8, 9, 10, 11, 12, 13: staff and faculty sufficiency; Standards 5 & 8: financial strategies.

Employers
  Primary goals: key skills learned.
  Input factor goals: proof of capacity (high entrance scores).
  Output goals: prestige of the school.
  Not directly related: company relations with the school.

Government
  Primary goals: key skills, attitudes, & intent to start a new company (e.g., ASTEE).
  Output goals: number of startups; number of jobs; increase in the economy.

Students
  Input factor goals: entry requirements; assurance, reliability, empathy, responsiveness, tangibles.
  Output goals: good job.

Faculty
  Not directly related: good management; faculty mentors; tenure process.

Business school deans
  Input factor goals: number of courses.
  Output goals: alumni exploits; number of startups; number of innovations; impact on community; number of publications; outreach to scholars.

Magazines and award programs
  Input factor goals: entry requirements (e.g., GMAT score); number of courses; faculty qualifications; percent of entrepreneurs in faculty; resources for students (prizes, mentors, and clubs).
  Output goals: number of startups; starting salaries of graduates; research funding obtained by faculty.

Malcolm Baldrige Award Program
  Input factor goals: strategy & leadership; process management, measurement, & analysis; student, stakeholder, and market focus; faculty and staff focus.
  Output goals: performance results.

USASBE
  Input factor goals: complete and comprehensive; innovative, unique; transferrable and sustainable.

Table 2: Representative entrepreneurship program goals and learning outcomes, with reference citations.

Primary goals central to student transformation
  Learning outcomes [31]:
    Lifelong learning skills [32–36]
    Communication skills [32, 33, 35–38]
    Teamwork skills [32, 33, 35–39]
    Social capital skills (persuasion, negotiation, networking) [36–38, 40]
    Creativity and innovation skills (alertness, opportunity spotting) [36–39, 41]
    Guerilla skills (bootstrapping, acquisition of resources, planning under uncertainty) [9, 35–38, 42, 43]
    Motivational skills (psychological capital, empowerment) [36, 44, 45]
    Entrepreneurial thinking skills (independent and critical thinking, self-management, adapting) [9, 33, 36–38, 46, 47]
  Attitudes, beliefs, values, and intent:
    Entrepreneurial desirability [35, 36, 38, 48, 49]
    Self-efficacy [36, 38, 39, 47, 49–51]
    Internal locus of control [36, 38, 39, 47, 49]
    Entrepreneurial intent [36, 38, 39, 47, 49, 53–55]

Input goals that support student transformation
  Clarity of mission statement [28, 38, 39]
  Faculty qualifications and behaviours:
    Percent with Ph.D. degrees (academically qualified) [27, 28]
    Percent with entrepreneurial experience [56, 57]
    Intellectual contributions (or number of publications) [27, 28]
    SERVQUAL or SERVPERF (assurance, reliability, empathy, responsiveness) [58, 59]
  Resources to support students:
    Student entrepreneurship clubs [57]
    Business plan competition amount [57]
  Incoming student population:
    Entry requirements [27, 28, 33, 57]
    Number of scholarships [57]

Output goals related to growth or external impact
  Number of students
  Number of courses [57]
  Awards won [57]
  Community impact [13]

Osseo-Asare and Longbottom [68] identified several major limitations in applying the EFMD model to higher education including the fact that EFMD is too prescriptive, too time-consuming, and too subjective. Barnett [69] has also pointed out that conflicting views of quality by different stakeholders result in conflicting performance indicators. Furthermore, measuring these indicators is controversial and, at best, only provides information about the past and provides limited insight into what should be modified or improved [64].

Accreditation has also been widely criticized for diverting universities away from teaching by placing such a large emphasis on research (e.g., [70–73]). As a result, the agencies have shifted to a mission-driven basis of accreditation as opposed to a one-size-fits-all standard [74]. Even though universities may now designate teaching as their major mission for accreditation, it is more difficult to measure teaching productivity and implementing accreditation for teaching-intensive universities remains problematic [75, 76]. Thus, regardless of a university’s purported mission, it appears that accreditation results in the demand for annual increases in the number of publications by faculty at accredited institutions [77]. As summarized by Roberts et al. [78], “for good or bad the emphasis on research remains… and teaching efforts give way to increased research efforts.”

The effort and cost of accreditation are significant. “In short, pursuing AACSB is not a pleasurable exercise for a business school from both internal and external political views. The annual incremental cost increases for even a small school, including salary and benefits, can easily exceed $500,000 per year” [78]. Although most programs will have lower costs, it is evident that implementing the standard may not be worth the extensive documentation required, especially for a smaller program.

One may thus question whether the costs, potential to divert the university from its teaching mission, and onerous reporting are worth the effort, especially if one considers the possibility that “to most people, the status of being academically accredited does not imply that the educational institution is appreciably superior to the average institution…” ([75], pp. 348-349). In a survey of faculty at accredited institutions, however, Roberts et al. [78] found that, “despite this shift from teaching to research, and the increased job stress, and no positive impact on teaching, the respondents, on average, indicated strongly that accreditation was worth the effort.” More recent studies also confirm these results [79].

Clearly, there are benefits to accreditation regardless of the negatives. The value add quality aspects of student transformation related to learning outcomes are particularly well quantified by both EQUIS (in Chapter 2) and AACSB (in Standards 16, 18, 19, and 21). Each program must identify specific student learning outcomes (e.g., communication skills and numeracy) and how they are measured. Each course within the program must then specify how these outcomes are taught and measured. Programs then track their students’ performance over time and identify continuous improvement through implementing new course content, pedagogies, or teaching methods. In this way, quality is measured as the value add or cumulative improvement in learning outcome assessments achieved by a student from start to finish of the curriculum [80].
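The value-add calculation described above can be sketched as follows; the outcome names and rubric scores are illustrative assumptions, not data from any accredited program.

```python
def value_add(pre, post):
    """Mean per-student improvement for each learning outcome.

    pre and post map an outcome name to a list of assessment scores
    (e.g., rubric scores on a 0-100 scale), with students in the same
    order in both lists. Returns the cohort's mean gain per outcome.
    """
    result = {}
    for outcome, before in pre.items():
        after = post[outcome]
        deltas = [b - a for a, b in zip(before, after)]
        result[outcome] = sum(deltas) / len(deltas)
    return result

# Illustrative cohort of three students: entry vs. exit rubric scores.
pre = {"communication": [60, 70, 65], "teamwork": [50, 55, 60]}
post = {"communication": [75, 80, 70], "teamwork": [65, 70, 75]}
gains = value_add(pre, post)  # mean gain per outcome
```

Tracking these per-outcome gains cohort by cohort is what closes the CPI loop: a curriculum change is judged by whether subsequent cohorts show larger gains, not by input factors alone.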

By focusing on the primary goals related to student transformation, a program may achieve the benefits of the TQM process without the onerous, expensive, and/or potentially negative consequences of full accreditation which focuses on secondary input factors [81].

3.2. Quality Definition as Assessed by Employers

One of the primary purposes of higher education is to prepare students to enter the workforce and contribute to the national economy [82]. This high level qualitative objective related to employer satisfaction is generally translated, as shown in Table 1, into ensuring that students have the requisite intellectual capacity and flexible and adaptable skills [83]. Hesketh [33] found that employers will proactively seek out and give preference to universities that they perceive as providing graduates with better intellectual capability as evidenced by higher entrance requirements or greater university prestige.

Although several authors contest the notion that teaching key transferable skills required by employers is applicable to the mission of higher education, most universities acknowledge this as a key qualitative objective [84]. The number of studies and potential list of skills desired by employers are extensive and contradictory, including as many as 62 different skills [33]. The National Committee of Inquiry into Higher Education in the UK focused this down into four key skills: communication skills, numeracy, use of information technology, and learning how to learn [32]. Other researchers have found that problem-solving, teamwork, and self-management were more desired by employers than numeracy or information technology skills [33]. An Australian government study taking place at around the same time [85] found that employers rated creativity and flair, enthusiasm, and independent and critical thinking as the most important key transferable skills [46].

The specific list of key skills chosen for an entrepreneurship degree program will normally include many of these key skills. Regardless of which skills are chosen as learning outcome goals, the NCIHE concluded that “all institutions of higher education should aim for student achievement in key skills” ([32], p. 135).

3.3. Quality Definition as Assessed by Government

Governments have discovered that entrepreneurship is one of the most powerful growth engines of the economy [14]. They have responded to this societal need by funding new entrepreneurship programs and support infrastructure [1]. In 2000, the Lisbon European Council set the objective of transforming EU productivity through creation of a culture of entrepreneurship and innovation. In 2006, the European Parliament specified entrepreneurial skills as a key lifelong learning competence for all citizens. In 2016, Bacigalupo et al. identified 3 entrepreneurial competence areas, 15 specific competencies, an 8-level progression model, and 442 learning outcomes. Their EntreComp framework provides one of the most comprehensive stakeholder consultations and detailed analysis of entrepreneurial learning outcomes available [36].

Unfortunately, education in general has a very indirect measurable impact on the economy [86]. In the case of entrepreneurship education, the effect is even more distal as students may take several years before they gain enough practical experience to consider starting up a new company [87].

As a result, various authors have instead measured the impact of education on entrepreneurial intent and its antecedent attitudes (as shown in Table 1) rather than measuring the economic impact [88]. Survey instruments are ideal for measuring student transformation of beliefs, attitudes, values, and intent, all of which are known antecedents to entrepreneurial behavior and may be measured and modified during the educational process [48].

To address government interest in measuring these skills, Moberg et al. [49], with funding from the European Community’s Competitiveness and Innovation Framework Programme, developed the Assessment Tools and Indicators for Entrepreneurship Education (ASTEE). The ASTEE survey provides validated scales for measuring students’ self-perceptions of entrepreneurial knowledge, skills, and attitudes.

3.4. Quality Definition as Assessed by Students

Student satisfaction is sometimes referred to as the humanistic approach to educational quality evaluation, in contrast to the mechanistic approach conducted by experts and agencies such as AACSB and EQUIS [89]. The most widely used survey instruments are SERVQUAL and its performance-only variant SERVPERF [59]. Both instruments measure five dimensions of service quality: reliability, empathy, assurance, responsiveness, and tangibles [90]. Despite the large number of projects associated with these instruments, they have met with only limited success [91]. One of the key limitations identified has been the lack of outcome quality attributes such as whether students get good jobs [89].
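SERVQUAL scores each dimension as the gap between what respondents perceived and what they expected; a minimal sketch of that gap calculation, with illustrative Likert ratings for two of the five dimensions (the ratings are assumptions, not survey data).

```python
def servqual_gaps(expectations, perceptions):
    """SERVQUAL gap score per dimension: mean(perception - expectation).

    Both arguments map a dimension name to a list of 1-7 Likert item
    ratings, item-aligned across the two dicts. Negative gaps indicate
    service falling short of expectations.
    """
    return {
        dim: sum(p - e for e, p in zip(expectations[dim], perceptions[dim]))
        / len(perceptions[dim])
        for dim in expectations
    }

# Illustrative ratings for two dimensions, three items each.
expect = {"reliability": [6, 7, 6], "empathy": [5, 6, 5]}
perceive = {"reliability": [5, 6, 6], "empathy": [6, 6, 6]}
gaps = servqual_gaps(expect, perceive)
```

Note that even a perfect gap score of zero on every dimension would say nothing about learning: the instrument measures service delivery, which is the outcome-attribute limitation discussed above.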

Chua [58] created an input-process-output framework of quality classification. Using this scheme, questions related to student selection and entry requirements (inputs) and good job placement and academic performance (outputs) are included along with standard SERVQUAL-style questions related to content, professors’ knowledgeability, concern for students, and social activities (process).

In general, as shown in Table 1, none of these approaches to measuring student perception of quality measure student transformation or learning outcomes. Students are never directly asked if they learned anything or improved personally. They are instead asked about indirect indicators such as SERVQUAL’s tangibles (e.g., “the school office is equipped with modern technology”), reliability (e.g., “I can depend on the school office’s promises”), or responsiveness (e.g., “school office staff/faculty give me prompt service”) [58].

3.5. Quality Definition as Assessed by Faculty

There is a large body of literature related to faculty satisfaction, stress, and morale [92]. As shown in Table 1, the primary sources of faculty satisfaction or dissatisfaction are generally recognized as collegiality, salary, mentoring, management (department heads and/or deans), and the process related to promotion and tenure [92, 93]. Of these, collegiality has been found to be the most important issue to faculty [92–94]. Student transformation, learning outcomes, and success do not appear to be significant issues in any of these studies.

3.6. Quality Definition as Assessed by Deans of Business Schools

In most universities, the dean has a major influence on what the program objectives should be. Vesper and Gartner [13] surveyed the deans of over 1,000 business schools worldwide to determine what they viewed as the primary indicators of program quality. As shown in Table 1, they found that for business school deans the top ranking criterion is the number of entrepreneurship courses offered followed by number of faculty publications, impact on community, alumni exploits, innovations, alumni startups, and outreach to scholars.

As pointed out by the authors of that study, these ranking criteria are more than a bit problematic and should be viewed with much skepticism [13]. For example, several of these criteria focus on nebulous outcomes that are not necessarily tied to the curriculum (e.g., impact on the community or alumni exploits). They also imply that quantity equals quality and bigger is somehow better (e.g., number of faculty publications or number of startups).

Quality of education or learning outcomes do not appear anywhere on the business school deans’ list of program ranking criteria [13]. A more recent study on the role of accreditation found that deans continue to emphasize the importance of faculty quality and acquisition of resources (input goals) and community interaction (output goals) [79].

3.7. Quality Definition as Assessed by Magazines and Award Programs

One of the primary mechanisms by which students select which program to attend is magazine rankings [95]. There is a reinforcing cycle of success where achieving high magazine ratings results in better students who help the university gain better ratings and win more awards [96]. Magazines that rank and/or issue awards to entrepreneurship programs include Fortune, Small Business, Princeton Review [56], US News [57], and Success.

Vesper and Gartner [13] found that magazines’ entrepreneurship rating metrics, as shown in Table 1, had only “tenuous links” to what they were trying to measure: (a) qualifications of faculty, (b) variety and depth of entrepreneurship curriculum, (c) academic standards, and (d) quality and depth of resources.

Dill and Soo [97] found that there is an emerging international consensus on measuring quality in higher education among magazines that rate universities. “One of the leading determinants of a good university is the quality of its incoming students… The quality of the faculty and research is another prominent shared measure, which is assessed primarily by staff qualifications and the ability to attract research grants… In contrast to these input measures, assessments of the teaching and learning process seem to get much less attention” ([97], pp. 499-500).

Despite the fact that magazines may agree with each other about how to measure university quality, “a more serious problem with the national magazine rankings is that from a research point of view, they are largely invalid. That is, they are based on institutional resources and reputation dimensions, which have only minimal relevance to what we know about the impact of college on students” ([98], p. 20).

The United States Association of Small Business and Entrepreneurship [99] is comprised primarily of university entrepreneurship professors. The ranking criteria for its award program for the Excellence in Entrepreneurship Education-Model Entrepreneurship Programs are indicated in Table 1.

Finally, the Malcolm Baldrige National Quality Award [100] Education Performance Excellence Criteria have emerged as a theoretically validated model for implementing continuous quality improvement in universities [101, 102]. The 33 criteria can be categorized into the seven general areas listed in Table 1 following Badri et al. [103]. Not surprisingly, as this award arises from the TQM perspective, it is quite similar to the AACSB and EQUIS standard frameworks.

4. Setting Goals Using the Framework and Implementing a Measurement System

By categorizing the large range of diverse and contradictory definitions of quality using the goal-setting framework, we can better identify commonalities between alternative stakeholder expectations. As shown in Table 2, all stakeholder desires associated with student transformation relate to changes in learning outcomes (i.e., knowledge, skills, and attitudes).

The central role of quantifying, measuring, and continuously improving student learning outcomes is clearly articulated in EQUIS (Chapter 2) and AACSB (Standards 16, 18, 19, and 21). Both standards provide good overall guidance for how to assess the impact of education on student transformation. The Malcolm Baldrige Award also places a significant emphasis on the process and knowledge management system around how the university measures, analyzes, and continuously improves the students’ performance or accumulation of learning outcomes [100]. They are all, however, entirely silent on which specific learning outcomes a program should strive for, and some have claimed that there has been little development in the field of assessment practices [104].

Each entrepreneurship program must thus articulate its own list of learning outcomes which depends on its strategy, student population, local employment opportunities, and startup environment. These learning outcomes transcend and cut across the entire curriculum, so that in addition to learning specific course knowledge (e.g., accounting, marketing, and business planning) students transform by improving specific skills (e.g., communication, problem-solving, and teamwork) [31].

Table 2 thus provides a representative list of entrepreneurship program learning outcomes along with reference citations. Although any such list will be subject to heated debate, focusing on learning outcomes will provide a superior measurement of quality rather than measuring the number of courses available (as suggested, e.g., by deans and magazines).

The overall scholarship on assessment of learning outcomes has made significant progress along with progress in the accreditation system [26]. The Joint Committee on Standards for Educational Evaluation (JCSEE) for program evaluation [105] and student evaluation [106] are excellent starting points for understanding how to implement learning outcome measurements [107]. These two standards of evaluation are related since measuring student outcomes will reflect on the success or failure of the educational program itself [108].

There are three required levels of assessment: (a) testing of individuals to assign student grades, (b) assessment of groups of individuals for the purpose of instructional planning, and (c) evaluation of instructional methods and/or the overall program over time [109]. These three levels require different methods and may conflict with one another [110]. The primary purpose of the standards relates to evaluation of the overall program, not the assignment of individual grades or instructional planning [108]. Grading and program evaluation may nevertheless be brought into alignment through the course-embedded method, in which the same course assignments that generate grades also supply the data for program-level outcome assessment [111].

Of the six primary approaches to educational evaluation, the Kirkpatrick framework remains the most accepted and influential [112]. Kirkpatrick [113] specified three categories of student learning: knowledge, skills, and change of attitude. Knowledge and skills are assessed during class under the learning outcomes just discussed. However, affective or attitudinal beliefs are primarily assessed outside of class via indirect methods such as surveys (e.g., ASTEE), rating scales, and retrospective techniques [114].

Most entrepreneurship programs will have as a goal helping prepare their students for a career as an entrepreneur [115]. Education for an entrepreneurial career should transform students’ attitudes, beliefs, and values such that they view entrepreneurship as a desirable and feasible career, regardless of whether they initially pursue employment [116].

There are well-known and validated attitudinal antecedents to entrepreneurial behavior (such as entrepreneurial intent, desirability, feasibility, perceived behavioral control, and self-efficacy) with existing scales and survey methods to measure the impact of education on this aspect of student transformation [54, 55, 88]. Table 2 provides a potential list of such attitudes, beliefs, and values as well as reference citations.
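Such scales are typically scored by averaging Likert items into a composite and comparing pre- and post-course measurements. A minimal sketch follows; the item responses, scale length, and student IDs are illustrative assumptions (see, e.g., validated instruments such as Thompson [53] for actual item wording).

```python
from statistics import mean

# Hypothetical 5-item, 7-point Likert responses on an entrepreneurial-intent
# scale, collected before and after a course.
pre  = {"s01": [3, 4, 2, 3, 3], "s02": [5, 5, 4, 6, 5]}
post = {"s01": [5, 5, 4, 4, 5], "s02": [6, 6, 5, 6, 6]}

def scale_score(items):
    """Average the Likert items into one composite intent score."""
    return mean(items)

# Per-student change in entrepreneurial intent (a "Heart" outcome).
delta = {s: scale_score(post[s]) - scale_score(pre[s]) for s in pre}
print({s: round(d, 2) for s, d in delta.items()})  # {'s01': 1.6, 's02': 0.8}
```

In practice one would also check the scale's reliability and use a validated instrument rather than ad hoc items.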

As Table 1 shows, there is a far greater range of quality expectations for the secondary goals related to institutional input factors. Accreditation mandates that the program set goals across the full range of metrics; Table 2 provides a representative list, along with references on how to implement a measurement system for each goal, regardless of whether the degree program is seeking accreditation and/or TQM practices.

EQUIS, AACSB, and the Malcolm Baldrige Award clearly recognize the critical role of strategy, mission, and governance in resolving conflicting goals and driving operational tactics. This is precisely why these organizations rewrote their standards to become mission-driven [74, 76]. Potentially conflicting goals, for example, an emphasis on research versus teaching, are resolved through a clear mission statement and strategy that then drives the remaining tactical goals.

Unfortunately, poorly worded, vague, or inherently contradictory mission statements are interpreted differently by competing stakeholders. Thus, for example, some stakeholders may incorrectly interpret EQUIS Chapters 4-5 or AACSB Standards 2, 8, 9, 10, 11, 12, and 13 as demanding greater numbers of publications regardless of the mission statement [77, 78].

The solution is to write clearer mission statements, so this is listed in Table 2 as an important input-factor goal. The mission statement helps resolve conflicts that arise because different stakeholders hold different expectations for input factors such as faculty qualifications and available resources. Deans expect professors to produce large numbers of publications, whereas students want professors who are reliable, responsive, and empathetic. Magazines expect professors to have experience as entrepreneurs, whereas accreditation agencies want professors to hold a Ph.D. to achieve “AQ” (academically qualified) status. Clearly, the program’s mission and strategy will drive the difficult goal-setting process required to resolve these conflicting expectations. Turgut-Dao et al. [39] recently made this point: “we found that critical stakeholder consensus is improved when entrepreneurship is defined in terms of action-based learning, interdisciplinary project work, and personal development. Entrepreneurship thus represents a unique pedagogical teaching method – ‘teaching through entrepreneurship’ (e.g. Samwell, 2010) that may be embraced by service learning community volunteers and social innovators as much as innovative scientists and for profit businesses” ([39], p. 3).

Finally, most programs will also have aspirations related to their growth and impact. These output metrics are often the easiest to measure and may be drawn from the stakeholder expectations in Table 1, such as the number of startups, average student starting salaries, alumni achievements, and impact on the community. Table 2 provides a sample list of output program goals.

5. Implementing Continuous Process Improvement

The first phase in implementing CPI is to identify and measure the desired student transformation goals (learning outcome goals) as well as key input and output goals. At the single class/course level, an instructor would select key learning outcome goals, for example, business model canvas and pro forma financial projections (Head), opportunity spotting and planning under uncertainty (Hand), and entrepreneurial desirability and intent (Heart). The instructor would then create assignments that directly measure these [107]. In addition to a business plan assignment, they might consider a separate self-reflective assignment to better measure the skill of opportunity spotting [39]. Finally, a survey of the students’ attitudes might be added (e.g., [49]). The instructor can then alter inputs such as textbooks, guest speakers, and simulations and compare the results after these changes with prior class results to continuously improve the course.
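One simple way to compare cohorts before and after an input change is a standardized difference between the two cohorts' outcome gains. The sketch below uses only the Python standard library; the cohort data are invented for illustration, and the effect-size formula (averaging the two sample standard deviations) is a simplification of Cohen's d.

```python
from statistics import mean, stdev

# Hypothetical per-student learning-outcome gains, before and after
# swapping a textbook (an input factor).
cohort_old = [0.4, 0.9, 0.2, 0.7, 0.5, 0.6]   # prior offering
cohort_new = [0.8, 1.1, 0.6, 1.2, 0.9, 1.0]   # after the change

def effect_size(a, b):
    """Cohen's d-style standardized difference between two cohorts."""
    pooled = (stdev(a) + stdev(b)) / 2
    return (mean(b) - mean(a)) / pooled

d = effect_size(cohort_old, cohort_new)
print(round(d, 2))
```

With real class sizes, a significance test and a check for cohort-composition differences would be needed before attributing the change to the input factor.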

At the program level, the curriculum committee will normally be responsible for selecting the required learning outcomes (my university’s entrepreneurship degree program has ten) and then ensuring they are introduced, reinforced, and measured across the different required courses for the degree [38]. The committee must review the resulting data and identify potential changes that will improve outcomes. If accreditation is also involved, there will be additional reporting and auditing requirements.
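A curriculum map of this kind can be audited mechanically. The sketch below checks that every required learning outcome is measured somewhere in the degree; the course codes, outcome names, and I/R/M tagging scheme are illustrative assumptions, not the author's actual curriculum.

```python
# Hypothetical curriculum map: for each required course, which program-level
# learning outcomes are Introduced, Reinforced, or Measured (I/R/M).
curriculum_map = {
    "ENT101": {"opportunity_spotting": "I", "entrepreneurial_intent": "I"},
    "ENT201": {"opportunity_spotting": "R", "business_model_canvas": "I"},
    "ENT401": {"opportunity_spotting": "M", "business_model_canvas": "M",
               "entrepreneurial_intent": "M"},
}

required_outcomes = {"opportunity_spotting", "business_model_canvas",
                     "entrepreneurial_intent"}

# Committee check: every required outcome must be measured ("M")
# in at least one required course before graduation.
measured = {o for levels in curriculum_map.values()
            for o, tag in levels.items() if tag == "M"}
missing = required_outcomes - measured
print(sorted(missing))  # [] when the map is complete
```

The same structure extends naturally to verifying that each outcome is introduced before it is reinforced or measured.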

First, we must determine whether the measurements are reliable and valid. Next, we must determine whether the input factors are correlated with changes in student transformation and output goals. Finally, we must decide which input factors to change.

This is a daunting task, and quantitative measures alone may not be sufficient, despite the many references in Table 2 that are making progress toward refining constructs, scales, and assignments. In my department, the curriculum committee reviews the data but then discusses individual cases and weighs qualitative input before making final decisions regarding program changes. We have several research programs underway to refine our measures and longitudinally track student transformation, with the goal of improving our CPI processes.

6. Conclusions

TQM provides a fundamental underpinning for setting entrepreneurship education course and program goals, regardless of whether the university chooses to seek accreditation. As described in this paper, stakeholders can have surprisingly divergent opinions on what the goals of a program should be. Placing student transformation at the center of the program helps align stakeholder interests and set goals. In addition, by clearly setting input and output goals as well as student transformation goals, a program can implement a continuous process improvement cycle to better understand the effect of input goals on student transformation and performance output.

Tremendous progress has been made in defining potential student transformation learning goals related to knowledge, skills, and attitudes (including attitudes related to starting a company such as entrepreneurial intent). Although each program must set its own goals and measure learning outcomes, the body of literature related to designing a program, articulating a mission, and setting these goals is becoming more robust.

Those interested in setting goals for a single entrepreneurship course may consider selecting a small number of learning outcomes from Table 2. Most introductory entrepreneurship courses would include creativity and innovation skills (alertness and/or opportunity spotting). Turgut-Dao et al. [39] provide a good example of how one course measured these learning outcomes and tracked them for continuous process improvement.

I would argue that entrepreneurship education no longer lacks rigor or best practices. On the contrary, there are now many tools, assessment methods, and published sources that can help entrepreneurship faculty design their entrepreneurship programs.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.


References

  1. J. Leitao and R. Baptista, Public Policies for Fostering Entrepreneurship: A European Perspective, Springer US, New York, NY, USA, 2009.
  2. K. Wilson, “Entrepreneurship Education in Europe,” in Entrepreneurship and Higher Education, J. Potter, Ed., pp. 119–138, OECD Publishing, Paris, France, 2008.
  3. A. Fayolle and B. Gailly, “From craft to science,” Journal of European Industrial Training, vol. 32, no. 7, pp. 569–593, 2008.
  4. R. H. Brockhaus, E. G. Hills, H. Klandt, and H. P. Welsch, Entrepreneurship Education: A Global View, Ashgate Publishing, Aldershot, United Kingdom, 2001.
  5. J. O. Fiet, “The theoretical side of teaching entrepreneurship,” Journal of Business Venturing, vol. 16, no. 1, pp. 1–24, 2000.
  6. G. Solomon, “An examination of entrepreneurship education in the United States,” Journal of Small Business and Enterprise Development, vol. 14, no. 2, pp. 168–182, 2007.
  7. C. Henry, F. Hill, and C. Leitch, “Entrepreneurship education and training: can entrepreneurship be taught? Part I,” Education and Training, vol. 47, no. 2, pp. 98–111, 2005.
  8. P. G. Klein and J. B. Bullock, “Can entrepreneurship be taught?” Journal of Agricultural and Applied Economics, vol. 38, no. 2, pp. 429–439, 2006.
  9. B. Honig, “Entrepreneurship education: toward a model of contingency-based business planning,” Academy of Management Learning & Education, vol. 3, no. 3, pp. 258–273, 2004.
  10. B. C. Martin, J. J. McNally, and M. J. Kay, “Examining the formation of human capital in entrepreneurship: a meta-analysis of entrepreneurship education outcomes,” Journal of Business Venturing, vol. 28, no. 2, pp. 211–224, 2013.
  11. T. N. Garavan and B. O'Cinneide, “Entrepreneurship education and training programmes: a review and evaluation—part 1,” Journal of European Industrial Training, vol. 18, no. 8, pp. 3–12, 1994.
  12. J. Falkäng and F. Alberti, “The assessment of entrepreneurship education,” Industry and Higher Education, vol. 14, no. 2, pp. 101–108, 2000.
  13. K. H. Vesper and W. B. Gartner, “Measuring progress in entrepreneurship education,” Journal of Business Venturing, vol. 12, no. 5, pp. 403–421, 1997.
  14. D. F. Kuratko, “The emergence of entrepreneurship education: development, trends, and challenges,” Entrepreneurship: Theory and Practice, vol. 29, no. 5, pp. 577–597, 2005.
  15. J. M. Juran, Quality Control Handbook, McGraw-Hill, New York, NY, USA, 1951 (later editions 1962, 1974, 1988, 1999, and 2010).
  16. P. B. Crosby, Quality Is Free: The Art of Making Quality Certain, vol. 94, McGraw-Hill, New York, NY, USA, 1979.
  17. W. E. Deming, Out of the Crisis, MIT Press, Cambridge, Mass, USA, 1986.
  18. D. A. Garvin, Managing Quality: The Strategic and Competitive Edge, Free Press, 1988.
  19. L. J. Porter and A. J. Parker, “Total quality management—the critical success factors,” Total Quality Management, vol. 4, no. 1, pp. 13–22, 1993.
  20. M. D. Hanna and W. R. Newman, Integrated Operations Management, Prentice-Hall, Englewood Cliffs, NJ, USA, 2001.
  21. B. B. Burkhalter, “How can institutions of higher education achieve quality within the new economy?” Total Quality Management, vol. 7, no. 2, pp. 153–160, 1996.
  22. G. Baldwin, “The student as customer: the discourse of quality in higher education,” Journal of Tertiary Education Administration, vol. 9, no. 2, pp. 131–139, 1994.
  23. K. E. Bass, S. A. Dellana, and F. J. Herbert, “Assessing the use of total quality management in the business school classroom,” Journal of Education for Business, vol. 71, no. 6, pp. 339–343, 1996.
  24. D. J. Brown and H. F. Koenig, “Applying total quality management to business education,” Journal of Education for Business, vol. 68, no. 6, pp. 325–329, 2010.
  25. R. Manley and J. Manley, “Sharing the wealth: TQM spreads from business to education,” Quality Progress, vol. 29, no. 6, pp. 51–55, 1996.
  26. B. D. Wright, “Accreditation and the scholarship of assessment,” in Building a Scholarship of Assessment, T. W. Banta and Associates, Eds., pp. 240–258, Jossey-Bass, San Francisco, CA, USA, 2002.
  27. AACSB, Eligibility Procedures and Accreditation Standards for Business Accreditation, AACSB International—The Association to Advance Collegiate Schools of Business.
  28. EFMD, European Quality Improvement System—EQUIS Standards and Criteria.
  29. C. Temponi, “Continuous improvement framework: implications for academia,” Quality Assurance in Education, vol. 13, no. 1, pp. 17–36, 2005.
  30. M. Tam, “Measuring quality and performance in higher education,” Quality in Higher Education, vol. 7, no. 1, pp. 47–54, 2001.
  31. C. A. Palomba and T. W. Banta, Assessing Student Competence in Accredited Disciplines: Pioneering Approaches to Assessment in Higher Education, Stylus Publishing, Sterling, Va, USA, 2001.
  32. NCIHE, National Committee of Inquiry into Higher Education, Main Report, HMSO, London, United Kingdom.
  33. A. J. Hesketh, “Recruiting an elite? Employers' perceptions of graduate education and training,” Journal of Education and Work, vol. 13, no. 3, pp. 245–271, 2000.
  34. J. Cope, “Toward a dynamic learning perspective of entrepreneurship,” Entrepreneurship: Theory and Practice, vol. 29, no. 4, pp. 373–397, 2005.
  35. S. A. Gedeon, “Application of best practices in university entrepreneurship education: designing a new MBA program,” European Journal of Training and Development, vol. 38, no. 3, pp. 231–253, 2014.
  36. M. Bacigalupo, P. Kampylis, Y. Punie, and G. Van den Brande, EntreComp: The Entrepreneurship Competence Framework, Publication Office of the European Union, Luxembourg, 2016.
  37. T. M. Cooney, Entrepreneurship Skills for Growth-Orientated Businesses, Danish Business Authority, Copenhagen, Denmark, 2012.
  38. D. Valliere, S. A. Gedeon, and S. Wise, “A comprehensive framework for entrepreneurship education,” Special Issue on Entrepreneurial Education in the Journal of Business and Entrepreneurship, vol. 26, no. 1, pp. 89–120, 2015.
  39. E. Turgut-Dao, S. A. Gedeon, K. Sailer, F. Huber, and M. Franck, “Embedding experiential learning in cross-faculty entrepreneurship education,” in Proceedings of the ECSB Entrepreneurship Education Conference (3E), Lüneburg, Germany, 2015.
  40. R. A. Baron and G. D. Markman, “Beyond social capital: how social skills can enhance entrepreneurs' success,” Academy of Management Executive, vol. 14, no. 1, pp. 106–114, 2000.
  41. L. W. Busenitz, “Research on entrepreneurial alertness: sampling, measurement, and theoretical issues,” Journal of Small Business Management, vol. 34, no. 4, pp. 35–44, 1996.
  42. H. H. Stevenson and J. C. Jarillo, “A paradigm of entrepreneurship: entrepreneurial management,” Strategic Management Journal, vol. 11, no. 5, pp. 17–27, 1990.
  43. J. Ebben and A. Johnson, “Bootstrapping in small firms: an empirical analysis of change over time,” Journal of Business Venturing, vol. 21, no. 6, pp. 851–865, 2006.
  44. U. Hytti, P. Stenholm, J. Heinonen, and J. Seikkula-Leino, “Perceived learning outcomes in entrepreneurship education: the impact of student motivation and team behaviour,” Education and Training, vol. 52, no. 8, pp. 587–606, 2010.
  45. S. Gedeon, “Trust, ethics, character and competence in angel investing,” Entrepreneurial Practice Review, vol. 1, no. 4, pp. 38–51, 2011.
  46. D. Hagan, “Employer satisfaction with ICT graduates,” in Proceedings of the 6th Australasian Conference on Computing Education, vol. 30, pp. 119–123, 2004.
  47. H. Oosterbeek, M. van Praag, and A. Ijsselstein, “The impact of entrepreneurship education on entrepreneurship skills and motivation,” European Economic Review, vol. 54, no. 3, pp. 442–454, 2010.
  48. N. F. Krueger Jr., M. D. Reilly, and A. L. Carsrud, “Competing models of entrepreneurial intentions,” Journal of Business Venturing, vol. 15, no. 5, pp. 411–432, 2000.
  49. K. Moberg, L. Vestergaard, A. Fayolle et al., How to Assess and Evaluate the Influence of Entrepreneurship Education: A Report of the ASTEE Project with a User Guide to the Tools, 2014.
  50. A. Bandura, “Self-efficacy: toward a unifying theory of behavioral change,” Psychological Review, vol. 84, no. 2, pp. 191–215, 1977.
  51. J. E. McGee, M. Peterson, S. L. Mueller, and J. M. Sequeira, “Entrepreneurial self-efficacy: refining the measure,” Entrepreneurship: Theory and Practice, vol. 33, no. 4, pp. 965–988, 2009.
  52. J. R. Baum and E. A. Locke, “The relationship of entrepreneurial traits, skill, and motivation to subsequent venture growth,” Journal of Applied Psychology, vol. 89, no. 4, pp. 587–598, 2004.
  53. E. R. Thompson, “Individual entrepreneurial intent: construct clarification and development of an internationally reliable metric,” Entrepreneurship: Theory and Practice, vol. 33, no. 3, pp. 669–694, 2009.
  54. C. Schlaegel and M. Koenig, “Determinants of entrepreneurial intent: a meta-analytic test and integration of competing models,” Entrepreneurship: Theory and Practice, vol. 38, no. 2, pp. 291–332, 2014.
  55. D. Valliere, “An effectuation measure of entrepreneurial intent,” Procedia—Social and Behavioral Sciences, vol. 169, pp. 131–142, 2015.
  56. Princeton Review, Top 25 Entrepreneurship Programs, 2017.
  57. Top 25 Colleges for Entrepreneurship 2015, 2015.
  58. C. Chua, “Perception of quality in higher education,” in Proceedings of the Australian Universities Quality Forum, AUQA Occasional Publication, 2004.
  59. J. Douglas, A. Douglas, and B. Barnes, “Measuring student satisfaction at a UK university,” Quality Assurance in Education, vol. 14, no. 3, pp. 251–267, 2006.
  60. G. K. Kanji, A. Malek, and B. A. Tambi, “Total quality management in UK higher education institutions,” Total Quality Management, vol. 10, no. 1, pp. 129–153, 1999.
  61. S. Marginson, “University mission and identity for a post post-public era,” Higher Education Research & Development, vol. 26, no. 1, pp. 117–131, 2007.
  62. J. H. McMillan, “Beyond value-added education: improvement alone is not enough,” The Journal of Higher Education, vol. 59, no. 5, p. 564, 1988.
  63. A. W. Astin, “Why not try some new ways of measuring quality?” Educational Record, vol. 63, no. 2, pp. 10–15, 1982.
  64. R. Barnett, Improving Higher Education: Total Quality Care, Open University Press, Bristol, PA, USA, 1992.
  65. A. Lock, “Accreditation in business education,” Quality Assurance in Education, vol. 7, no. 2, pp. 68–76, 1999.
  66. J. Cullen, J. Joyce, T. Hassall, and M. Broadbent, “Quality in higher education: from monitoring to management,” Quality Assurance in Education, vol. 11, no. 1, pp. 5–15, 2003.
  67. T. A. Finkle, “Trends in the market for entrepreneurship faculty from 1989-2005,” Journal of Entrepreneurship Education, vol. 10, no. 1, 2007.
  68. A. E. Osseo-Asare and D. Longbottom, “The need for education and training in the use of the EFQM model for quality management in UK higher education institutions,” Quality Assurance in Education, vol. 10, no. 1, pp. 26–37, 2002.
  69. R. Barnett, “The idea of quality: voicing the educational,” Higher Education Quarterly, vol. 46, no. 1, pp. 3–19, 1992.
  70. C. D. Lein and C. M. Merz, “Faculty evaluation in schools of business: the impact of AACSB accreditation on promotion and tenure decisions,” Collegiate News and Views, Winter 1977-1978, pp. 21–24.
  71. H. Tong and A. L. Bures, “An empirical study of faculty evaluation systems: business faculty perceptions,” Journal of Education for Business, vol. 62, no. 7, pp. 319–322, 1987.
  72. W. L. Weis, “What's going on in business schools?” Decision Line, vol. 22, no. 1, pp. 3–4, 1991.
  73. W. C. Perkins, “Teaching: gaining importance?” Decision Line, vol. 23, no. 5, pp. 1–34, 1992.
  74. I. C. Ehie and D. Karathanos, “Business faculty performance evaluation based on the new AACSB accreditation standards,” Journal of Education for Business, vol. 69, no. 5, pp. 257–263, 1994.
  75. J. A. Yunker, “Viewpoint: doing things the hard way—problems with mission-linked AACSB accreditation standards and suggestions for improvement,” Journal of Education for Business, vol. 75, no. 6, pp. 348–353, 2000.
  76. R. H. Jantzen, “AACSB mission-linked standards: effects on the accreditation process,” Journal of Education for Business, vol. 75, no. 6, pp. 343–347, 2000.
  77. B. P. Arlinghaus, “The environment for professional interaction and relevant practical experience in AACSB-accredited accounting programs,” Journal of Education for Business, vol. 77, no. 1, pp. 38–45, 2002.
  78. W. A. Roberts, R. Johnson, and J. Groesbeck, “The faculty perspective on the impact of AACSB accreditation,” Academy of Educational Leadership Journal, vol. 8, no. 1, pp. 111–125, 2005.
  79. C. Lejeune and A. Vas, “Organizational culture and effectiveness in business schools: a test of the accreditation impact,” Journal of Management Development, vol. 28, no. 8, pp. 728–741, 2009.
  80. B. A. Beno, “The role of student learning outcomes in accreditation quality review,” New Directions for Community Colleges, vol. 2004, no. 126, pp. 65–72, 2004.
  81. R. H. Roller, B. K. Andrews, and S. L. Bovee, “Specialized accreditation of business schools: a comparison of alternative costs, benefits, and motivations,” Journal of Education for Business, vol. 78, no. 4, pp. 197–204, 2003.
  82. R. Dearing, The Dearing Report, The National Committee of Enquiry into Higher Education, 1997.
  83. L. Harvey and D. Green, Employer Satisfaction, Quality in Higher Education Project, Birmingham, United Kingdom, 1994.
  84. E. Dunne, N. Bennett, and C. Carré, “Higher education: core skills in a learning society,” Journal of Education Policy, vol. 12, no. 6, pp. 511–525, 1997.
  85. DETYA, Employer Satisfaction with Graduate Skills Research Report, Department of Education, Training, and Youth Affairs, Canberra, Australia.
  86. R. Blundell, L. Dearden, C. Meghir, and B. Sianesi, “Human capital investment: the returns from education and training to the individual, the firm and the economy,” Fiscal Studies, vol. 20, no. 1, pp. 1–23, 2005.
  87. G. von Graevenitz, D. Harhoff, and R. Weber, “The effects of entrepreneurship education,” Journal of Economic Behavior and Organization, vol. 76, no. 1, pp. 90–112, 2010.
  88. A. Fayolle, B. Gailly, and N. Lassas-Clerc, “Assessing the impact of entrepreneurship education programmes: a new methodology,” Journal of European Industrial Training, vol. 30, no. 9, pp. 701–720, 2006.
  89. L. Mai, “A comparative study between UK and US: the student satisfaction in higher education and its influential factors,” Journal of Marketing Management, vol. 21, no. 7-8, pp. 859–878, 2005.
  90. A. Parasuraman, V. A. Zeithaml, and L. L. Berry, “SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality,” Journal of Retailing, vol. 64, no. 1, pp. 12–40, 1988.
  91. F. Buttle, “SERVQUAL: review, critique, research agenda,” European Journal of Marketing, vol. 30, no. 1, pp. 8–32, 1996.
  92. S. Ambrose, T. Huston, and M. Norman, “A qualitative method for assessing faculty satisfaction,” Research in Higher Education, vol. 46, no. 7, pp. 803–830, 2005.
  93. T. Manger and O.-J. Eikeland, “Factors predicting staff's intentions to leave the university,” Higher Education, vol. 19, no. 3, pp. 281–291, 1990.
  94. L. L. B. Barnes, M. O. Agago, and W. T. Coombs, “Effects of job-related stress on faculty intention to leave academia,” Research in Higher Education, vol. 39, no. 4, pp. 457–469, 1998.
  95. N. A. Bowman and M. N. Bastedo, “Getting on the front page: organizational reputation, status signals, and the impact of U.S. News and World Report on student decisions,” Research in Higher Education, vol. 50, no. 5, pp. 415–436, 2009.
  96. M. Clarke, “The impact of higher education rankings on student access, choice, and opportunity,” Higher Education in Europe, vol. 2, no. 1, pp. 59–70, 2007.
  97. D. D. Dill and M. Soo, “Academic quality, league tables, and public policy: a cross-national analysis of university ranking systems,” Higher Education, vol. 49, no. 4, pp. 495–533, 2005.
  98. E. T. Pascarella, “Identifying excellence in undergraduate education,” Change, vol. 33, no. 3, pp. 19–23, 2001.
  99. USASBE, National Model Program Awards Criteria, United States Association of Small Business and Entrepreneurship, 2017.
  100. MBNQA, Malcolm Baldrige National Quality Award 2017-2018, Education Criteria for Performance Excellence, National Institute of Standards and Technology, Gaithersburg, MD, USA, 2017.
  101. J. Evans, “Critical linkages in the Baldrige award criteria: research models and educational challenges,” Quality Management Journal, vol. 5, no. 1, pp. 13–30, 1997.
  102. B. B. Flynn and B. Saladin, “Further evidence on the validity of the theoretical models underlying the Baldrige criteria,” Journal of Operations Management, vol. 19, no. 6, pp. 617–652, 2001.
  103. M. A. Badri, H. Selim, K. Alshare, E. E. Grandon, H. Younis, and M. Abdulla, “The Baldrige education criteria for performance excellence framework: empirical test and validation,” International Journal of Quality and Reliability Management, vol. 23, no. 9, pp. 1118–1157, 2006.
  104. L. Pittaway, P. Hannon, A. Gibb, and J. Thompson, “Assessment practice in enterprise education,” International Journal of Entrepreneurial Behaviour and Research, vol. 15, no. 1, pp. 71–93, 2009.
  105. JCSEE Joint Committee on Standards for Educational Evaluation, The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, Sage, Thousand Oaks, CA, USA, 2nd edition, 1994.
  106. JCSEE Joint Committee on Standards for Educational Evaluation, The Student Evaluation Standards: How to Improve Evaluations of Students, Corwin Press, Thousand Oaks, CA, USA, 2003.
  107. J. Arter, “Classroom assessment for student learning (CASL) perspective on the JCSEE student evaluation standards,” in JCSEE National Conference on Benchmarking Student Evaluation Practices, AERA 2009, Division H Symposium, 2009.
  108. J. Shaftel and T. L. Shaftel, “Educational assessment and the AACSB,” Issues in Accounting Education, vol. 22, no. 2, pp. 215–232, 2007.
  109. R. W. Tyler, “The objectives and plans for a national assessment of educational progress,” Journal of Educational Measurement, vol. 3, no. 1, pp. 1–4, 1966.
  110. E. L. Baker, “Testing and assessment: a progress report,” Educational Assessment, vol. 7, no. 1, pp. 1–12, 2001.
  111. K. Martell, “Assessing student learning: are business schools making the grade?” Journal of Education for Business, vol. 82, no. 4, pp. 189–195, 2007.
  112. D. Eseryel, “Approaches to evaluation of training: theory & practice,” Educational Technology and Society, vol. 5, no. 2, pp. 93–98, 2002.
  113. D. L. Kirkpatrick, Evaluating Training Programs, Tata McGraw-Hill Education, India, 1975.
  114. M. Y. Christ, “Fostering the professional development of every business student: the Valparaiso University College of Business Administration Assessment Center,” in Assessment in the Disciplines, Assessment of Student Learning in Business Schools: Best Practices Each Step of the Way, K. Martell and T. Calderon, Eds., vol. 1, Association for Institutional Research, Florida State University, Tallahassee, Fla, USA, 2005.
  115. F. Wilson, J. Kickul, and D. Marlino, “Gender, entrepreneurial self-efficacy, and entrepreneurial career intentions: implications for entrepreneurship education,” Entrepreneurship: Theory and Practice, vol. 31, no. 3, pp. 387–406, 2007.
  116. A. Fayolle, “Evaluation of entrepreneurship education: behaviour performing or intention increasing?” International Journal of Entrepreneurship and Small Business, vol. 2, no. 1, pp. 89–98, 2005.

Copyright © 2017 Steven A. Gedeon. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
