Education Research International


Research Article | Open Access


Joke Coens, Delphine Sasanguie, Geraldine Clarebout, Jan Elen, "Disentangling Teaching and Summative Assessment in Higher Education? Pros and Cons from Students’ Perspectives", Education Research International, vol. 2012, Article ID 491097, 9 pages, 2012.

Disentangling Teaching and Summative Assessment in Higher Education? Pros and Cons from Students’ Perspectives

Academic Editor: Lieven Verschaffel
Received: 13 Jan 2012
Revised: 26 Mar 2012
Accepted: 29 Mar 2012
Published: 03 Jun 2012


The impact of the examiner on students’ perspectives on learning and assessment is investigated. A disentangling of support and evaluation roles was implemented for two courses. The lecturer did not have the responsibility for the summative evaluation. Instead, this was the task of an examiner. Students’ conceptions of the disentangling were investigated by means of focus groups (N = 21) before and open-ended questionnaires (N = 50 and N = 18) after the actual disentangled courses. Results showed both pros and cons. Pros were that students felt free to express their own opinion during the examination, that they could ‘team up’ with the lecturer during the academic year, and that some of them believed this enabled the examiner to pass a more objective judgment on them. Cons were that some students felt uncomfortable because they did not know the examiner, that they feared a wrong first impression would result in a less objective evaluation, and that some students reported that they would behave less respectfully toward the lecturer if this person no longer examined them. Students’ arguments thus reveal what matters to them in assessment: they desire a reliable, valid, and transparent evaluation.

1. Introduction

The regulative importance of assessment has already received a lot of research attention (e.g., [1]). This research revealed that assessment considerably influences students’ approaches to learning and their quality of learning [2–4]. Students adapt their learning strategies based on their conceptions and perceptions of the evaluation and assessment practices [5, 6], while the latter are in turn influenced by their previous experiences with assessments, their uses, and consequences [7].

Until now, however, researchers have mainly focused on the impact of the form of the evaluation on the conceptions of students. Scouller [8], for example, demonstrated that, in the case of multiple-choice examinations, students are more inclined to apply a surface learning approach (e.g., memorizing), whereas in the case of essay examinations, they tend to use a deep learning approach (e.g., the creation of a concept map).

Little research has been undertaken regarding the influence of other aspects of assessment on students’ perceptions of the evaluation and on their possible adaptation of learning approaches. Hirschfeld and Brown [9], though, stress the possible influence of the year level of New Zealand secondary school students on their conceptions of assessment. Until the third year of secondary schooling, these children are not presented with high-stakes external examinations or qualifications-related assessments; all assessments are done for formative purposes. After that, students participate in nationally moderated achievement tests which are externally administered. Although the authors expected the conceptions about assessment of the students before and after the third grade to differ, this was not the case. However, we believe that the “demographic factor” described by Hirschfeld and Brown [9] is actually also just a “form” of evaluation being measured: is there an influence of formative versus summative assessment on the conceptions of students? In their observations of secondary school students, apparently there was not. In line with this, Peterson and Irving [10] observed that secondary school students do not recognise the broad spectrum of assessment types that teachers use. Assessment is typically seen by this group as a formal event, completed by students individually and marked by teachers (i.e., summative assessment).

In contrast to these studies, within the context of the current study, we are interested in the conceptions of higher education students. Moreover, we assume that one aspect that could have an influence on students’ perceptions and on their learning strategies is the examiner him/herself. Who is responsible for the final examination? The lecturer, who is also responsible for the teaching, or a specially appointed (external) examiner? Within the context of the reported study, it is assumed that there is an impact of the evaluator on students’ perceptions of the evaluation and on their learning strategies.

With regard to the examiner, two different practices can be observed in higher education. On the one hand, the coupling between the role of lecturer and the role of examiner is often assumed to be self-evident. Within the context of Flemish higher education, for example, there is an intrinsic and mandated link between teaching and summative evaluation: the lecturer who engages in teaching is also responsible for developing, administering, and grading the summative assessments [11]. On the other hand, in other countries a coupling is not the common and evident practice. In Norwegian higher education, for example, every examination paper is assessed by an independent examiner (i.e., one from another institution) [12]. Furthermore, the United Kingdom has a tradition of using “external examiners”: for each course in undergraduate or master’s programmes, examiners from other institutions, or even from outside higher education (e.g., from industry), are appointed [13]. In Flanders too, a number of exceptions to the dominant praxis of a coupling exist, such as the evaluation of doctoral dissertations or cases of family ties between a student and an examiner; in the latter case, an external examiner is appointed.

The impact of the examiner on the students’ perspectives on learning and assessment forms the focus of this study. This impact is investigated by considering students’ perspectives on a strict disentangling of the support and the evaluation role. In this context, to disentangle means that a lecturer does not have the responsibility for the summative evaluation; this is the task of an examiner. The lecturer has a supportive role and, as a consequence, bears the responsibility for the formative evaluation.

A disentangling between support and evaluation roles was implemented for two courses in a Flemish Educational Sciences programme. This disentangling entailed the lecturers not having the responsibility for the summative evaluation. The latter was the task of independent examiners. The examiners evaluated whether the students reached the learning objectives (i.e., summative evaluation). The lecturers only had a supportive role and were responsible for the formative evaluation. Thus, the obviousness of the intrinsic link between teaching and summative evaluation in Flemish higher education was questioned. Students’ conceptions of the relationship between support and evaluation before and after the disentangled courses were investigated by means of focus groups (N = 21) and questionnaires (N = 50 and N = 18), respectively.

The basic assumption was that students, as a result of a disentangling, (a) would perceive the learning environment as more secure and (b) would perceive the lecturer as a real coach. It was further assumed that the relationship between the lecturer and the student would change because the lecturer was no longer both judge and party.

2. Method

2.1. Disentangled Courses

A disentangling was implemented within two courses of the programme “Educational Sciences” at the KU Leuven Kulak (one course in the first year of study and one course in the second year). All students registered for one of the disentangled courses were informed of the disentangling. This was necessary, as the disentangling required an exception to the regular rules of the university. The conditions of the disentangling were the same for both courses. The lecturers were responsible for the lectures and the formative evaluation as usual, and an independent examiner was appointed for the summative assessment. The person who was the lecturer in the first-year course was the examiner in the second-year course and vice versa. This way, the students of the second year were not totally unfamiliar with the examiner; this was, however, the case for the first-year students, who did not meet the external examiner prior to the exam. Importantly, the second-year students had also already taken other courses with the same lecturer besides the disentangled one. The latter group could therefore compare the disentangled context with the traditional coupled one.

2.2. Focus Group Method

To investigate students’ conceptions prior to the start of the disentangled courses, the focus group method was used. Focus groups are group discussions concerning a certain subject [14, 15]. This method is suitable for determining possible problems with renewal or change and for gaining insight into the opinions of the respondents [14, 16]. An informal atmosphere is typical of the focus group method, which encourages the respondents to communicate their point of view uninhibitedly [14]. The focus group method is appropriate for obtaining accurate information on the respondents’ thoughts [17, 18]. A scenario was developed which served as a guideline for the focus groups. The scenario evolved from fictitious cases in which situations of radical disentangling (i.e., a disentangling implemented throughout an entire course, in which the lecturer and the evaluator do not consult with one another, both focusing exclusively on the learning goals to teach or to evaluate) were described. A moderator led all the discussions. The conversations took place prior to the start of the disentangled courses. At the moment of the focus groups, the students were already informed of the planned disentangling.

All students registered for one of the disentangled courses (N = 52) were invited to participate in a focus group on a voluntary basis. They were required to sign up via email. In total, 21 students were willing to participate. Eight of them were first-year students, 10 were second-year students, and three of them attended a preparatory programme.

Students were divided into small groups for the focus groups. Four focus group sessions took place in classrooms at the campus. Each session took approximately one hour. A number of the respondents participated in two (different) sessions; the remainder participated in only one session. The number of participants per session ranged from three to eleven.

The focus groups were recorded using a digital camera. The recordings were transcribed, and the transcripts were analyzed. First, irrelevant information was removed from the transcripts (e.g., the participants introducing themselves to each other). Next, the text was divided into fragments, each giving information on a certain topic (e.g., an advantage or a disadvantage according to the students). The fragments were then rearranged, and all fragments concerning the same topic were grouped together. On the basis of these gathered fragments, the advantages and disadvantages related to a certain topic were detected and the opinions of students were mapped.

2.3. Open-Ended Questionnaires

To investigate students’ experiences and their conceptions of a disentangling after their actual confrontation with it, the survey method was used. Students were asked to complete questionnaires, as this has proven to be an efficient way to collect data from a large group of respondents.

Students were required to complete two different questionnaires at different times. The first questionnaire was conducted at the end of the course but before the final examination. The second questionnaire had to be completed after the final examination.

The first questionnaire was a paper-and-pencil one, which consisted of two questions (Figure 1). Students were asked to complete it during the final lesson of the course; the lecturer allocated time for this. Fifty of the 52 students completed the questionnaire.

The second questionnaire was a digital one. It consisted of one question (Figure 2). After the examination, students received an email with an invitation to complete the questionnaire. Students who did not participate spontaneously received a reminder. Eventually, 18 students completed the second questionnaire.

Relatively open questions were asked. This format was chosen because the focus groups had revealed that students’ opinions about disentangling varied widely; broad questions could cover the wide range of arguments that students had stated in their answers. The answers of the students were transcribed completely and analyzed in a similar way to the transcripts from the focus group method.

By means of these two different methods (i.e., method triangulation), we were able to strengthen the validity of our observations [19]. Furthermore, the grounded theory methodology [20] was applied to gradually detect categories of students’ conceptions.

3. Results

First, the results of the focus groups are reported. Five main topics could be detected amongst the conceptions of the students: the relationship between the lecturer and the students, the objectivity of the evaluator, the examination, the learning objectives, and the process evaluation. Next, the results of the questionnaires are discussed. Here, three main topics were brought forward by the students: the teaching/support, the exam preparation, and the evaluation.

3.1. Students’ Conceptions before the Start of the Course

At the start of the academic year, students were told that a disentangling would be implemented for a particular course in the second semester. During the focus groups, students indicated that they initially reacted negatively when they were informed about the disentangling. They were afraid a disentangling would have a negative influence on their marks, and they did not see the potential benefits a disentangling could bring. S: “I did not see the use of it, and that’s why I thought it would be better to leave it the way it is.” (fg 5, 33-34). S: “I think that everyone thought that our marks would suffer from it. Or that the assessment wouldn’t be correct.” M: “So you were worried about your marks.” S: “Most of us.” (fg 4, 20–23),

where S: Student, fg: Focus Group, and M: Moderator.

3.1.1. The Relationship between the Lecturer and the Students

The respondents indicated that a disentangling could lead to a changed relationship between lecturers and students. Within a disentangled system, a lecturer is no longer both judge and party. They are entirely the partner of the students and can focus on the support and supervision of the students. The respondents indicated that the teacher and the students can team up together. Because of this, students may feel less inhibited, for example, to ask questions. The mental “distance” between the teacher and the students may be smaller. S: “I think he will be more supportive in respect of the students, because they have to reach something together.” (fg 3, 274-275). M: “Do you think there will be a different atmosphere when the teacher is not the evaluator?” S: “Yes.” S: “I think so. Now, students keep their lips sealed.” S: “Asking more questions. Now, students think that the teacher may perceive them as stupid.” S: “Now, students say [to each other]: “will you ask the question, I’ve asked this already once”.” (fg 5, 257–268).

According to the respondents, a disentangling may also have other consequences for the relationship between the teacher and the students. Potentially, some students may, for example, show less respect to the lecturer. Others may not make any effort to build a relationship and won’t attend the lessons. S: “And maybe, if you know that the lecturer does not examine, students will be less inhibited. I do not know. It’s a consideration.” M: “That the distance between the lecturer and the students is too small.” S: “Not so much too small, but, for example, you do not have to worry if you insult the lecturer, or if you are angry. You do not have to control yourself. Now, you know that you still have to do an exam with the lecturer.” (fg 1, 298–306). S: “Students who attend the lessons only because they do not want the lecturer to see they are absent won’t attend the lessons when a segregation is set up. For students who attend the lessons because it helps to internalize what they have learned, it won’t make a difference. I think they will still attend the lessons.” (fg 3, 139–147). M: “Do you think students will attend the lessons less if an evaluator examines?” S: “I think it will stimulate.” S: “Not for everybody, but I think it can stimulate skipping classes. If you know that it is not necessary that the lecturer knows you, because he does not examine…” S: “Certainly if it is an oral examination. Then, it makes little difference.” (fg 1, 230–236).

3.1.2. Objectivity

According to the respondents, it may be easier for an examiner to pass an objective judgement on the students if he does not know them. Within a disentangled system, there may be equal chances for everybody. However, there may be a risk that a first impression plays a role, in particular in the case of oral examinations. S: “The advantage of a disentangling is that the objectivity is guaranteed. Because, if you organise a practical lesson, you get to know the students and you are prejudiced during the examinations. If someone else examines, then …” M: “… there are equal chances for everybody.” S: “Indeed.” (fg 1, 56–62). S: “Suppose, during the lessons, I make out a case for abortion, and the lecturer is against abortion. If I do an exam with that lecturer, he will know that I have another opinion and will be prejudiced.” (fg 1, 78–80). S: “If the lecturer does not like you, you’ll be glad it’s someone else.” (fg 1, 163-164). S: “Suppose you missed some classes, you were not able to attend them. The lecturer will be prejudiced. During the exam, he might criticize you. If it is someone else…” M: “He does not know you were absent?” S: “He does not know that, and that’s better and more objective.” (fg 5, 193–197). S: “But in any case, a first impression plays a part. If the examiner does not like girls with curly hair, or he does not like my face. Or I am nervous and I stutter, he might think “Oh no”. Or I cannot answer the first question, he might think I did not study. The examiner is also a human. Only if a robot takes the exam, it will be objective.” (fg 2, 580–586).

3.1.3. The Examination

The respondents indicated that, when a disentangling is implemented, it may be easier for students to give their own opinion concerning a certain matter and that they may take less into consideration the opinion of the lecturer. S: “If someone else examines, you do not know his opinion concerning certain matters, and it will be easier to give your own opinion.” (fg 3, 85-86). S: “I think that—if you do not know the person in front of you—it will be easier to give your own opinion.” (fg 5, 536-537).

On the other hand, the respondents are afraid that, as a result of a disentangling, they may lose certainty. Within the “regular” coupled system, students focus themselves on the priorities of the lecturer during studying. They indicate that they focus on the things that are emphasized by the lecturer. They have an idea about the importance of different parts of the subject matter, and they know the personal opinion of the lecturer. Indeed, within a regular system, lecturers sometimes drop a hint about the examination.

The respondents believe that a disentangling may also cause problems. A lecturer and an evaluator may have different expectations. The evaluator may have higher expectations than the lecturer. It may also be that the evaluator asks less in-depth questions. According to the respondents, it is necessary that the teaching and evaluation are geared to one another, in order for a disentangling to succeed. S: “Every lecturer emphasizes different things. One lecturer thinks that is important, another lecturer thinks something else is important. If you answer the question of an evaluator, you always try to emphasize those things that were emphasized during the lessons.” (fg 3, 104–107). S: “Now you know which parts of the course you really have to study because there’s a big chance there will be a question about it in the exam. […] Sometimes the lecturer gives an examination question, in that case you already know one question. When a disentangling is set up, that’s impossible.” (fg 5, 139–144). S: “You sit there in front of an evaluator you do not know. And you rattle different things without knowing if the things you say are the things the evaluator wants to hear. If the lecturer examines, you know the evaluator and you know what he wants to hear.” (fg 5, 531–535). S: “Lecturers may have different expectations. It is possible that one lecturer gives the impression that a particular part of the course is less important. And it may be that another lecturer gives priority to that same thing. Or it may be that he has higher expectations.” (fg 4, 133–138). S: “I think there must be some kind of equivalence between the lecturer and the evaluator.” (fg 3, 182-183). S: “The evaluator might ask less in-depth questions, or ask less about the details.” S: “Yeah, for example additional questions.” M: “When a disentangling is set up?” S: “Yeah.” M: “That he stays superficial?” S: “Yeah.” M: “And do you think that’s a good thing?” S: “That’s a good thing.” (fg 5, 297–304).

The respondents expected to be more stressed and nervous during the examination because they do not know the evaluator. They were convinced that a disentangling may influence their marks, especially within the context of an oral examination. For this reason, the respondents preferred the idea of a written exam when a disentangling is implemented. S: “You are doing an oral exam in front of a perfect stranger. I think it is already difficult enough when you know the evaluator. In that case, you are less afraid because you know who the evaluator is.” (fg 3, 116–118). S: “You have never seen him, and you know nothing about that person. Because of the nerves, you lose points, and that is a disadvantage.” (fg 1, 160–163). M: “You think there is a difference between an oral and a written exam?” S: “Yeah, a written exam is more acceptable.” S: “Yeah.” S: “In that case, it is you and your sheet. If the strange evaluator stands in front of the room, it does not bother you so much. In the case of an oral examination, it really plays a part. You know nothing about that person. Is he severe? Or not? You do not know that.” (fg 1, 205–210).

3.1.4. Learning Objectives

The respondents indicated that a good list of learning objectives may be useful (assuming that learning objectives are the point of departure for designing the teaching and for designing the examination). One student referred to a previous educational experience: a lecturer formulated minimum goals for every chapter. The student then used the goals as a guide during studying. A list of learning objectives may also counteract the uncertainty of the students and make them feel more comfortable. S: “During my previous education, we had some minimum goals for each chapter. You knew that if you could answer those questions, it would be sufficient. I really used those goals.” (fg 2, 452–455). S: “Yeah, a list of learning goals reveals the accents. You know that those parts of the course about which a lot of goals are formulated are more important than the other parts.” (fg 2, 467–470). S: “On the other side, if you have a list of learning objectives, you can focus on those goals, and you do not have to study all the details.” (fg 5, 146–148).

On the other hand, the respondents thought that a lecturer and an evaluator may interpret a set of learning objectives in different ways. S: “I wonder, if you give a list of learning objectives to lecturer A and to lecturer B and you ask them to formulate some examination questions. I’d like to see the questions of both lecturers, you will see that they are very different.” (fg 1, 110–112). S: “I think, a list of learning objectives, lecturers can interpret them differently, I think the difference between the lecturer and the evaluator may be too big.” (fg 1, 118-119).

Some respondents indicated that a disentangling may result in a “teaching to the target” phenomenon [21]. The respondents indicated that the learning objectives may dominate the lessons. They thought that the lecturers may mainly lecture and that they would not take time for things that broaden students’ views. Other respondents, however, thought that there would be even more opportunities for these things in a disentangled context. They assumed that students will have to internalize independently what they have learned and that, during the lessons, the focus would be on skills and attitudes. S: “If he really focuses on the theoretical learning objectives… if he wants us to know everything when we leave the classroom…” (fg 3, 248–251). S: “I think there will be more opportunities for discussion. I think, if the learning objectives are on paper, students can study independently. During the lessons, the lecturer will focus on less theoretical aspects, for example giving your own opinion about something.” (fg 1, 426–432).

3.1.5. No Process Evaluation

The respondents indicated that, when a disentangling is implemented, the evaluator cannot take into account the learning process of the students. Did the student make progress? Did they make some effort? Did they show interest? Did they attend the lessons? The respondents indicated that, within the “regular” entangled system, students who make an effort profit from this: when the evaluator hesitates between passing and failing, he will tend to pull the student through. When a disentangling is set up, this bonus may be lost. S: “What about the progress someone made? A lecturer sees it when someone makes progress and he can take that into account.” (fg 1, 212–225). S: “The evaluator does not know which students attended the lessons, so he cannot take that into account. It’s a pity for the students who take pains to attend the lessons.” (fg 3, 133–138).

3.2. Students’ Conceptions at the End of the Course
3.2.1. Comments Regarding the Teaching/Support

Students indicated that they experienced a small distance between the teacher and the students. The disentangling resulted in a close relationship between the lecturer and the students. As a result, students were less inhibited to ask questions. It was easier to make contact with the lecturer. Knowing that the lecturer does not oversee the final examination puts the students’ minds at rest. “You did not really have to be afraid to speak to the lecturer. Probably, the fact that she did not evaluate unconsciously played a part.” “The distance between the professor and the student was smaller. And we dared to ask questions and raise our concerns during the lessons sooner.” “The fact that the lecturer only has the job to help us. This is reassuring. You can trust that person.”

One participant indicated that, because of the disentangling, they felt less inhibited to make negative comments. “I personally felt less inhibited to comment because I knew that she would not examine. […] Why would you still do your best to cooperate?”

3.2.2. Comments Regarding the Exam Preparation

Students prefer it when they know exactly which material is most relevant to the examination. Within a disentangled system, they feel that this is hard to know. According to the participants, an interim test (composed by the evaluator) could outweigh this disadvantage, as this would give the students an opportunity to become familiar with the phrasing of the questions and to have an idea of the important contents. “Uncertainty about the requirements of the evaluator. You cannot assess the requirements of the evaluator.” “A disentangling feels strange: do these two professors require the same, do they expect the same? Do they emphasize the same?” “Explanation and clarification by the examiner. The example questions of the evaluator clearly indicated what was expected on the exam.” “Clarification by the means of example questions. Provides support, less fear of exams.”

Students felt more confident about the examination when they had the opportunity to meet the evaluator in advance. “Meeting the evaluator. It is no longer new.” “It is good to know that person.” “More contact with the evaluator. Reassurance.”

A disentangling, according to the students, had the advantage that all students can start the exam on an even footing. Students thought that for people who are perceived to be a nuisance during classes, it could be a disadvantage if they had to take an exam with the same lecturer. “It feels like you can start the exam with a blank sheet.”

3.2.3. Comments Regarding the Evaluation

Students experienced that the lecturer and the examiner emphasized different aspects of the learning content. They perceived this as a disadvantage of the disentangling. “Prof. X emphasized other aspects than Prof. Y” “Professor X asked a lot of questions about the school culture. This was not emphasized by the regular professor.”

Students did not agree with each other about the process evaluation. One portion of the students believed it was good that the examiner could not take into account the learning process, which often involves ups and downs; only a “good” end product can be evaluated. Another portion did not consider this an advantage; they wanted to be rewarded for all the efforts that they had made during the academic year. “If the text was not that good, you did not have to be concerned about it: the professor who evaluates only reads the final text, and for me, that is very positive that he does not have to evaluate the intermediate steps.” “I think it is not only the final product that needs to be evaluated. A whole process preceded it, and this process is not taken into consideration. It’s a pity. Especially because you spent a lot of time and energy on it.”

4. Conclusion and Discussion

Previous research has shown that assessment influences both students’ perspectives and their learning approaches (e.g., [2, 4, 8]). In this research, specific attention is devoted to the nature of the assessment. To date, however, the impact of the examiner has not been the topic of research.

Whilst a coupling between the role of a lecturer and an examiner is assumed to be self-evident in the Flemish higher education context, this is far from the dominant praxis in other countries (e.g., Norway and UK; [12]). In these countries, one person is responsible for the lectures and another takes charge of the (summative) examination.

To examine whether this organizational decision has an effect on students’ perspectives on the evaluation, a disentangling between support and evaluation roles was implemented for two courses in a Flemish Educational Sciences programme at the KU Leuven Kulak. Students’ conceptions about this disentangling were investigated, both before and after the students were confronted with it, by means of focus groups on the one hand and questionnaires on the other hand. The results of both methods bore a large degree of similarity. This triangulation therefore enabled us to enlarge the validity of our observations [19].

The results showed that within the “regular” coupled system, students fully realise that the same person teaches and evaluates them. Students take this into account by, for instance, paying attention to the lecturer from a strategic point of view. During the lectures, they are kind to the lecturer and are (or pretend to be) interested because they realize that the lecturer will also examine them at the end of the term. While studying, too, they keep in mind that the examination is drawn up by the lecturer: they use the hints the lecturer gave them during lectures, and they focus on those parts of the course that were emphasized during lectures. Furthermore, it is remarkable that the respondents think that within a disentangled system, students will treat the lecturer in a less respectful way.

Results also showed that students hold strong views about (dis)entangling the role of the lecturer and that of the examiner; possible advantages and disadvantages conflict with each other. Three important findings are discussed here. First, it is typical of a disentangling that the students do not know the evaluator. According to students, this makes it easier for them to express their own opinion during the examination; they are less inclined to copy the personal opinion of the lecturer. On the other hand, not knowing the evaluator also evokes stress, which makes students feel uncomfortable, especially in the case of an oral examination. Second, it is typical of a disentangling that the evaluator does not know the students. This has some important consequences for students’ conceptions. Not knowing the students could benefit the objectivity of the evaluation: according to the students, it is easier for an examiner to pass objective judgment on students he or she does not know. Conversely, first impressions could result in a less objective evaluation. Not knowing the students also means that the examiner cannot take their learning processes into account. Third, a disentangling has consequences for the relationship between the lecturer and the students. On the one hand, students indicate that as a result of a disentangling they can “team up” with the lecturer. On the other hand, a disentangling could also have less positive consequences, such as students behaving in a less respectful way.

The arguments of the students reveal what matters to them in assessment. Most importantly, they want a reliable evaluation: an evaluator who does not take irrelevant factors into account, such as first impressions or attendance during lectures. Next, students want a valid evaluation: in particular, it is important to them that the evaluator takes the content of the lessons into consideration. Finally, students want a transparent evaluation: they want to know what to expect.

Students’ conceptions before and after they were confronted with a disentangling bear a large degree of similarity; the topics that students broached were largely the same. Nevertheless, there are some remarkable differences between students’ conceptions before the confrontation with a disentangling and after their experience of it.

First, students agreed that a disentangling results in a changed relationship between the lecturer and the students. Whereas they initially indicated both positive and negative aspects of this changed relationship, after the confrontation with a disentangling they highlighted especially the positive aspects, notably a smaller distance between the lecturer and the students. After the confrontation with the disentangling, only one student indicated that the disentangling made them feel less inhibited about making negative comments. This is in line with previous research. Chetcuti et al. [22], for example, demonstrated that a disentangling can have a liberating effect on students: in their study, the students declared that they did not want to disclose their weaknesses to their lecturer, because they feared this might negatively influence their summative assessment. Moreover, Fong [23] reported that, in the case of a coupling, students can be too emotionally anxious to show their true abilities, as they only want to display their good characteristics to the lecturers. In contrast, however, Barrow [24] revealed that, in a coupling context, the relationship between the students and the lecturer can also be good. Going beyond this, some studies have even argued that a disentangled “anonymous” system of summative assessment can be detrimental, especially for first-year students [25] or for students with high cognitive test anxiety or poor study skills [26]; the latter group appears to perform better when external evaluative pressure is removed.

Second, students indicated, both before and after their confrontation with a disentangled course, that within a disentangled system it is hard for them to know the preferences of the evaluator, that is, which parts of the learning content the evaluator considers important. Initially, students recognized that this could make it easier for them to give their own opinion on a certain matter and to give less consideration to the lecturer’s opinion when formulating an answer. After the confrontation with the disentangling, students no longer mentioned this aspect. Instead, they focused on the fact that this lack of information involves uncertainty: students felt uncertain because they had the feeling that they did not know what to expect from the examination. In the previous research literature, Hoglund [27] wrote that students should interpret information from an external examiner as crucial for their learning and growth process. External examiners, he argues, can see things from a different perspective and are well placed to provide information that corrects students’ mistakes.

Third, considering the reliability of the evaluation: whereas students initially indicated a risk that a first impression would influence an evaluator who does not know the students, they did not mention this afterwards. Instead, they only indicated that all students had equal chances. This finding is similar to others reported previously. Brennan [25], for example, argued in favor of a disentangling: when the examiner is the same person as the lecturer, and the examination is hence not anonymous, students may develop strategies to impress the lecturer (e.g., becoming an exemplary student who is never absent from lectures).

Finally, whereas students were initially rather negative about the fact that within a disentangled system the learning process cannot be taken into consideration and students cannot be rewarded for the efforts they have made, after the confrontation with the disentangling they also indicated a rather positive aspect of it. Students then said that it can also be a good thing that the evaluator only has to evaluate a finished product. According to the students, a learning process often involves ups and downs, and it might be an advantage for them that the evaluator cannot take the “downs” into account. This finding reflects the arguments of quality control and quality enhancement that are inextricably connected with appointing an authority from outside the classroom to evaluate student learning. Previous studies (e.g., [28]) have shown that evaluations by external examiners in higher education are more objective: lecturers often know their students too well, possibly resulting in either a positive (i.e., halo effect) or a negative (i.e., horn effect) bias towards them. The same authors also demonstrated that the requirement to justify their marks made examiners more prudent. According to Rees and Sheard [29], the quality of external marking could therefore be enhanced through discussion and negotiation between different independent examiners.

Overall, it can be concluded that Flemish students are pleased with the current coupled system of evaluation. It gives them certainty (e.g., the certainty that teaching and evaluation are geared to one another), and this results in a feeling of security that is lost within a disentangled system. The basic assumptions of the study, namely that a disentangling could be meaningful in the sense that, as a result of it, students (a) would perceive the learning environment as more secure and (b) would perceive the lecturer as a real coach, are confirmed. Students clearly indicate that, as a result of a disentangling, the relationship between the lecturer and the students changes in a positive way: the distance between the lecturer and the students becomes smaller, which means, amongst other things, that students feel less inhibited about asking questions. However, this advantage conflicts with some other possible disadvantages related to a disentangling (e.g., uncertainty).

This study was largely exploratory in nature and therefore entailed some limitations. First, only two courses were disentangled, with relatively small samples. Second, only one lecturer and one examiner, who switched places, were involved, so it cannot be ruled out that personal characteristics of these persons played a role. Third, the response rate for the second questionnaire (i.e., after the actual disentangling) was rather low. However, as the results of the focus groups and of the first and second questionnaires bore a large degree of similarity, we do not consider this a real limitation. Fourth, our approach was exclusively qualitative; further research is recommended to investigate whether our results would be confirmed quantitatively. Finally, the reported study focused on students’ conceptions. Students’ learning approaches were not considered but could be the subject of future research. Students adapt their learning strategies based on their conceptions and perceptions of the evaluation and assessment practices [5, 6]; it could, for example, be hypothesized that a disentangling would result in deep learning approaches.

Future studies can build on this questioning of the assumed self-evidence of the intrinsic link between teaching and summative evaluation in (Flemish) higher education. This study also delivered further insights into the importance of the distinction between formative and summative evaluation. Moreover, it revealed that who fulfils the evaluation task matters, for practical, methodological, and substantive reasons. This entails substantial theoretical and methodological implications: the examiner should be included in theory, and in reports on evaluations the person who fulfils the evaluation task should be mentioned and taken into account.

Authors’ Contribution

J. Coens and D. Sasanguie contributed equally to this work.


References

  1. F. Dochy, M. Segers, and D. Sluijsmans, “The use of self-, peer and co-assessment in higher education: a review,” Studies in Higher Education, vol. 24, no. 3, pp. 331–350, 1999.
  2. D. Boud, “Assessment and learning: complementary or contradictory,” in Assessment for Learning in Higher Education, P. Knight, Ed., RoutledgeFalmer, London, UK, 1995.
  3. F. Marton and R. Säljö, “Approaches to learning,” in The Experience of Learning: Implications for Teaching and Studying in Higher Education, F. Marton, D. Hounsell, and N. J. Entwistle, Eds., pp. 39–59, Scottish Academic Press, Edinburgh, UK, 2nd edition, 1997.
  4. K. Struyven, F. Dochy, and S. Janssens, “Students' perceptions about evaluation and assessment in higher education: a review,” Assessment and Evaluation in Higher Education, vol. 30, no. 4, pp. 325–341, 2005.
  5. J. Lowyck, J. Elen, and G. Clarebout, “Instructional conceptions: analysis from an instructional design perspective,” International Journal of Educational Research, vol. 41, no. 6, pp. 429–444, 2004.
  6. M. Segers, J. Nijhuis, and W. Gijselaers, “Redesigning a learning and assessment environment: the influence on students' perceptions of assessment demands and their learning strategies,” Studies in Educational Evaluation, vol. 32, no. 3, pp. 223–242, 2006.
  7. M. S. R. Segers and F. Dochy, “New assessment forms in problem-based learning: the value-added of the students' perspective,” Studies in Higher Education, vol. 26, no. 3, pp. 327–343, 2001.
  8. K. Scouller, “The influence of assessment method on students' learning approaches: multiple choice question examination versus assignment essay,” Higher Education, vol. 35, no. 4, pp. 453–472, 1998.
  9. G. H. F. Hirschfeld and G. T. L. Brown, “Students' conceptions of assessment: factorial and structural invariance of the SCoA across sex, age, and ethnicity,” European Journal of Psychological Assessment, vol. 25, no. 1, pp. 30–38, 2009.
  10. E. R. Peterson and S. E. Irving, “Secondary school students' conceptions of assessment and feedback,” Learning and Instruction, vol. 18, no. 3, pp. 238–250, 2008.
  11. H. De Neve and P. J. Janssen, Succesvol Examineren in Het Hoger Onderwijs, Acco, Leuven, Belgium, 1992.
  12. V. Gynnild, D. Myrhaug, and W. Lian, “External examiners in new roles: a case study at the Norwegian University of Science and Technology,” Quality in Higher Education, vol. 10, no. 3, pp. 243–252, 2004.
  13. A. Hannan and H. Silver, “On being an external examiner,” Studies in Higher Education, vol. 31, no. 1, pp. 57–69, 2006.
  14. B. Berg, Qualitative Research Methods for the Social Sciences, Pearson Education, Boston, Mass, USA, 2007.
  15. H. Edmunds, The Focus Group Research Handbook, Sage, Thousand Oaks, Calif, USA, 2000.
  16. D. W. Stewart and P. M. Shamdasani, Focus Groups: Theory and Practice, Sage, Newbury Park, Calif, USA, 1990.
  17. R. D. Strother, Voters’ Bias Shuts Door on Female Leaders, Minneapolis Star and Tribune, 1984.
  18. S. Vaughn, J. S. Schumm, and J. Sinagub, Focus Group Interviews in Education and Psychology, Sage, Thousand Oaks, Calif, USA, 1999.
  19. C. Erzberger and G. Prein, “Triangulation: validity and empirically-based hypothesis construction,” Quality and Quantity, vol. 31, no. 2, pp. 141–154, 1997.
  20. A. L. Strauss and J. M. Corbin, Basics of Qualitative Research: Grounded Theory, Sage, Thousand Oaks, Calif, USA, 1990.
  21. L. Volante, “Teaching to the test: what every educator and policy-maker should know,” Canadian Journal of Educational Administration and Policy, no. 35, 6 pages, 2004.
  22. D. Chetcuti, P. Murphy, and G. Grima, “The formative and summative uses of a professional development portfolio: a Maltese case study,” Assessment in Education: Principles, Policy and Practice, vol. 13, no. 1, pp. 97–112, 2006.
  23. B. Fong, The External Examiner Approach to Assessment, paper presented at the National Conference on Assessment in Higher Education, AAHE Assessment Forum, American Association for Higher Education, Washington, DC, USA, 1987.
  24. M. Barrow, “Assessment and student transformation: linking character and intellect,” Studies in Higher Education, vol. 31, no. 3, pp. 357–372, 2006.
  25. D. J. Brennan, “University student anonymity in the summative assessment of written work,” Higher Education Research and Development, vol. 27, no. 1, pp. 43–54, 2008.
  26. J. C. Cassady, “The impact of cognitive test anxiety on text comprehension and recall in the absence of external evaluative pressure,” Applied Cognitive Psychology, vol. 18, no. 3, pp. 311–325, 2004.
  27. R. Hoglund, “Part II: external evaluation can be helpful,” International Journal of Reality Therapy, vol. 24, no. 1, article 48, 2004.
  28. D. Baume, M. Yorke, and M. Coffey, “What is happening when we assess, and how can we use our understanding of this to improve assessment?” Assessment and Evaluation in Higher Education, vol. 29, no. 4, pp. 451–477, 2004.
  29. C. E. Rees and C. E. Sheard, “The reliability of assessment criteria for undergraduate medical students' communication skills portfolios: the Nottingham experience,” Medical Education, vol. 38, no. 2, pp. 138–144, 2004.

Copyright © 2012 Joke Coens et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
