Education Research International

Research Article | Open Access
Volume 2020 | Article ID 3454783 | https://doi.org/10.1155/2020/3454783

Verónica Martínez, María A. Mon, Marina Álvarez, Eva Fueyo, Alejandra Dobarro, "e-Self-Assessment as a Strategy to Improve the Learning Process at University", Education Research International, vol. 2020, Article ID 3454783, 9 pages, 2020.

e-Self-Assessment as a Strategy to Improve the Learning Process at University

Academic Editor: Christos Troussas
Received: 17 Nov 2019 | Revised: 31 Jan 2020 | Accepted: 20 Feb 2020 | Published: 29 Jun 2020

Abstract

Background. Self-assessment, or autonomous assessment, understood as a practice in which students judge their own achievements and reflect on them, is considered a key element in the assessment process in higher education. A common procedure in university environments is to apply information and communication technologies (ICT) to carry out self-assessment activities and record answers. The aim of this study is to analyse whether e-self-assessment, delivered as a complementary teaching activity through the virtual platform Moodle using tests with objective and short-answer questions, improves student performance. Method. The sample consisted of 406 students of two subjects in the degree courses for Primary and Early Childhood Education and for Teacher of Primary Education; they completed a 100-question self-assessment questionnaire on the content of the subjects on the Moodle virtual learning platform, together with a satisfaction scale. Results. The results confirm high participation in this innovative methodology; e-self-assessment improved student achievement and increased the degree of student satisfaction. Conclusions. E-self-assessment can help students take an active role in their learning process, increase their achievement, promote their self-directed learning, and develop metacognitive skills.

1. Introduction

Assessment is the final element in the teaching and learning process, one in which students can participate in three ways: self-assessment, peer assessment, and shared assessment or co-assessment. Furthermore, it may be considered an opportunity in itself to foster meaningful learning and to develop competencies in university students [1]. It is of such fundamental importance for university teaching that different studies have shown it determines students' learning outcomes more than the official syllabus does [2]. In recent years, the term "learning-oriented assessment" has been coined [3–5], bringing together three essential questions: (a) the development of assessment tasks for learning; (b) the involvement of the student in assessment; and (c) the offer of assessment results as a method of feedback [6].

Self-assessment, or independent assessment, one of the three forms of assessment in which students can participate, is considered an essential element of the assessment process. It is understood as a practice in which learners judge their own achievements with respect to a specific task while reflecting on the level of mastery they have reached in that area of learning [7–9]. Its benefits are such that we should move from examinations to assessment tasks; among these tasks, self-assessment becomes a teaching tool in itself, through which knowledge is acquired and learning is promoted, without teachers losing their central role in the teaching and learning process [10, 11].

Even though self-assessment brings together the three essential questions of learning-oriented assessment [12], and even though its advantages have been demonstrated, it is a methodology that has scarcely been used in innovative university teaching. Thus, in a meta-analysis covering the period from 1932 to 1988, only 48 research papers referred to Higher Education [13]. The same pattern continues to be observed in a number of later works [14, 15], which reflect that the use of different participative assessment methods at universities is scarce, between 2.7% and 8% [6], and which point out the necessity of establishing formative processes, for both professors and students, that build the knowledge needed to put these modalities into practice with the objective of promoting autonomous and strategic learning [15].

Over the last decade, student self-assessment has been gaining ground in university practice [16, 17] because of its close interrelationship with the promotion of autonomous learning: with the correct orientation, the teacher can train students to establish their learning objectives, to self-monitor, to self-correct, and, in general, to self-regulate their learning process [13, 18–20]. A methodological change is thus suggested in university teaching in which students should be as autonomous as possible in their learning and take responsibility for the organization and development of their academic work, with the university teacher acting as a facilitator of this process and helping them to construct their learning [21], as laid down in the context of the European Higher Education Area.

Furthermore, it has been demonstrated that self-assessment and peer assessment promote competencies such as the capacity for analysis, critical thinking, decision-making, and the acceptance of responsibilities [22]. Taking an active role in assessment implies the development of metacognitive abilities, which in turn results in the development of autonomy. According to Osses and Jaramillo [23], "it is possible to affirm that metacognition is a viable way to achieve fully autonomous development of students, this being reflected, among other aspects, in learning which transcends the scope of school learning and which is projected into student life as 'learning to learn.'"

Moreover, with respect to self-assessment, making students participants and protagonists in evaluative practices is a way of integrating assessment into the teaching/learning process. In this way, evaluation stops being something external, the last step in the process, and becomes something central that runs parallel with the entire teaching and learning process. This has been called sustainable assessment, and it should be considered an integral part of the curriculum, with the aim of creating effective lifelong learners and assessing the tasks they are going to face [24].

The concept of e-self-assessment may be defined as an electronic assessment process in which Information and Communications Technologies (ICT) are applied both to carry out self-assessment activities and to register the students' answers [16]. The introduction of ICT in the classroom is proving to be a strong ally for teachers in the teaching and learning process and, consequently, in assessment [25–28]. Along these lines, Wang and Kinuthia [29] suggest that incorporating this technology into the learning environment serves, among other things, to motivate students and to assess and value learning objectives. A clear example is the use of mobile phones in the classroom for learning purposes through Mobile Game-Based Learning in Higher Education settings, which has been observed to be a powerful e-learning tool for students to learn and advance their knowledge [30]. An exhaustive review of e-assessment in Higher Education using different assessment strategies can be found in Buzzetto-More and Alade [17]. In particular, e-assessment strategies offer students the opportunity to become part of an electronic learning community [31], which contributes both to making them more autonomous, developing the skills necessary to judge and manage their own learning, and to the construction of a more adequate and meaningful learning experience [27, 32].

The use of a virtual environment, such as the Moodle platform, to develop a system of self-assessment with tests containing objective and short-answer questions offers students the possibility of adapting their learning rhythms to the temporal and spatial flexibility of this type of assessment. Furthermore, together with immediate feedback on the answers, which acts as a motivating element that encourages student effort, self-assessment takes on the value of a metacognitive tool, given that it orients students in their activities [33, 34]. As Biggs [2] affirms, self-assessment, and in particular e-self-assessment, not only sharpens the learning of content but also gives rise to the learning of metacognitive processes of supervision, which are essential to professional and academic life. To this effect, a recent study by Ruiz et al. [35] reveals that students involved in e-assessment aimed at learning develop their basic competencies significantly more than students working under a traditional assessment system.

The student who learns to self-assess, or to e-self-assess, also learns to identify and express their needs; to set objectives and design action plans to achieve them; to identify resources; to value achievements; to increase motivation and confidence in their own abilities; and to develop critical thinking and the capacity for analysis [17, 24, 36], these being cross-curricular competencies included in Undergraduate and Master's degrees in our universities.

The interest in this type of innovative work lies in the improvement of student academic performance through the use of questionnaires as a tool for self-assessment in virtual environments. Through immediate feedback on the answers to the questions included in the questionnaire, students, as part of a co-productive process, can detect their specific learning difficulties as well as learn to self-assess, that is to say, to evaluate how they have overcome these difficulties and how they have modified their learning strategies, and to analyse the result of the assessment process and the quality of the knowledge acquired (metacognition) [12]. So far, the benefits of e-assessment have been studied: it does not add stress to the assessment process; it is useful, adequate, and accessible to university students; it improves reliability and learning expectations; it adds value to the learning process; and it facilitates learning by bridging the gap between the student's starting level and the goal level [24, 26–28, 37, 38]. However, it has not been studied whether e-self-assessment provides the same benefits and improves the teaching-learning process.

Therefore, the principal objective of this research is to analyse whether e-self-assessment through the virtual platform Moodle, as a complementary activity of course delivery, improves student performance and activates processes of metacognition in higher education settings.

Among the secondary objectives, the following may be highlighted: promoting autonomous work and the participation of students in their learning process; increasing collaboration among teachers through the joint development and application of an e-self-assessment tool; and introducing teaching innovation with innovative tools in the assessment of content.

2. Materials and Methods

2.1. Participants

The participants in this research were 406 students enrolled in two subjects: Foundations in Psychology for Attention to Diversity (FPAD) in the degree course for Primary and Early Childhood Education and Developmental Psychology (DP) in the degree course for Teacher of Primary Education. There were 314 students enrolled in the former and 92 in the latter. Furthermore, there were five professors included in the teaching group, four from the former subject and one from the latter subject.

Students who did not complete the self-assessment questionnaire were excluded from the total number of participants, as were those who completed it but did not sit for the exam. The final sample in this study therefore consists of 316 students across the two subjects.

2.2. Instruments

The self-assessment questionnaire consisted of 100 questions evaluating knowledge of the subject, of two types: 90 multiple-choice and 10 short-answer questions. The maximum obtainable score was ten marks, and five marks were needed to pass the test. Only one attempt was allowed per student.

This questionnaire was completed on the Moodle platform, with five questions per page, free browsing between pages, and randomly ordered answer options. Immediate feedback was given to the student.
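As a rough sketch of the grading logic described above (not the authors' actual Moodle configuration; the function name and data are hypothetical), each of the 100 questions is worth one point and the total is mapped onto a ten-mark scale, with five marks needed to pass:

```python
def grade_quiz(answers, key, pass_mark=5.0):
    """Map a 100-question quiz onto a 0-10 scale: each correct
    answer contributes equally, and 5.0 marks are needed to pass."""
    correct = sum(1 for q, a in answers.items() if key.get(q) == a)
    mark = correct * 10 / len(key)
    return mark, mark >= pass_mark

# Dummy data: a student answers 60 of the 100 questions correctly.
key = {f"q{i}": "a" for i in range(100)}
answers = {f"q{i}": ("a" if i < 60 else "b") for i in range(100)}
mark, passed = grade_quiz(answers, key)
# 60 correct out of 100 -> 6.0 marks, which is a pass
```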

The satisfaction scale consisted of 10 Likert-type questions assessing the student's level of satisfaction with regard to appropriateness, level of difficulty, etc. Each question offered four answer options: 0 (Totally Disagree), 1 (Disagree), 2 (Neither Agree Nor Disagree), and 3 (Agree). The questionnaire was made available through the platform Google Forms once the self-assessment questionnaire had been completed.
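For illustration, the per-item mean on such a 0-3 scale can be computed as follows (a sketch with invented responses and a shortened four-item version; the actual instrument had ten items):

```python
from statistics import mean

def item_means(responses):
    """Mean score per item across respondents, for a 0-3 Likert scale."""
    return [round(mean(item), 2) for item in zip(*responses)]

# Three hypothetical respondents answering a four-item version.
responses = [
    [3, 2, 3, 1],
    [2, 3, 3, 2],
    [3, 3, 2, 3],
]
means = item_means(responses)
# -> [2.67, 2.67, 2.67, 2.0]
```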

2.3. Procedure

A first meeting took place among the professors to determine the content and the number of questions to be included in the self-assessment questionnaire. It was decided to create a definitive bank of 100 questions, for which each of the teachers of the subject FPAD proposed an initial list. Likewise, the numbers of multiple-choice and "fill in the blank" questions were established, with the majority being multiple choice with four alternative answers, given that the final exam for the year follows this format. This decision was taken for both subjects, as this is what is outlined in their teaching guides. It was also determined that the questionnaire would be visible to the students 15 days before the date of the exam and would close one day before the exam, to prevent students from completing it without having studied beforehand. The access dates for the questionnaire were conditioned by the exam dates, since these vary between groups. In addition, parameters were established for the timing and management of the questionnaire (scoring, number of attempts, and type of feedback) for both subjects. Regarding scoring, one point was given for a correct answer and zero points for an incorrect one. With regard to time, students were given a maximum of 120 minutes to answer the questionnaire. They could answer only once, so if a student responded incorrectly, they would have to work out the correct option, which implied searching for information. On finishing the questionnaire, the system gave the student a final grade on their performance.

In Figure 1, an example is shown of two questions of different types that formed part of the questionnaire for the subject FPAD.

In the case of the subject FPAD, in order to reach an agreement on the content of the questions and answers, an initial bank of 200 questions was drawn up. The procedure used to reach this agreement was as follows: a spreadsheet was created listing the number of each question, and each professor was required to mark with an X those questions which they considered should form part of the final questionnaire. The criteria for including a question were that its content would be covered in class and that its formulation was clear and coherent. The professors carried out this task individually and, once finished, sent the spreadsheet to the person responsible for the project, who then merged the four spreadsheets and selected, from the total, the 100 questions on which all had agreed. In this way, a consensus was reached on the definitive self-assessment questionnaire.
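The agreement step described above can be sketched as a set intersection over each professor's marked question numbers (a simplified illustration of the spreadsheet-merging logic; the data here are invented):

```python
def consensus_questions(markings):
    """Keep only the question numbers that every professor marked."""
    return sorted(set.intersection(*map(set, markings)))

# Each set holds the question numbers one professor marked with an X.
prof_selections = [
    {1, 2, 3, 5, 8},
    {1, 2, 3, 5, 9},
    {1, 2, 3, 5, 8},
    {1, 2, 3, 5, 7},
]
final_bank = consensus_questions(prof_selections)
# -> [1, 2, 3, 5]: only the questions all four professors agreed on
```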

The questions from this bank were grouped and organized according to the topics of the subject. In the questionnaire, however, they were presented randomly, so that even if two students completed the questionnaire at the same time, the order in which the questions appeared differed.
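This randomised presentation can be pictured as a per-attempt shuffle of the question bank split into pages of five (a sketch of behaviour Moodle provides natively; the helper name and seeds are invented):

```python
import random

def build_attempt(question_ids, per_page=5, seed=None):
    """Shuffle the question bank into a student-specific order and
    split it into pages of five, mimicking the quiz layout."""
    rng = random.Random(seed)
    order = list(question_ids)
    rng.shuffle(order)
    return [order[i:i + per_page] for i in range(0, len(order), per_page)]

bank = [f"q{i}" for i in range(100)]
pages_a = build_attempt(bank, seed=1)   # one student's attempt
pages_b = build_attempt(bank, seed=2)   # another student's attempt
# Both attempts cover the same 100 questions, in different orders.
```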

Following this, each professor created the definitive questionnaire on the subject's Moodle platform and set the parameters, that is to say, the timing, the scoring, the number of attempts, and the type of feedback. For the subject DP, the professor followed the same procedure as for the FPAD subject.

Furthermore, two of the professors drew up a scale of satisfaction with this methodology, consisting of ten Likert-type questions. Once this was concluded, it was sent by e-mail to all the other colleagues so that they could make suggestions and appraisals. Once the whole group had approved the scale, one of the professors took on the responsibility of building it on Google Forms and of sending the corresponding link to the other professors involved in the project so that they could upload it to the Moodle platform without yet making it visible to students. As with the self-assessment questionnaire, the satisfaction scale was made visible to the students 15 days before the date of the exam but, in this case, was left open for a few days after the exam for students who had not yet completed it. The link to the questionnaire remains open and is as follows: https://docs.google.com/forms/d/e/1FAIpQLSejczvip_-hBRh1ldZ9UpYD7MZU4wC3ZYNmbpPzsMrsqeqTAg/viewform.

2.4. Data Analysis

The design of this research is quasi-experimental with a single group. Statistical analyses were carried out with SPSS version 20.0 for Mac and with GPower 3.1. With the former, the degree of correlation between the score obtained on the questionnaire and the score on the subject exam was calculated. In addition, Student's t-test for dependent samples was used to determine whether there were statistically significant differences between the two variables, and a factorial analysis of variance (ANOVA) was used to establish possible differences in the evolution of the scores. A post hoc calculation of Cohen's effect size (d) was also carried out with GPower 3.1 to evaluate the effectiveness of the innovation proposal and to compensate for the lack of a control group.
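The core calculations behind these analyses can be sketched in plain Python (a toy illustration with invented scores, not the study's data; the study itself used SPSS and GPower, and note that paired Cohen's d can be defined in several ways, here as mean difference over the SD of the differences):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def paired_t(x, y):
    """Student's t statistic for dependent (paired) samples."""
    diffs = [b - a for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

def cohens_d_paired(x, y):
    """Cohen's d for paired samples: mean difference / SD of differences."""
    diffs = [b - a for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)

quiz = [4, 5, 6, 5]   # invented questionnaire marks
exam = [6, 6, 8, 6]   # invented exam marks for the same students
t = paired_t(quiz, exam)         # ~5.196
d = cohens_d_paired(quiz, exam)  # ~2.598
```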

3. Results

The results of this research correspond, firstly, to the respondents to the self-assessment questionnaire and, secondly, to the scale of satisfaction.

3.1. Self-Assessment Questionnaire

Table 1 shows the total percentage of students who responded to the self-assessment questionnaire, differentiated by subject and by degree program, with the latter divided into groups. It also shows the mean response percentage (M). The total participation percentage is 67.24%, varying slightly by subject and being somewhat higher in DP (69.14% in DP versus 65.68% in FPAD). There are also differences by degree course: participation in the Degree in Primary Education is higher than in the Early Childhood Education Degree (68.19% versus 64.12%). Within the subject FPAD, there are also differences between groups.


Group                 FPAD % response   FPAD % no response   DP % response   DP % no response   Mean (%)
Primary A             59.77             40.23                68.13           31.87              68.19
Primary B             74.72             25.28                70.15           29.85
Early Childhood A     73.07             26.92                                                   64.12
Early Childhood B     55.17             44.82
Mean                  65.68                                  69.14

Figure 2 shows the percentage of students who passed both the exam and the questionnaire. This percentage is obtained by adding the number of passes, distinctions, and high distinctions from both assessments.

Of the total sample, 1.91% of the students did not complete the questionnaire and 1.72% did not sit for the exam. As can be seen in Figure 2, the percentage of students who passed both the questionnaire and the exam is higher than of those who did not pass, and the percentage exceeding a passing grade is greater on the exam (almost 76%) than on the questionnaire (almost 73%). A positive statistical correlation (r = 0.343) was found between the scores of the subjects who passed both the questionnaire and the exam (questionnaire: M = 4.77, SD = 2.738; exam: M = 6.24, SD = 1.714). Statistically significant differences in means were also found between the two variables (t = −6.866), with a mean effect size of d = 0.474 and an observed power of 0.952.

Table 2 shows the percentage of students according to scores, both on the exam and the questionnaire.


                High distinction   Distinction   Pass     Fail
Questionnaire   1.012              25.1          47.684   24.292
Exam            4.498              27.052        44.434   22.3

Table 2 also shows that the number of Distinctions and High Distinctions increased significantly in the final examination while, at the same time, the number of Passes and Fails diminished. To confirm whether there were statistically significant differences between the scores achieved on the questionnaire and on the exam across the different grades, a factorial analysis of variance (ANOVA) was carried out taking the grade obtained in the exam as the dependent variable. Statistically significant differences were found (F(3,312) = 18.468). The Scheffé post hoc test showed that statistically significant differences between questionnaire and exam were maintained for the grade pairs Fail and Distinction, Fail and High Distinction, Pass and Distinction, and Pass and High Distinction.

Figure 3 shows these changes in tendency in the different grades obtained by the students.

One of the most notable changes is that 70% of the subjects who failed the questionnaire passed the exam, and 55% of those who obtained a passing grade on the questionnaire reached a Distinction in the exam. It should also be mentioned that a very small percentage of subjects (4.9%) who obtained a Distinction or High Distinction on the questionnaire failed the exam.

3.2. Scale of Satisfaction

After responding to the questionnaire, students were asked to complete a scale of satisfaction with regard to the evaluation of the self-assessment methodology.

Figure 4 shows the mean score for all the students who responded to this scale of satisfaction.

As can be observed in Figure 4, the mean scores from students on the scale of satisfaction are high, all items receiving a score above two. The two items with the lowest satisfaction are questions four and nine; both obtain a lower mean than any other item on the scale, given that two subjects indicated disagreement that the guidelines had helped them to control their anxiety.

4. Discussion

One of the greatest challenges for professors in the process of European Convergence into the European Higher Education Area has been, and is, a change in certain teaching habits and routines. An attempt has been made to encourage a more significant process of change, among other things, in methodological strategies of assessment. As opposed to the traditional paradigm where the professor was responsible for giving master classes and assessing whether the students had acquired the concepts and contents explained in expository classes (assessment of learning), the focus is now centered on the students who must assume responsibility for organizing and developing their academic work, as well as evaluating their achievements, in short, by developing their autonomous learning [3, 19, 24, 30, 38, 39].

Of the three ways in which students can participate in their assessment process, this research focuses on self-assessment and, specifically, on autonomous e-assessment, with the incorporation of ICT into the process. Although a great deal of research has shown the benefits of self-assessment, given that it allows students to judge their own progress on a given task and to reflect on the level of mastery achieved, that is to say, to self-regulate their own learning process [7, 8, 10, 11, 13, 16–18, 26–28], few studies incorporate ICT methodology, such as Moodle quizzes or grading scales, as a criterion for developing assessment judgements of students' own performance [27, 30, 32, 34, 39]. For this reason, an innovative teaching proposal was carried out in the assessment of content, creating a self-assessment module in the two subjects from the degree courses in Primary and Early Childhood Education, using the Moodle platform to foster autonomous learning with the aim of improving student performance and increasing the quality of teaching.

With regard to the general objective put forward, the results indicate that e-self-assessment has improved the students' general performance, taking into account their scores on the self-assessment questionnaire and on the exam, where the correlation has a high level of significance. Furthermore, students improved their numerical scores on the exam with respect to their scores on the questionnaire. These results are in line with Ibabe and Jaureguizar (2007) [39], who obtained a statistically significant correlation between self-assessment and the exam scores of 82 participants. In addition, that investigation [39] and others [26] found that this is a tool which adequately predicts the final grade in the subject. Thus, it is considered that e-self-assessment could favour the development of critical thinking and lead to self-regulation of learning [13, 16, 18–20]. Therefore, e-self-assessment could be considered a dimension of sustainable assessment, as it answers to some of its key features [24]. This would suggest the need to promote both self-assessment [40] and e-self-assessment in Higher Education, since they may contribute to forming individuals who can act in the future, once they have finished their training in the formal education system, as active and autonomous learners.

Regarding the secondary objectives, the study shows that e-self-assessment encouraged autonomous work in students and their participation. The self-assessment questionnaire was designed so that students could respond only once, with both immediate feedback and feedforward provided. If students responded incorrectly, they had to work out the correct option, which implied finding information (autonomous work). The improved scores obtained on the final test relative to the questionnaire reflect that individual effort. According to Knight [40], feedback, but above all feedforward, has great power to stimulate learning. While feedback encompasses comments on the quality of the task carried out, feedforward includes information meant to help students complete similar tasks more adequately in the future, as part of sustainable assessment [24]. Thus, e-self-assessment can be considered a reflective strategy in the learning process and, like self-assessment, can help bridge the gap between assessment and learning to ensure long-term learning after university studies are completed [24].

Therefore, it appears that e-self-assessment could be considered an educational tool to encourage autonomy in the teaching/learning process and to inform students of their performance throughout the learning process. In this way, it could improve academic performance [7–9, 24], and it increases the types of interaction (professor-student; student-student).

A high percentage of student participation has been achieved, which constitutes a strong point of this study. Almost 70% of enrolled students completed the questionnaire, which reveals an interest in testing their knowledge before the final exam and also puts their processes of metacognition and autonomy to use [12], especially given that they knew none of the questionnaire questions would be repeated in the exam. This percentage is considered high compared with the results obtained by Rodríguez et al. [41], where the final percentage was 58.5%. However, future research should aim for a higher level of participation.

One possible explanation for student participation not reaching 100% is that, on the one hand, some students did not have a good understanding of the benefits of e-self-assessment for their learning process (some said that they did not complete the questionnaire because they considered it a waste of time) or, on the other hand, that the possibility of using this tool was not sufficiently disseminated, even though the professors had informed students in the classroom of its existence and advantages. In this sense, Gil and Padilla [22], in their list of recommendations for adopting self-assessment and e-self-assessment practices in higher education, highlight that this practice may not be successful if students do not comprehend the criteria and procedures of the test, if the important role it acquires in learning is not made clear, and if student motivation is not maintained through feedback.

On future occasions, it will be necessary to take more care with these aspects and to emphasize the use of this methodology as a form of active participation in the teaching/learning process. One possible way of increasing student participation would be to prepare questionnaires for every two or three topics covered in the classroom rather than a single final questionnaire. This measure would raise awareness of the e-self-assessment tool and encourage students to use it, which would in turn mean greater involvement on their part and would favour dialogue with, and questions directed at, their professors. Boud and Falchikov [4] consider that more active involvement of students, not only in the processes and activities of teaching and learning but also in their own assessment processes, is one of the fundamental directions in which innovations are being introduced in the assessment of university learning.

Another secondary objective of this research was to increase coordination and collaboration among teachers, given that, according to Krichesky and Murillo [42], collaboration is very difficult to achieve in the Higher Education environment. The characteristics of this research required several meetings, a high number of e-mail exchanges, and countless phone calls, both to refine the questionnaire items and to design the satisfaction scale. Without this study, contact among the teachers would have been limited or even nonexistent. In addition, the teaching group commented that this is an improvement strategy that may have had an impact on teaching quality and that it was considered motivating and attractive for everyone. This investigation supports the idea that teacher collaboration encourages processes of innovation while improving student performance [42, 43].

With regard to the last secondary objective, promoting teaching innovation with innovative tools, this was partially achieved given the high participation of students. The scores on the satisfaction scale show that students found the educational tool highly satisfactory: the mean of the 10 items is high, which indicates that students considered this methodology useful. The data suggest that students understood e-self-assessment as forming part of the learning process and that it led them to the construction of knowledge in a virtual environment [6, 21]. All of this must be understood as a strength of this methodology. The fact that it offers immediate feedback on their responses also scored very positively, an improvement over other studies in which students complained about the poor quality of the feedback given by the questionnaires [26]. This is an essential question in the new understanding of assessment as a learning process [6, 44], which leads students to reflect on learning, to make judgements, and to direct their learning more autonomously. However, the score obtained on question four (Has this questionnaire been useful in determining the amount of knowledge you have of the subject?) is the lowest on the entire scale, which suggests that some students made an external causal attribution for their grade on the questionnaire and did not take into account everything they knew about the assessed subjects. Another possible explanation is that they completed the questionnaire before studying the subject, just to try their luck.

5. Conclusions

In conclusion, self-assessment through virtual environments, or e-self-assessment, is not only possible but also recommended and beneficial: students' academic performance improves, and it activates metacognitive processes through the use of new technologies. In addition, in this case it indirectly encouraged collaboration among professors, which constitutes a tool for improving teaching, and it increased students' satisfaction with this innovative methodology. Therefore, e-self-assessment, as a formative dimension of assessment, acquires a strong value in the teaching/learning process, and the use of questionnaires as self-assessment tools in virtual environments (e-self-assessment) is confirmed to be effective in improving academic performance.

Data Availability

The survey data used to support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Plan of Support for the Dissemination and Promotion of the Teaching Innovation Activities of the University of Oviedo.

References

  1. M. I. Bordas and F. A. Cabrera, “Estrategias de evaluación de los aprendizajes centrados en el proceso,” Revista Española de Pedagogía, vol. 21, no. 8, pp. 25–48, 2001.
  2. J. Biggs, Calidad del Aprendizaje Universitario, Narcea, Madrid, Spain, 2005.
  3. S. Bloxham and P. Boyd, Developing Effective Assessment in Higher Education. A Practical Guide, Open University Press; McGraw Hill Education, New York, NY, USA, 2007.
  4. D. Boud and N. Falchikov, “Aligning assessment with long‐term learning,” Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 399–413, 2006.
  5. D. Carless, G. Joughin, and M. M. Mok, “Learning-oriented assessment: principles and practice,” Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 395–398, 2006.
  6. I. Álvarez, “Evaluación del aprendizaje en la universidad: una mirada retrospectiva y prospectiva desde la divulgación científica,” Revista Electrónica de Investigación Psicoeducativa, vol. 14, pp. 235–272, 2008.
  7. R. Bourke and M. Mentis, “Self-assessment as a process for inclusion,” International Journal of Inclusive Education, vol. 17, no. 8, pp. 854–867, 2013.
  8. F. Dochy, M. Segers, and D. Sluijsmans, “The use of self-, peer and co-assessment in higher education: a review,” Studies in Higher Education, vol. 24, no. 3, pp. 331–350, 1999.
  9. A. Ndoye, “Peer/self assessment and student learning,” International Journal of Teaching and Learning in Higher Education, vol. 29, no. 2, pp. 255–269, 2017.
  10. G. Gibbs and C. Simpson, “Conditions under which assessment supports student learning,” Learning and Teaching in Higher Education, vol. 1, pp. 3–31, 2004.
  11. I. Navarro and C. González, “La autoevaluación y la evaluación entre iguales como estrategia para el desarrollo de competencias profesionales: una experiencia docente en el grado de maestro,” Revista de Docencia Universitaria, vol. 8, no. 1, pp. 187–200, 2010.
  12. A. Fraile, “La autoevaluación: una estrategia docente para el cambio de valores educativos en el aula,” Ser Corporal, vol. 3, pp. 6–18, 2010.
  13. D. Boud and N. Falchikov, “Quantitative studies of student self-assessment in higher education: a critical analysis of findings,” Higher Education, vol. 18, no. 5, pp. 529–549, 1989.
  14. V. Álvarez, M. T. Padilla, J. Rodríguez, J. J. Torres, and M. Suárez, “Análisis de la participación del alumnado universitario en la evaluación de su aprendizaje,” Revista Española de Pedagogía, vol. 250, pp. 401–426, 2011.
  15. M. S. Ibarra and G. Rodríguez, “Modalidades participativas de evaluación. Un análisis de la percepción del profesorado y de los estudiantes universitarios,” Revista de Investigación Educativa, vol. 32, no. 2, pp. 339–361, 2014.
  16. G. Rodríguez, M. S. Ibarra, and M. A. Gómez, “e-Autoevaluación en la universidad: un reto para profesores y estudiantes,” Revista de Educación, vol. 356, pp. 401–430, 2011.
  17. N. A. Buzetto-More and A. J. Alade, “Best practices in e-Assessment,” Journal of Information Technology Education, vol. 5, pp. 251–269, 2006.
  18. A. Bretones, “Participación del alumnado de Educación Superior en su evaluación,” Revista de Educación, vol. 347, pp. 181–202, 2008.
  19. D. Nicol, “Assessment for learner self‐regulation: enhancing achievement in the first year using learning technologies,” Assessment & Evaluation in Higher Education, vol. 34, no. 3, pp. 335–352, 2009.
  20. C. Trevitt, E. Brenan, and C. Stocks, “Evaluación y aprendizaje: ¿es ya el momento de replantearse las actividades del alumnado y los roles académicos?” Revista de Investigación Educativa, vol. 30, no. 2, pp. 253–267, 2012.
  21. L. Vygotsky, Pensamiento y Lenguaje, Paidós, Madrid, Spain, 1978.
  22. J. Gil and M. T. Padilla, “La participación del alumnado universitario en la evaluación del aprendizaje,” Educación XX1, vol. 12, pp. 43–65, 2009.
  23. S. Osses and S. Jaramillo, “Metacognición: un camino para aprender a aprender,” Estudios Pedagógicos, vol. 34, pp. 187–197, 2008.
  24. D. Boud and R. Soler, “Sustainable assessment revisited,” Assessment & Evaluation in Higher Education, vol. 41, no. 3, pp. 400–413, 2016.
  25. A. Villa, “El proceso de Convergencia Europeo y el papel del profesorado,” Foro de Educación, vol. 7-8, pp. 103–117, 2006.
  26. M. Blanco and M. Ginovart, “On how Moodle quizzes can contribute to the formative e-assessment of first-year engineering students in mathematics courses,” Universities and Knowledge Society Journal (RUSC), vol. 9, no. 1, pp. 354–370, 2012.
  27. D. Boud, R. Lawson, and D. G. Thompson, “The calibration of student judgement through self-assessment: disruptive effects of assessment patterns,” Higher Education Research & Development, vol. 34, no. 1, pp. 45–59, 2015.
  28. M. Ferrão, “E-assessment within the Bologna paradigm: evidence from Portugal,” Assessment & Evaluation in Higher Education, vol. 35, no. 7, pp. 819–830, 2010.
  29. C. X. Wang and W. Kinuthia, “Defining technology enhanced learning environments for pre-service teachers,” Proceedings of Society for Information Technology and Teacher Education International Conference, no. 1, pp. 2724–2727.
  30. C. Troussas, A. Krouska, and C. Sgouropoulou, “Collaboration and fuzzy-modeled personalization for mobile game-based learning in higher education,” Computers & Education, vol. 144, p. 103698, 2020.
  31. M. Keppell, E. Au, A. Ma, and C. Chan, “Peer learning and learning‐oriented assessment in technology‐enhanced environments,” Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 453–464, 2006.
  32. M. Lafuente, A. Remesal, and I. M. Álvarez Valdivia, “Assisting learning in e-assessment: a closer look at educational supports,” Assessment & Evaluation in Higher Education, vol. 39, no. 4, pp. 443–460, 2014.
  33. A. García-Beltrán, R. Martínez, J. A. Jaén, and S. Tapia, “La autoevaluación como actividad docente en entornos virtuales de aprendizaje/enseñanza,” RED. Revista de Educación a Distancia, vol. 50, no. M6, 2006.
  34. M. Peat and S. Franklin, “Supporting student learning: the use of computer-based formative assessment modules,” British Journal of Educational Technology, vol. 33, no. 5, pp. 515–523, 2002.
  35. M. Á. Gómez-Ruiz, G. Rodríguez-Gómez, and M. S. Ibarra-Sáiz, “Desarrollo de las competencias básicas de los estudiantes de Educación Superior mediante la e-Evaluación orientada al aprendizaje,” RELIEVE - Revista Electrónica de Investigación y Evaluación Educativa, vol. 19, no. 1, pp. 1–17, 2013.
  36. M. T. Padilla and J. Gil, “La evaluación orientada al aprendizaje en la Educación Superior. Condiciones y estrategias para su aplicación en la enseñanza universitaria,” Revista Española de Pedagogía, vol. 241, pp. 467–486, 2008.
  37. J. Dermo, “e-Assessment and the student learning experience: a survey of student perceptions of e-assessment,” British Journal of Educational Technology, vol. 40, no. 2, pp. 203–214, 2009.
  38. B. Hedin and V. Kann, “Improving study skills by combining a study skill module and repeated reflection seminars,” Education Research International, vol. 2019, Article ID 8463169, 5 pages, 2019.
  39. I. Ibabe and J. Jaureguizar, “Auto-evaluación a través de internet: variables metacognitivas y rendimiento académico,” Revista Latinoamericana de Tecnología Educativa, vol. 6, no. 2, pp. 59–75, 2007.
  40. P. Knight, “The local practices of assessment,” Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 435–452, 2006.
  41. G. Rodríguez, M. S. Ibarra, J. M. Dodero et al., “Developing the e-Learning-oriented e-Assessment,” in Actas de la V International Conference on Multimedia and Information and Communication Technologies in Education, pp. 515–519, Formatex, Lisboa, Portugal, 2009.
  42. G. J. Krichesky and F. J. Murillo, “La colaboración docente como factor de aprendizaje y promotor de mejora. Un estudio de casos,” Educación XX1, vol. 21, no. 1, pp. 135–156, 2018.
  43. J. Sebastian and E. Allensworth, “The influence of principal leadership on classroom instruction and student learning,” Educational Administration Quarterly, vol. 48, no. 4, pp. 626–663, 2012.
  44. K. Tan, “Conceptions of self-assessment: what is needed for long-term learning?” in Rethinking Assessment in Higher Education: Learning for the Longer Term, D. Boud and N. Falchikov, Eds., pp. 114–127, Routledge, Oxford, UK, 2008.

Copyright © 2020 Verónica Martínez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
