Amir Mohamed Talib, Fahad Omar Alomary, Hanan Fouad Alwadi, "Assessment of Student Performance for Course Examination Using Rasch Measurement Model: A Case Study of Information Technology Fundamentals Course", Education Research International, vol. 2018, Article ID 8719012, 8 pages, 2018.

Assessment of Student Performance for Course Examination Using Rasch Measurement Model: A Case Study of Information Technology Fundamentals Course

Academic Editor: Gwo-Jen Hwang
Received: 27 Mar 2017
Accepted: 11 Dec 2017
Published: 19 Feb 2018


This paper describes a measurement model used to measure student performance in the final examination of the Information Technology (IT) Fundamentals (IT280) course in the Information Technology (IT) Department, College of Computer & Information Sciences (CCIS), Al-Imam Mohammad Ibn Saud Islamic University (IMSIU). The assessment model is built from the final exam mark entries of the second-year IT students, which are compiled and tabulated for evaluation using the Rasch Measurement Model, and it can be used to measure the students' performance on the final examination of the course. A study of 150 second-year students (male = 52; female = 98) was conducted to measure students' knowledge and understanding of the IT280 course according to three levels of Bloom's Taxonomy. The results show that the students can be categorized as poor (10%), moderate (42%), good (18%), and successful (24%) in achieving Level 3 of Bloom's Taxonomy. This study shows that the students' performance on the set of IT280 final exam questions was comparatively good. The results generated from this study can be used to guide appropriate improvements to the teaching method and the quality of the questions prepared.

1. Introduction

Tests and examinations are part of the evaluation and assessment carried out to fulfill academic requirements [1]. They are systematic methods of assessing individual changes in behavior related to effective teaching and learning activities [2]. A test question is an assessment tool used to gather information on the cognitive, psychomotor, and affective achievement of a student [3]. Tests and assessments are implemented to fulfill their important purpose of reflecting students' achievement and differentiating proficient students from amateurs. This is useful for classifying students according to their skill and capability.

An examination item is an important instrument for reflecting students' achievement and differentiating proficient students from amateurs. The scarcity of guidelines on testing the reliability and validity of examination items needs to be addressed to ensure a systematic method of assessing students' ability. This research is carried out to achieve two objectives: first, to propose a systematic procedure to measure the validity and reliability of the instrument in assessing students' ability, and second, to discriminate between proficient and amateur students according to their ability and to determine item difficulty using the Rasch assessment model.

Measurement has been grossly misunderstood and overlooked in many situations, especially in the field of social science. Many researchers in the social sciences are frustrated when existing instruments are not well adapted to the task, because they then cannot expect sensitive, accurate, or valid findings [4]. However, the modern measurement method as practiced using item response theory, with a focus on the Rasch Measurement Model, provides the social sciences with the kind of measurement that characterizes measurement in the natural sciences, that is, the field of metrology.

The basics of measurement require an instrument fit for its purpose, with a specific unit of an agreed standard amount. An instrument must have a properly constructed linear scale, which can be set to zero and duly calibrated. A valid instrument can then be replicated for use independent of the subject; measurements taken with it are therefore reliable data for meaningful analysis and examination to generate useful information [5]. This information is of utmost importance as a key ingredient in decision making.

Many higher education institutions in the Kingdom of Saudi Arabia (KSA) have implemented outcome-based education (OBE). It is one of the important steps designed to elevate the level of quality and excellence in institutions of higher education [6].

Most higher education institutions in KSA are nowadays moving towards compliance with the requirements of the American Accreditation Board for Engineering and Technology (ABET), which promote the OBE learning process. The OBE approach must be constantly monitored, assessed, and measured for the university to compete effectively and achieve excellent performance. OBE calls for the assessment of the course learning outcomes (CLOs) that have been specified in each course specification. CLOs are evaluated primarily based on the students' overall performance, which gives an indication of their learning achievements [7].

Measurement of students' performance has been based on their overall performance in carrying out tasks such as quizzes, assignments, midterm examinations, projects, and final exams. Evaluation and measurement of this performance output give an indication of the fulfillment of the CLOs for every course and may be used as guidance for academics in determining appropriate improvements of the teaching method as well as the quality of the questions prepared. Hence, a good measurement method is vital in order to measure and predict students' future performance, and this would also help in identifying those students who are likely to fail.

The purpose of this study is to measure the students' achievement in the IT Fundamentals (IT280) course in the Information Technology (IT) Department, College of Computer and Information Sciences (CCIS) at Al Imam Mohammad Ibn Saud Islamic University (IMSIU). The course is one of the core subjects that must be completed by IT students before they can graduate. The course is at the introductory level and offers basic skills for the subsequent courses. It offers a top-level view of the discipline of IT, describes how it relates to other computing disciplines, and begins to instill an IT mindset. The aim is to help students understand the various contexts in which IT is used and the challenges inherent in the diffusion of innovative technology. The main objectives of this course are to explain the terms and concepts of IT (personal PC, software, hardware, security, networking, Internet/web, and applications); acquire basic skills and be able to use the main PC applications; understand the basic concepts and terminology of IT and be able to define them; select and judge the usage of IT products and services; explore and evaluate IT career opportunities; use Internet/web services as a resource for learning and discovery; create useful end products for themselves in IT areas of interest to explore major, career, skills, interests, and talents; and increase the ability to learn and explore new information technologies with confidence and to identify issues related to information security. The measurement is based on the final examination questions, and the Rasch Measurement Model is used to analyze the raw marks.

The Rasch Measurement Model is a probabilistic model that considers two aspects: (i) the difficulty of the items/questions and (ii) the ability of the respondents/students to answer the items [8]. The basic principle underlying the Rasch Measurement Model is that the probability of a student successfully answering a particular question is governed by the difference between the question's difficulty and the student's ability [8–11]. The logic underlying this principle is that every student has a higher probability of answering easier questions correctly and a lower probability of answering more difficult questions correctly [9].
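This principle takes the standard logistic form P = exp(β − δ)/(1 + exp(β − δ)), where β is the student's ability and δ the question's difficulty, both on the logit scale. A minimal sketch of this relation (the numeric inputs below are illustrative, not values estimated in this study):

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability that a student with the given ability (in logits)
    answers an item of the given difficulty (in logits) correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the probability is exactly 0.5;
# it rises above 0.5 for easier items and falls below for harder ones.
print(rasch_probability(0.0, 0.0))    # 0.5
print(rasch_probability(0.68, 0.0))   # above 0.5: item easier than the student's ability
print(rasch_probability(0.68, 2.40))  # below 0.5: item harder than the student's ability
```

The difference β − δ is the only quantity that matters, which is what makes the model's measurement scale linear in logits.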

2. Methodology

The data were obtained from the final examination of the IT280 course, which was conducted for the second-year Information Technology students at CCIS, IMSIU in the second semester of 2014/2015–2015/2016. This study covered all 150 students (male = 52; female = 98) who had registered for the course. The final examination consists of 35 questions divided into eight parts: Part A, Part B, Part C, Part D, Part E, Part F, Part G, and Part H. Students are required to answer all the questions. The IT280 course was chosen because it is a compulsory and core course for all the following courses within the IT curriculum in the department. The IT curriculum has a number of tracks, and the IT280 course assists the students in identifying which track to choose. Besides, the IT280 course introduces the techniques of the compulsory training program, which students may join at the end of their third year. Therefore, students need to pass this course prior to the training.

The final exam consists of seven main questions, with 35 subquestions in total, all of which are required to be answered; they cover the eight topics of the IT280 course, as tabulated in Table 1.

Topic | Topics covered by IT280 course

Topic A | IT terminologies
Topic B | IT systems, IT applications, and science and technology
Topic C | IT products and services
Topic D | Internet/web services
Topic E | IT professions and careers
Topic F | Hypertext Markup Language (HTML)
Topic G | IT ethics and responsibilities
Topic H | Concepts of information security

The course learning outcomes for the learning topics for IT280 expected for the students to achieve are tabulated in Table 2.

Number | Course learning outcomes (CLOs)

CLO-1 | Able to understand the concepts and terms of IT (personal PC, software, hardware, security, networking, Internet/web, and applications)
CLO-2 | Able to acquire the basic skills and be able to use the main PC applications
CLO-3 | Able to understand basic concepts and terminology of IT and be able to define them
CLO-4 | Able to select and judge the usage of IT services and products
CLO-5 | Able to explore and evaluate IT career opportunities
CLO-6 | Able to use Internet/web services as a resource for learning and discovery
CLO-7 | Able to create useful end products for themselves in IT areas of interest to explore major, career, skills, interests, and talents
CLO-8 | Able to increase the ability to learn and explore new IT with confidence and be able to identify issues related to information security

The questions are entered by entry number as tabulated in Table 3. Each item is labeled with its question number and Bloom's Taxonomy level of learning; the students are expected to develop three levels of Bloom's Taxonomy, namely, remembering/understanding (1), applying/analyzing (2), and evaluating/creating (3). Thus, entry item number 1 is coded as A01_1 (as shown in Table 3).

Question | Subquestion | Topic | Entry number | CLOs

1 | 1 | Topic A | A01_1 | CLO-1
  | 2 | Topic A | A02_1 | CLO-3 and CLO-7
  | 3 | Topic A | A03_1 | CLO-1
  | 4 | Topic D | D04_1 | CLO-6
  | 5 | Topic A | A05_1 | CLO-3
  | 6 | Topic G | G06_1 | CLO-8
  | 7 | Topic A | A07_1 | CLO-1
  | 8 | Topic D | D08_1 | CLO-6
  | 9 | Topic A | A09_1 | CLO-3
  | 10 | Topic A | A10_1 | CLO-3
  | 11 | Topic C | C11_1 | CLO-3
  | 12 | Topic C | C12_1 | CLO-7
  | 13 | Topic C | C13_1 | CLO-4
  | 14 | Topic A | A14_1 | CLO-3
  | 15 | Topic H | H15_1 | CLO-8
  | 16 | Topic A | A16_1 | CLO-3
  | 17 | Topic A | A17_1 | CLO-1
  | 18 | Topic H | H18_1 | CLO-3
  | 19 | Topic F | F19_1 | CLO-3
  | 20 | Topic C | C20_1 | CLO-3
2 | 11 | Topic E | E11_1 | CLO-5
  | 12 | Topic D | D12_1 | CLO-6
  | 13 | Topic C | C13_1 | CLO-7
  | 14 | Topic H | H14_1 | CLO-8
  | 15 | Topic B | B15_1 | CLO-8
  | 16 | Topic C | C16_1 | CLO-4
  | 17 | Topic D | D17_1 | CLO-6
  | 18 | Topic D | D18_1 | CLO-4
  | 19 | Topic E | E19_1 | CLO-5
  | 20 | Topic C | C20_1 | CLO-7
  | 21 | Topic H | H21_1 | CLO-8
  | 22 | Topic A | A22_1 | CLO-3
  | 23 | Topic A | A23_1 | CLO-3
3 | 24 | Topic A | A24_1 | CLO-1
  | 25 | Topic A | A25_1 | CLO-2
4 | 26 | Topic H | H26_1 | CLO-8
  | 27 | Topic H | H27_1 | CLO-8
  | 28 | Topic H | H28_1 | CLO-8
  | 29 | Topic H | H29_1 | CLO-8
5 | 30 | Topic A | A30_1 | CLO-3
6 | 31 | Topic B | B31_1 | CLO-8
7 | 32 | Topic C | C32_1 | CLO-4
  | 33 | Topic C | C33_1 | CLO-7
  | 34 | Topic C | C34_1 | CLO-8
  | 35 | Topic C | C35_1 | CLO-5

Ratings from the final exam results were collected and compiled. As these raw scores have different total marks for each question, a standardization method is used. The formula for the standardization is given as follows [12]:

z_ij = (x_ij − min x_j) / (max x_j − min x_j), (1)

where i = the ith student (i = 1, 2, …, 150), j = the jth question (j = 1, 2, …, 35), z_ij = standardized marks for the ith student and jth question, x_ij = marks for the ith student and jth question, min x_j = minimum marks for the jth question, and max x_j = maximum marks for the jth question.

Responses from the students' exam results were analyzed using a rating scale in which the students were rated according to their achievement. From (1),

A = 10 × z_ij.

Then, A is classified corresponding to the rating scale in Table 4.

Marks (A) | 0–1.49 | 1.50–3.49 | 3.50–6.49 | 6.50–8.49 | 8.50–10.00

Rating scale | 1 | 2 | 3 | 4 | 5

The grade ratings are tabulated and analyzed using Bond&FoxSteps, the Rasch Measurement Model software.
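The standardization and the fivefold rating can be sketched as follows. This assumes, consistently with the 0–10.00 ranges in Table 4, that A rescales the standardized mark z to a 0–10 scale; that rescaling is our reading of the text, not stated explicitly in it:

```python
def standardize(x: float, min_x: float, max_x: float) -> float:
    """Equation (1): rescale a raw mark onto [0, 1] using the
    minimum and maximum marks observed for that question."""
    return (x - min_x) / (max_x - min_x)

def rating(z: float) -> int:
    """Map A = 10*z onto the 1-5 rating scale of Table 4."""
    a = 10.0 * z
    if a < 1.50:
        return 1
    elif a < 3.50:
        return 2
    elif a < 6.50:
        return 3
    elif a < 8.50:
        return 4
    return 5

# Example: a student scores 7 marks on a question marked out of 0-10.
z = standardize(7, 0, 10)   # 0.7
print(rating(z))            # 4 (A = 7.0 falls in the 6.50-8.49 band)
```

Standardizing per question before rating ensures that questions with different total marks contribute comparably to the analysis.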

3. Data Analysis and Discussion

Figure 1 presents the summary statistics for the 150 students who answered the 35 IT280 final exam questions. The first indicator from these findings is the person mean of +0.68 (SE 0.54), showing that the students found this set of final exam questions comparatively easy; that is, they tended to answer most of the questions correctly. The mean square fit statistics (INFIT MNSQ and OUTFIT MNSQ) and the z statistics (INFIT ZSTD and OUTFIT ZSTD) are close to their expected values of +1 and 0, respectively, for both persons (students) and items (questions). This confirms satisfactory fit to the model. Besides, the person reliability (the Rasch equivalent of Cronbach's alpha) is 0.81, while the item reliability is much higher at 0.93. These reliability values (>0.8) reveal that the instrument for measuring the students' learning ability is reliable, reproducible, and valid for measurement. The person separation can be used to calculate the separation strata, which indicate the number of statistically distinct ability levels separated by three standard errors of measurement. The formula for the strata is (4 × person separation index + 1)/3 [13, 14]. The strata value suggests that the 35 IT280 final exam questions can distinguish two distinct levels of the students' academic ability.
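The separation and strata quantities relate to reliability through standard Rasch formulas: separation G = sqrt(R/(1 − R)) and strata H = (4G + 1)/3 [13]. A sketch of that computation (the exact separation index reported by the analysis software may differ slightly from the value derived from the rounded reliability):

```python
import math

def separation(reliability: float) -> float:
    """Rasch separation index G derived from a reliability coefficient R,
    G = sqrt(R / (1 - R))."""
    return math.sqrt(reliability / (1.0 - reliability))

def strata(sep: float) -> float:
    """Number of statistically distinct levels, H = (4G + 1) / 3 [13]."""
    return (4.0 * sep + 1.0) / 3.0

# Using the person reliability of 0.81 reported in this study as input:
g_person = separation(0.81)   # roughly 2.06
print(strata(g_person))       # roughly 3.1
```

Note that a reliability coefficient is bounded by 1, whereas separation and strata are not; the two scales should not be confused when reading software output.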

Figure 2 shows the Wright map for the analysis, with the distribution of the students on the left and the distribution of the questions on the right, according to the student and question labels, respectively. The distributions of students and questions support the results from the summary statistics illustrated in Figure 1. The Wright map provides an exact overview of the students' achievement on the final examination.

The separation between a question and a student's location on the Wright map shows the level of the student's ability: the further the separation, the more able the student is to respond correctly to the question. The level of question difficulty is also reflected by the spread of the questions on the scale. Questions located higher on the Wright map (as illustrated in Figure 2) are more difficult than those located lower. The MEANitem serves as the threshold and is set to zero on the logit scale.

The most difficult question is A22_1, located at the top of the item distribution, while the easiest question is C13_1, located at the bottom of the item distribution. The Wright map confirms that the person mean is higher than the threshold value, MEANitem = 0. These values show that the students' performance is above the expected performance. Most of the students measured (N = 138, 92%) are found to be above the MEANitem, while only 12 students (8%) are below it. The students with less ability have some difficulty in answering the half of the exam questions located above the MEANitem.

Student S48 can be categorized as the student with the poorest ability, since that student is located at the bottom of the student distribution. In contrast, there are 4 students (S85, S06, S110, and S140) with the highest ability who were able to answer all the questions given; they are located at the top of the student distribution. Their abilities exceeded the degree of difficulty of the questions, which reveals that these students successfully achieved all the Bloom's Taxonomy levels of learning.

Figure 3 presents the Rasch item estimates for the IT280 final exam questions; hence, the details of the map locations can be verified more easily. These findings confirm that the easiest question/item is C13_1, located at the bottom of the item distribution at −2.49 logits (SE 0.047), while the most difficult question/item is A22_1, located at +2.40 logits (SE 0.20). The analysis reveals that the easiest question (C13_1) has the minimum estimated measure, indicating that all students could answer the question correctly. The fit statistics of the item output look good, although we need to reconsider two underfit items: D04_1 and D08_1. A countercheck against the Guttman scalogram, as illustrated in Figure 4, indicates that the two items D08_1 (item 8) and D04_1 (item 4) have been underrated by 20 students: S85, S06, S86, S110, S140, S33, S02, S99, S01, S46, S87, S08, S04, S120, S143, S09, S150, S94, S02, and S92 (Figure 3). One possible reason is that they could have been careless in attempting their answers, which led to such grossly underrated work. After verifying that the point measure correlation (the PTMEA CORR column in Figure 3) for both items is positive, the two misfits are acceptable.

This study is also interested in measuring the students' performance on question C12_1, which tests the students' ability to create useful products for themselves in IT areas of interest to explore major, career, skills, interests, and talents (refer to Tables 2 and 3 and Figure 2). Further analysis of the item categories for question C12_1, as illustrated in Figure 5, found that 9 students (6%) were not able to answer it. This reveals that they failed to achieve Level 3 of Bloom's Taxonomy. The rest of the students can be categorized as poor (10%), moderate (42%), good (18%), and successful (24%) in achieving Level 3 of Bloom's Taxonomy.

4. Conclusion and Future Work

The Rasch Measurement Model presents a valid platform of measurement for fundamental courses that matches the university's mission and vision measurement standards and criteria. It is also quantifiable, given that the scale is linear. The Rasch Measurement Model is also very useful for its predictive function, which overcomes missing data.

This paper described the assessment and measurement of students' performance in the IT Fundamentals (IT280) course in the second semester of 2014/2015–2015/2016 for IT students in the Information Technology (IT) Department, College of Computer and Information Sciences (CCIS), Al Imam Mohammad Ibn Saud Islamic University (IMSIU) using the Rasch Measurement Model.

In conclusion, this study reveals that the students' performance on the set of IT280 final exam questions is comparatively good. This confirms that most of the students were able to achieve Level 3 of Bloom's Taxonomy. The results generated from this study can be used to guide appropriate improvements of the teaching method and the quality of the questions prepared.

In the future, we will continue our efforts to evaluate the students' performance with other sets of students and questions in order to ensure the consistency of the students' performance results. Besides, we will extend our study to assess the students' achievement in other IT courses to provide evidence for the quality of measurement of students' performance. This will help the IT department in complying with the ABET accreditation requirements.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


  1. W. M. Kapambwe, "The implementation of school based continuous assessment (CA) in Zambia," Journal of Educational Research and Reviews, vol. 5, no. 3, pp. 99–107, 2010.
  2. P. Sahlberg, "Education policies for raising student learning: the Finnish approach," Journal of Education Policy, vol. 22, no. 2, pp. 147–171, 2007.
  3. B. L. Boyd, K. E. Dooley, and S. Felton, "Measuring learning in the affective domain using reflective writing about a virtual international agriculture experience," Journal of Agricultural Education, vol. 47, no. 3, pp. 24–32, 2006.
  4. M. Saidfudin and A. A. Azrilah, Structure of Modern Measurement, Rasch Model Workbook Guide, ILQAM, UiTM, Shah Alam, Malaysia, 2009.
  5. M. Saidfudin and H. A. Ghulman, "Modern measurement paradigm in Engineering Education: easier to read and better analysis using Rasch-based approach," in Proceedings of the International Conference on Engineering Education, ICEED 2009, Shah Alam, Malaysia, December 2009.
  6. Ministry of Higher Education (MOHE), "Higher Education Report," 2009.
  7. S. A. Osman, W. H. W. Badaruzzaman, R. Hamid et al., "Assessment on students performance using Rasch model in reinforced concrete design course examination," in Recent Researches in Education, pp. 193–198, 2011.
  8. G. Rasch, Probabilistic Models for Some Intelligence and Attainment Tests, Danish Institute for Educational Research, Copenhagen, Denmark, 1960.
  9. T. V. Bond and C. M. Fox, Applying the Rasch Model: Fundamental Measurement in the Human Sciences, Lawrence Erlbaum Associates, Mahwah, NJ, USA, 2nd edition, 2007.
  10. A. M. Talib, R. Atan, R. Abdullah, and M. A. Azmi Murad, "Security framework of cloud data storage based on multi agent system architecture: a pilot study," in Proceedings of the 2012 International Conference on Information Retrieval & Knowledge Management (CAMP), pp. 54–59, Kuala Lumpur, Malaysia, March 2012.
  11. B. D. Wright and M. H. Stone, Best Test Design, MESA Press, Chicago, IL, USA, 1979.
  12. H. Othman, I. Asshaari, H. Bahaludin, Z. M. Nopiah, and N. A. Ismail, "Application of Rasch measurement model in reliability and quality evaluation of examination paper for engineering mathematics courses," Procedia–Social and Behavioral Sciences, vol. 60, pp. 163–171, 2012.
  13. B. D. Wright and G. N. Masters, "Number of person or item strata: (4*separation+1)/3," in Rasch Measurement Transactions, vol. 16, p. 888, Sense Publishers, Rotterdam, Netherlands, 2002.
  14. A. R. Rashid, A. Zaharim, and S. Masodi, "Application of Rasch measurement in evaluation of learning outcome: a case study in Electrical Engineering," in Regional Conference on Engineering Mathematics, Mechanics, Manufacturing & Architecture (EMARC), pp. 151–165, Kuala Lumpur, Malaysia, November 2007.

Copyright © 2018 Amir Mohamed Talib et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
