The core aim of this research study is to assess, analyze, and evaluate students' perception of traditional and electronic assessment, which may affect their academic performance. The study focuses on students' perceptions of English as a second language in the context of traditional (formal) assessment and electronic assessment, which constitutes the problem of this study, i.e., students' perception of traditional and electronic assessment. The study uses a quantitative research approach, and the data have been obtained through a survey questionnaire from 100 students of SSC and HSC Parts I and II. The research results are presented descriptively and analytically for the two significant types of assessment applied in their classrooms. The first is traditional assessment, which is further subdivided into multiple-choice questions (including fill in the blanks and true and false statements), constructed response questions, and extended response questions. The second is electronic assessment, a computer-based assisted assessment. Electronic assessment is a modern, technology-enabled form that includes collaborative approach-based assessment, interactive assessment, portfolios, and group projects. The results of this study reflect interesting findings. The results for the closed-ended questions (survey questionnaire) support the null hypothesis (H0), which states that there is no significant difference among students of GSL towards traditional and electronic assessment. However, the results of the two open-ended questions support the alternative hypothesis (Hα), which states that there is a significant difference among students of GSL towards traditional and electronic assessment. Most importantly, teachers are obligated to select assessment tools with careful consideration to ensure the appropriateness of each tool for the learning objectives.
Based on the research findings, some valuable recommendations for the policymakers, curriculum developers, students, and teachers are finally presented.

1. Introduction

Assessment is an integral part of the process of modern-day teaching and learning, and no one can deny that fact [1]. Assessment paves the way towards new teaching and learning techniques and methodologies, informs decision-making, and drives progress. Assessment is essential for classroom instruction, and in fact, we utilize it daily to assess students' learning outcomes and check students' progress [2]. In Pakistan generally, and in Sindh particularly, English language teaching, learning, and assessment have remained a significant challenge for students and teachers. According to the views shared in [3], to make the most of assessment, it should also be an element that motivates students to learn [4]. It has been shown that assessment tools should be used not only to assess or evaluate students' achievement but also to enhance the quality of language learning and teaching; this is missing in the Pakistani context and is one of the main issues of assessment. This research paper intends to highlight the significance of traditional and electronic assessment in the perception and preference of students, who have a pivotal role in this regard. The paper also suggests some viable solutions based on the data analysis and discussion results.

Moreover, assessment has always been a big challenge not only for ESL teachers but also for students. Assessing ESL is undoubtedly not an easy task for ESL teachers, particularly in Sindh, Pakistan, and students likewise face many ambiguities about ESL assessment. On the other hand, ICT skills are still not so common among teachers and students. Similarly, hardly any research has been conducted on TA and EA, particularly in the context of Sindh, Pakistan. Therefore, the focus of this study is to identify the level of satisfaction towards traditional and electronic academic assessment among students of GSL and to explore how students' perception of traditional and electronic assessment may affect the performance of the students of GSL.

2. Conceptual Framework

The SOLO and Bloom's taxonomies have been considered and kept in mind while framing the following conceptual framework for this quantitative research study. The conceptual framework has been adapted and developed from the study conducted by Dong and Franklin [5] (Figure 1).

3. Literature Review

3.1. Assessment: An Overview

Generally, assessment is performed using two common types known as formative and summative assessment [6]. In education, the term assessment refers to the large variety of methods or tools teachers use to gauge, measure, and analyze the academic readiness, learning progress, skill development, or educational needs of students [7]. However, assessment has come to be understood as a complex process that needs to be carried out using various methods which can provide access to multiple indicators of students' learning progression [8]. It has also been articulated that assessment should be embedded in the educational process and take place simultaneously with the planning of the learning and teaching process [9]. Assessment plays a very significant role in the overall learning process of a student. It helps make teaching and learning effective and helps in planning, assessing, analyzing, and evaluating the outcomes of teaching and learning. Selected assessment procedures should be relevant to the inclination or performance of the students, and an assessor needs to be very purposeful and careful about the procedure used when assessing students. Different assessment taxonomies, including SOLO (Structure of the Observed Learning Outcome), Bloom's taxonomy, and the revised Bloom's taxonomy, have been used to assess students' learning outcomes. Used most effectively, assessment should be an element that motivates students to learn rather than giving students a hard time and forcing them to learn. Assessment tools should be used to assess or evaluate students' achievement and improve the quality of language learning and teaching. To enhance learning and teaching quality, assessment tools are expected to encourage, help, and motivate students to learn actively and critically, not simply study for an exam [10].

3.2. Traditional Assessment

TA refers to the conventional methods of testing, which usually generate a written document, such as quizzes or exams. Standardized tests and most state achievement tests, such as the SAT, TOEFL, GRE, and IELTS, are also examples of TA. These consist of tests given to students by teachers to measure how much the students have gained and to what extent students have a grip on English as a second language (ESL). The most commonly used traditional assessment tools include multiple-choice question tests, true/false tests, short answers, and essays [11]. The primary aim of TA procedures is to carefully evaluate whether students have learned the content and determine to what extent students are successful in gaining knowledge [2]. In this regard, the traditional type of assessment has remained the centre for assessing the overall learning outcomes of the students; however, the literature suggests that new trends of assessment in terms of electronic assessment (EA), also known as digital assessment (DA), online assessment (OA), and computer-based assessment (CBA), emerged as alternative forms of assessment back in the 1990s [1]. Educational psychologists then developed different methods and ways of assessing students, moving from written or descriptive questions to multiple-choice questions and on-the-spot or in-class oral assessment (formative assessment). The purpose of these procedures is to carefully evaluate whether students have learned the content and whether they are successful in acquiring knowledge. This kind of assessment procedure aims to assign a grade to students, ranking and comparing them against the parameters set for them [2]. In modern ways of assessment, we see various types of oral assessment such as assessment of learning, assessment for learning, and assessment by learning, including interviewing.
All such assessments are conducted in a single period, session, or sitting and in a single lesson plan implementation process. The literature suggests that TA, which we may also call classical assessment, mainly relies on format-based assessment, emphasizing how the questions are framed and how the responses are jotted down. In the constructed form of assessment, which is the modified form of the extended form of assessment, responses are short and less complex than in the latter. In the context of Howard Gardner's theory of multiple intelligences, the traditional assessment method alone may not be a helpful way to assess learning abilities, as different students have different potentials, capabilities, and intelligences. Therefore, common sense suggests that the traditional assessment method may suit some students, but it is not a wholesome and complete way of assessment. TA may be an ideal form of assessment for some subjects and, similarly, may not be ideal for others. The course had two TA devices: quizzes and written exams, namely, a midterm test and a final test [3]. TA evaluates the learning and retention capacity of a child. It analyses what proportion of the provided material or syllabus the student has acquired. It also helps educators or teachers compare students' performances [10].

3.2.1. Extended Response Questions

ERQs are also commonly understood as essays or essay questions that test students' advanced cognitive skills. According to the study in [12], ERQs assess the cognitive domain and test knowledge objectives at advanced levels. They assess complex cognitive skills, including analysis, synthesis, and evaluation. They assess the ability to research a topic; creatively provide organized, integrated, and evaluated ideas; construct an argument; and present it in coherent English, in addition to the students' factual and recall knowledge. ERQs are still considered the most effective mode of assessment, mainly when ESL students' writing and reading skills are assessed. ERQs are quite commonly used in various academic and professional assessments across the globe. EQs are a great source for testing writing and grammar skills, where students are required to reflect their in-depth knowledge and the coherence of their critical analysis. The literature reflects that ERQs consume more time not only for students but also for teachers. There is no definite response context; therefore, students have some reservations regarding the assessment criteria for these open-ended questions.

Similarly, most students score less due to a lack of grip on grammar rules despite in-depth background knowledge. On the other hand, students who better understand ESL are likely to score high due to this advantage. ERQs offer some liberty and flexibility to students, who can respond as per their own conception and ideas. That is why, when a certain number of students are assessed through essay writing, they all respond quite differently from one another. There is always a need to design valid and reliable assessments which pinpoint and evaluate the conceptual understanding of students; such assessments can be administered to a larger group of students more effectively and can be used to build claims about students' conceptual knowledge. This is why essays (EQs) are commonly used in various standard and competitive examinations, especially to assess the language proficiency of ESL students.

3.2.2. Constructed Response Questions

CRQs or open-ended questions (EQs) also appeal to critical skills and creativity, but in a limited context compared to ERQs; students must come up with something different yet within the context. Constructed response tests are unique in their ability to award partial credit for partially correct answers and are generally considered the more demanding tests due to the inability to guess. Researchers have tried to determine the relationship between learning and test scores, but it is often hard to determine [13]. CRQs require students to interpret the information they already have and develop valuable solutions by utilizing their problem-solving skills. CRQs, however, are time consuming for teachers, given the diversity of responses, both to assess the students' learning abilities and to give them feedback on their performance in EQs. As far as assessing the ESL proficiency of students is concerned, CRQs offer a limited context in which to analyze and evaluate all four basic ESL skills. Unlike in ERQs, in CRQs students need to concentrate on several questions in a limited time frame and construct their responses based on the contextualized background study. According to the study in [14] and the literature, CRQs are utilized as a basic form of both formative and summative assessments (SAs), the latter typically used at the end of a learning period to gauge what students have learned. They have high stakes for everyone involved, most notably for the learners who are being graded, but also in the sense that the data could be used to enhance courses, analyze teaching outcomes, and conduct problem-solving examinations such as certification. Both summative and formative testing significantly affect students' overall learning, so careful selection is required. Summative evaluation stimulates learning because most school and college students will consider multiple techniques and approaches to improve their performance.
It is critical to consider the precise and organized definition of instructional objectives, since the acquisition of the knowledge and skills fitting a professional profile ought to be coordinated through a teaching cycle with a good selection of methodologies, delimitation of explicit content, and evaluation frameworks, which, in turn, leads to effective and enduring learning [15].

3.2.3. Multiple-Choice Questions

MCQs are termed closed-ended questions, where, unlike CRQs and EQs, students are required to respond in a limited and fixed context. MCQ formats such as option selection, blanks, true and false statements, and matching columns offer fewer opportunities for students to respond based on their critical thinking and creative skills; instead, MCQs test the cramming and reading comprehension skills of the students. Similarly, the MCQ test is also known as the objective test, defined as a structured test that asks participants to fill in one or two words or choose the correct answer from several options. An objective test consists of the problem/test item and a list of alternative solutions [16]. Multiple-choice tests are arguably the most popular type of assessment in education, and much research has already been dedicated to determining best practices for using them to measure learning [17]. MCQ tests have some advantages over other assessment strategies; they allow educators to cover a wide range of educational material in a short period, and they are instrumental in evaluating a large population of students. A concept inventory designed in a multiple-choice format is undoubtedly easier to administer and score, allowing much larger populations of participants within studies to be evaluated for conceptual understanding. Through MCQs, students' judgment and decision-making abilities are tested. Intelligent guessing also requires students to respond in time-bound and pressure situations. Controlling the nerves and using common sense are significant factors for students in MCQ-based assessments.

3.3. Computer-Based Assisted Assessment or E-Assessment

Since the adoption of information and communication technology (ICT) in education, CBAA has gained popularity and has been sustainably adopted in several higher education institutions [18]. Today's schools are turning to computers for all aspects of learning, including assessment. While advantages to computer testing do exist, the comparability between paper-pencil tests (PPTs) and computer-based tests (CBTs) must be considered [19]. Computerized learning and assessment will be the norm, especially after the outbreak of the COVID-19 pandemic. All educational institutions are now adapting to this digital model of learning, teaching, and assessment by supporting teachers, students, and other stakeholders [20].

Similarly, the COVID-19 pandemic has imposed a shift from TA towards EA through the integration of ICT into educational institutions, as hybrid learning was adopted worldwide to counter the COVID-19 outbreak. The benefits of using computers for testing are numerous; nevertheless, efforts must be taken to ensure that students' performance on computer-based tests is an accurate indicator of content competency [19]. Adding multiple-choice options to a test item can support students' problem-solving process by narrowing down the mental search space for potential responses, so that students do not need to engage in generating their own response, which reduces complexity in the problem-solving process [21]. Alternative assessments featuring ICT enable ESL teachers to gauge, improve, and redirect their instruction in ways that answer the requirements of their students, rather than relying exclusively on conventional testing formats which neither resemble the technology-enhanced instructional approaches adopted inside the classroom nor successfully reflect the fundamental skills and proficiency of ESL learners [22]. On the other hand, ICT has conquered every corner of learning and teaching, including the introduction of flipped classrooms, particularly for teaching and learning ESL. Mack writes in the blog titled “In Defense of Classroom Assessments” that computerized assessments have brilliant possibilities and basic limitations.

3.3.1. Electronic Assessment in the Context of Vygotsky’s Dynamic Assessment

The Russian psychologist Lev Vygotsky is considered the founding father of Dynamic Assessment (DA), which is grounded in his sociocultural and historical theory. It is based on his concept of the zone of proximal development (ZPD) within cultural-historical theory (SCT). DA is grounded in the concept of the ZPD and prescribes mediated teacher-learner dialogue during the assessment procedure [23]. There are two general approaches to DA, interventionism and interactionism [24]. EA is essentially defined as an approach to understanding individual differences and their implications for instruction using information and communication technology (ICT) tools. It is an assessment that embeds intervention within the testing procedure [25]. EA would seem to be an appropriate tool for educational psychologists (EPs) because it captures performance at different intervals, including time management and comprehensive overviewing. Nevertheless, the literature consistently shows this approach has been one of the least utilized in educational organizations, particularly at the secondary and higher secondary levels [26]. EA is a postmodern notion in testing which sees instruction and assessment as inextricably mingled, contending that learners will progress if provided with dynamic interactions [27]. EA advocates the provision of feedback or other forms of support (e.g., prompts, models, and leading questions) during the procedure to observe learner responsiveness. EA is theoretically a broad approach based on potential, which has led to many different interpretations and spawned different models and techniques, and this could be confusing to practitioners [26]. It has opened new horizons for the teaching and assessment of learning. EA offers a rationalistic view of assessment and instruction, integrating them into a unified activity [28].
To understand the assumptions of EA, one has to emphasize that DA tries to understand the individual's previous learning and independent performance as well as his/her potential ability in joint performance [29]. It aids in measuring the student's proficiency and supports their analytical, reasoning, and rationale skills. By applying their theoretical knowledge, students gain better experience and tackle problems [10]. EA offers a theoretically motivated strategy for unifying assessment and instruction, something vital to students. Considering this, EA strategies are crucial to teachers and students, and this significance comes from providing not only scores or grades but also insights into the individual's capabilities, the reasons for poor performance, and explicit procedures for supporting development [23]. ESL learning is significant in our regular daily lives. We use language to communicate with others and to state our feelings, sentiments, desires, and so forth. English is a worldwide language that has a leading and unquestionable role in our lives. Assessment and instruction are incorporated into the same development-oriented activity; EA takes a monistic view toward assessment and instruction [28].

3.3.2. Students’ E-Portfolio-Based Assessment

Portfolio-based assessments are collections of academic work, such as assignments, lab results, writing samples, speeches, self-created student films, or art projects, compiled by students and assessed by teachers in consistent ways. SPAs are often exercised to evaluate a “body of knowledge,” i.e., the acquisition of diverse knowledge and skills over time. Portfolio materials are collected in digital formats through a learning management system (LMS), and they are often evaluated to determine whether students have met required learning standards [7]. Portfolio assessment succeeds in identifying strengths and weaknesses of student work and is carried out jointly between students and teachers. Such implementation increases students' awareness of their strengths and weaknesses, which challenges and motivates them to improve on their weaknesses [30]. The portfolio collects student work and documentation of the students' learning progress (namely, the students' tasks, tests, performance, and activities), gathered regularly and continuously. A portfolio can be in the form of the students' work, the students' answers to the teacher's questions, anecdotal records of the students, reports of the students' activities, and the students' compositions or journals [30]. Assessment should be a motivation for students to study more effectively. Assessment of performances helps to motivate students to learn actively and critically and to create a portfolio rather than just studying for an exam, and it improves the quality of learning and teaching [10]. SPBA involves self-assessment by students. In this case, the students can also assess the process and learning outcomes based on a collection of their work and their learning outcomes. Thus, the assessment process becomes more meaningful and enjoyable for students [30].

Portfolios comprise student work that demonstrates mastery of a task's skill and articulation. The study in [11] characterizes portfolios as “a purposeful collection of student work that exhibits the student's efforts, progress, and achievements in one or more areas. The collection must include student participation in selecting contents, the criteria for judging merit, and evidence of student self-reflection.” Due to their cumulative nature, portfolios require much input and commitment from the student. In addition, they demand a lot of time and responsibility from educators, which yields a practical issue in evaluation. As indicated by the work in [11], a portfolio is a collection of different types of evidence of the achievement of learning outcomes. In practical terms, a student portfolio for assessment purposes is a summary of documents, papers, and other material, along with the student's reflection on their learning and on strengths and weaknesses.

3.3.3. Collaborative Assessment (as a Form of E-Assessment)

In collaborative assessment (CA) as a form of E-assessment, the teacher is required to collaborate with students to achieve students' learning outcomes. CA is not aimed at assessing, measuring, gauging, and grading the students; rather, it aims to help and support the students, including making things easy for them through collaboration. CA is not only a change within the assessment steps but also a shift towards a completely new assessment philosophy that focuses on the role of intervention in helping individuals develop [20]. CA enables interaction and assistance between teachers and students. CA intervention can be very helpful for students to perform better, and the students' thinking process in reaching the correct answer can be observed via CA [31]. CA fundamentally supports the idea that assessment is also part of teaching and learning, which can only be achieved through constant collaboration between teachers and students. It is not a one-way direction; a teacher should play the role of a guide and counselor. CA engages students in setting goals and evaluation criteria and performing a task, which includes identifying resources to develop ideas and produce an outcome over time and in a real environment by setting a professional context; engaging in ongoing dialogue with others such as peers, facilitators, community partners, teachers, and colleagues for continuous formative assessment; and using higher-level thinking and problem-solving skills in a natural context. Metacognitive, collaborative, and interpersonal skills and intellectual thinking are all developed in the process [32].

3.3.4. E-Group-Based Assessment

According to the study in [33], many activities, including discussions or problem-based learning activities through an LMS or E-learning, are often considered group work. We specifically refer to tasks where students work together in a group to produce a final piece of work that is assessed, that is, a group assessment. Student and teacher perceptions of such activities vary, and group assessment is usually viewed as a problematic and complex situation [33]. Nevertheless, group work and assessments are authentic within many disciplines/fields, mirroring requirements for future work practices. Requirements for practical group work assessment include ensuring that the work is observed before being assessed and, therefore, fairness in the division and allocation of marks. Many studies on group-based assessment (GBA) review and detail the impact of GBA as one of the approaches in assessment for learning (AFL). These studies mostly used quantitative research designs, statistical data, and quantification, but a few studies used qualitative descriptive research focusing on the learners' perspectives [34]. GBA may be a practical tool for learning and assessment in educational settings, but more research should be conducted on the effectiveness of EA to understand GBA's potential contributions to the development and motivation of language learners' language skills [35].

4. Impact of TA and EA on the ESL Teaching and Learning

ESL has always remained a great challenge for nonnative students who learn English as a second language. Traditional teaching and assessment have likewise remained a one-way direction of teaching and learning ESL; however, flipped and hybrid ways of teaching and EA have significantly impacted the learning and teaching of ESL. Language proficiency and achievement tests are usually designed and administered in the traditional manner [22]. However, various teaching and learning approaches tend to be utilized across the curriculum area [36]. As our outlook on ESL teaching strategies and approaches changes, ESL testing and assessment have likewise undergone certain changes. EA is an emerging evaluation approach compared to the traditional static techniques, holding that assessment and instruction are wholly integrated and not considered discrete activities [37, 38]. In the work in [25], a dynamic evaluation procedural framework was proposed as a possible way to address the dynamic nature of process writing and improve second language students' cognitive development. Hence, this framework offers more writing guidance than writing assessment [25]. Drawing on Vygotskian sociocultural theory (SCT), EA developed the interest of researchers as a path through which two fundamental parts of education, namely, instruction and assessment, rather than remaining separate, should be coordinated and brought together to advance students' language improvement. Building on Vygotsky's SCT, EA has given educators and students an alternative, a chance to try something new and find an effective way to cope with difficulties while teaching and learning ESL [35]. Assessment is increasingly conducted online, as many international organizations take full advantage of it by conducting various online exams, including the SAT, IELTS, TOEFL, O Levels, A Levels, and GCSE exams.
Likewise, various international bodies and associations such as the e-Assessment Association (e-AA), founded in 2008, as well as the Association of Test Publishers (ATP) focus specifically on innovations in testing and improving adoption regarding technologically advanced assessment [39].

5. Methodology

For the topic “Students' Satisfaction towards Traditional and Electronic Assessment for Academic Achievement in ESL at Government School, Larkana,” a theme-based/thematic literature review approach has been incorporated into this quantitative research. The common survey questionnaire method, based on a Likert scale, was adopted to collect statistical data from a specific group of people and further investigate the phenomenon that prevails in assessment. This study is a correlational type of quantitative research; hence, the literature review has been conducted to maintain the correlation between TA and EA. Similarly, both forms of assessment have been given equal weightage in the literature review phase so that results may be interpreted and evaluated with coherence between TA and EA.

5.1. Participants of the Study

In this study, the target population for sampling comprised 100 students of SSC and HSC level from the Public Sector School, Larkana, aged between 14 and 18 years, whose mother tongue was Sindhi but who were taught all major subjects with English as the medium of instruction. Twenty-five students from each section (SSC Parts I and II, aged between 14 and 16 years; HSC Parts I and II, aged between 16 and 18 years) were selected randomly out of 1200 students. The sample size was determined using the Morgan sample size table. The data were collected using the survey questionnaire adapted from the study in [40].

5.2. Instrumentation

The questionnaire was adapted from [40]. There was a total of 16 items in the questionnaire, containing 14 close-ended questions (CEQs) and two open-ended questions (EQs), with English as the medium of communication. The instrument was developed on a Likert scale, and its items focused on assessing the study's correlational reasoning and the perception of students about TA and EA. A survey questionnaire based on a five-point Likert scale with two open-ended questions was distributed among 100 SSC- and HSC-level students. Before that, the basic concepts of TA and EA were briefly explained to the students. Students were asked to fill in the provided survey questionnaire based on their perception of ESL assessment only, including reflecting their perception of TA and EA in the EQs. The reliability and validity of the survey questionnaire were ensured prior to collecting the actual data.
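The reliability of a Likert-scale instrument of this kind is commonly checked with Cronbach's alpha. As a minimal illustrative sketch only (the response matrix below is hypothetical, not the study's actual data, and the paper does not state which reliability coefficient was used), the coefficient can be computed in Python:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of respondents (rows) x items (columns)."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 5 respondents x 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # values above 0.7 are conventionally taken as acceptable
```

The function follows the standard formula alpha = (k/(k-1))(1 - sum of item variances / variance of total scores), using sample variances (ddof=1) as statistical packages typically report.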

5.3. Data Analysis Approach

In this study, descriptive and inferential statistical approaches were utilized. Moreover, a parametric test on the same group, analogous to a pre- and posttest analysis, was conducted: a paired-sample t-test was used to test the hypothesis, after confirming that the parametric test assumptions were not violated.
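For illustration only, the paired-sample t-test procedure described above can be sketched as follows. The scores below are made up for demonstration and are not the study’s actual data:

```python
# A minimal sketch of a paired-sample t-test, computed by hand on
# hypothetical per-student mean Likert scores (NOT the actual survey data).
import math

ta = [2.8, 2.4, 3.1, 2.6, 2.9, 2.5, 2.7, 3.0, 2.3, 2.6]  # traditional assessment
ea = [2.3, 2.1, 2.6, 2.2, 2.5, 2.0, 2.4, 2.6, 1.9, 2.2]  # electronic assessment

# Paired differences per student
diffs = [t - e for t, e in zip(ta, ea)]
n = len(diffs)
mean_d = sum(diffs) / n
# Sample standard deviation of the differences (n - 1 in the denominator)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
# t-statistic with df = n - 1
t_stat = mean_d / (sd_d / math.sqrt(n))
print(f"mean diff = {mean_d:.2f}, SD = {sd_d:.2f}, t({n - 1}) = {t_stat:.2f}")
```

The p-value would then be read from the t-distribution with n − 1 degrees of freedom; in practice, statistical software such as SPSS reports it directly.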

6. Results and Findings

The following data illustrate students’ perceptions of traditional and electronic assessments based on the results, and they are presented in tables. The data are divided into two variables (TA and EA): the first part consists of seven statements (CEQs) related to TA, the second part consists of seven statements (CEQs) related to EA, and the last part consists of two statements (OEQs) related to TA and EA. For the CEQs, a Likert scale was used, with values ranging from the largest to the smallest. The positive pole comprises the responses strongly agree and agree, while the negative pole comprises the responses disagree and strongly disagree. Neutral responses belong to neither pole and are therefore excluded from the counting of positive and negative responses. The aggregate scores, whether in the positive or negative pole, represent the answers to each statement, indicating the students’ perception.
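The pole-counting procedure described above can be sketched as follows, using hypothetical response counts for a single statement (not the study’s actual tallies):

```python
# Tallying the positive and negative poles for one Likert statement,
# with hypothetical counts (NOT the study's data).
responses = {
    "strongly agree": 30, "agree": 25, "neutral": 15,
    "disagree": 20, "strongly disagree": 10,
}

positive = responses["strongly agree"] + responses["agree"]        # positive pole
negative = responses["disagree"] + responses["strongly disagree"]  # negative pole
# Neutral responses are excluded from both poles.
print(f"positive = {positive}, negative = {negative}")  # → positive = 55, negative = 30
```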

Table 1 reflects the descriptive statistics of the seven items of TA. The mean value of the first construct (Traditional assessment is ideal for your English Language Learning) is 2.56, while the standard deviation is 1.42. The second construct (Extended response questions are easy for ESL assessment) highlights 3.50 as its mean and 1.49 as its standard deviation. In the third construct (Constructed response questions are pretty easy to respond to), mean and standard deviation values stand at 2.61 and 1.53, respectively, while the fourth construct (Multiple-choice questions are easy and less time consuming for ESL assessment) denotes 2.41 as the mean value and 1.40 as the standard deviation value. In the fifth construct (Jigsaw, match the columns, and puzzles are hard to crack in ESL assessment), we find 2.81 as the mean value and 1.36 as the standard deviation value; similarly, the mean and standard deviation values of the sixth construct (You feel comfortable with one-word answer type of questions) stand at 2.73 and 1.30. The TA variable’s last construct (Your English language grades get affected due to traditional assessment) reflects 2.50 as its mean value and 1.37 as its standard deviation value.

Table 2 describes the descriptive statistics of the seven items of EA. The mean value of the first item (Computer-assisted assessment leaves a good impact on your ESL results) is 2.19, while the standard deviation is 1.31. The second item (Portfolios give you a chance to improve your ESL grades) highlights 2.52 as its mean and 1.35 as its standard deviation. In the third item (Semester-based assessment is easy for your ESL learning), the mean and standard deviation values stand at 2.12 and 1.16, respectively, while the fourth item (You feel comfortable in discussions, debates, and quiz-based assessment) denotes 2.26 as the mean value and 1.11 as the standard deviation value. In the fifth item (Practical and spot-test type of assessment is ideal for you in ESL grades), we find 2.63 as the mean value and 1.33 as the standard deviation value; similarly, the mean and standard deviation values of the sixth item (You like inquiry-based assessment very much) stand at 2.47 and 1.30. The EA variable’s last item (Inquiry-based assessment impacts your ESL grades) reflects 2.35 as its mean value and 1.18 as its standard deviation value.

Descriptive and correlation statistics of the paired samples of the variables TA and EA, as presented in Tables 3 and 4, clearly reflect a significant difference (p < .001) between TA and EA.

As shown in Table 5, a paired-sample t-test was conducted to compare TA with EA, and there was a significant difference in the scores for TA and EA (M = .36, SD = .27; t(99) = 13.59; p < .001).
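As a rough arithmetic check (not part of the original analysis), the reported t-value can be approximately reproduced from the summary statistics M, SD, and n; the small gap from the reported 13.59 most likely reflects rounding of M and SD to two decimal places:

```python
# Recomputing the paired-sample t-statistic from the reported
# (rounded) summary statistics of the paired differences.
import math

mean_diff, sd_diff, n = 0.36, 0.27, 100

t = mean_diff / (sd_diff / math.sqrt(n))  # t with df = n - 1 = 99
print(f"t(99) = {t:.2f}")  # → t(99) = 13.33
```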

7. Discussion

These results suggest that TA affects ESL students’ grades more than EA does. Specifically, our results suggest that students are more satisfied with TA than with EA. However, in an open-ended question (What are your views about Traditional Assessment?), most participants believed that TA affects their grades, ESL learning, and understanding. A reasonable number of participants wrote that TA offers opportunities and an edge to students who cram, and that students with better handwriting skills tend to score much higher than those with poor handwriting skills. Traditional evaluation techniques are considered more objective, reliable, and valid [11]. In the Pakistani context, the majority of teachers follow traditional assessment practices because they are unfamiliar with ICT skills.

Similarly, students have poor ICT skills or hardly any access to the Internet and electronic gadgets. Surprisingly, in response to the second open-ended question (What are your views about Electronic Assessment?), a high number of participants, after watching a YouTube clip about e-assessment, were of the view that EA positively impacts their grades, as it offers fewer cheating opportunities, better time management, and the chance to edit responses. A few participants wrote that computer-assisted and portfolio assessments are time consuming, negatively impacting the grades of students who are not good at ICT skills. The majority of the participants reflected that the EA approach requires high critical and creative skills; similarly, EA mostly relies on on-the-spot reflection, demanding strong composing and comprehension skills, especially in ESL learning. EA has limitations, predominantly concerning time and access to the Internet and electronic gadgets [34]. However, EA may help ESL teachers improve students’ reading and speaking skills [41].

8. Conclusions and Recommendations

From the findings and discussion, it can be concluded that TA and EA cannot be compared directly; rather, each has its own significance, impact, and importance. ESL teachers should therefore continue to assess their students in the classroom using both forms of assessment as appropriate, since assessing students is by no means a straightforward process: teachers generally view assessment differently depending on their academic, professional, and contextual understanding of it. Practice may also vary with different ground realities, including the classroom environment, which is an obligatory part of teaching and learning, since assessment is always essential for observing and determining the progress of ESL learners.

Similarly, the results also reflect that, in some cases, especially where there is a lack of proper access to the Internet and computers as well as poor ICT skills among both teachers and students, the use of TA is very effective and productive; however, it may not serve to assess all four primary ESL skills. Therefore, in Sindh, Pakistan, improving teachers’ and students’ ICT skills and ensuring proper access to the Internet and electronic gadgets are suggested. Similarly, it is entirely up to the educators, students, parents, and stakeholders to decide when, where, and which form of assessment (TA or EA) to use, so that effective and productive results may be achieved in the best interest of students’ overall learning outcomes, especially during situations such as the COVID-19 pandemic, which may arise again in the future.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.