Abstract

Listening and reading, two of the essential skills in language learning, have established themselves in various academic fields. Many researchers have examined the teaching and development of these two receptive skills, but less attention has been paid to assessing and testing them. Traditional assessment has long been used to examine learners, but new teaching methods call for new ways to test and assess students. This study investigated the impact of dynamic assessment (DA) vs. nondynamic assessment (NDA) on Ethiopian intermediate EFL learners’ receptive skills and was conceptually grounded in Vygotsky’s (1978) socio-cultural theory (SCT). To this end, 96 intermediate students from a high school participated in this study. They were divided into three equal groups: two experimental groups (EG1 and EG2) and one control group (CG). After a pretest was administered, EG1 and EG2 were taught listening and reading skills through group dynamic assessment, while the control group received traditional instruction. After the treatment, a posttest was administered. The results of one-way ANCOVA revealed that dynamic assessment had significant effects on receptive skills. In light of the findings, this study has implications for instructors, students, and material designers. Teachers are encouraged to use DA in their language instruction to help students improve their English language abilities.

1. Introduction

Over the last few years, dynamic assessment (DA) has been a popular topic among academics and scholars. It is described as an approach that recognizes differences between individuals and their consequences for instruction, and that incorporates intervention into the assessment process by integrating forms of mediation that are sensitive to the individual’s current abilities and subsequent performance in order to enhance learner advancement [1]. To put it another way, DA differs from traditional assessment in its theoretical perspective, the assessment techniques used, and how outcomes are interpreted [2]. It is fundamentally concerned with how assessment and teaching interact. More specifically, DA attends to the evaluation process as well as the final result. It tries to influence the student’s performance during a test by providing new material or directives in order to generate higher levels of achievement [3].

DA aims at exploring how learners respond to instruction during the assessment procedure. As a result, when detecting reading difficulties, the focus is on gathering information about the participant’s reading decoding skills [4, 5]. Similarly, the dynamic assessment approach to identifying reading difficulties seeks to discover the learner’s learning potential as established by Vygotsky’s Zone of Proximal Development (ZPD). There are two practical approaches to DA that can be traced back to the various situations in which Vygotsky articulated the ZPD. The first is referred to as interactionist DA. Its roots may be traced back to Vygotsky’s qualitative interpretation of the ZPD, which prioritizes instruction-learning over assessment. Reuven Feuerstein is a strong proponent of interactionist DA [6, 7]. The Mediated Learning Experience (MLE)—a concept based on Vygotsky’s mediation theory—lies at the center of Feuerstein’s method. The second approach to DA, on which the researcher concentrated in this study, is known as interventionist DA. It follows a quantitative methodology and accommodates itself more to a psychometric perspective. It is typically implemented as a pretest-mediation (intervention)-posttest experimental design. The teacher’s role is dynamic, involving collaboration with the learner to change the assessed skill [8, 9].

DA can help learners enhance their receptive abilities. One of the receptive skills is listening, which includes auditory identification, aural grammar, selecting required information, memorizing it, and relating sound and form to meaning [10, 11]. According to Noels et al. [12], listening is an active mental skill. It aids our understanding of the world around us and is an essential component of effective interaction. According to Taguchi [13], listening comprises listening for ideas, emotions, and goals, and it requires active participation, time and energy, and practice.

Listening comprehension is a process distinct from reading comprehension in that it involves comprehending spoken language. Recognizing speech sounds, grasping the meaning of individual words, and understanding the grammar of sentences are examples of the abilities involved [14, 15]. According to Hamouda [16], listening comprehension refers to understanding what the listener has just heard; a listener may be able to repeat the sounds of a text without actually understanding what is being said. According to Bekka [17], listening comprehension is an evolving process in which the listener generates meaning from contextual cues and prior knowledge while drawing on many strategic resources to complete the task at hand.

The other receptive skill that may be influenced by DA is reading. Reading is one of the four primary skills in language education, and it plays an important part in the overall system of language instruction. In their definition of reading, Yang and Qian [18] state that it means “...various things to different individuals, for some it is identifying written words, while for others it is a chance to teach pronunciation and practice speaking” (p. 60). Reading, according to Yang and Qian [9], is an “enjoyable activity” that may provide pleasure to those who participate in it (p. 28). Reading has also been regarded as the most important academic language skill [19, 20], defined as “... The capacity to derive meaningful information from a written page and to use that knowledge effectively.”

The primary issue highlighted in this research is that current knowledge about human cognition and learning does not align with how assessments are conducted. The design of the assessment process should be based on a model of cognition and learning that is grounded in scientific research. The assessment of students’ performance, success, and results should be founded on the most recent scientific understanding of how students represent information and acquire competence in their subject area. The influence of dynamic assessment on learners’ reading and listening performance has been the subject of only a small number of studies in the literature examining second language receptive skills. Listening comprehension is still regarded as a Cinderella skill in Ethiopian language institutes and high schools, with English language teachers emphasizing other skills at the expense of listening comprehension. According to Zhang [21], “both language instructors and learners tend to overlook the importance of listening comprehension skills because their attention is fixed so completely on their eventual aim, meaning that they fail to realize necessity improving listening comprehension skills” (p. 192).

There is a dearth of empirical L2 research exploring the effects of DA on Ethiopian EFL learners’ receptive skills. Furthermore, the extant research conducted in this area examined the impact of DA on only one skill, and no comparative study has been performed in this domain; therefore, this study addresses these issues and compares the effects of DA on the listening and reading comprehension of Ethiopian EFL learners.

Since some Ethiopian EFL learners are weak in English, this study aimed at helping them enhance their listening and reading skills, that is, their receptive skills. (It is important to note that learners do not need to produce language to perform listening and reading; instead, they absorb and comprehend language. These abilities are sometimes referred to as passive skills, in contrast to the productive or active skills of speaking and writing.) This study employed DA to help Ethiopian EFL learners promote their receptive skills. Accordingly, two main objectives were pursued. First, this study sought to scrutinize the effects of DA on Ethiopian EFL learners’ listening skills; second, it aimed at examining the effects of DA on the students’ reading comprehension improvement.

1.1. Research Questions and Null Hypotheses

This study aimed at answering the following research questions:

RQ 1. Do dynamic vs. nondynamic assessments (NDA) have any significant effect on Ethiopian intermediate EFL learners’ listening comprehension?

RQ 2. Do dynamic vs. nondynamic assessments have any significant effect on Ethiopian intermediate EFL learners’ reading comprehension?

According to the above-stated research questions, the following null hypotheses were suggested:

HO 1. Dynamic vs. nondynamic assessments do not have any significant effect on Ethiopian intermediate EFL learners’ listening comprehension.

HO 2. Dynamic vs. nondynamic assessments do not have any significant effect on Ethiopian intermediate EFL learners’ reading comprehension.

2. Theoretical Background

2.1. Dynamic Assessment

Even though Vygotsky’s notion of the ZPD is the basis for DA, neither Vygotsky nor his associates used the term DA when discussing the importance of differentiating between diagnostic and prognostic testing in educational and laboratory settings while constructing their proposals on human cultural development. Luria [22], one of Vygotsky’s most prominent colleagues, “contrasts statistical with dynamic techniques to assessment” in his work published more than 40 years ago (p. 7). Even though it rests on sound psychometric concepts, Luria argues that the former is flawed because it assumes that a person’s test results indicate their full potential. To obtain a complete picture, two further pieces of information are needed: the person’s performance with support from someone else and the extent to which they may profit from this aid, not only on the same task or test but also on other tasks or tests.

Barker and Saunders [23] described DA as a technique that examines the effects of an intervention in the prologue to their critical analysis of the research on DA since Luria’s publication. Interventions of this kind are designed to help students improve their scores on specific questions or on the whole test. The final score may be based on the difference between the pretest and posttest scores (before and after learning), or it may be based only on the posttest score. While this is a more systematic explanation of DA than Luria provides, it falls short of capturing the full import of Vygotsky’s conception of progression in the ZPD. As Luria’s remarks demonstrated, development in Vygotsky’s sense was not limited to a particular task or test; instead, it must take into account the individual’s capacity to transfer what has been internalized via mediation beyond the present task to subsequent tasks.

According to Sternberg and Grigorenko [24], following Luria’s distinction between DA and statistically based assessment, a large number of people working within the DA paradigm have juxtaposed their approach with what they refer to as static assessment, which follows more traditional assessment procedures, especially those associated with summative assessment, and this contrast has caused some confusion. Static assessment is defined as follows: the examiner gives items to the examinees one at a time or all at once, and each examinee is requested to respond to these items sequentially without any feedback or assistance on the examiner’s part [24]. The only feedback an examinee will receive after the test has been administered is a report on a score or set of scores, which is usually issued some time after the test has been taken. Depending on the situation, the examinee may already be preparing for one or more upcoming examinations by that time.

The Zone of Proximal Development proposed by Vygotsky and Cole [25] implies that different persons might have the same baseline score on a static exam but differ in how much they can benefit from instruction. Because it has its origins in Vygotsky’s theory of mind, DA goes beyond the integration of evaluation and teaching by empowering instructors to increase learners’ skills through continuously adapting their mediation to the changing requirements of their students, rather than the other way around [26, 27]. DA is no longer a novel psychological and educational evaluation method; in fact, some of its present applications have been around for more than half a century [28]. Haywood and Lidz [29] first defined dynamic assessment (DA) as a broad category of practices that differ from traditional or nondynamic assessment (NDA) by including intervention and learner responsiveness to intervention as essential elements for understanding a learner’s abilities.

Vygotsky and Cole [25] argued that children’s perspective on the world is shaped by their interactions with others from an early age. During this period, a child can learn more effectively when collaborating with a more experienced or knowledgeable mentor. This idea is often used in interventions, although it is not how cognitive or linguistic tests are generally carried out. In Vygotsky’s [25] “zone of proximal development” (commonly, ZPD), a child’s ability to perform and succeed when aided by an adult or a more experienced peer indicates the child’s developmental potential.

When it comes to assessing this learning capacity, the phrase “dynamic assessment” encompasses many different types of methodologies and materials. Teaching or facilitating improved performance during assessment and evaluation seeks to expose an individual’s maximum performance. Traditional psychometric procedures analyze what Feuerstein et al. [30] term “static measurement.” Feuerstein et al. [7] present the argument for DA of cognitive capacity; however, there remains a gap in speech and language treatment.

2.2. Dynamic Assessment and Language Education

The term DA, which refers to the pedagogical manifestation of the ZPD, was coined in English by Luria [21], a colleague of Vygotsky. When Luria first proposed DA, he did so within the study he and Vygotsky conducted on children with learning difficulties, which was a natural fit. In modern times it has been expanded to cover individuals suffering from various conditions related to old age, such as dementia [29]. Educators, however, have expanded the scope of DA further to include general education and L2 pedagogy [31–33].

Two broad approaches to DA have developed since Luria’s introduction of the concept. In both, instruction as mediation and evaluation are combined into a single activity to identify learning potential and to support growth in the wake of this potential. When a test is completed item by item, one option is to use interventionist DA to provide learners with a defined set of clues and hints that have been prefabricated in advance and presented to them as they go through the exam. This scale from implicit to explicit mediation is used to organize the prompts. It is based on the assumption that if learners can respond appropriately to an implicit form of mediation, they have already achieved a greater degree of control over the educational object than if they require more direct assistance. Providing explicit mediation when implicit mediation is adequate both conceals the learner’s developmental level [34–36] and, more importantly, violates the learner’s sense of agency [36]. Interventionist DA has the particular benefit that, since the mediation is not personalized to the responsiveness of individual students, it may be undertaken with large numbers of people simultaneously through the computer.

Because the quantity of hints is predetermined, it is also possible to generate numerical scores that can be compared between different learners. As a result, this technique is more psychometrically feasible than the alternative approach, interactionist dynamic assessment (see below). The Leipzig learning test (LLT), a linguistic aptitude test established by Jurgen Guthke and his colleagues [37] to evaluate the learning capability of overseas students wishing to enroll in German institutions, serves as an example of interventionist DA. The exam findings were used to place students in L2 German lessons suitable for them. For the LLT, examinees are presented with a constructed language and asked to reply to a series of questions that require them to determine the language’s morphosyntactic features, similar to the format of many language aptitude tests. Each test item is followed by a sequence of five clues presented in order from implicit to explicit. If an examinee gives an incorrect answer, they are given the most implicit hint possible: “That is not correct. Please take another look at it and consider your options.”

If the second attempt does not result in a satisfactory response, the mediation becomes more explicit: “That is not right. Consider which rows are the most relevant to the one you are attempting to complete.” The fifth and final form of mediation gives the correct response and an explanation of why the answer is accurate. After that, the exam moves on to the next item. Although the LLT’s purpose is to examine linguistic aptitude, the fact that it is based on the ZPD means that it acknowledges that aptitude is not a static attribute but rather a dynamic ability that may grow during the very examination intended to assess it. As a result, it is expected that as learners work through the exam, they will need fewer hints and less explicit mediation, which will be an indicator that their language aptitude is increasing.

According to Uztosun [38], the interactionist DA prefers “qualitative evaluation of psychological processes and dynamics of their qualitative growth” over “quantitative assessment of psychological processes and dynamics of their qualitative development” (p. 119). When it comes to education, Vygotsky and Cole [25] maintained that we should not measure but understand pupils and that this can only be done via contact and collaboration. Consequently, mediation in interactionist DA cannot be scripted in advance but must be negotiated with the person and continuously adjusted according to the learner’s responsiveness to the situation. In Reuven Feuerstein’s version of DA, known as the mediated learning experience (MLE), the traditional examiner/examinee roles are abandoned in favor of a teacher-student relationship in which both individuals work toward the ultimate success of the learner: “It is through this shift in roles that we find both the examiner and the examinee bowed over the same task, engaged in a common quest for mastery of the material” [7] (p. 102). Consequently, education takes center stage, with psychometric assessment being minimized, if not eliminated from the stage.

2.3. Empirical Background

Regarding the effectiveness of DA on students’ English learning, a number of studies have been conducted [10, 27, 32, 39–54], and several conclusions can be drawn from them. In contrast to conventional assessment, DA incorporates the interaction between the examiner and the student throughout the evaluation process, and it is becoming more popular. The properties of DA indicate that it can be used to evaluate learning disabilities in a manner that is congruent with the current understanding of language acquisition. Aside from the research described above, only a few studies have looked at the link between language acquisition and dynamic evaluation in more detail. When learners’ reading and writing skills are assessed using a dynamic assessment technique, a more thorough description of their strengths and shortcomings may be offered to the students and instructors. This information would allow for more effective instructional programming during remediation, which would, in turn, enhance the participants’ reading and writing skills.

Furthermore, analyzing the efficiency of DA strategies may assist instructors in making better judgments in their classrooms, which can lead to more successful English learners. DA is a powerful method that helps students overcome language and cognitive difficulties while identifying their needs. The purpose of this research was to establish whether or not DA is beneficial for developing the receptive skills of Ethiopian EFL learners. Overall, it is anticipated that DA will offer Ethiopian learners more exact and thorough information about their abilities than standard evaluation methods provide. Finally, a quick review of the research conducted in the domain of DA, particularly in educational settings, indicates the value of this method in helping learners reach higher levels of learning. Nevertheless, there has been minimal investigation into the function of mediation via dynamic assessment in teaching receptive skills. Following past research on DA, and to broaden the area of its applications, the purpose of this study was to investigate the effects of DA on enhancing the reading and listening abilities of Ethiopian English as a foreign language learners.

3. Method

3.1. Participants

The participants of this study were 96 students selected from a high school in Ethiopia. Three intact classes were chosen from the school. The students were all in the third grade of junior high school, ranging in age from 16 to 19. One class was regarded as the listening group (EG1), one class was considered the reading group (EG2), and the other class served as the control group (CG). There were 32 participants in each class. The study participants were all male, and their mother tongue was Persian.

3.2. Instruments

The first and most important instrument for obtaining data to answer the first research question of this study was a listening pretest created by the researcher and based on the coursebook CDs used by the students (Prospect 3). It consisted of 40 objective items, each of which the pupils were required to answer. The researcher instructed the participants to listen attentively to the audio files of the book and then respond to the questions.

The second instrument used in this study to address the second question was a reading pretest created by the researcher himself. Four passages from the Prospect book were chosen, and 40 objective questions, including true/false and multiple-choice items, were designed based on these texts.

The third instrument utilized in this study was a combined reading and listening pretest created by the researcher himself. Twenty reading and twenty listening questions were included in the test; the items were divided into true/false and multiple-choice. This test was created following the students’ textbook (Prospect 3) and was administered to the participants in the control group.

The fourth instrument utilized in this investigation was a listening posttest created by the researcher; a modified version of the pretest was employed as the posttest in this study. The fifth instrument was a reading posttest developed by the researcher, also a modified version of the pretest. The last instrument was a reading/listening posttest that the researcher created. All of the features of the posttests were the same as those of the pretests. The only variation between the pre- and posttests was that the order of the options and questions was changed to eliminate the possibility of recalling answers from the pretests.

The posttests were administered to measure the impact of the treatment on the participants’ reading and listening improvement. It should be mentioned that the validity of all pre- and posttests was confirmed by 5 English experts, and their reliability was computed using the KR-21 formula (listening pretest (r = 0.91), reading pretest (r = 0.89), reading/listening pretest (r = 0.88), listening posttest (r = 0.85), reading posttest (r = 0.80), and reading/listening posttest (r = 0.79)). All pre- and posttests were piloted in another high school to examine the feasibility of the tests before they were used in the target study.
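For reference, the KR-21 estimate reported above follows the standard formulation (stated here for clarity; it is not quoted from the study itself):

KR-21 = [k / (k − 1)] × [1 − M(k − M) / (k × s²)],

where k is the number of test items, M is the mean of the learners’ total scores, and s² is the variance of the total scores. Values closer to 1 indicate higher internal consistency, so the reported coefficients of 0.79–0.91 correspond to acceptable to high reliability.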

3.3. Procedure

Three intact classes were selected to carry out the present study; one was considered the listening group, one was regarded as the reading group, and the other served as the control group. After that, the researcher administered the reading, listening, and reading/listening pretests to the reading group, the listening group, and the control group, respectively. Then, the treatment was administered to both experimental groups, which received instruction based on group DA. At the end of each class, the researcher required the learners to study for the next session in order to be prepared for a quiz; in other words, one quiz was administered to the experimental groups in each session. Formative assessment was applied during the term, and summative assessment was applied at the end. The dynamic assessment was used in the experimental classes to help the researcher discover what a student already knew. In the experimental groups, the researcher used an iterative pretest-teach-retest process: a pretest was given to determine what information students already had, then the unknown material was taught, and finally a posttest was given.

The focus of this study was on receptive skills, including both reading and listening comprehension. The control group took two receptive tests, a listening test and a reading test. The control group received traditional instruction, meaning that no quiz was administered in its sessions; only the pre- and posttests were given to these learners. The treatment lasted 18 sessions of 75 minutes each. In the first session, the purposes and procedures of the study were clarified for the participants, and the participants were pretested. In the following 16 sessions, both experimental groups were trained through DA. In the 18th session, the researcher administered the posttests of reading and listening to determine the effects of the treatment on the students’ improvement.

4. Results

4.1. Results of Normality Test

It was essential to verify the normality of the distributions before running any statistical analysis on the pre- and posttest scores. The Kolmogorov–Smirnov test was run on the data to check for normality. The findings are shown in Table 1.

The Sig. values in Table 1 indicate whether the distributions are normally distributed or not. Because the values in Table 1 were all greater than .05, it can be inferred that the distributions of the pretest and posttest scores obtained by the EG1, EG2, and CG learners were normal. Therefore, a parametric test (i.e., ANCOVA) could be used to compare the groups included in the study.
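As a minimal illustration only (the study itself reports using SPSS version 22, and its actual data are not reproduced here), a comparable normality check could be sketched in Python with SciPy; the array scores and its values below are hypothetical:

import numpy as np
from scipy import stats

# Hypothetical posttest scores for one group (illustrative values only)
scores = np.array([14, 16, 12, 17, 15, 13, 18, 16, 14, 15], dtype=float)

# One-sample Kolmogorov-Smirnov test against a normal distribution whose mean
# and standard deviation are estimated from the sample; note that SPSS applies
# the Lilliefors correction when the parameters are estimated this way
stat, p_value = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))

# A p-value above .05 means normality is not rejected, so parametric tests
# such as ANCOVA can reasonably be applied
print(f"K-S statistic = {stat:.3f}, p = {p_value:.3f}")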

4.2. Results for the First Research Question

To answer the first research question of the study, namely whether dynamic vs. nondynamic assessments (NDA) have any significant effect on Ethiopian intermediate EFL learners’ listening comprehension, a comparison of the posttest scores of the EG1 and CG learners was necessary. An independent-samples t-test could accomplish this goal, but in order to control for any preexisting differences between the two groups while comparing their posttest results, a one-way ANCOVA was conducted:

Table 2 shows that the EG1 students’ posttest mean score (M = 16.5469) was higher than the CG students’ posttest mean score (M = 12.3594). In Table 3, the Sig. column and the Groups row are used to evaluate whether or not this difference was statistically significant.

Considering the Groups row and the relevant Sig. column, the p-value (.000) was lower than the alpha level of significance (.000 < .05), indicating that the difference between the two groups of EG1 (M = 16.5469) and CG (M = 12.3594) on the listening posttest was statistically significant. Thus, employing dynamic assessment (DA) might result in a considerable improvement in the listening comprehension of EG learners.

A further important piece of information in Table 3 is the effect size value, which can be found under the Partial Eta Squared column. It indicates that the intervention (in this case, dynamic vs. nondynamic assessment) accounted for 60% of the variance between the EG1 and CG learners’ listening posttest scores, which is a large effect.
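For readers who wish to reproduce this type of analysis outside SPSS, a one-way ANCOVA with the pretest score as the covariate, followed by the partial eta squared effect size, could be sketched in Python with statsmodels as below; the data frame and the column names group, pretest, and posttest are illustrative and are not taken from the study’s dataset:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative data: one row per learner, with group membership, the pretest
# score (covariate), and the posttest score (dependent variable)
df = pd.DataFrame({
    "group": ["EG1"] * 5 + ["CG"] * 5,
    "pretest": [10, 12, 11, 13, 9, 11, 10, 12, 9, 13],
    "posttest": [16, 17, 15, 18, 14, 12, 13, 12, 11, 14],
})

# One-way ANCOVA: posttest scores compared across groups, adjusted for pretest
model = ols("posttest ~ C(group) + pretest", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Partial eta squared for the group effect = SS_group / (SS_group + SS_error)
ss_group = table.loc["C(group)", "sum_sq"]
ss_error = table.loc["Residual", "sum_sq"]
partial_eta_sq = ss_group / (ss_group + ss_error)

print(table)
print(f"Partial eta squared for group = {partial_eta_sq:.2f}")

A significance value below .05 for the group term, together with a partial eta squared of the magnitude reported above, corresponds to the pattern of results summarized in Tables 3 and 5.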

4.3. Results for the Second Research Question

The study’s second research question was similar to the first one, except it was about reading comprehension. It intended to determine whether dynamic vs. nondynamic assessments have any significant effect on Ethiopian intermediate EFL learners’ reading comprehension. Thus, the posttest scores of the EG2 and CG learners were compared through a one-way ANCOVA.

Table 4 demonstrates that the EG2 students’ posttest mean score (M = 17.3438) was higher than the CG students’ posttest mean score (M = 12.8594). The researcher needed to look at the p-value in the Sig. field relevant to the Groups row of Table 5 to determine whether this difference in posttest mean scores was statistically significant:

There was a statistically significant difference between the two groups of EG2 (M = 17.3438) and CG (M = 12.8594) on the reading posttest, as shown by the p-value under the Sig. column in Table 5. Because of this, students in the EG2 class might benefit significantly from the implementation of dynamic assessment. The partial eta squared value shows that the treatment accounted for 44 percent of the variance between the EG2 and CG students’ reading comprehension posttest scores.

5. Discussion

The results are discussed in relation to the research questions in order to determine whether to accept or reject the null hypotheses. The two research questions and their respective answers are presented below.

To answer the first research question, the researcher compared the three groups of participants on the pre- and posttests. The pretest and posttest were compared to see whether there was a difference in the participants’ listening comprehension development when dynamic assessment was used compared to when it was not. Data were analyzed using SPSS, version 22. According to the findings, an independent-samples t-test indicated no statistically significant difference between the groups on the pretest; consequently, the three groups were comparable at the outset.

Furthermore, the findings obtained from the descriptive statistics of the posttest revealed that the mean score of the dynamic experimental group increased, but the mean score of the control group decreased on the posttest, as compared to the pretest. As a result, there was a statistically significant difference between the listening and control groups on the posttest, and the dynamic experimental groups performed much better than the control group.

This progression may be explained by the explanations provided to the dynamic experimental groups throughout the course’s assignments. In this respect, however, the control group received implicit training, mainly through examples. By providing comprehensive directions, the teacher identifies the learning objectives for the students and offers extensive explanations of the tasks that the students will be required to complete to meet those objectives. We refer to implicit instruction as training in which the teacher provides examples and may explain the topic verbally, similar to what is currently employed in traditional classrooms. Teachers instruct the students on the subject and then step back to enable them to draw their own conclusions, develop their own conceptual frameworks, and internalize the knowledge in the way that makes the most sense to them [55].

Sharafi and Abbasnasab Sardareh [54], who studied the influence of DA on elementary EFL students’ grammar learning, showed that DA was effective in improving such learners’ grammar learning. After examining the data, they concluded that dynamic assessment has a considerable and significant effect on elementary EFL learners’ learning of prepositions of time and place.

In terms of the application of dynamic assessment, the findings of this research corroborate the results of Moradian et al. [49], who investigated the influence of dynamic assessment on EFL learners’ picture-cued responses. The purpose of that research was to determine whether or not dynamic assessment (DA) may help intermediate EFL learners improve their picture-cued writing tasks. The study was carried out with 35 Iranian EFL learners (male and female) randomly recruited from a population pool of 70 EFL learners registered in two language institutions in Esfahan, Iran. The information was gathered via a pretest, a posttest, and a questionnaire. A t-test was used to analyze the test scores, and the results indicated that the experimental group performed statistically better on the test. Furthermore, virtually all participants held favorable views about writing, and their confidence in their ability to write in English grew as a result of their participation in the study.

A comparison of the mean scores reveals a significant difference in reading comprehension between the two groups; the higher mean score of the dynamic reading group relative to the control group answers the second research question. The findings revealed that the reading dynamic group made significant gains in reading comprehension as a result of adopting dynamic assessment.

The positive washback of DA might be one of the most significant reasons why learners in the dynamic experimental groups outperformed those in the control group. When it came to participation in the classroom, students in group DA were eager to be involved, and even the weaker students were not afraid to express themselves openly. Group DA participants also seemed motivated to be on time for class and to complete assignments on time. These factors might aid the development of their general language abilities.

According to the posttest results, the control group performed poorly compared to the dynamic experimental groups. This may be attributed primarily to the participants’ habits of learning, which cause them to be reliant on their instructors. As a result, learners think that they need the instructors’ guidance and that it is the teacher’s responsibility to resolve all of their difficulties. Since the participants in this research were in intact classes, they shared certain characteristics. First, they were oriented toward the teacher; in teacher-centered classrooms, information is imparted only through direct instruction, and learning is not treated as a cognitive act in such courses [52].

The other characteristic of these students was a lack of self-determination or autonomy. The concept of autonomy refers to the notion of learning on one’s own. Autonomous learners depend on their own abilities to learn; they are concerned with appearance as much as with substance. Moreover, language awareness is an essential consideration for independent learners, and it should be taken into account [17]. The third characteristic of these intermediate learners was field dependence. Field-dependent learners tend to place greater emphasis on the instructor; they perceive the field as a whole rather than as a collection of discrete objects. It is beneficial for them to engage in activities that link various aspects of a subject while incorporating background information. The last characteristic of these learners was that they were used to instruction in a conventional setting. Traditional language instruction has been divided into three phases: presentation, practice, and production [1]. Underlining assignments and explaining them explicitly constituted the presentation phase for the dynamic experimental groups, but these activities were not present in the control group, according to the instructions given throughout the experiment. These issues contributed to the control group’s poorer performance on the posttest.

The findings of this study are consistent with Moradian et al. [50], who examined the effect of group dynamic assessment (G-DA) on learning passive structures through mediation offered during the teacher’s G-DA interactions with a group of L2 learners. They included two groups of L2 learners ranging in age from 16 to 18 years. Both groups were taught for six sessions. The material used in the pre- and posttest sessions was a 35-item teacher-made test of passive structures. The study revealed that learning of passive structures through concurrent and cumulative G-DA increased significantly, with no significant difference between the two approaches. Moreover, the findings from the interviews showed that the two approaches were practical and played a crucial role in learning the passive structures.

The results of this study on the impact of dynamic assessment on the reading comprehension ability of L2 learners are consistent with the findings of previous research, such as [45] and Poehner [27, 40, 42–44]. In contrast, the findings concerning the equivalent benefit of dynamic assessment for learners of varying proficiency levels are novel, since no previous research of this kind has been published in the relevant literature. Because this study emphasized appropriate interaction and mediation by the assessor with each learner in his or her ZPD, and because dynamic assessment aims at identifying and removing limitations and hindering factors in the advancement process as far as possible within that ZPD, it appears reasonable that the reading ability of the EFL learners in the experimental group improved significantly.

However, the most important thing to remember is that the purpose of dynamic assessment goes far beyond simple replication. In dynamic assessment, mediations should be standard, meaningful, and purposeful, and they should be aimed solely at developing the test-takers’ learning. As Poehner [27] points out, mediation that merely supports test-takers in completing a specific task, without demonstrating the proper approaches and teaching them the main points of that task, cannot be regarded as true mediation. Consequently, rather than simply supporting students in doing a task, the purpose of dynamic assessment is for students to achieve task performance with the assistance of a mediator and then transfer the gained capacity to other comparable tasks in autonomous performance, thus achieving autonomy. The absence of this level of autonomy would suggest that the mediation did not result in development and was thus ineffective.

Following the dynamic assessment sessions and suitable mediation, the findings of this research indicate that the EFL learners could achieve this level of progress in reading ability because they were able to take advantage of the mediations in their later independent performance on an immediate posttest after completing the study. In several experimental studies, interventions were beneficial on the posttest, but the benefit was only temporary: when the delayed effect of these interventions was investigated, it was discovered that the benefit had faded away, and the procedures could not be considered beneficial over the long term. The findings of this research, however, revealed that the favorable and beneficial effects of dynamic assessment persisted over time and were not restricted to a short period after the intervention. The reason for this is, once again, the development-oriented aspect of dynamic assessment. The goal of dynamic assessment is to bring about long-term changes in the behavior of learners, which ultimately contribute to their growth. In other words, as predicted, a change or growth that is deeply embedded does not fade with time.

Another aspect that distinguishes dynamic assessment is its ability to provide a personalized picture of learning. Nondynamic assessment attempts to compare the performance or learning of each learner with that of other learners. In contrast, dynamic assessment compares each student’s present performance with his or her previous performance and makes inferences about advancement based on this comparison. The goal is to advance each learner further and raise each individual’s performance above their current level of competence, as described above. Dynamic assessment can be thought of as a one-way road taken by all learners, regardless of their current level of performance: each person benefits from the procedure and moves forward as far as the ZPD allows. Dynamic assessment is characterized by its monistic approach to teaching and testing. It may be inferred that no one is left unaffected if proper mediation tailored to the individual’s ZPD is offered, regardless of their degree of proficiency. Each student’s ability, irrespective of their degree of competence, grows as a result of dynamic assessment. Another intriguing avenue for future dynamic assessment research is the comparison of low achievers and high achievers using dynamic assessment.

In keeping with its monistic approach to teaching and testing, dynamic assessment measures the capacities of the learners while also providing them with opportunities for learning and growth. This, in turn, has inevitable positive consequences for both instructors and students. First and foremost, it assists students in making use of the mediation offered by the assessor and in becoming more independent when doing similar activities in the future. Second, it has a beneficial washback effect because it aligns the purposes and processes of testing and teaching and makes them intertwined with one another. Both learner autonomy and the washback effect are of significant relevance and are being investigated in English as a foreign language. Third, the use of dynamic assessment allows learners to be mediated, which reduces overall stress. Test score pollution is a substantial problem in certain learning environments, such as Iran, where test results significantly influence learners’ anxiety during exam periods.

Therefore, test results that are not influenced by stress factors might be more accurate when making educational selections. Consequently, it can be argued that dynamic assessment yields a more precise picture of learners’ abilities, which is the primary and most important goal of assessment in the first place. In the end, dynamic assessment of learners’ abilities can help to avoid misinterpretations and misrepresentations of those abilities because, in contrast to traditional nondynamic assessment, it identifies and presents the students’ learning potential, illuminating both their current status and their hidden potential in the zone of proximal development after removing impeding factors.

Taken as a whole, the findings of this research indicate that dynamic assessment of reading comprehension leads to increased reading comprehension ability, and this advancement is not short-term; it can persist over time because students take advantage of mediation within their ZPDs when assessed dynamically.

6. Conclusion and Implications

Teaching and testing are inseparable, according to DA’s central belief system. This research showed that DA had a considerable positive impact on students’ reading and listening skills. DA pushes learners and boosts their automaticity by providing them with just what they need to improve their work.

This research focused on Ethiopian EFL students’ reading and listening skills in order to see how a DA intervention affected those skills. The study’s results may be limited, but they show that EFL students’ reading and listening skills can be effectively improved through the use of a DA strategy. Moreover, the findings show that DA approaches to reading and listening may help the instructor detect and correct the mistakes that students make in their reading and listening comprehension and, as a consequence, help them improve their reading and listening abilities. Besides being used as part of classroom instruction, DA can also provide valuable data about individual pupils.

DA does not only summarize a learner’s performance; Sternberg and Grigorenko [24] propose that it also yields recommendations for students. Minakova [48] believes that students who do well on the pretest and show a high learning capacity throughout the DA program should be assigned more challenging materials, while additional learning support and practice should be offered to students with limited learning potential. Teachers may use DA to help them figure out how to tailor their lessons to meet the needs of various students. To sum up, Minakova points out that DA offers a model for integrating formative and summative assessment objectives into the learning process.

The findings of this research have real-world applications for curriculum designers and content creators. DA-related language teaching resources are currently hard to find. A dynamic curriculum or set of materials should take these points into account in order to produce materials that can be evaluated dynamically, provide students with appropriate and leveled feedback during the evaluation process, and interactively engage students and instructors throughout the learning and evaluation process.

English as a foreign language (EFL) students will benefit the most from DA, since their reading ability can be assessed more accurately, resulting in higher levels of reading performance [46]. If learners’ awareness of the advantages of the DA approach is raised, their motivation to participate in conferencing reading sessions may increase, contributing to enhanced reading competency.

The findings of this research may also be of use to language instructors. Language teachers can use this study to justify group work activities in class aimed at enhancing students’ skills, focusing on meeting task goals and evaluating how well those goals are accomplished. The study also assists instructors in incorporating DA into their classrooms, identifying the limitations of their students, and providing mediation when and where it is required. Furthermore, by providing instructors with practical guidance for implementing DA in their lectures, this research gives them greater confidence in doing so themselves [47].

This research may be helpful to curriculum developers who need to build extra flexibility into their courses. In DA, students are considered the most crucial component of the learning environment. If the curriculum does not meet the requirements of the students, the instructor may be called upon to perform the duties of the curriculum developer. As a result, there should be sufficient flexibility in the curricula to meet the demands of the learners. Beaumont et al. [40] suggest that this research may inspire material designers to create resources for language classes or workshops that are more pertinent to students’ needs. It may also allow teachers to diagnose challenging issues and make language lessons relevant to participants’ requirements. Aside from that, the findings of this research may be of immense help to ESP instructors who are interested in teaching English for specific purposes, such as enhancing students’ language skills for a particular goal. Because the requirements of the learners are the primary focus of attention in EFL classrooms, the connection between the students and the instructors may pave the way for the needs of the students to be met. If this is the case, it may be necessary for the teachers to include the students in the evaluation process via conferencing in order to make themselves and their students more attentive to their peers’ interests and needs and, as a result, to derive greater advantage from the courses.

This research, like all others, had its limits and could not cover all aspects of the subject matter. The following is a list of them:
(1) The number of participants in the research groups was small. The absence of some participants was beyond the researcher’s control, although this issue did not affect the final results of the study.
(2) The length of time allotted for the teaching was quite restricted.
(3) The participants of this study were confined to intact classes, so care must be exercised in generalizing the results beyond their proper limits.
(4) The results of this study may be affected by classroom situations and social factors, which were not taken into account in the present study.
(5) The role of variables such as age, motivation, and anxiety was not included in this study.

The data for this research, which investigated the influence of dynamic assessment on reading comprehension and listening abilities, were obtained via tests. Subsequent studies might examine the impact of DA on other language abilities, such as writing and speaking, and data could be collected in a variety of ways, including interviews, voice recordings, and classroom observation. Because of the insufficient number of respondents, this research was conducted on a limited participant sample; any replication should make an effort to employ a larger sample, since the small sample may have compromised the dependability of the findings. This research was carried out on Ethiopian students for whom English was a foreign language; however, it is conceivable to conduct similar research on students in other nations who are learning English as a second language (ESL).

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.