Abstract

The primary purpose of this paper is to review research related to the use of an assessment technique called Self-Regulated Learning (SRL) microanalysis. This structured interview is grounded in social-cognitive theory and research and thus seeks to evaluate students' regulatory processes as they engage in well-defined academic or nonacademic tasks and activities. We illustrate the essential features of this contextualized assessment approach and detail a simple five-step process that researchers can use to apply it to their work. Example questions and administration procedures for five key self-regulation subprocesses (i.e., goal-setting, strategic planning, monitoring, self-evaluation, and attributions) are highlighted, with particular emphasis placed on causal attributions. The psychometric properties of SRL microanalytic assessment protocols and potential areas of future research are also presented.

1. Introduction

The extent to which individuals control, monitor, and regulate their cognition, motivation, and behavior has been of much interest to self-regulation researchers over the past several decades [1–6]. Self-regulation, also known as self-regulated learning when applied to academic or learning contexts, is typically conceptualized as a multidimensional process whereby individuals attempt to exert control over their cognition, motivation, behaviors, and environments in order to optimize learning and performance outcomes [1, 6, 7]. Although there is some disagreement in the literature regarding whether self-regulation is a trait or a contextualized skill, research has shown that self-regulation can and often does vary across contexts as well as across tasks within specific contexts [8–11]. In addition to conceptual or theoretical reasons, the distinction between self-regulation as a stable entity versus a changeable, teachable skill has important implications for intervention and assessment practices. Over the past few decades, researchers have developed several distinct, albeit related, self-regulation interventions tailored to particular academic skill domains, such as writing [12], mathematics [13, 14], science [15], and reading [16], for students across the developmental spectrum. Despite targeting distinct academic skills, these intervention programs emphasize the importance of teaching self-regulation in context as opposed to developing a broad set of skills to be applied to any domain or learning environment.

This trend towards more ecologically sensitive service delivery practices has also been realized within the assessment literature across many fields [17–21]. In terms of self-regulation assessment, many researchers have developed alternative methodologies capable of capturing self-regulation processes as they naturally unfold during specific learning or performance tasks and activities. Whether these measures involve the use of hypermedia and think-aloud protocols [22, 23], structured personal diaries [24], behavioral traces on work products [25], or direct observations of regulatory behaviors in particular contexts [26, 27], they are similar in that they target self-regulation as a temporal entity with a clear beginning, middle, and end [28]. Winne and Perry [28] collectively labeled these approaches event measures because they ultimately target self-regulation as a contextualized event.

Despite these assessment advances over the past decade, recent evidence shows that self-report scales continue to be the most widely used measure of self-regulation among both researchers and school-based practitioners [7, 29], with Dinsmore et al. [7] concluding, “Sadly, in our survey of the research, we found that there remained a strong reliance on self-report and Likert-type instruments and insufficient corroboration or collaboration of what individuals report they are thinking or doing with actual traces of such thoughts and behaviors” (pages 405 and 406). Many researchers have questioned the reliance on self-report measures because they typically elicit retrospective accounts of student behaviors or perceptions and are often decontextualized and not linked to particular tasks within a given setting. Given these factors, as well as the fact that specific items are aggregated for interpretation purposes, many have questioned whether self-report measures represent a valid approach for assessing self-regulation as a contextualized, dynamic process [28].

The primary purpose of this paper is to illustrate the theoretical foundation, essential features, and applications of an assessment approach, Self-Regulated Learning (SRL) microanalysis, which encompasses elements of both self-report and event measures. We begin by delineating a definition of self-regulation and a theoretical model that has served as the foundation for SRL microanalysis. Before detailing the assumptions and essential features of SRL microanalysis, we briefly review different types of self-report measures. Our primary objective, however, is to provide the reader with an extensive overview and summary of specific microanalytic questions used to target several self-regulation subprocesses, including goal-setting, strategic planning, monitoring, self-evaluation, and attributions. We devote particular attention to microanalytic attribution questions, examining how they are distinct from other types of attribution measures. The final section of our paper details several educational implications and areas for future research.

2. Definition and Theoretical Foundation of Self-Regulation

SRL microanalysis is grounded in social-cognitive theory and research [1, 6]. According to Bandura [1], human functioning is the result of reciprocal interactions among personal (cognitive/affective), behavioral, and environmental factors. Thus, while students with strong perceptions of efficacy for obtaining academic assistance from others are more likely to seek out help from teachers or parents when needed, Bandura also recognized that social sources can reciprocally enhance or adversely impact how students perceive their help-seeking capabilities over time. In addition to this basic premise, social-cognitive theory espouses several other assumptions that serve as the foundation of SRL microanalytic methodology.

Social-cognitive theorists indicate that human regulatory thought and actions are contextualized and thus are largely impacted by environmental characteristics and demands [9, 30, 31]. Researchers have demonstrated that college students’ self-reported use of learning and regulatory strategies varied across three academic tasks: reading for learning, completing a brief essay, and studying for an exam [10]. These findings suggest that specific task or contextual demands impact students’ judgments and perceptions about how to best approach and learn academic material. Other research has shown that the importance of self-regulation processes may vary depending on the particular contexts in which students learn. For example, Cleary and Chen [9] demonstrated that self-regulation and motivation variables reliably differentiated high achievers and low achievers in academically rigorous or intensive math classrooms but did not consistently differentiate achievement groups in environments that did not require high levels of self-directedness and persistence. In line with this contextualist perspective, SRL microanalytic protocols are developed and customized for specific tasks or activities within particular contexts.

Another important assumption of social-cognitive theory is that pure intention and willpower are not sufficient for self-directing and managing one’s behaviors. According to Bandura, humans have the capacity to proactively control and manage these triadic influences through the use of various regulatory subprocesses, such as self-observation, self-judgments, and self-reactions [1]. Zimmerman [6] later proposed a definition of self-regulation that expanded Bandura’s original formulation: self-generated thoughts, feelings, and behaviors that are planned and cyclically adapted based on performance feedback in order to attain self-set goals. Within this definition are the basic components of a process-oriented perspective of self-regulation. For example, the key words “planned” and “self-set goals” pertain to forethought processes that precede action. The inclusion of “self-generated” is also noteworthy, as this term typically pertains to the “during” aspect of a task, that is, what a person does during learning or performance. Finally, “cyclically adapted” suggests that self-regulation involves a reflection component following learning.

As can be seen in Figure 1, Zimmerman [6] depicts self-regulation as a three-phase process of thought and action. From this perspective, self-regulation occurs in three sequential phases: forethought (i.e., processes that precede efforts to learn or perform), performance control (i.e., processes occurring during learning efforts), and self-reflection (i.e., processes occurring after learning or performance) [6]. These phases are hypothesized to be interdependent, so that changes in forethought processes impact performance control, which, in turn, influences self-reflection phase processes. In general, a self-regulatory cycle is completed when self-reflection processes influence forethought beliefs and behaviors prior to subsequent performance or learning.

This three-phase model is the primary theoretical framework guiding the development of SRL microanalytic methodology, in part, because it possesses several key qualities. First, the model provides explicit definitions of many regulatory subprocesses subsumed within each of the three general phases. These definitions serve as the basis for developing and wording context-specific microanalytic questions and for generating categories for the coding rubrics used as part of the scoring process for open-ended questions.

Another desirable quality of the cyclical phase model is that it can be applied and extended to virtually any task or activity to understand human regulation. Researchers have applied this model to studying human regulation across academic tasks [33, 34], motoric tasks [35, 36], chronic health [37], and music [38]. Consistent with a contextualist viewpoint, it is possible to tailor or customize the three phases to many different types of learning activities, such as solving a math problem, studying for exams, or writing an essay. This is possible because the temporal sequencing of the three cyclical phases is naturally linked to the temporal dimensions of most tasks. That is, forethought phase processes occur prior to engaging in the task; performance phase processes occur during the task; self-reflection phase processes occur upon task completion or following a clearly defined task outcome. By linking the cyclical model and the task in this fashion, one is able to determine the precise sequencing and administration of SRL microanalytic questions.
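
To make this temporal linkage concrete, the sketch below (in Python; the task and subprocess names are illustrative rather than drawn from any published protocol) shows how the three cyclical phases can be mapped onto the before, during, and after dimensions of a task to determine when a question should be administered:

    # Minimal sketch of the phase-to-task-dimension mapping described above.
    # The subprocess lists are illustrative, not an exhaustive or prescribed set.
    CYCLICAL_PHASES = {
        "forethought": {"dimension": "before",
                        "subprocesses": ["goal-setting", "strategic planning", "self-efficacy"]},
        "performance": {"dimension": "during",
                        "subprocesses": ["metacognitive monitoring", "strategic control"]},
        "self-reflection": {"dimension": "after",
                            "subprocesses": ["self-evaluation", "attributions", "adaptive inferences"]},
    }

    def administration_point(subprocess):
        """Return the task dimension at which a question targeting `subprocess` is asked."""
        for phase, info in CYCLICAL_PHASES.items():
            if subprocess in info["subprocesses"]:
                return f"administer {info['dimension']} the task ({phase} phase)"
        raise ValueError(f"unknown subprocess: {subprocess}")

    print(administration_point("goal-setting"))  # administer before the task (forethought phase)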

3. Types of Self-Regulation Assessment

A variety of assessment approaches have been used to measure and examine self-regulation, including self-report questionnaires, interviews, think-aloud protocols, direct observations, and behavioral traces [7, 21, 28]. Although self-report scales continue to be the most frequently used measure among both researchers and practitioners, there has been some debate in the literature regarding whether self-report measures are capable of measuring self-regulation in a valid way [39–42]. However, before one can support or refute the use of these measures, one first needs to clarify what is meant by the term self-report. In general, a self-report measure can be described as any assessment tool that prompts an individual to respond to one or more questions or statements that convey information about oneself. If one accepts this definition, then many self-regulation measures described in the literature, such as self-report questionnaires or surveys, interviews, and structured diaries, could be grouped into this general category because in all of these situations the respondents serve as the source of the information.

It is important to emphasize that not all self-report scales are inherently biased or less effective than objective forms of measurement simply because individuals are asked to provide responses about personal processes, beliefs, and actions. From our perspective, the key issue is whether a measure can reliably and validly capture self-regulation as a contextualized process. In the following sections, we review several different types of self-report measures, highlighting key distinctions and approaches that are more aligned with a process account of self-regulation.

3.1. Self-Report Questionnaires

The general term self-report can be divided into various subcategories, most notably self-report surveys/questionnaires and interviews. Each of these two subcategories includes a variety of approaches. Self-report questionnaires, which include the Motivated Strategies for Learning Questionnaire (MSLQ) [43], the Learning and Study Strategies Inventory (LASSI) [44], and countless others reported in the literature, tend to be decontextualized or non-task-specific forms of assessment that rely on students’ retrospective responses to a series of items targeting different dimensions of self-regulation. Winne and Perry [28] argued that these types of scales are problematic due to inherent limitations involving response biases (e.g., social desirability), cognitive distortions, and memory difficulties. Of greatest concern, however, is that these scales rely on composite scores (i.e., aggregation of individual items) for interpretation, rendering the construct of self-regulation a broad and fixed entity.

Although most self-report surveys include multiple statements or items and require respondents to use a Likert scale to rate their perceptions about these items, a few self-report questionnaires are highly context- and task-specific and thus avoid some of the pitfalls associated with most questionnaires. For example, Bandura [45] provided explicit guidelines for developing self-efficacy measures. In general, these scales are designed to evaluate students’ perceptions of personal competency in relation to highly specific behaviors or skills in particular settings at a designated level of performance. These types of scales also differ from most self-report questionnaires in that they target student perceptions about current capabilities to perform specific behaviors at a particular moment in time. Thus, self-efficacy measures do not require individuals to retrospectively reflect on how well they could do or have done something but rather ask them to report these judgments of competence immediately preceding their attempt to perform the skill.

Still further, Boekaerts and colleagues developed the Online Motivation Questionnaire to examine students’ situation-specific appraisals about performing a task (e.g., mood, self-efficacy, success expectancy, task attraction) and their performance attributions following the activity [2, 8, 39]. Although student appraisals are examined using a traditional Likert format, this self-report questionnaire was designed to link directly to the temporal dimensions of a specific task by being administered as students engage in some activity. That is, items pertaining to student judgments and interpretations about the task are administered prior to the task, while attribution questions are administered immediately following the task [8]. As will be highlighted in the following section, SRL microanalysis adheres to this principle of linking self-regulation measures to the before, during, and after dimensions of the task; however, SRL microanalytic protocols use a highly distinct assessment structure and format when compared to most self-report measures.

3.2. Structured Interviews

Another broad category of self-report includes interviews, which can vary widely in scope and structure. For example, while unstructured interviews typically offer minimal guidance or structure for conducting an interview, semistructured interviews instill greater standardization by using pre-established questions and criteria. With the semistructured approach, an interviewer has the flexibility to modify the wording of questions as well as the order in which they are asked [46]. To avoid the reliability issues that may occur with these two interview approaches, researchers tend to emphasize structured interviews, which utilize a fixed set of questions and subscribe to a standardized administration format [47, 48].

In addition to format and standardization, interviews can also be distinguished based on whether the questions target past events or behaviors, current behaviors, or prospective behaviors in future or hypothetical situations or scenarios. Winne and Perry [28] indicate that this temporal distinction is important in considering whether the interview is an aptitude or an event protocol. Zimmerman and Martinez-Pons [47, 48] developed a structured interview called the Self-Regulated Learning Interview Scale (SRLIS). As part of this interview, students are presented with six distinct academic situations, such as preparing for a test at home, writing an essay, or completing math assignments. Students’ responses to these hypothetical scenarios are coded into distinct self-regulation strategy categories, such as rehearsal, seeking social information, or transformation strategies. Students are also prompted to use a 4-point Likert scale to rate the frequency with which they use the strategies. The SRLIS is quite distinct from self-report surveys because it uses open-ended and Likert response formats (i.e., to rate frequency of strategy use) and utilizes questions that are both context- and task-specific. That is, students are prompted to describe the behaviors or strategies they exhibit on specific tasks or assignments within a given domain (e.g., mathematics) rather than their general use of strategies within that domain. In addition, the SRLIS prompts students to make judgments about the prospective behaviors they might display in a given situation, rather than to retrospectively report how typical a set of prescribed behaviors is of them. Although the SRLIS offers several advantages over the traditional self-report survey, it is not considered an event form of measurement because it does not assess actual behavior or cognition that occurs during a particular task or activity [21].

SRL microanalytic protocols, which represent another type of structured interview, are similar to the SRLIS in that they target self-regulation processes in relation to specific academic situations and tasks. However, in contrast to the SRLIS and self-report surveys, SRL microanalytic protocols are unique in that they target students’ regulatory beliefs and processes prior to, during, and after engagement in a well-defined task or activity. Thus, this approach does not require retrospective or prospective reports but rather evaluates regulatory processes as they occur across authentic tasks.

4. Overview of SRL Microanalytic Assessment

Although the precise definition and characteristics of microanalysis vary widely, we conceptualize microanalytic assessment as an umbrella term referring to highly specific or fine-grained forms of measurement targeting behaviors, cognition, or affective processes as they occur in real time across authentic contexts [17]. In general, this approach has been used by researchers across diverse domains, such as human development and psychology, education, athletics, testing, and medicine [18, 49–53]. For example, within developmental and counseling domains, researchers have used behavioral forms of microanalysis to study mother-infant attachment [51, 53], interactions among multiple family subsystems or triads [18], and interactions between clients and therapists [54, 55]. These researchers have argued that assessing authentic moment-to-moment behavioral interactions is important because doing so minimizes the response biases and errors associated with retrospective self-reports about behavior or interactions. This belief is shared by many self-regulation researchers.

Within the field of self-regulation, many event forms of measurement would also fall under this general definition of microanalysis [22, 24–26]. As one example, Perry [26] has described procedures for directly observing students’ regulatory behaviors as they occur in classroom contexts. These assessment procedures can be considered microanalytic because they target highly specific regulatory behaviors as they naturally occur in a particular context. Before turning our attention to SRL microanalytic procedures and methodology, it is important to highlight that we are not arguing that SRL microanalysis is a more effective assessment tool than other approaches, but rather that it has the potential to complement or supplement the existing set of self-regulation assessment methods.

5. Essential Features and Illustration of the SRL Microanalytic Process

SRL microanalysis is a structured interview involving a strategic, coordinated plan for administering context-specific questions targeting multiple cyclical phase subprocesses as students engage in authentic activities. Over the past decade, a variety of studies have utilized this approach for assessing individuals’ forethought, performance, and self-reflection phase processes across an array of tasks, such as free-throw shooting [35, 50], volleyball serving [34], venepuncture [56], reading or studying [33], and writing [57]. SRL microanalysis differs from many other self-report and event measures because it systematically targets individuals’ cognitive, motivational, and metacognitive processes as they engage in learning or performance activities. In this section, we underscore the basic features and procedures of SRL microanalysis and provide examples of how this assessment methodology has been implemented by researchers. Cleary [17] identified several core features of microanalysis: (a) an individualized, structured interview protocol, (b) selection of target SRL processes outlined in Zimmerman’s model [6], (c) development of task-specific questions targeting self-regulation subprocesses, (d) administration of questions linking the three-phase cyclical model to the temporal task dimensions, and (e) verbatim recording and coding of participants’ responses. However, in the interest of detailing a specific stepwise process that would facilitate the use of this procedure by researchers and practitioners, we reorganize and relabel these core features into five basic steps.

Step 1 (select a well-defined task). Prior to developing microanalytic questions or protocols, one must first identify a specific target task that has a clear beginning, middle, and end. As previously mentioned, microanalytic protocols are built around a task or activity that is of interest to educators, practitioners, coaches, or clinicians. In prior microanalytic research, the majority of tasks involved brief practice sessions or performance activities, such as basketball free-throw shooting, volleyball serving, or studying and reading. Although the nature of the target tasks used in microanalytic research has been quite diverse, they were comparable because they all included a well-defined preparatory phase (before dimension), an actual learning or performance component (during dimension), and a predefined point at which the task was considered completed (after dimension). Selecting a task with clear temporal dimensions is critical because SRL microanalytic methodology entails administering phase-specific regulatory questions (i.e., forethought, performance, and reflection) at different points during task execution.
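
As a concrete illustration of Step 1, the following sketch (in Python; the field values are paraphrased from the free-throw example above and are not taken from a published protocol) represents a target task with its three temporal dimensions made explicit:

    from dataclasses import dataclass

    @dataclass
    class MicroanalyticTask:
        """A Step 1 task definition with a clear beginning, middle, and end."""
        name: str
        before: str  # well-defined preparatory phase
        during: str  # actual learning or performance component
        after: str   # predefined point at which the task is considered completed

    free_throw_task = MicroanalyticTask(
        name="basketball free-throw practice",
        before="participant steps to the line to begin a 10-minute practice session",
        during="participant shoots free throws",
        after="the session ends or a predefined outcome (e.g., two consecutive misses) occurs",
    )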

Step 2 (identify target SRL processes). Although it is possible to target a single SRL process in a microanalytic protocol, researchers have typically evaluated several self-regulatory subprocesses and/or a set of motivation beliefs within each of the cyclical phases. To date, only three microanalytic studies have comprehensively examined processes within all three phases of the cyclical loop, with two additional studies examining both forethought and reflection (see Table 1). Given that self-regulation is typically conceptualized as a multidimensional process involving the dynamic interaction among several processes, researchers and practitioners can generate more valid and meaningful interpretations of student regulation if processes within all three cyclical phases are targeted with a microanalytic protocol. The work by Kitsantas and Zimmerman [34] was the first study to microanalytically examine multiple processes across all three phases of the cyclical feedback loop during a specific task, serving in volleyball. In this ex post facto study, the authors targeted 12 distinct regulatory or motivation processes which, when converted to an overall composite score, accounted for 90% of the variance in serving skill. It is also important to note that targeting subprocesses within each of the three cyclical phases is ideal because it enables one to identify the sophistication of students’ strategic thinking across all parts of the task and to better understand how distinct regulatory processes interact or influence each other [61].
However, all processes within the three-phase model are not always assessed. Researchers or practitioners may elect to examine a specific aspect of the cyclical model [15, 34], or the nature of the task may prevent evaluation of all three phases of the loop. In terms of the latter point, Cleary et al. [60] examined the relationship between the achievement and regulatory processes of students as they engaged in a brief reflection activity about a test grade earned in one of their college courses. Due to the narrow nature of the task, it was not possible to examine students’ regulatory processes during test preparation or while completing the exam. However, the authors were primarily interested in examining how students made judgments about the quality of their performance (self-evaluation), the reasons for their performance (attributions), and what they perceived they needed to do to improve future test performance (adaptive inferences).
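
The contrast between comprehensive and narrower Step 2 designs can be summarized in a simple structure; the sketch below is a hypothetical rendering of the two designs discussed above, not the actual study instruments:

    # Hypothetical Step 2 selections: a comprehensive protocol spanning all three
    # phases versus a reflection-only protocol (cf. the Cleary et al. [60] design).
    comprehensive_protocol = {
        "forethought": ["goal-setting", "strategic planning", "self-efficacy"],
        "performance": ["metacognitive monitoring"],
        "self-reflection": ["self-evaluation", "attributions", "adaptive inferences"],
    }
    reflection_only_protocol = {
        "self-reflection": ["self-evaluation", "attributions", "adaptive inferences"],
    }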

Step 3 (develop SRL microanalytic questions). After the desired task is defined and the target regulatory processes are identified, it is necessary either to customize preexisting microanalytic questions to the target task or to develop new task-specific questions. In general, all microanalytic questions should be brief, directly linked with the target task and context, and should measure a specific self-regulatory process outlined in the three-phase cyclical model (e.g., goal-setting, attribution). In reviewing the microanalytic literature, researchers have used operational definitions of the phase-specific regulatory processes and beliefs to guide the wording of the questions [17]. For example, given that an attribution is defined as a person’s perceptions about the reason(s) for a particular event or outcome, a common microanalytic attribution question is, “What is the main reason why you…?” or “Why do you think you…?” In a subsequent section of this paper, we review a variety of examples of microanalytic questions reported in the literature over the past decade.
The questions used in a microanalytic protocol can be either open- or closed-ended. Closed-ended questions utilize Likert-scale formats (e.g., self-efficacy, task interest, and satisfaction) or a forced-choice structure (e.g., self-evaluative standards). However, most self-regulatory processes (e.g., goal-setting, strategic planning, and attributions) are measured using free-response or open-ended questions. Given the qualitative nature of the responses to these types of questions, researchers have developed contextualized coding schemes to categorize such responses (see Tables 2–6 for example coding schemes).
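
A minimal sketch of how Step 3 questions might be represented, assuming a simple record per question; the open-ended and forced-choice wordings are adapted from examples cited in this paper, and the structure itself is illustrative:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class MicroanalyticQuestion:
        subprocess: str
        phase: str            # "forethought", "performance", or "self-reflection"
        wording: str
        response_format: str  # "open-ended", "likert", or "forced-choice"
        options: Optional[List[str]] = None  # used only for forced-choice items

    questions = [
        MicroanalyticQuestion("goal-setting", "forethought",
                              "Do you have a goal in mind before drawing this blood sample?",
                              "open-ended"),
        MicroanalyticQuestion("self-evaluative standards", "self-reflection",
                              "What did you use to judge your degree of satisfaction?",
                              "forced-choice",
                              options=["percentage of shots made", "use of correct strategy",
                                       "improvement during practice", "performance of others",
                                       "other factors", "do not know"]),
    ]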

Step 4 (link cyclical phase processes to task dimensions). As mentioned previously, a unique component of SRL microanalysis is the close connection between the temporal dimensions of the target task and the phases of the cyclical loop (see Figure 2). Thus, forethought phase questions, such as goal-setting and strategic planning, are administered prior to an individual engaging in a particular task. The goal of these questions is to gather information about how individuals approach or prepare to engage in a task. In other words, are students thinking about the key processes or strategies related to the task, or are they focused on other, less critical factors? Microanalytic questions pertaining to strategic control and metacognitive monitoring or self-monitoring are administered during task performance. The key themes addressed with performance phase microanalytic questions are whether students strategically engage in and self-direct their learning as well as whether they keep track of or monitor their rate of learning and their progress in successfully completing the task.
Finally, self-reflection phase questions are linked to the after dimension of the task. Reflection questions in SRL microanalytic protocols address the issue of how students judge their successes and failures, particularly in terms of the perceived causes of these outcomes (attributions) as well as their reactions to performance (adaptive or defensive inferences). These latter two reflection processes are particularly important in self-regulation models because they are highly predictive of student motivation and persistence in the face of failure or obstacles [62–64]. It should be noted, however, that the precise point at which one completes a task may not always be clear. SRL microanalytic researchers have therefore purposefully defined task completion in terms of a specific performance indicator, such as an exam grade or a successful free throw, or the end of a predefined practice session. Without access to a clear indicator of the quality of performance, such as success or failure or other specific performance benchmarks, students may not be able to respond effectively to self-reflection phase questions.
Cleary and Zimmerman [50] conducted one of the first empirical studies to microanalytically target multiphase regulatory processes. In this study, the authors used a 10-minute free-throw shooting session as the task or event around which to embed forethought and self-reflection phase microanalytic questions. The forethought phase questions targeted self-efficacy, goal-setting, and strategic planning and were administered immediately preceding students’ attempts to practice free throws. As indicated previously, self-reflection phase questions need to be administered after the task is completed. Accordingly, in this study, the participants were administered a satisfaction question after the 10-minute practice session [50]. Although the authors could also have administered attribution and adaptive inference questions after the practice session was completed, they were more interested in examining the players’ self-judgments and reactions relative to their performance in a specific failure situation (i.e., two consecutive misses during the practice session). Thus, after the participants took their 10th practice shot, the examiner waited until they missed two shots in a row. When this occurred, the examiner asked the participants about the reasons why they missed those shots (i.e., attributions) along with the conclusions they drew about what they needed to do to improve their performance (i.e., adaptive inferences). This allowed the authors to draw conclusions about how the players reacted to a particular performance outcome, rather than more global judgments about their success or struggles during the practice session.
SRL microanalysis has also recently been extended to a clinical context [56]. In this implementation pilot study, Cleary and Sandars [56] illustrated how SRL microanalysis was applied to study the self-regulatory processes of seven medical students as they attempted to take a blood sample by venepuncture from a simulation mannequin arm. After clearly identifying the nature of this task, the authors targeted at least one self-regulation subprocess within each of the cyclical phases. The authors selected three forethought phase processes (i.e., goal-setting, strategic planning, self-efficacy), one performance phase process (i.e., metacognitive monitoring), and two self-reflection processes (satisfaction and self-evaluative standards). As outlined by microanalytic methodology, the forethought phase questions were administered immediately prior to students’ attempts to obtain the blood sample. During actual performance of the venepuncture activity, the examiner administered a question targeting students’ metacognitive monitoring, “Do you think you have performed a flawless routine thus far or have you made any mistakes?” Finally, after students were able to obtain the blood sample, the researchers administered two reflection phase questions, satisfaction and self-evaluative standards. These questions were designed to examine how satisfied the participants were with their performance on the venepuncture task and to identify the standards that the participants used to judge their level of satisfaction with their performance.
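
Step 4 amounts to a sequencing rule: forethought questions immediately before the task, performance questions during it, and reflection questions after a clearly defined outcome. The sketch below illustrates this rule, including an event-triggered attribution question of the kind used in the free-throw study; the ask and perform_task callables are hypothetical stand-ins for the examiner and the task itself:

    from dataclasses import dataclass

    @dataclass
    class TaskOutcome:
        trigger_occurred: bool  # e.g., two consecutive missed free throws

    def ask(question):
        """Stub: administer one question and record the response verbatim."""
        print(f"[examiner] {question}")

    def run_protocol(perform_task):
        # Forethought questions: administered immediately before task engagement.
        ask("Do you have a goal in mind before starting?")      # goal-setting
        ask("What do you need to do to accomplish that goal?")  # strategic planning
        outcome = perform_task()  # performance phase questions would be asked here
        if outcome.trigger_occurred:
            # Reflection questions tied to a specific failure event.
            ask("Why do you think you missed those last two shots?")  # attribution
            ask("What do you need to do to improve?")                 # adaptive inference
        ask("How satisfied are you with your performance today?")     # satisfaction

    run_protocol(lambda: TaskOutcome(trigger_occurred=True))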

Step 5 (scoring procedures). As indicated previously, a variety of question formats have been used in SRL microanalytic protocols to examine motivation and self-regulation processes, such as Likert scales, forced-choice items, and open-ended or free response questions. Typically, the Likert items are designed to target self-motivation, such as self-efficacy and task interest, whereas free-response questions target phase-specific regulatory processes including goal-setting, strategic planning, metacognitive monitoring, attributions, and adaptive inferences. A forced-choice format has been used to examine students’ use of specific criteria to make self-evaluative judgments following failure.
Although the scoring of Likert and forced-choice items is relatively straightforward, all responses to open-ended questions must be independently coded into distinct categories by two or more coders. The specific categories for the coding system are derived from both empirical and conceptual or theoretical perspectives. It is recommended that researchers pilot test the protocols on the target task in order to gather information about the types of responses that students may exhibit for that task. Researchers can also use expert consensus and/or prior research to guide the development of the categories. For example, Cleary and Zimmerman [50] used the goal-setting literature to identify different features of goals (e.g., general versus specific goals, process versus outcome goals) that would be important to consider when developing categories. Using prior research, the authors developed various categories, including general outcome goals, general process goals, specific outcome goals, and specific process goals.
As another example, Cleary and Zimmerman [50] used prior research, pilot testing, and expert feedback to develop categories for an attribution question administered during a free-throw shooting practice session. Using this process, the authors developed different strategy categories, such as shooting technique (e.g., “my elbow was not straight”), focus (e.g., “I was not concentrating”), and rhythm (e.g., “to go at a good pace”), as well as various other categories such as effort, confidence/ability, or do not know. Regardless of the specific regulatory process that is measured using a free-response microanalytic question, the coding of these questions is facilitated by a structured scoring rubric that provides definitions and behavioral examples of each category. Examples of the general coding schemes used for many of the microanalytic regulation questions are included in Tables 2–6 (see the specific studies for a more detailed description of the categories).
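
Because open-ended responses are coded independently by two or more coders, researchers typically report an index of inter-rater agreement. The studies reviewed here do not prescribe a particular statistic, but a chance-corrected index such as Cohen's kappa is a common choice; the sketch below computes it for two hypothetical coders applying the free-throw attribution categories:

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Chance-corrected agreement between two coders' category assignments."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical codings of ten free-response attribution answers.
    coder_a = ["technique", "focus", "effort", "technique", "rhythm",
               "technique", "focus", "do not know", "effort", "technique"]
    coder_b = ["technique", "focus", "effort", "focus", "rhythm",
               "technique", "focus", "do not know", "effort", "technique"]
    print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.87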

6. Application and Illustration of Open-Ended Microanalytic Questions

Although SRL microanalysis has been applied to multiple tasks and domains, to date, no review articles or studies have attempted to descriptively compare or synthesize the specific questions used in microanalytic research. This is an important endeavor because it can help researchers to better understand the level of consistency and divergence in the questions and procedures used to target regulatory processes across different tasks. It should be noted that in this qualitative review, we included all microanalytic studies that targeted both motivation beliefs and regulatory processes or that targeted multiple regulatory processes (see Table 1). The inclusion of studies examining multiple regulatory processes was desirable because it allows one to examine how comprehensive protocols can be constructed relative to different tasks.

The microanalytic studies presented in Table 1 have collectively targeted almost all of the cyclical phase processes identified by Zimmerman. However, in this section we elected to describe and illustrate examples for the five most frequently targeted self-regulation processes in microanalytic protocols: two forethought processes (i.e., goal-setting, strategic planning), one performance process (i.e., self-observation), and two self-reflection processes (i.e., self-evaluation, attributions). We also selected these particular processes to ensure that each phase of the cyclical feedback loop was adequately represented (i.e., forethought, performance, and self-reflection). Examples of microanalytic questions, administration procedures, and coding schemes for each of the five processes are presented separately in Tables 2–6.

Before discussing each of these five subprocesses in depth, however, we want to acknowledge that there are other self-regulatory processes or subprocesses reported in the literature that are not specifically included in Zimmerman’s model. Although it is quite possible for researchers to include such processes in microanalytic protocols in future research, our primary objective in this paper is to describe the processes that have been studied using the microanalytic approach highlighted herein.

6.1. Goal-Setting

Goal-setting has been broadly defined as the aim or purpose of a behavior within a given period of time [65]. This forethought phase process is important due to its motivational influences and because it functions as a standard against which individuals self-evaluate their learning and performance progress. Four microanalytic studies have examined the nature and types of goals that individuals set prior to task engagement (see Table 2). The majority of these studies have used a comparable question and format, whereby individuals are asked to verbally report whether they have a goal in mind preceding their attempt to engage in a specific activity. For example, Cleary and Sandars [56] asked medical students, “Do you have a goal in mind before drawing this blood sample?” immediately before they performed a venepuncture activity. Along the same lines, Cleary and Zimmerman [49] administered the question, “Do you have a goal when practicing these free throws?” immediately before participants practiced their basketball free-throw skills.

Due to the nature and procedures of microanalytic protocols, the goals reported by participants in these studies naturally exhibited desirable qualities, such as proximity and self-generation. However, to distinguish among goal responses of varying quality, researchers have used prior research and pilot testing to develop and refine coding schemes. The literature has shown that the focus of a goal, such as a process or an outcome, is a key property of goal-setting [65–67]. Process goals tend to involve the procedures or strategies used to complete a task, whereas outcome goals pertain to the products or end results of learning and performance. Most microanalytic coding schemes capture the process/outcome distinction of goal responses. For example, in a volleyball serving study, Kitsantas and Zimmerman [34] identified an outcome goal as a response focusing on the result of a serve, such as getting the volleyball over the net or hitting a specific zone of the court. In contrast, process goals were defined in relation to the volleyball serving technique, such as following through on the serve or tossing the ball properly.

It should also be noted that research has clearly shown that “general” or “do your best” goals are not as effective as more “specific” goals because the former provide vague or ambiguous benchmarks for making self-judgments about performance [50, 68]. Cleary and Zimmerman [50] devised a coding scheme that extended the outcome-process goal distinction in terms of specificity. In this study, the authors created specific process and general process goal categories as well as specific outcome and general outcome categories. For example, a specific process goal involved responses clearly identifying a specific component of the shooting technique (“to keep my elbow in”), whereas a general process goal did not include any mention of a particular component (“to do the technique correctly”).
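
The two-by-two structure of this coding scheme (specificity crossed with process/outcome focus) can be expressed as a simple decision rule. The sketch below is a toy keyword heuristic meant only to illustrate the structure of the rubric; in practice, trained coders apply written definitions and behavioral examples rather than keyword matching:

    # Toy heuristic illustrating the specificity-by-focus goal coding scheme.
    # Cue lists are invented for illustration; they are not the published rubric.
    PROCESS_CUES = ["elbow", "wrist", "follow through", "technique", "form"]
    SPECIFIC_CUES = ["elbow", "wrist", "follow through"]

    def code_goal(response):
        r = response.lower()
        focus = "process" if any(cue in r for cue in PROCESS_CUES) else "outcome"
        specificity = "specific" if any(cue in r for cue in SPECIFIC_CUES) else "general"
        return f"{specificity} {focus} goal"

    print(code_goal("to keep my elbow in"))             # specific process goal
    print(code_goal("to do the technique correctly"))   # general process goal
    print(code_goal("to make as many shots as I can"))  # general outcome goal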

6.2. Strategic Planning

Zimmerman [6] defined strategic planning as a subprocess of task analysis that involves the selection of strategies appropriate for a particular task. Within most models of self-regulation, the use of task-specific strategies is critical for acquiring information or optimizing one’s performance [2, 5, 31, 69]. From a microanalytic assessment perspective, the goal of administering forethought strategic planning questions is to identify the types of strategies, behaviors, or thoughts that individuals believe to be most essential to performing well on a given activity. In addition, because these questions are administered immediately preceding engagement in a well-defined task, they allow an examiner to ascertain the primary task dimensions that individuals focus on and think about. As with goal-setting, four studies have incorporated strategic planning questions in their microanalytic protocols. However, the nature of the strategic planning questions has varied across studies (see Table 3). In the athletic realm, researchers have phrased these questions in relation to the task goals that participants reported. For example, Cleary and Zimmerman [50] developed the question, “What do you need to do to accomplish that goal?” Kitsantas and Zimmerman [34] used an identical item to assess strategic planning in relation to volleyball serving. However, the latter authors included an additional planning question targeting whether students followed a regular routine when practicing on their own. Although this question exhibited many of the features required of microanalytic protocols, it was not truly microanalytic because it did not pertain specifically to the practice session in which participants were about to engage. Nonetheless, the information generated from this type of question could be helpful in a broader assessment of strategic behaviors.

Still further, other studies have phrased the strategic planning question in relation to task engagement rather than task goals. For example, DiBenedetto and Zimmerman [33] developed the question, “Do you have any particular plans for how to read this passage and take this test?” whereas Cleary and Sandars [56] administered the question, “What are you thinking about as you prepare to draw blood from this arm?” These questions were not constrained by the goals reported by participants and thus gave students greater latitude in reporting their approaches to the task than the questions used in motoric research.

6.3. Self-Observation

Self-observation has been defined as “a person’s tracking of specific aspects of their own performance, the conditions that surround it, and the effects that it produces” [6, page 19]. This is a critical performance phase process of the cyclical feedback loop because it serves as an information or feedback hub through which an individual is able to effectively and systematically evaluate goal progress and to inform cognitive or behavioral adaptations to maximize performance. Although self-observation typically involves monitoring behaviors, skills, and performance, it may also involve tracking cognition and metacognitive processes during learning and performance. That is, are students aware of the quality of their own regulatory processes, such as planning and self-evaluation, and can they reliably predict or make judgments about their competencies and skill levels? Many self-regulation theorists have referred to this process as metacognitive monitoring rather than self-observation [4]. Although there are distinctions in the theoretical assumptions and specific “monitoring” processes discussed in the literature, we attempt to identify the different types of monitoring or self-observation measures that SRL microanalytic researchers have included in their assessment protocols.

In general, only a few studies have microanalytically examined student reports about the types of things that they focus on and monitor during learning and performance (see Table 4). Two studies have examined the quality of participants’ monitoring during performance whereas another study targeted metacognitive judgments about performance. Kitsantas and Zimmerman [34] used an open-ended question, labeled self-monitoring, to examine the extent to which participants engaged in self-monitoring as well as the specific focus of their self-observation efforts during a volleyball serving practice session. Student responses were classified into distinct categories, such as service outcomes, serving technique and outcomes, do not know, and other. Although this question was microanalytic from a content and structural perspective, one can argue that it was not fully microanalytic because it was technically administered after the practice session was completed. As previously discussed, a key component of any microanalytic question is a direct link between the phase of the cyclical loop and the temporal dimension of the target task.

The work of Cleary and Sandars [56] is the only study reviewed in this paper to employ an SRL microanalytic self-observation question during actual performance. In this qualitative pilot study employing a venepuncture task, participants were asked, “Do you think you have performed a flawless process thus far or have you made any mistakes? Tell me about them.” This question was specifically designed to examine the types of errors that students perceived they were making, with particular attention devoted to the venepuncture technique versus other nonprocess factors, such as patient discomfort or ability levels.

DiBenedetto and Zimmerman [33] employed calibration or metacognitive monitoring procedures in order to evaluate the extent to which students accurately predicted performance on content-specific tests. After reading a text passage on tornados and completing a tornado knowledge test and a tornado conceptual test, participants were asked two questions pertaining to their confidence in correctly answering specific test items, and another question targeting the accuracy of their overall test score predictions. The format of these latter questions is consistent with assessment approaches used in other lines of research examining student calibration accuracy and judgments of learning [7072].
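
Calibration items of this kind are typically scored by comparing a student's prediction with the obtained score. The sketch below shows one common way such judgments are quantified in the calibration literature (the specific scoring used by DiBenedetto and Zimmerman may differ, and the example values are hypothetical):

    def calibration_bias(predicted, actual):
        """Signed difference: positive values indicate overconfidence."""
        return predicted - actual

    def calibration_accuracy(predicted, actual):
        """Absolute deviation: 0 indicates perfectly calibrated judgments."""
        return abs(predicted - actual)

    # Hypothetical student who predicted 85% on the tornado test but scored 70%.
    print(calibration_bias(85, 70))      # 15 -> overconfident
    print(calibration_accuracy(85, 70))  # 15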

6.4. Self-Evaluation

Self-evaluation serves as a critical self-regulatory process because of its impact on other reflection phase processes and subsequent forethought processes. This construct has been defined by social-cognitive theorists as a self-judgment process involving comparisons between one’s performance on a task and some standard or benchmark [1, 6]. From a self-regulatory perspective, engaging in accurate self-evaluation is important because it ultimately defines for an individual whether he or she was successful on the task: a type of judgment that subsequently impacts a more complex set of reflection processes, such as making attributions about performance.

The process of self-evaluation is intriguing from an assessment viewpoint because of the different aspects or components of this process, such as the type and the level of the criteria or standards used [17]. Zimmerman [6] identified four types of criteria that students can use to self-evaluate: mastery, prior performance, normative, and collaborative. Whereas normative and collaborative criteria incorporate social factors into the evaluation process, mastery and prior performance criteria are personal or self-criteria. Mastery standards typically involve benchmarks or performance markers ranging from novice to expert skill levels, whereas prior performance standards are used to assess individual growth by comparing prior outcomes to current performance. These two forms of self-referenced standards are ideal from a self-regulatory perspective because they direct one’s attention, reflective thoughts, and actions toward one’s own behaviors and outcomes, a critical ingredient in helping students become more self-directed and adaptive learners.

The level of standards refers to the stringency of the benchmark one uses to judge success. For example, a student who uses a 90% correct standard to judge test performance would be deemed to have a more stringent level of standard than a classmate who adopted a 75% correct benchmark.

Microanalytic researchers have examined the process of self-evaluation in a variety of ways (see Table 5). At a very general level, Kitsantas and Zimmerman [34] asked participants in the volleyball serving study whether they self-evaluated or not and to explain what the evaluation entailed. The authors used a broad coding scheme with two categories: self-evaluation or no self-evaluation. Thus, the authors were specifically interested in whether students engaged in an evaluation process at all, regardless of the specific criteria or standards used to make these types of self-judgments.

In contrast, other microanalytic studies have focused on the level of standards used by participants. For example, Cleary et al. [60] asked college students, “What grade would you need to get in order to feel completely satisfied?” after they received a test grade back from the course instructor. Students provided responses ranging from 0 to 100, the range of potential scores on the exam. In short, the higher the score on this scale, the more stringent the standard that students held for their performance. Along the same lines, DiBenedetto and Zimmerman [33] evaluated high school students’ perceptions of the quality of their learning about tornado development as part of a reading session. Students were asked to use a Likert scale ranging from 10 (poor) to 100 (very well) to report how well they believed they performed on the test after receiving their test grade back from their teacher. Thus, students’ judgments about their success in relation to their grades were conceptualized to reflect their self-evaluative standards of performance.

Still further, other microanalytic studies used a forced-choice format to examine the type of criteria students use as the basis for making self-evaluative judgments [35, 56]. For example, in an experimental study involving a basketball free-throw shooting task, researchers asked participants, “What did you use to judge your degree of satisfaction?” after the practice session and posttest were completed [35]. Students were provided with a cue card listing several criteria identified in prior research: (a) percentage of shots made (mastery-outcome), (b) use of correct strategy (mastery-process), (c) improvement during practice (prior performance), (d) performance of others (normative), (e) other factors, and (f) do not know. Students were only allowed to pick one response.

6.5. Attributions

When reviewing the microanalytic studies presented in Table 1, it is quite apparent that causal attributions have been the most frequently studied regulatory process in microanalytic research. Causal attributions refer to an individual’s perceptions of the causes of the outcomes of a particular activity [73]. From both theoretical and empirical perspectives, attributions are a key reflection phase process linked to the reactions that individuals display following learning or performance. The importance of attributions has been demonstrated across a number of fields, including academics [8, 64, 74], athletics [35, 67], and psychology [48, 73, 75].

Microanalytic researchers have examined individuals’ attributions primarily following poor performance on a particular task (see Table 6). The specific wording of these questions has been crafted to capture the nature of the target task and the specific outcome around which students self-evaluate or judge their level of performance. For example, Cleary and Zimmerman [50] examined novice basketball players’ attributions following two consecutive missed free throws during a practice session, whereas Kitsantas et al. [59] asked an attribution question after respondents missed the bulls-eye during a posttest session. The attribution questions in these studies were administered immediately following a task-relevant outcome identified by the researchers as important (i.e., a missed free throw or bulls-eye). The key methodological implication here is that the administration of a microanalytic attribution question is highly dependent on the specific task outcome that one targets as well as when that outcome occurs during a task or situation.

DiBenedetto and Zimmerman [33] chose to use a test on tornado development as the single outcome about which to evaluate students’ attributions for test performance. Thus, because the authors decided to focus on test performance as the key outcome, it was quite clear when this question needed to be administered: immediately after the test. However, an outcome is not simply something that occurs following task completion. Rather, it is possible and often highly desirable for researchers to examine attributions at specific instances during a child’s attempts to learn or to perform a given task. For example, Cleary and Zimmerman [50] examined students’ attributions during a free-throw practice session after they had missed two consecutive shots following a predetermined warm-up period. Although the authors could have administered an attribution question regarding the players' overall performance following the free-throw practice session, they were most interested in targeting this process in relation to specific moments of failure or struggle during free-throw practice, that is, when players missed two shots in a row.

Most microanalytic researchers have utilized a highly consistent format and wording for attribution questions, with almost all questions using the stem, “Why do you think you…” followed by an ending phrase conveying two important components: (a) the nature of the task and (b) the specific outcome (see Table 6). For example, Cleary et al. [35] administered, “Why do you think you missed those last two shots?” whereas Kitsantas et al. [59] asked, “Why do you think you missed the bulls-eye on the last attempt?” These questions were microanalytic in nature because they clearly pertained to the definition of an attribution, were linked to the task and the target outcome, and were administered immediately following an important performance outcome.
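
Because these questions share a common stem, constructing one for a new task reduces to filling in a task- and outcome-specific ending. A minimal sketch follows (the composing function is hypothetical; the two endings come from the studies cited above):

    def attribution_question(task_outcome_phrase):
        """Compose a microanalytic attribution question from the common stem."""
        return f"Why do you think you {task_outcome_phrase}?"

    print(attribution_question("missed those last two shots"))
    print(attribution_question("missed the bulls-eye on the last attempt"))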

Some researchers have employed attribution assessment procedures that parallel some of the basic features of microanalytic assessment protocols. For example, Boekaerts et al. [8] used the Online Motivation Questionnaire to examine students’ attributions following performance in three different school subjects. In this study, the researchers asked students an open-ended question about why they performed well (or not well) on the exam and then coded these responses into one of several categories. Although other attribution researchers have used open-ended question formats and/or a highly contextualized approach [76, 77], microanalytic attribution questions are distinct from most other attribution measures reported in the literature because they rely on an open-ended question format, are directly linked with authentic tasks in a given context, and are administered online, that is, as individuals engage in specific tasks. We briefly review some of the more common attribution measures reported in the literature to further highlight the distinctiveness of microanalytic attribution questions.

When comparing microanalytic attribution questions to more traditional attribution measures, one of the first features to consider is question format. Although open-ended questions have been noted in attribution research, a closed-ended question format appears to be the most common [78]. Closed-ended questions, which often include forced-choice formats, differ from microanalytic procedures in that examinees are provided with attribution categories from which to choose and are then asked to rate the importance of these factors with regard to their performance. Researchers have further subdivided forced-choice measures into unipolar and ipsative items [78]. Although both classes of items provide a respondent with specific attribution response options, they differ in the number of attributions considered per item: unipolar measurement directs respondents’ attention to a single causal factor for a particular item or question, whereas ipsative measurement requires respondents to consider multiple attribution categories when providing their judgments. It is important to note that a scale may include a mixture of unipolar and ipsative items.

The Mathematical Attribution Scale (MAS) [79] provides an example of unipolar items by prompting students to rate, on a 5-point Likert scale, their agreement with the statement, “I did well on the unit test because I studied very hard.” This item is unipolar because it focuses solely on the single factor of studying hard (effort). Similarly, the Course Performance Questionnaire (CPQ) requires respondents to rate, on a 5-point Likert scale, their agreement with statements such as “When I earn a good grade, it is because of my academic competence” [80]. Consistent with all unipolar items, this item directs the respondent to focus exclusively on a single causal factor (i.e., competence) as an explanation of performance.

In contrast, ipsative items require respondents to rate or identify the relative importance of several potential attributions for their performance in a given situation [76, 78]. Researchers have used several variations of this general approach. For example, some have used a percent-of-causality format whereby respondents indicate the proportional impact of each factor on their performance. Elig and Frieze [76] used this approach to examine the extent to which student success was caused by factors such as high general intelligence (ability) or the difficulty level of the task. Another measure, the Intellectual Achievement Responsibility (IAR) scale [81], employs an ipsative approach for some items. For example, a hypothetical academic success scenario such as “when you do quite well on a test” is presented, and the respondent is asked to choose one cause from multiple potential responses, such as effort, ability, or external factors, to identify the key causal determinant of that success.

Although SRL microanalytic protocols as a whole have utilized both open-ended and forced-choice formats (see Table 5), microanalytic attribution questions have relied exclusively on free-response formats. However, as discussed previously, an open-ended question format is a necessary but not sufficient feature of microanalytic questions. Other key distinguishing features include the method used to elicit an attribution response and the temporal sequencing of the questions. In general, attributions can be elicited and measured in relation to natural events, laboratory settings, or hypothetical situations [78]. Naturally based assessments examine attributions in reference to an event that the respondent has actually experienced in an authentic context. Laboratory investigations use fabricated experimental situations that aim to replicate, as closely as possible, a real or natural experience. Finally, the hypothetical-scenario approach, a component of many traditional attribution assessments, uses hypothetical stories that respondents read before conjecturing their personal attributions for outcomes they may not have personally experienced [78].

Much of the attribution literature uses hypothetical scenarios to evaluate the nature of students’ attributions. As one example, an item on the Survey of Achievement Responsibility (SOAR) questionnaire asks students to imagine that they are faced with a new math problem and “catch on” very easily. The item then asks respondents to attribute their ease of task completion to one of the provided choices, such as task difficulty, ability, and effort [82]. Although hypothetical scenarios can yield important information about the nature of students’ attributions and regulatory processes, they typically involve general situations that the respondent has not actually experienced. Thus, the information gathered with such assessments may reflect a dispositional characteristic of respondents rather than their actual attributions and how such judgments vary across situations [27, 78]. Microanalytic attribution questions are unique because they evaluate students’ judgments of causality in relation to a specific context and task performance.

Furthermore, although some attribution measures have paralleled the context-specific format used in SRL microanalysis, very few studies have mirrored the microanalytic feature of temporally linking attribution questions to the “after” dimension of authentic task performance. By aligning the sequence in which questions are administered with the temporal dimensions of a task, one can obtain a more accurate account of students’ reflective self-judgments as they perform tasks that are of relevance to educators, tutors, or coaches.

7. Psychometric Evidence and Areas of Future Research

There is an emerging literature base supporting the premise that SRL microanalytic protocols exhibit relatively strong reliability and validity. Given our focus on microanalytic questions in this paper, we provide readers with a general review of the psychometric properties of these questions. In terms of reliability, the kappa coefficient and percent agreement have been the key metrics used to examine the level of interrater agreement. Across almost all studies, interrater agreement has been quite strong (see Table 7). These strong indices of agreement have been established, in part, through the development and use of highly detailed and explicit coding schemes and manuals [33, 35]. Given the nature of most microanalytic questions (i.e., single items, free-response formats), alpha coefficients have not traditionally been reported. Self-efficacy measures are an exception: they are quantitative in nature and incorporate several items targeting students’ beliefs in their capability to perform specific behaviors at various levels of performance (e.g., self-efficacy for receiving an A, a B, or a C on a test). These measures have been shown to exhibit high internal consistency (see studies listed in Table 1).

Another reliability procedure that has not yet been examined is test-retest reliability. Although it is unclear whether this type of reliability is useful for highly contextualized forms of assessment, such as SRL microanalysis, it might be of interest to examine the stability of students’ responses in highly similar situations across relatively short periods of time.

In terms of validity, researchers have examined the differential validity, predictive validity, and construct validity of specific microanalytic subprocess measures across diverse tasks. To date, a few studies have shown microanalytic measures to reliably differentiate achievement or expertise groups [33, 34, 50]. In general, this line of research has shown that, compared to lower performers, experts or high achievers tend to exhibit more strategic thinking and regulation as they perform specific tasks across domains. More specifically, microanalytic research has shown that those who exhibit the highest level of performance tend to set more specific goals, approach tasks more strategically, and make strategic attributions and adaptations following failure or poor performance on a task.

There is also some evidence that microanalytic protocols are reliable predictors of task performance. Kitsantas and Zimmerman [34] used a comprehensive microanalytic protocol to examine the regulatory processes of expertise groups as they practiced volleyball serving. To examine the predictive validity of this multi-item protocol, the authors combined all microanalytic measures into a single scale to predict subsequent volleyball serving skill. Although the authors did not include any other measures in the correlation analysis, the composite score accounted for 90% of the variance in volleyball serving skill. More recently, Cleary et al. [60] examined whether self-reflection microanalytic questions (self-evaluative standards, attributions, and adaptive inferences) accounted for unique variance in college course grades over and above that accounted for by other self-report measures. In general, the three self-reflection microanalytic questions accounted for substantially more variance in final course grades than the other measures.

Most microanalytic studies have also found strong correlations among self-regulation processes, as predicted by the cyclical feedback model of self-regulation (see Table 7). Of particular importance is the consistent finding that the type or quality of one’s attributions is strongly related to the types of adaptive inferences that students believe they need to make in order to optimize future performance [35, 50]. That is, students who made strategic attributions for failure or poor performance on a particular task were more likely to infer that they needed to adapt their strategic methods to perform more effectively in the future.

Although SRL microanalytic protocols are quite promising, future research clearly needs to address several important issues. First, more evidence regarding the concurrent validity of these measures is needed. Of particular interest would be to employ a multidimensional assessment approach, utilizing self-report surveys, direct observations, behavioral traces, and/or microanalytic protocols, to examine the degree of convergence and divergence across these different assessment tools. This line of inquiry is particularly important given recent evidence showing that student self-ratings of their regulatory processes as gathered on self-report surveys often do not correspond strongly with their actual behaviors [42] or the quality of their regulatory processes as illustrated in microanalytic protocols [60]. However, there is much work to be done in order to determine the specific components of regulation that are reliably measured by each of these distinct protocols. From our perspective, it is highly likely that although different self-regulation assessment approaches will overlap to some degree, they will capture unique elements of regulatory functioning. Thus, to effectively and comprehensively understand human regulation, researchers and practitioners will need to utilize a diverse set of measures.

As indicated previously, a couple of studies have examined the validity of microanalytic protocols in predicting achievement in academic and nonacademic contexts [34, 60]. However, a potentially fruitful line of research involves examining whether microanalytically derived processes predict behavioral change as students engage in specific tasks in a given context. For example, the cyclical feedback model predicts that the quality of one’s reflection phase processes should lead to changes in one’s subsequent forethought and performance phase processes. It would be of particular interest for researchers to examine whether the quality of students’ self-reflections, as measured with microanalytic protocols, predicts actual changes in their strategic behaviors or motivation to engage in that task in the future.

A primary objective of microanalytic protocols is to generate reliable information about students’ regulatory processes that educators and practitioners can use to inform the development of academic interventions or to guide instruction [15, 83]. Given the recent emphasis placed on linking school-based assessments to intervention development and on using context-specific, ecologically appropriate measures to evaluate youth [20, 84], greater attention needs to be devoted not only to applying SRL microanalytic procedures in specific academic contexts but also to exploring how such assessment data can be used effectively by teachers and school-based practitioners to guide intervention planning and development. Recent research has shown that special education teachers perceive microanalytic assessment data to be extremely useful for intervention planning and instructional programming, as well as for other roles in which they engage, such as participating in team meetings and consultation [36]. In addition, Cleary et al. [15] provided an anecdotal illustration of how self-regulation tutors used microanalytic protocols to guide tutoring sessions with urban high school youth who were failing science courses. Although there is great promise in linking SRL microanalytic assessment approaches to these instructional contexts, to our knowledge, no study to date has systematically addressed this issue.

8. Conclusion

Although no single assessment tool can capture human regulation in its entirety, assessment tools that examine regulatory thought and action as they occur in real time during a particular task have the potential to provide more useful information and to lead to contextualized, individualized interventions for youth who struggle in school. It is our belief that SRL microanalytic protocols can be one of these types of assessments. Despite this trend toward contextualized, online forms of assessment, we believe that a multidimensional approach that includes various types of self-reports, direct observations, and perhaps teacher and/or parent ratings may ultimately prove most valuable for understanding human regulation.