Towards a Low-Cost Teacher Orchestration Using Ubiquitous Computing Devices for Detecting Student’s Engagement
The ubiquitous devices and technologies that support teachers and students in a learning environment include the Internet of things (IoT), learning analytics (LA), augmented or virtual reality (AR/VR), the ubiquitous learning environment (ULE), and wearables. However, most of these solutions are obtrusive, carry substantial infrastructure costs, and provide only pseudo-real-time results. Real-time monitoring of students’ activeness, participation, and activity is important, especially during a pandemic. This research study provides a low-cost teacher orchestration solution with real-time results using off-the-shelf devices. The proposed solution determines a teacher’s activeness using multimodal data (MMD) from both the teacher’s and students’ devices. The system extracts different features from the MMD, decodes them, and displays them to the instructor in real time. This allows the instructor to adjust their teaching methodology on the spot to get more students on board and provide a more engaging learning experience. Our experimental results show that real-time feedback about the classroom’s current status helped improve learning outcomes by about 45%. We also observed a 50% increase in classroom engagement.
Pervasive development in technology has made computers more powerful and smaller. Computers successfully made their way from giant PCs to small portable mobile devices, creating a new era of ubiquitous computing in which computers are accessible anywhere and perceive and understand their surrounding environment through sensors. Mark Weiser coined the term “ubiquitous” in the 1990s, and since then numerous smart devices, including smartphones, smartwatches, and smart TVs, have been used in various scenarios. For example, wearable devices (smartwatches, bands, etc.) are advantageous in health-related and other applications: because they are fixed on the human body and in constant contact with the skin, they can capture users’ gestures. As in other fields, including business, health, and entertainment, these devices have great potential to be exploited efficiently and effectively to improve education quality. One prominent example is smart wearable devices for teacher orchestration. These technologies are utilized for teaching, learning, and orchestration in a learning environment.
Teacher orchestration refers to a teacher’s management of different classroom activities, encompassing individual, small-group, and whole-class activities in a face-to-face classroom. The word orchestration comes from orchestra, which connotes carefully organizing a complicated event. In the context of a smart classroom, teacher orchestration is the careful arrangement of a technologically richer classroom environment and its activities to achieve the required learning outcomes. The main focus is facilitating a teacher in monitoring and improving students’ performance in a ubiquitous learning environment (ULE). In a ULE, small handheld devices perform various monitoring and data visualization tasks to support teachers’ and students’ learning pedagogies. A typical classroom contains multiple kinds of activities. A teacher must manually manage several paper-based activities, such as taking attendance by calling students’ names and marking them present or absent. These tasks can be technology-assisted using different sensors and other devices. Traditional orchestration is still the most common in most institutes because it is easy to use and requires little training.
Still, time consumption and resource wastage are among the expected downsides of this approach. To overcome these issues, technological devices were employed to assist teachers and students in the learning process, which created the smart classroom concept. Smart classrooms are technologically rich learning spaces where computers and other devices are exploited to help teachers and students. Although this approach helped overcome those issues, it introduced other complications regarding user acceptance and development cost due to large infrastructures. Custom-built hardware with multiple sensors is often used, increasing the setup cost and affecting user experience and social acceptance. Therefore, smartphones, already equipped with different sensors, were deployed instead of custom-built hardware [11–13]. With handheld devices, a new era of ULEs evolved, transforming the educational context into complex social and technological ecologies by expanding the scope of education beyond the classroom.
Several studies have proposed numerous approaches to performing the orchestration process using multimodal data from multiple sources. These studies leverage different technologies, including the Internet of things (IoT) [10, 14, 15], intelligent tutoring systems (ITSs), learning dashboards , augmented and virtual reality (AR/VR) [17, 18], smart wearables , different sensors, and ubiquitous computing devices . However, most of the proposed solutions are either not real time, or expensive and less user-friendly because they require technical assistance to use.
Most studies focused on custom-made hardware, which produced good results in some circumstances, such as lab environments . Still, the setup cost and acceptability of such hardware in real-world classrooms are a matter of concern: deploying and using these bulky infrastructures in a learning space requires technical assistance from paid experts or specially hired staff. Using ubiquitous computing devices reduces setup costs, but existing solutions are not real time and provide results only after the classroom session. This delay wastes time for teachers and learners, as they cannot adjust their behavior within that specific session. To mitigate these issues, a real-time teacher orchestration solution using off-the-shelf, low-end devices is required, which is this paper’s main aim. The proposed solution is low-cost and easy to use, with real-time feedback about the class’s current status. The main goal of this research work is to leverage the potential of off-the-shelf smart devices, including smartphones and smartwatches, for teacher orchestration, reducing the use of custom-made and specialized hardware that increases setup cost and requires technical assistance for deployment and usage.
This study avoids using any external server for data acquisition, processing, or result generation, significantly reducing the cost and effort required to set up and use the system in real-world classroom scenarios. The proposed solution needs a smartwatch, i.e., a wearable device worn on the teacher’s dominant hand and connected to a smartphone placed in front of the teacher. The smartwatch sends its sensor data to the connected smartphone, which processes and analyzes it to generate the final results. The application collects data from both teachers and students. The teacher’s facial orientation is used to measure their activeness or tiredness level. A server application is developed and deployed on the teacher’s smartphone to collect and process the multimodal data from the teacher’s and students’ devices. The processed data is displayed on the teacher’s smartphone as statistics about the current state of the class, e.g., how many students are active or inactive and the teacher’s voice quality during the lecture. Results show a significant increase in learning outcomes, i.e., a 45% increase. We also observed a 50% increase in classroom engagement. The gathered data shows that this solution is minimally intrusive and poses no serious issues for students and teachers. The system can also be applied in other lecture-demonstration methods.
The rest of the paper is divided into six sections. Section 2 presents a comprehensive yet concise literature review. Section 3 describes the proposed methodology and elaborates its technical aspects. Section 4 covers the implementation of the system. Section 5 describes the experimental setup. Section 6 presents the results and analysis and discusses the obtained results. The last section concludes the paper, followed by the references.
2. Literature Review
Mobile and computer technology have been introduced into educational contexts over the past two decades . Access to computers and large-scale one-to-one computing programs have been implemented in several countries globally [22–24], such that elementary and middle school students and teachers have their own electronics and mobile devices. Besides encouraging and promoting innovation and modernization in education, mobile and information technology (IT) supports traditional lecture-style teaching, convenient information gathering, and information sharing, and promotes innovative teaching methods such as cooperative learning [25, 26], exploratory learning outside the classroom, and game-based learning . At the same time, the rapid expansion of sensor technology in smartphones, along with their capability to accurately capture, monitor, and analyze information, helps us learn about traffic conditions [28–30], road conditions [31–33], environmental noise levels , air quality and pollution levels [35–39], humidity and temperature , patterns of object movement [41–43], disaster alerts and monitoring [44, 45], weather information , etc.
IT and mobile technologies can facilitate and enable innovative educational methods. Simultaneously, these patterns in educational practice are likely to help subject-content learning and facilitate the development of communication, problem-solving, creativity, and other high-level skills among students . They will also support teachers in orchestrating different classroom activities and increase learning outcomes. Technology use for teacher orchestration has evolved from computers  and IoT devices to handheld smartphones. Table 1 shows a variety of sensor technologies and their uses in teacher orchestration.
2.1. Assessments during Class
Student monitoring and engagement are positively linked with the required learning outcomes. For instance, good grades in curricular and extracurricular activities are directly linked to critical thinking and efficiency in the subject(s) .
The teacher is one of the most important factors in students’ engagement and attention . Teachers’ coordination and proper communication, facilitated by verbal, gestural, and written connection with their students, can benefit student engagement and attention. Classroom monitoring can be considered a powerful tool for determining the quantity and quality of active learning in classrooms . Monitoring activities lead to many engagement improvements, e.g., engagement to improve learning , engagement to improve throughput rates and retention [65, 66], engagement for equality/social justice , and curricular relevance . Given the importance of monitoring and engagement, different tools , technologies , algorithms , and strategies have been used to measure and estimate the attention level of both students and teachers.
According to , only 46% to 67% of students pay attention to the class during lecture delivery, which means that up to half of the students may never be productive. With this information in hand, both teachers and researchers have examined potential problems that arise during their classes, and efforts have been made to correct them, which may have a long-term benefit for learners’ efficiency. The study also showed that students’ engagement and focus are positively linked with good grades and critical thinking . These are only possible with full attention and focus, which depend on numerous factors, including the teacher . According to , a classroom’s size influences student attention and engagement: in large classes, the teacher needs more time to draw students’ attention, which is sometimes emotionally exhausting.
Techniques such as face detection, face recognition, facial feature extraction, and pose estimation have been used for student monitoring, for instance, student attendance monitoring based on deep learning [73, 74], tracking through eye tracking , meeting monitoring through head orientation and gaze direction , assessing and monitoring classroom attention , estimating states such as active, transcribing, unavailing, distracted, and in transition, and automatic recognition of engagement from students’ facial expressions .
2.2. State-of-the-Art Orchestration Solutions
According to Chan, the term “orchestration” in teacher orchestration is derived from orchestra . In a smart classroom, each student interacts with a digital device that supports them in the learning process. A smart classroom is an intelligent learning space equipped with different devices, sensors, and custom software agents . Leeuwen and Rummel  reviewed various orchestration tools that help teachers understand students’ collaboration in their groups. Smart wearables were also analyzed in a pedagogical context, as in [81, 82], which explore wearable technologies in education and discuss different approaches to using smart wearables and smartphones for m-learning  and teacher orchestration. Suárez et al.  discussed using smartphones in education with inquiry-based learning, examining multiple approaches and their strengths and limitations.
The IoT was extensively used in the classroom to support both teachers and students . Subbarao et al.  analyzed different IoT-based approaches providing solutions for several learning pedagogies using devices and sensors. Different augmented and virtual reality (AR/VR) solutions for supporting learning activities are also discussed in [10, 84]. These approaches are categorized by their technology stack and infrastructure in the following subsections.
2.2.1. Internet of Things (IoT)
The connection of different devices (things) to the Internet is known as the Internet of things [83, 85]. A smart classroom contains multiple intelligent devices, which need to communicate to enrich the learning experience. IoT is one of the most widely used approaches in different solutions; as in other fields of life, it has also evolved in teaching and learning pedagogies. Most solutions found in the literature that use sensors for getting data from the learning space are based on the IoT paradigm. Rico et al. in  and Subbarao et al. in  review different IoT-based approaches that provide a multiplicity of solutions for several learning pedagogies using combinations of devices and sensors.
Gligorić et al.  determine lecture quality using different sensors, such as PIR and sound sensors and a video camera. Similarly, another study  finds student satisfaction with a classroom session using physical environment parameters; students use their smartphones to input their feedback as satisfied or not satisfied . In another study, Gligorić et al.  designed an LED lamp to show students’ interest or satisfaction levels using a Raspberry Pi (https://www.raspberrypi.org). They recorded 30 lectures using cameras and microphones and had students annotate the data using their smartphones, clicking interesting or not interesting when they found something satisfactory or unsatisfactory. A 30-second window was labelled when more than 90% of votes were received.
Mahmood et al. [14, 84] used a camera connected to a Raspberry Pi to calculate students’ interest levels from their facial expressions and notify the teacher about their current status. Besides getting data about the lecture, IoT is also used for classroom attendance: in , Atabekov designed a smart chair that records classroom attendance and the time a student spends in the classroom.
2.2.2. Near-Field Communication
Near-Field Communication (NFC) technology is also used for automatic student attendance, indoor classroom location, and real-time feedback . In , Mirza and Brohi propose an RFID-based campus security system that monitors and tracks different resources, including students’ records, exam papers, and student certificates, using cloud computing. Another similar approach  used PIR and RFID sensors with an Arduino to monitor classrooms and parking lots and determine whether a classroom or parking space is occupied or empty. Furthermore, they used a video camera with a cloud platform to offer a virtual classroom for e-learning. Said et al. in  introduced an IoT-based e-learning system called “free learning” or F-learning, consisting of smart classrooms and virtual labs that autonomously communicate with each other using cloud infrastructure. Finally, Haung et al.  and John et al.  used multiple sensors to control smart classrooms, gathering different data and reducing energy consumption.
2.2.3. Augmented and Virtual Reality
Augmented and virtual reality (AR/VR) allow users to be physically involved in different blended scenarios and create a hybrid learning environment by combining physical and digital objects . As students learn 50% of what they hear and read but 90% of what they do , AR/VR for learning purposes may provide significantly positive results and help students grasp more information. Herpich et al.  discussed different mobile-based augmented reality solutions for supporting learners.
Elkoubaiti et al.  explore AR/VR in education and smart classrooms, describing technical requirements including latency, field of view, resolution, frame rate, and network requirements, as well as measures for the privacy and security of AR and VR applications. Similarly, Munoz et al.  present a case study using an AR-based tool named GLUEPS-AR and a VR game (Game of Blazons). The study conducted different VR/AR-based activities with students and showed that these VR/AR tools help teachers create different learning situations. Kosmas et al.  evaluate the effect of a motion-based game on student performance in language learning classes.
Khan et al.  developed an augmented reality mobile application to examine students’ learning motivation. They used the ARCS (attention, relevance, confidence, and satisfaction) model to determine the significance of AR technology for students’ learning performance. Although the literature includes extensive studies on AR/VR, according to Murat and Gokçe , many students cannot arrange AR/VR headsets; the headsets also distract students’ attention and are undoubtedly expensive.
2.2.4. Learning Dashboards
A learning dashboard is a visualization tool supporting teachers and learners in different learning scenarios for better decision-making . It is a specific intervention of learning analytics used to identify meaningful data for various stakeholders (such as teachers, students, and administrators) and to explore how data representation can support sense-making . Korozi et al.  developed LECTOR, a web-based tool for re-engaging students in smart classrooms using multimodal data from different sources, including an eye tracker, a depth camera, a microphone, and other embedded sensors.
Similarly, another approach used LECTOR  with a smartwatch app called NotifEye , which shows notifications on the teacher’s smartwatch with information about students’ current learning status, activeness, and other positive interventions. Holstein et al.  developed a real-time dashboard for an intelligent tutoring system (ITS) that assists students in learning ASP.NET (https://dotnet.microsoft.com/apps/aspnet) during their programming course. VanLehn et al. developed the FACT multimedia system , a web-based AI tool that records students’ collaborative activities of arranging paper cards on a math-class poster. Wetzel et al.  compared the same FACT system with a traditional paper-and-pen approach to evaluate the time wastage of both conventional and electronic systems in learning pedagogies. Although learning dashboards visualize students’ data well, most systems require extra hardware and sensors.
2.2.5. Ubiquitous Computing and Other Sensors
Educational contexts have evolved into complex technological and social ecologies, using different ubiquitous devices to transform the traditional learning space into ubiquitous learning environments (ULEs) . Iqbal  presented a mobile application for teachers to mark quiz and exam papers and input feedback about students’ performance. Viswanathan and VanLehn in  and Tissenbaum et al. in  used students’ interaction logs with web and tablet apps, respectively, to identify their collaboration in a classroom session. In , Yu-Gang et al. proposed a mobile-based learning model that enhances traditional learning with smartphones. Smartphones are also used for automating the attendance process in ULEs to facilitate teachers. Budi et al.  took students’ attendance using a mobile camera and a trained machine learning model running on a server, which recognizes the faces of different individuals in the uploaded image. Yang et al.  used voice prints to mark students’ attendance and detect their indoor location in the classroom. In , Gligorić et al. measure the level of interest in a lecture by detecting student movements with a video camera, classroom sound with a microphone, and the teacher’s movement from their smartphone accelerometer.
Prieto et al.  used the teacher’s smartphone accelerometer together with other devices, such as a camera, a microphone, and an electroencephalogram (EEG) sensor (for capturing brain activity), to identify different classroom activities such as explanation, questioning, and monitoring. They identify teachers’ actions in a classroom session from multimodal data and build an “orchestration graph.” The orchestration graph defines who does what and when ; it is a time-series graph plotting different activities with their time and duration. Similarly, other approaches [112–114] reduce the infrastructure and use low-end devices; they used microphones to capture audio data and segment the lecture into different subactivities such as question answering. However, these approaches require training the system for each teacher individually because of differences in voice tone and speaking style.
Recommendation techniques recommend tailored items to a user [115–118]. Liu et al.  proposed a smart learning recommendation system that captures data from different sources to determine students’ current learning state and then suggests or reinforces different learning strategies (such as a quiz). In another approach, Bdiwi et al.  investigated the impact of the teacher’s position on students’ performance in higher education. Wang et al.  used an eye tracker to determine how much the teacher’s gaze guidance affects students’ learning performance in video lectures. Similarly, Viilo et al.  studied teacher orchestration through video data recorded in the classroom.
The advantage of wearables over mobile devices is that they are available most of the time, unlike mobile devices, which are mainly kept in pockets or bags . In one study, Garcia  proposed a smartwatch app named “ScienceStories,” where students can record their science concepts; the gamification mode saw the highest use among students. Quintana et al.  evaluated the acceptability of wearables in education by using a smartwatch to remind the teacher of different tasks during the classroom session. Also, Lu et al.  used a smartwatch for learning analytics, predicting various activities from a particular student’s hand gestures. Another study designed, developed, and evaluated a wearable application to assist students with intellectual and developmental disabilities (IDDs) in the educational environment . Besides smartwatches and smart bands, another common type of wearable is the optical head-mounted display (OHMD), or simply head-mounted display (HMD). These are usually worn over the eyes and can be either fully immersive, like a VR headset (Oculus (https://www.oculus.com)), or nonimmersive, such as smart glasses (Google Glass  or Microsoft HoloLens (https://www.microsoft.com/en-us/hololens/)) . In , the teacher wore Google Glass to view the emotional status of each student in the classroom.
Patrick  used audio data from a microphone to segment activities in a learning session. The author used a machine learning approach to train a classifier and then predict various activities, such as answering, supervising students, and lecturing, from the given audio data. Similarly, Donnelly et al.  also used microphone audio data to detect teacher questions in a live classroom session. Finally, Bdiwi et al.  used RFIDs in an IoT-based approach to find the impact of the teacher’s position on students’ performance.
Gligorić et al.  also used IoT devices, including PIR and sound sensors, to detect lecture quality. Finding lecture quality in real time is a positive approach, but using extra hardware raises cost- and acceptability-related issues. In another study, Gligorić et al.  used a video camera, a microphone, and an Android smartphone to detect the level of interest a lecture created. The authors also proposed another IoT-based solution to show students’ satisfaction levels . Finally, Mahmood and Salman  used a video camera and a Raspberry Pi to find students’ attentiveness levels from their facial expressions and assist teachers in improving their teaching methodology.
3. Proposed Methodology
The proposed solution needs a smartwatch worn on the teacher’s dominant hand and connected to a smartphone placed in front of the teacher. First, it helps collect the teacher’s hand and foot movements to identify whether the teacher moves during the lecture or remains static. Then, the smartwatch sends its sensor data to the connected smartphone, which processes and analyzes it to generate the final results on the smartphone.
The application collects data from teachers and students, as shown in Figure 1. On the teacher’s side, the system gets the teacher’s hand and foot movements and audio- and face-related information using a smartphone and smartwatch. The foot movements help identify whether the teacher is static or moves and interacts with students. Hand movement is used to capture hand gestures and record different actions. The audio data is used to measure the teacher’s sound level and helps differentiate who is currently speaking: if only the teacher’s voice is detected, the event is classified as lecturing; if there is a combination of students’ and the teacher’s voices, it is counted as a question-answer session or discussion. The teacher’s facial orientation is used to measure their activeness or tiredness level. A server application is developed and deployed on the teacher’s smartphone to collect and process the multimodal (multi-source) data from the teacher’s and students’ devices. The processed data is displayed on the teacher’s smartphone as statistics about the current state of the class, e.g., how many students are active or inactive and the teacher’s voice quality during the lecture.
The application shows the current status of the classroom after collecting multimodal data in real time. It also provides a short summary of different activities at the end of a classroom session, for example, how much time the teacher spent lecturing, question answering (discussion), and writing on the board. The application can mark students as active or inactive by processing the head- and voice-related data discussed in later sections. The teacher’s activeness (Equation (3)) is calculated from two factors, i.e., the classroom’s current status and the voice level of individual students. The classroom’s current status can be found using
CS = N_active / N, (1)

where N is the total number of connected students, i.e., both active and inactive, in that specific learning session, N_active is the number of students currently marked active, and CS stands for classroom status, which will be a decimal value between 0 and 1. Similarly, the voice level can be calculated using
VL = (1/N) · Σ (v_i / max-voice-level), (2)

Here, v_i represents the voice level for an individual student, max-voice-level is the maximum threshold set for the voice level, i.e., 90 decibels (dB) for our experiment, and N is the total number of connected students. The resulting voice level (VL) will be a decimal number between 0 and 1. And finally, Equation (3) uses these CS and VL values to compute the teacher’s activeness level, which will again be a decimal number from 0 to 1.
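The metrics above can be sketched in code. The following is a minimal illustration, not the paper's Android implementation; in particular, the exact combination used in Equation (3) is not reproduced here, so the simple average of CS and VL is an assumption.

```python
MAX_VOICE_LEVEL_DB = 90.0  # voice-level threshold used in the experiment


def classroom_status(active_students, total_students):
    """Equation (1): fraction of connected students marked active (0..1)."""
    if total_students == 0:
        return 0.0
    return active_students / total_students


def voice_level(student_levels_db, max_level=MAX_VOICE_LEVEL_DB):
    """Equation (2): mean of per-student voice levels, normalized by the
    maximum threshold so the result lies between 0 and 1."""
    if not student_levels_db:
        return 0.0
    n = len(student_levels_db)
    return sum(min(v, max_level) / max_level for v in student_levels_db) / n


def teacher_activeness(cs, vl):
    """Equation (3), assumed here to be the average of CS and VL (0..1)."""
    return (cs + vl) / 2.0
```

For example, with 15 of 20 students active and two students at 45 dB and 90 dB, both CS and VL come out to 0.75.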
Finding the value of the teacher’s activeness fulfils the first objective of this research work. To meet the second objective, i.e., finding the contribution of each modality, we analyze the kind of data captured from each modality and then examine the use of that captured data.
The system works in a local area network to get data from different stakeholders. The teacher’s application acts as a server, collecting data from connected students. The student application, running on different students’ smartphones, is responsible for collecting and processing the data and then sending that processed data to the teacher’s smartphone for final representation and result generation. This section discusses how the application captures and processes this multimodal data in real time.
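The teacher-as-server pattern over the local network can be sketched as follows. This is an illustrative Python sketch, not the actual Android application; the message fields (seat, voice_db, active) and the one-JSON-object-per-line framing are assumptions.

```python
import json
import socket
import threading


class TeacherServer:
    """Collects one processed reading per student over plain TCP."""

    def __init__(self, host="127.0.0.1", port=0):
        self.readings = {}              # seat number -> latest reading
        self.lock = threading.Lock()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))
        self.sock.listen()
        self.port = self.sock.getsockname()[1]

    def handle_one(self):
        """Accept a single student connection and store its reading."""
        conn, _ = self.sock.accept()
        with conn:
            line = conn.makefile().readline()  # one JSON object per line
            reading = json.loads(line)
            with self.lock:
                self.readings[reading["seat"]] = reading


def send_reading(port, seat, voice_db, active):
    """Student side: send one processed reading to the teacher's device."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        msg = json.dumps({"seat": seat, "voice_db": voice_db, "active": active})
        conn.sendall((msg + "\n").encode())
```

The key point mirrored here is that students send already-processed values, so the teacher's device only aggregates and displays them.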
4.1. Data Acquisition and Processing
The following data is collected from both teachers and students, then analyzed and used to find the classroom status and voice level as stated in Equations (1) and (2).
4.1.1. Facial Data
According to Mahmood et al. , students’ interest level is linked with the quality of the lecture. Therefore, the application captures face-related data from teachers and students to gauge their level of interest and activeness in the current classroom session. This study focuses on head movement to analyze how much head direction helps identify the student’s current attention level. For this purpose, the Google Vision APIs (https://cloud.google.com/vision/) detect users’ faces in images captured with the smartphone’s camera. These APIs provide a framework for detecting and tracking objects in images and videos, supporting face detection, barcode reading, and text recognition. For example, left-to-right head movement represents head rotation, with a value between −60 and +60 degrees. Similarly, the API also gives clockwise rotation, representing the head tilt angle from −45 to +45 degrees. The application takes a picture every 5 seconds and passes the captured bitmap image to Algorithm 1 to detect different face-related features.
The APIs offer different face-related data, including the number of faces detected, head rotation and head tilt (in degrees), smiling probability, eye-open probability, and facial landmarks. Figure 2 shows how these APIs define the head rotation and tilt angles. Since the APIs provide head rotation and tilt, a student is considered not looking straight when the rotation exceeds 20 degrees, i.e., beyond +20 degrees to the right or −20 degrees to the left (Step vii). The application then checks whether the student also exceeded this limit in the previous cycle: if this is the first time, the application waits for the next cycle/iteration; otherwise, it marks the student as inactive. So, for example, if a student is not looking straight in the first cycle, the system sets a flag warnTeacher to true, but if in the next cycle the student is found looking straight, the application marks them active and sets warnTeacher back to false.
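The two-cycle check described above can be sketched as a small state machine. This is an illustrative sketch of the described logic, not the actual Android code; the class and method names are hypothetical.

```python
ROTATION_LIMIT_DEG = 20.0  # beyond +/-20 degrees counts as not looking straight


class FaceMonitor:
    """Two-strike activeness check applied to each captured frame
    (one frame every 5 seconds in the described setup)."""

    def __init__(self):
        self.warn_teacher = False
        self.active = True

    def update(self, head_rotation_deg):
        looking_straight = abs(head_rotation_deg) <= ROTATION_LIMIT_DEG
        if looking_straight:
            # Looking straight again: clear the warning, mark active.
            self.warn_teacher = False
            self.active = True
        elif not self.warn_teacher:
            # First cycle over the limit: set the flag and wait one cycle.
            self.warn_teacher = True
        else:
            # Over the limit in two consecutive cycles: mark inactive.
            self.active = False
        return self.active
```

A single glance away therefore never marks a student inactive; only two consecutive off-limit readings do.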
4.1.2. Voice Data
The application also collects voice data to infer classroom activities such as lecturing or a question-answer session. The microphones of the existing smartphones in front of the teacher and students are used. The application collects audio data and performs preprocessing for noise removal on the student side. This cleaned data is used to measure the voice level of teachers and students in the classroom environment. If only the teacher’s voice is detected, the session is marked as a lecture; if there is a combination of both teacher and student voices within a defined threshold, the system considers it a discussion or question-answer session. For audio processing, it uses standard Android APIs to collect and extract features from the audio data. As the application measures the voice level, we used the MediaRecorder class from the Android APIs to get the maximum amplitude of the audio data. The student application sends this amplitude value to the teacher’s smartphone, and the teacher application compares the values captured from different students. As given in Algorithm 2, if the voice difference between two nearby students is noticeable, e.g., student A reports 35 dB while the next student (student B) reports 60 dB, the application checks the voice amplitude captured on the teacher’s smartphone. If the teacher’s voice is around 50 to 60 dB, the application infers that the teacher is lecturing while student B talks with someone. But if the teacher’s voice amplitude is less than 30 dB, the application considers that the student is asking a question and therefore marks that session as “discussion” or “question answering,” as shown in Algorithm 2.
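The decision rule from Algorithm 2 can be approximated as below. This is a simplified sketch: the thresholds mirror the worked example in the text (50–60 dB teacher voice, 30 dB quiet teacher, 60 dB loud student) but are assumptions, and the fallback "other" label is hypothetical.

```python
def classify_session(teacher_db, student_dbs,
                     lecture_min=50.0, quiet_teacher=30.0, loud_student=60.0):
    """Label a time window as lecture, discussion, or other from voice levels.

    teacher_db   -- amplitude measured on the teacher's smartphone (dB)
    student_dbs  -- amplitudes reported by the student smartphones (dB)
    """
    loudest_student = max(student_dbs, default=0.0)
    if teacher_db >= lecture_min:
        # The teacher's voice dominates: a loud student is likely chatting.
        return "lecture"
    if teacher_db < quiet_teacher and loudest_student >= loud_student:
        # Quiet teacher, loud student: treat as a question/discussion.
        return "discussion"
    return "other"
```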
4.1.3. Hand and Foot Movement Data
To find the teacher’s mobility and interaction in the classroom, the system captures hand and foot movement data to infer whether the teacher is standing still or moving. The system includes an off-the-shelf smartwatch running Wear OS (https://wearos.google.com/), worn by the teacher on the dominant hand. It captures data mainly from the IMU (Inertial Measurement Unit) sensors, including the accelerometer, gyroscope, and pedometer. The application uses Android APIs to interact with the sensors and captures data at a rate of 40 samples per second to correctly recognize gestures from the raw data. Further details of these sensors are given below. Algorithm 3 shows the steps for getting sensory data from the smartwatch.
(1) Accelerometer. An accelerometer measures the acceleration (change of velocity) along three axes (x, y, and z); see Figure 3. The application reads these acceleration values from the smartwatch accelerometer to detect hand gestures.
(2) Gyroscope. A gyroscope measures the angular velocity (orientation/tilt) of a device along its three dimensions (x, y, and z). Combining its readings with the accelerometer data allows hand gestures to be identified correctly.
(3) Pedometer. A pedometer is an electromechanical sensor used to detect and count a person’s steps. The application uses the step count to identify whether the teacher is standing still or moving toward the students in the classroom.
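At 40 samples per second, a five-second processing window (the window length used later on the teacher’s side) holds 200 samples per sensor. A minimal buffering sketch of this capture step (the class name and callback shape are assumptions; on the device this role is played by Android sensor listeners):

```python
SAMPLE_RATE_HZ = 40   # sampling rate stated in the text
WINDOW_SECONDS = 5    # processing window used on the teacher's side
WINDOW_SIZE = SAMPLE_RATE_HZ * WINDOW_SECONDS  # 200 samples per window

class SensorWindow:
    """Buffers raw (x, y, z) samples and emits fixed-size windows."""

    def __init__(self, on_window):
        self.samples = []
        self.on_window = on_window  # called with each completed window

    def add_sample(self, x, y, z):
        """Append one IMU sample; flush when a full window is collected."""
        self.samples.append((x, y, z))
        if len(self.samples) == WINDOW_SIZE:
            self.on_window(self.samples)
            self.samples = []
```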
4.1.4. Data Representation
To improve the user experience and reduce cognitive overload, the application shows a seating map on the screen that mimics the real classroom structure. When the teacher starts the application to monitor, he is prompted to input the number of rows and the number of seats in each row of the classroom (Figure 4(a)). Then, on starting the application in server mode, the teacher is presented with a grid of icons representing the student seating in the classroom (Figure 4(b)). Each icon changes according to the current student status; for example, when a student is not connected, the icon stays white, but when a new student gets connected, the application captures his seat number from the connection request packet and updates the icon from white to a colored one. The seat number decides which icon in the grid is updated.
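The seat-to-icon mapping can be sketched as follows. The text only says the seat number decides which icon is updated, so the 1-based, row-major numbering scheme and the status strings are assumptions:

```python
def seat_to_grid_index(seat_number, seats_per_row):
    """Map a 1-based, row-major seat number to (row, column) in the grid."""
    row = (seat_number - 1) // seats_per_row
    col = (seat_number - 1) % seats_per_row
    return row, col

def update_seat(grid, seat_number, status):
    """Replace a seat's 'white' (disconnected) icon with a status colour."""
    row, col = seat_to_grid_index(seat_number, len(grid[0]))
    grid[row][col] = status

# a 2-row, 3-seats-per-row classroom, all seats initially disconnected
grid = [["white"] * 3 for _ in range(2)]
update_seat(grid, 5, "active")  # seat 5 sits in row 2, seat 2
```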
After collecting multimodal data from the connected devices, all the data is combined on the teacher’s smartphone for final calculation and result generation. The system receives different features regarding face and voice data from the student side. The application continuously updates the seat-map grid to show the latest data on the screen. For example, if a student’s voice level is less than 40 dB (see Algorithm 2, Step 3), or if the student’s face is not detected or the head direction exceeds 20 degrees (see Algorithm 1), the application provides real-time feedback to the teacher.
On the teacher side, after receiving this multimodal data from all students, the application first calculates the class status CS from the number of active students (marked using Algorithm 1) and the total number of students using Equation (1). Similarly, the overall classroom voice level VL is calculated using Equation (2). Finally, by substituting the values of CS and VL into Equation (3), the teacher’s current activeness level is calculated. The application continuously recalculates the activeness value and updates a progress bar on the teacher’s smartphone to provide real-time feedback, as shown in Figure 5(a).
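This calculation chain can be sketched as follows. Equation (1) is the active-over-total ratio stated in the text; the exact forms of Equations (2) and (3) are not reproduced here, so the normalised mean voice level and the equal-weight combination below are assumptions:

```python
def class_status(active_students, total_students):
    """Equation (1): fraction of students currently marked active."""
    return active_students / total_students

def voice_level(student_levels_db, max_db=100.0):
    """Equation (2) sketch: overall classroom voice level, taken here as
    the mean student amplitude normalised to [0, 1] (an assumption)."""
    return sum(student_levels_db) / len(student_levels_db) / max_db

def activeness(cs, vl, w_cs=0.5, w_vl=0.5):
    """Equation (3) sketch: the text only says CS and VL are substituted
    into Equation (3); an equal-weight combination is assumed here."""
    return w_cs * cs + w_vl * vl
```

For example, 9 active students out of 12 with student levels of 40 and 60 dB give CS = 0.75, VL = 0.5, and an activeness of 0.625 under these assumptions.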
The system also includes a smartwatch (Asus ZenWatch 2) worn by the teacher to capture his hand and foot movement. The application captures sensor data in five-second windows and processes it on the teacher’s smartphone to obtain the number of steps taken and to process hand gesture data. If the number of steps in three consecutive time windows is less than 1 or greater than 3, the system considers the teacher’s foot movement suboptimal for good lecture quality. In addition, it counts the number of steps during a classroom session, which is shown in the final report presented at the end of the class together with a detailed summary of the learning session (Figure 5(b)).
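The three-window movement rule can be sketched as follows (the 1–3 steps-per-window range and the three-window check come from the text; the function shape is illustrative):

```python
STEP_MIN, STEP_MAX = 1, 3   # acceptable steps per five-second window
CONSECUTIVE_WINDOWS = 3     # number of windows checked in a row

def movement_flag(step_counts):
    """Return True when the last three windows all fall outside the
    1-3 steps-per-window range, i.e. the teacher is either standing
    still or moving too much for good lecture quality."""
    if len(step_counts) < CONSECUTIVE_WINDOWS:
        return False
    recent = step_counts[-CONSECUTIVE_WINDOWS:]
    return all(n < STEP_MIN or n > STEP_MAX for n in recent)
```

Three still windows (`[0, 0, 0]`) or three hectic ones (`[4, 5, 6]`) raise the flag; a normal mix (`[0, 2, 0]`) does not.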
5. Experimental Setup
This section describes the environment setup used for our experiments during actual classroom sessions.
5.1. Classroom/Environment Setup
Figure 6 depicts the layout and the positions of the teacher and students in the classroom during the experiment. Each smartphone was placed in front of a student using a specialized smartphone jacket installed on the back of the chair in front of them. Figure 7 shows a chair with such a jacket installed at the back to capture the student’s face and audio data. Five seating positions were selected for students with smartphones, whereas the teacher was equipped with a smartphone and a smartwatch (Figure 6). The teacher stands and moves during the classroom session; therefore, his smartphone is placed in a neck holder, making it easy to move while still viewing real-time statistics on the smartphone screen. In addition, the teacher wears the smartwatch on his dominant hand to capture hand movements and count steps during the class using the built-in pedometer of the smartwatch.
5.2. Display Seating Map
Real-world classroom sizes vary, and the system must show a student’s exact position in the classroom. Therefore, to obtain the exact indoor location, the application uses QR codes to determine a student’s exact seat, unlike some existing solutions that use RFID for indoor location, which is costly and requires technical assistance. A QR code is placed in front of each seat to provide the seat number and position in the classroom, as shown in Figure 8.
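Decoding the seat information from the QR payload can be sketched as follows. The text only states that each code encodes the seat number and position, so the payload format here is purely illustrative:

```python
def parse_seat_qr(payload):
    """Parse an illustrative QR payload like 'ROW:2;SEAT:3' into (row, seat).

    The 'KEY:value;KEY:value' format is an assumption; any encoding that
    recovers the row and seat number would serve the same purpose.
    """
    fields = dict(part.split(":") for part in payload.split(";"))
    return int(fields["ROW"]), int(fields["SEAT"])
```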
For the evaluation of the proposed system, we conducted questionnaire-based surveys. We first administered a pretask questionnaire to the participating teachers to learn how many of them had used an orchestration solution before. After that, we conducted experiments in several classroom sessions to try our Android application in real classroom scenarios. Finally, we administered a posttask questionnaire to collect participants’ responses after using the Android application. The statistical data from both questionnaires were gathered and coded in SPSS version 21 for further analysis and significance testing.
6. Results and Discussion
After implementing the proposed system, we conducted several experiments in different classroom sessions over one month to better understand the impact of our developed Android application. This section discusses the results and findings obtained from the pre- and posttask questionnaires.
6.1. The Demographics of Participants
For the experiments, we asked several teachers and students to participate voluntarily and use the Android application on their smartphones during classroom sessions. First, we explained to all participants how the system works and how it provides a more engaging user experience using low-cost off-the-shelf devices. Of the approximately 30 teachers approached, 18 (12 males and 6 females) agreed to use the application and contribute their feedback voluntarily. Similarly, of the 40 students asked, 22 agreed to participate, of whom 17 were male and 5 were female, aged between 24 and 28 years (see Table 2).
6.2. The Pretask Findings
In the pretask questionnaire, we asked the participants whether they had used any teacher orchestration solution before and about their experience with those solutions/tools. As shown in Figure 9, around 80% of the participants had not used any orchestration tool before and were not familiar with teacher orchestration. The other 20% were mostly teachers who were also unfamiliar with teacher orchestration but had used MOOCs to assist their students in the learning process.
We further asked those teachers whether they were satisfied after using those applications for managing their classroom activities. As a result, only 30% said they were satisfied, while 70% said the results were unsatisfactory (Figure 10).
6.3. The Posttask Findings
After the experimental classroom sessions, we conducted a posttask questionnaire-based survey. The participants were asked about their experience and observations after using the Android application. In addition, they were asked whether they felt any improvement and how much the smartphone-based orchestration solution helped create a more engaging learning experience. These questions are given in Table 3.
After collecting their responses, we coded all the recorded data in SPSS version 21 and performed a paired sample t-test for these questions and variables. The first question in our survey asked how easy the users found the proposed solution. As shown in Figure 11, around 50% of the participants strongly agreed that the application was easy to use because a user could join and start with only 2 to 3 clicks. In contrast, 10% rated the easiness as neutral and 5% disagreed.
The proposed solution’s primary purpose is to improve teacher performance and increase learning outcomes. Table 4 shows the statistical data gathered from participating students presenting the improvements made after using the proposed solution. About 45% of the students strongly agreed, and 35% agreed that the application improved performance by presenting valuable data to the teacher, which supported him in understanding the entire classroom’s current status. The same data is also represented in Figure 12 using a bar graph.
Along with improving teacher performance, we were also interested in any negative factors or downsides of the proposed system. Therefore, we asked the participants whether the application produced any disturbance or distracted them during the classroom session. Only 35% of the participants reported a slight annoyance (Figure 13), mainly because the teacher was wearing a neck holder stand for his smartphone, and the majority of this group were teachers. In contrast, most students, around 35%, disagreed that there was any disturbance, and only 20% marked it as neutral. Of course, a neck holder in the classroom might create a slightly negative impression; it was used only to let the teacher view data easily on his smartphone. It could be replaced with a monitor screen installed behind the students, giving the teacher more freedom to move, but that would add extra cost, whereas the primary purpose was to use existing devices to create a low-cost solution.
We also investigated how much the proposed smartphone-based orchestration solution helped create an engaging experience in the classroom. The majority of the participants, i.e., 90%, accepted that the proposed solution successfully created an engaging experience in their learning environments, while only 10% answered this question as neutral; none of the participants disagreed with the engaging impact created by our proposed solution (Figure 14).
Similarly, to assess the impact of using low-cost smartphone devices rather than large and expensive infrastructures, we asked the participants how satisfied they were with using smartphones for teacher orchestration; 35% strongly agreed and 50% agreed that they were satisfied with using off-the-shelf smartphone devices (Figure 15). While 10% responded neutrally, only 5% disagreed that using their smartphones is a good idea, citing privacy concerns.
Now, we compare this satisfaction result with the pretask results, where we asked the participants about their satisfaction level after using the existing teacher orchestration solutions. Therefore, we performed a paired sample t-test with the following null and alternative hypotheses:
H0: the difference between the participants’ satisfaction levels is not significant.
H1: the difference between the participants’ satisfaction levels is significant.
Table 5 shows the results generated with a 95% confidence interval, where the p value is calculated as 0.007. Since this is less than 0.05, we can reject the null hypothesis and accept the alternative hypothesis. Hence, the participants are more satisfied with the proposed smartphone-based teacher orchestration solution than with the available solutions.
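The paired-sample t statistic behind this test can be computed from per-participant pre/post scores as follows (a sketch with made-up Likert-style data; the study’s actual responses and SPSS output are not reproduced here):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(pre, post):
    """Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-participant differences (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# illustrative satisfaction scores (not the study's data)
pre = [2, 3, 2, 3, 2]
post = [4, 4, 5, 3, 4]
t = paired_t_statistic(pre, post)
# t is then compared against the t-distribution with n - 1 degrees of
# freedom to obtain the p value reported by SPSS.
```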
Lastly, we asked whether this application should be used in their other classrooms. Given the satisfaction level, the response to this question was also very encouraging: around 70% of the students recommended using this application in other classrooms for teacher orchestration (see Figure 16), 15% marked this question as neutral, while only 10% disagreed with utilizing this application.
This study presented the state of the art in teacher orchestration and provided a more engaging student experience in a smart classroom. It evaluated several learning pedagogies and their effects on different stakeholders, including students, teachers, and administrators. The study proposed a solution that uses off-the-shelf devices for teacher orchestration in a smart learning environment. The solution captures data from the teacher and students, where each device processes its own data and sends the results to the teacher’s smartphone to provide real-time results. We also evaluated the significance of the proposed solution by using the application in real classrooms and gathering participants’ feedback through a brief questionnaire survey. The results were significantly positive and encourage smartphone-based orchestration solutions. Pose recognition significantly impacts the study of body language; therefore, processing a teacher’s pose in a learning session can open numerous opportunities in teacher orchestration.
Data Availability
The data that support the findings of this study are available upon request from the first author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
J. Chong, S. See, L. L.-H. Seah, S. L. Koh, Y.-L. Theng, and H. B. Duh, “Ubiquitous computing history, development, and scenarios,” Ubiquitous Computing: Design, Implementation and Usability, IGI Global, pp. 1–8, 2008.
I. Alam, S. Khusro, and M. Naeem, “A review of smart TV: past, present, and future,” in 2017 International Conference on Open Source Systems & Technologies (ICOSST), pp. 35–41, 2017.
R. Rawassizadeh, E. Momeni, C. Dobbins, P. Mirza-Babaei, and R. Rahnamoun, “Lesson learned from collecting quantified self information via mobile and wearable devices,” Journal of Sensor and Actuator Networks, vol. 4, no. 4, pp. 315–335, 2015.
B. K. Engen, T. H. Giæver, and L. Mifsud, “Teaching and learning with wearable technologies,” in E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, pp. 1057–1067, Vancouver, British Columbia, Canada, 2017.
J. Wetzel, H. Burkhardt, S. Cheema et al., “A preliminary evaluation of the usability of an AI-infused orchestration system,” in International Conference on Artificial Intelligence in Education, pp. 379–383, 2018.
T.-W. Chan, “Sharing sentiment and wearing a pair of ‘field spectacles’ to view classroom orchestration,” Computers & Education, vol. 69, pp. 514–516, 2013.
J. A. Muñoz-Cristóbal, I. M. Jorrín-Abellán, J. I. Asensio-Perez, A. Martinez-Mones, L. P. Prieto, and Y. Dimitriadis, “Supporting teacher orchestration in ubiquitous learning environments: a study in primary education,” IEEE Transactions on Learning Technologies, vol. 8, no. 1, pp. 83–97, 2015.
A. Atabekov, “Internet of things-based smart classroom environment: student research abstract,” in Proceedings of the 31st Annual ACM Symposium on Applied Computing, pp. 746–747, 2016.
A. Uzelac, N. Gligorić, and S. Krčo, “System for recognizing lecture quality based on analysis of physical parameters,” Telematics and Informatics, vol. 35, no. 3, pp. 579–594, 2018.
D. Rico-Bautista, Y. Medina-Cárdenas, and C. D. Guerrero, “Smart university: a review from the educational and technological view of Internet of things,” in International Conference on Information Technology & Systems, pp. 427–440, 2019.
S. Khusro, M. Naeem, M. A. Khan, and I. Alam, “There is no such thing as free lunch: an investigation of bloatware effects on smart devices,” Journal of Information Communication Technologies and Robotic Applications, vol. 8, pp. 20–30, 2018.
M. Khan, S. Khusro, I. Alam, S. Ali, and I. Khan, “Perspectives on the design, challenges, and evaluation of smart TV user interfaces,” Scientific Programming, vol. 2022, 14 pages, 2022.
I. Khan, S. Khusro, N. Ullah, and S. Ali, “AutoLog: toward the design of a vehicular lifelogging framework for capturing, storing, and visualizing LifeBits,” IEEE Access, vol. 8, pp. 136546–136559, 2020.
C.-W. Shen, Y.-C. J. Wu, and T.-C. Lee, “Developing a NFC-equipped smart classroom: effects on attitudes toward computer science,” Computers in Human Behavior, vol. 30, pp. 731–738, 2014.
K. Anwar, T. Rahman, A. Zeb et al., “Improving the convergence period of adaptive data rate in a long range wide area network for the Internet of things devices,” Energies, vol. 14, no. 18, p. 5614, 2021.
B. A. Schwendimann, M. J. Rodriguez-Triana, A. Vozniuk et al., “Perceiving learning at a glance: a systematic literature review of learning dashboard research,” IEEE Transactions on Learning Technologies, vol. 10, no. 1, pp. 30–41, 2017.
J. Bacca, S. Baldiris, R. Fabregat, and S. Graf, “Augmented reality trends in education: a systematic review of research and applications,” Journal of Educational Technology and Society, vol. 17, no. 4, pp. 133–149, 2014.
F. Herpich, F. B. Nunes, G. Petri, and L. M. R. Tarouco, “How mobile augmented reality is applied in education? A systematic literature review,” Creative Education, vol. 10, no. 7, pp. 1589–1627, 2019.
A. Ezenwoke and O. Ezenwoke, Wearable Technology: Opportunities and Challenges for Teaching and Learning in Higher Education in Developing Countries, IATED Digital Library, 2016.
L. P. Prieto, K. Sharma, P. Dillenbourg, and M. Jesús, “Teaching analytics: towards automatic extraction of orchestration graphs using wearable sensors,” in Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, pp. 148–157, 2016.
Y.-T. Sung, K.-E. Chang, and T.-C. Liu, “The effects of integrating mobile devices with teaching and learning on students’ learning performance: a meta-analysis and research synthesis,” Computers & Education, vol. 94, pp. 252–275, 2016.
D. Bebell and L. O'Dwyer, “Educational outcomes and research from 1:1 computing settings,” The Journal of Technology, Learning and Assessment, vol. 9, no. 1, 2010.
H. Fleischer, “What is our current understanding of one-to-one computer projects: a systematic narrative research review,” Educational Research Review, vol. 7, no. 2, pp. 107–122, 2012.
A. A. Zucker and D. Light, “Laptop programs for students,” Science, vol. 323, no. 5910, pp. 82–85, 2009.
T.-C. Liu, Y.-C. Lin, M.-J. Tsai, and F. Paas, “Split-attention and redundancy effects on mobile learning in physical environments,” Computers & Education, vol. 58, no. 1, pp. 172–180, 2012.
J. Roschelle, K. Rafanan, R. Bhanot et al., “Scaffolding group explanation and feedback with handheld technology: impact on students’ mathematics learning,” Educational Technology Research and Development, vol. 58, no. 4, pp. 399–419, 2010.
E. Klopfer, J. Sheldon, J. Perry, and V. H. Chen, “Ubiquitous games for learning (UbiqGames): Weatherlings, a worked example,” Journal of Computer Assisted Learning, vol. 28, no. 5, pp. 465–476, 2012.
R. Tatum, M. Bays, J. Hyland, and B. Hartman, “Traffic monitoring using an adaptive sensor power scheduling algorithm,” SN Applied Sciences, vol. 1, no. 12, p. 1552, 2019.
Z. Wang, F. Wang, T. Brown, J. Xue, and J. Zhang, “Traffic aware wireless visual sensor network deployment for 3D indoor monitoring,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), pp. 1–6, 2019.
M. Sarrab, S. Pulparambil, N. Kraiem, and M. Al-Badawi, “Real-time traffic monitoring systems based on magnetic sensor integration,” in International Conference on Smart City and Informatization, pp. 447–460, 2019.
C. Ruan, Y. Wang, X. Ma, and H. Kang, “Road meteorological condition sensor based on multi-wavelength light detection,” in Third International Conference on Photonics and Optical Engineering, p. 110521F, 2019.
W. Li, M. Burrow, and Z. Li, “Automatic road condition assessment by using point laser sensor,” in 2018 IEEE SENSORS, pp. 1–4, 2018.
T. Sekizawa, M. Mori, and R. Kanbayashi, Tire-Mounted Sensor and Road Surface Condition Estimation Apparatus Including the Same, Google Patents, 2019.
Z. Yang, X. Shi, and J. Chen, “Optimal coordination of mobile sensors for target tracking under additive and multiplicative noises,” IEEE Transactions on Industrial Electronics, vol. 61, no. 7, pp. 3459–3468, 2013.
F. Mao, K. Khamis, S. Krause, J. Clark, and D. M. Hannah, “Low-cost environmental sensor networks: recent advances and future directions,” Frontiers in Earth Science, vol. 7, p. 221, 2019.
M. Haghi, R. Stoll, and K. Thurow, “Pervasive and personalized ambient parameters monitoring: a wearable, modular, and configurable watch,” IEEE Access, vol. 7, pp. 20126–20143, 2019.
M. J. González-Campo, J. E. Serrano-Casteneda, and J. C. Martínez-Santos, A Proposal for an Air Quality Monitoring System for Cartagena de Indias, Latin American and Caribbean Consortium of Engineering Institutions (LACCEI), 2019.
S.-B. Kwon, Y.-M. Cho, D.-S. Park, E.-Y. Park, S.-Y. Kim, and M.-Y. Jung, “Temperature and humidity monitoring using ubiquitous sensor network in railway cabin,” in Proceedings of the KSR Conference, pp. 948–951, 2008.
H. Wang, D. Li, C. Wu, and X. Yu, “Depth perception of moving objects via structured light sensor with unstructured grid,” Results in Physics, vol. 13, p. 102163, 2019.
A. J. Golparvar and M. K. Yapici, “Graphene smart textile-based wearable eye movement sensor for electro-ocular control and interaction with objects,” Journal of the Electrochemical Society, vol. 166, no. 9, pp. B3184–B3193, 2019.
M. A. Wohl and J. J. O'Hagan, “Method, apparatus, and computer program product for combined tag and sensor based performance modeling using real-time data for proximity and movement of objects,” Tech. Rep., Google Patents, 2015.
S. M. Kumar and L. Lakshmanan, “A situation emergency building navigation disaster system using wireless sensor networks,” in 2018 International Conference on Communication and Signal Processing (ICCSP), pp. 0378–0382, 2018.
H.-B. Choi, K.-W. Lim, and Y.-B. Ko, “Sensor localization system for AR-assisted disaster relief applications,” in Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services, pp. 526–527, 2019.
C. T. Ulmer, Weather Sensor Including Vertically Stacked Multi-Power Modules, Google Patents, 2019.
M. Warschauer, “A teacher’s place in the digital divide,” Yearbook of the National Society for the Study of Education, vol. 106, no. 2, pp. 147–166, 2007.
M. Tissenbaum, C. Matuk, M. Berland et al., Real-Time Visualization of Student Activities to Support Classroom Orchestration, International Society of the Learning Sciences, Singapore, 2016.
H. Matsuno, H. Ogasawara, A. Noguchi, K. Hasegawa, and R. Wajima, “Effective lecturer-student microphone use in a lecture room: a useful approach for teaching and learning pharmaceutical science English,” Journal of Academic Society for Quality of Life, vol. 1, no. 1, pp. 21–25, 2015.
I. Khan, S. Khusro, S. Ali, and A. Din, “Low dose aspirin like analgesic and anti-inflammatory activities of mono-hydroxybenzoic acids in stressed rodents,” Proceedings of the Pakistan Academy of Sciences, vol. 148, no. 1, pp. 53–62, 2016.
B. Ngoc Anh, N. Tung Son, P. Truong Lam et al., “A computer-vision based application for student behavior monitoring in classroom,” Applied Sciences, vol. 9, no. 22, p. 4729, 2019.
R. Kulkarni, “Real time automated invigilator in classroom monitoring using computer vision,” in 2nd International Conference on Advances in Science & Technology (ICAST), p. 3367715, 2019.
D. Canedo, A. Trifan, and A. J. Neves, “Monitoring students’ attention in a classroom through computer vision,” in International Conference on Practical Applications of Agents and Multi-Agent Systems, pp. 371–378, 2018.
E. Alepis, M. Virvou, and K. Kabassi, “Affective student modeling based on microphone and keyboard user actions,” in Sixth IEEE International Conference on Advanced Learning Technologies (ICALT'06), pp. 139–141, 2006.
M. C. Brady, S. D'Mello, N. Blanchard, A. Olney, and M. Nystrand, “Evaluating microphones and microphone placement for signal processing and automatic speech recognition of teacher-student dialog,” The Journal of the Acoustical Society of America, vol. 136, no. 4, p. 2215, 2014.
Y.-M. Huang, C.-C. Hsu, Y.-N. Su, and C.-J. Liu, “Empowering classroom observation with an e-book reading behavior monitoring system using sensing technologies,” Interacting with Computers, vol. 26, no. 4, pp. 372–387, 2014.
W. Jintao, J. Cheng, L. Kai, N. Yiming, R. Yinglu, and W. Jing, “Study on energy efficiency of new intelligent classroom lighting control system,” China Computer & Communication, vol. 13, no. 8, p. 21, 2017.
I. Khan, S. Khusro, S. Ali, and J. Ahmad, “Sensors are power hungry: an investigation of smartphone sensors impact on battery power from lifelogging perspective,” Bahria University Journal of Information & Communication Technologies (BUJICT), vol. 9, no. 2, 2016.
I. Khan, S. Ali, and S. Khusro, “Smartphone-based lifelogging: an investigation of data volume generation strength of smartphone sensors,” in International Conference on Simulation Tools and Techniques, pp. 63–73, 2019.
N. Harada, M. Kimura, T. Yamamoto, and Y. Miyake, “System for measuring teacher–student communication in the classroom using smartphone accelerometer sensors,” in International Conference on Human-Computer Interaction, pp. 309–318, 2017.
M. J. Schloneger and E. J. Hunter, “Assessments of voice use and voice quality among college/university singing students ages 18-24 through ambulatory monitoring with a full accelerometer signal,” Journal of Voice, vol. 31, no. 1, pp. 124.e21–124.e30, 2017.
J. C.-Y. Sun, K.-Y. Chang, and Y.-H. Chen, “GPS sensor-based mobile learning for English: an exploratory study on self-efficacy, self-regulation and student achievement,” Research and Practice in Technology Enhanced Learning, vol. 10, no. 1, p. 23, 2015.
R. M. Carini, G. D. Kuh, and S. P. Klein, “Student engagement and student learning: testing the linkages,” Research in Higher Education, vol. 47, no. 1, pp. 1–32, 2006.
G. Hagenauer, T. Hascher, and S. E. Volet, “Teacher emotions in the classroom: associations with students’ engagement, classroom discipline and the interpersonal teacher-student relationship,” European Journal of Psychology of Education, vol. 30, no. 4, pp. 385–403, 2015.
M. P. Wenderoth, Monitoring the Level of Active Learning in Your Classroom and Its Impact on Your Students, The Western Conference on Science Education, Western University, Canada, 2019.
H. Coates, “The value of student engagement for higher education quality assurance,” Quality in Higher Education, vol. 11, no. 1, pp. 25–36, 2005.
G. D. Kuh, Excerpt from High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter, Association of American Colleges and Universities, 2008.
G. D. Kuh, “Promoting student success: what campus leaders can do. Occasional paper no. 1,” National Survey of Student Engagement, vol. 2005, 2005.
G. D. Kuh, “What student affairs professionals need to know about student engagement,” Journal of College Student Development, vol. 50, no. 6, pp. 683–706, 2009.
G. D. Kuh, “What we’re learning about student engagement from NSSE: benchmarks for effective educational practices,” Change: The Magazine of Higher Learning, vol. 35, no. 2, pp. 24–32, 2003.
M. May, S. George, and P. Prévôt, “TrAVis to enhance students’ self-monitoring in online learning supported by computer-mediated communication tools,” Computer Information Systems and Industrial Management Applications, vol. 3, pp. 623–634, 2011.
C.-C. Hsu, H.-C. Chen, Y.-N. Su, K.-K. Huang, and Y.-M. Huang, “Developing a reading concentration monitoring system by applying an artificial bee colony algorithm to e-books in an intelligent classroom,” Sensors, vol. 12, no. 10, pp. 14158–14178, 2012.
M. Raca, L. Kidzinski, and P. Dillenbourg, “Translating head motion into attention - towards processing of student’s body-language,” in Proceedings of the 8th International Conference on Educational Data Mining, 2015.
P. Blatchford, P. Bassett, and P. Brown, “Examining the effect of class size on classroom engagement and teacher-pupil interaction: differences in relation to pupil prior attainment and primary vs. secondary schools,” Learning and Instruction, vol. 21, no. 6, pp. 715–730, 2011.
R. Fu, D. Wang, D. Li, and Z. Luo, “University classroom attendance based on deep learning,” in 2017 10th International Conference on Intelligent Computation Technology and Automation (ICICTA), pp. 128–131, 2017.
A. Haq, S. Khusro, and I. Alam, “Towards better recognition of age, gender, and number of viewers in a smart TV environment,” in 2021 Mohammad Ali Jinnah University International Conference on Computing (MAJICC), pp. 1–6, 2021.View at: Google Scholar
K. Krafka, A. Khosla, P. Kellnhofer et al., “Eye tracking for everyone,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2176–2184, 2016.View at: Google Scholar
R. Stiefelhagen and J. Zhu, “Head orientation and gaze direction in meetings,” CHI'02 Extended Abstracts on Human Factors in Computing Systems, ACM Digital Library, pp. 858-859, 2002.View at: Google Scholar
M. Raca and P. Dillenbourg, “System for assessing classroom attention,” in Proceedings of 3rd International Learning Analytics & Knowledge Conference, 2013.View at: Google Scholar
D. Dinesh and K. Bijlani, “Student analytics for productive teaching/learning,” in 2016 International Conference on Information Science (ICIS), pp. 97–102, 2016.View at: Google Scholar
A. van Leeuwen and N. Rummel, “Orchestration tools to support the teacher during student collaboration: a review,” Unterrichtswissenschaft, vol. 47, no. 2, pp. 143–158, 2019.
D. Sapargaliyev, “Wearables in education: expectations and disappointments,” in International Conference on Technology in Education, pp. 73–78, 2015.
A. Althunibat, “Determining the factors influencing students’ intention to use m-learning in Jordan higher education,” Computers in Human Behavior, vol. 52, pp. 65–71, 2015.
Á. Suárez, M. Specht, F. Prinsen, M. Kalz, and S. Ternier, “A review of the types of mobile activities in mobile inquiry-based learning,” Computers & Education, vol. 118, pp. 38–55, 2018.
V. Subbarao, K. Srinivas, and R. Pavithr, “A survey on internet of things based SMART, digital green and intelligent campus,” in 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), pp. 1–6, 2019.
S. Mahmood, S. Palaniappan, R. Hasan, K. U. Sarker, A. Abass, and P. M. Rajegowda, “Raspberry Pi and role of IoT in education,” in 2019 4th MEC International Conference on Big Data and Smart City (ICBDSC), pp. 1–6, 2019.
K. Anwar, T. Rahman, A. Zeb, I. Khan, M. Zareei, and C. Vargas-Rosales, “RM-ADR: resource management adaptive data rate for mobile application in LoRaWAN,” Sensors, vol. 21, no. 23, p. 7980, 2021.
N. Gligorić, A. Uzelac, and S. Krco, “Smart classroom: real-time feedback on lecture quality,” in 2012 IEEE International Conference on Pervasive Computing and Communications Workshops, pp. 391–394, 2012.
N. Gligorić, T. Dimčić, S. Krčo, V. Dimčić, J. Vasković, and I. Vojinović, “Internet of things enabled LED lamp controlled by satisfaction of students in a classroom,” IPSI BgD Internet Research Society, New York, 2014.
S. Khusro, B. Shah, I. Khan, and S. Rahman, “Haptic feedback to assist blind people in indoor environment using vibration patterns,” Sensors, vol. 22, no. 1, p. 361, 2022.
Z. Mirza and M. N. Brohi, “An in-depth analysis on integrating campus radio frequency identification system on clouds for enhancing security,” Journal of Computer Science, vol. 9, no. 12, pp. 1710–1714, 2013.
M. W. Sari, P. W. Ciptadi, and R. H. Hardyanto, “Study of smart campus development using internet of things technology,” in IOP Conference Series: Materials Science and Engineering, article 012032, 2017.
O. Said and Y. Albagory, “Internet of things-based free learning system: performance evaluation and communication perspective,” IETE Journal of Research, vol. 63, no. 1, pp. 31–44, 2017.
L.-S. Huang, J.-Y. Su, and T.-L. Pao, “A context aware smart classroom architecture for smart campuses,” Applied Sciences, vol. 9, no. 9, p. 1837, 2019.
M. Akçayır and G. Akçayır, “Advantages and challenges associated with augmented reality for education: a systematic review of the literature,” Educational Research Review, vol. 20, pp. 1–11, 2017.
M. Hayat, R. Hasan, S. I. Ali, and M. Kaleem, “Active learning and student engagement using activity based learning,” in 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS), pp. 201–204, 2017.
H. Elkoubaiti and R. Mrabet, “How are augmented and virtual reality used in smart classrooms?” in Proceedings of the 2nd International Conference on Smart Digital Environment, pp. 189–196, 2018.
J. A. Munoz-Cristóbal, V. Gallego-Lema, H. F. Arribas-Cubero, J. I. Asensio-Pérez, and A. Martínez-Monés, “Game of blazons: helping teachers conduct learning situations that integrate web tools and multiple types of augmented reality,” IEEE Transactions on Learning Technologies, vol. 11, no. 4, pp. 506–519, 2018.
P. Kosmas and P. Zaphiris, “Embodied interaction in language learning: enhancing students’ collaboration and emotional engagement,” in IFIP Conference on Human-Computer Interaction, pp. 179–196, 2019.
T. Khan, K. Johnston, and J. Ophoff, “The impact of an augmented reality application on learning motivation of students,” Advances in Human-Computer Interaction, vol. 2019, Article ID 7208494, 14 pages, 2019.
I. Jivet, M. Scheffel, M. Specht, and H. Drachsler, “License to evaluate: preparing learning analytics dashboards for educational practice,” in Proceedings of the 8th International Conference on Learning Analytics and Knowledge, pp. 31–40, 2018.
M. Korozi, A. Leonidis, M. Antona, and C. Stephanidis, “LECTOR: towards reengaging students in the educational process inside smart classrooms,” in International Conference on Intelligent Human Computer Interaction, pp. 137–149, 2017.
E. Stefanidi, M. Korozi, A. Leonidis, M. Doulgeraki, and M. Antona, “Educator-oriented tools for managing the attention-aware intelligent classroom,” in The Tenth International Conference on Mobile, Hybrid, and On-line Learning, 2018.
K. Holstein, B. M. McLaren, and V. Aleven, “Intelligent tutors as teachers’ aides: exploring teacher needs for real-time analytics in blended classrooms,” in Proceedings of the Seventh International Learning Analytics & Knowledge Conference, pp. 257–266, 2017.
M. Z. Iqbal, “Enhancing classroom engagement through smart phone based paper marking solution,” in Proceedings of the 6th International Conference on the Internet of Things, pp. 161–162, 2016.
S. A. Viswanathan and K. VanLehn, “Using the tablet gestures and speech of pairs of students to classify their collaboration,” IEEE Transactions on Learning Technologies, vol. 11, no. 2, pp. 230–242, 2018.
S. Yu-Gang, W. Jia-Bao, and H. Feng, “Research on mobile learning model based on internet of things,” DEStech Transactions on Social Science, Education and Human Science, no. ESHD, 2016.
S. Budi, O. Karnalim, E. D. Handoyo, S. Santoso, H. Toba, H. Nguyen et al., “IBAtS — image based attendance system: a low cost solution to record student attendance in a classroom,” in 2018 IEEE International Symposium on Multimedia (ISM), pp. 259–266, 2018.
S. Yang, Y. Song, H. Ren, and X. Huang, “An automated student attendance tracking system based on voiceprint and location,” in 2016 11th International Conference on Computer Science & Education (ICCSE), pp. 214–219, 2016.
N. Gligoric, A. Uzelac, S. Krco, I. Kovacevic, and A. Nikodijevic, “Smart classroom system for detecting level of interest a lecture creates in a classroom,” Journal of Ambient Intelligence and Smart Environments, vol. 7, no. 2, pp. 271–284, 2015.
M. Viilo, P. Seitamaa-Hakkarainen, and K. Hakkarainen, “Long-term teacher orchestration of technology-mediated collaborative inquiry,” Scandinavian Journal of Educational Research, vol. 62, no. 3, pp. 407–432, 2018.
P. J. Donnelly, N. Blanchard, B. Samei et al., “Multi-sensor modeling of teacher instructional segments in live classrooms,” in Proceedings of the 18th ACM International Conference on Multimodal Interaction, pp. 177–184, Tokyo, Japan, 2016.
P. J. Donnelly, N. Blanchard, B. Samei et al., “Automatic teacher modeling from live classroom audio,” in Proceedings of the 2016 Conference on User Modeling, Adaptation and Personalization, pp. 45–53, 2016.
P. J. Donnelly, N. Blanchard, A. M. Olney, S. Kelly, M. Nystrand, and S. K. D'Mello, “Words matter: automatic detection of teacher questions in live classroom discourse using linguistics, acoustics, and context,” in Proceedings of the Seventh International Learning Analytics & Knowledge Conference, pp. 218–227, 2017.
S. Liu, Y. Chen, H. Huang, L. Xiao, and X. Hei, “Towards smart educational recommendations with reinforcement learning in classroom,” in 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), pp. 1079–1084, 2018.
R. Bdiwi, C. de Runz, S. Faiz, and A. A. Cherif, “Smart learning environment: teacher’s role in assessing classroom attention,” Research in Learning Technology, vol. 27, 2019.
I. Alam, S. Khusro, and M. Khan, “Personalized content recommendations on smart TV: challenges, opportunities, and future research directions,” Entertainment Computing, vol. 38, p. 100418, 2021.
I. Alam and S. Khusro, “Tailoring recommendations to groups of viewers on smart TV: a real-time profile generation approach,” IEEE Access, vol. 8, pp. 50814–50827, 2020.
M. Jan, S. Khusro, I. Alam, I. Khan, and B. Niazi, “Interest-based content clustering for enhancing searching and recommendations on smart TV,” Wireless Communications and Mobile Computing, vol. 2022, Article ID 3896840, 14 pages, 2022.
I. Alam, S. Khusro, and M. Khan, “Factors affecting the performance of recommender systems in a smart TV environment,” Technologies, vol. 7, no. 2, p. 41, 2019.
H. Wang, Z. Pi, and W. Hu, “The instructor’s gaze guidance in video lectures improves learning,” Journal of Computer Assisted Learning, vol. 35, no. 1, pp. 42–50, 2019.
B. Garcia, S. L. Chu, B. Nam, and C. Banigan, “Wearables for learning: examining the smartwatch as a tool for situated science reflection,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, paper 256, Montreal, QC, Canada, 2018.
R. Quintana, C. Quintana, C. Madeira, and J. D. Slotta, “Keeping watch: exploring wearable technology designs for K-12 teachers,” in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 2272–2278, San Jose, California, USA, 2016.
Y. Lu, S. Zhang, Z. Zhang, W. Xiao, and S. Yu, “A framework for learning analytics using commodity wearable devices,” Sensors, vol. 17, no. 6, p. 1382, 2017.
H. Zheng and V. G. Motti, “WeLi: a smartwatch application to assist students with intellectual and developmental disabilities,” in Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 355–356, 2017.
J. Hernandez and R. W. Picard, “SenseGlass: using Google Glass to sense daily emotions,” in Proceedings of the Adjunct Publication of the 27th Annual ACM Symposium on User Interface Software and Technology, pp. 77–78, 2014.
W. Wu, S. Dasgupta, E. E. Ramirez, C. Peterson, and G. J. Norman, “Classification accuracies of physical activities using smartphone motion sensors,” Journal of Medical Internet Research, vol. 14, no. 5, p. e130, 2012.
A. Al-Haiqi, M. Ismail, and R. Nordin, “A new sensors-based covert channel on Android,” The Scientific World Journal, vol. 2014, Article ID 969628, 14 pages, 2014.
M. B. Nair, S. R. Kumar, N. Mohan, and J. Anudev, “Instantaneous feedback pedometer with emergency GPS tracker,” in 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), pp. 122–126, 2018.
K. Schindler, L. Van Gool, and B. de Gelder, “Recognizing emotions expressed by body pose: a biologically inspired neural model,” Neural Networks, vol. 21, no. 9, pp. 1238–1246, 2008.