Research Article
Towards a Low-Cost Teacher Orchestration Using Ubiquitous Computing Devices for Detecting Student’s Engagement
Algorithm 1: Detect facial features and head direction.
Input: camera image img in Bitmap // Google Vision APIs only accept bitmap images
Output: facial features (head rotation/tilt)
1. Detect faces in img using the Google FaceDetector API and store them in a List<FirebaseVisionFace> object faces
2. For each face of the faces list
   i. Set headRotation to the rotation angle of face
   ii. Set headTilt to the tilt angle of face
   // Now use this data for decision-making
   iii. If headRotation or headTilt exceeds its threshold // means the student is not looking straight
      a. If warnTeacher is false // if he was looking straight last time, wait for the next iteration before marking him inactive
         I. Set warnTeacher to true
      b. Else // it means he was also looking somewhere else last time
         I. Mark this student inactive
   iv. Else // means the student is looking straight
      a. Set warnTeacher to false // clear previous state
3. End
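The per-face decision in steps 2.iii-iv is a small two-state machine: one frame of looking away only raises a warning flag, and a second consecutive frame marks the student inactive. A minimal sketch of that logic follows; the class name `EngagementTracker`, the threshold values, and the assumption that the angles would come from `FirebaseVisionFace.getHeadEulerAngleY()` (rotation) and `getHeadEulerAngleZ()` (tilt) are illustrative, not from the paper.

```java
// Sketch of Algorithm 1, steps 2.iii-iv. In the full system the angle
// arguments would be read per frame from a FirebaseVisionFace object;
// here they are plain floats so the logic is testable in isolation.
public class EngagementTracker {
    private final float rotationThreshold; // hypothetical, in degrees
    private final float tiltThreshold;     // hypothetical, in degrees
    private boolean warnTeacher = false;   // true after one look-away frame

    public EngagementTracker(float rotationThreshold, float tiltThreshold) {
        this.rotationThreshold = rotationThreshold;
        this.tiltThreshold = tiltThreshold;
    }

    // Returns true when the student should be marked inactive, i.e. on the
    // second consecutive frame whose head rotation or tilt exceeds a threshold.
    public boolean update(float headRotation, float headTilt) {
        boolean lookingAway = Math.abs(headRotation) > rotationThreshold
                || Math.abs(headTilt) > tiltThreshold;
        if (lookingAway) {
            if (!warnTeacher) {   // first offending frame: warn only
                warnTeacher = true;
                return false;
            }
            return true;          // second consecutive frame: mark inactive
        }
        warnTeacher = false;      // looking straight clears the previous state
        return false;
    }
}
```

Keeping the per-student flag inside a small object makes it straightforward to run one tracker per detected face across camera frames.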