Scientific Programming / 2020 / Article ID 6471438

Special Issue: Healthcare Big Data Management and Analytics in Scientific Programming

Research Article | Open Access

Usman Ali Khan, Iftikhar Ahmed Khan, Ahmad Din, Waqas Jadoon, Rab Nawaz Jadoon, Muhammad Amir Khan, Fiaz Gul Khan, Abdul Nasir Khan, "Towards a Complete Set of Gym Exercises Detection Using Smartphone Sensors", Scientific Programming, vol. 2020, Article ID 6471438, 12 pages, 2020.

Towards a Complete Set of Gym Exercises Detection Using Smartphone Sensors

Guest Editor: Iván García-Magariño
Received: 23 Dec 2019
Accepted: 20 Feb 2020
Published: 22 Jul 2020


Smartphones with gym exercise predictors can act as trainers for gym-goers. However, the available solutions do not cover the complete set of most practiced exercises. Therefore, in this research, a complete set of the 26 most practiced exercises was identified from the literature. Among these, 14 were unique to this study and 12 were common with the existing literature. Further objectives were to find suitable smartphone attachment position(s) and the number of sensors needed to predict exercises with the highest possible accuracy. Besides, this study considered the largest number of participants (20) compared to the existing literature (maximum 10). The results indicate three key lessons: (a) the most suitable classifier to predict a class (exercise) from the sensor-based data was found to be KNN (K-nearest neighbors); (b) sensors placed at three positions (arm, belly, and leg) can be more accurate than other placements for gym exercises; and (c) the accelerometer and gyroscope combined can provide classification accuracy of up to 99.72% (using KNN as the classifier at all three positions).

1. Introduction

Technological advancement has made human lives busier, which affects health negatively [1]. However, technology also helps humans improve their health, education, business, and social relationships [2]. The beneficial impact of technology is tremendous, especially in the health sector. Multiple hardware and software solutions [3, 4] are used to improve overall human health. Among the various means of maintaining health, gyms are a major source of physical fitness.

People join gyms to achieve goals like bodybuilding, physical fitness, or losing weight. In the modern world, technology has replaced the traditional concepts of guidance and training to stay healthy and fit. Tools like smartphones and wearable gadgets are among the many resources that help people stay healthy and fit [5–8]. Research such as [9–11] also supports the notion that technology can help achieve fitness objectives. Besides, various smartphone applications such as [12–14] can track different physical activities, e.g., walking, running, sitting, and standing, with the corresponding calorie burnout. Sensors (accelerometer, gyroscope, etc.) are used to track these activities.

Many wearable devices and smartphone applications, such as [3, 4], track physical activities and calorie burnout. However, none of the existing studies provides information appropriate for measuring the major gym activities. For example, research studies such as [9, 11, 15] targeted a group of upper body muscles along with some warm-up exercises only. This research is a similar attempt, yet it differs in many aspects. First, 14 exercises of different muscle groups (abdominal, upper body, and lower body) are added to move towards a complete solution.

Second, in most of the existing research, the sensors or devices were positioned at the arm only. Seeger et al. [16] used three sensors at the following positions: a single accelerometer at the wrist, a hand glove, and a sensor at the torso. We hypothesized that the use of an accelerometer and gyroscope at three body positions (arm, leg, and belly) could enhance accuracy because various gym exercises depend on these positions individually or in combination.

The third aim is to determine the number of sensors required to detect an exercise accurately. At the hypothesized three positions, five sensors are used: the accelerometer and gyroscope together at the arm and the leg, and only the accelerometer at the belly. The single sensor at the belly is used only to determine the lying (x-axis), standing (y-axis), or in-between position that is often adopted in gym exercises (e.g., angled leg press). The contribution of these sensors towards the accuracy has also been analyzed in this research.
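The belly sensor's role can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's implementation: it assumes gravity dominates the reading, the 30°/60° thresholds are arbitrary, and the axis convention (y up when standing, x up when lying) follows the description above.

```python
# Hypothetical sketch: inferring posture (standing / lying / in-between)
# from a single belly-mounted accelerometer. When the wearer is upright,
# gravity (~9.81 m/s^2) loads the y-axis; when lying, it shifts to the
# x-axis (axis convention assumed here; device mounting may differ).
import math

def posture(ax: float, ay: float, az: float) -> str:
    """Classify posture from one accelerometer sample (values in m/s^2)."""
    # Angle between the y-axis and the measured gravity vector.
    mag = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, ay / mag))))
    if tilt < 30:
        return "standing"
    if tilt > 60:
        return "lying"
    return "in-between"

print(posture(0.0, 9.81, 0.0))  # gravity on y-axis -> standing
print(posture(9.81, 0.0, 0.0))  # gravity on x-axis -> lying
```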

The classification algorithms used to detect exercises in related work such as [7, 9, 11, 15–17] are linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), Naïve Bayes (NB), support vector machine (SVM), and dynamic time warping (DTW). The accuracies achieved by these studies were promising. However, the datasets used were quite sparse (collected from 8 to 10 persons) for 43 unique exercises. The exercises were also related to each other or belonged to the same muscle groups with the same motion patterns. The fourth aim of this study is to increase the number of participants in real-world settings to bring more rigor to the findings. The increase in the number of participants, and thus the dataset, could also affect the choice of the exercise detection algorithm. This forms the fifth aim: the selection of the most appropriate algorithm(s) to detect gym exercises.

The rest of the paper is organized as follows. Section 2 discusses the relevant literature in the context of the aims of this study. Section 3 describes the materials and methods. Section 4 elaborates the experimental setup. Section 5 discusses the analysis and results. Section 6 concludes the paper, identifies some limitations, and outlines possible future work.

2. Literature Review

In this section, related work is discussed in the context of the aims of this study. The section is divided into three subsections: (1) exercise selection, (2) positioning and number of sensors, and (3) exercise detection algorithms. Participant selection is described in the Materials and Methods section.

2.1. Exercise Selection

The first activity recognition study based on wearable sensor devices [18] was published in 2000. Its authors attached two accelerometers inside the trousers' pockets to recognize daily life activities. The study [19] examined the use of a single smartphone accelerometer in activity recognition; the reported accuracies ranged between 80% and 97% depending on the set of activities and the processing techniques used. Muehlbauer et al. [7] used an arm holster with a fixed sensor to recognize a set of ten upper body gym exercises and reported 93.6% accuracy in more than 90% of the cases they studied. MyHealthAssistant [16] classified gym exercises using three accelerometers (on a hand glove, the wrist, and the torso); a Bayesian classifier trained on mean and variance features achieved 92% accuracy on data from 11 exercises. Chang et al. [8] used two accelerometers (at the hand and waist) and examined a Hidden Markov Model (HMM) and a Bayes classifier to identify exercises, achieving 90% accuracy for a set of nine exercises with an overall miss-count rate of around 5%.

A review of activity recognition literature from the years 2006–2018 found only 6 of 25 research studies related to gym exercises, while the remaining 19 papers were about daily life physical activities, emotion recognition, and elderly fall detection [20]. All 25 papers were used to extract information such as the types of sensors used, the features used to recognize activities, and the classification algorithms applied.

2.2. Position and Number of Sensors

In most of the literature, only a single sensor is utilized for activity recognition. However, some studies used more than one sensor. For example, the authors in [21] combined an accelerometer and a gyroscope and stated that the gyroscope adds nothing to the recognition results. However, contradictory results are reported by the authors in [22]. The study [10] reported a 3.1 to 13.4 percent increase in recognition accuracy for 8 of 9 activities when an accelerometer is combined with a gyroscope using the KNN classification algorithm. The average accuracy reported was 83.7% with an accelerometer alone and 90.2% with both accelerometer and gyroscope, an increase of 6.5%. The study also revealed that the sensor combination provides better results than the accelerometer alone. However, the paper does not report individual accuracies, leaving it ambiguous whether the gyroscope or the accelerometer played the major role.

Table 1 provides the details of the number of sensors and device positions as per the literature while Table 2 describes the use of the combination of sensors (sensor fusion) as well as their accuracies.

S. no. | (Number)/position of smartphones/devices | Sensors used | Accuracy (%) | Reference

1 | (01)/arm | Accelerometer, gyroscope, magnetometer, electromyography (EMG) | 75.70 | [11]
3 | (01)/arm | Accelerometer and gyroscope | 93.00 | [9]
4 | (03)/01 wrist, 01 hand glove, and 01 torso | 03 accelerometers only | 92.00 | [16]
5 | (01)/arm | Accelerometer and gyroscope | 93.60 | [7]

Sensor name | Times used | Reason | Accuracies (min./max./avg.) | References

Accelerometer | 12 | The accelerometer is the most powerful sensor in smartphones. It can be used for activity recognition by inferring the user's movements, such as walking, standing, running, sitting, and gym activities. | 82.2% / 97.3% / 87.7% | [15, 23–33]
Accelerometer and gyroscope | 08 | An accelerometer and a gyroscope used together for recognizing physical activity provide the strongest results. | 67.8% / 97% / 88.3% | [10, 11, 31, 34–38]
Accelerometer, gyroscope, and magnetometer | 05 | Adding a magnetometer to an accelerometer and gyroscope gives results that are not encouraging, because the magnetometer causes overfitting in training classifiers due to its dependence on directions. | 71.6% / 96% / 82.6% | [10, 11, 34, 37, 39]

From the analysis of Table 2, it can be argued that the combination of accelerometer and gyroscope provides the strongest accuracy results. Moreover, in most cases, a gyroscope does improve the recognition accuracy from 3.1% to 13.4% when used in combination with an accelerometer [10]. The magnetometer’s role in activity recognition was poor.

2.3. Exercise Detection Algorithms

The related literature has also used different classification algorithms. For example, the authors of [21] used KNN combined with support vector machine (SVM) and the authors of [22] used KNN combined with decision tree and Naïve Bayes, while the authors of [40] used J48 Decision Tree combined with Naive Bayes for exercise recognition. They reported an average accuracy of 95%, 90.2%, and 88%, respectively. Table 3 shows the top three classifiers Naïve Bayes (NB), decision trees (J48), and K-nearest neighbor (KNN) being abundantly used for the activity recognition purpose.

Algorithm | Times used | References

Naïve Bayes | 5 | [4, 5, 12, 14, 15]
Decision trees (J48) | 5 | [4, 9, 11, 12, 14]
K-nearest neighbor | 3 | [3, 4, 14]
Linear discriminant analysis | 2 | [21, 40]
Hidden Markov model (HMM) | 2 | [10, 15]
Quadratic discriminant analysis (QDA) | 1 | [40]
Logistic regression | 2 | [2, 9]
Support vector machine | 1 | [16]
Dynamic time warping (DTW) | 1 | [13]

The accuracy of the results depends on the suitable selection of the classification algorithm as well as on the selection of suitable parameters for it. Table 4 shows the three most used features in the literature, namely, mean, standard deviation, and minimum/maximum, used as classification algorithm parameters.

Feature | No. of papers in which used | References

Mean | 15 | [2–4, 6–9, 11, 14–16, 19, 21, 22, 40]
Standard deviation | 11 | [3, 4, 8, 9, 14, 15, 18, 19, 21, 22, 40]
Minimum/maximum | 09 | [3, 4, 13, 15, 16, 18, 32, 33, 40]

3. Materials and Methods

In this section, the materials and methods used in the study are discussed. Section 3.1 covers the selection of gym exercises for the current study. Section 3.2 describes the development of the application used for data collection. Section 4 describes the experiment and the data collection methods used in the study.

3.1. Selection of the Exercises

The selection process started by collecting a list of all gym exercises from two sources [41, 42]. The sources listed a total of 74 gym exercises, which will be called set TE (Total Exercises). To verify how often these exercises are practiced in gyms, one of the authors visited the four best-known and most commonly used gyms of the city and interviewed gym-goers about the exercises they trained on most. The result was a subset of the 54 most used gym exercises, which will be called set SE (Subexercises).

The set SE was compared with the set of common exercises mentioned in the literature, which resulted in the set CE (Common Exercises) of 35 exercises. The exercises in CE were further categorized into exercise groups along with information such as exercise positions and the equipment used.

Further analysis of the set CE revealed that five exercises repeated across different muscle groups under different names. One of them occurred three times and the remaining four occurred twice each. Removing these repetitions from the set CE resulted in a set of 29 exercises.

Of the 29 exercises, 3 were related to the head and are considered warm-up exercises in the literature [43]. These 3 exercises were also removed, reducing the final exercise set TFE (Total Final Exercises) to 26.

The exercises mentioned above were extracted from research papers such as [7, 9, 11, 15–17]. Among these references, the study in [17] was related only to gym warm-up exercises and thus was not included in the exercise selection. The remaining five papers were used to form five exercise sets (EP1–EP5), where E stands for exercise and P for paper; thus, EP1 represents the exercise set extracted from paper 1, that is, reference [7], and so on. The union of the exercise sets EP1–EP5 resulted in the set TEP (Total Exercises from Papers) containing the 43 exercises considered in the literature. Subtracting the set TEP from the set TFE yielded 14 unique exercises, with the remaining 12 exercises common with the literature (Table 5).
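The set arithmetic described above can be expressed compactly. This is a toy illustration: the cardinalities match the paper (|TFE| = 26, |TEP| = 43, 14 unique, 12 common), but the element names are placeholders, not the real exercises.

```python
# Toy illustration of the exercise-set arithmetic; element names are
# placeholders chosen so that the overlap sizes match the paper's counts.
TFE = {f"ex{i}" for i in range(26)}      # final set of 26 exercises
TEP = {f"ex{i}" for i in range(14, 57)}  # 43 exercises from the papers

unique = TFE - TEP  # exercises not studied in the literature before
common = TFE & TEP  # exercises shared with the literature
print(len(unique), len(common))  # -> 14 12
```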

S. no. | Repeating R/unique U | Exercise group | Exercise name | Exercise position | Equipment used

1 | Unique exercises group | Shoulder | Face pull | Standing | Cable/rope
2 |  | Shoulder | Cable front raise | Standing | Cable/rope
3 |  | Biceps | Scott curl | Sitting bend | Dumbbell
4 |  | Biceps | Smith machine drag curl | Sitting | Barbell
5 |  | Triceps | Triceps with bar | Standing | Cable/rope
6 |  | Triceps | Decline close grip bench press | Lying bench decline | Barbell
7 |  | Chest | Standing cable cross | Standing | Cable/rope
8 |  | Back | Wide grip pull up | Standing hanging | Fix rods
9 |  | Back | T bar rows | Sitting hanging | Barbell
10 |  | Back | Chin ups | Standing hanging | Fix rods
11 |  | Abs | Adjustable sit-up bench | Sitting and lying | Bench
12 |  | Abs | Abs wheel | Lying | Abs wheel
13 |  | Abs | Roman chairs | Lying | Roman chair
14 |  | Abs | Flutter kick | Lying | Flat surface
15 | Common exercises group | Legs | Leg press | Lying | Leg press machine
16 |  | Legs | Romanian deadlift | Standing | Barbell
17 |  | Legs | Barbell squat | Sit-stand | Barbell
18 |  | Shoulder | Incline press wide grip | Lying | Barbell
19 |  | Shoulder | Standing barbell press | Standing | Barbell
20 |  | Biceps | Inclined dumbbell curl | Sitting incline | Dumbbell
21 |  | Biceps | Barbell preacher curl | Sitting | Barbell
22 |  | Triceps | Triceps press with cable | Standing | Cable/rope
23 |  | Chest | Machine bench press | Sitting | Bench machine
24 |  | Chest | Dips | Standing hanging | Fix rods
25 |  | Chest | Pec deck machine (butterfly) | Sitting | Pec deck machine
26 |  | Shoulder | Seated barbell shoulder press | Sitting | Barbell

3.2. Application Development

To accomplish the objectives of this research, the first requirement was to develop an application to collect data from the participants. For this purpose, an Android-based smartphone application was developed in which users could add, view, edit, and delete personal profiles. The user interface of the developed application is depicted in Figure 1, whereas the flow of the user's interaction with the application is elaborated in Figure 2. The application is also provided as a supplementary file with the paper for researchers who want to replicate the research.

Figure 2 shows the overall data collection process followed in the developed application. The start screen offers new users the option to register, while already registered users can go to the registered users' screen. After selecting the new registration option, the user moves to the signup screen, where they can either enter profile information such as height and weight to create a new user profile or go back to the main screen without registering. After selecting the already-registered option, users move to the profile list to select their profile by name. The selected profile screen then shows the user's information, from where they can start recording the exercise data and begin the exercise. They can also view their stored records or go back to the main screen. Users can exit the application from the main screen.

4. Methods

The developed application was installed on three smartphones positioned as shown in Figure 3. An LG F180 was attached to the leg and another phone of the same model to the arm; this model supports both the accelerometer and gyroscope sensors, providing acceleration and rotation values. For the belly position, only one sensor was required to determine the participant's state (sitting, lying). For this purpose, a Q-Mobile i7, which supports only the accelerometer, was attached at the belly.

The research also aimed to increase the number of participants and to collect varied data. Therefore, 20 participants were recruited, each performing two sets of 10 repetitions per exercise. Ten repetitions have been used in related literature before, for example, in [11]. The data were collected for the selected set of 26 exercises, with the smartphones attached at three different body positions (arm, belly, and leg). All the gym-goers taking part in data collection were asked to behave as they would on a usual exercising day. The sensors' X, Y, and Z values were recorded and stored in a file by the application while the exercises were performed. All activities were carried out indoors in a gym.

4.1. Experimental Setup

The experimental setup section is divided into four subsections. Section 4.1.1 covers ethical compliance regarding the involvement of the participants and the data collected from them. Section 4.1.2 presents the demographics of the participants. Section 4.1.3 describes the data collection process. The preparation of the data for analysis is discussed in Section 4.1.4.

4.1.1. Ethical Compliance

The departmental ethics committee, called Project Research and Evaluation Committee (PREC), approved the study design and the procedure as defined in the above section. Informed consent for the study was obtained from the participants of this study.

4.1.2. Participants

For the selection of the participants, the busiest gym in the center of the city was chosen. Gym-goers who visited the gym regularly were approached, and the aims and objectives of the data collection were explained to them. Twenty participants, all male, volunteered to take part in the data collection process. The participants were between 20 and 35 years old, with a mean age of 25.85 years (SD 4.13). Their heights ranged from 162 to 181 cm with a mean of 171.1 cm (SD 5.34), and their weights ranged from 62 to 80 kg with a mean of 68.1 kg (SD 5.56). The participants' gym experience was between 2 and 19 months with a mean of 9.35 months (SD 4.90). All the exercises were completed with free weights (participants chose the weights themselves).

4.1.3. Data Collection

The data were recorded from five sensors (two sensors on the smartphone attached at the leg, two at the arm, and one at the belly). All the sensors recorded X, Y, and Z values while the participant was doing an exercise. A triaxial accelerometer estimates the acceleration along the X, Y, and Z axes, and the gyroscope (pitch, yaw, and roll) helps predict the orientation of the sensor. The three smartphones were synchronized to get the time from a server, and the time was recorded to the millisecond along with the X, Y, and Z values. This resulted in 15 axis values per sample, along with a timestamp, the category of the exercise, and the exercise name. The datasets available from the literature [17, 44, 45] were not used because data for the 14 unique exercises were unavailable. We also decided to collect fresh data for the exercises whose data were available, because of probable setup differences between the existing studies and this one. This may help counter bias and variation.

4.1.4. Data Preparation

The recognition process includes the collection of exercise data using multiple sensors. The data are then preprocessed and segmented, features are extracted, and classification is performed as the last step [11, 46]. The same process is followed in this research.

Three files containing the exercise data from each smartphone were combined carefully, matching the participants' assigned ids and timestamps. In the second step, the recorded data from the CSV files were preprocessed to remove noise. For example, at the start and the end of an exercise, the participant's movements were random and jerky and not aligned with the required exercise. Therefore, to remove this noise, we discarded the first 3 seconds and the last 3 seconds of the recorded data of each exercise. For each exercise, there were 2 sets of 10 repetitions each, with an average time consumption per exercise of 38 seconds; after preprocessing, we considered the data of 32 seconds only. The application was programmed to record 4 samples in a minute.
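The merge-and-trim step above can be sketched roughly as follows (pandas; the column names "pid" and "t", the frame layout, and timestamps in seconds are assumptions, not the paper's actual file format):

```python
# Illustrative sketch of merging the three per-phone recordings and
# trimming the jerky start/end segments; column names are assumed.
import pandas as pd

def merge_and_trim(arm, belly, leg, trim_s=3.0):
    """Join the three per-phone recordings on participant id ("pid") and
    timestamp ("t"), then drop the first and last trim_s seconds of each
    participant's recording."""
    df = arm.merge(belly, on=["pid", "t"]).merge(leg, on=["pid", "t"])
    parts = []
    for _, g in df.groupby("pid"):
        t0, t1 = g["t"].min(), g["t"].max()
        parts.append(g[(g["t"] >= t0 + trim_s) & (g["t"] <= t1 - trim_s)])
    return pd.concat(parts, ignore_index=True)
```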

Previous studies such as [11] used a 4-second window to extract the required features and a slide to vary the data; we adopted the same strategy. The features extracted were the most used features for data of this nature (mean, standard deviation, and minimum and maximum), as presented in Table 4. For each of the X, Y, and Z value streams, these four features were extracted, forming a total of 60 features (36 accelerometer features and 24 gyroscope features).
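A minimal NumPy sketch of this windowed feature extraction, assuming non-overlapping windows for simplicity (the study itself follows the window/slide scheme of [11], and the sampling rate here is arbitrary):

```python
# Per-window features: mean, standard deviation, minimum, and maximum
# for each axis stream, matching the four features listed above.
import numpy as np

def window_features(samples: np.ndarray, win: int) -> np.ndarray:
    """samples: (n_samples, n_axes) array; win: window length in samples.
    Returns one feature vector of length 4 * n_axes per window."""
    feats = []
    for start in range(0, samples.shape[0] - win + 1, win):
        w = samples[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# 15 axis streams (9 accelerometer + 6 gyroscope) -> 60 features per window
X = window_features(np.random.randn(128, 15), win=32)
print(X.shape)  # (4, 60)
```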

To analyze the preprocessed data, we used WEKA (Waikato Environment for Knowledge Analysis) [47]. The preprocessed data (extracted features) were converted to ARFF (Attribute-Relation File Format). The attributes were named as per the following strategy: the first character, 'a', 'b', or 'l', stands for arm, belly, or leg; the second character, 'a' or 'g', stands for accelerometer or gyroscope; and the third character, 'x', 'y', or 'z', stands for the axis value X, Y, or Z. The classifiers NB, KNN, and J48, selected as per Table 3, were used with default configuration settings. In the test options, a percentage split with 80% training and 20% testing was selected, as also used by [38, 48], to evaluate the performance and accuracy of the classifiers.
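The evaluation itself ran in WEKA; the following is a hypothetical scikit-learn analogue of the same protocol, with synthetic data standing in for the real 60-feature set and a CART decision tree standing in for J48 (which scikit-learn does not implement):

```python
# Sketch: 80%/20% percentage-split evaluation of the three classifiers
# used in the paper, on synthetic 26-class, 60-feature data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=60, n_informative=20,
                           n_classes=26, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8,
                                          random_state=0)

for name, clf in [("NB", GaussianNB()),
                  ("KNN", KNeighborsClassifier()),
                  ("J48 (CART stand-in)", DecisionTreeClassifier(random_state=0))]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```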

5. Analysis and Results

The existing research mostly used three classifiers, namely, NB, KNN, and J48 (cf. Table 3); hence, they were also utilized in this research. All of the above-mentioned algorithms can create multiform class boundaries and are therefore suitable for data collected via sensors and devices [10]. Furthermore, for practical applications, these methods are fast and easily implementable [10].

We examined the values of both sensors (accelerometer and gyroscope) with the above-mentioned classifiers at three different body positions (arm, belly, and leg). The analysis was done in five ways. First, the exercises were analyzed considering the data from the three sensors of the same nature, that is, the accelerometers (rows 1, 2, and 3 of Table 6) attached at the three positions (arm, leg, and belly). As there were only two gyroscopes, at the arm and leg positions, the data from these positions are analyzed in rows 4 and 5 of Table 6. The same process is continued for combinations of three, four, and five sensors, as illustrated in Table 6.

S no. | Sensor name | Smartphone positions | Naïve Bayes accuracy (%) | KNN accuracy (%) | Decision tree (J48) accuracy (%) | No. of features (mean, std. dev., min, max over A(x, y, z) and G(x, y, z))

Single sensor used

Two sensors used
6 | Accelerometers = 2 | Arm and belly | 69.92 | 98.40 | 92.96 | 24
7 | Accelerometers = 2 | Arm and leg | 74.18 | 99.17 | 95.31 | 24
8 | Accelerometers = 2 | Belly and leg | 63.10 | 98.65 | 93.85 | 24
9 | Gyroscopes = 2 | Arm and leg | 42.52 | 96.29 | 86.40 | 24
10 | Accelerometer + gyroscope | Arm |  |  |  |

Three sensors used
12 | Accelerometers = 2, gyroscope = 1 | Arm and belly | 77.42 | 99.41 | 93.94 | 36
13 | Accelerometers = 2, gyroscope = 1 | Belly and leg | 77.59 | 99.02 | 94.21 | 36
14 | Accelerometers = 3 | Arm, belly, and leg | 79.59 | 99.51 | 89.0 | 36

Four sensors used
15 | Accelerometers = 2, gyroscopes = 2 | Arm and leg | 79.32 | 99.63 | 96.05 | 48

Five sensors used
16 | Accelerometers = 3, gyroscopes = 2 | Arm, belly, and leg | 80.72 | 99.72 | 96.29 | 60

In Table 6, the column "sensor name" gives the name of the sensor from which the data were acquired, and the number alongside it gives the count of sensors used to acquire and analyze the data. For example, S numbers 6, 7, and 8 in Table 6 show "accelerometers = 2", indicating that two accelerometers attached at the body positions shown in the next column were used. The accuracies obtained from the three chosen classifiers are presented in the corresponding classifier columns. The last column gives the number of features used in the analysis: a single sensor at a body position yields 12 features, two sensors 24 features, and so on.

The results revealed that the best accuracy of 99.72% was achieved with the KNN classifier using five sensors at three attachment positions (arm, belly, and leg). However, as can be seen from the summary in Table 7, this is not far from the accuracy of KNN using two sensors at two attachment positions: a minimum of two sensors at the arm and leg provided an accuracy of 99.27%, which is comparable to that of the five-sensor configuration.

Sensors used | Smartphone position | Classifier used | Maximum accuracy achieved (%) | Variance

Single sensor used | Leg | KNN | 96.39 | ——
Two sensors used | Arm and leg | KNN | 99.27 | 2.88%
Three sensors used | Arm, belly, and leg | KNN | 99.51 | 0.23%
Four sensors used | Arm and leg | KNN | 99.63 | 0.12%
Five sensors used | Arm, belly, and leg | KNN | 99.72 | 0.14%

For each activity (exercise), the accuracies achieved using the KNN classifier with both the accelerometer and gyroscope are slightly better than those using only the accelerometer. The accuracy results and their differences are shown in Table 8.

Exercise group | Exercise name | Three accelerometers' accuracy (%) | Three accelerometers + two gyroscopes' accuracy (%) | Difference (%) | Avg. time (s) per exercise, 2 sets of 10 reps

Unique exercises group | Face pull | 100 | 100 | 0.0 | 26.8
 | Cable front raise | 100 | 100 | 0.0 | 38.5
 | Scott curl | 98.4 | 99.2 | +0.6 | 33.2
 | Smith machine drag curl | 95.6 | 100 | +4.4 | 30.9
 | Triceps with bar | 98.1 | 99.0 | +0.9 | 26.8
 | Decline close grip bench press | 100 | 100 | 0.0 | 26.3
 | Standing cable cross | 100 | 100 | 0.0 | 28.7
 | Wide grip pull up | 99.5 | 99.7 | +0.2 | 23.8
 | T bar rows | 100 | 100 | 0.0 | 24.2
 | Chin ups | 100 | 100 | 0.0 | 23.7
 | Adjustable sit-up bench | 100 | 100 | 0.0 | 37.9
 | Abs wheel | 100 | 100 | 0.0 | 46.5
 | Roman chairs | 100 | 100 | 0.0 | 37.8
 | Flutter kick | 100 | 100 | 0.0 | 28.7
Common exercises group | Seated barbell shoulder press | 100 | 97.4 | −2.6 | 27.1
 | Incline press wide grip | 99.1 | 100 | +0.9 | 26.6
 | Standing barbell press | 97.0 | 99.1 | +2.9 | 27.4
 | Inclined dumbbell curl | 100 | 100 | 0.0 | 37.2
 | Barbell preacher curl | 99.3 | 99.3 | 0.0 | 34.4
 | Triceps press with cable | 98.9 | 99.0 | +0.1 | 29.2
 | Machine bench press | 100 | 100 | 0.0 | 25.2
 | Leg press | 100 | 100 | 0.0 | 41.3
 | Romanian deadlift | 100 | 100 | 0.0 | 37.2
 | Barbell squat | 100 | 100 | 0.0 | 30.2
Both groups | All exercises average accuracy | 99.51 | 99.72 | +0.21 | 31.4

The classification confusion matrix in Figure 4 shows that the highest accuracy is achieved using the data from all the smartphone sensors with the KNN classifier. Examining the confusion matrix, most of the classes (exercises) are accurately predicted. However, a couple of classes were not differentiable because of similarity in exercise position and nature. For example, the triceps group (triceps press with cable and triceps with bar) has similar motion patterns. However, the two can still be differentiated based on execution time, as per Table 8: triceps with bar takes 26.8 seconds for 20 repetitions while triceps press with cable takes 29.2 seconds for the same 20 repetitions, a difference of 2.4 seconds.
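This duration-based disambiguation is not part of the paper's classifier; it can be sketched as a simple nearest-duration rule using the Table 8 timings (the function and dictionary names are hypothetical):

```python
# Hypothetical tie-breaker: when two exercise classes share motion
# patterns, pick the one whose reference execution time (2 sets of
# 10 reps, from Table 8) is closest to the observed duration.
AVG_TIME = {"triceps with bar": 26.8, "triceps press with cable": 29.2}

def disambiguate(candidates, observed_seconds):
    """Return the candidate whose reference duration is nearest."""
    return min(candidates, key=lambda c: abs(AVG_TIME[c] - observed_seconds))

print(disambiguate(list(AVG_TIME), 27.0))  # -> triceps with bar
```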

6. Conclusion and Future Work

The goal of this study was to predict gym exercises with the help of smartphone sensors in real-world settings. To achieve this goal, exercises on which prediction research had been conducted were extracted from the literature and intersected with a set of the most used exercises in the gym, yielding 14 exercises unique to this study; 12 common exercises were also considered for comparison purposes. Furthermore, finding suitable sensor attachment positions, as well as the number of sensors to use for accurate exercise prediction, was also one of the goals of this research. In addition, we conducted the exercises with the largest number of participants (20) compared to the existing literature (maximum 10). The results indicated three key lessons. (a) The most suitable classifier to predict a class from the sensor-based data was found to be KNN. (b) Sensors placed at three positions (arm, belly, and leg) can provide better accuracy than other placements for gym exercises. (c) The smartphone accelerometer and gyroscope in combination can provide accurate classification (using KNN as the classifier at all three positions) in most of the activities, averaging up to 99.72% accuracy; the combination increases accuracy by up to 0.21%.

The research can be implemented in the form of a smartphone application that users turn on while doing exercises in the gym. In the future, this application could be extended with a calorie burnout tracker able to guide gym-goers on which exercise to do and for how long. The output could be in the form of sound notifications as well as voice messages advising the user to change or stop an exercise.

The research also has some limitations. Only 14 unique exercises are considered, taking the exercises considered in the literature to 55. In this context, of the total of 74 exercises listed in the sources [41, 42], nineteen (19) gym exercises, although not among the most often used, still remain to be predicted; future research can consider these as well. In addition, no female participants were involved in this research, so the findings may not apply to female participants. Future research could also recruit female participants.

Data Availability

The data are available within the supplementary information file. Queries about the research conducted in this paper are welcome and may be directed to the principal authors (Usman Ali Khan and Dr. Iftikhar Ahmed Khan).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

An ARFF (Attribute-Relation File Format) file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. A complete description, including information on how to read this file, is available at the following URL. (Supplementary Materials)
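A hypothetical minimal ARFF file for this kind of sensor data might look like the following (the attribute names are illustrative, not the actual schema of the supplementary dataset); the snippet also shows a basic header-and-data parse in plain Python:

```python
# A hypothetical minimal ARFF file; attribute names are illustrative only.
ARFF = """@relation gym_exercises
@attribute acc_x numeric
@attribute acc_y numeric
@attribute gyro_x numeric
@attribute class {biceps_curl,squat}
@data
0.91,0.30,1.21,biceps_curl
2.05,0.78,0.25,squat
"""

def parse_arff(text):
    """Split an ARFF document into attribute names and data rows."""
    attributes, rows, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):   # skip blanks and comments
            continue
        low = line.lower()
        if low.startswith("@attribute"):
            attributes.append(line.split()[1])  # second token is the name
        elif low.startswith("@data"):
            in_data = True
        elif in_data:
            rows.append(line.split(","))
    return attributes, rows

attrs, rows = parse_arff(ARFF)
print(attrs)      # ['acc_x', 'acc_y', 'gyro_x', 'class']
print(len(rows))  # 2
```

In real use, Weka itself (or a library such as `scipy.io.arff`) would read the file; the sketch is only meant to show the `@relation` / `@attribute` / `@data` layout the format relies on.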


  1. V. Burke, L. J. Beilin, K. Durkin, W. G. K. Stritzke, S. Houghton, and C. A. Cameron, “Television, computer use, physical activity, diet and fatness in Australian adolescents,” International Journal of Pediatric Obesity, vol. 1, no. 4, pp. 248–255, 2006. View at: Publisher Site | Google Scholar
  2. S. Stieglitz and T. Brockmann, “The impact of smartphones on e-participation,” in Proceedings of Annual Hawaii International Conference on System Sciences, pp. 1734–1742, Maui, HI, USA, January 2013. View at: Google Scholar
  3. C. Smith, Nike Fitbit Flex, Malaysia, 2020.
  4. J. Park and E. Friedman, FITBIT, USA, 2020.
  5. O. Banos, M. Damas, H. Pomares, A. Prieto, and I. Rojas, “Daily living activity recognition based on statistical feature quality group selection,” Expert Systems with Applications, vol. 39, no. 9, pp. 8013–8021, 2012. View at: Publisher Site | Google Scholar
  6. M. Zhang and A. A. Sawchuk, “Human daily activity recognition with sparse representation using wearable sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 3, pp. 553–560, 2013. View at: Publisher Site | Google Scholar
  7. M. Muehlbauer, G. Bahle, and P. Lukowicz, “What can an arm holster worn smartphone do for activity recognition?” in Proceedings of the 2011 ACM International Symposium on Wearable Computers (ISWC), Heidelberg, Germany, September 2011. View at: Google Scholar
  8. K.-h. Chang, M. Y. Chen, and J. Canny, “Tracking free-weight exercises,” UbiComp 2007: Ubiquitous Computing, vol. 4717, pp. 19–37, 2007. View at: Publisher Site | Google Scholar
  9. D. Morris, T. S. Saponas, A. Guillory, and I. Kelner, “RecoFit: using a wearable sensor to find, recognize, and count repetitive exercises,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), pp. 3225–3234, Toronto, Canada, April 2014. View at: Google Scholar
  10. M. Shoaib, H. Scholten, and P. Havinga, “Towards physical activity recognition using smartphone sensors,” in Proceedings of the IEEE 10th International Conference on Ubiquitous Intelligence and Computing, pp. 80–87, Vietri sul Mare, Italy, December 2013. View at: Google Scholar
  11. H. Koskimäki, “MyoGym: introducing an open gym data set for activity recognition collected using myo armband,” in Proceedings of UbiComp/ISWC ’17, pp. 537–546, 2017. View at: Google Scholar
  12. D. M. Bravata, C. Smith-Spangler, V. Sundaram et al., “Using pedometers to increase physical activity and improve health,” JAMA, vol. 298, no. 19, pp. 2296–2304, 2007. View at: Publisher Site | Google Scholar
  13. C. B. Chan, D. A. J. Ryan, and C. Tudor-Locke, “Health benefits of a pedometer-based physical activity intervention in sedentary workers,” Preventive Medicine, vol. 39, no. 6, pp. 1215–1222, 2004. View at: Publisher Site | Google Scholar
  14. D. Merom, C. Rissel, P. Phongsavan et al., “Promoting walking with pedometers in the community: the step-by-step trial,” American Journal of Preventive Medicine, vol. 32, no. 4, pp. 290–297, 2007. View at: Publisher Site | Google Scholar
  15. I. Pernek, K. A. Hummel, and P. Kokol, “Exercise repetition detection for resistance training based on smartphones,” Personal and Ubiquitous Computing, vol. 17, no. 4, pp. 771–782, 2013. View at: Publisher Site | Google Scholar
  16. C. Seeger, A. Buchmann, and K. Van Laerhoven, “myHealthAssistant: a phone-based body sensor network that captures the wearer’s exercises throughout the day,” in Proceedings of the 6th ICST Conference on Body Area Networks, Beijing, China, November, 2011. View at: Google Scholar
  17. O. Baños, M. Damas, H. Pomares, I. Rojas, M. Tóth, and O. Amft, “A benchmark dataset to evaluate sensor displacement in activity recognition,” in Proceedings of the 2012 ACM Conference on Ubiquitous Computing, p. 1026, Pittsburgh PA, USA, September 2012. View at: Google Scholar
  18. K. Van Laerhoven and O. Cakmakci, “What shall we teach our pants?” in Proceedings of the IEEE International Symposium on Wearable Computers, pp. 77–83, Cambridge, MA, USA, September 2000. View at: Google Scholar
  19. O. D. Incel, M. Kose, and C. Ersoy, “A review and taxonomy of activity recognition on mobile phones,” Bionanoscience, vol. 3, no. 2, pp. 145–171, 2013. View at: Publisher Site | Google Scholar
  20. M. Habib, M. Mohktar, S. Kamaruzzaman, K. Lim, T. Pin, and F. Ibrahim, “Smartphone-based solutions for fall detection and prevention: challenges and open issues,” Sensors, vol. 14, no. 4, pp. 7181–7208, 2014. View at: Publisher Site | Google Scholar
  21. A. Anjum and M. U. Ilyas, “Activity recognition using smartphone sensors,” in Proceedings of the 2013 IEEE 10th Consumer Communications and Networking Conference (CCNC), pp. 914–919, Las Vegas, NV, USA, January 2013. View at: Google Scholar
  22. W. Wu, S. Dasgupta, E. E. Ramirez, C. Peterson, and G. J. Norman, “Classification accuracies of physical activities using smartphone motion sensors,” Journal of Medical Internet Research, vol. 14, no. 5, pp. 1–9, 2012. View at: Publisher Site | Google Scholar
  23. I. E. Smith and W. G. Griswold, “Ubiquitous computing,” UbiComp, vol. 4206, 2006. View at: Google Scholar
  24. M. B. Berchtold, D. Gordon, H. R. Schmidtke, and M. Beigl, “ActiServ: activity recognition service for mobile phones,” in Proceedings of the International Symposium on Wearable Computers (ISWC), Seoul, South Korea, October 2010. View at: Google Scholar
  25. G. Bieber, P. Koldrack, C. Sablowski, C. Peter, and B. Urban, “Mobile physical activity recognition of stand-up and sit-down transitions for user behavior analysis,” in Proceedings of the 3rd International Conference on PErvasive Technologies PETRA, Samos, Greece, January 2010. View at: Google Scholar
  26. A. Henpraserttae, S. Thiemjarus, and S. Marukatat, “Accurate activity recognition using a mobile phone regardless of device orientation and location,” in Proceedings of the 2011 International Conference on Body Sensor Networks (BSN), Zakopane, Poland, June 2011. View at: Google Scholar
  27. J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition using cell phone accelerometers,” ACM SIGKDD Explorations Newsletter, vol. 12, no. 2, pp. 74–82, 2011. View at: Publisher Site | Google Scholar
  28. Y.-S. Lee and S.-B. Cho, “Activity recognition using hierarchical hidden Markov models on a smartphone with 3D accelerometer,” Lecture Notes in Computer Science, pp. 460–467, 2011. View at: Publisher Site | Google Scholar
  29. A. F. Olsen and J. Torresen, “Smartphone accelerometer data used for detecting human emotions,” in Proceedings of the 3rd International Conference on Systems and Informatics, pp. 410–415, Shanghai, China, November 2016. View at: Google Scholar
  30. Z. Wang, D. Wu, R. Gravina, G. Fortino, Y. Jiang, and K. Tang, “Kernel fusion based extreme learning machine for cross-location activity recognition,” Information Fusion, vol. 37, pp. 1–9, 2017. View at: Publisher Site | Google Scholar
  31. B. J. Mortazavi, M. Pourhomayoun, G. Alsheikh, N. Alshurafa, S. I. Lee, and M. Sarrafzadeh, “Determining the single best axis for exercise repetition recognition and counting on smartwatches,” in Proceedings of the 2015 IEEE 11th International Conference on Wireless and Mobile Implantable Body Sensor Networks (BSN), Jeju Island, Korea, July 2014. View at: Google Scholar
  32. M. Kose, O. Incel, and C. Ersoy, “Online human activity recognition on smart phones,” in Proceedings of the 2nd International Work Mobile Sensor From Smartphones Wearables to Big Data, Beijing, China, October 2012. View at: Google Scholar
  33. F. Li, K. Shirahama, M. Nisar, L. Köping, and M. Grzegorzek, “Comparison of feature learning methods for human activity recognition using wearable sensors,” Sensors, vol. 18, no. 3, p. 679, 2018. View at: Publisher Site | Google Scholar
  34. Y. E. Ustev, O. Durmaz Incel, and C. Ersoy, “User, device, and orientation independent human activity recognition on mobile phones,” in Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct, pp. 1427–1436, Zurich, Switzerland, September, 2013. View at: Google Scholar
  35. T. Saponas, J. Lester, J. Froehlich, J. Fogarty, and J. Landay, iLearn on the iPhone: Real-Time Human Activity Classification on Commodity Mobile Phones, University of Washington, Seattle, WA, USA, 2008.
  36. M. Nilsson and H. Wilén, Push-up Tracking through Smartphone Sensors, 2016, MS Thesis, KTH Royal Institute of Technology, Stockholm, Sweden.
  37. K. Liu, Y. Wang, R. Chen, T. Chu, and J. Bi, “A survey of human activity recognition using smartphones,” Journal of Residuals Science & Technology, vol. 13, no. 8, pp. 1–10, 2016. View at: Google Scholar
  38. C. A. Ronao and S.-B. Cho, “Recognizing human activities from smartphone sensors using hierarchical continuous hidden Markov models,” International Journal of Distributed Sensor Networks, vol. 13, no. 1, p. 155014771668368, 2017. View at: Publisher Site | Google Scholar
  39. L. Liu, Y. Peng, M. Liu, and Z. Huang, “Sensor-based human activity recognition system with a multilayered model using time series shapelets,” Knowledge-Based Systems, vol. 90, pp. 138–152, 2015. View at: Publisher Site | Google Scholar
  40. H. Martín, A. M. Bernardos, J. Iglesias, and J. R. Casar, “Activity logging using lightweight classification techniques in mobile devices,” Personal and Ubiquitous Computing, vol. 17, no. 4, pp. 675–695, 2013. View at: Publisher Site | Google Scholar
  41. Bodycraft Exercise Guide, 2020.
  42. The Personal Training System, 2020.
  43. S. Woods, T. Bridge, D. Nelson, K. Risse, and D. M. Pincivero, “The effects of rest interval length on ratings of perceived exertion during dynamic knee extension exercise,” The Journal of Strength and Conditioning Research, vol. 18, no. 3, pp. 540–545, 2004. View at: Publisher Site | Google Scholar
  44. D. Micucci, M. Mobilio, and P. Napoletano, “UniMiB SHAR: a new dataset for human activity recognition using acceleration data from smartphones,” Applied Sciences, vol. 7, no. 10, p. 1101, 2017. View at: Google Scholar
  45. D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “A public domain dataset for human activity recognition using smartphones,” in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), April 2013. View at: Google Scholar
  46. A. Bulling, U. Blanke, and B. Schiele, “A tutorial on human activity recognition using body-worn inertial sensors,” ACM Computing Surveys, vol. 46, no. 3, pp. 1–33, 2014. View at: Publisher Site | Google Scholar
  47. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA data mining software,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009. View at: Publisher Site | Google Scholar
  48. B.-J. Ho, R. Liu, H.-Y. Tseng, and M. Srivastava, “MyoBuddy: detecting barbell weight using electromyogram sensors,” in Proceedings of the 1st Workshop on Digital Biomarkers, pp. 27–32. View at: Google Scholar

Copyright © 2020 Usman Ali Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
