Abstract

In the early decades of the 21st century, the prevalence of online games has increased significantly, encompassing titles connected to the Internet via smart devices and enabling multiplayer interaction. Recent media attention has shed light on the adverse effects associated with online gaming. This research paper explores the viewpoints of 4,700 university students in Jordan regarding the physical, psychological, and behavioural impacts of Internet games. Additionally, it predicts how these impacts may affect the academic performance of 1,410 of those students. A convolutional neural network (CNN) was developed to analyze student trends and forecast outcomes under sustained game engagement. The findings revealed student consensus with recommended university measures to limit online game usage, reflecting a prevalent belief in the negative influence of games on the body, behaviour, and mental health. For the prediction process, training sets comprising 60%, 70%, and 80% of the dataset were evaluated; the highest accuracy for predicting students’ grade point average (GPA), 96.69%, was achieved with the 70% split. The analysis suggested that reducing the percentage of hours dedicated to playing online games could act as a mitigating factor against GPA decline. Accordingly, the system recommends reductions ranging from 4.1% to 99.9%: a student at the maximum is encouraged to cut playing hours by 99.9% to preserve their GPA, while a student at the minimum need only cut playing hours by 4.1%. On average, across the 1,090 students whose GPAs were predicted to be affected, the system proposes a 48.36% reduction in playing hours to safeguard their GPAs and mitigate potential risks. This high level of accuracy played a crucial role in forecasting students’ GPA outcomes following a year of sustained daily engagement with online games. Notably, the results revealed the concerning finding that 80% of students would face a detrimental impact on their academic performance after one year of such consistent involvement.

1. Introduction

The Internet of Things (IoT) is a recently emerging topic in technological, communication, and financial fields. It is one of the essential topics of the 21st century and refers to connecting things to the Internet via embedded devices [1]. It relies on people communicating with each other, or with a database, over a wireless connection [2].

The Internet of Toys, or online games, refers to games played partially or completely over the Internet through embedded devices containing sensors [3, 4]. These games are spread across several modern gaming platforms, such as computers and mobile devices [5]. Game designs range from simple textual environments to virtual environments with integrated graphics [6]. Online games have grown in size and scope across different cultures, encompassing different nationalities, ages, and occupations [7].

Recently, machine learning has been used to study the effect of certain technologies on individuals. Machine learning is an area of artificial intelligence that finds solutions to problems by studying and understanding the patterns related to a problem through information technology. In other words, machine learning trains IT systems to recognize patterns through algorithms, relying on knowledge built from experience and training [8]. Machine learning helps devices program themselves, and its many techniques underpin numerous applications [9]: robotics (managing uncertainty in new conditions), autonomous systems (self-driving cars), and social networks (learning from data on relations and preferences to extract value from the data).

This research offers significant contributions in several key areas. First, it acknowledges the growing importance of machine learning as an emerging technology and its potential impact on academia and industry. This recognition sets the stage for an investigation into how machine learning can address challenges in higher education.

In addition, the research leverages machine learning techniques to assess how video games affect the academic performance of university students, extending the utility of machine learning into the realm of education and addressing students’ well-being.

Moreover, the study employs machine learning methods to study patterns related to challenges facing university students, using information technology to identify best practices and solutions for real-world issues encountered by students. By utilizing datasets collected from university students, machine learning creates patterns and insights through algorithms, providing a data-driven approach to understanding how video games influence student grade point averages (GPAs) and health. Additionally, the anticipated outcome aims to predict whether a reduction in the percentage of hours spent playing online games could mitigate the risk of GPA decline.

Lastly, the findings of the study hold practical significance for university administrators and policymakers, offering valuable insights into best practices and potential challenges associated with the use of video games in education, bridging the gap between technology and education to enhance student well-being.

The remainder of this paper is structured as follows: Section 2 discusses the background of the terminology. Section 3 discusses related work. Section 4 presents the research methodology. Section 5 discusses the results obtained. Finally, Section 6 concludes the work.

Machine learning, a subfield of artificial intelligence, has revolutionized the way we process and analyze data, enabling computers to learn from patterns and make predictions without explicit programming. It has found application across various domains, from healthcare and finance to entertainment and education, transforming industries and driving innovation.

One of the most compelling branches of machine learning is the convolutional neural network (CNN). CNNs are a specialized class of deep learning models designed for tasks involving image and pattern recognition. They have gained immense popularity due to their remarkable ability to extract intricate features from visual data, making them indispensable in applications such as computer vision and natural language processing.

In this era of data abundance, where images, videos, and sensor data are pervasive, the need for effective and efficient tools to interpret and make sense of this information is paramount. CNNs have emerged as a key solution, allowing us to analyze images, detect objects, recognize faces, and even diagnose medical conditions with remarkable accuracy.

3. Machine Learning

There are two paradigms of programming. In traditional programming, a program is run on inputs to obtain the output (Figure 1). Machine learning instead derives patterns from data and creates the program itself, much like a farming system (Figure 2): the seeds are the algorithm, the nutrients are the inputs, and the plants are the programs that produce crops continuously [9].

Machine learning is used to solve several problems [10, 11]:
(i) extracting data on a specific topic;
(ii) making guesses and predictions based on available data;
(iii) improving the performance of some operations based on patterns and algorithms.

3.1. Types of Machine Learning

Algorithms and pattern detection play an important role in dividing machine learning into groups [12]:
(i) Supervised learning: the algorithm generates a function that links the input with the expected output. Classification is the most important problem solved by supervised learning; the function maps an input vector onto one of several classes by looking at input-output examples.
(ii) Unsupervised learning: education and training proceed without a labelled goal, as in clustering. The machine seeks to structure and sort the recorded data according to certain features.
(iii) Semisupervised learning: combines both supervised and unsupervised examples to produce a relevant function or classifier.
(iv) Reinforcement learning: based on the principle of rewards and punishments, whereby algorithms are trained to produce a specific reaction to a positive or negative outcome.
(v) Transduction: determines new outputs based on old inputs and outputs.
(vi) Active learning (learning to learn): the algorithm studies its own inductive bias based on previous practice.

Convolutional neural networks (CNNs), often abbreviated as ConvNets, are a specialized class of deep learning models designed to excel in tasks involving visual and spatial data, such as image and video analysis. CNNs have revolutionized the field of computer vision and significantly improved the accuracy of tasks such as image classification, object detection, and facial recognition [13, 14].

The key feature that sets CNNs apart from traditional neural networks is their ability to learn hierarchical features automatically and adaptively from data. This is particularly important when dealing with complex visual data, as CNNs can recognize patterns at multiple levels of abstraction.

Here are some essential characteristics and components of CNNs [15, 16]:

3.1.1. Convolutional Layers

CNNs utilize convolutional layers to apply a set of learnable filters (kernels) to the input data. These filters systematically scan the input, capturing various features such as edges, textures, and shapes. Convolutional layers are instrumental in preserving the spatial relationships within the data.
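As a minimal sketch (not the code used in this study), the following snippet defines one such convolutional layer in Keras, the library named later in Section 4.6.1; the filter count, kernel size, and padding are illustrative assumptions:

```python
from tensorflow.keras import layers

conv = layers.Conv2D(
    filters=32,          # number of learnable kernels
    kernel_size=(3, 3),  # spatial extent of each kernel
    padding="same",      # preserve spatial dimensions
    activation="relu",
)
# Applied to a batch of images, the layer scans each input with all
# 32 kernels and emits one feature map per kernel, capturing local
# patterns such as edges and textures.
```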

3.1.2. Pooling Layers

Pooling layers, typically implemented as max-pooling or average-pooling, reduce the spatial dimensions of the data, which helps reduce computational complexity while retaining the most important information. This down-sampling operation helps make CNNs computationally efficient.
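A minimal Keras sketch of the pooling stage, again with illustrative parameters:

```python
from tensorflow.keras import layers

# 2x2 max pooling halves each spatial dimension, keeping only the
# strongest response in every window; a 64x64 feature map becomes 32x32.
max_pool = layers.MaxPooling2D(pool_size=(2, 2))

# Average pooling keeps the window mean instead of the maximum.
avg_pool = layers.AveragePooling2D(pool_size=(2, 2))
```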

3.1.3. Activation Functions

Activation functions such as ReLU (rectified linear unit) are used to introduce nonlinearity into the network, allowing it to model complex relationships within the data.
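The ReLU rule itself is a one-liner; the following NumPy sketch shows it applied to a small vector:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: pass positive values through, zero out the rest."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # -> [0.  0.  0.  1.5]
```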

3.1.4. Fully Connected Layers

These layers connect every neuron in one layer to every neuron in the next layer, which is a common architecture for the final layers of a CNN. Fully connected layers are typically used for classification tasks.
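A hedged sketch of such a classification head in Keras, with placeholder layer sizes and class count:

```python
from tensorflow.keras import layers, models

# Flatten the last feature map, then stack two fully connected layers.
head = models.Sequential([
    layers.Flatten(),                        # feature map -> flat vector
    layers.Dense(64, activation="relu"),     # every input connects to every neuron
    layers.Dense(10, activation="softmax"),  # one probability per class (10 assumed)
])
```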

3.1.5. Dropout

Dropout is a regularization technique used in CNNs to prevent overfitting. It randomly drops a fraction of neurons during training, which helps the network generalize better to unseen data.
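In Keras this is a single layer, typically placed between fully connected layers; the 0.5 rate below is a common default, not a value from this study:

```python
from tensorflow.keras import layers

# Randomly zeroes 50% of activations during training only;
# at inference time the layer passes data through unchanged.
drop = layers.Dropout(rate=0.5)
```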

3.1.6. Transfer Learning

CNNs can leverage pretrained models on large datasets, such as ImageNet, and fine-tune them for specific tasks. This transfer learning approach saves training time and resources.
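A minimal transfer-learning sketch, assuming an ImageNet-pretrained MobileNetV2 backbone from keras.applications and a placeholder five-class task:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Load the pretrained backbone without its original classification head.
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained features

# Train only a small new head on the target task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # placeholder: 5 target classes
])
```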

The success of CNNs can be attributed to their ability to automatically learn and extract features from data, making them well suited for tasks where feature engineering would be challenging and time-consuming. Their applications extend beyond computer vision into fields such as natural language processing (NLP) and speech recognition, where they are adapted to process sequential data effectively.

Overall, convolutional neural networks have revolutionized the field of artificial intelligence, making significant strides in pattern recognition, image analysis, and a wide range of real-world applications. They continue to be a driving force in advancing technology and reshaping how we interact with the visual world.

The authors in [17] introduced an algorithm, the adaptive convolutional neural network (ACNN), for diagnosing bearing defects. They enhanced the traditional CNN model, achieving a remarkable 96.8% accuracy in identifying defect types and severity levels. The ACNN adapts to the vibration levels of the input data, ensuring faster convergence during training and demonstrating superior error diagnosis compared to the traditional algorithm.

The paper in [18] proposes a smart CNN traffic-forecasting application to classify data, including encrypted data. First, the vector data are converted into a standard array and then into grey-scale images; the resulting matrix is fed into the system to identify the data. The system adopts two convolutional layers and two fully connected layers. The convolutional layers extract the characteristics of the input data, producing a two-dimensional feature map from each convolution operation, and each convolutional layer in the proposed CNN is followed by a ReLU activation layer. The experiment achieved a good accuracy rate of 94.20%.

The paper in [19] proposed a new approach for predicting bearing remaining useful life (RUL) based on a CNN. In the feature extraction stage, the frequency spectrum is extracted; the data then pass to the construction model, which contains a CNN that extracts hidden information from the feature map, with the authors selecting the most suitable network structure. The CNN model contains eight layers: three convolution layers, three pooling layers, one flattened layer, and a ReLU (rectified linear unit) activation function. The experiment, conducted on real-world data, achieved significantly enhanced prediction accuracy. Their new feature extraction method, the spectrum-principal-energy-vector, better represents the information in the raw data.

In [20], the authors proposed a system incorporating a sentiment analysis-based churn prediction model for mobile games, employing word embedding and deep learning algorithms. Utilizing three word-embedding models and four diverse datasets, the study followed a three-stage process: data collection, data preprocessing, and classification. Notably, the sentiment analysis, carried out through deep learning with a CNN, yielded an impressive accuracy of 78.64%.

The study in [21] proposed a system to extract self-learned features using an end-to-end CNN and compared the results with conventional state-of-the-art and traditional computer-aided diagnosis systems. The model consists of eight layers: one input layer, three convolutional layers, three subsampling layers using the ReLU activation function, and one fully connected layer as the output layer. Images were obtained from the public Lung Image Database Consortium (LIDC) repository of 1,018 cases, then segmented and cropped for easier study, leaving clear nodules and marks in each image. The data were divided into training and testing sets, the CNN model was trained to extract features feeding the prediction system, and the model finally classified the prediction results. The prediction network comprises three convolution layers and one fully connected layer with ReLU and batch-norm activation, followed by an output layer with three neurons. Accuracy reached 93.9%.

The paper in [22] provides an overview of various aspects of data mining research. It emphasizes the significance of data mining in extracting valuable insights from extensive datasets found in databases, data warehouses, and data marts. The paper also delves into contemporary developments in the field of data mining, highlighting its broad applicability across different domains.

The paper in [23] emphasizes the pivotal role of R programming for data analysis, with R Studio serving as a valuable graphical user interface (GUI) for generating reports based on contemporary modelling techniques such as random forest and support vector machines. The paper’s specific focus is on analyzing the academic performance of B.A. students at Dibrugarh University, with an examination of how this performance relates to factors such as caste and gender.

The study in [24] aims to predict the final grades of students in an object-oriented programming course at the University of Plovdiv during the 2021-2022 academic year. The results highlight the potential of machine learning, especially random forest, for early identification of students at risk of failing, facilitating timely support and resource optimization for better academic outcomes. The paper in [25] introduces a method that capitalizes on Rattle to streamline the process of choosing an educational data mining model. While specific findings are not presented here, the proposed approach holds promise in simplifying the selection of the most suitable data mining model for educational data, ultimately enhancing decision-making in this field.

The paper in [26] introduces a novel computerized educational approach for teaching power electronics laboratory concepts. The proposed method involves implementing PSpice for core power electronic circuits, particularly those relying on thyristor circuits, to analyze behaviours under varying loads. The simulation models developed serve to augment and support power electronics education at the undergraduate level, integrating effectively with the power electronics laboratory course. An examination of the impact of these simulations on student outcomes reveals that they contribute to a deeper understanding of course material, leading to improved academic performance. This study, conducted in 2009 and published in Comput Appl Eng Educ in 2011, emphasizes the efficacy of incorporating computerized simulations for enhanced learning in power electronics.

The study in [27] introduces machine learning to predict success rates for science fiction films. Leveraging diverse Internet data and advanced supervised machine learning algorithms, it develops a robust predictive classification approach using the Internet Movie Database (IMDb). Recognizing the impact of social media platforms, sentiment analysis is integrated, resulting in a hybrid success-rating prediction model. The findings demonstrate superior precision, faster execution, and higher accuracy compared to previous research. This allows producers and marketers to anticipate a film’s success prerelease, shaping tailored promotional activities. The study sets the groundwork for more accurate prediction models, considering the increasing role of social media, and highlights the potential of machine learning in predicting science fiction film success, offering new possibilities for the industry.

The study by [28] explores photoplethysmography (PPG) as a cost-effective, rapid, and noninvasive tool for detecting coronary artery disease. PPG signals, reflecting changes in microvascular blood volume, are utilized to identify cardiorespiratory disorders. Analyzing data from 360 subjects, a two-stage classification process differentiates healthy and unhealthy subjects with high accuracy. The Naïve Bayes classifier reaches 94.44% for the first stage and 89.37% for the second stage, emphasizing PPG’s accuracy in diagnosing cardiovascular disorders with a simple microcontroller for enhanced patient comfort.

The dynamic landscape of higher education in Jordan demands a deeper exploration of the interplay between technology and student well-being. While Internet of Things (IoT) devices, particularly “video games,” have become ubiquitous among Jordanian university students, research on their potential effects on academic performance and overall well-being remains scarce.

This study addresses a significant gap in the literature by employing machine learning techniques to analyze data from Jordanian university students. Our primary objective is to uncover the intricate relationships between the usage of video games, grade point average (GPA) outcomes, and the overall well-being of students. Additionally, we seek to predict whether a reduction in the percentage of hours spent playing online games could serve as a mitigating factor in averting the risk of GPA decline.

Understanding these connections holds immense value for educators, policymakers, and students themselves as they navigate the ever-evolving impact of technology on academic success and quality of life. Our focus on video games, representing online games within the Jordanian university context, fills a notable void in current research. Given their rising popularity and potential to shape students’ daily routines, investigating their impact assumes even greater significance. By addressing this gap, we aspire to illuminate the complex interplay between technology, academic achievement, and student health in Jordan. Our findings aim to contribute meaningfully to the broader conversation surrounding educational technology and its far-reaching implications.

Specifically, this research will introduce and utilize CNN to predict the influence of video games on both students’ GPAs and their health.

4. Research Methodology

4.1. Dataset

We conducted our data collection from a sample of 4700 university students in Jordan, spanning a specific timeframe from August 20, 2020, to October 20, 2020. During this period, we gathered information about these students, including their history of engagement with online games, dating back five years. Subsequently, we continued our data collection from August 25, 2021, to October 25, 2021, to obtain postengagement data from these students.

The dataset comprising information from all 4,700 students was meticulously analyzed to gain insights into their perspectives and opinions regarding online games. Furthermore, we generated forecasts of the potential impact on grade point average (GPA) for 1,410 students, considering their forthcoming involvement in online games. The anticipated outcome also aims to predict whether a reduction in the percentage of hours spent playing online games could mitigate the risk of GPA decline.

Regarding ethical considerations in our research, we adhered to a set of principles throughout the data collection and analysis process. First, we ensured full transparency by disclosing the researcher’s identity, the purpose of the questions, and the overall research intent. Second, we maintained the anonymity of the questionnaire respondents.

Furthermore, we actively sought to incorporate feedback and reactions from the online community within our inquiries, always obtaining permission from community members to utilize their questionnaire responses. It is worth noting that our primary aim in applying research ethics was to ensure that students provided honest and accurate responses while maintaining their anonymity. No personal information such as names was collected from participating students.

In conclusion, the responses gathered online are considered part of the public domain and opinion. Users, as per the provided definition, granted permission for their responses to be viewed, reviewed, and examined as part of this public discourse.

4.2. Initial Model

In this section, we examine the variables that could affect the neural network. Many variables may influence the proposed experiment, but we identified the most important ones based on aspects common among university students. The 11 input variables and 10 output variables of the CNN are defined in Tables 1 and 2.

The output variables in this study are classified into two distinct categories. The first category examines how these variables impact the health of students. The focus of this paper, however, centres on the second category, specifically exploring how these variables influence the grade point average (GPA) and the hours spent playing.

4.3. Cross-Validation

Cross-validation, sometimes called rotation estimation, is used to evaluate predictive models by dividing the original dataset into a training set and a test set. The original dataset is randomly partitioned into k approximately equal subsamples (k = 10 in this research). One of the k subsamples is held out as the validation data for model testing, and the remaining k − 1 subsamples are used as the training data.

The cross-validation process is then repeated k times (folds), with each of the k subsamples used exactly once as the validation data. The results of the k folds are then averaged to produce a single estimate. The advantage of this method is that all measurements are used for both training and validation, and each observation is used for validation exactly once. We used the 10-fold cross-validation technique: the data were split, trained, and tested ten different times, and we took the average of the results obtained each time.
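A minimal sketch of this 10-fold procedure using scikit-learn’s KFold; X, y, and build_model are hypothetical placeholders for the survey features, the targets, and a model constructor compiled with an accuracy metric:

```python
import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kf.split(X):
    model = build_model()                    # fresh model for every fold
    model.fit(X[train_idx], y[train_idx], epochs=20, verbose=0)
    _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    scores.append(acc)                       # each observation validates once

print(f"mean 10-fold accuracy: {np.mean(scores):.4f}")
```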

4.4. Final Model

Upon assimilating crucial concepts in the domain of artificial intelligence and comprehending the underlying mechanics of the proposed CNN algorithm, the subsequent step involves formulating the definitive design for the proposed system.

Following the experimentation phase, the model was configured with four layers. The input layer encompasses 11 neurons, each employing a rectified linear unit (ReLU) activation function. Subsequently, the convolution layer integrates 12 neurons with a ReLU activation function. The pooling layer, comprising 11 neurons, employs the sigmoid activation function. Lastly, the output layer is composed of ten neurons, where each neuron corresponds to a distinct output feature.

4.4.1. ReLU (Rectified Linear Activation Function)

The activation function is a piecewise linear function that outputs the input directly if it is positive; otherwise, it outputs zero [29]. It has become the default activation function for many types of neural networks because a model that uses it is easier to train and often achieves better results [4, 29].

ReLU stands for the rectified linear unit and is a type of activation function. Mathematically, it is represented as f(x) = max(0, x).

4.4.2. Sigmoid Activation Function

A sigmoid function is a mathematical function with a characteristic S-shaped curve. The term commonly refers to the logistic function, also called the logistic sigmoid function, σ(x) = 1/(1 + e^(−x)). All sigmoid functions map the entire real line onto a small interval, such as between 0 and 1 or between −1 and 1, so one use of a sigmoid function is to transform a real value into one that can be interpreted as a probability.

Figure 3 shows our proposed model.
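As a rough Keras sketch of this four-layer description, assuming the 11 inputs are treated as a one-dimensional sequence (the kernel size, pool size, loss, and output activation are our assumptions; the paper specifies only the neuron counts and the ReLU/sigmoid choices):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(11, 1)),                # the 11 input variables
    layers.Conv1D(12, kernel_size=3, padding="same",
                  activation="relu"),           # 12-neuron convolution layer
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(11, activation="sigmoid"),     # 11-neuron sigmoid stage
    layers.Dense(10),                           # ten output features
])
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
```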

4.5. Why a Convolutional Neural Network?

Convolution is the application of a filter to input data that produces an activation result or a prediction. Repeated application of the same filter produces a map of activations called a feature map, which shows the locations and strength of a detected feature in the input, such as an image.

A convolutional network has the following characteristics and features. Convolutional filters (kernels) are defined by a width and a height and have a certain number of input and output channels. The hyperparameters associated with the convolution operation, such as padding size and stride, are essential aspects to consider. Convolutional layers convolve the input and pass the result to the following layer, analogous to the response of a neuron in the visual cortex to a specific stimulus [30]. Each convolutional neuron processes data only within its receptive field, whereas fully connected feedforward neural networks can be used to learn features and classify data.

Convolutional networks may include local or global pooling layers to streamline the underlying computation. Pooling layers decrease the dimensions of the input data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters, typically 2 × 2, while global pooling acts on all the neurons of the convolutional layer [8]. There are two common types of pooling: max and average. Max pooling uses the highest value of each neuron cluster at the previous layer [10, 31], while average pooling uses the average value instead.
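The difference between the two pooling types can be seen on a single 2 × 2 cluster:

```python
import numpy as np

window = np.array([[1.0, 3.0],
                   [2.0, 8.0]])   # one 2x2 local pooling cluster

print(window.max())    # max pooling keeps the strongest activation: 8.0
print(window.mean())   # average pooling keeps the mean: 3.5
```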

Even so, the authors in this research have chosen CNN as it is a powerful and well-established machine learning model for prediction and recognition. It is noteworthy that our preference for a CNN over alternative machine learning algorithms is guided by the distinct characteristics of the data and the unique requirements of the addressed problem. This consideration includes factors like spatial hierarchies in data. Moreover, CNNs exhibit proficiency in managing complex relationships within data, rendering them particularly adept for tasks characterized by intricate patterns and dependencies.

Also, CNNs are particularly effective at capturing structure in their input, which suited the data analyzed in this study, namely data related to online games and their effects on students.

However, we aim to use other machine learning models in future work for comparison, which could provide a more comprehensive analysis of our findings. This would allow a comparative assessment of the CNN’s performance and a broader perspective on the effectiveness of different machine learning approaches in the context of this research.

4.6. Experiment and Results

The impact of video games (Internet of Things toys, often representing online games) on students’ grade point averages and health is a complex and multifaceted issue with ongoing research and debate. We considered here only GPA:
(1) excessive video game engagement can lead to less time dedicated to studying and completing schoolwork, potentially lowering grades;
(2) the immersive nature of video games can be distracting, hindering focus and concentration on academic tasks;
(3) late-night gaming sessions can disrupt sleep patterns, impacting cognitive function and learning.

Constructing the convolutional neural network (CNN) model relies on selecting pivotal factors that significantly influence the outcomes. Our questionnaire was systematically administered to discern these influential factors, aiming to predict the impact of video games on students. The overarching goal is to guide students, advising them to reduce the percentage of time spent engaging with video games as a strategic measure to safeguard their GPA.

4.6.1. Performance Metrics

To identify the best system architecture for building the model, many architectures were constructed through an iterative process and compared based on their prediction accuracy. The different CNN architectures were built and tested using the Keras library, run under Windows on an Intel® Core™ i5-2450M CPU @ 2.50 GHz with 4.00 GB RAM and the Windows 10 64-bit operating system.
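A hedged sketch of this iterative comparison, varying the training fraction and hidden-layer count over the configurations reported in Section 5; make_cnn, X, and y are hypothetical placeholders:

```python
from sklearn.model_selection import train_test_split

results = {}
for train_frac in (0.6, 0.7, 0.8):          # the three splits tested below
    for n_hidden in (1, 2):                 # one vs. two hidden layers
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, random_state=42)
        model = make_cnn(hidden_layers=n_hidden)
        model.fit(X_tr, y_tr, epochs=20, verbose=0)
        _, acc = model.evaluate(X_te, y_te, verbose=0)
        results[(train_frac, n_hidden)] = acc

best = max(results, key=results.get)
print("best configuration:", best, "accuracy:", results[best])
```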

5. Results

In choosing the appropriate method for dividing the data into training and testing sets, we found that the holdout method did not differ from the cross-validation method on our data. Therefore, the traditional holdout method was adopted for dividing the data into training data and test data.

5.1. CNN Prediction by Splitting the Data into 60% as the Training Set

The CNN was first trained using 60% of the dataset as the training set with a single hidden layer, and all experiments were documented to determine the optimal accuracy ratio. When incorporating two hidden layers, the accuracy rose to 96.1%, as depicted in Figure 4.

5.2. CNN Prediction by Splitting the Data into 70% as the Training Set

By training the CNN using 70% of the data as the training set, we obtained higher accuracy than with 60%, especially when using two hidden layers. The results are shown in Figure 5.

5.3. CNN Prediction by Splitting the Data into 80% as the Training Set

Figure 6 shows the accuracy results when we used 80% of the data as a training set with one and two hidden layers.

Figure 6 shows that this yielded no improvement in accuracy over using 70% of the data for training.

These results helped us choose the appropriate architecture for the proposed CNN model: using 70% of the data for training with two hidden layers yielded an accuracy rate of 96.69%, the highest and best result obtained.

Finally, we tested 1,410 students who play online games and studied the effect of the time spent playing on their college GPA and academic performance. Figure 7 shows that grades declined for 80% of the students, increased for 2%, and were unaffected for the rest.

In Figure 8, the percentage of total playing hours to be reduced for each student is depicted based on the CNN predictions. Notably, this figure exclusively represents students whose GPAs are affected by video games, i.e., 80% of the 1,410 students; adjusted for the model’s 96.6% accuracy, this equates to roughly 1,090 students (1,410 × 0.80 × 0.966 ≈ 1,090).

The figure illustrates a range from a maximum percentage of 99.9% to a minimum of 4.1%. This implies that the system suggests, for instance, that a student at the maximum reduce playing hours by 99.9% to safeguard their GPA, while a student at the minimum need reduce playing hours by only 4.1%. On average, across the 1,090 students, the system suggests a reduction of 48.36% in playing hours to ensure the protection of their GPAs and mitigate potential risks.

5.4. Comparative Analysis of This Study against State-of-the-Art Results

This comprehensive examination delves into a detailed comparative analysis, systematically contrasting the findings and outcomes of the present study with the latest state-of-the-art results in the respective field. The examination encompasses a thorough exploration of key metrics, methodologies, and innovations employed, shedding light on the distinctive contributions and potential advancements introduced by the current research in relation to the existing state-of-the-art landscape as shown in Table 3.

5.4.1. Several Limitations Are Important to Consider in This Study
(1) Data type dependency: The primary limitation of this research, particularly regarding convolutional neural networks (CNNs), is its reliance on structured data. While CNNs excel in tasks like image recognition, they may not be the most suitable choice when dealing with more intricate, unstructured data types or datasets with high variability. For instance, when handling natural language text or datasets with diverse data forms, the limitations of CNNs may become apparent.
(2) Resource intensiveness: Another significant limitation is the substantial computational resources and training data required. Training deep CNN models can be computationally demanding, and it necessitates access to extensive labelled datasets. This could pose challenges for researchers with limited resources or access to data, potentially limiting the widespread applicability of the study’s findings.
(3) Overfitting risk: CNNs are susceptible to overfitting, particularly when working with small datasets. Ensuring that the model generalizes well to unseen data can be a daunting task. Researchers may need to employ additional techniques, such as data augmentation or transfer learning, to mitigate the risk of overfitting and enhance model performance.
(4) Sample size variability: It is worth noting that some of the studies cited in this research, like the one conducted by [21], might have had access to larger sample sizes. In contrast, the current study might have been constrained by a smaller sample size. This discrepancy in sample size could impact the robustness and generalizability of the study’s findings. Larger sample sizes typically provide more statistically reliable results and broader applicability to the target population.

Addressing these limitations and recognizing their impact on the study’s outcomes is essential for researchers and readers to accurately interpret the findings and understand the context within which they apply.

6. Conclusion

With the spread of modern technologies that connect devices, such as the Internet of Things, Internet games have spread to the point that every home has at least one person playing them. In this research, we therefore collected data from 4,700 university students in Jordan to study the effect of online games on students’ grade point averages. A CNN was designed to study the effect of games on student GPAs and achieved an accuracy score of 96.69%.

The results showed that if students continued to play online games for the same period every day, GPA would decrease for 80% of the students and increase for 2%; for the rest of the students, there would be no change in their averages.

The system also proposes reductions spanning from a maximum of 99.9% to a minimum of 4.1%. This signifies that, for example, a student at the maximum is advised to diminish their playing hours by 99.9% to preserve their GPA, while a student at the minimum is counselled to reduce playing hours by only 4.1%. On average, considering the 1,090 students, the system advocates a 48.36% reduction in playing hours to safeguard their GPAs and alleviate potential risks.

In our future endeavours, we intend to delve into alternative algorithms for predicting the influence of online games on students, conducting a comprehensive comparative analysis among them. Furthermore, we aspire to broaden the application of the netnography methodology across multiple studies and to validate its effectiveness by scrutinizing community sentiments and opinions on specific issues. Additionally, we plan to employ machine learning algorithms to assess the impact of video games on students’ health.

Data Availability

Data are available upon request from [email protected].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The researchers would like to express their sincerest gratitude to the Academic Alliance for Reconciliation in the Middle East and North Africa (AARMENA) Capacity Building in Higher Education (CBHE) project, cofunded by the Erasmus+ Programme of the European Union.