Abstract
Education is one of the core elements in building an individual's career. It requires proper strategies and techniques to meet the modern world's requirements, such as intelligent learning systems, intelligent management systems, and intelligent computational systems. At present, there is a dearth of systematic debate on how to proceed along the road of machine learning (ML) and education. This study therefore focuses on the use of artificial intelligence (AI) to promote saxophone informatization teaching strategies, particularly the new strategies that deep learning (DL) brings to saxophone teaching from the perspectives of teaching resources, the teaching environment, teaching and learning strategies, teaching management, and teaching evaluation. With these aspects in mind, a matrix decomposition strategy with dynamic weight learning is suggested and used to produce a recommendation algorithm that fuses multiple contextual features, such as geographic, temporal, and social characteristics, through a linear fusion technique combined with a weight parameter learning process. All experiments are carried out on the Yelp dataset to check whether the recommended algorithm is effective. The performance of the suggested method is compared to benchmark algorithms to show that the dynamic weight parameter learning technique is as effective as gradient descent. The algorithm that employs one contextual factor alone is also compared with the method that uses three contextual factors, demonstrating that the linear fusion of several components improves the system's recommendation performance.
1. Introduction
Music is an auditory art, and as a part of our daily life, it can provide nourishment for our minds from time to time. When we are pleased, we listen to a harmonic tune to let our mood soar; when we are irritable, we listen to a peaceful song to make our mood less restless; and when we are decadent, we listen to uplifting music to restore our good mood. Given these facts, it is impossible to overestimate the influence that music has on us. AI's successful attempts in the sphere of music application and musical education in recent years have also been astonishing. As a result of the rapid development of contemporary science and technology, changes have occurred in the material and technical aspects of music education. Furthermore, educators' teaching models and theories have changed, resulting in positive effects on the development of various theories of modern music education, the formation of new teaching concepts, and the improvement of teaching methods and instructional approaches. These changes are both challenges and results of technological advancement.
AI is playing a vital role in almost every aspect of our daily lives and is not as far away from our daily lives as we think. For example, expert systems, autonomous planning, intelligent search, language translation, and other biometric technologies including fingerprint recognition, face recognition, retinal recognition, iris recognition, and palm print recognition, are some of the applications of AI in our daily lives. All of these domains have AI-powered applications, and this article focuses only on the possibilities and trends of AI applications in art education [1].
Several articles have examined the integration of AI into music education in the last few decades. Recent decades have seen a considerable shift in the world of music education, with more focus on promoting the understanding of theoretical education research, as well as on the role of AI technology in current music education. Both developed countries, where pedagogical theory is sufficiently mature, and developing countries, which are still actively exploring their educational paths, have sought a new approach to music education. Music teachers are aiming to explore new development areas and concepts in arts integration, gradually eliminating outmoded and unscientific educational beliefs in order to meet the demands of today's society while also training the musicians in high demand [2]. Based on current worldwide and local developments, the authors hope to give a thorough and scientific reference for others to comprehend the role of AI in music education, expressing a strong belief that substantial technological breakthroughs will have a positive, robust, and profound influence on the development of music education and training approaches. This work is expected to contribute a modest amount of academic research to the combination of the two fields, offering others a macroscale understanding and reference value for comprehending this field and allowing high-tech findings to become more beneficial to music education.
Among the works in Table 1, Simon Holland's "Artificial Intelligence in Music Education: A Critical Review" is most relevant to this study [9]. The review summarizes the application of AI in music education, including early computer-assisted instruction, multimedia and hypermedia tools, the "classical" lasso, and the music intelligence teaching system (an intelligent teaching system for 16th-century counterpoint). It further covers Mac Voice (a famous intelligent critic of voice), Bamberger's music marking system, the summary marker system, and a review of the methodological rhetoric of AI. Evaluation of methodological discourse, computational models of learning for support and creativity, and highly interactive interfaces based on AI theory are the subjects of three publications that examine the intersection of machine intelligence and music instruction. Wikipedia highlights the history and software development of AI in music education, such as smart music, a computer-based practice tool that allows interaction with artists [10]. Melomics is a portfolio of automated proprietary computational systems for composing music without human intervention, based on a bionic approach and produced by Melomics Media, among others, which will be described in detail below [11]. The link between AI and music is discussed in other works; however, it is explored more in terms of composition than of music education content.
The rest of the paper is organized as follows: Section 2 is about the related work that discusses the music-related work in the literature. Section 3 describes the material and methods followed in the proposed study. This section further highlights the data acquisition and cleaning process as well as the proposed algorithm that laid the foundation of the music recommendation algorithm. Section 4 is about the experimental result and analysis that highlight the findings of the paper. Finally, Section 5 summarizes the overall theme of the paper.
2. Related Work
2.1. The Evolution and Current Research Status of AI in Music Education
When it comes to electronic music, Mitchell Mark, a well-known American music educator, wrote in his book "Contemporary Music Education" that at the time (the 1960s) "it was seen as a form of music learning, and people were learning it only as a new form of music, but they did not realize how much potential this electronic music could have in the future of school music education" [11]. Music education came to regard AI as a high-tech breakthrough in the 1960s, when industrialized nations harnessed AI technology to create electronic keyboard instruments and intelligent electronic synthesizers [12]. These instruments could not only store and play a broad variety of instrument sounds but also switch between them, and they were compact and portable, which made them incredibly convenient. Electronic synthesizers were brought into the field of music education with the purpose of boosting the convenience of communal music teaching in the music classroom. The idea was to change that generation's boring teaching method by utilizing the instrument's cognitive functions. This instrument piqued the interest of educators at the time since it represented an experimental use of intelligent technology in music education.
As science and technology developed at an accelerating pace in the late twentieth century, the music education market and the classroom adopted more efficient, compassionate, and intelligent synthesizers, which became increasingly popular. These instruments were modified to include new features that are more convenient, clever, and polished than those of previous generations. They are not only capable of programming or even algorithmically creating music in a major, harmonic, or polyphonic style, unlike the previous generation, but are also capable of storing a diverse range of sounds in the system, whether of national instruments or the instruments of other countries. Further, they even simulate the sounds of animals and other objects and allow learners to choose from a wide range of sounds according to their needs. Learning to make their own sound, whether the sound of a swallow or a bell, is something students can experiment with. Furthermore, the instrument system includes more user-friendly and powerful sequencing and editing functions than any traditional instrument [13], making it superior to any traditional instrument. The instrument system can also perform intelligent "behaviors" such as programmed acquisition and editing based on the instructions and information input provided by the creator and the user; in other words, the instrument system can ultimately meet the needs of the learner whenever he or she desires. As a result, the research, development, and widespread dissemination of this new musical instrument have compelled music educators and students to seek new ideas, new pedagogical approaches, and new teaching philosophies that can accommodate and adapt to the impact and challenges of scientific and technological achievements.
The formerly collaborative act of playing with numerous individuals will now be replaced by one person creating or performing on a musical instrument, making learning and practice much more effective. Students will be able to play their compositions and practice pieces instantly in and out of class, and teachers will be able to get good, immediate evaluation and feedback on their teaching objectives and the key points of their teaching content in music practice. With the further development and widespread use of this instrument, the teaching methods and theories of “experiential learning” and “comprehensive evaluation” were proposed and established. The foundation and preparation for the establishment of experiential learning, experiential teaching, comprehensive assessment, and teaching theory were made in the practical sense [14].
After generations of electronic musical instruments, intelligent electronic musical instruments have been introduced that are more powerful, portable, and convenient, and that more and more musicians accept, apply, and promote. At the same time, many electronic musical instrument manufacturers have developed intelligent pianos in a variety of styles and with a variety of functions. These pianos can come with their own recording programs and can automatically perform works in different instrumental styles according to different program settings. The newest synthesizers have improved the quality of sound, performance, sequencing, and articulation compared to previous electronic instruments. More convenient and efficient programming functions have replaced the previously complicated operation methods, and manual sequencing modes have been replaced by automatic and even voice-controlled modes. These intelligent instruments can be found in primary and secondary music classrooms, in higher music education institutions, and even in the various music education institutions that have sprung up in society: instrumental ensembles, sight-singing and ear training classes, group piano classes, and music theory teaching, to name a few. While improving students' understanding of musical elements and deepening their theoretical knowledge, they also enhance students' classroom learning ability, comprehension, and adaptability in many ways. It is clear that the use of intelligent musical instruments is of great value and significance.
In recent years, the use of music technology in school curricula has been transforming the way teachers and students communicate in the music classroom, which has led to a subtle change in the way music classroom situations fit into the diverse cultures of society [15–19]. Users usually download suitable smart software to their computers or phones to assist their learning. For example, music science and technology tools can effectively use and integrate students' music composition materials, laying the foundation for new works. This not only inspires classroom teaching and learning but also the creation of new work, giving music educators fresh directions, instructional ideas, and thinking space while opening up new approaches.
Speech recognition is the ability of a computer to recognize human spoken words [10]; recently, however, this technology has also been applied to music education. For instance, Stone et al. [20] examined the construction of a music teaching system using voice recognition and AI technologies in order to increase the consistency and dependability of music instruction and to create a theoretical foundation for future music instruction.
2.2. The Application of AI in Music Education
“Transits into an Abyss” was performed by the London Symphony Orchestra in July of that year. This was a milestone: for the first time, a premier orchestra performed a work composed entirely by a machine. The piece was written by a computer called Iamus, named after a character from Greek mythology who was thought to be able to speak the language of birds. With this method, no human intervention was needed to compose, in just a few minutes, an exceedingly intricate piece of music that often resonates with an audience's emotions. Iamus has already produced millions of original compositions in the contemporary classical style and has the ability to adapt and experiment with different musical genres in the future.
More than that, today's computer systems can algorithmically write pieces that have the same flavor as the masterpieces. For example, the computer scientist and composer David Cope, author of "Experiments in Musical Intelligence," designed the simulation program EMMY, which has produced a wide range of compelling music, from Bach-style chorales and Mozart-style sonatas to Chopin-style mazurkas, as well as a Beethoven-style Symphony no. 10 and Mahler-style opera. Many AI technologies have thus been increasingly used in the field of music, so let us turn to their application in the field of music education.
The early days of AI in education (AI-ED) resulted in the creation of the intelligent tutoring system (ITS), as shown in Figure 1. The most common application of AI technology in education is the adaptive learning system (ALS), which is the primary development direction for teaching and learning in the future and is also one of the topics covered in this paper. The rapid advancement of IT and the introduction and ongoing improvement of new teaching system development models have prompted individuals to create new teaching systems by combining multimedia technology, network foundation technology, and AI technology. Such a system consists of a domain model, a learner model, and a pedagogue model, and it encompasses all of the components necessary for the development of a teaching system, which may be considered to have unequaled advantages and enormous appeal.

The domain model is concerned with the discipline being taught. The pedagogue model refers to teaching techniques that are both appropriate and effective. The learner model reflects the learner's interaction with the computer or machine, and it represents the students themselves. This model can be used by the introductory part of the AI system (i.e., the learner model) to track the instructor's and learner's progress, as well as to decide the most efficient, appropriate, and interesting introductory activities and interactions afterward. More crucially, because the system is always acquiring and changing data, the learner model continuously absorbs and feeds back information about learners' learning behaviors and classroom performance, enriching, organizing, and improving itself.
Applying this principle to music education, an example of saxophone teaching can be explained as follows. The learner module is the student who wants to learn the saxophone, the teacher module is the teacher or software that teaches the saxophone, and the domain module is the information related to the saxophone subject. The intelligent algorithm of the system itself will examine and process the contents of the mentioned modules, and provide the most appropriate content to the learners according to their learning needs and individual learning capability. Further, the repeated analysis of students’ classroom performance (e.g., students’ return to class, mood, accuracy, competition results, etc.) will provide evaluation and feedback, for example, guidance or tips to assist students in making consistent and stable progress in their studies. Continuous analysis of learning results from the system, including important data on performance, students’ learning status and attitudes, and any errors or misconceptions in the learning process, may be delivered to instructors and students using AI technology.
Teachers may use the ease and efficiency of AI to better understand their students’ learning patterns and educate them in the proper way, allowing them to better personalize their teaching to their students’ requirements. Learners can use human-computer interactive learning models to track and observe their own learning process and progress, as well as summarize and reflect on their learning to keep them motivated to learn. These intelligent systems can establish dialogue scenarios of interest to children based on their physiology and psychology, unknowingly introduce them to the scenario, ask questions and communicate with them through a series of games or intelligent behaviors, and impart knowledge and information in multiple human-computer interactive dialogues. It also records the child’s mastery of knowledge, intellectual development, and mental development on the cloud-based data platform. In the process of interaction, the system is able to process the relevant information through various algorithms and continuously improve the information, becoming an adaptive tutor that is increasingly compatible with the child. Systems like this are very suitable for students who are learning saxophone or other instruments and skills from scratch.
3. Materials and Methods
3.1. Data Acquisition
Data analysis includes a wide range of duties, including data security analysis and data characterization, to name a few. In the data preparation process, data quality analysis is a critical step that must be completed before data preprocessing. Data quality analysis primarily determines whether there is any contaminated data inside the original data, such as missing values, anomalous values, duplicate data, and data containing special characters. The database also contains a large amount of data that is useless for making predictions.
3.1.1. Selection of Relevant Attributes
Several attributes must be selected from the dataset before conducting the experiment in order to improve the accuracy and efficiency of the prediction model while also reducing the complexity of the algorithm. Before executing the experiment, the combination of criteria that determine or impact the artist's future song plays must also be chosen. Processes such as variability analysis and morphological charts have commonly been utilized to attain this goal in the past [21–23]. Several attributes of the dataset used in this paper, such as the amount of time that users spent playing, have no association with the prediction target. Because of this, the useless attributes should be deleted from the model in order to reduce the model complexity. The maximum information coefficient method is used in this paper to determine the relationship between attributes: when evaluating the dependency between two variables X and Y in a vast amount of data, the information gain coefficient (also known as the mutual information technique) is a comprehensive and fair criterion that weighs the dependence between the two variables. For two discrete variables X and Y, the information gain coefficient R(X, Y) is calculated as

R(X, Y) = Σ_{x∈X} Σ_{y∈Y} p(x, y) log [p(x, y) / (p(x) p(y))],   (1)

where p(x) and p(y) are the marginal probabilities of the variables and p(x, y) denotes the joint probability distribution of the two variables.
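The mutual information criterion can be illustrated with a minimal Python sketch; the function name and the list-based attribute columns are our own illustrative choices, not part of the paper's implementation.

```python
import math
from collections import Counter

def information_gain_coefficient(xs, ys):
    """Mutual information R(X, Y) between two discrete attribute columns:
    sum over observed pairs (x, y) of p(x, y) * log(p(x, y) / (p(x) * p(y)))."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)      # marginal counts
    pxy = Counter(zip(xs, ys))             # joint counts
    r = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        r += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return r
```

Attributes whose coefficient with the prediction target is near zero (such as the playing-time attribute mentioned above) would then be candidates for removal.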
In the acquired dataset, some data do not meet the requirements, such as missing data, noisy data, and abnormal data, so the abnormal data need to be processed before the model is built. The processing steps are as follows.
Step 1. In this paper, the mean fill method is used to fill the missing data values:

x_t = (x_{t−1} + x_{t+1}) / 2,   (2)

where x_t represents the information that is missing at the current instant, while x_{t−1} represents the observed information at the previous moment and, similarly, x_{t+1} represents the observed information at the next moment.
Step 2. Smoothing of data: the goal of data smoothing is to transform data with significant skewness in the key features of the dataset into data that better follow a Gaussian distribution, laying the groundwork for better classification results later. In this work, the data are smoothed using the least-squares method.
Step 3. Data normalization: when DL methods such as neural networks are used for model building, the greater the value of an attribute in the dataset, the greater its influence on the model, which risks invisibly drowning out the numerical attributes with smaller values. As a result, the values in the dataset should be normalized; in this study, the data are normalized to the range [0, 1] using the min-max standardization approach:

x′ = (x − x_min) / (x_max − x_min),   (3)

where x′ represents the data after normalization, x indicates the original data, and x_max and x_min denote the maximum and minimum values found in the original data, respectively.
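The mean-fill and min-max steps above can be sketched in a few lines of Python. The helper names are hypothetical, and the mean-fill sketch assumes that missing values (represented as None) are isolated and interior, so both neighbours are observed.

```python
def mean_fill(series):
    """Fill each isolated interior missing value (None) with the mean of its
    neighbours: x_t = (x_{t-1} + x_{t+1}) / 2."""
    filled = list(series)
    for t, v in enumerate(filled):
        if v is None:
            filled[t] = (filled[t - 1] + series[t + 1]) / 2
    return filled

def min_max_normalize(xs):
    """Min-max standardization to [0, 1]: x' = (x - min) / (max - min)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```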
3.2. Proposed Algorithms
3.2.1. Constructing the Algorithm Formula
This section introduces the proposed matrix decomposition model with dynamic weight learning. It is used to develop a recommendation algorithm built on a linear fusion method that incorporates several contextual factors, such as the geographic, temporal, and social factors, and it introduces the process of weight parameter learning.
The traditional matrix decomposition model formulates the rating prediction matrix as

R̂ = U Vᵀ,   (4)

where U and V are the user implicit matrix and the interest point implicit matrix, respectively. On this basis, when context factors are fused, the implicit vectors involved are calculated as follows: for example, when analyzing the social factor, the user implicit vectors of the user's friends are accumulated and then dotted with the interest point implicit vector; when calculating the base factor, the vectors are accumulated and then dotted with the user implicit vector. Finally, a weight parameter is assigned to each factor. Thus, a generalized approach can be used for the calculation. The construction formula of the proposed algorithm in this paper is

r̂_{ui} = Σ_{k=1}^{n} w_k ⟨U_u, Q_k⟩ + U_u · V_i,   (5)

where r̂_{ui} is the predicted final rating of user u on interest point i; ⟨U_u, Q_k⟩ is the inner product, within the matrix decomposition formula, of the user implicit vector and the implicit vector of the k-th factor; w_k is the fusion weight set by the system for the k-th factor; and n is the total number of factors. The formula ends with the addition of the product of the user implicit vector and the implicit vector of the interest point. With this generalized formula, multiple factors can be added to the model and dynamically extended. In the following section, the category, music-based, social, and temporal factors are added and their validity is demonstrated experimentally.
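As a concrete reading of the construction formula, the following Python sketch computes the fused score from plain list vectors: the base matrix-factorization term plus a weighted inner-product term per contextual factor. The function and argument names are illustrative assumptions, not the paper's implementation.

```python
def dot(a, b):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def predict_rating(u_vec, i_vec, factor_vecs, weights):
    """Fused prediction: <U_u, V_i> plus the weighted sum of the inner
    products of the user vector with each factor's implicit vector."""
    score = dot(u_vec, i_vec)                 # base matrix-factorization term
    for w_k, q_k in zip(weights, factor_vecs):
        score += w_k * dot(u_vec, q_k)        # k-th contextual factor term
    return score
```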
3.2.2. Algorithm-Based Weight Parameter Learning Process
In this algorithm, a gradient descent-like approach is used to dynamically learn the weight parameter of each factor. The MAE value of the overall algorithm is compared with the MAE value of the algorithm using each factor alone. First, the weight of each factor is initialized to a random value between 0 and 1, and then the MAE value of the recommendation result of the algorithm is calculated. If the MAE value of the algorithm is higher than the MAE value of a factor alone, then the weight of that factor is reduced:

w_k′ = w_k − m · |r_{ui} − r̂_{ui}|.   (6)

If the MAE value of the algorithm is lower than the MAE value of a factor alone, then the weight of that factor is increased:

w_k′ = w_k + m · |r_{ui} − r̂_{ui}|,   (7)

where w_k′ is the factor weight after updating, w_k is the factor weight before updating, and m is the weight learning rate, which plays the role of the descent rate in the gradient descent method. r̂_{ui}^{(k)} is the prediction score of user u for interest point i calculated using the k-th factor alone, r̂_{ui} is the prediction score calculated by the fused algorithm, and |r_{ui} − r̂_{ui}| is the absolute value of the difference between the true score and the predicted score. By updating the weight parameters in this way, the linear fusion can be tuned for different types of users, so that the fused algorithm better suits the characteristics of users with different check-in patterns. For example, some users' check-ins are driven more strongly by the geographic factor, so the weight assigned to the geographic factor for those users is larger; other users' check-ins are more strongly temporal, so the weight assigned to the temporal factor for those users is larger. The algorithm can thus better fit the check-in characteristics of all users and improve the overall recommendation performance.
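A minimal sketch of one weight update step, under our reading of the rule above (shrink the weight when the fused MAE is worse than the single-factor MAE, grow it otherwise); the function signature and the explicit error argument are assumptions, not the paper's code.

```python
def update_weight(w_k, mae_fused, mae_factor_alone, error, m):
    """One dynamic-weight learning step for factor k.
    error: |true rating - fused predicted rating|; m: weight learning rate."""
    if mae_fused > mae_factor_alone:
        return w_k - m * error   # fusion worse than the factor alone: reduce w_k
    return w_k + m * error       # fusion at least as good: increase w_k
```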
Through the above analysis and research, the main calculation process of the algorithm proposed in this paper is as follows:
(1) Initialize the weight parameters of each factor and normalize them so that they add up to 1.
(2) While iter < maxiter do:
(3) Calculate each user's predicted rating by equation (5).
(4) Following the method in Section 3.2.2, continuously optimize the weight parameters of each factor according to equations (6) and (7).
(5) If the loss function converges, break out of the iteration.
(6) iter++.
(7) After the maximum iteration or convergence of the objective function, return the weight parameters of each factor, as well as the implicit vectors of users and interest points.
(8) After deriving the weight parameters of each factor, perform interest point recommendation for each user [4].
(9) For all users u in the user set:
(10) Calculate each user's predicted score for each interest point according to equation (5) and derive a list of predicted scores S(u).
(11) Sort the set of predicted scores for each user and select the top k as the final recommendation list R(u).
(12) Return the final set of recommendation lists R(u) for each user.
Of the above steps, steps 1 to 7 constitute the dynamic weight parameter learning algorithm, whose main function is to learn the weight of each factor in the rating prediction formula. Steps 8 to 12 constitute the interest point recommendation algorithm, whose function is to calculate the interest points recommended to each user based on the learned weight parameters and the various implicit vectors [20, 23].
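The recommendation half of the procedure (sorting each user's predicted scores and keeping the top k) can be sketched as follows; the dictionary-based data layout is an illustrative assumption.

```python
def recommend_top_k(predicted_scores, k):
    """For each user, sort the predicted POI scores in descending order and
    keep the top-k POI identifiers as the recommendation list R(u)."""
    return {
        u: [poi for poi, _ in sorted(scores.items(), key=lambda p: p[1], reverse=True)[:k]]
        for u, scores in predicted_scores.items()
    }
```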
4. Experimental Results and Analysis
In this paper, experiments are conducted using the Yelp dataset to verify the effectiveness of the proposed algorithm. Figure 2 represents the key attributes and the percentage of each attribute present in the Yelp dataset.

As shown in Figure 2, the main characteristics of saxophone teaching are reflected in the technique of the instrument, the sound, and the expression of emotions.
4.1. Different Performance Metrics Attained via the Proposed Algorithm
Performance evaluation metrics play a key role in tracking the efficiency and performance of a model. In this study, two main performance measures, accuracy and recall, are considered to track the performance of the proposed algorithm. Figure 3 shows the accuracy of the proposed algorithm for different probability estimates.

From Figure 3, it is clear that the accuracy of the algorithm varies across text characteristics but remains above 90% for every text characteristic.
Figure 4 illustrates the recall of the proposed algorithm for different probability estimates.

As shown in Figures 3 and 4, there is little difference between the different probability estimation methods. The general probability estimation methods do not show significantly lower classification accuracy; rather, the figures above show that they have slightly higher classification accuracy. However, this does not mean that the Laplace estimation and the other estimation methods in our probability estimation module are meaningless, because probability processing is necessary for website classification, and in a well-scaled system all three probability estimation methods have a reason to exist. The subsequent experiments are performed by choosing estimates with performance comparable to the general probability estimation.
In this paper, the performance of the algorithm is evaluated using two metrics, Recall@k and Precision@k. The dataset is partitioned so that 80% of the data are used as the training set, while the remaining 20% are used as the test set. For each user, the predicted scores for the POIs that have not yet been visited are calculated, and the top k POIs with the highest predicted scores are selected as the recommendation list for that user. The recommendation performance of our algorithm is measured by comparing the recommendation list with the set of POIs visited by the user in the test set.
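The two metrics can be computed per user as in the following sketch; the helper name is hypothetical, since the paper does not give its implementation.

```python
def precision_recall_at_k(ranked_pois, visited_pois, k):
    """Precision@k and Recall@k for one user: hits are the top-k recommended
    POIs that also appear in the user's test-set visits."""
    hits = len(set(ranked_pois[:k]) & set(visited_pois))
    precision = hits / k
    recall = hits / len(visited_pois) if visited_pois else 0.0
    return precision, recall
```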
In order to verify the performance of the algorithm, the experimental results of the proposed algorithm are compared with those of the following recommendation algorithms.
FSS-FM: this algorithm uses a hybrid optimization method of ranked learning and alternating least squares for the optimization of the objective function, and discusses the impact of the recommendation performance of the music basis factor.
TSG-MF: this algorithm extends the user multilabel matrix, music-based influence matrix, and social factor influence matrix based on the nonnegative matrix decomposition, and regularizes these 3 terms into the objective loss function of the matrix decomposition algorithm.
SSTPMF: this algorithm investigates the influence of interest, point similarity, and user similarity on the performance of the algorithm, and inscribes the social factor information, check-in time information, music base factor information of interest points, and category factor information of users into the probability matrix decomposition model.
4.2. Empirical Study Analysis 1: Comparison with the Benchmark Algorithms
The purpose of this empirical study is to compare the performance of the proposed algorithm with that of the benchmark algorithms, in order to demonstrate the effectiveness of the gradient-descent-like dynamic weight parameter learning method. The final experimental results are shown in Table 2.
From Table 2, the following conclusions can be drawn. TSG-MF outperforms all the algorithms except SSTPMF and the algorithm proposed in this study. SSTPMF and FSS-FM both integrate music-based contextual factors on top of matrix decomposition, so their performance improves to a certain extent; TSG-MF, however, starts from user check-in tags, music-based factors, and social factors, so it mines deeper information and its performance improves significantly.
It can be seen that the SSTPMF algorithm performs better for two reasons. First, SSTPMF uses the probability matrix decomposition model as its basic algorithm, which copes better with data sparsity than traditional nonnegative matrix decomposition. Second, SSTPMF calculates the similarity between users, the similarity between interest points, and the similarity between users and interest points from social similarity, temporal similarity, topic-category similarity, and music-based similarity, which is a more comprehensive set of contextual factors.
The proposed algorithm outperforms all of these algorithms, which indicates that the gradient-descent-like dynamic weight parameter learning contextual-factor fusion algorithm studied in this paper is more effective in improving recommendation performance.
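The idea of learning linear-fusion weights for several contextual score components by gradient descent can be sketched as follows. The three score columns and the "true" weights are synthetic illustrations of the geographic, temporal, and social components, not the paper's actual model:

```python
import numpy as np

def learn_fusion_weights(S, y, lr=0.05, epochs=5000, seed=0):
    """Learn linear-fusion weights w so that S @ w approximates y,
    by full-batch gradient descent on the mean squared error.

    S: (n_samples, n_factors) per-factor predicted scores
       (e.g. geographic, temporal, social components).
    y: (n_samples,) observed rating/check-in signals.
    """
    rng = np.random.default_rng(seed)
    w = rng.random(S.shape[1])          # random initial weights
    n = len(y)
    for _ in range(epochs):
        err = S @ w - y                 # residual of the fused prediction
        grad = S.T @ err / n            # gradient of 0.5 * mean squared error
        w -= lr * grad
    return w

# Synthetic data generated with known fusion weights (0.5, 0.3, 0.2).
rng = np.random.default_rng(1)
S = rng.random((200, 3))
y = S @ np.array([0.5, 0.3, 0.2])
w = learn_fusion_weights(S, y)
print(np.round(w, 2))  # recovers approximately [0.5, 0.3, 0.2]
```

Because the data are noiseless and linear, gradient descent recovers the generating weights; with real check-in data the learned weights instead express how much each contextual factor contributes to the fused score.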
4.3. Empirical Study Analysis 2: Comparison with Algorithms That Consider a Single Factor
The purpose of this empirical study is to compare algorithms that use one contextual factor alone with the algorithm proposed in this paper, which combines three contextual factors, in order to show that the linear fusion of multiple factors is indeed effective in improving recommendation performance. The experiment uses the RMSE metric to measure the recommendation performance of the algorithms. The experimental results are shown in Figure 5.

Compared with the algorithms that use only the music-based context factor, only the social context factor, or only the temporal context factor, the algorithm in this paper that incorporates all three factors achieves a lower RMSE. This indicates that it has the best performance and confirms that incorporating multiple context factors effectively improves the recommendation performance of the algorithm.
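For reference, the RMSE metric used in this comparison is computed as the square root of the mean squared difference between predicted and observed ratings. A minimal sketch, with hypothetical ratings:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and observed ratings."""
    assert len(predicted) == len(actual) and actual
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Toy example: three predictions against three observed ratings.
print(round(rmse([3.8, 2.1, 4.9], [4.0, 2.0, 5.0]), 4))  # 0.1414
```

Lower RMSE means the fused predictions track the observed ratings more closely, which is exactly the comparison Figure 5 reports.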
5. Conclusion
This paper promotes the study of saxophone informatization teaching strategy using AI, and the study focuses on three aspects. First, critical research on AI for educational change has been carried out. On the basis of a discussion of the relevant theories and technical support, we examine the new impact and changes that AI technology has brought to teaching and learning from the perspectives of teaching resources, teacher preparation, learning, teaching methods, teaching management, and evaluation. Second, AI encourages new improvements in both teaching material and lecture dynamics. Researchers are examining the features and functions of new intelligent teaching tools, as well as the effectiveness of AI technology in optimizing educational opportunities, from three perspectives: intelligent evolution of learning resources, intelligent pushing of teaching resources, and intelligent retrieval of educational resources. This research also examines the connotation, characteristics, and technical support of intelligent teaching environments. Third, research on AI for teaching and learning is evolving. AI-assisted teacher preparation, instruction, and question answering aid the development of intelligence, accuracy, and personalization in the classroom, and AI can also help students recognize, discover, and improve themselves and their learning experience. With these aspects in mind, a matrix decomposition approach with dynamic weight learning is proposed and used to build a recommendation algorithm that combines multiple contextual factors, such as geographic, temporal, and social factors, through linear fusion together with a weight parameter learning process. To test the effectiveness of the proposed algorithm, experiments were carried out on the Yelp dataset.
In order to show the effectiveness of the gradient-descent-like dynamic weight parameter learning approach, the performance of the proposed algorithm was compared with that of the benchmark algorithms. A comparative analysis of algorithms that use one contextual factor alone against the algorithm that fuses three contextual factors was also carried out, proving that the linear fusion of multiple components is useful in increasing the algorithm's recommendation performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.