Abstract

For the past year, everyone has faced difficulties due to the rapid spread of the coronavirus. Students, parents, and teachers in particular have been handling new challenges in the education sector. Since schools and colleges were closed during the COVID period, students began lagging in their subjects. As an alternative, offline classes were converted to online courses, also called virtual classes held in virtual classrooms. This conversion has made teaching somewhat more advanced by incorporating various computer-based technologies. Technologies such as artificial intelligence, cloud computing, and machine learning paved the way for exploring data transmission in terms of timely delivery of content and lower error rates, as well as nontechnical goals such as making classes interactive and helping students understand subject concepts. In this research work, an online teaching class on music is considered; to be specific, traditional Chinese music is taken for the study. An artificial intelligence model is designed with the aid of wireless sensor networks for the online class on this musical subject. The Q-learning algorithm, an artificial intelligence-based reinforcement learning algorithm, is implemented. The aim of the Q-learning algorithm in this online teaching of classical music is to check the frequency level of the music, which aids in the automatic transfer of another wavelength inside the dataset.

1. Introduction

As modern science and technology progress, neural networks are being employed more frequently in music education, which benefits the growth of music education in China. Chinese students pay more attention to pop music and expect artificial intelligence in learning [1]. The implementation of artificial intelligence in music education has shattered the folk-music education paradigm; in particular, the use of computer music systems and highly intelligent music software has considerably enhanced the level of music instruction and enlarged the music teaching model. An artificial intelligence system has the potential to provoke students’ interest in studying music, which is a far more innovative approach than traditional student education [2]. Multidisciplinary music instruction has been carried out using artificial intelligence systems. Artificial intelligence is utilized in the music education activities of elementary and middle school students to increase their passion and initiative, which helps active learning and information retention [3]. In university and nonprofessional arts education, traditional classroom content is transformed into instructional courseware for artificially intelligent systems, which helps guide students toward new knowledge and allows them to participate in music learning through the artificial intelligence system, resulting in the greatest uptake of musical knowledge [4]. With the advancement of computer technology, a plethora of intelligent music software has emerged, allowing music tasks that originally relied on synthesizers or music workers for processing and editing to be completed entirely by computer, thereby improving the ability to process song data and expanding the range of music information [5]. This type of multitrack audio is extremely powerful: users may edit, change, record, and play all types of music elements, as well as process them using artificial intelligence. This new form of music system is now being used in music education, which has greatly aided its growth. In China, many people are interested in learning music online, so there is a need for a teaching system model that helps teachers guide students in learning music [6]. The contribution of this study is to design such a teaching model and analyze the frequency level of the music, which helps in the automatic transfer of another wavelength inside the dataset. This paper is organized into six sections. Section 1 presents the introduction to the study. Section 2 discusses related works, and Section 3 explains the dataset and methods used. Section 4 explains the proposed architecture model. Section 5 discusses the results and findings, and Section 6 presents the conclusion and future directions.

2. Related Works

In the 1960s, artificial intelligence was used in music education solely to create a new musical instrument based on musical keyboards. The instrument can hold the timbres of several instruments and has the benefits of being playable by anyone at any time, changing its tone as it is played, and being compact and portable [7]. However, artificial intelligence was still in its infancy at the time, and most music teachers were unaware of this new intelligent musical instrument, or of the significant role it would play in the future development of music education. As a result, even though this type of instrument is used in music education, it has attracted little attention, and many schools’ music instruction continues to follow the traditional teaching pattern. Thanks to the implementation of this music system in music education, teachers and students now have a platform for communication and engagement in the classroom [8]. The way music is taught has also evolved significantly. First, students can use the new sound system to get a sense of the music and the allure of each note. Second, students have the opportunity to rehearse whatever music the teacher is teaching in class. By actually playing, students can better comprehend the musical knowledge the teacher discusses, as well as the qualities and purposes of each musical element. Finally, the instructor can employ this technique, and to some extent everybody benefits from real-world experiences that might not otherwise be available in the classroom [9]. AI can continue to democratize music education in physical or digital classrooms, as well as through applications and tools in and out of the recording studio. Music is an open-ended field with few predetermined aims or regulations, although significant differences exist between novices and professionals; problem finding is at least as essential as problem solving in this field. AI may be used in a variety of ways in music education. The efficacy of the Logo method is contested in the fields where it is best known (mathematics, physics, etc.). The technique has many supporters, but detractors argue that it requires highly motivated and knowledgeable instructors to succeed and that credit for accomplishments is ambiguous (i.e., whether it belongs to the system or to the teacher) [10].

There would be much greater consensus that the Logo method is useful in open-ended domains like music composition; however, this has never been experimentally tested. There are several possibilities for expanding music Logo’s work to include other composing approaches and other areas of music [11]. In general, intelligent tutoring systems are best suited to domains with hard and fast rules and goals, as well as methods for recognizing and categorizing systematic mistakes; in music, such areas are in scarce supply. Previous work in these areas might be improved, and new, relatively well-defined areas in music with clear rules and goals could be identified [12]. It could be used for ear training, for example. However, it is critical to understand the technique’s limits and to avoid using it where it is not suitable. Harmony Space is an example of a human-interface-focused strategy that integrates AI music ideas and techniques; this system draws on a variety of sources [13]. Harmony Space builds on AI theories of a musical domain (harmony) and uses them in place of a task model in the human-computer interaction. Direct manipulation methods are then used, with the domain theory modified as needed, to make those musical components and connections that are judged conceptually significant perceptually salient. This method may be used in a variety of situations. Exploring whether equivalent power in an interface might be derived from group decision theory characteristics of other domains, not necessarily related to music, would be a fascinating research subject [14]. The MC cognitive support framework is relevant not only to music but to any open-ended problem-solving situation. For several reasons, it has a special potency in the case of harmony: it leverages Harmony Space’s elegant representations for harmony, generalizes across tonal and modal characteristics, and applies generative designs that allow for different views. As demonstrated by MOTIVE, similar concepts might be applied to other aspects of music. Negotiation is a key open research topic in AI and AI-ED, with applications in a variety of fields. There are applications for coping with the limits and incompleteness of AI concepts of music in arts education, as well as for investigating sentient cooperation [15]. Given the current levels of domain expertise that such systems typically possess, as well as the direct manipulation and visualization techniques available for facilitating human-machine communication, it is unclear how useful this work is in currently deliverable music education systems that perform in ensemble with students and assess each student’s musical knowledge by listening to their performance [16]. In a music lesson, for example, the instructor may ask the student to play a response to a question, or the teacher might play a piece of music and then ask the student to repeat or recreate it. Not only will students have a better musical experience, but they will also be able to converse and engage with their teachers [17]. Students will shift from a passive to an active role in the classroom, not only listening to the instructor’s explanations but also experiencing and understanding music through the artificial intelligence system, including music that the teacher cannot convey. Students may learn more about the qualities and purposes of each musical part by using such an artificial intelligence program, as well as how these musical elements are produced during the building process [18].

The introduction of artificial intelligence technology to music instruction can also help increase learning and application over the network. The integration of intelligent instruments and software in music education has enabled the incorporation of several new music courses and teaching techniques [19]. Composition, instrumental, analytical, and other courses in music education all use intelligent teaching methods, so students may play, listen, and make changes at any moment while producing, improving the efficiency of their production. Students may gain a deeper understanding of the features and functions of musical knowledge, music symbols, and musical elements in the network system, giving them a better music learning experience [20]. Teachers and students may make use of the benefits of online classroom learning to increase engagement and improve the quality of music education [21, 22]. The use of networked learning has resulted in two main changes in music education: on the one hand, it has influenced conventional conceptions of music, and on the other, it has altered how musical knowledge is obtained. Because of the features of the Internet, music education has expanded beyond the campus and into the wider world, globalizing the teaching of music [23]. Teachers and students can obtain music information more easily thanks to the network: they can not only find the musical information they need quickly but also acquire much more music information, which is more comprehensive [24]. Students who study through the network, on the other hand, may not be able to fulfill their learning goals owing to the network’s huge and complicated content. As a result, schools should establish online music learning courses so that instructors may utilize the network to obtain additional music textbooks and explain various music concepts to students over the network, thereby deepening their grasp of the music. The Internet has thus become an integral element of music education in schools. The connection extends into the classroom setting, broadening students’ perspectives and connecting them to the rest of the world beyond the original textbook [25]. Students may use the Internet not only to get additional musical knowledge and information but also to upload their own musical works so that more people can see and hear them, integrating themselves into the global music ecosystem and learning and enriching themselves in the process [26].

Artificial intelligence in music education may help students reveal their problems in the process of music learning in the future society by covering a range of diverse music resources and supplying students with the learning resources they require, thereby improving the efficiency of music teaching and learning [27]. For instance, in piano education for children, artificial intelligence can help learners discover good teacher resources and rely on the web platform supplied by machine learning to obtain high-level music instruction, allowing them to experience the magic of the organic integration of music, science, and technology [28]. Furthermore, future artificial intelligence may be able to effectively comprehend a music teacher’s speech and emotion, as well as follow the teacher’s humanized teaching technique, expanding and deepening the music delivered by the machine. It should be mentioned that artificial intelligence’s potential application in music education has several limitations. First, they originate from the uniqueness of music education: machine learning primarily serves a supporting role, such as teaching the theoretical foundations of music (e.g., tone, scale, and arpeggio), but artificial intelligence has limitations when it comes to the emotional aspects of music teaching, such as musical emotion, expression of musical content, and tone. Second, they stem from limits recognized by the industry: most individuals in the business think that music has to be perceived, and machines still have certain limitations in experiencing several complex human feelings [29]. The present study focuses on developing an effective teaching model for students and also analyzes the quality of music using wireless sensor networks.

3. Materials and Methods

3.1. Dataset

The music dataset contains the wave signal represented in Figure 1. When the classical music wave signal is given as input, the expected outputs are the frequency level, time duration, and power consumption. Low-level signal features make up the temporal and spectral aspects of an audio signal; they are not perceptually motivated and characterize the distinctiveness of a signal in the temporal or frequency domain. Physical feature extraction is done over short overlapping windows, since music exhibits such a wide variety of temporal variations. The dataset was collected from butterfly music.mid and is divided into five parts, namely music1, music2, music3, music4, and music5. Each part has 100 nodes (music samples). For this study, the time, power, and frequency of the music are evaluated.
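As a rough illustration of this windowed feature extraction (not the paper's exact pipeline), the sketch below computes per-frame power and a spectral frequency-level feature with librosa, assuming the MIDI file has first been rendered to audio; the filename butterfly_music.wav is hypothetical.

```python
# Minimal sketch: frame-level features over short overlapping windows.
# Assumes the MIDI has been rendered to audio; "butterfly_music.wav" is hypothetical.
import numpy as np
import librosa

y, sr = librosa.load("butterfly_music.wav", sr=22050)        # audio wave signal
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))      # short overlapping windows

frame_time = librosa.frames_to_time(np.arange(S.shape[1]), sr=sr, hop_length=512)
frame_power = (S ** 2).sum(axis=0)                           # per-frame power
centroid = librosa.feature.spectral_centroid(S=S, sr=sr)[0]  # frequency-level feature (Hz)

print(f"{S.shape[1]} frames: mean power {frame_power.mean():.2f}, "
      f"mean spectral centroid {centroid.mean():.1f} Hz")
```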

3.2. Q-Learning

For this study, the Q-learning reinforcement algorithm is implemented, in which Q stands for the quality of an action. Reinforcement learning is a machine learning technique focused on how software agents should take actions in an environment [30]. The algorithm seeks to maximize a notion of cumulative reward. Q-learning is a value-based learning algorithm that updates the value function according to the Bellman equation [31].
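As a minimal illustration of the Bellman update that Q-learning performs, the sketch below trains a small tabular Q function; the state/action spaces (frequency bands and power levels) and the toy reward are illustrative assumptions, not the paper's environment.

```python
# Minimal tabular Q-learning sketch of the Bellman update; the environment,
# state/action spaces, and reward function are illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 3          # e.g., 5 frequency bands, 3 power levels
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment: higher reward when the chosen action matches the band."""
    reward = 1.0 if action == state % n_actions else -0.1
    next_state = rng.integers(n_states)
    return reward, next_state

state = 0
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    reward, next_state = step(state, action)
    # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```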

Because of the path difference, the delay of the signal generated by the transmitter at an array element is given in the following equation:
$$\tau = \frac{d \sin\theta}{c},$$
where $\theta$ is the direction of the far-field signal, $d$ is the spacing between array elements, $c$ is the propagation speed, and $\tau$ is the array element's time delay.

As a result, the time delay between array elements can be related to the corresponding phase shift $\phi$ as in the following equation:
$$\tau = \frac{\phi}{2\pi f_c},$$
where $f_c$ denotes the center frequency.

The phase angle for narrowband signals is represented in the following equation:
$$\phi = \frac{2\pi d \sin\theta}{\lambda},$$
where $\lambda$ is the signal's wavelength. As a result, if the signal's time delay is determined, the signal's direction may be found from Equation (1), which is the fundamental premise of efficient spectrum estimation approaches.
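A short worked example of the reconstructed array equations above, with illustrative values for element spacing, carrier frequency, and arrival angle:

```python
# Worked numeric example of the array delay/phase relations; all values illustrative.
import numpy as np

c = 343.0              # propagation speed of sound in air (m/s)
f_c = 440.0            # center frequency (Hz), e.g., the note A4
lam = c / f_c          # wavelength (m)
d = lam / 2            # element spacing (half wavelength)
theta = np.deg2rad(30.0)

tau = d * np.sin(theta) / c                 # Equation (1): inter-element time delay
phi = 2 * np.pi * d * np.sin(theta) / lam   # phase angle, consistent with 2*pi*f_c*tau

print(f"tau = {tau * 1e6:.1f} us, phi = {np.degrees(phi):.1f} degrees")
```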

The music is expressed as a sequential, time-indexed problem: the music is separated into segments and solved iteratively using Q-learning via Equation (3). Earlier work on continuous deep Q-learning with model-based acceleration sought to limit data redundancy when linking local subtasks or separate notes. With increasingly complicated models, simulation time and resource requirements grow at an unacceptable rate, a phenomenon known as the curse of high dimensionality. Working jointly in the spectrogram allows significant potential gains in frequency and time resolution.

The eigenvectors of the covariance matrix are arranged according to the size of their eigenvalues, as represented in the following equation:
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N.$$

The matrix has eigenvectors that correspond to the signal subspace and the noise subspace, respectively.

If $\lambda_i$ is an eigenvalue of the covariance matrix $R$ and $v_i$ is the associated eigenvector, then
$$R v_i = \lambda_i v_i.$$

In Q-learning, decisions are made under uncertainty about the conditional transition probabilities and rewards; the agent acts in the best possible way given its current policy. The learning experiences in Equation (5) are tuples assembled from the current state and the action taken. In this spectrum-analysis setting, the Q function of the Q-learning method can be learned by recursive minimization.

Let $Q^{*}$ stand for the minimizer of this recursive objective.

It satisfies the following Equation (6):
$$Q^{*}(s,a) = \mathbb{E}\!\left[\, r + \gamma \max_{a'} Q^{*}(s',a') \;\middle|\; s,a \right].$$

Expanding the right-hand side and comparing it with the left yields an update whose terms denote the signal edge value, the end-of-edge value, the beginning of the next signal, the propagation speed, the frequency-level index, the frequency-level increment, and the selected eigenvectors of the matrix, respectively.
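A compact sketch of this recursive minimization over stored experience tuples, using a tabular Q function and a synthetic batch (both are assumptions for illustration rather than the paper's exact procedure):

```python
# Sketch of recursive Bellman-error minimization over a batch of
# (state, action, reward, next_state) tuples; the batch is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma, lr = 4, 2, 0.9, 0.05
W = np.zeros((n_states, n_actions))          # tabular Q-function parameters

batch = [(rng.integers(n_states), rng.integers(n_actions),
          rng.random(), rng.integers(n_states)) for _ in range(256)]

for _ in range(200):                         # repeated sweeps = recursive minimization
    for s, a, r, s_next in batch:
        target = r + gamma * W[s_next].max() # bootstrapped Bellman target
        td_error = target - W[s, a]          # Bellman (TD) error
        W[s, a] += lr * td_error             # gradient step on 0.5 * td_error**2

print(np.round(W, 2))
```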

Since $A^{H}A$ is a full-rank matrix and $(A^{H}A)^{-1}$ exists, the pseudo-inverse $(A^{H}A)^{-1}A^{H}$ exists as well.

Applying $(A^{H}A)^{-1}A^{H}$ to both sides at the same time gives the least-squares estimate of the signal vector:
$$\hat{s} = (A^{H}A)^{-1}A^{H}x,$$
where $x$ denotes the received signal vector.
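A minimal numpy check of this pseudo-inverse step, with illustrative matrix sizes and noise level:

```python
# Premultiplying the received vector by (A^H A)^{-1} A^H recovers a
# least-squares estimate of the transmitted signal. Sizes/noise are illustrative.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))   # full column rank
s_true = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = A @ s_true + 0.01 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

AH = A.conj().T
s_hat = np.linalg.inv(AH @ A) @ AH @ x       # (A^H A)^{-1} A^H x

print(np.round(np.abs(s_hat - s_true), 3))   # residual is small
```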

A noise matrix is generated by taking the noise-subspace eigenvectors as its columns:
$$E_n = \left[\, v_{K+1}, v_{K+2}, \ldots, v_N \,\right],$$
where $K$ is the number of signal components.

The spectrum can then be determined using
$$P(\theta) = \frac{1}{a^{H}(\theta)\, E_n E_n^{H}\, a(\theta)},$$
where $a(\theta)$ is the array steering vector.

Equations (12) and (13) are used to evaluate the spectrum, which helps determine the acoustic signature of the musical note. Only a clear spectrum can reproduce the overtones of the music.
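The sketch below assembles these pieces into a generic subspace (MUSIC-style) spectrum estimate; the uniform linear array, source angles, and noise level are illustrative assumptions rather than the paper's configuration.

```python
# Subspace (MUSIC-style) spectrum sketch: sample covariance, eigendecomposition,
# noise matrix from noise-subspace eigenvectors, and the resulting pseudo-spectrum.
import numpy as np

rng = np.random.default_rng(3)
M, K, snapshots = 8, 2, 400                    # array elements, sources, snapshots
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta, m=M):
    # Half-wavelength spacing: a(theta)_k = exp(-j * pi * k * sin(theta))
    return np.exp(-1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

A = steering(angles_true)                                            # M x K
S = rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N

R = X @ X.conj().T / snapshots                 # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
E_n = eigvecs[:, : M - K]                      # noise subspace (smallest eigenvalues)

grid = np.deg2rad(np.linspace(-90, 90, 361))
a = steering(grid)                                                   # M x 361
P = 1.0 / np.sum(np.abs(E_n.conj().T @ a) ** 2, axis=0)              # pseudo-spectrum

est = np.degrees(grid[np.argmax(P)])
print(f"strongest peak near {est:.1f} degrees (true sources at -20 and 35)")
```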

4. Proposed Model

Figure 2 represents the proposed architecture, which describes the teaching model for online music education using wireless sensor networks. The online traditional Chinese music system has a database containing teaching management, online classes, a review system, a remote examination system, an online music teaching evaluation system, online training, a music evaluation library, and an online music class library. This database can be accessed via the internetwork support system and wireless sensor networks. To evaluate the music, we use the Q-learning algorithm. A teacher is able to manage the database, the billing system, performance reports, and fault management as well. This model helps the instructor teach classical music in an effective and easier way.
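One way to read the Figure 2 architecture as a data structure is sketched below; the module names follow the text, while the class layout itself is an assumption for illustration.

```python
# Illustrative data-structure view of the Figure 2 architecture (not from the paper):
# module names come from the text; the class layout is assumed.
from dataclasses import dataclass

@dataclass
class MusicTeachingSystem:
    # Database modules listed in the text
    database_modules: tuple = (
        "teaching management", "online classes", "review system",
        "remote examination system", "online music teaching evaluation system",
        "online training", "music evaluation library", "online music class library",
    )
    # Access layers and teacher-facing functions
    access_layers: tuple = ("internetwork support system", "wireless sensor network")
    teacher_functions: tuple = ("database management", "billing system",
                                "performance report", "fault management")

system = MusicTeachingSystem()
print(len(system.database_modules), "database modules accessed via", system.access_layers)
```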

5. Results and Discussion

Recent advancements in wireless technology have resulted in a number of self-contained deployments of wireless networks. Because nodes in multiple systems must coexist, all transmitters and receivers must be aware of their audio-signal surroundings in order to adjust their settings to meet their demands. Machine learning approaches have grown in popularity because of their ability to learn, analyze, and estimate transmitter signals and the associated factors that characterize the frequency domain. The Q-learning method for constructing a reliable framework to track pirate frequency transmitters employs an AI that takes the received audio signal data as input. Once the hostile transmitters have been detected and eliminated, the Q-learning feature coding is used to classify the authorized transmitters. It is predicated on the teaching/transmission and acquisition strategies established by Q-learning: teaching through examples, coaching, learning without a teacher, and learning with a knowledgeable teacher. When we study these Q-learning methods, we can see that they are useful for distinguishing between teaching and learning in a spectrum. In this study, the music dataset has been analyzed.

When the music dataset is analyzed, the audio signals are transmitted and a spectrum of frequencies is generated. The spectrum is the most significant aspect to be considered in the analysis of this study.

Figure 3 represents the frequency analysis as a wave signal. For this frequency analysis, power and time are considered. When music is transposed up or down a key, the absolute frequency of each note changes, but the fundamental relationships among the notes remain; this is a required feature of the model. An early deep learning approach generated music whose timing had only a restricted macrostructure: the model produced frequency-time subunits that were unconnected and did not lead to a sense of coherence. Attention must therefore be paid to the framework of the music in order to model it accurately.
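A small numeric check of this transposition property: shifting a melody up a whole step changes the absolute frequencies but preserves the frequency ratios between notes (the note frequencies used are standard equal-temperament values).

```python
# Transposition changes absolute note frequencies but preserves interval ratios.
import numpy as np

semitone = 2 ** (1 / 12)
melody_hz = np.array([261.63, 293.66, 329.63])   # C4, D4, E4 (equal temperament)
shifted_hz = melody_hz * semitone ** 2           # transposed up a whole step (2 semitones)

print(np.round(shifted_hz, 2))                   # absolute frequencies change
print(np.allclose(shifted_hz[1:] / shifted_hz[:-1],
                  melody_hz[1:] / melody_hz[:-1]))   # interval ratios preserved -> True
```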

Table 1 presents the frequency analysis of the online classical music and the accuracy for music1 when time, power, and frequency are considered as parameters. It indicates that the music1 dataset achieves 94.5% accuracy.

Figure 4 represents the frequency analysis of the music2 dataset. In this case, medium-level signal characteristics make up the temporal and spectral parameters of the audio signal; they are not perceptually motivated and characterize the distinctiveness of a signal in the temporal or frequency domain. For example, multiple observations can occur within a single time step, resulting in harmonic intervals, and such notations can also form patterns that span multiple time steps in a row. Additionally, musical observations are expressed as an octave, or as an interval between musical pitches. Pitch circularity arises from the assumption that notes one or more octaves apart are musically equivalent. Pitch is thus thought to have two aspects: height, which refers to the absolute frequency of a note, and chroma.

Table 2 presents the frequency analysis of the online classical music and the accuracy for music2 when time, power, and frequency are considered as parameters. It indicates that the music2 dataset achieves 97.81% accuracy.

Figure 5 represents the frequency analysis of music3. In this case, high-level signal characteristics make up the temporal and spectral parameters of the audio signal. Considering frequency and power, music1, music2, and music3 produced effective results, as the model helps transfer the data without disruption or noise. Furthermore, multiple observations can occur at the same time, which is inherent to music, and the network must report them again for the classification of unified music. The Q-learning algorithm is constant over time and is not tied to an individual note: each note is represented by a distinct output unit, and moving up a whole step creates a different result. Relative relationships, rather than absolute relationships, are what matter in music.

Table 3 presents the frequency analysis of the online classical music and the accuracy for music3 when time, power, and frequency are considered as parameters. It indicates that the music3 dataset achieves 99.04% accuracy.

Figure 6 represents the frequency analysis of music4. It indicates that different music transmitter signal features make up the temporal and spectral parameters of the audio signal. In addition to time, frequency, and power, other significant parameters such as the audio signal and the spectrum were analyzed. The spectrum is highly accurate and can generate the acoustic signature of the musical note.

Table 4 presents the frequency analysis of the online classical music and the accuracy for music4 when time and power are considered as parameters. It indicates that the music4 dataset achieves 91.36% accuracy.

Figure 7 gives the graphical representation of the music5 dataset. In this case, different music signal aspects make up the parameters of the audio wave signal; they are not perceptually motivated and characterize the distinctiveness of the signal in the temporal or frequency domain. The spectrum and the audio signal are measured with higher precision than the other parameters.

Table 5 presents the frequency analysis of the online classical music and the accuracy for music5 when time and power are considered as parameters. It indicates that the music5 dataset achieves 97.8% accuracy.


Table 6 presents the overall accuracy obtained using the Q-learning reinforcement algorithm. The results indicate that the algorithm helps transmit the signal with high frequency and power. Hence, the quality of the signal is increased by using the reinforcement learning algorithm, which helps teachers deliver classical music lessons without noise or disruption. The results reveal that the Q-learning reinforcement algorithm performs well and helps evaluate the accuracy and the acoustic signature of the musical note.

Table 7 clearly shows that the cost of finding the optimal solution with the Q-learning algorithm is far below that of a random algorithm. The range of the search is broad because of the global search in Q-learning. Candidate solutions are selected iteratively on the basis of fitness value and the position of the best solution is tracked, making it easier to find the optimum. Dynamic adjustment of parameters based on the results of the whole experiment effectively overcomes the issue of multiconstrained objectives, so that the technique supports automatic generation of test questions. Compared with a purely random alternative, designers can quickly identify the optimal solution with the Q-learning algorithm.

6. Conclusion

The development of artificial intelligence has paved the way for learning music online. In China, music education has enhanced the understanding of artificial intelligence in music and empowered its application. Artificial intelligence teaching techniques can integrate all types of educational information with students' learning and teachers' teaching, so it is important to know how to use such systems properly in teaching practice. Hence, this study proposed an artificial intelligence teaching model for online classical music education using wireless sensor networks. It helps teachers understand the progress made through online music education. The study also evaluated the accuracy of the music data using the reinforcement learning algorithm, and the results showed that the Q-learning algorithm attains high accuracy in evaluating the classical music data. For future research, it is recommended to implement a deep neural network algorithm for detecting noise and signal disruptions in online music education using WSNs.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.