Special Issue: Novel Approaches in Graph and Complexity-Based Data Analysis and Processing
[Retracted] Popular Song Composition Based on Deep Learning and Neural Network
To the general public, composition appears highly professional, with a high barrier to entry. Automatic composition can lower this barrier, allowing more ordinary people to participate in composing, especially popular music, so that music becomes more entertaining; its randomness can also inspire professionals. This article combines deep learning to extract note features from demonstration audio and builds a neural network model to compose popular music. The main work of this paper is as follows. First, we extract characteristic notes: drawing on the design process of mel-frequency cepstral coefficient (MFCC) extraction and the characteristics of piano music signals, we extract the note features of the demonstration music. Then, a neural network model is constructed: exploiting the memory of the recurrent neural network and its suitability for sequence data, piano notes are combined into sequences according to music-theory rules, and the model automatically learns these rules and then generates note sequences. Finally, the generated popular piano pieces are evaluated through online ratings by music lovers and offline ratings by professionals; the scoring indicators are weighted by the entropy weight method.
1. Introduction
The spiritual and cultural needs of the people have grown rapidly with the development of our country’s economy and society and the improvement of material living standards. These needs are multi-level, multi-form, and diversified; cultural consumption capacity has been greatly enhanced, and the level of appreciation has continuously improved. Music reflects real human emotions, helps people find the connotation of their thoughts, and brings the sincerest resonance and power. Thus, the art of music plays an important role in people’s spiritual world.
The piano has a magnificent sound, a wide range, rich variation, and strong expressiveness, and it occupies an important position in music creation and rehearsal thanks to its rich capacity for expressing music theory. At present, composing music by hand requires professional knowledge of basic music theory, musical style, harmony, and so on, in order to produce scores marked with basics such as tempo and chords. Composers seldom work out of thin air: most like to compose while playing guitar or piano. The finished product is usually sheet music, marked with basics such as tempo and chords as well as the arrangement and some basic stylistic ideas. For ordinary users, the professionalism and threshold of composition are too high.
After more than 60 years of evolution, driven by new theories and technologies such as the mobile Internet, big data, supercomputing, and brain science, together with social development needs, artificial intelligence has accelerated its development, showing new features such as deep learning and cross-domain integration [2, 3]. In recent years, artificial intelligence has developed rapidly in the field of composition, breaking the constraint that composing requires mastery of profound music theory. Artificial intelligence can create fresh music and improve both the efficiency of music creation and the expressiveness of the result. Technology giants at home and abroad have commercialized practices in this field, and some of their works can pass a Turing-style test, reaching a level that ordinary listeners cannot distinguish from human work.
The significance of this article’s topic selection is as follows:
(i) First, to help music learners better understand the field of popular music through the analysis of popular-music piano creation.
(ii) Second, to combine popular music with deep learning and establish an artificial-neural-network method for automatic piano composition. This can give song creators inspiration, allow the general public to participate in creating popular music, enlarge the repertoire of popular piano pieces, and bring more possibilities to music creation.
(iii) Third, to encourage qualified and capable piano educators to create and experiment more. To this end, they should master the piano-creation techniques of popular music and integrate popular-music piano creation into their own teaching, enhancing students’ interest in piano learning and musical creativity as well as their overall musical quality, cooperation, and communication skills.
The following section highlights some of the related works. Section 3 describes the methods of the proposed study. In Section 4, the experimental results are analyzed, and Section 5 concludes the research.
2. Related Work
At present, searching CNKI for the keyword “popular music” returns more than 1,000 related documents, and the Wanfang database collects more than 400 relevant art articles. The authors believe that “popular” here is not simply defined by the broad masses; its premise comes from the division of regions. Whether American jazz or Chinese pop, whether Gershwin’s “Rhapsody in Blue” or Li J H’s “Drizzle,” all belong to the category of popular music, which includes pop, light music, jazz, rock, and so on. Reviewing the related literature, the authors found that the field of popular music has a unique charm: music has no borders, but popular music has its own borders. According to the literature, the development of popular music in modern China began with the “school music and songs” promoted in schools. In the 1920s and 1930s, popular anti-Japanese mass songs that inspired the people’s fighting spirit played a huge role in the war. At the same time, the urban music represented by Li J H also developed greatly, promoting the field of modern popular music in our country. Although the notation technology and media of that era were relatively backward and most works were enjoyed over the radio, as a symbolic product of its era their very existence was communicative. Nowadays, popular music develops very rapidly, and its forms of dissemination are diversified: music from all over the world can be exchanged through television, the Internet, and other means and is spread widely. While the classics are preserved, the repertoire is updated very quickly. From the perspective of current music culture, popular music repertoires have also come to the fore in various competitions and are loved by the masses.
As early as the 1950s, there were attempts to combine computers with traditional music. In 1956, Hiller and Isaacson created the world’s first completely computer-generated string quartet. The famous music intelligence system EMI was developed by Professor David Cope; it uses a pattern-matching algorithm to extract sequence features from musical works and assigns weights according to the frequency of each feature, so that the generated music resembles the style of the original works. A Markov chain has the Markov property in probability theory and mathematical statistics, and many scholars have used Markov chains to study algorithmic composition; Visell used hidden Markov chains to implement a hand-tunable real-time music system. The genetic algorithm is a computational model of biological evolution that can also be borrowed for artificial music creation, and many studies have used genetic algorithms to compose music. The GenJam system constructed by Biles selects a chord and, based on it, creates a jazz solo melody; the system supports interactive improvisation.
The artificial neural network is a computational model that imitates neurology and biology. It is widely used in music creation systems and has achieved some success. Eck and Schmidhuber used LSTM for an initial exploration of music creation: compared with earlier RNN-based creation, which could capture only the local structure of music, LSTM can capture and reproduce long-term musical structure, and the generated music follows chord rules. Franklin used LSTM to learn musical knowledge and, taking the coordination of musical structure into account, built a model that can improvise a piece of music. Smith and Garnett used a deep-neural-network reinforcement learning model to generate music and improve the quality of the works through creativity.
3. Methods of the Proposed Study
In this section, the methods of the proposed study are explained: audio file feature extraction and the construction of the automatic composition neural network model.
3.1. Audio File Feature Extraction
The steps of audio file feature extraction are explained separately below; the classification is based on the extracted features.
3.1.1. Mel-Frequency Cepstral Coefficient (MFCC)
Music signals and speech signals are similar, and in speech recognition and speech reconstruction the mel-frequency cepstral coefficient (MFCC) is commonly used to extract features of the speech signal. MFCC takes the auditory characteristics of the human ear into account but not the characteristics of music and music theory. MFCC extraction uses the short-time Fourier transform, the most commonly used time-frequency analysis method, which can process nonstationary signals whose frequency changes over time. Specifically, a window function divides the original signal into many short time periods, and a Fourier transform is performed within each period. The short-time Fourier transform (STFT) was proposed by Gabor in 1946. For a continuous signal x(t), the STFT is defined as follows:

STFT(t, ω) = ∫ x(τ) w(τ − t) e^(−jωτ) dτ,

where w(t) is the window function. The choice of window length trades time resolution against frequency resolution: the longer the window, the worse the time resolution and the higher the frequency resolution after the Fourier transform; conversely, the shorter the window, the better the time resolution and the worse the frequency resolution.
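As an illustration of the windowing trade-off just described, the following is a minimal NumPy sketch of the STFT. The paper does not specify its frame length, hop size, or window; a 1024-sample Hann window with 50% overlap is assumed here:

```python
import numpy as np

def stft(x, frame_len=1024, hop=512):
    """Naive short-time Fourier transform: window each frame, then FFT."""
    window = np.hanning(frame_len)          # the window function w(t)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)      # one spectrum per frame

sr = 16000
t = np.arange(sr) / sr
spec = stft(np.sin(2 * np.pi * 440 * t))    # 440 Hz test tone
peak_bin = np.abs(spec[0]).argmax()
print(peak_bin * sr / 1024)                 # 437.5 — the tone lands in bin 28
```

Halving `frame_len` would double the time resolution but widen each frequency bin from about 15.6 Hz to 31.25 Hz, which is exactly the trade-off discussed above.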
3.1.2. Cepstral Coefficient Extraction
Figure 1 shows a flowchart of the entire MFCC extraction; each step is explained below. In the first step, the input signal is divided into frames, each frame is multiplied by a window function, and the Fourier transform of the windowed frame x_i(n) is computed:

X_i(k) = Σ_{n=0}^{N−1} x_i(n) e^(−j2πnk/N), k = 0, 1, …, N − 1,

where N is the frame length.
A series of transformations is then applied to extract the MFCCs: the power spectrum of each frame is passed through a bank of mel-scale triangular filters, the logarithm of each filter’s output energy E_k is taken, and a discrete cosine transform yields the coefficients:

MFCC(m) = Σ_{k=1}^{K} log(E_k) cos[πm(k − 0.5)/K], m = 1, 2, …, L,

where K is the number of mel filters and L is the number of coefficients retained.
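The filterbank–log–DCT chain above can be sketched as follows. The filter count (26), number of retained coefficients (13), and FFT size are conventional choices, not values taken from the paper:

```python
import numpy as np
from scipy.fft import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595 * np.log10(1 + f / 700)       # Hz -> mel
    inv_mel = lambda m: 700 * (10 ** (m / 2595) - 1)   # mel -> Hz
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(frame_spectra, sr, n_fft=1024, n_filters=26, n_coeffs=13):
    """Power spectrum -> mel energies -> log -> DCT, keeping low coefficients."""
    power = np.abs(frame_spectra) ** 2 / n_fft
    mel_energy = power @ mel_filterbank(n_filters, n_fft, sr).T
    return dct(np.log(mel_energy + 1e-10), type=2, axis=1)[:, :n_coeffs]
```

Each row of the input is one frame’s spectrum (e.g., from an STFT), and each row of the output is that frame’s 13-dimensional MFCC vector.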
3.1.3. Note Feature Extraction
This paper extracts piano note features by referring to the MFCC extraction process and uses a filter-based algorithm to extract the fundamental frequency. The filter design follows twelve-tone equal temperament, because each piano key is tuned according to twelve-tone equal temperament. The extraction flowchart is shown in Figure 2.
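Under twelve-tone equal temperament, the fundamental frequencies of adjacent piano keys differ by a factor of 2^(1/12), so a detected fundamental can be snapped to the nearest key. The sketch below assumes standard tuning with A4 = 440 Hz (the paper does not state its reference pitch):

```python
import math

def freq_to_key(f, a4=440.0):
    """Map a fundamental frequency to the nearest piano key number
    (A0 = 1, A4 = 49, C8 = 88) under twelve-tone equal temperament."""
    semitones = round(12 * math.log2(f / a4))  # signed distance from A4
    return 49 + semitones

NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def key_name(key):
    # Key 1 is A0; the octave number increments at each C.
    return NAMES[(key - 1) % 12] + str((key + 8) // 12)

print(freq_to_key(261.63))  # 40 -> middle C
print(key_name(40))         # C4
```

Because any fundamental within half a semitone of a key maps to that key, this snapping step makes the note feature robust to small pitch-estimation errors.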
3.2. Automatic Composition Neural Network Model Construction
The section is subdivided into two parts, i.e., the recurrent neural network structure and automatic composition neural network model.
3.2.1. Recurrent Neural Network Structure
Compared with a general feedforward neural network, the recurrent neural network adds a feedback link from the hidden layer to itself. At every moment t the network receives an input x_t, computes a new state h_t from the hidden-layer state h_{t−1} at moment t − 1, and finally produces an output y_t. The calculation formulas of the hidden and output layers are as follows:

h_t = f(U x_t + W h_{t−1} + b),
y_t = g(V h_t + c),

where U, W, and V are the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices and f and g are activation functions.
It can be seen from these equations that the output of the network depends on all previous inputs, which shows that the recurrent neural network has memory. Therefore, the recurrent neural network is chosen as the basic unit of the piano automatic composition model.
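The recurrence described above can be sketched in a few lines of NumPy; the layer sizes and random weights below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 8              # toy sizes (assumed)
U = rng.normal(0, 0.1, (n_hid, n_in))      # input -> hidden
W = rng.normal(0, 0.1, (n_hid, n_hid))     # hidden -> hidden (the feedback link)
V = rng.normal(0, 0.1, (n_out, n_hid))     # hidden -> output

def rnn_step(x_t, h_prev):
    """One recurrence: the new state mixes the current input with the
    previous hidden state, which is what gives the network memory."""
    h_t = np.tanh(U @ x_t + W @ h_prev)
    y_t = V @ h_t
    return h_t, y_t

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):       # a length-5 input sequence
    h, y = rnn_step(x, h)
print(y.shape)                             # (8,)
```

Because `h` is threaded through every step, the final output depends on the whole input sequence, not just the last input.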
3.2.2. Automatic Composition Neural Network Model
A piano piece is a sequence of notes combined according to certain music-theory rules. A neural network can learn these hidden rules and then predict and generate note sequences, realizing automatic composition. As analyzed above, the recurrent neural network has memory and can in theory establish dependencies between states separated by long intervals; in practice, however, gradient explosion or vanishing makes it difficult for a simple recurrent network to model such long dependencies. The piano automatic composition process is divided into two steps. The first step uses a note-sequence dataset to train the neural network model; after multiple rounds of training, a good note prediction network model is obtained. The second step uses the note prediction network model to generate a note sequence, matches it with a piano sound source, and obtains a piano piece in audio format. The training process is not described in detail here. Once training is complete, the note prediction network model can generate a specified number of notes: the network output is passed through the softmax function, and the note category with the largest probability is selected as the actual predicted note. The note prediction network model generates the note set X as follows:

X = {x_1, x_2, …, x_m}, x_i = argmax_j softmax(z_i)_j,

where z_i is the network output at prediction step i and m is the number of generated notes.
The entire composition process first specifies the input note sequence for the input layer of the note prediction network model. The model generates one note at a time; the notes predicted over m steps are arranged in order to obtain a generated note sequence, the piano is assigned as the instrument for the sequence, the piano sound source is dubbed in, and finally a piano piece in audio format is obtained. The composition process is shown in Figure 3.
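The generation loop described above can be sketched as follows. Since the trained note prediction network is not available here, a stand-in `toy_predict` function (an assumption, not the paper’s model) supplies the logits; only the greedy softmax–argmax loop reflects the procedure in the text:

```python
import numpy as np

N_NOTES = 88                                # piano note vocabulary (assumed)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_predict(seq):
    """Stand-in for the trained note prediction network: returns logits
    over the 88 note classes given the note sequence so far."""
    rng = np.random.default_rng(sum(seq))   # deterministic for the demo
    return rng.normal(size=N_NOTES)

def generate(seed, m, predict=toy_predict):
    """Greedy generation: each step picks the note with the largest softmax
    probability and appends it to the sequence, as described above."""
    seq = list(seed)
    for _ in range(m):
        probs = softmax(predict(seq))
        seq.append(int(probs.argmax()))
    return seq[len(seed):]

notes = generate(seed=[60, 62, 64], m=400)  # 400 predicted notes
print(len(notes))                           # 400
```

Replacing the greedy `argmax` with sampling from `probs` would trade determinism for more varied output, which is one way to address the monotony the professional evaluators noted.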
4. Experiments and Results
All data in this study are from the ADNI1 dataset. This section describes in detail the construction of the automatic composition neural network model, the training rules, and the process of automatically generating piano music. Combining the characteristics of the demonstration audio set, and after comparing the results of many experiments, specific parameter values were chosen for the final automatic composition neural network model. In the generation process, the note prediction network model produces one note at a time; the 400 notes generated by 400 predictions are arranged in order to obtain a note sequence, the piano is assigned as the instrument together with the piano sound source, and a piano piece in audio format is finally obtained.
4.1. Results of Online Audition Evaluation
The online audition scoring platform adopts a development method that separates the front end and back end; the front end is developed with React and the back end with Java. After the platform was completed, 20 test piano pieces were placed on it: five from the audio collection of the demonstration composer, five from a music website, and ten generated by the automatic composition neural network model. Each piece was cut to 30 seconds to avoid auditory fatigue. We invited testers to listen to the 20 piano pieces online and score them according to their subjective listening impressions. The scoring standards are shown in Table 1.
A total of 20 music lovers participated in this test; the scores of each piano piece were counted, the average score was calculated, and the final ranking is shown in Table 2.
Among the final evaluation results, the automatically composed piece Demo_17 entered the top three, and Demo_11 and Demo_06 also ranked ahead of two human-composed pieces. The lowest-ranked pieces are likewise automatically generated, which shows that the quality of the automatically generated piano music varies and the automatic composition network model has room for further optimization. The comparison of scores above 3 is shown in Figure 4, and the comparison of scores above 4 is shown in Figure 5.
4.2. Results of Offline Audition Evaluation
The offline performance evaluation invited professionals with rich piano performance experience. The professionals designated five indicators for the evaluation, namely, melody, texture, harmony, tension, and aesthetics, each with a full score of 100 points. Ten piano pieces were then performed, and the scores of the five indicators were recorded for each piece. The final scores of each piano piece are shown in Table 3. The comparison of scores above 80 is shown in Figure 6, and the comparison of scores above 60 is shown in Figure 7.
Judging from the final scores, five of the automatically generated piano pieces entered the top 10, indicating that the automatically generated piano music in this article can reach the level of ordinary manual composition; compared with the works of the famous composer Schumann, however, the score gap is large. The two lowest scores also belong to automatically generated pieces, which again shows that the quality of the automatically generated music varies and the automatic composition network model needs further optimization. The professionals also commented on the spot that the automatically composed piano music is monotonous, lacks complicated variation, and lacks “rhythm.”
From the results, automatically generated piano music accounts for only 5% of the scores of 80 points or more, while the other pieces account for 25%; among scores of 60 points or more, automatically generated music accounts for 35% and the other pieces for 50%.
5. Conclusion
This study first introduced the experimental software and hardware environment, the source of the demonstration audio, and the size of the training set after note feature extraction, and then described the construction of the automatic composition neural network model and the specific parameters used in training. Since evaluating the quality of automatically created piano music is difficult, this article, referring to previous studies, adopted a subjective evaluation in the spirit of a Turing test: music lovers auditioned and rated the pieces online, and professionals evaluated live performances offline. The online scoring platform separates the front end (React) and back end (Java). We invited 20 music lovers to listen to 20 piano pieces online and to rate them according to their listening impressions. The results show that the testers could not reliably distinguish human-composed music from the piano music automatically generated in this article, and that the generated music meets the testers’ appreciation requirements. For the offline evaluation, professionals specified five indicators, performed the pieces live, and comprehensively evaluated each piece, with the indicator weights calculated by the entropy weight method. The results show that the piano music automatically generated in this paper can reach the level of ordinary manual composition, but the gap with the works of famous composers, whose music is flexible and changeable, remains large: the popular songs generated by the neural network model cannot yet compare with the classics.
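The entropy weight method used to weight the offline indicators can be sketched as follows; the score matrix below is hypothetical, not the paper’s Table 3 data:

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weight method: indicators whose scores vary more across
    items carry more information and therefore receive larger weights."""
    n = scores.shape[0]
    p = scores / scores.sum(axis=0)                       # column proportions
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)  # per-indicator entropy
    d = 1 - e                                             # degree of divergence
    return d / d.sum()                                    # normalized weights

# Hypothetical 4 pieces x 5 indicators (melody, texture, harmony, tension, aesthetics)
scores = np.array([[85, 70, 90, 60, 75],
                   [80, 72, 88, 65, 70],
                   [60, 71, 50, 90, 72],
                   [75, 69, 70, 70, 74]], dtype=float)
w = entropy_weights(scores)
composite = scores @ w          # weighted composite score per piece
print(np.round(w, 3))           # texture (nearly constant) gets almost no weight
```

Because the texture column barely varies, its entropy is near the maximum and its weight near zero, while the widely spread harmony column dominates; this is the sense in which the method lets the data itself set the indicator weights.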
At the same time, we also hope that music creators can broaden their thinking when creating and arranging pop music, whether starting from their own original music or from simple pop songs; with steady effort, progress will follow.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
Z. J. Wu, The Art of Arranging, Nanhai Publishing House, Beijing, China, 2018.
Y. Liu, Y. T. Shlomo, and H. Wang, “Dynamics of integration management in Chinese cross-border M&A: a longitudinal view based on the learning perspective,” in Proceedings of the 15th ITCONC, pp. 2260–2268, Beijing, China, 2012.
I Can Go the Distance from Hercules (Piano Arrangement), Virtual Sheet Music, Inc., Laguna Niguel, CA, USA, 2016.
Y. Feng, Computer Music Technology, Science Press, Beijing, China, 2018.
D. J. O. Williamson, The Philosophy of Diagnosis in Relation to Artificial Intelligence, King’s College London (University of London), London, UK, 1998.
J. A. Biles, “GenJam: a genetic algorithm for generating jazz solos,” in Proceedings of the International Computer Music Conference, pp. 131–137, 1994.
D. Eck and J. Schmidhuber, “A first look at music composition using LSTM recurrent neural networks,” Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, vol. 103, 2002.
B. Logan, “Mel frequency cepstral coefficients for music modeling,” in Proceedings of the 1st International Symposium on Music Information Retrieval, Plymouth, MA, USA, October 2000.
T. H. Piao, Digital Signal Processing of Music, Shanghai Music Publishing House, Shanghai, China, 2018.
L. M. Surhone, M. T. Tennoe, and S. F. Henssonow, Window Function, Sage, Newcastle upon Tyne, UK, 2013.
G. Bouchard, “Clustering and classification employing softmax function including efficient bounds,” Soft Computing, vol. 15, no. 8, pp. 289–299, 2011.
A. Jeffrey and J. Rathke, “Java Jr: fully abstract trace semantics for a core Java language,” in Programming Languages and Systems (ESOP 2005), Springer, Berlin, Heidelberg, 2005.
D. E. Booth, “The cross-entropy method,” Technometrics, vol. 50, no. 1, p. 92, 2004.