The rapid development of artificial intelligence (AI) has connected it to many fields, and music plays an important role in daily life. One application of AI in the arts is the use of music generation algorithms to give machines the ability to generate melody. This ability can assist musicians during composition, helping music creators find inspiration in the creative process. Researchers have done a great deal of work on the automatic generation of music. The piano is widely used in the field of automatic accompaniment and is highly versatile. The main purpose of this paper is to design a piano-based automatic accompaniment system that treats the pairing of melody and harmony as a machine-learning task. By training on a selected series of samples, a database of sound pattern metastructures is constructed: the system collects original piano accompaniment patterns, converts them into sound pattern metastructures, and stores them in the database. Then, two Hidden Markov Model (HMM) systems are established to simulate the composer's way of thinking during the piano accompaniment process, forming a melody style related to a given collection of samples. Finally, the Viterbi algorithm is used to select appropriate piano accompaniment metastructures from the database to generate the piano accompaniment for a melody section. The experimental results show that, in terms of accompaniment generation, although repeated generations differ slightly, the overall difference is small, indicating that the effect produced by the proposed method is relatively stable.

1. Introduction

With the progress of the times and the continuous development of computer technology, composers began to seek new and effective methods. Along this road of exploration, music theorists have tried patterned and algorithmic approaches to composition. As early as the 15th century, patterned creation methods were widely used, but as musical techniques improved and rules grew stricter, relying solely on patterns and algorithms made composition time-consuming, labor-intensive, and inaccurate. In the early 1950s, algorithmic composers used a random process to generate pieces of music as compositional material, the first attempt to use algorithms to compose music. Composers then began to use the computer as a new tool for composition. Computer automatic composition is a product of the development of the times: using the computer as a tool and combining it with music theory, it automatically or semi-automatically generates songs through algorithms and programs. It imitates and reprocesses existing musical materials, reducing human intervention.

In the past 50 years, computer automatic composition has shown a diversified development trend. The main automatic composition methods include artificial neural networks, music grammar rules, and genetic algorithms. Each has its own advantages and disadvantages and can meet the needs of automatic composition to a certain extent, but each also faces problems. For example, melodies generated by recurrent neural networks lack the global coherence of music, and genetic algorithms applied to melody and harmony encounter a harmonic search space riddled with irrelevant local optima. At present, the main problem faced by computer automatic composition is that it cannot keep up with the constant variation of musical materials. From one perspective, music is a set of permutations and combinations of musical elements, and computers are especially good at mathematical calculation. But music is an art form, and computers only follow established programs and rules, without human emotion and thought. Therefore, computer automatic composition requires not only more diversified programs but also further support from artificial intelligence technology.

With the development of modern generative technologies, researchers have done a great deal of work on the automatic generation of music. However, because of the special requirements of melody and arrangement in piano accompaniment, most of these methods have limitations when applied to multi-track music generation. Key factors related to music quality, such as chord progression, rhythm pattern, and musical style, are not well addressed. To address these issues and ensure the harmony of multi-track music, this paper proposes the concept of a piano accompaniment metastructure, which imitates compositional technique at the level of musical fragments. Using such sound pattern metastructures to accompany a melody is more flexible, more changeable, and more vital.

The automatic generation of melody is one of the research hotspots for artificial intelligence (AI) in the realm of art music, and many academics have begun studies in related domains. One question concerns the piano itself and its connection to the textual and vocal parts. Vilar and Valles Grau examine these aspects to provide evidence for the changing role of the piano in the piano-voice correspondence and its impact on the final outcome of the composition. Depending on the different aspects involved in the musical event, such as rhythm, harmony, or texture, results and conclusions from the evolution presented in their two articles are given at the end [1]. Shabtai et al. recorded, modeled, and analyzed a database of sound radiation patterns from 41 modern orchestral instruments. They used a centering algorithm to analyze the complexity of the acoustic radiation pattern in terms of the number of excitation points. The database was generated by recording each instrument over its entire semitone range in an anechoic chamber using a surrounding spherical microphone array. It can be used both to study the radiation of the instruments themselves and to implement radiation patterns in room acoustic simulations and auralization, yielding a more realistic spatial excitation of the room [2]. Li et al. presented a robotic automatic plucking system for guitars designed to generate music without machine noise. Their soft robotics approach led to a new silent actuator that uses a soft, elastic cone as a cushion to prevent impact noise, and they proposed an elastic cone design method based on nonlinear finite element analysis. The quietness of the silent actuator was confirmed by a noise test comparing it with a conventional actuator.
As an example of a performance test, the accompaniment of the song "Five Hundred Miles" was played in front of an audience to solicit their opinions [3]. An educational approach attentive to the emotional and psychological development of instrument students has important implications for the educational process. In this context, Stün and Ozer employ an experimental approach using piano accompaniment training exercises, personal instrument training habits, and instrument performance self-efficacy questionnaires. After the experiment, they used a t-test to obtain scores for each group. The results showed that the application of piano accompaniment in flute education had a positive effect on all subdimensions of both questionnaires [4]. Lee et al. proposed an automatic melody extraction algorithm using deep learning. In this algorithm, feature images generated from frequency band energy are extracted from polyphonic audio files, and a Convolutional Neural Network (CNN) is applied to the feature images; a CNN-based classifier is trained to determine the pitch values of short-frame audio signals. Aiming for a novel structure for melody extraction, the proposed algorithm is simple in structure and does not use additional signal processing techniques, relying only on the CNN to find the melody in polyphonic audio. Compared with state-of-the-art algorithms it does not give the best results, but it obtains comparable ones, and the authors believe it can be improved with appropriate training data [5]. Zhu et al. proposed a continuous framework for generating and arranging the melody of multiple accompaniment tracks played by different instruments. First, they developed a novel chord-based rhythm-melody cross-generation model to generate melodies with chord progressions. Then, they proposed a multi-task learning-based multi-instrument joint arrangement model for arranging multi-track songs.
To control the musical style of the arrangement, they also developed a multi-style, multi-instrument co-arranging model that learns musical style through adversarial learning. It therefore not only maintains the harmony of the generated music but also controls the musical style for better use. Extensive experiments on real-world datasets demonstrate the superiority and effectiveness of the proposed model [6]. Nedjah et al. proposed parallel and reconfigurable hardware to generate harmonic music. The generated music is composed of melodic intervals determined by cellular automata, combined according to the standard Musical Instrument Digital Interface (MIDI) protocol. The hardware architecture is implemented on a field programmable gate array (FPGA), providing an efficient, autonomous tool for learning and research in the field of stochastic music. To verify the effectiveness and efficiency of the design, they present results on hardware requirements, including area, operating frequency, and power consumption, as well as the characteristics of the generated melodies; the results are very promising [7]. The research results of the scholars listed above offer inspiration and support for this paper, but most of that work studies melody systematically across all instruments and is not targeted. In music analysis, the potential of musical processes and motivations outside the model is overlooked, and constructing fitness functions to evaluate complicated musical events involving varied creative approaches is difficult.

3. Piano Automatic Accompaniment Method Based on Sound Pattern Database

3.1. Basic Overview of Melody

Melody, rhythm, and harmony are the three major elements of a complete composition [8]. The melody represents the overall character of a song and shows its style; the rhythm mainly refers to the pace of the melody, fast or slow, and reveals the song's emotion; harmony combines two or more different tones according to certain rules and enriches the form of the song. In Western music, note names are represented by the letters C, D, E, F, G, A, and B, with the sharp (#) and flat (b) signs denoting raised and lowered semitones; sharp and flat notes are enharmonically equivalent under certain conditions. Solmization syllables are DO, RE, MI, FA, SOL, LA, and SI. The position of the note names on the piano keyboard is shown in Figure 1.

A melody is composed of tones with various properties arranged in a certain order; its basic components are pitch, duration, and tempo [9]. Pitch is a subjective quantity describing music: according to frequency, tones can be classified as high or low, related to one another by octaves and intervals, and the changes between pitches directly affect the movement and direction of a song [10]. Duration is the length of time a note sounds; it quantizes the note, long or short, to meet the time requirements of the beat. A measure groups notes together, and its total duration is the sum of the durations of the notes it contains [11]. Beats are divided into strong and weak, and different combinations of strong and weak beats form different time signatures. A time signature is composed of the number of beats per measure and the note value of each beat; for example, 2/4 means two beats per measure, with a quarter note receiving one beat. Rhythm is the movement of the tones in a song, fast or slow, long or short, strong or weak; it is an important component of a song and the core of its form, and its quality directly affects the quality of the song [12]. The main rhythmic characters of songs include lively and brisk, impassioned, soothing and distant, and sad and dull; different rhythmic features represent different keynotes of a song and reveal its skeleton [13]. Harmony can be regarded as independent of the melody and self-contained, or as forming a community with it [14]. What people usually call harmony today refers to the harmony in the accompaniment rather than in the melody. Harmony is composed of chords and harmonic progressions: chords are the core of the accompaniment, and the harmonic progression is its means of expression [15].
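The note-name conventions above can be made concrete with a small script. The following Python sketch (my own illustration; the paper contains no code) converts Western note names with sharps and flats into MIDI note numbers and equal-temperament frequencies, and demonstrates the enharmonic equivalence of sharps and flats (e.g. C#4 and Db4):

```python
# Illustrative sketch: mapping Western note names (C, D, E, F, G, A, B)
# with sharps (#) and flats (b) onto MIDI note numbers. Enharmonic
# equivalence (C# == Db) falls out of the semitone arithmetic.

SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def midi_number(name: str) -> int:
    """Convert a name like 'C4', 'F#3', or 'Bb5' to a MIDI note number."""
    letter = name[0]
    accidental = name[1] if name[1] in "#b" else ""
    octave = int(name[len(letter) + len(accidental):])
    offset = {"": 0, "#": 1, "b": -1}[accidental]
    # MIDI convention: C4 (middle C) = 60, so octave n starts at 12*(n+1)
    return 12 * (octave + 1) + SEMITONE[letter] + offset

def frequency(midi: int) -> float:
    """Equal-temperament frequency in Hz (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi - 69) / 12)
```

Here `midi_number("C#4")` and `midi_number("Db4")` both yield 61, illustrating the conditional equivalence of sharps and flats mentioned above.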

3.2. Theory of Accompaniment
3.2.1. The Basic Concept of Accompaniment

Music is an important part of art, embodying naturalness, creativity, beauty, and emotion. Although a melody can exist alone, it will appear monotonous. Accompaniment can make the melody express emotions more completely, and make the theme of the song more vivid, prominent, and infectious. It is an important supplement to the melody and an important part of the performance of the song.

Accompaniment starts from a given melody, analyzes its harmonic trend and chord characteristics, and uses harmony to generate accompaniment sounds suitable for the melody [16]. Harmony is the combination of chords and harmonic progressions: multiple different tones sounding simultaneously according to certain rules. It is the organizational form of multi-voice music and creates phrases, segments, and a sense of termination in music [17]. Chords are the basis of harmony: three or more different tones combined according to certain internal relationships and rules [18].

3.2.2. Piano Accompaniment

The piano has a wide range, a melodious tone, smooth transitions, and rich expressiveness, and is known as the "king of musical instruments." Using the piano as accompaniment not only meets people's auditory needs but also facilitates the development and spread of songs, so piano accompaniment is an important part of accompaniment [19].

A complete piano accompaniment system should not only use and properly configure harmony but also exploit piano technique [20]. Constrained by their regional and national characteristics, style songs have not been widely sung, and the development of their piano accompaniment has not been smooth because of these differences. The good news is that more and more composers recognize the necessity and utility of piano accompaniment. Composers therefore use the rich expressive power of the piano to handle the harmony in a variety of ways, making the accompaniment better match the characteristics of the song [21].

Automatic piano accompaniment is the result of combining piano accompaniment with computer automatic accompaniment technology. Many researchers have studied automatic piano accompaniment, but automation must also take piano technique and accompaniment patterns into account, which is the weak part of current research [22]. Although research on automatic piano accompaniment systems has achieved certain results, its theoretical basis is not yet complete and needs further study. This research focuses on the melody of single-melody songs: starting from the analysis of rhythm characteristics, it constructs the sound pattern database, obtains accompaniment chords through the algorithm, constructs accompaniment patterns, and generates the accompaniment. This research provides a theoretical basis for enriching and perfecting automatic accompaniment, and also lowers the threshold of music accompaniment, which has important practical significance [23].

3.3. Hidden Markov Model (HMM)
3.3.1. Overview

The Hidden Markov Model is an extension of the Markov chain [24]. It is a parametric probabilistic model used to describe the statistical properties of stochastic processes [25]. It describes a process in which a hidden Markov chain randomly generates an unobservable sequence of states, and each state then generates an observation, producing a random sequence of observations.

3.3.2. Basic Algorithm

(1) Forward Algorithm. The basic steps of the forward algorithm are as follows:

First we define the forward variable $\alpha_i(j) = P(o_1 o_2 \cdots o_i, q_i = s_j \mid \lambda)$, the probability of having output the partial observation sequence $o_1 \cdots o_i$ and being in state $s_j$ at time $i$.

If the forward variables can be computed efficiently, the value of the conditional probability $P(O \mid \lambda)$ can be obtained efficiently.

First, the first step is initialization: $\alpha_1(j) = \pi_j\, b_j(o_1)$, $1 \le j \le M$.

Then, the second step is the loop calculation: $\alpha_{i+1}(j) = \big[\sum_{k=1}^{M} \alpha_i(k)\, a_{kj}\big]\, b_j(o_{i+1})$, $1 \le i \le x-1$.

Finally, the output: $P(O \mid \lambda) = \sum_{j=1}^{M} \alpha_x(j)$.

In the forward algorithm, computing $\alpha_i(j)$ must consider the transitions from all $M$ states at time $i-1$, so each forward variable costs $O(M)$. At each of the $x$ time steps, $M$ forward variables need to be calculated, so one step costs $O(M^2)$, and the total complexity of the forward algorithm is $O(M^2 x)$.
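As a sketch of the steps above, the forward recursion can be written in plain Python (the function and variable names `forward`, `pi`, `A`, `B`, `obs` are my own; the paper provides no code). `A[k][j]` is the transition probability from state `k` to state `j`, and `B[j][o]` is the probability that state `j` emits observation `o`:

```python
def forward(pi, A, B, obs):
    """Probability P(O | lambda) of an observation sequence under an HMM."""
    M = len(pi)
    # Initialization: alpha_1(j) = pi_j * b_j(o_1)
    alpha = [pi[j] * B[j][obs[0]] for j in range(M)]
    # Loop: alpha_{i+1}(j) = (sum_k alpha_i(k) * a_{kj}) * b_j(o_{i+1})
    for o in obs[1:]:
        alpha = [sum(alpha[k] * A[k][j] for k in range(M)) * B[j][o]
                 for j in range(M)]
    # Output: P(O | lambda) = sum_j alpha_x(j)
    return sum(alpha)
```

With a toy two-state model, `forward` sums out the hidden states in O(M^2 x) time rather than enumerating all M^x paths.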

(2) Backward Algorithm. The backward variable is defined, given the model $\lambda$ and the assumption that the state at time $i$ is $s_j$, as the probability of outputting the remaining observation sequence: $\beta_i(j) = P(o_{i+1} o_{i+2} \cdots o_x \mid q_i = s_j, \lambda)$.

The first step of the backward algorithm is initialization: $\beta_x(j) = 1$, $1 \le j \le M$.

The second step is the loop calculation: $\beta_i(j) = \sum_{k=1}^{M} a_{jk}\, b_k(o_{i+1})\, \beta_{i+1}(k)$, for $i = x-1, \ldots, 1$.

Finally, the output: $P(O \mid \lambda) = \sum_{j=1}^{M} \pi_j\, b_j(o_1)\, \beta_1(j)$.

The complexity of the backward algorithm is likewise $O(M^2 x)$.
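The backward recursion can be sketched the same way (again, the names are my own and the model matrices follow the same layout as above, with `A[j][k]` the transition probability from state `j` to `k`):

```python
def backward(pi, A, B, obs):
    """Probability P(O | lambda), computed with backward variables beta."""
    M = len(pi)
    x = len(obs)
    # Initialization: beta_x(j) = 1
    beta = [1.0] * M
    # Loop backwards: beta_i(j) = sum_k a_{jk} * b_k(o_{i+1}) * beta_{i+1}(k)
    for i in range(x - 2, -1, -1):
        beta = [sum(A[j][k] * B[k][obs[i + 1]] * beta[k] for k in range(M))
                for j in range(M)]
    # Output: P(O | lambda) = sum_j pi_j * b_j(o_1) * beta_1(j)
    return sum(pi[j] * B[j][obs[0]] * beta[j] for j in range(M))
```

For any model, `backward` returns the same probability as the forward algorithm, which makes a useful consistency check when implementing both.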

(3) Viterbi Algorithm. The Viterbi algorithm is a dynamic programming algorithm that finds, among all state sequences, the one most likely to have produced the observation sequence.

The Viterbi algorithm first defines the variable $\delta_i(j) = \max_{q_1 \cdots q_{i-1}} P(q_1 \cdots q_{i-1}, q_i = s_j, o_1 \cdots o_i \mid \lambda)$, the maximum probability of outputting the observations $o_1 \cdots o_i$ over all paths of the HMM that reach state $s_j$ at time $i$.

The second task is to track the best path leading to each $\delta_i(j)$ through the backpointer $\psi_i(j)$.

The first step of the Viterbi algorithm is initialization: $\psi_1(j) = 0$.

The maximum probability variable at this time can be expressed as $\delta_1(j) = \pi_j\, b_j(o_1)$.

The second step is to calculate by recursion: $\delta_{i+1}(j) = \max_{1 \le k \le M} [\delta_i(k)\, a_{kj}]\, b_j(o_{i+1})$, with $\psi_{i+1}(j) = \arg\max_{1 \le k \le M} \delta_i(k)\, a_{kj}$.

Then, termination: $P^* = \max_{1 \le j \le M} \delta_x(j)$ and $q_x^* = \arg\max_{1 \le j \le M} \delta_x(j)$.

Finally, the state sequence is obtained by backtracking: $q_i^* = \psi_{i+1}(q_{i+1}^*)$, for $i = x-1, \ldots, 1$.

The time complexity of the Viterbi algorithm is $O(M^2 x)$.
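A minimal Viterbi decoder, under the same model layout as the forward/backward sketches (names are my own):

```python
def viterbi(pi, A, B, obs):
    """Return the most probable hidden-state path and its probability."""
    M = len(pi)
    # Initialization: delta_1(j) = pi_j * b_j(o_1); no backpointers yet
    delta = [pi[j] * B[j][obs[0]] for j in range(M)]
    psi = []
    # Recursion: keep, for each state j, the best predecessor k
    for o in obs[1:]:
        step = []
        new_delta = []
        for j in range(M):
            k_best = max(range(M), key=lambda k: delta[k] * A[k][j])
            step.append(k_best)
            new_delta.append(delta[k_best] * A[k_best][j] * B[j][o])
        psi.append(step)
        delta = new_delta
    # Termination: best final state, then backtrack through psi
    q = max(range(M), key=lambda k: delta[k])
    best_prob = delta[q]
    path = [q]
    for step in reversed(psi):
        q = step[q]
        path.append(q)
    path.reverse()
    return path, best_prob
```

Unlike the forward algorithm, which sums over predecessors, Viterbi maximizes over them, so the same O(M^2 x) table yields the single best path instead of the total probability.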

(4) Baum–Welch Algorithm. This algorithm is mainly used to solve the learning problem of the HMM and adopts an iterative idea [26]. Each iteration moves the likelihood toward a local maximum, and new model parameters are obtained.

Assuming a given observation sequence $O$ of model $\lambda$, the probability of being in state $s_j$ at time $i$ and in state $s_k$ at time $i+1$ is
$$\xi_i(j,k) = \frac{\alpha_i(j)\, a_{jk}\, b_k(o_{i+1})\, \beta_{i+1}(k)}{P(O \mid \lambda)}.$$

We further define $\gamma_i(j)$ as the probability, given the model $\lambda$ and the observed sequence $O$, that the state at time $i$ is $s_j$:
$$\gamma_i(j) = \frac{\alpha_i(j)\, \beta_i(j)}{P(O \mid \lambda)}.$$

At this time, the parameters of $\lambda$ can be re-estimated by the following formulas:
$$\bar{\pi}_j = \gamma_1(j), \qquad \bar{a}_{jk} = \frac{\sum_{i=1}^{x-1} \xi_i(j,k)}{\sum_{i=1}^{x-1} \gamma_i(j)}, \qquad \bar{b}_j(v) = \frac{\sum_{i:\, o_i = v} \gamma_i(j)}{\sum_{i=1}^{x} \gamma_i(j)},$$
where $\bar{b}_j(v)$ is the re-estimated probability of emitting symbol $v$ in state $s_j$.

The forward-backward algorithm is an algorithm for finding probabilities of known models and sequences, and is also a step in the loop of the Baum–Welch algorithm used for training. In each iteration of Baum–Welch, the forward and backward algorithms need to be called separately for the calculation.
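One Baum–Welch re-estimation step can be sketched as follows (my own illustration, assuming a single discrete observation sequence of length at least 2; the function name and variable layout match the earlier sketches rather than anything in the paper). It builds the full forward and backward tables, forms the occupation probabilities, and returns renormalized parameters:

```python
def baum_welch_step(pi, A, B, obs):
    """One EM re-estimation step for a discrete HMM on one sequence."""
    M, x = len(pi), len(obs)
    # Forward table: alpha[i][j]
    alpha = [[pi[j] * B[j][obs[0]] for j in range(M)]]
    for i in range(1, x):
        alpha.append([sum(alpha[-1][k] * A[k][j] for k in range(M)) * B[j][obs[i]]
                      for j in range(M)])
    # Backward table: beta[i][j], with beta[x-1][j] = 1
    beta = [[1.0] * M for _ in range(x)]
    for i in range(x - 2, -1, -1):
        beta[i] = [sum(A[j][k] * B[k][obs[i + 1]] * beta[i + 1][k]
                       for k in range(M)) for j in range(M)]
    pO = sum(alpha[-1])  # P(O | lambda)
    # State occupation gamma_i(j) and pairwise occupation xi_i(j,k)
    gamma = [[alpha[i][j] * beta[i][j] / pO for j in range(M)] for i in range(x)]
    xi = [[[alpha[i][j] * A[j][k] * B[k][obs[i + 1]] * beta[i + 1][k] / pO
            for k in range(M)] for j in range(M)] for i in range(x - 1)]
    # Re-estimation: expected counts divided by expected visits
    new_pi = gamma[0][:]
    new_A = [[sum(xi[i][j][k] for i in range(x - 1)) /
              sum(gamma[i][j] for i in range(x - 1))
              for k in range(M)] for j in range(M)]
    V = len(B[0])
    new_B = [[sum(gamma[i][j] for i in range(x) if obs[i] == v) /
              sum(gamma[i][j] for i in range(x))
              for v in range(V)] for j in range(M)]
    return new_pi, new_A, new_B
```

Repeating this step never decreases the likelihood of the training sequence, which is the EM property the text refers to; the forward and backward passes inside it are exactly the two algorithms above.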

4. Experiment of Piano Automatic Accompaniment System

The purpose of the experiments in this paper is to determine the structure of the sound pattern system. The original piano accompaniment patterns are taken from score examples and range in duration from one to three bars.

4.1. Two HMMs in an Automatic Accompaniment System
4.1.1. Two HMMs

Assume that a score example contains a melody part H and a piano accompaniment part. The accompaniment part can be regarded as a sound group structure and can be divided into a sequence of segments accordingly.

Such segmentation sequences can together be thought of as a piano accompaniment style. This accompaniment style includes the following information:
(1) The harmonic movement characteristics of the score example, as well as its mode information.
(2) The contrast between the rhythmic sequences of the score example's melody and accompaniment parts.
(3) Some elements of the accompaniment, which may contain imitation and an imitative structure.
(4) The beat, tempo, ending method, segment length, and other information of the score example.

The harmonic movement, together with the contrast in rhythm and melodic line between the melody part and the piano accompaniment part, are all elements of the piano accompaniment style. The harmonic movement of a composition involves the construction of harmonic piano accompaniment texture, the mode and the characteristic chord sequence describing that harmonic movement, and the ending method at the close of the work (the structure of the piano accompaniment sound group at the end of the melody part). Polyphonic piano accompaniment texture generation techniques, such as beats, rhythmic contrast sequences, and imitation structures, are used to compare the rhythm and melodic lines of the melody part and the piano accompaniment part. Based on these two aspects of piano accompaniment style, this paper creates two hidden Markov models: one HMM describes the melody line and rhythm contrast relationship of the score example with piano accompaniment, while the second describes the mode and harmonic movement of the score example with piano accompaniment.

The structure of the HMM covering melody and harmony is shown in Figure 2, where Fi indicates a hidden state and the observation value K-MP (ai), visible at the soundtrack stage, is represented by its matching value.

4.1.2. The Training Process of HMM

In this study, the two HMMs for harmony and rhythm were established according to the formal description given above. We start with an example of the training method with piano accompaniment, as illustrated in Figure 3.

4.1.3. Application of Viterbi Algorithm in Hidden Markov Model

After training on a certain number of sample scores, the system can be used to generate accompaniment for single-melody pieces without piano accompaniment. The process is shown in Figure 4.

A single-melody piece without piano accompaniment is input, and a piece with piano accompaniment is output. Taking five single-melody pieces as experimental objects, the accompaniment generation experiment was run twice on each input and the results compared. The experimental results are shown in Figure 5.

As shown in Figure 5, a comparative experiment on accompaniment generation was carried out on the five tracks. From the two repetitions in Figure 5, although the two generation results based on the HMM model in this paper differ slightly, the overall difference is small and the generated effect is relatively stable. By comparison, the traditional method's generation quality is close to that of this paper's method on a single run but worse overall, and its two repeated runs differ considerably: the difference for the method in this paper is no more than 5%, while that of the traditional method is almost always more than 10%, indicating that its generation for the same piece is not stable enough.

4.2. Overview of the Piano Automatic Accompaniment System
4.2.1. The Design Goal and Composition of the Automatic Accompaniment System

The piano automatic accompaniment system was created to match single-melody songs with piano accompaniment of a certain musical quality.

The piano accompaniment style incorporates the song's harmonic movement as well as the rhythm and melody line contrast between the melody and piano accompaniment parts. The parameters of the two hidden Markov models, for harmony and for rhythm contrast, are continuously adjusted by the system's training module, and the piano accompaniment patterns in the score examples are gathered into the database as sound pattern structures. The imitation structure is modeled and formally described by replicating the structure of a single-beat sound group in advance, together with the imitation of musical technique. The sound pattern structures and single-beat independent sound group structures are collected during training using imitation approaches. Using these musically skilled sound pattern metastructures in the accompaniment of single-melody songs makes the outcomes of machine composition more varied, inventive, and vital.

The training module, composition module, and database are the three basic components of the automatic accompaniment system. The training module reads in the sample scores, recognizes and collects the sound pattern structures, and adjusts the HMM parameters; the database is used to store various types of music data and the collected sound pattern metastructures [27, 28]. Using the parameter information from the training part and the sound pattern metastructures saved in the database, the composition module reads in a single-melody piece and generates a song with piano accompaniment [29].

4.2.2. Frame Structure of Automatic Accompaniment System

The overall framework of the piano automatic accompaniment system is shown in Figure 6.

Figure 6 describes the entire workflow of the system to automatically generate piano accompaniment. It can be seen that the entire work of the system can be divided into two main parts: the training phase and the composition phase. In the following subsections, we will discuss the specific implementation of these two stages in detail.

4.2.3. The Training Process of Automatic Accompaniment

In the training phase, the system trains on sample scores with specific piano accompaniment styles, beats, tempos, and modes to complete the following tasks: identifying the original piano accompaniment patterns and converting them into the sound pattern metastructures of the piano accompaniment stored in the database; collecting rhythm, K-MP, and other observations; and re-adjusting the parameters of the two HMM chains accordingly [13, 30, 31]. The main flow of the training process is shown in Figure 7.

Through the training part, the rhythm comparison sequence of the sample scores and the optimal characteristic chord sequence can be obtained.

By adjusting the parameters of the rhythm HMM and the harmony HMM through the two obtained sequences, a style close to that of the input samples can be formed.

4.3. Establishment of Sound Pattern Database
4.3.1. The Composition of the Database

The sound pattern database stores all the information the system needs at run time. It contains not only basic information about the music, but also the sound pattern metastructures, rhythm observations, and K-MP values collected by the system from the training sample scores, as shown in Table 1.

4.3.2. Phonetic Structure Collection

The sound pattern metastructure is the form in which the system stores piano accompaniment patterns, and all the piano accompaniment metastructures collected by the training part constitute the pattern metastructure library. Pattern structures with exactly the same rhythmic distribution are treated as one metastructure, except that patterns with the same rhythmic distribution are considered different in the middle of a song and at its end. Three types of sound pattern structures are set up in the system: the common sound pattern structure, the single-beat independent sound group structure, and the sound pattern structure with imitation technique.

5. Results of the Piano Automatic Accompaniment System

5.1. Song with Piano Accompaniment Process

In the process of accompanying a selected track, the input is a single-melody track, from which basic information such as beat and tempo is obtained. The rhythm sequence is calculated to divide the states, the observation sequences of the rhythm HMM and the chord HMM are derived from the obtained rhythm, the Viterbi algorithm computes the optimal hidden state sequences of the two HMMs, and finally the piano accompaniment is produced. The composition process is broken down as follows:
(1) Acquisition of the basic information of the music, including beat, tempo, mode, and other information. The beat information determines which sound pattern structures in the database are used for the composition, and the mode determines the scale of the music.
(2) Observation and calculation of the melody: after the melody is divided into measures, the rhythm distribution of each measure is calculated. Then, following the principle that imitation observations take priority, the rhythm observations of adjacent bars are compared with the rhythm observations corresponding to the imitation structures in the database; if they are the same, they are merged into one observation, and if they differ, each measure is treated as a separate observation.
(3) According to the divided rhythm observation sequence, the Viterbi algorithm selects the sound pattern metastructure sequence of the piano accompaniment. At this point, the root note information in the metastructures has not yet been determined.
(4) The observation sequence of the harmony HMM, the K-MP sequence, is further determined from the confirmed accompaniment pattern metastructures. The single-beat sound pattern structure takes the beat as its calculation unit, while the other K-MP sequences use the measure. Finally, the optimal chord sequence for the melody is obtained through the Viterbi algorithm.
(5) Extraction of the cadence state: since this part is special, if the obtained result does not satisfy the cadence conditions, it must be replaced, including the cadence chords and rhythms.
(6) After the above steps, the sound pattern metastructure sequence and the optimal characteristic chord sequence are used to generate the piano accompaniment for the input music.
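The chord-selection step can be illustrated with a toy, self-contained example: choosing one chord per bar by Viterbi decoding over a tiny "harmony HMM." The chord set, probabilities, and melody observations below are invented for the illustration and are not taken from the paper's trained models:

```python
# Hypothetical harmony HMM: hidden states are chords, observations are a
# coarse summary of which scale degree each melody bar emphasises.
CHORDS = ["C", "F", "G"]          # hidden states
PI = [0.6, 0.2, 0.2]              # initial chord probabilities
A = [[0.5, 0.25, 0.25],           # chord-to-chord transition probabilities
     [0.4, 0.4, 0.2],
     [0.6, 0.1, 0.3]]
B = [[0.7, 0.2, 0.1],             # emission: bar emphasises degree 0, 3, or 4
     [0.2, 0.7, 0.1],
     [0.1, 0.1, 0.8]]

def best_chords(obs):
    """Viterbi decode: most likely chord sequence for the bar observations."""
    M = len(PI)
    delta = [PI[j] * B[j][obs[0]] for j in range(M)]
    psi = []
    for o in obs[1:]:
        step = [max(range(M), key=lambda k: delta[k] * A[k][j])
                for j in range(M)]
        delta = [delta[step[j]] * A[step[j]][j] * B[j][o] for j in range(M)]
        psi.append(step)
    q = max(range(M), key=lambda k: delta[k])
    path = [q]
    for step in reversed(psi):
        q = step[q]
        path.append(q)
    return [CHORDS[c] for c in reversed(path)]
```

In the real system the hidden states would be the characteristic chords learned during training and the observations would be the K-MP sequence, but the decoding mechanics are the same.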

The final result is a composition containing left and right piano accompaniment, which has a direct relationship with the training samples in terms of harmonic movement, rhythmic contrast, and the use of imitation techniques.

To further test the performance of the automatic accompaniment system based on the sound pattern database, its effectiveness is compared with that of a traditional accompaniment composition system: 10 pieces are used as sample objects and the generation time of each system is recorded. Then, 8 tracks are used for imitation generation to test the matching degree of the system.

Figure 8(1) shows the generation time of the accompaniment melody. Across the 10 tracks, both systems require a fairly long generation time, but the time required by the accompaniment system based on the sound pattern database proposed in this paper is clearly much shorter: its longest generation time over the 10 tracks is 11 minutes, while the traditional system takes up to 47 minutes. This shows that the accompaniment generation time based on the sound pattern database is significantly shortened. Figure 8(2) shows the comparison of matching degree: the matching rate of the traditional system is below 45%, while that of the accompaniment system proposed in this paper is above 85%, so the matching degree of the piano accompaniment system in this paper is higher.

5.2. Comparative Detection

In order to better evaluate the model in this paper, 10 songs were selected as test samples and scored by an audience to further assess the effect of the proposed automatic accompaniment system, which was compared with the traditional automatic accompaniment system. The audience scores range from 1 to 10 (the larger the score, the better the effect), and the overall average score of each track under the two systems is calculated. The evaluation score of each track is shown in Figure 9.

As shown in the average score comparison chart in Figure 9, the scoring trends of both systems are relatively stable. In terms of scores, the overall average of the automatic accompaniment in this paper is clearly higher than that of the traditional automatic accompaniment system: about 7.2 versus about 6.2, a relative difference in the overall average score of about 15.9%. This indicates that the accompaniment generated by the proposed method is rated noticeably better than the traditional accompaniment.
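The averaging step behind these figures can be sketched as follows. The per-track scores below are hypothetical values chosen only to reproduce means of roughly 7.2 and 6.2; the paper does not state which denominator its 15.9% figure uses, so this sketch takes the traditional system's mean as the baseline, which gives a comparable value of about 16%.

```python
# Hypothetical per-track audience scores (scale 1-10), for illustration
# only; the real study averaged listener scores over 10 tracks.
proposed = [7.5, 7.0, 7.4, 6.9, 7.2, 7.1, 7.3, 7.0, 7.4, 7.2]
traditional = [6.4, 6.1, 6.3, 6.0, 6.2, 6.1, 6.3, 6.2, 6.1, 6.3]

avg_p = sum(proposed) / len(proposed)
avg_t = sum(traditional) / len(traditional)

# Relative difference, taking the traditional system's mean as baseline
rel_diff = (avg_p - avg_t) / avg_t
```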

5.3. Effect Evaluation

The purpose of this section is to investigate whether the system's accompaniment can be distinguished from artificial (human-composed) accompaniment. Again, 10 pieces of music are used as test samples and accompaniment melodies are generated for them. These are mixed with 10 artificially accompanied pieces, giving 20 pieces in total, and 30 listeners are invited to judge whether each randomly played piece uses machine accompaniment. The numbers of listeners answering "machine" and "artificial" are counted, and the error probability is calculated. The number of incorrect answers for each track is shown in Table 2.

The error rate statistics are shown in Figure 10.

Figure 10 shows that the error rates across tracks are broadly similar. The average error rate is 45.8%, which does not exceed 50%, and the relative error is about 8.3%; overall, the data are quite satisfactory. Although there are still certain differences between the two kinds of accompaniment, those differences are small. It can therefore be concluded that listeners largely cannot distinguish artificial accompaniment from machine accompaniment by ear.
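The error-rate calculation used above can be made concrete with a short sketch. The per-track counts below are hypothetical (Table 2 holds the real values); they are chosen only so that the average comes out near the reported 45.8%.

```python
# Hypothetical counts of listeners (out of 30) who misidentified each of
# the 20 test pieces; illustration of the error-rate calculation only.
listeners = 30
wrong_counts = [14, 13, 15, 14, 13, 14, 14, 13, 14, 14,
                14, 13, 14, 14, 13, 15, 14, 13, 14, 13]

# Per-track error rate, then the average over all pieces
error_rates = [w / listeners for w in wrong_counts]
avg_error = sum(error_rates) / len(error_rates)
```

An average near 50% would mean listeners are essentially guessing, i.e., the machine accompaniment is hard to tell apart from the artificial one.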

6. Conclusions

Algorithmic composition is a challenging research topic in the field of artificial intelligence, and it also has theoretical significance. By mimicking the composer's creative style during this study, this paper gains a better understanding of the composer's way of thinking in the creative process. This work builds a sound pattern metastructure database based on the concept of a "piano accompaniment metastructure," constructs a rhythm-comparison HMM and a harmony HMM, and applies the Viterbi algorithm to the field of music composition. The work mainly focuses on the abstraction and modeling of music theory, especially imitation; the formal description of the imitation structure; and the proposal of the sound pattern metastructure concept, and it successfully generated piano accompaniments for the music in the experiments. Many of these accompaniments also employ imitation and modeling techniques. Among all the results, most compositions have certain musical value, and some reach a notable level of craftsmanship.

Data Availability

No data were used to support the findings of the study.

Conflicts of Interest

The author declares that there are no conflicts of interest.