Abstract

Nanoscience is a comprehensive, interdisciplinary field built on many advanced sciences and technologies that has developed rapidly in recent decades. Nanotechnology is widely used in biomedicine, materials science, chemistry, physics, information and electronic engineering, and other fields. Nanomaterials are applied across research fields because of their many excellent properties, such as the quantum size effect, the small size effect, surface effects, and quantum tunneling, which have become focal points of scientific research. Controlled release means that a drug is released quantitatively and uniformly through a controlled-release coating film so that its concentration in the blood remains stable. This paper studies the application and performance of nanoparticles in controlled release systems for anticancer drugs. Centering on supramolecular polymer nanoparticles, it presents a case design and analysis of their use in an anticancer drug controlled release system. The experimental results showed that when laser irradiation and GSH coexisted, the dissolution rate and cumulative dissolution rate of DOX were the highest, with total release within 3 h approaching 53%. This result indicates that the release of DOX from DOX@HMPB@PEI-SS-HA is both photosensitive and redox responsive, with photosensitivity predominating.

1. Introduction

Chemotherapy plays an important role in treating many cancers, but traditional chemotherapy cannot achieve key goals such as stable drug delivery, targeting, site-specific accumulation, and timely release; these issues remain to be resolved. In recent years, the stimulus-controlled release of nanomedicines has been a research hot spot. Since their introduction in artificial intelligence and production planning, genetic algorithms, as global optimization algorithms, have been widely used, with applications in image processing, machine learning, computer-aided design, and other fields.

Cancer is one of the main culprits that endanger human health and threaten human life, and it has long been a focus of medical attention. Controlled drug release can be achieved through membrane-controlled systems and in vivo diffusion systems, while targeted release can be achieved through biorecognition mechanisms, permeation mechanisms, and in vitro control. Together, these means keep the blood drug concentration stable so that the patient obtains the best effect from the drug.

The innovations of this paper are as follows: (1) it combines drug controlled release with a decision tree algorithm and a genetic algorithm and introduces the theories and related methods of the two algorithms in detail; (2) for drug controlled release, it divides the experimental samples into four groups and, by evaluating the experimental results, compares the performance of nanoparticles in each. It concludes that nanoparticles play a positive role in the controlled release of anticancer drugs.

2. Related Work

At the end of 1959, the concept of the nanometer was first proposed by a Nobel Prize winner, but truly effective nanoscale research began in the 1960s. In the past decade, scientists have shifted their focus to the study of related nanomaterials and have achieved fruitful results on preparation processes. Kumar et al. discussed the structure, morphology, and optical properties of their prepared samples; refinement revealed a single-phase hexagonal wurtzite structure without any impurity phase, enhancing the understanding of the structural and optical properties of Co/Mn codoped nanocrystals. However, their data set was limited [1]. Chevigny et al. investigated the dispersion mechanism of nanocomposites composed of polymer-grafted nanoparticles mixed with free chains of the same polymer. By analyzing the interparticle structure factor, they extracted the corona thickness of the grafted brush and correlated it with the degree of dispersion: particle aggregation was associated with a significant collapse of the grafted chains. However, their conclusions are not comprehensive enough [2]. Ammari et al. worked on the mathematical modeling of plasmonic nanoparticles, analyzing how plasmonic resonances shift and broaden as nanoparticles change in size and shape. By analyzing the imaginary part of Green's function, they showed that superresolution and superfocusing can be achieved using plasmonic nanoparticles. However, their analysis process is relatively simple [3]. Son et al. reported the nonvolatile memory characteristics of bistable organic memory (BOM) devices; transmission electron microscopy (TEM) images showed an isotropic distribution of gold nanoparticles around the PVK colloidal surface. However, their contribution is not novel enough [4]. Cabezón et al. conducted a series of experiments to explore the potential of anti-transferrin receptor 8D3 monoclonal antibodies (mAbs) to transport neurotherapeutics across the BBB, discussed the trade-offs of the technique, and concluded that, along with other volume electron microscopy imaging techniques, SBF-SEM is a powerful method worth considering for studying drug transport across the BBB. However, their experiment was not controlled for a single influencing factor [5]. Examination of surface morphology by Han et al. revealed rod-like PANI nanoparticles encapsulated with ATP clay particles, confirming their properties as Pickering emulsion stabilizers. They examined the electrorheological (ER) performance of PANI@ATP nanoparticles dispersed in silicone oil under different electric field strengths: the PANI@ATP suspension exhibits typical electrorheological behavior, and its shear stress curve agrees well with the Cho-Choi-Jhon rheological equation of state. However, their data are not accurate enough [6]. Ke et al. reported the photothermal energy transfer efficiency of gold nanoparticles of different sizes by evaluating the temperature distribution of laser-activated particle suspensions in water; the results show that the photothermal properties of gold nanoparticles are size-tunable. However, their study did not proceed from practical conditions [7]. Sheikholeslami and Bhatti investigated the forced convection heat transfer of nanoparticles in a porous half-annulus under a uniform magnetic field. They concluded that the velocity of nanoparticles increases with the Darcy and Reynolds numbers and that the heat transfer rate is highest for platelet-shaped particles; the Nusselt number increases with increasing nanoparticle volume fraction, Darcy number, and Reynolds number and decreases with increasing Lorentz force [8, 9]. However, their procedure is rather complicated.

3. Methods for Drug Controlled Release

3.1. Application of Nanomaterials in Drug Controlled Release Systems

The rise of nanotechnology in the 1980s created a highly interdisciplinary and rapidly growing field of research. The fusion of nanotechnology with biology, materials science, and medicine has created the new branch of nanomedicine. The application of "nanomedicine" to disease diagnosis, treatment, and monitoring has revolutionized traditional medical treatment, alleviating the suffering of patients and improving people's health [10]. Due to their unique chemical and physical properties, nanomaterials have many advantages in drug administration and have been extensively evaluated and studied by scientists and clinicians.

The drug release behavior of traditional chemotherapy is uncontrollable. After a drug molecule enters the blood, it is distributed throughout the body by diffusion; it typically undergoes protein binding, metabolism, excretion, and other steps, and the drug concentration decreases rapidly, so only a small amount of the anticancer drug eventually reaches the tumor site. Using nanomaterials as drug carriers, as in this paper, can effectively deliver drugs to the lesion site, achieving targeted release and controlling the drug release rate.

Most polymeric nanomaterials are biodegradable and biocompatible, and their drug metabolic kinetics can be controlled through surface chemical modification [11]. Therefore, they are very suitable as drug carriers. Both natural and synthetic polymers are commonly used to make nanocarriers. Natural polymers include chitosan, albumin, heparin, dextran, gelatin, sodium alginate, and collagen. Synthetic polymers include polyethylene glycol (PEG), polyglutamic acid (PGA), polylactic acid (PLA), polycaprolactone (PCL), polylactide-co-glycolide (PLGA), and N-(2-hydroxypropyl)-methacrylamide copolymer (PHPMA) [12].

3.2. Decision Tree Algorithm
3.2.1. Overview

Decision tree learning algorithms are inductive learning algorithms based on a set of sample data (data samples can also be referred to as examples); they focus on drawing conclusions (concepts) from noisy, irregularly sampled data. The learned tree represents classification rules, and the data samples should be expressible in the form of attribute-value pairs.

A decision tree is a tree structure that classifies data automatically. It is a tree-structured knowledge representation that can be directly transformed into classification rules and can be thought of as a tree-based predictive model. The root node of the tree corresponds to the entire space of the data set, and each internal node corresponds to a splitting question: a test on a single variable that splits the data space into two or more parts. Each leaf node is a data partition carrying a classification result. Decision trees can also be interpreted as a special form of rule set, characterized by a hierarchical organization of rules. The decision tree algorithm is mainly a learning method for classifying discrete variables into feature types; continuous variables must be discretized before they can be learned and classified [13, 14].

Decision tree learning uses a top-down testing method: it compares feature values at the internal nodes of the decision tree and, according to the different values, follows the corresponding branch down to a leaf of the tree, where the conclusion is read off. Figure 1 briefly describes the process of creating a decision tree.

3.2.2. Basic Algorithm

The internal nodes of a decision tree are features or feature sets, the leaf nodes are the categories or conclusions to be distinguished, and the features at internal nodes are called test attributes or splitting attributes.

After a decision tree has been learned from a sample data set, it can be used to classify new, unseen data. When classifying data with a decision tree, a top-down traversal is used: the feature values of the data are compared against the tests at the internal nodes, and the branch to follow is decided by the attribute values, until a leaf node yields the class.
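As a minimal sketch of this top-down classification in Python (the node structure, attribute names, and values here are hypothetical illustrations, not taken from the paper):

```python
class Node:
    """Internal node: tests one attribute; leaf: carries a class label."""
    def __init__(self, attribute=None, branches=None, label=None):
        self.attribute = attribute      # attribute tested at this node
        self.branches = branches or {}  # attribute value -> child Node
        self.label = label              # class label if this is a leaf

def classify(node, sample):
    """Walk from the root to a leaf by comparing attribute values."""
    while node.label is None:
        node = node.branches[sample[node.attribute]]
    return node.label

# Hypothetical two-attribute tree: test attribute "A" first, then "B".
tree = Node("A", {
    "a1": Node(label="class 1"),
    "a2": Node("B", {"b1": Node(label="class 1"),
                     "b2": Node(label="class 2")}),
})
print(classify(tree, {"A": "a2", "B": "b2"}))  # -> "class 2"
```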

From this description, the path from the root node to a leaf node corresponds to a conjunction of rules, and the entire decision tree corresponds to a disjunctive combination of such rules. For example, in the decision tree of Figure 2, each internal node carries an attribute name and each branch one of that attribute's values; a sample belongs to the second category when the values of its tested attributes select the path ending in that leaf.

According to various attributes of the internal nodes of the decision tree, decision trees can be divided into the following types.

When each internal node of a decision tree tests only one attribute, it is called a univariate decision tree; when internal nodes test multiple attributes, it is called a multivariate decision tree. Each internal node can have two or more branches, depending on the number of distinct values of the tested feature.

(1) ID3 Algorithm. On the basis of the CLS learning algorithm, later researchers proposed a variety of decision tree learning algorithms, the most important of which is the ID3 algorithm proposed in 1979. ID3 is a decision tree learning algorithm based on information entropy and is the representative decision tree algorithm; most later algorithms are improvements built on it. It introduces information theory into decision tree learning and adopts a divide-and-conquer strategy [15]. When selecting the feature at each level of the decision tree, all features are examined and the feature with the largest information gain is chosen as the node, whose branches are determined by the feature's different values [16, 17]. Finally, the resulting decision tree is used to classify new samples.

In this paper, the sample data set is denoted $X$, and the goal is to divide it into $n$ categories. Assuming that the number of samples belonging to class $C_i$ is $N_i$ and the total number of samples in $X$ is $N$, the probability that a sample belongs to class $C_i$ is $P_i = N_i / N$. The uncertainty of the decision tree about the division of $X$ (that is, the information entropy) is

$$H(X) = -\sum_{i=1}^{n} P_i \log_2 P_i$$

If attribute $A$ is chosen as the test attribute (assuming $A$ has $m$ distinct values, partitioning $X$ into subsets $X_1, \ldots, X_m$), the remaining uncertainty (the conditional entropy) is

$$H(X \mid A) = \sum_{j=1}^{m} \frac{|X_j|}{|X|} \, H(X_j)$$

Then, the amount of information provided by attribute $A$ for classification (the information gain) is

$$\mathrm{Gain}(A) = H(X) - H(X \mid A)$$

In the formula, $\mathrm{Gain}(A)$ represents the reduction in information entropy after selecting feature $A$ as the classification feature. Therefore, the feature that maximizes $\mathrm{Gain}(A)$ should be chosen as the classification feature; decision trees built this way have the greatest certainty.
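To make these formulas concrete, the following minimal Python sketch (with a hypothetical toy data set) computes the entropy $H(X)$ and the information gain $\mathrm{Gain}(A)$ that ID3 uses to pick a split feature:

```python
from collections import Counter
import math

def entropy(labels):
    """H(X) = -sum(P_i * log2(P_i)) over the class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, feature_index):
    """Gain(A) = H(X) - H(X|A) for the feature at feature_index."""
    total = len(labels)
    # Partition the labels by the value the feature takes in each row.
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[feature_index], []).append(label)
    conditional = sum((len(part) / total) * entropy(part)
                      for part in partitions.values())
    return entropy(labels) - conditional

# Hypothetical toy data: each row is (outlook, windy), label is play/stay.
rows = [("sunny", "yes"), ("sunny", "no"), ("rain", "yes"), ("rain", "no")]
labels = ["stay", "play", "stay", "stay"]
print(information_gain(rows, labels, 0))  # gain of splitting on feature 0
```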

It can be seen that the ID3 algorithm retains the framework of the CLS algorithm, introduces the information-theoretic criterion above, and selects the attribute that maximizes $\mathrm{Gain}(A)$ as the classification test attribute. Figure 3 shows the specific process of the ID3 algorithm.

When the ID3 algorithm is used as the discriminant model, the discriminant results for the experimental samples are shown in Table 1.

(2) C4.5 Algorithm. The C4.5 algorithm was proposed in 1993. It evolved from the ID3 algorithm, inheriting its advantages and adding new methods and functions [18]. The algorithm consists of three parts: the tree-construction algorithm C4.5 tree, the pruning algorithm C4.5 pruning, and the rule-generation algorithm C4.5 rules.

The C4.5 tree algorithm selects the feature with the highest information gain rate in the current sample set as the test feature and, based on information-theoretic principles, finally builds a complete decision tree. Continuous features are first discretized, that is, their values are divided into intervals for convenient processing. C4.5 pruning uses an error-based pruning method to prune the complete decision tree into a simplified one. C4.5 rules converts a complete decision tree into a set of if-then rules and then simplifies them. Both the simplified decision trees and the rule sets resulting from pruning or rule creation can be used for classification.

Its basic idea is as follows: for the sample set $X$ with $n$ samples in total, assume feature $A$ has $m$ distinct values $a_1, a_2, \ldots, a_m$, and the number of samples taking value $a_j$ is $n_j$, so that $\sum_{j=1}^{m} n_j = n$. The entropy of feature $A$ is used to measure the cost of obtaining the information of feature $A$, namely,

$$H(A) = -\sum_{j=1}^{m} \frac{n_j}{n} \log_2 \frac{n_j}{n}$$

The information gain rate is defined as the ratio of the average mutual information (the information gain) to the cost of acquiring the information, namely,

$$\mathrm{GainRatio}(A) = \frac{\mathrm{Gain}(A)}{H(A)}$$

That is, the information gain rate is the amount of information acquired per unit cost, a measure of the relative uncertainty removed. Taking the information gain rate as the criterion for selecting test features, the feature with the largest $\mathrm{GainRatio}(A)$ should be selected as the test feature [19].
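Continuing the hypothetical sketch above, the gain ratio is obtained by dividing the information gain by the feature's own entropy $H(A)$; this illustrates the criterion only, not the full C4.5 implementation:

```python
def split_information(rows, feature_index):
    """H(A): entropy of the feature's own value distribution."""
    values = [row[feature_index] for row in rows]
    return entropy(values)  # reuses entropy() from the ID3 sketch

def gain_ratio(rows, labels, feature_index):
    """GainRatio(A) = Gain(A) / H(A); guard against H(A) == 0."""
    h_a = split_information(rows, feature_index)
    if h_a == 0:  # feature takes a single value; useless as a split
        return 0.0
    return information_gain(rows, labels, feature_index) / h_a

print(gain_ratio(rows, labels, 0))  # same hypothetical rows/labels as above
```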

The pruning strategy adopted by the C4.5 algorithm is pessimistic error pruning. For a node covering $N$ training samples of which $E$ are misclassified, the misclassification rate is estimated as

$$r = \frac{E}{N}$$

The continuity-corrected error rate is

$$r' = \frac{E + 1/2}{N}$$

Accordingly, the misclassification rate of a subtree $T$ is

$$r(T) = \frac{\sum_i E_i}{\sum_i N_i}$$

where $i$ ranges over the leaves of the subtree, so the corrected misclassification rate is

$$r'(T) = \frac{\sum_i E_i + L/2}{\sum_i N_i}$$

where $L$ is the number of leaves under the node.

On the training data, a subtree's leaves always produce no more errors than the single node that would replace them, but with the corrected counts this no longer necessarily holds, because the corrections depend on the number of leaves, not just the number of errors [20].

The standard error is calculated as follows:

$$SE\big(E'(T)\big) = \sqrt{\frac{E'(T)\,\big(N - E'(T)\big)}{N}}$$

Among them, for a single node, the corrected error count is

$$E'(t) = E(t) + \frac{1}{2}$$

And for the subtree, it is

$$E'(T) = \sum_{i=1}^{L} E_i + \frac{L}{2}$$

A subtree is pruned to a single node when the corrected error of the replacing node does not exceed the corrected error of the subtree plus one standard error, that is, $E'(t) \leq E'(T) + SE\big(E'(T)\big)$.
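A minimal numeric sketch of this pessimistic pruning test (all error counts are hypothetical): a subtree is replaced by a leaf when the corrected error of the replacing node is within one standard error of the corrected subtree error.

```python
import math

def corrected_node_error(errors):
    """E'(t) = E + 1/2 for a single node."""
    return errors + 0.5

def corrected_subtree_error(leaf_errors):
    """E'(T) = sum(E_i) + L/2 over the subtree's L leaves."""
    return sum(leaf_errors) + len(leaf_errors) / 2

def standard_error(e_corr, n):
    """SE(E'(T)) = sqrt(E'(T) * (N - E'(T)) / N)."""
    return math.sqrt(e_corr * (n - e_corr) / n)

def should_prune(node_errors, leaf_errors, n):
    e_tree = corrected_subtree_error(leaf_errors)
    return corrected_node_error(node_errors) <= e_tree + standard_error(e_tree, n)

# Hypothetical subtree: 3 leaves with 1, 0, and 2 errors over 40 samples;
# collapsing it into one leaf would make 5 errors.
print(should_prune(node_errors=5, leaf_errors=[1, 0, 2], n=40))
```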

The flowchart of the C4.5 algorithm is shown in Figure 4.

When the C4.5 algorithm is used as the discriminant model, the discriminant results for the experimental samples are shown in Table 2.

3.3. Genetic Algorithms
3.3.1. Overview

The genetic algorithm (GA) is a global optimization algorithm developed in recent decades. Based on the viewpoint of biological genetics, it realizes the adaptive optimization of individuals through natural selection, inheritance, mutation, and other mechanisms, reflecting the evolutionary process of "natural selection and survival of the fittest" in nature. It is grounded in the theory of evolution and adopts design methods such as genetic combinatorial optimization, genetic variation, and natural selection.

The genetic algorithm is an optimization search algorithm based on biological evolution theory and molecular genetics. It first encodes possible solutions to a particular problem as chromosomes. It randomly selects chromosomes as the initial population and then calculates the fitness of each chromosome from the value of the evaluation function; better-performing chromosomes receive higher fitness values, and those with the highest fitness are selected for replication, creating a new set of chromosomes. Through the genetic operators, these chromosomes become increasingly adapted to the environment, and the population eventually converges to a well-adapted group that yields the best solution to the problem [21, 22].

3.3.2. Basic Algorithm

The execution of the genetic algorithm involves many random operations. First, consider the effect of selection. In the standard genetic algorithm, selection is fitness-proportional. Let $m(H, t)$ be the number of individuals matching schema $H$ in generation $t$, $f(H)$ the average fitness of those individuals, and $\bar{f}$ the average fitness of the population. Under the action of the selection operator, the expected number of individuals of $H$ surviving into the next generation is

$$m(H, t+1) = m(H, t) \cdot \frac{f(H)}{\bar{f}}$$

Then, if $f(H)$ stays a constant factor $(1 + c)$ above the population average,

$$m(H, t) = m(H, 0) \cdot (1 + c)^{t}$$

The formula shows that the selection operator exponentially increases (decreases) the presence across generations of schemata whose fitness is above (below) the population average, improving the quality of the population.

Next, consider the role of the crossover operator. A schema is obviously preserved in the next generation if no crossover occurs or if the crossover point falls outside the defining positions at its left and right ends. Therefore, with crossover probability $P_c$, defining length $\delta(H)$, and string length $l$, the probability that schema $H$ survives into the next generation satisfies

$$P_s \geq 1 - P_c \cdot \frac{\delta(H)}{l - 1}$$

Taking into account the effects of selection and crossover together, there is

$$m(H, t+1) \geq m(H, t) \cdot \frac{f(H)}{\bar{f}} \left[ 1 - P_c \cdot \frac{\delta(H)}{l - 1} \right]$$

Finally, let $P_m$ denote the probability that the mutation operator acts on a single position, so a position survives with probability $1 - P_m$. A schema survives naturally into the next generation if all of its $o(H)$ defining positions remain unchanged; since $P_m$ is usually small, the probability of surviving mutation is

$$(1 - P_m)^{o(H)} \approx 1 - o(H) \cdot P_m$$

The probability of losing the schema to mutation is therefore about $o(H) \cdot P_m$. Taking the selection, crossover, and mutation operators together, we end up with

$$m(H, t+1) \geq m(H, t) \cdot \frac{f(H)}{\bar{f}} \left[ 1 - P_c \cdot \frac{\delta(H)}{l - 1} - o(H) \cdot P_m \right]$$

Specifically, if $\frac{f(H)}{\bar{f}} \left[ 1 - P_c \frac{\delta(H)}{l - 1} - o(H) P_m \right] = 1 + c$ for a constant $c > 0$, then

$$m(H, t) \geq m(H, 0) \cdot (1 + c)^{t}$$

that is, short, low-order, above-average schemata grow exponentially across generations.
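The bound can be evaluated numerically; in the following sketch, all parameter values are hypothetical:

```python
def schema_lower_bound(m_t, f_h, f_avg, p_c, delta_h, length, p_m, order):
    """Schema-theorem lower bound on m(H, t+1)."""
    survival = 1 - p_c * delta_h / (length - 1) - order * p_m
    return m_t * (f_h / f_avg) * survival

# Hypothetical schema: 20 matching individuals, 25% fitter than average,
# defining length 3 in strings of length 10, order 2.
print(schema_lower_bound(m_t=20, f_h=1.25, f_avg=1.0,
                         p_c=0.8, delta_h=3, length=10,
                         p_m=0.01, order=2))
```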

The flowchart of the standard genetic algorithm is shown in Figure 5.

Coding and initial population: since the algorithm cannot operate directly on the parameters of the solution space, candidate solutions must first be encoded as genotype string structures.

The encoding can be binary or another form, such as 11001. The initial population is generated according to the encoding form, and the population size is usually set according to the actual situation.

Fitness calculation: the fitness evaluates an individual's quality through an evaluation function. It is the main basis for the next round of operations of the genetic algorithm, so the design of the fitness function is very important.

Selection: chromosomes are chosen for replication according to their fitness, so that better individuals contribute more offspring to the next generation.

Crossover: two randomly paired individuals exchange part of their information to form new individuals, whose fitness is then calculated. If the fitness of a new individual improves, the individual is evolving in the desired direction.

Mutation: to keep the individuals in the population diverse and prevent convergence to local solutions, some parts of an individual are altered so that the population can survive and develop better. A minimal sketch combining all of these steps follows.
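Putting the steps above together, the following is a minimal genetic algorithm sketch in Python; the binary encoding length, parameter values, and the toy "count the 1-bits" fitness function are all hypothetical stand-ins for a real objective:

```python
import random

POP_SIZE, LENGTH, P_C, P_M, GENS = 20, 10, 0.8, 0.01, 50

def fitness(chrom):
    """Hypothetical fitness: number of 1-bits (the 'OneMax' toy problem)."""
    return sum(chrom)

def select(pop):
    """Fitness-proportional (roulette wheel) selection of two parents."""
    return random.choices(pop, weights=[fitness(c) + 1e-9 for c in pop], k=2)

def crossover(a, b):
    """Single-point crossover with probability P_C."""
    if random.random() < P_C:
        point = random.randint(1, LENGTH - 1)
        return a[:point] + b[point:]
    return a[:]

def mutate(chrom):
    """Flip each bit independently with probability P_M."""
    return [bit ^ 1 if random.random() < P_M else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENS):
    pop = [mutate(crossover(*select(pop))) for _ in range(POP_SIZE)]
print(max(pop, key=fitness))  # best individual found
```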

4. Experiment and Analysis of Polymer Nanoparticles in Drug Controlled Release

4.1. Experiment Preparation

At present, nanodrug carriers are widely used in cancer therapy, oral drugs, gene delivery, intracellular targeted drug delivery, quantitative drug delivery, immune-enhancing vaccines, and the prevention and treatment of vascular retention [23]. Figures 6(a) and 6(b) show examples of polymer nanomaterials.

Anticancer drug release experiments were divided into four groups: (1) pH 7.4, (2) pH 7.4 + GSH, (3) pH 7.4 + NIR laser irradiation, and (4) pH 7.4 + GSH + NIR laser irradiation. The dissolution medium for groups (1) and (3) was PBS at pH 7.4; the dissolution medium for groups (2) and (4) was pH 7.4 PBS containing 20 mM GSH.

First, 1 mg of DOX@HMPB@PEI-SS-HA was uniformly dispersed in 1 mL of the corresponding dissolution medium and transferred to a dialysis bag ( Da). Then, the dialysis bags were placed into 5 mL of the corresponding dissolution medium and incubated at a constant 37°C. Groups (1) and (2) were kept in a dark environment at a constant 37°C; groups (3) and (4) were irradiated with an 808 nm laser (power density 1.5 W/cm2) for 10 min before each sampling. At each preset time point, 2 mL of the sample to be tested was withdrawn and 2 mL of the same dissolution medium was quickly replenished. From the measured absorbance of each withdrawn sample, the DOX concentration was calculated, and the cumulative DOX dissolution rate at the different time points was determined.

The in vitro dissolution experiment for each system was repeated three times, and the average of the three runs was taken as the DOX release amount.
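Because each sampling removes 2 mL of medium and replaces it with fresh medium, computing the cumulative dissolution rate has to add back the drug withdrawn at earlier time points. The paper does not give its calculation explicitly; the following sketch applies the standard volume-replacement correction with hypothetical concentration values:

```python
def cumulative_release(concs, v_total, v_sample, drug_mass):
    """Cumulative % release with volume-replacement correction.

    concs: measured DOX concentrations (mg/mL) at successive time points.
    v_total: total volume of dissolution medium (mL).
    v_sample: volume withdrawn and replaced at each sampling (mL).
    drug_mass: total DOX loaded into the carrier (mg).
    """
    released = []
    for n, c_n in enumerate(concs):
        # Drug currently in the medium plus drug removed in earlier samples.
        mass = c_n * v_total + v_sample * sum(concs[:n])
        released.append(100 * mass / drug_mass)
    return released

# Hypothetical concentrations measured at successive time points.
print(cumulative_release([0.01, 0.02, 0.03], v_total=6.0,
                         v_sample=2.0, drug_mass=0.46))
```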

In this experiment, CLSM and FCM were used for detection, and MTT and Live-Dead assays were used for analysis. The cytotoxicity of HMPB@PEI-SS-HA and DOX@HMPB@PEI-SS-HA was evaluated separately.

4.2. Experimental Results

The PVP-coated solid PB NPs were etched by HCl to form HMPB NPs. We then combined electrostatic layer-by-layer self-assembly with chemical cross-linking to adsorb PEI-SH and HA-SH selectively onto the surface of the HMPB NPs. PEI-SH and HA-SH react to form disulfide bonds, yielding HMPB@PEI-SS-HA (as shown in Figure 7).

The zeta potential and particle size of the products at different preparation stages were measured by DLS. As the assembly proceeded, the particle size of the NPs gradually increased; the results are shown in Table 3.

HMPB has a hollow mesoporous structure with a huge specific surface area and pore volume, making it an ideal carrier for anticancer drugs. When DOX was loaded into HMPB@PEI-SS-HA at a mass ratio of 1/1.2, the drug loading reached 46% and the encapsulation efficiency approached 94%. Beyond the structural characteristics, the electrostatic interaction between the positively charged DOX and the negatively charged HMPB@PEI-SS-HA, as well as the coordination bonds formed between Fe(II) in HMPB and DOX, also contribute positively to the high loading capacity.
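For reference, drug loading and encapsulation efficiency are commonly defined as in the sketch below; both the definitions and the masses used are assumptions for illustration, since the paper does not spell out its formulas:

```python
def drug_loading(m_loaded, m_carrier):
    """Drug loading (%) = loaded drug / (loaded drug + carrier) * 100."""
    return 100 * m_loaded / (m_loaded + m_carrier)

def encapsulation_efficiency(m_loaded, m_fed):
    """Encapsulation efficiency (%) = loaded drug / drug fed * 100."""
    return 100 * m_loaded / m_fed

# Hypothetical masses in mg: 1.0 mg DOX fed per 1.2 mg carrier,
# assuming most of the fed DOX is encapsulated.
m_fed, m_carrier = 1.0, 1.2
m_loaded = 0.94 * m_fed
print(drug_loading(m_loaded, m_carrier),
      encapsulation_efficiency(m_loaded, m_fed))
```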

Next, we investigated the release behavior of DOX from DOX@HMPB@PEI-SS-HA. The results in Figure 8 show that the release of DOX is both redox and light responsive. Figure 8(a) shows the 24 h dissolution curve of the control group from Figure 8(b). With neither NIR laser irradiation nor GSH, the release of DOX from DOX@HMPB@PEI-SS-HA was very slow: dissolution essentially reached equilibrium within 24 h, and the total release was around 20%. This result can be attributed to the particularly low exchange efficiency between the inner cargo and the outer dissolution medium through the pores of the HMPB, which is beneficial for minimizing toxicity and side effects in healthy tissue. At the same time, it shows that DOX@HMPB@PEI-SS-HA is very stable and will not rupture during blood circulation, which would cause a sudden release of the anticancer drug.

As shown in Figure 8(b), with GSH alone, the dissolution rate of DOX was faster than that of the control group, and the cumulative dissolution rate was about 22% at 3 h. This indicates that the disulfide bonds in the carrier are broken under reducing conditions, destroying the PEI-SS-HA layer of the structure, bringing the internal DOX into closer contact with the aqueous medium and improving the exchange efficiency. With NIR laser irradiation alone, the dissolution rate of DOX was significantly accelerated compared with the control group, and the cumulative release at 3 h exceeded 40%, mainly because of the strong photothermal properties of HMPB itself.

On the one hand, laser irradiation significantly increased the temperature of the system, which increased the solubility of DOX in the dissolution medium and simultaneously accelerated DOX exchange between the inner space of the nanocapsule and the outer environment. On the other hand, because HMPB forms the inner shell of the carrier, NIR irradiation directly damages the HMPB structure, so the overall structure of the carrier collapses, significantly increasing the DOX release rate. When laser irradiation and GSH coexisted, the dissolution rate and cumulative dissolution rate of DOX were the highest, with total release within 3 h approaching 53%. This result indicates that the release of DOX from DOX@HMPB@PEI-SS-HA is both photosensitive and redox responsive, with photosensitivity predominating.

4.3. Experimental Analysis

In the experiments, an NDDS with both NIR and redox responsiveness, DOX@HMPB@PEI-SS-HA, was designed and synthesized. Due to the hollow mesoporous structure of HMPB, the DOX drug loading reached 46%. Anticancer drug release experiments showed that the strong photothermal properties of HMPB accelerated the release of DOX from DOX@HMPB@PEI-SS-HA under NIR laser irradiation. In addition, DOX dissolution is also redox responsive, but photosensitivity dominates. Cell uptake experiments also confirmed that DOX@HMPB@PEI-SS-HA can target and bind the CD44 receptor. The cytotoxicity evaluation further confirmed the positive effect of the photothermal properties of HMPB on accelerating the release of anticancer drugs and killing cells. HMPB@PEI-SS-HA can load not only DOX but also other anticancer drugs. Due to its high drug loading capacity, low toxicity, and excellent photothermal properties, it has broad application prospects in the field of photochemical combination therapy.

5. Discussion

First, through study of the relevant literature, this paper established the necessary background knowledge. It then analyzed how to study the application of nanomedicine in the controlled release of anticancer drugs based on the decision tree algorithm and the genetic algorithm, expounded the concepts and related techniques of the two algorithms, and explored the application of polymer nanomedicines. Finally, the applicability of nanomedicines to the controlled release of anticancer drugs was analyzed experimentally.

Nanoscience is a comprehensive, multidisciplinary field based on many advanced scientific technologies that has developed rapidly in recent decades. Nanotechnology has been widely used in biomedicine, materials science, chemistry, physics, information and electronic engineering, and other fields. Nanomaterials are used across research fields because of their many excellent properties, such as the quantum size effect, the small size effect, surface effects, and quantum tunneling, which have become hot spots of scientific research. However, researchers still face great challenges in preparing nanomaterials with different shapes and properties and in applying them. Researchers hope to better understand the microstructure of nanomaterials and to control their shape, structure, composition, and other characteristics [24, 25].

The experimental analysis in this paper shows that organic-inorganic composite nanocapsules can be prepared from polymer nanoparticles, with a DOX drug loading of 46%. The in vitro release rate and total release were significantly improved. Cellular uptake and cytotoxicity experiments further confirmed the positive role of supramolecular polymer nanoparticles in controlling anticancer drug release and inducing cell death.

6. Conclusion

Today, functional nanodrug carriers have become a necessary and important means for cancer diagnosis and treatment. Therefore, the preparation and application of functional nanocarriers have become a focus for chemical and biomedical researchers. The key to producing functional nanocarriers is the functionality of their materials. Compared with conventional covalent polymers, supramolecular polymers with multiple stimuli-responsive properties are dynamic and repairable because of their noncovalent nature, and graft copolymers, with their many side chains, have unique advantages in functional modification and self-assembly. There are many reports on the self-assembly of supramolecular polymers and graft copolymers, but their application in the fabrication of functional nanodrug carriers remains to be further developed. In addition, it is necessary to improve polymer fabrication methods, seeking polymers that, through simple modification or reversible chemistry, improve the preparation and performance of drug carriers.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Science and Technology Innovation Projects of Wuhan Business University (No. 2016kc03).