Journal of Nanomaterials


Research Article | Open Access


G. A. Nemnes, T. L. Mitran, A. Manolescu, "Gap Prediction in Hybrid Graphene-Hexagonal Boron Nitride Nanoflakes Using Artificial Neural Networks", Journal of Nanomaterials, vol. 2019, Article ID 6960787, 8 pages, 2019.

Gap Prediction in Hybrid Graphene-Hexagonal Boron Nitride Nanoflakes Using Artificial Neural Networks

Academic Editor: Bo Tan
Received: 06 Dec 2018
Accepted: 02 May 2019
Published: 16 May 2019


The electronic properties of graphene nanoflakes (GNFs) with embedded hexagonal boron nitride (hBN) domains are investigated by combined ab initio density functional theory calculations and machine-learning techniques. The energy gaps of the quasi-0D graphene-based systems, defined as the differences between LUMO and HOMO energies, depend not only on the size of the hBN domain relative to the size of the pristine graphene nanoflake but also on the position of the hBN domain. The range of the energy gaps for different configurations increases as the hBN domains get larger. Considering a set of GNFs with embedded rectangular hBN domains, we develop two artificial neural network (ANN) models able to reproduce the gap energies with high accuracy and investigate the tunability of the energy gap. In one ANN model, the input is in one-to-one correspondence with the atoms in the GNF, while in the second model the inputs account for basic structures in the GNF, allowing potential use in upscaled systems. We perform a statistical analysis over different configurations of ANNs to optimize the network structure. The trained ANNs provide a correlation between the atomic system configuration and the magnitude of the energy gaps, which may be regarded as an efficient tool for optimizing the design of nanostructured graphene-based materials for specific electronic properties.

1. Introduction

The absence of an electronic gap in pristine graphene hinders many of the expected applications based on the field effect. Graphene nanopatterning is one way to tune the electronic and transport properties, and this can be achieved by reducing the dimensionality [1–4], by drilling periodic arrangements of holes [5, 6], by embedding hexagonal boron nitride (hBN) [7–12], or by combining any of these. Graphene nanoribbons (GNRs) and graphene nanoflakes (GNFs), typically passivated with monovalent species like hydrogen or halogen atoms, are two examples of quasi-1D and quasi-0D graphene systems, respectively, which have attracted a lot of attention in the past few years. GNRs can have a metallic or semiconducting behavior depending on the lateral width and edge type: armchair or zigzag. In contrast to GNRs, where only the edge states may influence the electronic properties, in GNFs these are markedly influenced by both edge and corner states and, in general, by the different possible shapes [13, 14]. In addition, GNFs may be functionalized, which further extends the range of the electronic, optical, and magnetic properties.

GNFs can be produced by bottom-up approaches, where the synthesis takes place in solution by mechanical extrusion, using magnetic field alignment and thermal annealing [15, 16], or by top-down methods, using techniques like e-beam lithography [17], plasma etching [18], or cationic surfactant-mediated exfoliation of graphite [19]. Besides the many applications envisioned in nanoelectronics and spintronics [20], recent work also indicates a role for GNFs in biological recognition [21]. Therefore, methods for the efficient investigation of multiple configurations of GNFs and related structures are in high demand.

In the past few years, machine-learning (ML) techniques have been gaining ground in the field of condensed matter. They have been developed to predict the band gaps in solids [22, 23], while they also provide new clues in crystal structure prediction [24, 25]. They can be used to bypass the Kohn-Sham equations by learning energy functionals via examples [26] or by predicting DFT Hamiltonians [27]. The generic aim is to develop less expensive and faster methods to calculate a system’s properties. To this end, the methodology contained in PROPhet [28] provides a general framework for coupling machine learning and first-principles methods. ML techniques can also provide more insights about the physical properties of a system. Their usefulness as a universal descriptor of grain boundary systems has been pointed out [29], potentially indicating which building blocks map to particular physical properties. ML techniques can also achieve high accuracies, the prediction errors of molecular machine-learning models falling below the hybrid DFT error [30]. High-throughput DFT calculations in connection with ML techniques, as well as some of the problems, challenges, and future perspectives, are illustrated in a recent review [31].

Regarding graphene systems, ML techniques have been employed in several studies, e.g., for obtaining an accurate interatomic potential for graphene [32], for searching the most stable structures of doped boron atoms in graphene [33], for investigating the influence of GNF topology [34], and for predicting accuracy differences between different levels of theory [35], as well as for the prediction of interfacial thermal resistance between graphene and hBN [36].

In this paper, we investigate the electronic properties of hybrid graphene-hBN nanoflakes, using combined DFT and ML methods. We construct the distribution of gap energies using ab initio DFT calculations, as LUMO-HOMO differences, which depend on the size and position of the hBN domains within the GNF. Given the large number of possibilities for placing the hBN domains, extensive DFT calculations are typically required, at a significant computational cost. Instead, we develop artificial neural network (ANN) models able to reproduce the energy gaps with high accuracy, which significantly reduces the computational effort. We test our ANN models against reference gap values obtained by DFT and discuss the optimal conditions for the network structure.

2. Model Systems and Computational Methods

We consider GNFs with embedded hBN domains, passivated with hydrogen, as indicated in Figure 1. The hBN domains are rectangular-shaped regions containing an equal number of boron and nitrogen atoms. In this way, the systems retain an intrinsic semiconducting behavior, without a net chemical doping. The embedded rectangular hBN domain is randomly positioned in the graphene nanoflake. The widths and heights of the rectangular hBN regions are extracted from a flat distribution, so that in the limit the entire graphene nanoflake can be replaced by BN. The systems analyzed here have a total of 200 atoms, comprising the graphene/hBN core and the passivating hydrogen atoms. For the investigation of the electronic properties, 900 nonequivalent systems are generated.
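A hypothetical sketch of how such random domain configurations can be generated is given below; the helper `sample_hbn_rectangle` and the lattice-cell parametrization are our illustration, not the authors' actual generator:

```python
import random

def sample_hbn_rectangle(flake_w, flake_h, rng):
    """Sample a rectangular hBN domain inside a flake_w x flake_h
    graphene nanoflake (dimensions in lattice cells).  Widths and
    heights are drawn from a flat (uniform) distribution, so the
    domain can grow up to the full flake size; the position is
    random such that the rectangle always fits inside the flake."""
    w = rng.randint(1, flake_w)       # flat distribution of widths
    h = rng.randint(1, flake_h)       # flat distribution of heights
    x0 = rng.randint(0, flake_w - w)  # random position: rectangle
    y0 = rng.randint(0, flake_h - h)  # stays inside the flake
    return x0, y0, w, h

# Generate a batch of configurations; collecting them in a set drops
# duplicates, mimicking the requirement of nonequivalent systems.
configs = {sample_hbn_rectangle(10, 8, random.Random(seed))
           for seed in range(900)}
```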

The DFT calculations are performed using the SIESTA code [37], employing the local density approximation (LDA) in the parametrization of Ceperley and Alder [38]. The strictly localized basis set allows a linear scaling of the computational time with the system size. The self-consistent solution of the Kohn-Sham equations was obtained using the standard double-ζ polarized basis set, a grid cutoff of 100 Ry, and the norm-conserving pseudopotentials of Troullier and Martins [39] with typical valence electron configurations for carbon, boron, and nitrogen. The supercells are cubic cells with a linear size of 50 Å, which provides enough empty space so that two neighboring nanoflake structures do not interact. Gamma point calculations are performed for the cluster-type systems. The gap energies are then determined, defined as the difference between the LUMO and HOMO energies, E_gap = E_LUMO − E_HOMO.
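For orientation, a minimal SIESTA input fragment consistent with these settings might look as follows; the keyword names follow the SIESTA manual, while the system label and the lattice block are placeholders rather than the authors' actual input files:

```
SystemLabel        gnf-hbn        # placeholder label
XC.functional      LDA
XC.authors         CA             # Ceperley-Alder parametrization
PAO.BasisSize      DZP            # double-zeta polarized basis
MeshCutoff         100.0 Ry       # real-space grid cutoff
LatticeConstant    50.0 Ang       # cubic cell, isolated flake
%block LatticeVectors
  1.0  0.0  0.0
  0.0  1.0  0.0
  0.0  0.0  1.0
%endblock LatticeVectors
# No k-grid block: SIESTA defaults to a Gamma-point-only calculation.
```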

Based on the DFT results, we implement ANN models able to reproduce the gap energy for similar systems from a new set. The ANNs are standard fully connected backpropagation neural networks implemented using the FANN library [40], with three layers: one input layer, one hidden layer, and one output layer. In Method 1, we assign an input neuron to each atom of species C, B, or N, so that the number of input neurons equals the number of non-hydrogen atoms. Method 2 accounts for the chemical neighborhood of a certain atomic species and its prevalence in the system. In this case, we use 20 input neurons: 4 of them account for the proportions of the four atomic species (C, B, N, and H) and 16 are associated with the normalized counts of atom quadruplets (X; X1, X2, X3), where X = C, B, or N and X1, X2, X3 are the three nearest neighbors of X, with Xi = C, B, N, or H. Therefore, to obtain the normalized counts of atom quadruplets, one loops over all C, B, and N atoms and counts the number of times each particular configuration of nearest neighbors appears. The number of neurons in the hidden layer is varied from 25 to 200 in order to find a close-to-optimal configuration. The output layer has a single neuron, whose value maps the gap energy by a continuous function onto the [0,1] interval, corresponding to a maximum gap energy E_gap^max. Method 2 has the advantage that the input size does not depend on the system size, allowing the same ANN to handle upscaled structures. The two proposed methods are not limited to rectangular shapes and may handle systems with irregular patterns.
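The Method 2 input encoding can be sketched as follows. The helper name, the data layout, and the keying of quadruplet types by (center species, sorted neighbor species) are our illustrative assumptions; the paper fixes 16 such quadruplet types for its systems:

```python
from collections import Counter

def method2_descriptor(species, neighbors):
    """Illustrative sketch of the Method 2 encoding.
    species[i] is one of "C", "B", "N", "H"; neighbors[i] lists the
    three nearest neighbors of heavy atom i (H atoms have no entry).
    Returns the 4 species fractions plus the quadruplet counts
    (X; X1X2X3), keyed by center species and sorted neighbor species,
    normalized to the number of C/B/N atoms."""
    n_total = len(species)
    fractions = {s: sum(1 for a in species if a == s) / n_total
                 for s in ("C", "B", "N", "H")}
    heavy = [i for i, s in enumerate(species) if s != "H"]
    quads = Counter()
    for i in heavy:
        # Sorting makes the neighbor triple order-independent.
        key = (species[i], "".join(sorted(species[j] for j in neighbors[i])))
        quads[key] += 1
    counts = {k: v / len(heavy) for k, v in quads.items()}
    return fractions, counts
```

Because the output depends only on species fractions and neighborhood statistics, the same descriptor length serves flakes of any size, which is the scale invariance noted above.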

For training, we employ the iRPROP algorithm of Igel and Hüsken [41], which is a variant of the standard RPROP algorithm introduced by Riedmiller and Braun [42]. The iRPROP algorithm is adaptive, with no preset learning rate. The sigmoid activation function is used, and the target mean square error during training is set to 10−5. Since the ANNs are randomly initialized and the final weight configurations depend on the seeds, an ensemble of 1000 ANNs is trained on the same data set. Finally, statistics regarding the accuracy obtained on the test data are compiled.
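The per-weight adaptive update at the heart of iRPROP can be sketched on a toy quadratic objective; this is a minimal illustration of the iRPROP- variant, not the FANN implementation used here, and the function name and step-size constants are our own:

```python
import numpy as np

def irprop_minus(grad_fn, w, steps=200, d0=0.1, dmin=1e-6, dmax=50.0):
    """Minimal iRPROP- sketch: each weight keeps its own step size,
    adapted from the sign of its gradient; there is no global preset
    learning rate.  On a gradient sign change, the stored gradient is
    zeroed instead of reverting the weight (the '-' variant)."""
    delta = np.full_like(w, d0)   # per-weight step sizes
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        same = g * g_prev > 0
        flip = g * g_prev < 0
        delta[same] = np.minimum(delta[same] * 1.2, dmax)  # accelerate
        delta[flip] = np.maximum(delta[flip] * 0.5, dmin)  # backtrack
        g[flip] = 0.0             # iRPROP-: forget the flipped gradient
        w = w - np.sign(g) * delta
        g_prev = g
    return w

# Toy usage: minimize f(w) = sum((w - 3)^2), whose gradient is 2*(w - 3).
w_opt = irprop_minus(lambda w: 2 * (w - 3.0), np.array([10.0, -5.0]))
```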

The trained ANNs are tested on a set of 100 new examples, and the predicted gaps are compared to the reference values obtained by DFT calculations. We use the coefficient of determination, R², as a measure of how well the observed outcomes are replicated by the ANN model.
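As a reminder of the metric, R² compares the residual sum of squares to the total variance of the reference data; a minimal sketch (the helper name is ours):

```python
def r_squared(y_ref, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot,
    comparing predicted gaps y_pred to reference gaps y_ref."""
    mean = sum(y_ref) / len(y_ref)
    ss_tot = sum((y - mean) ** 2 for y in y_ref)          # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(y_ref, y_pred))  # residuals
    return 1.0 - ss_res / ss_tot

# A perfect prediction gives R^2 = 1; values around 0.95 correspond to
# the test-set accuracy level reported below for Method 1.
```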

3. Results and Discussion

GNFs are quasi-0D systems with a discrete energy spectrum, where the gap energy is typically influenced by their geometry, passivation, and nanopatterning. By embedding hBN, a wide band-gap isomorph of graphene, into GNFs, a strong variation of the gap energy is expected. Particularly in finite systems, the position and shape of the embedded rectangular hBN domain, closer to the edges or at the center of the GNF, significantly influence E_gap.

We first investigate the variation of E_gap as a function of the hBN domain size, given by the BN fraction f_BN, i.e., the fraction of boron and nitrogen atoms among the non-hydrogen atoms of the flake. As indicated in Figure 2, there is a rather wide dispersion of E_gap values, since there are multiple configurations with the same f_BN. Still, a clear trend is visible: larger gaps may be obtained as the BN domain size increases, while smaller gaps are still present. A fit with a second-degree polynomial shows the statistical increase of the gap energy with f_BN.
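Such a trend line can be extracted with a standard least-squares quadratic fit. The sketch below uses synthetic (f_BN, E_gap) data in place of the DFT results of Figure 2, so the generating coefficients and the fitted values are purely illustrative:

```python
import numpy as np

# Synthetic stand-in for the 900 DFT data points: a quadratic trend
# (coefficients chosen arbitrarily) plus configuration-to-configuration
# scatter, mimicking the dispersion at fixed f_BN.
rng = np.random.default_rng(0)
f_bn = rng.uniform(0.0, 1.0, 900)
e_gap = 0.5 + 0.3 * f_bn + 2.0 * f_bn**2 + rng.normal(0.0, 0.1, 900)

# Least-squares fit with a second-degree polynomial.
coeffs = np.polyfit(f_bn, e_gap, deg=2)  # [a2, a1, a0], highest power first
trend = np.poly1d(coeffs)                # trend(f) = a2*f^2 + a1*f + a0
```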

Next, we investigate the accuracies in predicting the energy gaps for the proposed ANN models. In Method 1, we start with a three-layer ANN configuration: one input neuron per C, B, or N atom, one hidden layer, and a single output neuron. The ANN is trained on 800 examples and tested on a new set of 100 structures. The results are represented in Figure 3, where the predicted gap is plotted vs. the reference DFT gap. The coefficient of determination calculated for the training set yields a rather high value of R² = 99.7%, which indicates a consistent convergence during training. Typically, for this ANN configuration, the threshold for the mean square error set to 10−5 is reached in ∼400 steps. Running the ANN on the test systems, one obtains R² values as high as 95%. However, as detailed in the following, the performance of the ANN relies on the converged configuration, which may depend on the ANN initialization.

In the second method, labeled Method 2, the ANN is trained to capture the local chemical neighborhood. For the same set of systems, there are 16 instances of atom quadruplets (X; X1, X2, X3), with X = C, B, or N and Xi = C, B, N, or H. These are counted for each structure and normalized to the total number of carbon, boron, and nitrogen atoms. Along with these 16 inputs, the fractions corresponding to each of the four atomic species are added, yielding a total of 20 input neurons. These extra input neurons improve the prediction behavior of the ANN, as they emphasize the importance of the size of the hBN domains. The same training procedure and convergence criterion are employed as for Method 1. The convergence during training is poorer and the obtained accuracy is typically smaller for Method 2, although the results are comparable with the ones obtained for Method 1. However, Method 2 is by construction scale invariant, and this is potentially a significant advantage in investigating systems with different sizes.

The final ANN configuration following the training phase depends on the assigned random initial weights. Consequently, the accuracy of the output results obtained by running the test examples is subject to the initialization procedure. In order to see how robust the obtained results are, we construct histograms using an ensemble of 2000 trained ANNs. The results are shown in Figure 4 for the two methods. In Method 1, as the number of hidden neurons is varied, the distributions evolve from a rather widespread distribution of R² coefficients for small hidden layers to a distribution more confined around the high-accuracy values, obtained for a number of hidden neurons between 100 and 125. Increasing the hidden layer size further does not improve the accuracy. Rather, as the ANN becomes larger, memory effects become important, to the detriment of capturing the essential features of the structures. Moreover, by decreasing the mean square error threshold to 10−6 during training, the ANNs become overtrained and the R² coefficient does not improve either. Therefore, we conclude that optimal ANN configurations exist, with quite high maximal output accuracies (∼97%) and a relatively narrow band of ∼10% in which the R² coefficients of most trained ANNs can be found.

Comparatively, by employing Method 2, the histograms follow the same trend, although the accuracy spread is larger. Still, the highest R² values can reach ∼91%. This shows that by describing the local chemical environment and compiling statistics reflecting the neighborhoods of the different species, one can infer quite reasonably the electronic features of the GNFs, in particular the energy gaps. A direct comparison to Method 1 is shown in Figure 5. Additionally, the distribution of R² coefficients for an intermediate model based on geometrical parameters of the rectangular hBN domains is also indicated. In this case, the four distances between the edges of the hBN rectangles and the edges of the GNF, along with the two linear sizes of the hBN domains, were taken as inputs, i.e., six input neurons. However, this approach can be used only as long as the geometric features of the samples can be easily identified, in this case the parameters describing the rectangular shapes. The distribution of the R² coefficients lies in between the ones corresponding to Method 1 and Method 2, with a maximum at 94.1%, compared to the best results of 97.2% obtained with Method 1 and 91.9% using Method 2. This also shows that by identifying the geometrical features in graphene-hBN systems, without a detailed representation of the species present in the structure, i.e., considering the hBN domain as a whole, reasonable accuracies may be achieved.

4. Conclusions

The electronic properties of GNFs with embedded hBN domains were investigated using combined DFT and ML techniques. Using DFT calculations, we constructed the energy gap distribution for a set of systems with different rectangular hBN shapes. The collected data was used to train two types of ANNs. In Method 1, one input neuron is assigned to each atom of species C, B, or N, while in Method 2 the prevalence of the atomic species and of their chemical neighborhoods is taken into account. The trained ANNs provide a correlation between the different domain shapes, sizes, and locations within the GNFs, on one hand, and the magnitude of the energy gaps, on the other hand. Method 1 shows the highest accuracies, while the smaller ANNs of Method 2 are not bound to a fixed system size and reach comparable accuracies. A statistical analysis reveals the optimal configurations of the three-layer ANNs, pointing out potential memory and overtraining effects in large networks. The approach based on ANNs is therefore a feasible route, reducing the computational effort while retaining a high accuracy; it may thus be employed for optimizing the design and selecting candidates of nanostructured graphene-based materials for specific electronic properties.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by the Romanian Ministry of Research and Innovation under the project PN 19060205/2019 and by the Romania-JINR cooperation project.


References

  1. Y.-W. Son, M. L. Cohen, and S. G. Louie, “Energy gaps in graphene nanoribbons,” Physical Review Letters, vol. 97, no. 21, article 216803, 2006.
  2. Y.-W. Son, M. L. Cohen, and S. G. Louie, “Half-metallic graphene nanoribbons,” Nature, vol. 444, no. 7117, pp. 347–349, 2006.
  3. V. Barone, O. Hod, and G. E. Scuseria, “Electronic structure and stability of semiconducting graphene nanoribbons,” Nano Letters, vol. 6, no. 12, pp. 2748–2754, 2006.
  4. S. Dutta and S. K. Pati, “Novel properties of graphene nanoribbons: a review,” Journal of Materials Chemistry, vol. 20, no. 38, pp. 8207–8223, 2010.
  5. J. Bai, X. Zhong, S. Jiang, Y. Huang, and X. Duan, “Graphene nanomesh,” Nature Nanotechnology, vol. 5, no. 3, pp. 190–194, 2010.
  6. G. A. Nemnes, C. Visan, and A. Manolescu, “Electronic and thermal conduction properties of halogenated porous graphene nanoribbons,” Journal of Materials Chemistry C, vol. 5, no. 18, pp. 4435–4441, 2017.
  7. L. Ci, L. Song, C. Jin et al., “Atomic layers of hybridized boron nitride and graphene domains,” Nature Materials, vol. 9, no. 5, pp. 430–435, 2010.
  8. Y. Miyata, E. Maeda, K. Kamon et al., “Fabrication and characterization of graphene/hexagonal boron nitride hybrid sheets,” Applied Physics Express, vol. 5, no. 8, article 085102, 2012.
  9. Y. Lin and J. W. Connell, “Advances in 2D boron nitride nanostructures: nanosheets, nanoribbons, nanomeshes, and hybrids with graphene,” Nanoscale, vol. 4, no. 22, pp. 6908–6939, 2012.
  10. G. A. Nemnes, “Spin current switching and spin-filtering effects in Mn-doped boron nitride nanoribbons,” Journal of Nanomaterials, vol. 2012, Article ID 748639, 5 pages, 2012.
  11. G. A. Nemnes and S. Antohe, “Spin filtering in graphene nanoribbons with Mn-doped boron nitride inclusions,” Materials Science and Engineering: B, vol. 178, no. 19, pp. 1347–1351, 2013.
  12. L. Chen, L. He, H. S. Wang et al., “Oriented graphene nanoribbons embedded in hexagonal boron nitride trenches,” Nature Communications, vol. 8, article 14703, 2017.
  13. I. Snook and A. Barnard, “Graphene nano-flakes and nano-dots: theory, experiment and applications,” in Physics and Applications of Graphene, S. Mikhailov, Ed., IntechOpen, 2011.
  14. C. Mansilla Wettstein, F. P. Bonafe, M. B. Oviedo, and C. G. Sanchez, “Optical properties of graphene nanoflakes: shape matters,” The Journal of Chemical Physics, vol. 144, no. 22, article 224305, 2016.
  15. J. Wu, W. Pisula, and K. Müllen, “Graphenes as potential material for electronics,” Chemical Reviews, vol. 107, no. 3, pp. 718–747, 2007.
  16. L. Zhi and K. Müllen, “A bottom-up approach from molecular nanographenes to unconventional carbon materials,” Journal of Materials Chemistry, vol. 18, no. 13, pp. 1472–1484, 2008.
  17. C. Berger, Z. Song, X. Li et al., “Electronic confinement and coherence in patterned epitaxial graphene,” Science, vol. 312, no. 5777, pp. 1191–1196, 2006.
  18. S. Neubeck, L. A. Ponomarenko, F. Freitag et al., “From one electron to one hole: quasiparticle counting in graphene quantum dots determined by electrochemical and plasma etching,” Small, vol. 6, no. 14, pp. 1469–1473, 2010.
  19. S. Mutyala and J. Mathiyarasu, “Preparation of graphene nanoflakes and its application for detection of hydrazine,” Sensors and Actuators B: Chemical, vol. 210, pp. 692–699, 2015.
  20. A. Valli, A. Amaricci, V. Brosco, and M. Capone, “Quantum interference assisted spin filtering in graphene nanoflakes,” Nano Letters, vol. 18, no. 3, pp. 2158–2164, 2018.
  21. V. Castagnola, W. Zhao, L. Boselli et al., “Biological recognition of graphene nanoflakes,” Nature Communications, vol. 9, no. 1, p. 1577, 2018.
  22. J. Lee, A. Seko, K. Shitara, K. Nakayama, and I. Tanaka, “Prediction model of band gap for inorganic compounds by combination of density functional theory calculations and machine learning techniques,” Physical Review B, vol. 93, no. 11, article 115104, 2016.
  23. G. Pilania, J. E. Gubernatis, and T. Lookman, “Multi-fidelity machine learning models for accurate bandgap predictions of solids,” Computational Materials Science, vol. 129, pp. 156–163, 2017.
  24. Y. Liu, T. Zhao, W. Ju, and S. Shi, “Materials discovery and design using machine learning,” Journal of Materiomics, vol. 3, no. 3, pp. 159–177, 2017.
  25. K. Ryan, J. Lengyel, and M. Shatruk, “Crystal structure prediction via deep learning,” Journal of the American Chemical Society, vol. 140, no. 32, pp. 10158–10168, 2018.
  26. F. Brockherde, L. Vogt, L. Li, M. E. Tuckerman, K. Burke, and K. R. Müller, “Bypassing the Kohn-Sham equations with machine learning,” Nature Communications, vol. 8, no. 1, p. 872, 2017.
  27. G. Hegde and R. C. Bowen, “Machine-learned approximations to density functional theory Hamiltonians,” Scientific Reports, vol. 7, no. 1, article 42669, 2017.
  28. B. Kolb, L. C. Lentz, and A. M. Kolpak, “Discovering charge density functionals and structure-property relationships with PROPhet: a general framework for coupling machine learning and first-principles methods,” Scientific Reports, vol. 7, no. 1, p. 1192, 2017.
  29. C. W. Rosenbrock, E. R. Homer, G. Csanyi, and G. L. W. Hart, “Discovering the building blocks of atomic systems using machine learning: application to grain boundaries,” npj Computational Materials, vol. 3, no. 1, p. 29, 2017.
  30. F. A. Faber, L. Hutchison, B. Huang et al., “Prediction errors of molecular machine learning models lower than hybrid DFT error,” Journal of Chemical Theory and Computation, vol. 13, no. 11, pp. 5255–5264, 2017.
  31. G. R. Schleder, A. C. M. Padilha, C. M. Acosta, M. Costa, and A. Fazzio, “From DFT to machine learning: recent approaches to materials science—a review,” Journal of Physics: Materials, 2019.
  32. P. Rowe, G. Csányi, D. Alfè, and A. Michaelides, “Development of a machine learning potential for graphene,” Physical Review B, vol. 97, no. 5, article 054303, 2018.
  33. T. M. Dieb, Z. Hou, and K. Tsuda, “Structure prediction of boron-doped graphene by machine learning,” The Journal of Chemical Physics, vol. 148, no. 24, article 241716, 2018.
  34. M. Fernandez, J. I. Abreu, H. Shi, and A. S. Barnard, “Machine learning prediction of the energy gap of graphene nanoflakes using topological autocorrelation vectors,” ACS Combinatorial Science, vol. 18, no. 11, pp. 661–664, 2016.
  35. M. Fernandez, A. Bilic, and A. S. Barnard, “Machine learning and genetic algorithm prediction of energy differences between electronic calculations of graphene nanoflakes,” Nanotechnology, vol. 28, no. 38, article 38LT03, 2017.
  36. H. Yang, Z. Zhang, J. Zhang, and X. C. Zeng, “Machine learning and artificial neural network prediction of interfacial thermal resistance between graphene and hexagonal boron nitride,” Nanoscale, vol. 10, no. 40, pp. 19092–19099, 2018.
  37. J. M. Soler, E. Artacho, J. D. Gale et al., “The SIESTA method for ab initio order-N materials simulation,” Journal of Physics: Condensed Matter, vol. 14, p. 2745, 2002.
  38. D. M. Ceperley and B. J. Alder, “Ground state of the electron gas by a stochastic method,” Physical Review Letters, vol. 45, no. 7, pp. 566–569, 1980.
  39. N. Troullier and J. L. Martins, “Efficient pseudopotentials for plane-wave calculations,” Physical Review B, vol. 43, no. 3, pp. 1993–2006, 1991.
  40. S. Nissen, “Implementation of a fast artificial neural network library (FANN),” Tech. Rep., Department of Computer Science, University of Copenhagen (DIKU), 2003.
  41. C. Igel and M. Hüsken, “Improving the Rprop learning algorithm,” in Proceedings of the Second International Symposium on Neural Computation, NC’2000, pp. 115–121, ICSC Academic Press, 2000.
  42. M. Riedmiller and H. Braun, “A direct adaptive method for faster backpropagation learning: the RPROP algorithm,” in IEEE International Conference on Neural Networks, pp. 586–591, San Francisco, CA, USA, March-April 1993.

Copyright © 2019 G. A. Nemnes et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
