Janusz Bobulski, Mariusz Kubanek, "Deep Learning for Plastic Waste Classification System", Applied Computational Intelligence and Soft Computing, vol. 2021, Article ID 6626948, 7 pages, 2021. https://doi.org/10.1155/2021/6626948

Deep Learning for Plastic Waste Classification System

Academic Editor: Miin-Shen Yang
Received: 21 Oct 2020
Revised: 24 Mar 2021
Accepted: 26 Apr 2021
Published: 05 May 2021

Abstract

Plastic waste management is a challenge for the whole world. Manual sorting of garbage is a difficult and expensive process, which is why scientists create and study automated sorting methods that increase the efficiency of the recycling process. Plastic waste can be picked automatically from a conveyor belt using image processing and artificial intelligence methods, especially deep learning, to improve the recycling process. Waste segregation techniques and procedures are applied to major groups of materials such as paper, plastic, metal, and glass. However, the biggest challenge is separating different types of material within a group, for example, sorting glass by colour or plastics by type. The issue of plastic garbage is important because only certain types of plastic can be recycled (PET, for instance, can be converted into polyester material). Therefore, we should look for ways to separate this waste. One of the opportunities is the use of deep learning and convolutional neural networks. In household waste, the most problematic components are plastics, and the main types are polyethylene, polypropylene, and polystyrene. The main problem considered in this article is creating an automatic plastic waste segregation method that can separate garbage into the four categories PS, PP, PE-HD, and PET and could be applied in a sorting plant or at home by citizens. We propose a technique that can be applied in portable waste-recognition devices, which would be helpful in solving urban waste problems.

1. Introduction

Waste and the risks associated with it are becoming an increasingly serious problem in environmental protection. Interest in waste management is expanding worldwide, both in the development of technologies to minimize waste quantity and in those related to its disposal and economic use. The main reason for excessive waste generation is irrational materials management. The garbage gathered in landfills may be used as secondary raw materials, the value of which is estimated at a couple of hundred million dollars. 25% of this amount is coal; 35% is zinc, lead, iron, and other metals; and 40% is related to components such as ash, slag, rock waste, aggregates, and others [1]. Limiting the mass of generated waste to a level that ensures balance between raw material, ecological, and sanitary considerations is not possible without extensive synchronization of technologies and the way people live with the formation and operation of an ecological structure in the area. Actions aimed at reducing the amount of waste produced and released into the surroundings should include recycling raw materials, minimizing waste production end to end, using modern low-waste or nonwaste technologies, and replacing traditionally used raw materials [2]. The target system for solving the problem of production waste polluting the natural environment is low-waste and waste-free technologies. Nonwaste technology (NWT) is based on preventing waste and making full, comprehensive use of the raw material. It involves a number of technological processes that lead to total management and, consequently, the elimination of pollution without harmful effects on the environment. The condition here is that waste should not be deposited. The implementation of NWT has its economic justification, because the full use of materials and, consequently, the reduction of the amount of waste allow for increased production and reduced imports of raw materials.
In some cases, it is also possible to reduce the consumption of electricity, heat, or technology by reducing energy-consuming waste treatment processes. The benefits of using nonwaste technology also include reducing material consumption, environmental losses, and operating costs.

Another method to reduce waste is recycling. Its basic task is to maximize the reuse of the same materials while reducing the expenditure on their processing. The recycling process takes place in two areas: the production of goods and the subsequent generation of waste from them. Its premise is to encourage appropriate attitudes among manufacturers, conducive to producing the most recoverable materials, and to create appropriate behavior among consumers. Recycling of waste from used postconsumer products can take place, among other ways, through the secondary use of raw material combined with a change in its condition and composition. For this, it is not enough to sort waste into fractions such as metal, bio, plastic, paper, or glass. Advanced techniques are needed to distinguish the type of material within individual groups, because not all of them are suitable for reuse today. For example, PET is the plastic that is easiest to recover and recycle.

To facilitate the recycling process, worldwide labelling of several types of plastics was introduced as follows:
(i) 01-PET: polyethylene terephthalate
(ii) 02-HDPE: high-density polyethylene
(iii) 03-PVC: polyvinyl chloride
(iv) 04-LDPE: low-density polyethylene
(v) 05-PP: polypropylene
(vi) 06-PS: polystyrene
(vii) 07: other
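The resin identification codes listed above map directly to the class labels used later in the paper. A minimal sketch of such a lookup (the dictionary and function names are illustrative, not from the original system):

```python
# Worldwide resin identification codes, as listed above.
RESIN_CODES = {
    1: ("PET", "polyethylene terephthalate"),
    2: ("HDPE", "high-density polyethylene"),
    3: ("PVC", "polyvinyl chloride"),
    4: ("LDPE", "low-density polyethylene"),
    5: ("PP", "polypropylene"),
    6: ("PS", "polystyrene"),
    7: ("OTHER", "other"),
}

def resin_name(code: int) -> str:
    """Return the abbreviation for a resin identification code (1-7)."""
    abbreviation, _full_name = RESIN_CODES[code]
    return abbreviation
```

For example, `resin_name(1)` returns `"PET"`, the class most relevant to the method proposed here.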

Four types of plastic dominate household waste: PET, HDPE, PP, and PS. Dividing them into individual types of plastic would allow some of them to be reused. One of the options is to use computer image recognition techniques in combination with artificial intelligence. We propose a technique that can be applied in portable waste-recognition devices, which would be helpful in solving urban waste problems. The device could be used both at home and in waste sorting plants; when built from a microcomputer with a microcamera, it presents its results with LEDs, and the user then puts the waste into the correct box manually.

2. Review of Plastic Waste Separation Methods

The process of sorting materials suitable for reprocessing from municipal solid waste is problematic and expensive. First, dry and wet wastes are separated, and electromagnetic techniques are used to sort iron-containing materials. However, one of the visual methods [3, 4] can be used to segregate plastic garbage. In optical sorting, cameras are used to identify different waste fractions based on visual properties, such as colour, shape, or texture. Huang et al. proposed a sorting method that combines a 3D colour camera and a laser beam on a conveyor belt. The method creates triangles over the camera image on the basis of the laser beam, which is why it is called triangulation scanning [5]. Another group of methods is spectral imaging, a combination of spectral reflection measurement technology and computer image processing. These methods use near infrared (NIR), hyperspectral imaging (HSI), and visual image spectroscopy (VIS) [6–8]. The hyperspectral camera acquires images in narrow spectral bands, and another system analyzes the spectroscopic data. Then, the data is preprocessed and reduced using a special algorithm. An array of compressed-air nozzles over the belt pushes the waste into individual containers depending on the decision of the classifier [9, 10]. In spectroscopy-based techniques, light is directed at the plastic waste, and each type of plastic reflects a different range of waves. NIR and laser sensors capture the reflected spectrum and, on this basis, the material is classified. This type of technique was developed by Safavi et al. [11] for identifying PP material in mixed waste. For the classification of PP and PE materials, the HSI method using NIR light (1000–1700 nm) can be used [12, 13]. Principal component analysis (PCA) [14] is used to increase the accuracy of the classification algorithm.
However, an alternative is the quick method of classifying plastics using a fusion of MIR spectroscopy and independent component analysis (ICA) developed by Kassouf et al. [15]. Unfortunately, the presented methods have several significant disadvantages: waste must be ground, which adds cost, and small particles are more difficult to classify. Therefore, a technique without these drawbacks should be developed.

3. Proposed System

A system with a microcomputer dedicated to image processing may be used to identify the type of plastic from which the waste is made. The system we propose uses an RGB camera and a microcomputer with computer vision software to classify plastic garbage. The classifier, implemented as a program, controls the air nozzles that direct the waste to the right container (Figure 1). The software uses image processing techniques for image preprocessing. The key element is the classifier, developed on the basis of convolutional artificial neural networks and deep learning [16], which are used for object classification. In the home version, the device will consist of a Raspberry Pi type microcomputer that recognizes the object, and the user will manually place the rubbish in a specific container. This version can also be used in industry.
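The camera-classifier-nozzle pipeline described above can be sketched as a simple control loop. All function names below are illustrative stubs (the paper does not publish its software), and the dummy classifier stands in for the CNN:

```python
# Hypothetical control loop for the proposed sorting station: grab a frame,
# classify it, and route the item to a container (air nozzle or LED signal).
CLASSES = ["PET", "PE-HD", "PP", "PS"]

def route(class_index: int) -> str:
    """Map a classifier decision to a target container."""
    return f"container-{CLASSES[class_index]}"

def process_frame(classify, frame) -> str:
    """Classify one camera frame and return the target container."""
    return route(classify(frame))

# Example with a dummy classifier that always predicts PET (index 0):
decision = process_frame(lambda frame: 0, frame=None)
```

In the home variant, `route` would light an LED instead of firing a nozzle, and the user would place the item manually.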

4. Convolutional Neural Network

The Convolutional Neural Network (CNN) is a mathematical model of an artificial neural network. The structure of its neurons is created similarly to the structure of the mammalian visual cortex. The local pixel arrangement determines the shape of an object: a CNN first recognizes smaller local patterns in the image and then combines them into more complicated shapes. Convolutional Neural Networks may be an effective solution to the problem of sorting waste because they are very effective at recognizing objects in images. The structure of a CNN usually consists of three types of layers: convolutional, pooling, and fully connected. Convolutional and pooling layers are stacked one after the other, while the fully connected layers generate probabilities of class membership [17, 18]. The structure was chosen experimentally. The programming was done in MATLAB.
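To make the convolution-ReLU-pooling pipeline concrete, here is a toy NumPy sketch of one such stage. It is an illustration of the operations only, not the authors' MATLAB implementation; the image and kernel are arbitrary examples:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zero out negative responses."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A 6x6 ramp image and a diagonal-difference filter as a toy example:
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])
feature_map = max_pool(relu(conv2d_valid(image, kernel)))  # shape (2, 2)
```

Stacking several such conv/pool stages, followed by fully connected layers and a softmax, gives the structures in Tables 1 and 2.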

5. Experiment

When designing the structure of a neural network, the first step is fixing the size of the input image. High resolution results in an increase in the number and duration of calculations, which in turn may overload the computational units and their memory. An additional goal was to develop a structure that can be deployed on a Raspberry Pi type microcomputer; too large a size of processed images would make real-time analysis impossible for it. In turn, a low resolution of the input images will make it difficult or impossible to recognize the object and thus achieve the expected performance. We decided to conduct research with image resolutions of 120 × 120 pixels and 227 × 227 pixels. The next important step was the choice of the number and types of layers of the CNN. Two CNN networks were tested, differing in the number of layers and the size of the convolution filters. The first tested structure (based on the AlexNet network) comprised 23 layers. In this network, the first convolution layer consisted of 64 filters of size 11 × 11. A total of six layers were responsible for encoding the image and then delivering data to the three fully connected layers. This structure for images of 227 × 227 pixels is shown in Table 1.


Table 1. Structure of the first network (based on AlexNet) for images of 227 × 227 pixels.

Number | Name of layer | Parameters
1 | Image input | 227 × 227 × 3
2 | Convolution | 64 filters, size 11 × 11
3 | ReLU |
4 | Cross channel normalization |
5 | Max pooling |
6 | Convolution | 128 filters, size 5 × 5
7 | ReLU |
8 | Cross channel normalization |
9 | Max pooling |
10 | Convolution | 128 filters, size 3 × 3
11 | ReLU |
12 | Convolution | 192 filters, size 3 × 3
13 | ReLU |
14 | Convolution | 128 filters, size 3 × 3
15 | ReLU |
16 | Max pooling |
17 | Fully connected | Inputs 18432, outputs 512
18 | ReLU |
19 | Fully connected | Inputs 512, outputs 1024
20 | ReLU |
21 | Fully connected | Inputs 1024, outputs 4
22 | Softmax |
23 | Classification | 4 classes

The second network (the authors' proposal) contained 15 layers. In this network, the first convolution layer consisted of 64 filters of size 9 × 9. A total of three layers were responsible for encoding the image and then delivering data to the two fully connected layers. This structure for images of 120 × 120 pixels is shown in Table 2.


Table 2. Structure of the second network (the authors' proposal) for images of 120 × 120 pixels.

Number | Name of layer | Parameters
1 | Image input | 120 × 120 × 3
2 | Convolution | 64 filters, size 9 × 9
3 | Max pooling |
4 | ReLU |
5 | Convolution | 64 filters, size 5 × 5
6 | ReLU |
7 | Average pooling |
8 | Convolution | 64 filters, size 5 × 5
9 | ReLU |
10 | Average pooling |
11 | Fully connected | Inputs 10816, outputs 64
12 | ReLU |
13 | Fully connected | Inputs 64, outputs 4
14 | Softmax |
15 | Classification | 4 classes
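The spatial sizes flowing through such networks follow the standard output-size formula for convolution and pooling layers. The paper does not state the strides and padding used, so the sketch below assumes stride 1 and no padding for convolutions and 2 × 2 pooling with stride 2; it illustrates the formula rather than reproducing the exact fully connected input sizes in the tables:

```python
def conv_output_size(n: int, f: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2*padding - f) / stride) + 1 (the standard formula)."""
    return (n + 2 * padding - f) // stride + 1

# A 120-pixel input through a 9x9 convolution (assumed stride 1, no padding):
size_after_conv1 = conv_output_size(120, 9)                 # 112
# Followed by assumed 2x2 max pooling with stride 2:
size_after_pool1 = conv_output_size(size_after_conv1, 2, stride=2)  # 56
```

Multiplying the final spatial size by the number of filters gives the input count of the first fully connected layer.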

In our research, we used a simplified model of the station for object recognition, in which only one piece of waste is in the camera lens. The preparation of input data for the learning and testing phases is a key element. For experiments with deep neural networks, it is necessary to gather a lot of data, a few thousand samples for each identified class. The set of images represented objects in four classes: PET, PE-HD, PS, and PP. The images come from the WaDaBa [19] database, and several samples are shown in Figure 2. The image database contains mostly photos of PET objects, because they are the most common domestic waste being recycled; that is why the individual classes have different numbers of photos. In order to equalize the number of images in each class, we modified the existing images by rotating them: images from the PET class were rotated every 24°; from the PE-HD class, every 6°; from the PS class, every 5°; and from the PP class, every 7°. In this way, we obtained 33,000 images for the PET class, 36,000 images for the PE-HD class, 37,440 images for the PS class, and 33,280 images for the PP class. Different degrees of rotation were used for different classes in order to equalize the number of samples in each category. As the results showed, this dataset proved sufficient to train the CNN correctly.
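The rotation-based augmentation above multiplies the set size by 360 divided by the rotation step (where the step divides 360 evenly; the 7° step of the PP class does not). The original photo counts below are back-calculated from the reported totals and are assumptions, not figures stated in the paper:

```python
def augmented_count(originals: int, step_deg: int) -> int:
    """Images produced by rotating every `step_deg` degrees
    (valid when step_deg divides 360 evenly)."""
    return originals * (360 // step_deg)

# PET: rotating every 24 degrees gives 15 copies per image; assuming
# ~2,200 source photos yields the reported 33,000 PET images.
pet_images = augmented_count(2200, 24)
# PE-HD: every 6 degrees gives 60 copies; ~600 source photos -> 36,000.
pehd_images = augmented_count(600, 6)
# PS: every 5 degrees gives 72 copies; ~520 source photos -> 37,440.
ps_images = augmented_count(520, 5)
```

Choosing a smaller step for the rarer classes is what equalizes the class sizes.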

6. Results and Discussion

6.1. Training and Validation

The research consisted of training the prepared networks and determining classification accuracy for different divisions of the input data into training and test sets. Four splits were prepared: 90% (training data)–10% (test data), 80%–20%, 70%–30%, and 60%–40% (Table 3).


Table 3. Division of the data into training/test images for each class.

Number | Division | PE-HD | PET | PP | PS
1 | 90%–10% | 32400/3600 | 29700/3300 | 29952/3328 | 33696/3744
2 | 80%–20% | 28800/7200 | 26400/6600 | 26624/6656 | 29952/7488
3 | 70%–30% | 25200/10800 | 23100/9900 | 23296/9984 | 26208/11232
4 | 60%–40% | 21600/14400 | 19800/13200 | 19968/13312 | 22464/14976
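The counts in Table 3 follow directly from the class totals and the split fractions, as this small helper shows (the function name is illustrative):

```python
def split_counts(total: int, train_fraction: float) -> tuple:
    """Number of training and test images for a given split."""
    train = round(total * train_fraction)
    return train, total - train

# PE-HD class (36,000 images) at the 90%-10% split from Table 3:
pehd_split = split_counts(36000, 0.9)   # (32400, 3600)
# PP class (33,280 images) at the 80%-20% split:
pp_split = split_counts(33280, 0.8)     # (26624, 6656)
```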

The network learning process was conducted with the sets of data described above. Training was carried out for the two structures, with two input image resolutions, 120 × 120 and 227 × 227 pixels. The smaller images were created with an image resizing function, which also reduced the amount of detail in the images and consequently the number of features. Learning was carried out with a variable learning rate, starting from 0.001 and decreasing every 4 epochs, with a fixed 1064 iterations per epoch. Experiments showed that the best accuracy and loss values in subsequent iterations were obtained for the 90%–10% partition and an input image resolution of 120 × 120 pixels. The charts were made after 10 epochs.
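The step learning-rate schedule described above can be sketched as follows. The paper gives only the initial rate (0.001) and the drop interval (every 4 epochs); the decay factor of 0.1 is an assumption shown purely for illustration:

```python
def learning_rate(epoch: int, initial: float = 0.001,
                  drop_every: int = 4, factor: float = 0.1) -> float:
    """Step schedule: multiply by `factor` every `drop_every` epochs.
    The factor 0.1 is an assumed value, not stated in the paper."""
    return initial * factor ** (epoch // drop_every)

# Epochs 0-3 use 0.001; epochs 4-7 use 0.0001; epochs 8-9 use 0.00001.
rates = [learning_rate(e) for e in range(10)]
```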

Tables 4–7 present the tests conducted for the networks mentioned above.


Table 4. Training results of the 15-layer network for 120 × 120 pixel images (accuracy in % / time in min).

Number | Division | 2 epochs | 4 epochs | 10 epochs
1 | 90%–10% | 93.27 / 29 | 97.43 / 61 | 99.92 / 217
2 | 80%–20% | 92.71 / 27 | 96.97 / 57 | 98.69 / 203
3 | 70%–30% | 90.74 / 24 | 93.68 / 52 | 97.78 / 184
4 | 60%–40% | 86.57 / 20 | 90.25 / 49 | 92.77 / 167


Table 5. Training results of the 15-layer network for 227 × 227 pixel images (accuracy in % / time in min).

Number | Division | 2 epochs | 4 epochs | 10 epochs
1 | 90%–10% | 69.43 / 79 | 80.25 / 183 | 91.72 / 540
2 | 80%–20% | 66.76 / 76 | 77.80 / 174 | 88.34 / 527
3 | 70%–30% | 63.89 / 70 | 73.69 / 171 | 84.64 / 504
4 | 60%–40% | 60.70 / 69 | 70.45 / 159 | 80.23 / 498


Table 6. Training results of the 23-layer network for 120 × 120 pixel images (accuracy in % / time in min).

Number | Division | 2 epochs | 4 epochs | 10 epochs
1 | 90%–10% | 73.29 / 63 | 92.31 / 125 | 96.41 / 364
2 | 80%–20% | 70.08 / 61 | 90.83 / 121 | 93.39 / 347
3 | 70%–30% | 67.13 / 57 | 86.24 / 114 | 90.29 / 311
4 | 60%–40% | 62.44 / 55 | 83.04 / 106 | 88.46 / 301


Table 7. Training results of the 23-layer network for 227 × 227 pixel images (accuracy in % / time in min).

Number | Division | 2 epochs | 4 epochs | 10 epochs
1 | 90%–10% | 62.38 / 73 | 86.83 / 214 | 99.23 / 725
2 | 80%–20% | 60.44 / 71 | 84.21 / 199 | 97.51 / 707
3 | 70%–30% | 59.21 / 69 | 80.49 / 192 | 97.92 / 642
4 | 60%–40% | 58.94 / 64 | 72.84 / 165 | 93.45 / 549

Analyzing the results of the experiments, it can be seen that, in the case of our 15-layer network and 120 × 120 images, 4 epochs are enough to obtain a satisfactory level. Further training, even with a lower learning rate, does not significantly improve accuracy. The accuracy of 97.43% achieved after 4 epochs is a good result; further learning up to the tenth epoch increases efficiency to almost 100%. In the case of images of 227 × 227 pixels, the computation time more than doubled and accuracy reached only 91.72%, which is not acceptable for a system working in a real environment [19–21].

In the case of the 23-layer network, the learning process was different. This network achieved an accuracy of 99.23% for the first data split with images of 227 × 227 pixels. Unfortunately, the learning time of 725 minutes, compared to 217 minutes for the 15-layer network, makes the retraining process impractical. This result is not good if the system is to be operated in a real environment. For images of 120 × 120 pixels, this network after 10 epochs achieved an accuracy about 3% lower than the smaller network (Figures 3 and 4).

6.2. Testing

The experiment was carried out on the WaDaBa database [19]. We used 5 sets of data with 2000 images each, i.e., ten thousand images in total. To thoroughly verify the correctness of the proposed method, we used cross-validation in the testing process. The data was divided into 5 parts: four parts were used for training and the fifth for testing. In the individual sets A, B, C, D, and E, the parts were exchanged so that each of them was used once for testing and the remaining ones for training. The experiments performed with the proposed method achieved an average efficiency of 74%, with FRR = 10% and FAR = 16% (Table 8). These results are a preliminary step in the development of a waste selection method based on image processing techniques. Analyzing the current state of the art in this field, we did not find solutions of this type: the review of existing methods shows that they are not used for the automatic selection of whole waste items, but only of shredded particles, which is expensive.
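The 5-fold cross-validation partitioning described above can be sketched as follows; set sizes follow the paper (10,000 images, 5 parts of 2,000), while the function name and index-based representation are illustrative:

```python
def five_fold_indices(n_items: int, n_folds: int = 5):
    """Partition item indices into n_folds equal parts and return
    (train, test) index lists, each part serving once as the test set."""
    fold_size = n_items // n_folds
    folds = [list(range(i * fold_size, (i + 1) * fold_size))
             for i in range(n_folds)]
    splits = []
    for test_fold in range(n_folds):
        test = folds[test_fold]
        train = [i for f in range(n_folds) if f != test_fold
                 for i in folds[f]]
        splits.append((train, test))
    return splits

splits = five_fold_indices(10000)
# Each of the 5 splits: 8,000 training indices and 2,000 test indices,
# corresponding to datasets A-E in Table 8.
```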


Table 8. Cross-validation results.

Dataset | Accuracy (%)
A | 72
B | 75
C | 70
D | 76
E | 77
Average | 74

7. Conclusion and Future Works

The results of the experiment show that our 15-layer network achieves better performance for images of 120 × 120 pixels than the 23-layer network does for 227 × 227 pixels. An additional advantage of our solution is its shorter network learning time. The proposed 15-layer network turned out to be the better structure due to its better generalizing properties, which translates into the use of fewer features for recognition. Therefore, it is possible to use smaller image sizes, which contain more useful features and less noise. Compared to other convolutional neural networks (Table 9), our network is less effective. However, it has far fewer parameters, which is a big advantage for implementation on mobile devices such as the Raspberry Pi platform.


Table 9. Comparison with other convolutional neural networks.

CNN | Accuracy (%) | Number of parameters (millions)
Our | 74 | 4
AlexNet | 72 | 222
MobileNet v.1 | 80 | 6
MobileNet v.2 | 86 | 8

The classification of waste into four classes is in most cases at a good level. Further work will extend the waste image database to include waste images taken under more realistic conditions, as well as other types of waste.

We also plan more detailed research that takes into account changes in the learning hyperparameters and various types of filters.

Research results in Europe show that the investment outlays for obtaining primary raw materials are much higher than the outlays incurred for using secondary raw materials obtained from production or postconsumer waste. Obtaining and processing recyclable materials also involves lower energy consumption, and such materials can replace traditional energy carriers; for example, municipal and agricultural waste is used to produce biogas or thermal energy. Replacing primary raw materials with secondary ones also reduces the use of materials, eliminates the cost of transporting waste to landfills and maintaining those landfills, shortens the production process, reduces labour input, and thus reduces the cost of production.

Data Availability

The datasets used in this study are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the Polish Ministry of Science and Higher Education under the "Regional Initiative of Excellence" programme in the years 2019–2022 (Project no. 020/RID/2018/19), with financing of 12,000,000 PLN.

References

  1. H. I. Abdel-Shafy and M. S. M. Mansour, "Solid waste issue: sources, composition, disposal, recycling, and valorization," Egyptian Journal of Petroleum, vol. 27, no. 4, pp. 1275–1290, 2018.
  2. J. Radziewicz, "Problemy gospodarki odpadami w Polsce," 2019, https://rme.cbr.net.pl/index.php/archiwum-rme/13-nr-42/ekologia-i-srodowisko/12-problemy-gospodarki-odpadami-w-polsce.
  3. S. P. Gundupalli, S. Hait, and A. Thakur, "A review on automated sorting of source-separated municipal solid waste for recycling," Waste Management, vol. 60, pp. 56–74, 2017.
  4. F. Pita and A. Castilho, "Influence of shape and size of the particles on jigging separation of plastics mixture," Waste Management, vol. 48, pp. 89–94, 2016.
  5. J. Huang, T. Pretz, and Z. Bian, "Intelligent solid waste processing using optical sensor based sorting technology," Image and Signal Processing (CISP), vol. 4, pp. 1657–1661, 2010.
  6. S. Pieber, M. Meirhofer, A. Ragossnig, L. Brooks, R. Pomberger, and A. Curtis, "Advanced waste-splitting by sensor based sorting on the example of the MTPlant oberlaa," Tagungsband Zur, vol. 10, pp. 695–698, 2010.
  7. I. Vegas, K. Broos, P. Nielsen, O. Lambertz, and A. Lisbona, "Upgrading the quality of mixed recycled aggregates from construction and demolition waste by using near-infrared sorting technology," Construction and Building Materials, vol. 75, pp. 121–128, 2015.
  8. K. Cpałka, "Case study: interpretability of fuzzy systems applied to nonlinear modelling and control," in Design of Interpretable Fuzzy Systems, Studies in Computational Intelligence, Springer, Cham, Switzerland, 2017.
  9. A. Picón, O. Ghita, A. Bereciartua, J. Echazarra, P. F. Whelan, and P. M. Iriondo, "Real-time hyperspectral processing for automatic nonferrous material sorting," Journal of Electronic Imaging, vol. 21, no. 1, pp. 1–10, 2012.
  10. P. Tatzer, M. Wolf, and T. Panner, "Industrial application for inline material sorting using hyperspectral imaging in the NIR range," Real-Time Imaging, vol. 11, no. 2, 2005.
  11. S. M. Safavi, H. Masoumi, S. S. Mirian, and M. Tabrizchi, "Sorting of polypropylene resins by color in MSW using visible reflectance spectroscopy," Waste Management, vol. 30, no. 11, pp. 2216–2222, 2010.
  12. S. Serranti, A. Gargiulo, G. Bonifazi, A. Toldy, S. Patachia, and R. Buican, "The utilization of hyperspectral imaging for impurities detection in secondary plastics," The Open Waste Management Journal, vol. 3, no. 1, pp. 56–70, 2010.
  13. S. Serranti, A. Gargiulo, and G. Bonifazi, "Characterization of post-consumer polyolefin wastes by hyperspectral imaging for quality control in recycling processes," Waste Management, vol. 31, no. 11, pp. 2217–2227, 2011.
  14. S. Serranti, A. Gargiulo, and G. Bonifazi, "Classification of polyolefins from building and construction waste using NIR hyperspectral imaging system," Resources, Conservation and Recycling, vol. 61, pp. 52–58, 2012.
  15. A. Kassouf, J. Maalouly, D. N. Rutledge, H. Chebib, and V. Ducruet, "Rapid discrimination of plastic packaging materials using MIR spectroscopy coupled with independent components analysis (ICA)," Waste Management, vol. 34, no. 11, pp. 2131–2138, 2014.
  16. M. Wang, Z. Wang, and J. Li, "Deep convolutional neural network applies to face recognition in small and medium databases," in Proceedings of the 4th International Conference on Systems and Informatics, ICSAI, pp. 1368–1372, Nanjing, China, 2018.
  17. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, The MIT Press, Cambridge, MA, USA, 2016.
  18. D. Frejlichowski, K. Gościewska, P. Forczmański, and R. Hofman, "Application of foreground object patterns analysis for event detection in an innovative video surveillance system," Pattern Analysis and Applications, vol. 18, no. 3, pp. 473–484, 2015.
  19. J. Bobulski and J. Piatkowski, "PET waste classification method and plastic waste database - WaDaBa," Image Processing and Communications Challenges 9, vol. 681, pp. 57–64, 2018.
  20. J. Bobulski and M. Kubanek, "Waste classification system using image processing and convolutional neural networks," Advances in Computational Intelligence, vol. 11057, pp. 350–361, 2019.
  21. J. Bobulski and M. Kubanek, "CNN use for plastic garbage classification method," in Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, New York, NY, USA, 2019.

Copyright © 2021 Janusz Bobulski and Mariusz Kubanek. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
