Journal of Sensors
Volume 2019, Article ID 1519753, 8 pages
Research Article

Portable Multispectral System Based on Color Detector for the Analysis of Homogeneous Surfaces

1Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain
2Department of Signal Theory and Communications, University Carlos III, 28911 Madrid, Spain
3Department of Analytical Chemistry, University of Granada, 18071 Granada, Spain

Correspondence should be addressed to A. Martínez-Olmos; amartinez@ugr.es

Received 4 September 2018; Revised 24 October 2018; Accepted 28 November 2018; Published 10 January 2019

Academic Editor: Harith Ahmad

Copyright © 2019 A. Martínez-Olmos et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In this work, a compact, affordable, and portable spectral imaging system is presented. The system is intended for general applications, such as material classification or, together with colorimetric sensors, determination of the concentration of chemical species. The imaging device is reduced to a small digital color detector. This device quantifies the incident emission in the form of four digital words corresponding to its averaged blue, green, red, and near-infrared components. In this way, the size of the image is reduced to one pixel. The wavelength selection is carried out by means of an LED array arranged around the color detector. The LEDs are selected to cover the wavelength range from 360 to 890 nm. A sequential measurement protocol is followed, and the generated data is transmitted via a Bluetooth link to an external portable device, where a classification protocol is implemented in a custom-developed Android™ application. The presented system has been applied in three different scenarios involving material classification, meat freshness monitoring, and chemical analysis. The analysis of the data using principal component analysis shows that it is possible to find a set of wavelengths where the classification of the samples is optimal.

1. Introduction

Imaging spectrometry was first defined by Goetz et al. in 1985 [1] “as a major advance in remote sensing which consists of the acquisition of images in many narrow contiguous spectral bands throughout the visible and solar-reflected infrared spectral bands simultaneously”. With this technique it is possible to acquire a complete reflectance spectrum for each picture element (pixel) in the image, which can be regarded as a cube of information [2]. The term hyperspectral refers to the multidimensional character of the spectral data set. The technique is known as multispectral imaging when the images are acquired at several discrete wavelengths in the considered range of the spectrum, usually 6 to 10 or even fewer [3, 4]. Imaging spectrometry is usually applied over a wide range of wavelengths, from ultraviolet (UV) at 380 nm to near infrared (NIR) up to 1100 nm. It combines two methodologies, spectroscopy and imaging. The imaging part of this technique provides the intensity at every pixel of the image, whereas the spectrometry side generates a single spectrum for the same element [5]. Spectral imaging was introduced as a technique for remote sensing. Since then, this analysis technique has expanded to many other fields, and nowadays it is applied in a diversity of areas such as agriculture, military, environment, geography, medicine, and nutrition, among others [6–11].

The main components of a typical spectral imaging system are the lighting system, the focusing optics, the detection system, and the wavelength selector. The design of the illumination system is mostly determined by the method for wavelength selection. Early studies on multispectral imaging systems employed a monochrome digital camera to collect the reflected light of the previously filtered illumination [12]. Other systems filter or disperse the reflected light to select the desired wavelength [13, 14]. More recently, wavelength selection has been achieved directly within the illumination source by illuminating the sample with quasimonochromatic light. Programmable light sources are commonly used in these kinds of systems [15]. Nowadays, the availability of very efficient light-emitting diodes (LEDs), which are remarkably stable compared with white light sources, has enabled the development of LED-based multispectral imaging systems [16, 17]. Conventional spectral imaging systems include complicated illumination sources, delicate optics, and high-resolution cameras [18–21], so they tend to be expensive and fragile laboratory equipment. Moreover, the system control and the data collection and processing are usually implemented on a personal computer (PC).

In a previous work, the authors presented a compact multispectral system based on a Raspberry Pi module [22]. The low cost and compact size of this camera module compared with others on the market, as well as the processing capacity of the Raspberry Pi in a compact and portable instrument, make this novel system suitable for a variety of applications. Following the same design criteria, that is, portability, low cost, and a wide range of applications, a new prototype is developed and presented here. The novelty of this instrument lies in the image detector employed and in the classification software implemented as an Android™ application. While the previous system, as is the case in most of the described multispectral systems, was based on a camera for the image acquisition, a simple digital color detector is used in this work. This device provides the red, green, blue, and infrared (R, G, B, and IR) components of the light incident on a small active area in the form of 16-bit digital words. This is considered an image of only one pixel, which can be representative of an extensive and homogeneous area, meaning that the texture and color of the sample are constant over the whole surface. In this way, the complexity of the system and of the generated data is reduced, and the color depth is improved in comparison with the images generated by classical CMOS cameras. These one-pixel images are transmitted to a remote portable device such as a smartphone or a tablet, where the results of the classification process are presented.

The system has been evaluated in three different scenarios. In the first one, a material classification has been carried out to analyze several kinds of white sheets of paper. In the second experiment, a package of fresh pork meat has been monitored for eight days. In the third experiment, the instrument has been applied to measure the concentration of potassium in water solutions at pH 9. With a simple dimensionality reduction technique (principal component analysis) combined with a standard low-complexity classification tool (support vector machine), it is possible to determine at which wavelengths the samples can be separated with high precision. The classification algorithm has been implemented as an Android application to be used in a smart device that communicates with the developed instrument via a Bluetooth protocol.

2. System Description

As explained in the previous section, the presented system has been developed as a low-cost, portable multispectral imaging system in which the imaging device is a digital color detector that generates one-pixel images. These data are transmitted through a Bluetooth protocol to a portable device (smartphone or tablet) or through a USB connection to a computer that completes the system. In both cases, the data are processed and presented to the user. The scheme of the developed system, which is described in detail in the following sections, is presented in Figure 1.

Figure 1: Scheme of the presented system.
2.1. Instrument Design

The portable instrument is shown in the picture of Figure 2(a). The printed circuit board is enclosed in a black box to avoid interference from external illumination. This box has an aperture facing the sensing area of the instrument, composed of the imaging device and the light source. The instrument is placed directly on the surface of the sample to be analyzed; in this way, the distance from the sample to the LED array and the imaging device is always the same, as depicted in Figure 2(b).

Figure 2: Portable instrument (a) and measurement disposition (b).

The imaging device is the color detector model S11059-02DT (Hamamatsu Photonics K.K., Japan), which is an I2C interface-compatible digital detector sensitive to red (575 to 660 nm), green (455 to 630 nm), blue (400 to 540 nm), and near-infrared (700 to 885 nm) radiation. The incident light is directly encoded into words of 16 bits of resolution. The sensitivity and integration time can be adjusted so that light measurements can be performed over a wide range. The device has a small photosensitive area.
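The detector's four channels are read out as eight consecutive data bytes, two per channel. The sketch below illustrates how such a buffer can be unpacked into the four 16-bit words; the channel order (R, G, B, IR) and most-significant-byte-first ordering are assumptions that should be checked against the datasheet, and bus handling and register addresses are omitted.

```python
def decode_s11059(raw: bytes) -> dict:
    """Decode the 8 output data bytes of the S11059-02DT into four
    16-bit channel counts. Channel order R, G, B, IR with the high
    byte first is assumed here; verify against the datasheet."""
    if len(raw) != 8:
        raise ValueError("expected 8 bytes (4 channels x 2 bytes)")
    channels = ("R", "G", "B", "IR")
    return {ch: (raw[2 * i] << 8) | raw[2 * i + 1]
            for i, ch in enumerate(channels)}
```

Each returned value is one component of the one-pixel image described above.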

The light source consists of an LED matrix surrounding the color detector. The selected models are the following: VLMU1610-365-135, VLMU3100 (Vishay Intertechnology Inc., USA), LD MSVG-JGLH-46-1 (OSRAM Opto Semiconductors GmbH, Germany), AA3021ZGSK, APTR3216PGW, APT2012NW, APT2012SRCPRV (Kingbright Electronic Co., Ltd., China), SML-LX15HC-RP-TR (Lumex Inc., USA), VSMG2700, and VSMF3710 (Vishay Intertechnology Inc.). The emission peaks of these LEDs are, respectively, 367, 405, 455, 515, 555, 610, 655, 700, 830, and 890 nm, as can be observed in Figure 3(a), where the normalized emission spectra of the LEDs are depicted.

Figure 3: Normalized emission spectra of the LEDs (a) and scheme of the distribution of the LEDs (b).

There are two LEDs of each model in the array, placed in symmetrical positions as presented in Figure 3(b). In this way, the color detector is always placed in the middle of every pair of LEDs of the same model. The distribution of the array in couples of LEDs is aimed at generating a uniform irradiance distribution on the sample. It is known that for a two-LED array the irradiance is homogeneous at a distance z if the LEDs are separated by at most a distance d_max given by [23, 24]

d_max = sqrt(4 / (m + 3)) · z,    (1)

where m is a number that depends on the relative position of the LED-emitting region with respect to the curvature center of the spherical encapsulant. The value of m is determined by the angle θ1/2, defined as the view angle at which the irradiance is half of its value at 0° (a value typically provided by the manufacturer):

m = −ln 2 / ln(cos θ1/2).    (2)

In the presented work, the distance to the sample is determined by the height of the box (see Figure 2(b)), which is z = 3 cm. For the selected LEDs, the angle θ1/2 ranges from 60 to 70°; therefore, the value of m obtained from equation (2) varies from 0.65 to 1. This range leads to a maximum distance between LEDs of 3 to 3.14 cm for a distance to the sample of z = 3 cm. In the design of the LED matrix of Figure 3(b), the separation between the two LEDs of each couple ranges from 2.2 cm in the middle of the square to 3 cm from corner to corner. These values guarantee that the irradiance on the sample is homogeneous. The wavelength selection is carried out by activating the LEDs sequentially in order of increasing wavelength.
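The relation between the half-angle θ1/2, the parameter m, and the maximum LED separation can be checked numerically. The short sketch below (plain Python, angle in degrees) reproduces the quoted values: m ≈ 1 at θ1/2 = 60° and m ≈ 0.65 at 70°, giving a maximum separation between 3 and 3.14 cm at z = 3 cm.

```python
import math

def m_from_half_angle(theta_half_deg: float) -> float:
    # m = -ln(2) / ln(cos(theta_1/2)), per the uniform-irradiance model [23]
    return -math.log(2.0) / math.log(math.cos(math.radians(theta_half_deg)))

def d_max(m: float, z_cm: float) -> float:
    # maximum LED separation for homogeneous irradiance at distance z
    return math.sqrt(4.0 / (m + 3.0)) * z_cm

for theta in (60.0, 70.0):
    m = m_from_half_angle(theta)
    print(f"theta_1/2 = {theta} deg: m = {m:.2f}, d_max = {d_max(m, 3.0):.2f} cm")
```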

All the LEDs have been biased at their typical forward voltage by applying 5 V to the series combination of each LED and a 220 Ω resistor. In this situation, the LEDs showed a stable radiance with an intensity dispersion below 0.1%. This configuration produces different intensities for different LED models. In order to generate a uniform response of the system, the integration time for each wavelength is defined so that the product of the emission intensity and the integration time is constant. In this way, the integration time associated with a wavelength with a low-intensity emission is higher than the time corresponding to a wavelength generated by a high-intensity LED. These integration times range from 35 to 700 ms.
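The constant intensity–time product reduces to a simple rule: each LED's integration time is inversely proportional to its relative emission intensity, with the brightest LED assigned the shortest time. A minimal sketch, using hypothetical relative intensities (the real per-model values are not listed in the text):

```python
def integration_times(intensities, t_min=35.0):
    """Return an integration time (ms) per LED such that the product
    intensity * time is the same for all LEDs, with the brightest LED
    assigned the minimum time t_min."""
    brightest = max(intensities)
    return [t_min * brightest / i for i in intensities]

# Hypothetical relative intensities: a 20:1 spread reproduces the
# 35-700 ms range reported for the instrument.
times = integration_times([20.0, 10.0, 1.0])
```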

The microcontroller used in this design is the model PIC18F2550 (Microchip Technology Inc., USA). This device integrates a full speed USB 2.0 (12 Mbit/s) interface that has been used for calibration purposes. The microcontroller activates the LEDs of the light source and receives the output of the digital color detector. It transmits these results to an external computer through the USB port or to a remote device via a Bluetooth connection. To implement this wireless communication, a Bluetooth module RN-42 (Microchip Technology Inc.) is included in the design. This module communicates with the microcontroller through a two-wire serial protocol and integrates a small antenna.

2.2. Android Application

A user-friendly Android application was developed to use a smartphone as the external reader and processing unit of the multispectral imaging device. The instrument is connected to the smartphone for bidirectional data transmission using the Bluetooth interface. The chosen integrated development environment (IDE) to code the application was Android Studio 3.1.3. The application was designed and tested against API 24 (Android 7.0) using a Samsung smartphone model Galaxy S7, although it supports different Android versions as the lowest API level compatible with the application is API 18 (Android 4.3).

The application user interface consists of two screens. The first one, shown in Figure 4(a), allows a generic use of the multispectral imaging device. The user can choose between a sequential measurement through all the wavelengths or the selection of a particular wavelength by means of a slider control. After the measurement is done, the results of the R, G, B, and IR components for each wavelength can be consulted in a plain text report that is saved in the internal memory of the smartphone. Figure 4(b) shows the second user interface of the application, where the user can choose among the three different classification scenarios in which the system has been evaluated. The processing algorithm is changed according to the chosen scenario, and the final results of the measurement are directly displayed on the screen.

Figure 4: User interfaces of the developed Android multispectral imaging application for (a) a generic use of the instrument and (b) a specific scenario-based use.

3. Results and Discussion

The presented system has been applied in three different scenarios with very diverse objectives. The aim is to prove the applicability of this instrument in a wide range of fields.

Automatic classification is one of the main applications of spectral imaging [25, 26]. In the first experiment, the system has been used to analyze several kinds of white paper with the objective of developing a classification algorithm from the generated data (Scenario I).

In the second experiment, a package of fresh pork meat has been monitored for 8 days while it was stored at 4°C (Scenario II). It is known that packaged meat is affected by microbial activity, which is mainly responsible for food spoilage [27]. Although the external appearance of the meat might not be altered to the naked eye, this bacterial growth, which increases with the storage time, produces quality degradation [28]. The objective of this experiment is to analyze the packaged meat daily in order to develop a prediction algorithm able to estimate the storage time of packaged meat and, therefore, the status of the content.

In the third experiment, the instrument has been applied to measure the concentration of potassium in water solutions at pH 9 (Scenario III). For this purpose, a potassium-sensitive membrane has been used. The reagents used to prepare the sensing membrane were 0.8 mg of dibenzo-18-crown-6 ether (DB18C6) as ionophore, 1.3 mg of 1,2-benzo-7-(diethylamino)-3-(octadecanoylimino)phenoxazine (lipophilized Nile Blue) as lipophilic pH indicator, 63.0 mg of o-nitrophenyl octyl ether (NPOE) as plasticizer, 1.1 mg of potassium tetrakis(4-chlorophenyl)borate (TCPB) as lipophilic salt, and 26.0 mg of polyvinyl chloride (PVC) as polymer, all dissolved in 1 mL of tetrahydrofuran (THF) [29, 30]. Once dissolved, 60 μL of the sensing cocktail is deposited by spin coating on a Mylar sheet, obtaining in this way a round-shaped sensing membrane whose color changes depending on the potassium concentration when it is introduced for 3 minutes in the sample, working in the range from 10^−5 to 0.95 M of potassium.

In each of the three scenarios, each measurement is a four-dimensional vector x_λ, where λ is the LED emission wavelength. We consider ten different values of the emission wavelength (see Figure 3(a)) and a total of six measurements per wavelength. Figures 5(a)–5(c) show the resulting data projected onto a two-dimensional space using principal component analysis (PCA) for representative wavelengths at which the data is visually separated among classes: Figure 5(a) corresponds to Scenario I, Figure 5(b) to Scenario II, and Figure 5(c) to Scenario III. The data has been standardized to have zero mean. In all cases, data points corresponding to the same class (paper type, days of storage, or potassium concentration) cluster together, reasonably separated (compared to the cluster variance) from data corresponding to other classes.

Figure 5: Two-dimensional projection with PCA of the dataset in Scenarios I (a), II (b), and III (c) at three wavelengths. Legend in (c) refers to logarithmic pH values.
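The projection used for these plots can be sketched with a zero-mean PCA computed via the SVD. This is an illustrative reimplementation (not the authors' code), where each row of X is one four-dimensional (R, G, B, IR) measurement at a fixed wavelength:

```python
import numpy as np

def pca_2d(X: np.ndarray) -> np.ndarray:
    """Project measurements onto their first two principal components.
    X: (n_samples, 4) array of R, G, B, IR counts for one wavelength."""
    Xc = X - X.mean(axis=0)                    # standardize to zero mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                       # (n_samples, 2) PCA scores
```

Classes that form well-separated clusters in this two-dimensional score space (relative to their variance) indicate a wavelength suitable for classification.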

To assess the performance of a classifier that uses this low-dimensional embedding of the measurements, we compare the average classification test error rate for each wavelength for a linear support vector machine (SVM) [31] when 30% of the data is left out for testing. Each experiment is repeated 1000 times for different train/test partitions. The results are shown by means of a box plot, where the dashed central line shows the median of the empirical distribution of the error along the 1000 repetitions of the experiment, the box boundaries represent the first and third quartiles of the error, and the lines outside the boxes represent the extreme error values in the sample. It can be observed that the error rate heavily depends on the wavelength, but in every case we can find a wavelength for which the error mode is close to zero. Further, only results for Scenarios II and III are presented, as for Scenario I the error rate for all wavelengths is zero (i.e., a linear classifier perfectly separates the available data no matter the choice of the training set). For Scenario II, the minimum average error rate over wavelengths is 2.25%, and for Scenario III it is 0.4%.

Figure 6: Classification performance for Scenario II (a) and Scenario III (b) using a linear SVM classifier trained over 70% of the available data.
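The evaluation protocol described above (repeated random 70/30 splits, linear SVM, test error per split) can be sketched with scikit-learn as follows; the function is an illustrative stand-in, and the real per-wavelength measurements are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def svm_error_rates(X, y, n_repeats=1000, test_size=0.3, seed=0):
    """Test error of a linear SVM over repeated random train/test
    splits, as computed per wavelength for the box plots."""
    rng = np.random.RandomState(seed)
    errors = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=rng)
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        errors.append(1.0 - clf.score(X_te, y_te))
    return np.array(errors)  # summarize with median/quartiles for a box plot
```

The empirical distribution of the returned errors is what the box plots of Figure 6 summarize.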

Further, the normalized inverse distance to the SVM decision boundary can be used as an approximation to the classifier’s probabilistic confidence [32]. In the experiments summarized in Figure 6, it is numerically observed that the confidence in the right class is typically 10 times larger than for the rest of the classes. For instance, in Scenario II, the average confidence in the right class is 0.567, while for the rest of the classes the average confidence is only 0.042.

The results described above demonstrate that the proposed portable multispectral imaging system is able to provide discriminative measurements using only four light bands, and hence it can serve as a cheap, low-complexity detection device for a wide set of potential industrial applications.

The previous experiments have been repeated using the CMOS camera of a smartphone instead of the color detector, maintaining the same geometry presented in Figure 2. Since the analyzed samples are homogeneous in texture and color, only one pixel of the photograph provided by the camera is considered. In this setting, the only difference between the two sets of experiments, the one carried out with the color detector and the one with the CMOS camera, is the color depth (16 bits for the color detector and 8 bits for the CMOS camera). The prediction errors generated with both systems are presented in Table 1. As can be observed, the error rates obtained with the CMOS camera are much higher than those obtained with the original system.

Table 1: Comparison of prediction error rate in the 3 different scenarios when the analysis is carried out using the color detector and the CMOS camera.

4. Conclusions

In this work, we have presented a prototype of a general-purpose multispectral imaging system. It is based on the use of a high-resolution digital color detector as the imaging device and an LED panel for the wavelength selection. The color detector generates one-pixel images in a four-dimensional space (R, G, B, and IR) that are considered representative of an extensive homogeneous area. These images are transmitted to a remote smart device through a Bluetooth wireless connection. A custom application for the Android operating system has been developed for acquiring and processing the images. This scheme implies a simplification of the traditional multispectral imaging system based on a complex camera, not only in the hardware requirements but also in terms of signal processing. The result is a compact and portable field measuring instrument that is easy to use even for untrained users.

The feasibility of the prototype has been proved in three different scenarios. In the first one, the analysis of different white paper samples has been carried out. In the second one, the external appearance of packaged pork meat has been monitored during 8 consecutive days. In the last experiment, the system has been applied to the determination of potassium concentration by the analysis of a potassium-sensitive membrane. The data generated in each experiment have been studied by multivariate analysis techniques such as principal component analysis, finding that in every case it is possible to select a wavelength for which the samples can be separated with high precision. Therefore, we have proved that the presented system offers broad application possibilities, including automatic sample classification, monitoring of the status of stored food, and colorimetric determination of analytes in solution.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by project CTQ2016-78754-C2-1-R from the Spanish Ministry of Economy and Competitiveness. P. Escobedo wants to thank the Spanish Ministry of Education, Culture and Sport (MECD) for a predoctoral grant (FPU13/05032). This study was partially supported by the Unidad de Excelencia de Química aplicada a biomedicina y medioambiente, Universidad de Granada.


References

  1. A. F. H. Goetz, G. Vane, J. E. Solomon, and B. N. Rock, “Imaging spectrometry for Earth remote sensing,” Science, vol. 228, no. 4704, pp. 1147–1153, 1985.
  2. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: principles and applications,” Cytometry Part A, vol. 69A, no. 8, pp. 735–747, 2006.
  3. F. Kazemzadeh, S. A. Haider, C. Scharfenberger, A. Wong, and D. A. Clausi, “Multispectral stereoscopic imaging device: simultaneous multiview imaging from the visible to the near-infrared,” IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 7, pp. 1871–1873, 2014.
  4. W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” Journal of Food Engineering, vol. 146, pp. 62–71, 2015.
  5. N. Gat, “Imaging spectroscopy using tunable filters: a review,” in Proceedings Volume 4056, Wavelet Applications VII, pp. 50–64, Orlando, FL, USA, April 2000.
  6. Y. Zhao, C. Yi, S. G. Kong, Q. Pan, and Y. Cheng, Multi-Band Polarization Imaging and Applications, Springer-Verlag, Berlin Heidelberg, 2016.
  7. Y. Ma, R. Li, G. Yang, L. Sun, and J. Wang, “A research on the combination strategies of multiple features for hyperspectral remote sensing image classification,” Journal of Sensors, vol. 2018, Article ID 7341973, 14 pages, 2018.
  8. Q. Huang, Q. Chen, H. Li, G. Huang, Q. Ouyang, and J. Zhao, “Non-destructively sensing pork’s freshness indicator using near infrared multispectral imaging technique,” Journal of Food Engineering, vol. 154, pp. 69–75, 2015.
  9. M. S. Braga, R. F. V. V. Jaimes, W. Borysow, O. F. Gomes, and W. J. Salcedo, “Portable multispectral colorimeter for metallic ion detection and classification,” Sensors, vol. 17, no. 8, article 1730, 2017.
  10. H. Steiner, S. Sporrer, A. Kolb, and N. Jung, “Design of an active multispectral SWIR camera system for skin detection and face verification,” Journal of Sensors, vol. 2016, Article ID 9682453, 16 pages, 2016.
  11. H. Liu, S. H. Lee, and J. S. Chahl, “A multispectral 3-D vision system for invertebrate detection on crops,” IEEE Sensors Journal, vol. 17, no. 22, pp. 7502–7515, 2017.
  12. D. Saunders and J. Cupitt, “Image processing at the national gallery: the VASARI project,” National Gallery Technical Bulletin, vol. 14, pp. 72–85, 1993.
  13. H. Liang, “Advances in multispectral and hyperspectral imaging for archaeology and art conservation,” Applied Physics A: Materials Science & Processing, vol. 106, no. 2, pp. 309–323, 2012.
  14. H. Erives and N. B. Targhetta, “Implementation of a 3-D hyperspectral instrument for skin imaging applications,” IEEE Transactions on Instrumentation and Measurement, vol. 58, no. 3, pp. 631–638, 2009.
  15. S. Tominaga, “CIC@20: multispectral imaging,” in Color and Imaging Conference, 20th Color and Imaging Conference Final Program and Proceedings, Society for Imaging Science and Technology, Los Angeles, CA, USA, January 2012.
  16. Y. Gong, D. Zhang, P. Shi, and J. Yan, “High-speed multispectral iris capture system design,” IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 7, pp. 1966–1978, 2012.
  17. R. Shrestha and J. Y. Hardeberg, “Multispectral imaging using LED illumination and an RGB camera,” in Color and Imaging Conference, 21st Color and Imaging Conference Final Program and Proceedings, Society for Imaging Science and Technology, January 2013.
  18. J. Park, M. H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, October 2007.
  19. S. Tominaga and T. Horiuchi, “Spectral imaging by synchronizing capture and illumination,” Journal of the Optical Society of America A, vol. 29, no. 9, pp. 1764–1775, 2012.
  20. T. H. Kim, H. J. Kong, T. H. Kim, and J. S. Shin, “Design and fabrication of a 900–1700 nm hyper-spectral imaging spectrometer,” Optics Communications, vol. 283, no. 3, pp. 355–361, 2010.
  21. M. Zucco, V. Caricato, A. Egidi, and M. Pisani, “A hyperspectral camera in the UVA band,” IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 6, pp. 1425–1430, 2015.
  22. N. López-Ruiz, F. Granados-Ortega, M. A. Carvajal, and A. Martínez-Olmos, “Portable multispectral imaging system based on Raspberry Pi,” Sensor Review, vol. 37, no. 3, pp. 322–329, 2017.
  23. I. Moreno, M. Avendaño-Alejo, and R. I. Tzonchev, “Designing light-emitting diode arrays for uniform near-field irradiance,” Applied Optics, vol. 45, no. 10, pp. 2265–2272, 2006.
  24. A. J.-W. Whang, Y.-Y. Chen, and Y.-T. Teng, “Designing uniform illumination systems by surface-tailored lens and configurations of LED arrays,” Journal of Display Technology, vol. 5, no. 3, pp. 94–103, 2009.
  25. P. Jonsson, J. Casselgren, and B. Thörnberg, “Road surface status classification using spectral analysis of NIR camera images,” IEEE Sensors Journal, vol. 15, no. 3, pp. 1641–1656, 2015.
  26. C. Nansen, M. Kolomiets, and X. Gao, “Considerations regarding the use of hyperspectral imaging data in classifications of food products, exemplified by analysis of maize kernels,” Journal of Agricultural and Food Chemistry, vol. 56, no. 9, pp. 2933–2938, 2008.
  27. Z. Fang, Y. Zhao, R. D. Warner, and S. K. Johnson, “Active and intelligent packaging in meat industry,” Trends in Food Science and Technology, vol. 61, pp. 60–71, 2017.
  28. I. M. Pérez de Vargas-Sansalvador, M. M. Erenas, D. Diamond, B. Quilty, and L. F. Capitan-Vallvey, “Water based-ionic liquid carbon dioxide sensor for applications in the food industry,” Sensors and Actuators B: Chemical, vol. 253, pp. 302–309, 2017.
  29. M. M. Erenas, O. Piñeiro, M. C. Pegalajar, M. P. Cuellar, I. de Orbe-Payá, and L. F. Capitán-Vallvey, “A surface fit approach with a disposable optical tongue for alkaline ion analysis,” Analytica Chimica Acta, vol. 694, no. 1-2, pp. 128–135, 2011.
  30. M. M. Erenas, K. Cantrell, J. Ballesta-Claver, I. De Orbe-Payá, and L. F. Capitán-Vallvey, “Use of digital reflection devices for measurement using hue-based optical sensors,” Sensors and Actuators B: Chemical, vol. 174, pp. 10–17, 2012.
  31. C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, Berlin, Heidelberg, 2006.
  32. Y. Grandvalet, J. Mariethoz, and S. Bengio, “A probabilistic interpretation of SVMs with an application to unbalanced classification,” in Advances in Neural Information Processing Systems 18 (NIPS), MIT Press, 2005.