
Review Article | Open Access

Volume 2021 | Article ID 6242288 | https://doi.org/10.1155/2021/6242288

Karim Ennouri, Slim Smaoui, Yaakoub Gharbi, Manel Cheffi, Olfa Ben Braiek, Monia Ennouri, Mohamed Ali Triki, "Usage of Artificial Intelligence and Remote Sensing as Efficient Devices to Increase Agricultural System Yields", Journal of Food Quality, vol. 2021, Article ID 6242288, 17 pages, 2021. https://doi.org/10.1155/2021/6242288

Usage of Artificial Intelligence and Remote Sensing as Efficient Devices to Increase Agricultural System Yields

Academic Editor: Rijwan Khan
Received: 11 May 2021
Revised: 08 Jun 2021
Accepted: 18 Jun 2021
Published: 28 Jun 2021

Abstract

Artificial Intelligence is an emerging technology in the field of agriculture. Artificial Intelligence-based tools and equipment have taken the agriculture sector to a different level. This new technology has improved crop production and enhanced real-time monitoring, data collection, and processing. The most recent computerized systems using remote sensing and drones have made a significant contribution to the agro-based domain. Moreover, remote sensing has the capability to support the development of farming applications aimed at facing this main challenge, by providing periodic information on crop status throughout the growing season at various scales and for various parameters. Various high-tech, computer-supported systems have been created to determine key factors such as plant detection, yield estimation, and crop quality, among others. This paper reviews the techniques employed for the analysis of collected information in order to enhance productivity, forecast potential threats, and reduce the workload on cultivators.

1. Introduction

In the agricultural sector, many producers are struggling to manage the dangers and risks posed by the use of pesticides on their crops to combat pests and diseases. All of these factors combine to present farmers with new challenges. Because agriculture relies on natural forces for most of its produce and is subject to uncertain rainfall, farmers come under great pressure every year from a shortage of available labour and the growing need to achieve higher yields [1]. This means that agriculture needs to expand substantially in the coming years, and farm efficiency virtually needs to double for farmers to achieve their objectives. Automation in agriculture therefore remains at the forefront of rising issues and concerns worldwide. The population is increasing enormously, and with it the need for food and jobs. Farmers' traditional methods are currently unable to satisfy these demands, so modern automated procedures have been implemented to make operations simpler and more successful [2]. Artificial intelligence can be applied in agriculture through various technological developments, including consultancy services, data analysis, the Internet, and the use of cameras and other sensors. Artificial Intelligence in agriculture will become sufficiently capable to offer improved, predictive insights by studying various sources of data, such as weather, terrain, crop productivity, and temperature [3]. Such Artificial Intelligence-powered technology can help the farming sector grow more crops in the food supply chain and enhance a broad range of agricultural chores. These new approaches have helped to meet rising food requirements and have supplied billions of individuals throughout the system with work opportunities. The application of artificial intelligence in agriculture has protected crop productivity from several pressures (such as population expansion and climate change). In the agricultural sector, there are key challenges:

(i) Every day, decisions about soil preparation, seeding, and harvesting become more challenging for farmers. Agriculture is strongly affected by the variation of climate elements such as temperature, precipitation, and moisture, and increasing pollution and degradation also drive climate change [4]. For farmers, this is a major issue.

(ii) Each plant requires precise soil nourishment. Phosphorus, potassium, and nitrogen are the three basic nutrients needed in soil, and the lack of any of these elements can result in poor crop productivity [5].

(iii) Protection of crops from weeds is also an important task. Besides increasing costs, weeds absorb soil nutrients and, if not controlled, lead to nutrient depletion in the soil [6].

Although many applications of Artificial Intelligence in agriculture are available, there is still limited knowledge of the latest technology worldwide. Artificial Intelligence supports various segments in improving yield and effectiveness, and its results are helping to overcome conventional difficulties in each domain. Similarly, Artificial Intelligence in agriculture assists cultivators in increasing their proficiency and decreasing adverse environmental effects [7]. The farming industry has openly embraced Artificial Intelligence in its practices to improve overall results. Artificial Intelligence helps farmers stay up to date with weather forecasting information in an advanced manner. The forecast information helps farmers increase production and profits without risking the harvest [8]. The analysis of the generated data helps farmers take precautions through understanding and learning with Artificial Intelligence [9]. Implementing such practices helps formulate informed assessments in a timely manner.

Moreover, Artificial Intelligence is an effective way to detect potential defects and nutrient deficiencies in soil. With the image recognition approach, Artificial Intelligence identifies potential defects through images captured by a camera [10]. Deep learning applications are being developed with the assistance of Artificial Intelligence to investigate vegetation patterns in agriculture. Such Artificial Intelligence-enabled applications are helpful in understanding soil deficiencies, plant pests, and diseases [11]. Farmers can use Artificial Intelligence to manage weeds by deploying computer vision, robotics, and machine learning [12]. With the assistance of Artificial Intelligence, data are gathered to keep track of weeds, which helps farmers apply herbicides exactly where the weeds are located [13]. This reduces chemical spraying to the targeted area [14]. As a result, Artificial Intelligence reduces herbicide use in the field relative to the amount of synthetic substances regularly sprayed [15].

Remote sensing has distinctive benefits over other types of environmental measurement techniques [16]. These advantages include the ability to evaluate variables and ground/land characteristics without direct contact with the region of study; the ability to make observations remotely, thereby avoiding risks to the user and lowering field measurement costs; and the ability to revisit a site at any time and acquire repeated data for monitoring and condition assessment [17]. The domains related to remote sensing are numerous, including marine studies, risk assessment, and natural resource management. The technology is continuously progressing and offers the foundation for abundant innovation and development.

Remote sensing refers to the measurement of electromagnetic energy from a given surface with the aid of satellites or aircraft [18]. Spectral sensors can be separated into two categories depending on the number of wavebands in which they measure spectral reflectance: (a) multispectral sensors, which acquire reflectance data in a limited number (from 3 to 10) of broad wavebands, only in the visible and near-infrared spectral regions (from 400 to 1100 nm), with little influence from atmospheric scattering [19]; and (b) hyperspectral sensors, which acquire reflectance data virtually continuously (several hundred wavebands) in the visible to infrared spectral region of the electromagnetic spectrum (from 400 to 2500 nm).

The development of novel technologies, such as high-spatial-resolution and hyperspectral sensors, has made it necessary to develop a range of new techniques, such as multivariate statistical methods, to explore this kind of information [20].

2. Applications of Modern Technologies in Agriculture

Like many industries, agriculture has profited from the effects of technology. Farmers rely on information technology for a variety of tasks, not just farm management. Indeed, the way farmers manage crops and livestock has been altered by information technology [21].

Farmers may employ Cloud computing to improve the management of their crops and businesses. They may develop budgets and operating schedules based on their production plans using some of these programmes. Work plans can be drawn up and progress tracked in relation to the weather prediction. Machine activities and production may be measured with the use of mobile task management systems and data integration techniques [22].

Furthermore, Radio Frequency Identification (RFID) is the technology used for agricultural tracking and security. Livestock, for example, may be tracked using RFID-enabled “livestock tracking tags.” This can be beneficial for tracking cattle on a daily basis, as well as for health monitoring and preserving a database of each animal’s health history. Furthermore, through its security tagging, this technology aids in the reduction of counterfeiting/impure food shipments during crop shipping, particularly certified organic crops [23].

Besides, precision agriculture contains a variety of tools, including data analytics. This is known as “smart farming,” and it is now being used by many food producers to reduce costs and boost yields. The following describes how it works: Crop yields, fertiliser applications, soil mapping, weather events, and animal health are among the data that farm offices gather. Even small producers may collect data from a variety of sources to aid in decision-making that will help them reduce expenses and enhance yields. The use of water sensors, which may be used to plan future crops and water use, is of great importance here. This is especially beneficial in drought regions [24].

The introduction of Artificial Intelligence algorithms into cultivated areas as well as farming products has advanced agriculture. Cognitive information technology in agriculture has become inventive, knowledgeable, and efficient. Artificial Intelligence can also help producers estimate demand by providing data such as historical trends for food commodities, regional staple food preferences, etc. [25]. The scope of Artificial Intelligence in farming is large and can serve, for example, in pesticide spraying via sensors and other devices installed on drones and robots. These technologies help prevent the overuse of pesticides, water, and herbicides; maintain soil fertility; and, at the same time, increase personnel productivity and efficiency while improving quality [26]. Artificial Intelligence-powered solutions present many benefits for the agricultural sector.

2.1. Environmental Challenge Management Using Weather Forecasting

In the growing domain of precision agriculture, weather data play an essential role, supporting controlled and precise cultivation. In the face of environmental issues such as climate change and other risks to agricultural productivity, Artificial Intelligence-powered systems and data enable smart resource allocation, which helps farmers negotiate shifts under changing environmental conditions [27].

2.2. Surveillance System for Soil and Crops

With new solutions and the installation of Internet of Things (IoT) sensors on farmland, farmers can immediately detect the moisture content of the soil and know its chemical structure and composition. These embedded sensors can be configured so that farmers are automatically informed when soil levels of substances such as potassium, nitrogen, and phosphorus, or soil moisture, are insufficient [28]. Remote sensing complemented by a 3D laser scan also helps provide agricultural land plant metrics that ensure crops are grown under the correct soil conditions. Drones additionally play a major role in identifying and quantifying agricultural health issues earlier by offering significant insights into improving production and minimising input costs with professional multispectral cameras and sensors [29].

2.3. Farming and Predictive Analysis

Predictive analytics, enabled by these innovations, gathers the facts and information necessary to decide how production may be improved and which corrective measures to take to attain the objective. Smart agriculture, on the other hand, includes a range of strategies and skills that allow farmers to maximise yield and improve soil fertility. When using these technologies, it becomes possible to intervene properly at the correct time and in the correct location, in order to respond with excellent precision to the specific needs of individual crops and different sections of the farm [30].

2.4. Artificial Intelligence-Enabled System for Agricultural Data Evaluation and Insect Detection

By applying Artificial Intelligence in agriculture, producers can now evaluate a number of things in real time. Sensors can detect the emergence of insects in their territories and determine what sort of insects they are. If the insect is beneficial or harmless, the system takes no action; however, if it is a significant pest or a carrier of a deadly disease, it reports the information via the Cloud. Artificial Intelligence-driven solutions therefore enable producers to optimise their plans to generate greater returns through adequate use of resources, management of crop selection, and much more [31].

2.5. Adequate Irrigation and Sustainable Farming

The growing demand for food has led farmers to improve their productivity using various techniques, resulting in overexploitation of the soil. Pushing yields over time reduces land quality, eventually generating returns too low to cover even the cost of seed. Irrigation is also a labour-intensive process. Automated systems can now leverage Artificial Intelligence and machinery to evaluate soil fertility, historical weather patterns, and seed quality to help farmers manage their water supplies effectively [32]. By planting the optimal crop, minimising water waste, and enhancing yields, the use of Cognitive IoT solutions can contribute to better water management.

3. Application of Remote Sensing in Agriculture and Vegetation Inventory

Strategies used to investigate vegetation characteristics with remote sensing can be separated into physical techniques, empirical techniques, and combinations of both [33]. Generally, physical techniques are founded on radiative transfer theory and simulate plant-light interactions with the aid of simulation models [34]. Empirical techniques depend on the statistical relationship between in situ measured vegetation properties and the vegetation reflectance data [35].

Generally speaking, a practical methodology for discovering empirical relationships between vegetation characteristics and spectral reflectance combines the reflectance data of at least two individual spectral wavebands to derive an indicator called a vegetation index (VI). For example, the Normalized Difference Vegetation Index (NDVI) exploits the low reflectance of vegetation in the red region and its high reflectance in the near-infrared region [36]. NDVI has been used for many years to estimate different vegetation variables, such as biomass and yield, from local to global scales [37, 38]. Likewise, physically based vegetation indicators related to vegetation biophysical characteristics have been developed [39]. Table 1 presents the principal remote sensing vegetation indicators used in the remote estimation of crop vegetation.


Table 1: Principal remote sensing vegetation indicators used in the remote estimation of crop vegetation.

Application | Symbol | Name | Formula | Reference
Assessment of the general state of vegetation | TVI | Triangular Vegetation Index | TVI = 0.5 × [120 × (R750 − R550) − 200 × (R670 − R550)] | [40]
Assessment of the general state of vegetation | GNDVI | Green Normalized Difference Vegetation Index | GNDVI = (R860 − R550)/(R860 + R550) | [41]
Assessment of the amount of photosynthesis | REPI | Red Edge Position Index | REPI = 700 + 40 × {[(R670 + R780)/2 − R700]/(R740 − R700)} | [42]
Assessment of the amount of photosynthesis | CTR2 | Carter | CTR2 = R695/R760 | [43]
Assessment of nitrogen content | NDNI | Normalized Difference Nitrogen Index | NDNI = [LOG(1/R1510) − LOG(1/R1680)]/[LOG(1/R1510) + LOG(1/R1680)] | [44]
Assessment of the amount of light used in photosynthesis | PRI | Photochemical Reflectance Index | PRI = (R531 − R570)/(R531 + R570) | [45]
Assessment of the amount of light used in photosynthesis | ZMI | Zarco-Tejada and Miller Index | ZMI = R750/R710 | [46]
Assessment of the amount of dry biomass | PSRI | Plant Senescence Reflectance Index | PSRI = (R680 − R500)/R750 | [47]
Assessment of the amount of dry biomass | NDLI | Normalized Difference Lignin Index | NDLI = [LOG(1/R1754) − LOG(1/R1680)]/[LOG(1/R1754) + LOG(1/R1680)] | [44]
Assessment of the amount of dry biomass | CAI | Cellulose Absorption Index | CAI = [0.5 × (R2000 + R2200)] − R2100 | [48]
Assessment of water content | WBI | Water Band Index | WBI = R970/R900 | [49]
Assessment of water content | NDWI | Normalized Difference Water Index | NDWI = (R857 − R1241)/(R857 + R1241) | [50]
Assessment of water content | DSWI | Disease Water Stress Index | DSWI = (R802 + R547)/(R1657 + R682) | [51]

Among these indices, the Enhanced Vegetation Index (EVI) is similar to the Normalized Difference Vegetation Index (NDVI) and can be used to quantify vegetation greenness [52, 53]. However, EVI corrects for some atmospheric conditions and canopy background noise and is more sensitive in areas with dense vegetation. In addition, the Soil Adjusted Vegetation Index (SAVI) is structured similarly to the NDVI but includes a soil brightness correction factor [54]. Moreover, the Normalized Difference Red Edge (NDRE) is an index that can only be computed when the red edge band is available on a sensor. It is sensitive to the chlorophyll content of leaves (how green a leaf appears), variability in leaf area, and soil background effects [55].
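As a minimal illustration of how such band-ratio indices are computed, the sketch below derives NDVI, GNDVI, and SAVI from hypothetical red, green, and near-infrared reflectance arrays; the sample values and the soil-brightness factor L = 0.5 are illustrative assumptions, not tied to any particular sensor.

```python
import numpy as np

# Hypothetical per-pixel surface reflectance for three bands (values in 0-1).
red = np.array([0.08, 0.12, 0.30])    # ~660 nm
green = np.array([0.10, 0.11, 0.20])  # ~550 nm
nir = np.array([0.45, 0.40, 0.35])    # ~860 nm

# NDVI: contrast between low red and high near-infrared reflectance of vegetation.
ndvi = (nir - red) / (nir + red)

# GNDVI: same structure as NDVI but using the green band (Table 1).
gndvi = (nir - green) / (nir + green)

# SAVI: NDVI with a soil-brightness correction factor L (0.5 is a common default).
L = 0.5
savi = (1 + L) * (nir - red) / (nir + red + L)

print("NDVI :", ndvi.round(3))
print("GNDVI:", gndvi.round(3))
print("SAVI :", savi.round(3))
```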

Another strategy consists of combining numerous spectral wavebands into a single empirical model using multivariate statistical methods [56, 57]. The empirical models can be further separated into linear (for example, partial least squares regression) and nonlinear (for example, support vector machine) models.

Empirical techniques are computationally fast and summarize local information efficiently; however, they also have some weaknesses [58]. These approaches frequently lack a cause-and-effect basis, making it difficult to transfer a model to a new location, a different time, or a different spectral sensor without systematic recalibration. The restrictions of empirical techniques can be partially overcome by using physical techniques [59]. Nevertheless, physical techniques are computer-intensive, sometimes require several input variables for calibration, and need rigorous parameterisation before they can be employed [60].
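To make the empirical route concrete, the sketch below fits a partial least squares regression (mentioned above as a typical linear empirical model) relating simulated waveband reflectances to a vegetation property; the synthetic data, number of latent components, and variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "spectra": 200 samples x 50 wavebands, plus a vegetation property
# (e.g., biomass) that depends linearly on a few bands, with added noise.
X = rng.uniform(0.0, 0.6, size=(200, 50))
y = 3.0 * X[:, 10] - 1.5 * X[:, 30] + 0.5 * X[:, 45] + rng.normal(0, 0.05, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compress the correlated wavebands into a few latent components, then regress.
pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)

print("R^2 on held-out samples:", round(pls.score(X_test, y_test), 3))
```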

4. Earth Observation Satellite Systems

Earth observation satellites differ in their orbits, the position of the imaging device, the types of data collected, their spectral characteristics, and the swath width of their sensors [61]. These variables are set at the start of the mission and are part of the satellite's design. For instance, to observe weather conditions at a large scale and high frequency, it is suitable for a satellite to be in a geostationary orbit. However, because this orbit lies far above the Earth, it is difficult to attain a high spatial resolution. On the other hand, for applications such as tracking clouds passing over land, a high spatial resolution is not needed [62].

A high-spatial-resolution device would be required for projects that need detailed images of a specific region, such as monitoring a glacial river or inspecting structures damaged by a natural disaster [63]. Such a sensor would usually have a narrow swath and be carried on a satellite in Low Earth Orbit (for example, the QuickBird satellite). In such an orbit, it is not feasible to observe the same area continuously, because of the continuous movement of the satellite around the globe [64]; images can only be acquired when the satellite passes over the area of interest. For instance, Moderate Resolution Imaging Spectroradiometer (MODIS) images have been used to map water bodies at global and regional levels. For local tasks, images supplied by the Enhanced Thematic Mapper Plus (ETM+), the Thematic Mapper (TM), and the Operational Land Imager (OLI) of the Landsat satellite series are widely used [65].

Hui et al. [66] mapped the temporal and spatial changes of a studied lake using multitemporal Landsat TM and ETM+ images. OLI images were used by Du et al. [67] to extract water body maps in subareas. Compared with MODIS, the Landsat TM, ETM+, and OLI images have much higher spatial resolutions (30 meters) and can extract features with greater detail and precision. Table 2 shows the principal spectral properties of Landsat TM/ETM+.


Table 2: Principal spectral properties of Landsat TM/ETM+.

Band | Wavelength (μm) | Principal applications
B-1 | 0.45–0.52 (blue) | Practical for mapping coastal water areas, differentiating between soil and vegetation, forest-type mapping, and detecting cultural features.
B-2 | 0.52–0.60 (green) | Corresponds to the green reflectance of healthy vegetation; also practical for cultural feature identification.
B-3 | 0.63–0.69 (red) | Practical for discriminating between many plant species; also useful for delineating soil and geological boundaries as well as cultural features.
B-4 | 0.76–0.90 (near-infrared) | Especially responsive to the amount of vegetation biomass present in a scene; practical for crop identification and emphasizes soil/crop and land/water contrasts.
B-5 | 1.55–1.75 (mid-infrared) | Sensitive to the amount of water in plants, which is practical in crop drought studies and plant health analyses; one of the few bands that can discriminate between clouds, snow, and ice.
B-6 | 10.4–12.5 (thermal infrared) | Practical for vegetation and crop stress detection, heat intensity, insecticide applications, and locating thermal pollution; can also be used to locate geothermal activity.
B-7 | 2.08–2.35 (mid-infrared) | Important for discriminating geologic rock types and soil boundaries, as well as soil and vegetation moisture content.

Nevertheless, the spatial resolution of Landsat imagery is still too coarse to detect small features satisfactorily [68]. Commercial satellite systems, such as IKONOS, SPOT6, SPOT7, and QuickBird, allow these small entities and bodies to be mapped, although they can be expensive.

In addition, the European Space Agency launched a new satellite system with high optical spatial resolution, known as Sentinel-2, at the end of June 2015. This satellite can provide systematic global acquisitions of high-spatial-resolution multispectral images with a high revisit frequency, satisfying the needs of the next generation of operational products, such as land cover maps, land change detection maps, and geophysical variables [69, 70]. Sentinel-2 imagery is potentially of great value for local mapping of features, given its attractive characteristics (such as a spatial resolution of ten meters for four bands and a ten-day revisit frequency) and freely accessible data. The Sentinel-2 multispectral image has thirteen bands, of which four bands (blue, red, green, and near-infrared) have a spatial resolution of ten meters and six bands have a spatial resolution of twenty meters. Figure 1 illustrates the geospatial analytics process.

4.1. The Electromagnetic Spectrum

Electromagnetic radiation can be described as energy that travels at the speed of light in a harmonic wave pattern. Visible light is only one class of electromagnetic radiation; other categories include radio waves, gamma rays, and infrared rays. Together these make up the electromagnetic spectrum; the different types of electromagnetic radiation differ across the spectrum in frequency and wavelength [72]. Wavelength is the distance between corresponding points of two successive waves, whereas frequency is the number of wave cycles passing the same point in a given time period (1 cycle per second = 1 hertz, or Hz). The numerical relationship between frequency and wavelength is given by the equation c = λ × f, where λ is the wavelength, f is the frequency, and c is the speed of light (constant at 3 × 10^8 m per second, i.e., 300 × 10^3 km per second, in a vacuum). Visible light represents just a small fraction of the electromagnetic spectrum. It ranges in wavelength from 3.9 × 10^−7 m (violet) to 7.5 × 10^−7 m (red), with corresponding frequencies ranging from 7.9 × 10^14 Hz down to 4 × 10^14 Hz.
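As a quick numerical check of the relation above, the following small sketch computes the frequencies corresponding to the violet and red wavelength limits; it uses only plain Python and the rounded value of the speed of light.

```python
# Check of the relation c = wavelength x frequency for the visible-light limits above.
C = 3.0e8  # speed of light in a vacuum, m/s

for name, wavelength_m in [("violet", 3.9e-7), ("red", 7.5e-7)]:
    frequency_hz = C / wavelength_m
    print(f"{name}: f = {frequency_hz:.2e} Hz")
# Prints roughly 7.7e14 Hz for violet and 4.0e14 Hz for red,
# consistent with the frequency range quoted in the text.
```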

In remote sensing, an instrument (i.e., a sensor or scanner) is mounted on a satellite or aeroplane and collects data about specific objects or areas on the land. Generally, the data record the level of electromagnetic energy reflected or emitted by the target. The extent of the geographic area covered depends on the sensor's design requirements and the altitude of the platform on which it is mounted. When electromagnetic radiation comes into contact with any object or material, such as water, trees, or atmospheric gases, a variety of interactions can occur, including the emission, reflection, scattering, or absorption of electromagnetic radiation by the substance, or its transmission through the substance. Usually, remote sensing deals with the recording and interpretation of reflected and emitted electromagnetic radiation. Each object or material has a specific emission and/or reflectance characteristic, jointly known as its spectral signature, which differentiates it from other objects and materials. Remote sensors are designed to collect these spectral signatures. Spectral data can be collected in two forms: analogue, as aerial photographs, or digital, as a two-dimensional matrix (image) of pixels storing the electromagnetic radiation values recorded by a satellite-mounted array [73]. In addition, sensors can be divided into two groups: active and passive sensors. Passive sensors, the most common class of sensors currently in operation globally, measure naturally occurring electromagnetic radiation that is either reflected or emitted from the areas and objects of interest, whereas active sensors, such as microwave systems like radar, transmit artificial electromagnetic radiation toward the features of interest and subsequently record the amount of that radiation returned to the system [74].

4.2. Data Resolutions

A remotely sensed data set is principally characterized by four kinds of resolution:

4.2.1. Temporal Resolution

The temporal resolution indicates how frequently a satellite sensor revisits a target location. It is associated with several factors, including swath overlap, satellite capabilities, and latitude. The time of day or month has a substantial effect on satellite images [75]. Some targets can change rapidly, such as the tides, driven by lunar cycles, that continuously raise and lower the ocean surface, or fruit trees that lose their leaves in winter, which adds a further difficulty in discriminating green foliage precisely.

4.2.2. Spectral Resolution

A sensor's spectral resolution refers to the number of spectral bands (red, blue, green, near-infrared, mid-infrared, thermal, etc.) in which the sensor can record electromagnetic radiation. However, the number of bands is not the only defining attribute of spectral resolution [76]; the position and width of the bands within the electromagnetic spectrum also matter. The sensitivity of a sensor to small changes in electromagnetic energy is likewise significant: the finer a sensor's radiometric resolution, the more capable it is of detecting small differences in reflected or emitted energy.

4.2.3. Spatial Resolution

The spatial resolution refers to the pixel size of satellite images representing the Earth's surface. In aerial imagery, it relates to image quality and the degree to which small objects can be distinguished within the image [77]. The spatial resolution of black-and-white (single-band) airborne images ranges from 40 to 800 line pairs per millimetre. The higher the resolution of a sensing system, the more effectively the outline of objects on the ground can be observed. The spatial resolution of an image depends on:

(i) the image scale factor: spatial resolution decreases as the scale factor increases
(ii) the quality of the optical system
(iii) the configuration of the photographic system
(iv) the contrast of individual entities and objects
(v) atmospheric scattering effects, which can reduce resolution and contrast
(vi) image motion: relative movement between the ground and the sensor can cause distortion

Moreover, it is important to note that the most unpredictable feature is the atmosphere, which is hard to predict and usually fluctuates [78].

4.2.4. Radiometric Resolution

The radiometric resolution is defined as the amount of information in one pixel and is measured in bits. One bit of data represents a binary decision of "no" or "yes," with a numeric value of 0 or 1 [79]. Black-and-white images, called grayscale images, from digital photographic devices are generally 8-bit, with values between 0 and 255 representing the data. Colour images often have three 8-bit channels, one each for red, green, and blue; together they generate the perceived colour, and the intensity of each channel controls its brightness. This is an additive colour mixing scheme.

As an example, a radiometric resolution of 11 bits means that the pixel has 2048 (2^11) possible shades of red, 12 bits correspond to 4096 (2^12) shades of red, and 14 bits correspond to 16384 (2^14) shades of red. While increasing the radiometric resolution gives a larger range of values per pixel, this does not necessarily make it the best option.
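The bit-depth arithmetic above can be reproduced in a couple of lines; the sketch below simply enumerates the examples given in the text and is included only to make the 2^n relationship explicit.

```python
# Number of distinguishable levels per pixel for a given radiometric resolution.
for bits in (8, 11, 12, 14):
    levels = 2 ** bits
    print(f"{bits}-bit data: {levels} possible values (0 to {levels - 1})")
# 8-bit -> 256, 11-bit -> 2048, 12-bit -> 4096, 14-bit -> 16384,
# matching the examples given in the text.
```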

5. Machine Learning in Remote Sensing

Machine learning techniques were first employed in remote sensing in the 1990s, primarily as a way to automate knowledge-base construction. The work by Huang and Jensen [80] explained how a knowledge base was built with minimal input from users, after which decision trees were grown to infer the rules from the user input for the expert system. They concluded that the machine learning-assisted methodology provided greater accuracy than traditional techniques. Subsequently, analogous advances in machine learning were made and were rapidly adopted as an essential tool by remote sensing experts and scientists. Machine learning is presently employed in a wide variety of tasks, from unsupervised satellite image scene categorization to supervised classification [81].

5.1. Machine Learning Categories

Machine learning can be divided into three types, as shown in Figure 2:

(i) Supervised machine learning: the machine learns from labelled data. The model is trained on existing data before it begins making predictions on new data. In supervised machine learning, the output is known. The target variable can be continuous, as in linear, polynomial, or quadratic regression, or categorical, as in logistic regression, Support Vector Machines, Decision Trees, gradient boosting, bagging, Random Forest, etc. [82].

(ii) Unsupervised machine learning: the machine is trained on unlabelled data without explicit supervision. It automatically infers patterns and associations in the data by constructing clusters. The model learns from measurements and presumed structures in the data. There is no target variable, as in principal component analysis, factor analysis, etc. [83].

(iii) Reinforcement learning: the model learns by trial and error. This type of learning involves an agent that interacts with the environment, takes actions, and then learns from the resulting errors or rewards [84].

The distinction between supervised and unsupervised learning is that, in supervised models, the user provides a predefined label together with a set of features, whereas an unsupervised algorithm organizes the data set by grouping observations into categories based on the relationships it identifies among them [85]. Reinforcement learning is quite different: the user gives the algorithm an environment, and the algorithm makes decisions within that environment, continually improving itself with each decision based on the outcome of the previous one.
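To contrast the supervised and unsupervised routes on the same data, the sketch below trains a Random Forest on labelled synthetic "pixel" features and, separately, lets k-means group the unlabelled pixels into clusters; the feature values, class count, and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Synthetic "pixels": 4 spectral features, 3 underlying land cover classes.
X, y = make_blobs(n_samples=300, n_features=4, centers=3, random_state=42)

# Supervised: the model is trained on labelled samples and predicts new ones.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("Supervised accuracy:", round(clf.score(X_test, y_test), 3))

# Unsupervised: k-means builds clusters without using the labels at all;
# the clusters must be named a posteriori (e.g., with in situ data).
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```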

5.2. Image Processing and Map Production

The process of deriving land surface information from remotely sensed data involves a succession of complex steps, because the radiance measured by sensors (expressed in watts per square meter) does not allow land cover to be inferred directly. Previously, many operational mapping systems were based on interactive visual interpretation of a few images acquired at particular times of the year and relied principally on expert interpretation. Image processing tools have gradually supported this approach.

Any production of a land cover map involves a succession of major processing stages. For every stage, numerous algorithmic and conceptual options are feasible. Waldner et al. [86] have shown that crop mask accuracy varies more from one agricultural region to another than from one modern technique to another. Obviously, some technical choices may be more suitable than others. Nevertheless, in most situations, the quality and quantity of the remote sensing input and calibration data set play a significant role. The key to success lies largely in matching the technical choices to the quality and quantity of the input Earth observation and in situ calibration data, as well as to the landscape features to be mapped. Four major stages in the land cover production chain can be clearly identified: (1) image segmentation; (2) feature extraction; (3) classification; and finally (4) postprocessing, including filtering and/or fusion.

5.2.1. Picture Segmentation

Satellite images represent the ground as pixels, whereas visual interpretation delineates homogeneous units. The two main conceptual models used to represent the spatial dimension of the Earth are the raster image, composed of pixels, and the vector model, composed of objects. When the spatial resolution is comparable to or coarser than the size of the land cover elements to be mapped, the data are usually processed at the pixel level and the segmentation stage is not required. For high-spatial-resolution acquisitions, whose pixels are much smaller than the land cover elements, the vector model is generally preferred and the image should be segmented into objects using image segmentation algorithms. Image segmentation groups adjacent pixels into spatially continuous objects according to their spectral characteristics and spatial context, with the objective of capturing meaningful, spatially discrete ground objects. The object-based approach is well suited to extracting image structure, carries relevant contextual information, and supports multiscale analysis through hierarchical or multilevel segmentation [87]. On the other hand, this stage is an additional source of error compared with the pixel-based approach. In practice, object-based classification is mainly recommended when the pixel size is much smaller than the landscape constituents. Typically, metric- and decametric-resolution images are segmented into objects, whereas hectometric-resolution images are usually not. In some cases, hybrid pixel- and object-based processing chains have been designed, with interactive construction of the land cover map [88]. Image segmentation can be performed according to two different approaches: edge-based strategies, which rely on local detection of edges, and region-growing techniques, which identify spatial groups of similar pixels. One of the best-known region-growing algorithms in remote sensing merges objects as long as the normalized variance of pixel values within the merged object remains below a specified threshold [89]. Besides spectral homogeneity, the merging of objects can also be constrained by object shape in order to increase correspondence with actual ground cover objects.
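As an object-based illustration, the sketch below segments a small synthetic three-band image with scikit-image's SLIC superpixel algorithm, which stands in for the region-growing approach described above (it is not the specific algorithm of [89]); the image size, band values, and parameters are assumptions for demonstration.

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(1)

# Synthetic 3-band "reflectance" image (100 x 100 pixels) with two crude regions.
image = rng.normal(0.2, 0.02, size=(100, 100, 3))
image[:, 50:, :] += 0.3  # right half brighter, mimicking a different cover type

# SLIC groups adjacent, spectrally similar pixels into objects (superpixels).
segments = slic(image, n_segments=50, compactness=0.1, start_label=1)

print("Number of objects produced:", segments.max())
print("Object label of a left-half pixel :", segments[10, 10])
print("Object label of a right-half pixel:", segments[10, 90])
```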

5.2.2. Attribute Extraction

The feature extraction stage consists of deriving, from remote sensing images or time series, the most discriminant variables to be used as inputs to the classification algorithm. These features may be of different natures: (1) spectral, such as multispectral reflectance or derived indices such as the NDVI or other vegetation, chlorophyll, or soil indices; (2) temporal, such as the minimum, maximum, or amplitude of a variable over a specified time period; (3) textural, such as local contrast, entropy, or other variables derived from a co-occurrence matrix; and (4) spatial or contextual variables, which are especially suitable for the object-based approach. Currently, three main practices can be observed in the field of land cover mapping. First, conventional approaches rely fundamentally on spectral features and, possibly, some basic temporal features based on NDVI time series, considering that these underlie all other features in any case. Given steadily increasing computing performance and the spread of Artificial Intelligence algorithms, many remote sensing practitioners now take the view that "more is better" and rely on the classification algorithms themselves to select the most discriminant features. Finally, knowledge-based approaches aim to incorporate external expert knowledge by structuring candidate features according to the classification target and by retaining only those features considered important according to expert judgement [90].
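A minimal sketch of the textural branch, assuming scikit-image is available: it builds a grey-level co-occurrence matrix on a small synthetic 8-bit band and derives Haralick-style descriptors such as contrast and homogeneity (the functions are named graycomatrix/graycoprops in recent scikit-image releases; older versions spell them greycomatrix/greycoprops). The patch size and offsets are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)

# Synthetic 8-bit single-band patch (e.g., a window around a pixel or object).
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix for 1-pixel offsets in four directions.
glcm = graycomatrix(
    patch,
    distances=[1],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=256,
    symmetric=True,
    normed=True,
)

# Texture descriptors usable as additional classification features.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean().round(4))
```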

5.2.3. Categorization

This stage comprises one or several numerical steps that ultimately assign each pixel or object to one of the classes of the land cover legend. The large diversity of classification algorithms can be divided into two main groups: supervised algorithms, which use a training data set to calibrate the algorithm a priori, and unsupervised algorithms, which create clusters of pixels to be labelled a posteriori as land cover categories using in situ or ancillary data. Currently, preparatory steps for supervised classification are extremely valuable; they include automated cleaning of the in situ training data set, or active learning to build a more efficient training data set by iteratively improving the classifier design. The range of techniques used to classify images into land cover categories is continually expanding. A survey of these techniques was summarized by Nitze et al. [91] and is outlined as follows:

(1) Categorization Based on Maximum Likelihood. Until recently, the Maximum Likelihood classification technique was considered the most widely used approach for the supervised classification of remote sensor data [92]. The Maximum Likelihood principle is based on probability. In this approach, training data are used to characterize the target classes statistically through their multivariate probability density functions. Each density function gives the probability that the spectral signature of a class falls within a given region of multidimensional spectral space. The spectral signature of each pixel is then assigned to the class to which it has the highest probability of belonging [93]. The essential benefit of the Maximum Likelihood approach is the complete control that the user has over the land cover classes to be used in the final classification. Its application is constrained by its reliance on a Gaussian distribution of the input data, an assumption that is frequently violated when using multitemporal data with many spectral features and multimodal distributions [94]. Furthermore, Maximum Likelihood classification uses the same set of features for all classes and requires a large number of computations to classify the image data completely. This is especially evident when a large number of features is used as input to the classification step, or when a large number of spectral classes must be separated; in such cases, the Maximum Likelihood classifier can be considerably slower than other supervised classification methods. The various limitations of Maximum Likelihood classification have translated into the active development of new classification algorithms for remote sensing. Of these newer techniques, artificial neural networks [95], support vector machines [96], Decision Trees [97], and ensembles of classification trees such as Random Forest [98] have shown enormous promise.
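The following sketch implements the Gaussian maximum likelihood rule described above in a few lines of NumPy/SciPy: each class is characterized by the mean vector and covariance of its training pixels, and a new pixel is assigned to the class with the highest log-density (equal priors assumed). The two-class, two-band training data are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# Synthetic training pixels in a 2-band feature space for two classes.
train = {
    "vegetation": rng.multivariate_normal([0.05, 0.45], np.diag([0.0004, 0.002]), 100),
    "bare_soil":  rng.multivariate_normal([0.25, 0.30], np.diag([0.0020, 0.002]), 100),
}

# Characterize each class by the mean and covariance of its training samples.
models = {
    name: multivariate_normal(mean=samples.mean(axis=0), cov=np.cov(samples.T))
    for name, samples in train.items()
}

def classify(pixel):
    # Assign the pixel to the class with the highest log-density (equal priors).
    return max(models, key=lambda name: models[name].logpdf(pixel))

print(classify([0.06, 0.44]))  # expected: vegetation
print(classify([0.26, 0.31]))  # expected: bare_soil
```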

(2) Artificial Neural Networks. The use of Artificial Neural Networks for remote sensing classification is motivated by the fact that the human brain is proficient at handling large amounts of information from a wide range of sources [99, 100], and that mathematical representations of this process may be valuable for processing and analysing image data. Applied to image classification, an Artificial Neural Network is a massively parallel distributed processor made up of simple processing units that acquires knowledge from its environment through a self-learning process, adaptively building links between the input data (for example, satellite image features) and the output data (for example, target cover classes) [101]. Prominent Artificial Neural Networks include the Multilayer Perceptron (MLP) [102], Kohonen's Self-Organizing Feature Map [103], and Fuzzy ARTMAP [104]. While these approaches differ in their exact implementation, they all require training and classification phases to extract meaningful information from remotely sensed image data [105]. Figure 3 represents the Artificial Neural Network structure [106].

During the training phase, image data are collected from areas whose features (or classes) are known and used as inputs to the system. The system uses these data in an iterative procedure that derives the rules producing the best classification results. The learned rules are then used in the classification phase to assign feature data to the training class to which they have the highest probability of belonging.

The benefits of Artificial Neural Networks include their capacity to: (1) perform more accurately when the input data comprise numerous large data sets measured at various scales and with unusual frequency distributions; (2) learn and continually update complex models, such as nonlinear relationships between input data and output classes, as more data become available in a changing domain; (3) provide, through generalization, robust answers in the presence of incomplete or inaccurate data; and (4) incorporate a priori knowledge and realistic physical constraints into the analysis [107, 108]. However, the drawbacks of Artificial Neural Networks have restricted their adoption mainly to basic applications [109]. Their most important disadvantage is that they are a "black box" for interpretation [110]. In practice, it has often been difficult to explain meaningfully the process by which an output was obtained, because the rules for image classification and analysis learned by the system are not easily accessible or describable [111]. Therefore, other classification strategies with more readily interpretable reasoning are often preferred.
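As a small, hedged example of the neural route, the sketch below trains scikit-learn's Multilayer Perceptron (one of the architectures cited above) on synthetic band features; the hidden-layer sizes, iteration count, and data are illustrative assumptions rather than a recommended configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic pixels: 6 spectral/textural features, 3 land cover classes.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling the inputs matters for gradient-based training of the perceptron.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
mlp.fit(X_train, y_train)
print("Hold-out accuracy:", round(mlp.score(X_test, y_test), 3))
```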

(3) Support Vector Machines. Support Vector Machines, a supervised nonparametric statistical learning procedure for solving classification problems [112], show great potential for the classification of remotely sensed image data [113]. Support Vector Machines solve a quadratic optimization problem to determine the optimal separating boundaries (hyperplanes) between two classes in multidimensional feature space [114]. They do so by concentrating on the training samples that lie at the edge of the class distributions. When the classes cannot be separated, the training data are mapped into a higher-dimensional space using kernel methods, where the new data distribution allows a better fit of a linear hyperplane [115]. This procedure is repeated for each pair of classes to split the data into the preselected number of classes. The rules for optimal class separation are then used to assign all image data to the preselected target classes. Figure 4 illustrates the Support Vector Machine principle [116].

The basis of the Support Vector Machine principle of classification is, hence, the idea that only the training samples that lie on the class boundaries are required for discrimination [117]. The benefit of using Support Vector Machines is their capacity to outperform conventional classification techniques when only small training data sets are available [118]. The fundamental principle underlying Support Vector Machines is that the learning procedure is based on structural risk minimization [119]. Under this principle, Support Vector Machines minimize classification errors on unseen data without making any a priori assumptions about the statistical distribution of the data [120]. The main weakness in using Support Vector Machines concerns the choice of the most adequate kernel function type and its associated parameters. Even though various options exist, some kernel functions cannot provide the best Support Vector Machine design for remote sensing applications [121]. This is significant because inappropriate choices may lead to overfitting, which can have a strongly negative effect on Support Vector Machine performance and classification accuracy [122]. Furthermore, Support Vector Machines are not optimized to handle heterogeneous data, such as the outliers commonly encountered in remote sensing data, whose inclusion can significantly reduce classifier performance [123]. Despite these issues, Support Vector Machines are a prominent option for land cover classification.
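The sketch below illustrates the kernel and regularization choices discussed above using scikit-learn's SVC on synthetic two-class pixel features; a small grid search over C and gamma stands in for the parameter selection step the text warns about, and all values are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic two-class pixel features (SVMs are binary at their core).
X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                           n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# The kernel type and its parameters (here RBF with C and gamma) drive
# over- or underfitting, so they are tuned by cross-validation.
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters :", search.best_params_)
print("Hold-out accuracy:", round(search.score(X_test, y_test), 3))
```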

(4) Decision Tree Classification. Decision Trees, supervised classification methods based on recursive binary splits according to optimized rules, have become an attractive option for deriving discrete class information for land cover classification [124]. A Decision Tree takes a set of features as input and returns an output through a sequence of tests [125]. Trees build the rules by recursively splitting the data into regions that are progressively more homogeneous with respect to the class variable [126]. Decision Tree classifiers build multivariate models based on sets of decision rules defined by combinations of features and a series of linear discriminant functions applied at each test node [127]. Typically, after a sufficient number of training samples has been gathered, a Decision Tree learning algorithm uses the training data to build Decision Trees that are then converted into another knowledge representation, called production rules. Since production rules are simple, they can be examined by experts and represented easily [128].

The use of Decision Trees for image classification has many benefits, for example, the ability to handle data at various measurement scales [129], non-normal (nonparametric) input data frequency distributions [130], and nonlinear relationships between input data and classes [131]. These benefits are analogous to those offered by Artificial Neural Networks. Nevertheless, Decision Trees are simple to use because fewer parameters need to be specified [132], as demonstrated in Figure 5; they provide a hierarchical structure that is clear and simple to interpret [133]; and they can be trained by deriving rules and thresholds directly from the training data with minimal user interaction [134].

One of the most important features of Decision Trees is that they can adapt when new training data are provided and that the system's output can be inspected to see how a conclusion was reached [135]. The disadvantages of using Decision Trees include their poor handling of high-dimensional feature spaces [136] and noisy data [137], and their tendency to overfit [112]. A better understanding of the factors affecting Decision Tree classification performance is an area of remote sensing currently under further investigation [138], and has prompted the development of ensemble Decision Tree-based strategies, such as the Random Forest technique, which improves classification performance by combining numerous individual Decision Trees.
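Since one of the stated advantages of Decision Trees is that their learned rules can be read directly, the sketch below fits a small tree on synthetic band-like features and prints the production rules with scikit-learn's export_text; the depth limit, feature names, and data are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic pixels with named "band" features and two land cover classes.
feature_names = ["red", "nir", "swir", "ndvi_like"]
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_classes=2, random_state=7)

# A shallow tree keeps the rule set small enough to inspect by eye.
tree = DecisionTreeClassifier(max_depth=3, random_state=7).fit(X, y)

# The learned rules are a readable sequence of threshold tests.
print(export_text(tree, feature_names=feature_names))
```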

(5) Random Forests Classification. Random Forest, an extension of the Decision Tree, is an ensemble learning algorithm that combines multiple classifications of the same data to generate higher classification accuracies than single Decision Trees [98]. Random Forest works by fitting numerous Decision Tree classifiers to a data set and then using a rule-based approach to combine the predictions from all the trees, as illustrated in Figure 6 [139].

During this procedure, individual trees are grown from different subsets of the training data using a procedure called bagging. Bagging involves random subsampling (with replacement) of the original data for growing each tree. Usually, for each tree, about 66% of the training data is used to grow the tree, while the remaining 34% is left out for later error estimation [140]. A classifier is then fitted to each bootstrap sample; however, at every node (split), only a few randomly selected predictor variables are used in the binary partitioning [141]. The splitting procedure continues until further subdivision no longer reduces the Gini index [142]. Each tree casts a single vote for the most likely class of the input data [143]; the majority vote across all trees determines the predicted class for an observation, with ties broken at random [144].
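A brief sketch of the bagging and Out-Of-Bag mechanics just described, using scikit-learn's RandomForestClassifier on synthetic data: with oob_score=True, the samples left out of each bootstrap provide a cross-validation-like accuracy estimate. Note that scikit-learn's built-in feature_importances_ is impurity-based, so the sketch instead uses permutation_importance (computed here on a held-out set rather than the Out-Of-Bag samples), which is closer in spirit to the variable-importance procedure described below; all sizes and parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic pixels: 6 features, 3 land cover classes.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)

# Each tree is grown on a bootstrap sample; oob_score=True uses the samples
# left out of each bootstrap to estimate accuracy without a separate test set.
forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=5).fit(X_train, y_train)
print("Out-Of-Bag accuracy estimate:", round(forest.oob_score_, 3))
print("Hold-out accuracy           :", round(forest.score(X_test, y_test), 3))

# Permutation importance mirrors the variable-importance idea:
# shuffle one feature, keep the rest fixed, and measure the accuracy drop.
imp = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=5)
print("Permutation importances:", imp.importances_mean.round(3))
```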

The most important benefit of Random Forest is that it is potentially more accurate and reliable than conventional parametric or single Decision Tree techniques [143]. This is because the ensemble of classifiers performs more accurately than any individual classifier, while avoiding the limitations of single classifiers [145]. Furthermore, Random Forest requires only two parameters to build the prediction model (the number of classification trees and the number of predictor variables used at each node to grow a tree), and is therefore considered relatively simple to parameterize [143]. Further advantages follow from Random Forest's use of bagging to grow individual trees from subsets of the training data. Fully grown trees are used to estimate accuracy and error rates for every sample using the Out-Of-Bag predictions, which are then averaged over all samples. Since the Out-Of-Bag data are not used to fit the trees, the Out-Of-Bag estimates are essentially cross-validated accuracy estimates [143]. Random Forest can also assess the importance of a single variable: it permutes one of the input variables while keeping the rest constant, and measures the resulting decrease in accuracy via the Out-Of-Bag error [143]. This is helpful when it is important to understand how each predictor variable influences the classification model [145]. The disadvantage of Random Forest is that, with a large number of trees, it becomes difficult to analyse individual trees and understand their structure [146], leading to a black-box model whose decision rules are hard to trace [147]. Table 3 presents the advantages and limitations of algorithms employed for land classification of satellite images.


Table 3: Advantages and limitations of algorithms employed for land classification of satellite images.

Maximum likelihood
Advantages: (i) easy to apply; (ii) simple to comprehend and interpret; (iii) provides class membership probabilities
Limitations: (i) parametric; (ii) assumes a normal distribution of the data; (iii) large training sample needed

Artificial neural networks
Advantages: (i) handles a large feature space well; (ii) indicates strength of class membership; (iii) normally high classification accuracy; (iv) robust to training data deficiencies, needing fewer training samples than Decision Trees
Limitations: (i) requires parameters for network design; (ii) tends to overfit the data; (iii) black box (rules are unidentified); (iv) computationally intensive; (v) time-consuming training

Support vector machines
Advantages: (i) handles a large feature space well; (ii) insensitive to the Hughes effect; (iii) works well with small training data sets; (iv) does not overfit
Limitations: (i) requires parameters (regularization and kernel); (ii) reduced performance with a limited feature space; (iii) computationally intensive; (iv) designed as a binary classifier, although multiclass variants exist

Decision trees
Advantages: (i) no parameters required; (ii) simple to use and understand; (iii) handles missing data; (iv) handles data of diverse types and scales; (v) handles nonlinear relationships; (vi) not sensitive to noise
Limitations: (i) susceptible to noise; (ii) inclined to overfit; (iii) does not perform as well as others in large feature spaces; (iv) large training sample needed

Random forests
Advantages: (i) ability to establish variable importance; (ii) robust to data reduction; (iii) does not overfit; (iv) generates an unbiased accuracy estimate; (v) higher accuracy than Decision Trees
Limitations: (i) decision rules undefined (black box); (ii) computationally intensive; (iii) needs input parameters

(6) Postprocessing. These operations can improve the classification output thanks to the option of applying various filtering methods or of combining several classification outputs. Initially, gross errors are corrected interactively because they are readily identified by simple visual inspection. Basic filters applied over a sliding window of 3 × 3 or 5 × 5 pixels, such as a majority filter, remove the salt-and-pepper effect produced by pixel-based classification. Such a majority filter can also be applied, within the objects obtained by segmenting the multispectral reflectance image, to a pixel-based classification output, thereby producing a much smoother land cover map. Fusion methods are needed to combine the outputs of a classifier ensemble. With majority voting, a single output map can be obtained either when all classifiers in the ensemble agree on the category or when at least half of the classifiers agree. Weighted majority voting can be employed when some classifiers are expected to perform better than others, or when outputs are weighted by the associated probability or membership of the classification output. It is important to note that the various stages described above are strongly interrelated, and every choice must consider the entire land cover map production chain to ensure that a suitable solution is achieved.
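The sketch below applies the 3 × 3 majority filter described above to a small synthetic classification map with SciPy's generic_filter, removing isolated "salt-and-pepper" pixels; the map, window size, and tie-handling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority(window):
    # Most frequent class label in the window (ties resolved by the lowest label).
    values = window.astype(int)
    return np.bincount(values).argmax()

# Synthetic 8 x 8 classification map: class 1 everywhere, with a few isolated
# class-2 "salt-and-pepper" pixels produced by a pixel-based classifier.
classified = np.ones((8, 8), dtype=int)
classified[2, 3] = 2
classified[5, 6] = 2

smoothed = generic_filter(classified, majority, size=3, mode="nearest")

print("Isolated pixels before filtering:", int((classified == 2).sum()))
print("Isolated pixels after filtering :", int((smoothed == 2).sum()))
```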

6. Conclusion

Today, Artificial Intelligence-powered solutions are applied across several sectors, such as transport, banking, medicine, and agriculture. The use of Artificial Intelligence technology has transformed the entire food production process with substantial benefits. In addition to supporting producers in automated farming and cultivation, Artificial Intelligence in agriculture enables precision farming, delivering higher crop yields and better quality while using limited resources. Moreover, remote sensing provides advanced methods that allow farmers to monitor their crops without having to inspect their fields physically. Several companies are now investing in Artificial Intelligence-enabled agricultural development. Artificial Intelligence, combined with remote sensing, is redefining the usual patterns of agriculture and thus reshaping the conventional model of farming. The future of Artificial Intelligence in agriculture will continue to evolve through increasingly sophisticated strategies and a comprehensive transformation of the sector.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

KE designed the review plan and wrote the text; SS contributed to writing the text; YG, MC, and OBB contributed to figure and table preparation and to writing the text; ME and MAT contributed to reviewing the text. All authors have read and approved the final manuscript.

References

  1. A. S. Davis and G. B. Frisvold, “Are herbicides a once in a century method of weed control?,” Pest Management Science, vol. 73, no. 11, pp. 2209–2220, 2017. View at: Publisher Site | Google Scholar
  2. A. P. Vink, Land Use in Advancing Agriculture, Springer Science & Business Media, Berlin, Germany, 2013.
  3. Y. K. Dwivedi, L. Hughes, E. Ismagilova et al., “Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy,” International Journal of Information Management, vol. 57, Article ID 101994, 2019. View at: Google Scholar
  4. P. Smith, M. R. Ashmore, H. I. Black et al., “The role of ecosystems and their management in regulating climate, and soil, water and air quality,” Journal of Applied Ecology, vol. 50, no. 4, pp. 812–829, 2013. View at: Publisher Site | Google Scholar
  5. D. B. Lobell and S. M. Gourdji, “The influence of climate change on global crop productivity,” Plant Physiology, vol. 160, no. 4, pp. 1686–1697, 2012. View at: Publisher Site | Google Scholar
  6. H. Lambers, M. C. Brundrett, J. A. Raven, and S. D. Hopper, “Plant mineral nutrition in ancient landscapes: high plant species diversity on infertile soils is linked to functional diversity for nutritional strategies,” Plant and Soil, vol. 348, no. 1, pp. 7–27, 2011. View at: Publisher Site | Google Scholar
  7. G. Bannerjee, U. Sarkar, S. Das, and I. Ghosh, “Artificial intelligence in agriculture: a literature survey,” International Journal of Scientific Research in Computer Science Applications and Management Studies, vol. 7, no. 3, pp. 1–6, 2018. View at: Google Scholar
  8. M. J. Smith, “Getting value from artificial intelligence in agriculture,” Animal Production Science, vol. 60, no. 1, pp. 46–54, 2020. View at: Publisher Site | Google Scholar
  9. K. Jha, A. Doshi, P. Patel, and M. Shah, “A comprehensive review on automation in agriculture using artificial intelligence,” Artificial Intelligence in Agriculture, vol. 2, pp. 1–12, 2019. View at: Publisher Site | Google Scholar
  10. D. Shadrin, A. Menshchikov, A. Somov, G. Bornemann, J. Hauslage, and M. Fedorov, “Enabling precision agriculture through embedded sensing with artificial intelligence,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 7, pp. 4103–4113, 2019. View at: Publisher Site | Google Scholar
  11. R. G. Perea, E. C. Poyato, P. Montesinos, and J. R. Díaz, “Prediction of applied irrigation depths at farm level using artificial intelligence techniques,” Agricultural Water Management, vol. 206, pp. 229–240, 2018. View at: Publisher Site | Google Scholar
  12. V. Marinoudi, C. G. Sørensen, S. Pearson, and D. Bochtis, “Robotics and labour in agriculture. A context consideration,” Biosystems Engineering, vol. 184, pp. 111–121, 2019. View at: Publisher Site | Google Scholar
  13. M. G. Alalm and M. Nasr, “Artificial intelligence, regression model, and cost estimation for removal of chlorothalonil pesticide by activated carbon prepared from casuarina charcoal,” Sustainable Environment Research, vol. 28, no. 3, pp. 101–110, 2018. View at: Publisher Site | Google Scholar
  14. R. Lal, A. Sharda, and P. Prabhakar, “Optimal multi-robot path planning for pesticide spraying in agricultural fields,” in Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 5815–5820, IEEE, Melbourne, Australia, December 2017. View at: Publisher Site | Google Scholar
  15. H. Luo, Y. Niu, M. Zhu, X. Hu, and H. Ma, “Optimization of pesticide spraying tasks via multi-uavs using genetic algorithm,” Mathematical Problems in Engineering, vol. 2017, Article ID 7139157, 16 pages, 2017. View at: Publisher Site | Google Scholar
  16. K. Ennouri, M. A. Triki, and A. Kallel, “Applications of remote sensing in pest monitoring and crop management,” in Bioeconomy for Sustainable Development, pp. 65–77, Springer, Singapore, 2020. View at: Publisher Site | Google Scholar
  17. K. Ennouri and A. Kallel, “Remote sensing: an advanced technique for crop condition assessment,” Mathematical Problems in Engineering, vol. 2019, Article ID 9404565, 8 pages, 2019. View at: Publisher Site | Google Scholar
  18. T. Blaschke, “Object based image analysis for remote sensing,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 65, no. 1, pp. 2–16, 2010. View at: Publisher Site | Google Scholar
  19. Y. Kim, D. M. Glenn, J. Park, H. K. Ngugi, and B. L. Lehman, “Characteristics of active spectral sensor for plant sensing,” Transactions of the ASABE, vol. 55, no. 1, pp. 293–301, 2012. View at: Publisher Site | Google Scholar
  20. D. Blondeau-Patissier, J. F. Gower, A. G. Dekker, S. R. Phinn, and V. E. Brando, “A review of ocean color remote sensing methods and statistical techniques for the detection, mapping and analysis of phytoplankton blooms in coastal and open oceans,” Progress in Oceanography, vol. 123, pp. 123–144, 2014. View at: Publisher Site | Google Scholar
  21. I. Darnhofer, S. Bellon, B. Dedieu, and R. Milestad, “Adaptiveness to enhance the sustainability of farming systems. A review,” Agronomy for Sustainable Development, vol. 30, no. 3, pp. 545–555, 2010. View at: Publisher Site | Google Scholar
  22. J. R. Rosell and R. Sanz, “A review of methods and applications of the geometric characterization of tree crops in agricultural activities,” Computers and Electronics in Agriculture, vol. 81, pp. 124–141, 2012. View at: Publisher Site | Google Scholar
  23. S. Fountas, G. Carli, C. G. Sørensen et al., “Farm management information systems: current situation and future perspectives,” Computers and Electronics in Agriculture, vol. 115, pp. 40–50, 2015. View at: Publisher Site | Google Scholar
  24. D. Levy, W. K. Coleman, and R. E. Veilleux, “Adaptation of potato to water shortage: irrigation management and enhancement of tolerance to drought and salinity,” American Journal of Potato Research, vol. 90, no. 2, pp. 186–206, 2013. View at: Publisher Site | Google Scholar
  25. P. Lillford and A. M. Hermansson, “Global missions and the critical needs of food science and technology,” Trends in Food Science & Technology, vol. 111, pp. 800–811, 2021. View at: Publisher Site | Google Scholar
  26. J. I. Boye and Y. Arcand, “Current trends in green technologies in food production and processing,” Food Engineering Reviews, vol. 5, no. 1, pp. 1–17, 2013. View at: Publisher Site | Google Scholar
  27. S. S. L. Chukkapalli, S. Mittal, M. Gupta et al., “Ontologies and artificial intelligence systems for the cooperative smart farming ecosystem,” IEEE Access, vol. 8, pp. 164045–164064, 2020. View at: Publisher Site | Google Scholar
  28. N. Khan, R. L. Ray, G. R. Sargani, M. Ihtisham, M. Khayyam, and S. Ismail, “Current progress and future prospects of agriculture technology: gateway to sustainable agriculture,” Sustainability, vol. 13, no. 9, p. 4883, 2021. View at: Publisher Site | Google Scholar
  29. M. Bergerman, J. Billingsley, J. Reid, and E. van Henten, “Robotics in agriculture and forestry,” in Springer Handbook of Robotics, pp. 1463–1492, Springer, Cham, Switzerland, 2016. View at: Publisher Site | Google Scholar
  30. J. J. Beck, H. T. Alborn, A. K. Block et al., “Interactions among plants, insects, and microbes: elucidation of inter-organismal chemical communications in agricultural ecology,” Journal of Agricultural and Food Chemistry, vol. 66, no. 26, pp. 6663–6674, 2018. View at: Publisher Site | Google Scholar
  31. P. Shankar, N. Werner, S. Selinger, and O. Janssen, “Artificial intelligence driven crop protection optimization for sustainable agriculture,” in Proceedings of the 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G), pp. 1–6, IEEE, Geneva, Switzerland, September 2020. View at: Publisher Site | Google Scholar
  32. H. Bhardwaj, P. Tomar, A. Sakalle, and U. Sharma, “Artificial Intelligence and Its Applications in Agriculture With the Future of Smart Agriculture Techniques,” in Artificial Intelligence and IoT-Based Technologies for Sustainable Farming and Smart Agriculture, pp. 25–39, IGI Global, Harrisburg, PA, USA, 2021. View at: Publisher Site | Google Scholar
  33. V. L. Mulder, S. De Bruin, M. E. Schaepman, and T. R. Mayr, “The use of remote sensing in soil and terrain mapping—a review,” Geoderma, vol. 162, no. 1-2, pp. 1–19, 2011. View at: Publisher Site | Google Scholar
  34. J. Verrelst, J. P. Rivera, F. Veroustraete et al., “Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods–a comparison,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 108, pp. 260–272, 2015. View at: Publisher Site | Google Scholar
  35. H. Zandler, A. Brenning, and C. Samimi, “Quantifying dwarf shrub biomass in an arid environment: comparing empirical methods in a high dimensional setting,” Remote Sensing of Environment, vol. 158, pp. 140–155, 2015. View at: Publisher Site | Google Scholar
  36. A. Karnieli, N. Agam, R. T. Pinker et al., “Use of NDVI and land surface temperature for drought assessment: merits and limitations,” Journal of Climate, vol. 23, no. 3, pp. 618–633, 2010. View at: Publisher Site | Google Scholar
  37. A. A. Gitelson, Y. Peng, and K.F. Huemmrich, “Relationship between fraction of radiation absorbed by photosynthesizing maize and soybean canopies and NDVI from remotely sensed data taken at close range and from MODIS 250 m resolution data,” Remote Sensing of Environment, vol. 147, pp. 108–120, 2014. View at: Publisher Site | Google Scholar
  38. L. Sever, J. Leach, and L. Bren, “Remote sensing of post-fire vegetation recovery; a study using Landsat 5 TM imagery and NDVI in North-East Victoria,” Journal of Spatial Science, vol. 57, no. 2, pp. 175–191, 2012. View at: Publisher Site | Google Scholar
  39. B. Kamble, A. Kilic, and K. Hubbard, “Estimating crop coefficients using remote sensing-based vegetation index,” Remote Sensing, vol. 5, no. 4, pp. 1588–1602, 2013. View at: Publisher Site | Google Scholar
  40. N. H. Broge and E. Leblanc, “Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density,” Remote Sensing of Environment, vol. 76, no. 2, pp. 156–172, 2001. View at: Publisher Site | Google Scholar
  41. A. A. Gitelson, Y. J. Kaufman, and M. N. Merzlyak, “Use of a green channel in remote sensing of global vegetation from EOS-MODIS,” Remote Sensing of Environment, vol. 58, no. 3, pp. 289–298, 1996. View at: Publisher Site | Google Scholar
  42. T. P. Dawson and P. J. Curran, “Technical note: a new technique for interpolating the reflectance red edge position,” International Journal of Remote Sensing, vol. 19, no. 11, pp. 2133–2139, 1998. View at: Publisher Site | Google Scholar
  43. G. A. Carter, T. R. Dell, and W. G. Cibula, “Spectral reflectance characteristics and digital imagery of a pine needle blight in the southeastern United States,” Canadian Journal of Forest Research, vol. 26, no. 3, pp. 402–407, 1996. View at: Publisher Site | Google Scholar
  44. L. Serrano, J. Penuelas, and S. L. Ustin, “Remote sensing of nitrogen and lignin in Mediterranean vegetation from AVIRIS data: decomposing biochemical from structural signals,” Remote Sensing of Environment, vol. 81, no. 2-3, pp. 355–364, 2002. View at: Publisher Site | Google Scholar
  45. J. A. Gamon, J. Penuelas, and C. B. Field, “A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency,” Remote Sensing of Environment, vol. 41, no. 1, pp. 35–44, 1992. View at: Publisher Site | Google Scholar
  46. P. J. Zarco-Tejada, J. R. Miller, T. L. Noland, G. H. Mohammed, and P. H. Sampson, “Scaling-up and model inversion methods with narrowband optical indices for chlorophyll content estimation in closed forest canopies with hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 7, pp. 1491–1507, 2001. View at: Publisher Site | Google Scholar
  47. M. N. Merzlyak, A. A. Gitelson, O. B. Chivkunova, and V. Y. Rakitin, “Non-destructive optical detection of pigment changes during leaf senescence and fruit ripening,” Physiologia Plantarum, vol. 106, no. 1, pp. 135–141, 1999. View at: Publisher Site | Google Scholar
  48. C. S. Daughtry, “Discriminating crop residues from soil by shortwave infrared reflectance,” Agronomy Journal, vol. 93, no. 1, pp. 125–131, 2001. View at: Publisher Site | Google Scholar
  49. J. Penuelas, I. Filella, and J. A. Gamon, “Assessment of photosynthetic radiation-use efficiency with spectral reflectance,” New Phytologist, vol. 131, no. 3, pp. 291–296, 1995. View at: Publisher Site | Google Scholar
  50. B. C. Gao, “NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space,” Remote Sensing of Environment, vol. 58, no. 3, pp. 257–266, 1996. View at: Publisher Site | Google Scholar
  51. L. S. Galvão, A. R. Formaggio, and D. A. Tisot, “Discrimination of sugarcane varieties in Southeastern Brazil with EO-1 Hyperion data,” Remote Sensing of Environment, vol. 94, no. 4, pp. 523–534, 2005. View at: Publisher Site | Google Scholar
  52. Y. Pan, L. Li, J. Zhang, S. Liang, X. Zhu, and D. Sulla-Menashe, “Winter wheat area estimation from MODIS-EVI time series data using the Crop Proportion Phenology Index,” Remote Sensing of Environment, vol. 119, pp. 232–242, 2012. View at: Publisher Site | Google Scholar
  53. Y. Zhan, S. Muhammad, P. Hao, and Z. Niu, “The effect of EVI time series density on crop classification accuracy,” Optik, vol. 157, pp. 1065–1072, 2018. View at: Publisher Site | Google Scholar
  54. H. Ren, G. Zhou, and F. Zhang, “Using negative soil adjustment factor in soil-adjusted vegetation index (SAVI) for aboveground living biomass estimation in arid grasslands,” Remote Sensing of Environment, vol. 209, pp. 439–445, 2018. View at: Publisher Site | Google Scholar
  55. J. Jorge, M. Vallbé, and J. A. Soler, “Detection of irrigation inhomogeneities in an olive grove using the NDRE vegetation index obtained from UAV images,” European Journal of Remote Sensing, vol. 52, no. 1, pp. 169–177, 2019. View at: Publisher Site | Google Scholar
  56. M. W. Matthews, “A current review of empirical procedures of remote sensing in inland and near-coastal transitional waters,” International Journal of Remote Sensing, vol. 32, no. 21, pp. 6855–6899, 2011. View at: Publisher Site | Google Scholar
  57. J. Teng, A. J. Jakeman, J. Vaze, B. F. Croke, D. Dutta, and S. Kim, “Flood inundation modelling: a review of methods, recent advances and uncertainty analysis,” Environmental Modelling & Software, vol. 90, pp. 201–216, 2017. View at: Publisher Site | Google Scholar
  58. M. Yebra, P. E. Dennison, E. Chuvieco et al., “A global review of remote sensing of live fuel moisture content for fire danger assessment: moving towards operational products,” Remote Sensing of Environment, vol. 136, pp. 455–468, 2013. View at: Publisher Site | Google Scholar
  59. H. Costa, G. M. Foody, and D. S. Boyd, “Supervised methods of image segmentation accuracy assessment in land cover mapping,” Remote Sensing of Environment, vol. 205, pp. 338–351, 2018. View at: Publisher Site | Google Scholar
  60. K. Jia, S. Liang, X. Wei et al., “Land cover classification of Landsat data with phenological features extracted from time series MODIS NDVI data,” Remote Sensing, vol. 6, no. 11, pp. 11518–11532, 2014. View at: Publisher Site | Google Scholar
  61. C. Kuenzer, M. Ottinger, M. Wegmann et al., “Earth observation satellite sensors for biodiversity monitoring: potentials and bottlenecks,” International Journal of Remote Sensing, vol. 35, no. 18, pp. 6599–6647, 2014. View at: Publisher Site | Google Scholar
  62. N. Pettorelli, M. Wegmann, A. Skidmore et al., “Framing the concept of satellite remote sensing essential biodiversity variables: challenges and future directions,” Remote Sensing in Ecology and Conservation, vol. 2, no. 3, pp. 122–131, 2016. View at: Publisher Site | Google Scholar
  63. O. Rojas, A. Vrieling, and F. Rembold, “Assessing drought probability for agricultural areas in Africa with coarse resolution remote sensing imagery,” Remote Sensing of Environment, vol. 115, no. 2, pp. 343–352, 2011. View at: Publisher Site | Google Scholar
  64. M. Lyons, S. Phinn, and C. Roelfsema, “Integrating Quickbird multi-spectral satellite and field data: mapping bathymetry, seagrass cover, seagrass species and change in Moreton Bay, Australia in 2004 and 2007,” Remote Sensing, vol. 3, no. 1, pp. 42–64, 2011. View at: Publisher Site | Google Scholar
  65. W. Schroeder, P. Oliva, L. Giglio, B. Quayle, E. Lorenz, and F. Morelli, “Active fire detection using Landsat-8/OLI data,” Remote Sensing of Environment, vol. 185, pp. 210–220, 2016. View at: Publisher Site | Google Scholar
  66. F. Hui, B. Xu, H. Huang, Q. Yu, and P. Gong, “Modelling spatial-temporal change of Poyang Lake using multitemporal Landsat imagery,” International Journal of Remote Sensing, vol. 29, no. 20, pp. 5767–5784, 2008. View at: Publisher Site | Google Scholar
  67. Z. Du, W. Li, D. Zhou et al., “Analysis of Landsat-8 OLI imagery for land surface water mapping,” Remote Sensing Letters, vol. 5, no. 7, pp. 672–681, 2014. View at: Publisher Site | Google Scholar
  68. T. Dube and O. Mutanga, “Evaluating the utility of the medium-spatial resolution Landsat 8 multispectral sensor in quantifying aboveground biomass in uMgeni catchment, South Africa,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 101, pp. 36–46, 2015. View at: Publisher Site | Google Scholar
  69. F. D. Van der Meer, H. M. A. Van der Werff, and F. J. A. Van Ruitenbeek, “Potential of ESA’s Sentinel-2 for geological applications,” Remote Sensing of Environment, vol. 148, pp. 124–133, 2014. View at: Publisher Site | Google Scholar
  70. M. Battude, A. Al Bitar, D. Morin et al., “Estimating maize biomass and yield over large areas using high spatial and temporal resolution Sentinel-2 like remote sensing data,” Remote Sensing of Environment, vol. 184, pp. 668–681, 2016. View at: Publisher Site | Google Scholar
  71. Satelytics, “Geospatial Analytics Works in “Isolation” but Shares its Vitals toward an Early Detection Outcome,” 2020, https://www.satelytics.com/blog/oil-gas-solutions/2020-geospatial-analytics-works-in-isolation-but-shares-its-vitals-toward-an-early-detection-outcome/.
  72. E. Adam, O. Mutanga, and D. Rugege, “Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: a review,” Wetlands Ecology and Management, vol. 18, no. 3, pp. 281–296, 2010. View at: Publisher Site | Google Scholar
  73. H. Ghassemian, “A review of remote sensing image fusion methods,” Information Fusion, vol. 32, pp. 75–89, 2016. View at: Publisher Site | Google Scholar
  74. C. Toth and G. Jóźków, “Remote sensing platforms and sensors: a survey,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 115, pp. 22–36, 2016. View at: Publisher Site | Google Scholar
  75. M. Claverie, V. Demarez, B. Duchemin et al., “Maize and sunflower biomass estimation in southwest France using high spatial and temporal resolution remote sensing data,” Remote Sensing of Environment, vol. 124, pp. 844–857, 2012. View at: Publisher Site | Google Scholar
  76. A. Mei, R. Salvatori, N. Fiore, A. Allegrini, and A. D’Andrea, “Integration of field and laboratory spectral data with multi-resolution remote sensed imagery for asphalt surface differentiation,” Remote Sensing, vol. 6, no. 4, pp. 2765–2781, 2014. View at: Publisher Site | Google Scholar
  77. Y. Zhong, Q. Zhu, and L. Zhang, “Scene classification based on the multifeature fusion probabilistic topic model for high spatial resolution remote sensing imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 11, pp. 6207–6222, 2015. View at: Publisher Site | Google Scholar
  78. G. Duveiller and P. Defourny, “A conceptual framework to define the spatial resolution requirements for agricultural monitoring using remote sensing,” Remote Sensing of Environment, vol. 114, no. 11, pp. 2637–2650, 2010. View at: Publisher Site | Google Scholar
  79. L. Bruzzone and F. Bovolo, “A novel framework for the design of change-detection systems for very-high-resolution remote sensing images,” Proceedings of the IEEE, vol. 101, no. 3, pp. 609–630, 2012. View at: Publisher Site | Google Scholar
  80. X. Huang and J. R. Jensen, “A machine-learning approach to automated knowledge-base building for remote sensing image analysis with GIS data,” Photogrammetric Engineering and Remote Sensing, vol. 63, no. 10, pp. 1185–1193, 1997. View at: Google Scholar
  81. A. Romero, C. Gatta, and G. Camps-Valls, “Unsupervised deep feature extraction for remote sensing image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1349–1362, 2015. View at: Publisher Site | Google Scholar
  82. M. J. Cracknell and A. M. Reading, “Geological mapping using remote sensing data: a comparison of five machine learning algorithms, their response to variations in the spatial distribution of training data and the use of explicit spatial information,” Computers & Geosciences, vol. 63, pp. 22–33, 2014. View at: Publisher Site | Google Scholar
  83. X. Lu, X. Zheng, and Y. Yuan, “Remote sensing scene classification by unsupervised representation learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 9, pp. 5148–5157, 2017. View at: Publisher Site | Google Scholar
  84. K. Gai and M. Qiu, “Reinforcement learning-based content-centric services in mobile sensing,” IEEE Network, vol. 32, no. 4, pp. 34–39, 2018. View at: Publisher Site | Google Scholar
  85. X. X. Zhu, D. Tuia, L. Mou et al., “Deep learning in remote sensing: a comprehensive review and list of resources,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017. View at: Publisher Site | Google Scholar
  86. F. Waldner, S. Fritz, A. Di Gregorio et al., “A unified cropland layer at 250 m for global agriculture monitoring,” Data, vol. 1, no. 1, p. 3, 2016. View at: Publisher Site | Google Scholar
  87. J. Yuan, D. Wang, and R. Li, “Remote sensing image segmentation by combining spectral and texture features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 1, pp. 16–24, 2013. View at: Publisher Site | Google Scholar
  88. W. Li, F. Baret, M. Weiss et al., “Combining hectometric and decametric satellite observations to provide near real time decametric FAPAR product,” Remote Sensing of Environment, vol. 200, pp. 250–262, 2017. View at: Publisher Site | Google Scholar
  89. M. Li, L. Ma, T. Blaschke, L. Cheng, and D. Tiede, “A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments,” International Journal of Applied Earth Observation and Geoinformation, vol. 49, pp. 87–98, 2016. View at: Publisher Site | Google Scholar
  90. K. Jia, S. Liang, N. Zhang et al., “Land cover classification of finer resolution remote sensing data integrating temporal features from time series coarser resolution data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 93, pp. 49–55, 2014. View at: Publisher Site | Google Scholar
  91. I. Nitze, U. Schulthess, and H. Asche, “Comparison of machine learning algorithms random forest, artificial neural network and support vector machine to maximum likelihood for supervised crop type classification,” in Proceedings of the 4th Conference on Geographic Object-Based Image Analysis, vol. 35, Rio de Janeiro, Brazil, May 2012. View at: Google Scholar
  92. J. R. Otukei and T. Blaschke, “Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms,” International Journal of Applied Earth Observation and Geoinformation, vol. 12, pp. S27–S31, 2010. View at: Publisher Site | Google Scholar
  93. P. S. Sisodia, V. Tiwari, and A. Kumar, “Analysis of supervised maximum likelihood classification for remote sensing image,” in Proceedings of the International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014), pp. 1–4, IEEE, Jaipur, India, May 2014. View at: Publisher Site | Google Scholar
  94. Y. Ghobadi, B. Pradhan, H. Z. M. Shafri, and K. Kabiri, “Assessment of spatial relationship between land surface temperature and landuse/cover retrieval from multi-temporal remote sensing data in South Karkheh Sub-basin, Iran,” Arabian Journal of Geosciences, vol. 8, no. 1, pp. 525–537, 2015. View at: Publisher Site | Google Scholar
  95. L. Wang, X. Zhou, X. Zhu, Z. Dong, and W. Guo, “Estimation of biomass in wheat using random forest regression algorithm and remote sensing data,” The Crop Journal, vol. 4, no. 3, pp. 212–219, 2016. View at: Publisher Site | Google Scholar
  96. C. H. Li, B. C. Kuo, C. T. Lin, and C. S. Huang, “A spatial–contextual support vector machine for remotely sensed image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 3, pp. 784–799, 2011. View at: Publisher Site | Google Scholar
  97. A. A. Elnaggar and J. S. Noller, “Application of remote-sensing data and decision-tree analysis to mapping salt-affected soils over large areas,” Remote Sensing, vol. 2, no. 1, pp. 151–165, 2010. View at: Publisher Site | Google Scholar
  98. M. Belgiu and L. Drăguţ, “Random forest in remote sensing: a review of applications and future directions,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 114, pp. 24–31, 2016. View at: Publisher Site | Google Scholar
  99. K. Ennouri, R. Ben Ayed, M. A. Triki et al., “Multiple linear regression and artificial neural networks for delta-endotoxin and protease yields modelling of Bacillus thuringiensis,” 3 Biotech, vol. 7, no. 3, pp. 1–13, 2017. View at: Publisher Site | Google Scholar
  100. K. Ennouri, H. Ben Hlima, R. Ben Ayed et al., “Assessment of Tunisian virgin olive oils via synchronized analysis of sterols, phenolic acids, and fatty acids in combination with multivariate chemometrics,” European Food Research and Technology, vol. 245, no. 9, pp. 1811–1824, 2019. View at: Publisher Site | Google Scholar
  101. P. K. Srivastava, D. Han, M. A. Rico-Ramirez, M. Bray, and T. Islam, “Selection of classification techniques for land use/land cover change investigation,” Advances in Space Research, vol. 50, no. 9, pp. 1250–1265, 2012. View at: Publisher Site | Google Scholar
  102. A. Lekfuangfu, T. Kasetkasem, I. Kumazawa, P. Rakwatin, and T. Chanwimaluang, “Incorporating texture in remote sensing image classification using a MLP deep neural network,” in Information and Communication Technology for Embedded Systems (IC-ICTES), Bangkok, Thailand, March 2016. View at: Google Scholar
  103. E. Bedini, “Mapping alteration minerals at Malmbjerg molybdenum deposit, central East Greenland, by Kohonen self-organizing maps and matched filter analysis of HyMap data,” International Journal of Remote Sensing, vol. 33, no. 4, pp. 939–961, 2012. View at: Publisher Site | Google Scholar
  104. M. Han, C. Zhang, and Y. Zhou, “Object-wise joint-classification change detection for remote sensing images based on entropy query-by fuzzy ARTMAP,” GIScience & Remote Sensing, vol. 55, no. 2, pp. 265–284, 2018. View at: Publisher Site | Google Scholar
  105. J. A. Dos Santos, P. H. Gosselin, S. Philipp-Foliguet, R. D. S. Torres, and A. X. Falao, “Multiscale classification of remote sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 3764–3775, 2012. View at: Publisher Site | Google Scholar
  106. Edureka, “What is a neural network? Introduction To artificial neural networks,” 2019, https://www.edureka.co/blog/what-is-a-neural-network/. View at: Google Scholar
  107. A. F. Marj and A. M. J. Meijerink, “Agricultural drought forecasting using satellite images, climate indices and artificial neural network,” International Journal of Remote Sensing, vol. 32, no. 24, pp. 9707–9719, 2011. View at: Publisher Site | Google Scholar
  108. L. Hassan-Esfahani, A. Torres-Rua, A. Jensen, and M. McKee, “Assessment of surface soil moisture using high-resolution multi-spectral imagery and artificial neural networks,” Remote Sensing, vol. 7, no. 3, pp. 2627–2646, 2015. View at: Publisher Site | Google Scholar
  109. S. Murmu and S. Biswas, “Application of fuzzy logic and neural network in crop classification: a review,” Aquatic Procedia, vol. 4, pp. 1203–1210, 2015. View at: Publisher Site | Google Scholar
  110. S. Khairunniza-Bejo, S. Mustaffha, and W. I. W. Ismail, “Application of artificial neural network in predicting crop yield: a review,” Journal of Food Science and Engineering, vol. 4, no. 1, p. 1, 2014. View at: Google Scholar
  111. M. Li, S. Zang, B. Zhang, S. Li, and C. Wu, “A review of remote sensing image classification techniques: the role of spatio-contextual information,” European Journal of Remote Sensing, vol. 47, no. 1, pp. 389–411, 2014. View at: Publisher Site | Google Scholar
  112. G. Mountrakis, J. Im, and C. Ogole, “Support vector machines in remote sensing: a review,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 3, pp. 247–259, 2011. View at: Publisher Site | Google Scholar
  113. N. Raghavendra and P. C. Deka, “Support vector machine applications in the field of hydrology: a review,” Applied Soft Computing, vol. 19, pp. 372–386, 2014. View at: Publisher Site | Google Scholar
  114. X. Yang, “Parameterizing support vector machines for land cover classification,” Photogrammetric Engineering & Remote Sensing, vol. 77, no. 1, pp. 27–37, 2011. View at: Publisher Site | Google Scholar
  115. J. Xia, J. Chanussot, P. Du, and X. He, “Rotation-based support vector machine ensemble in classification of hyperspectral data with limited training samples,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1519–1531, 2015. View at: Publisher Site | Google Scholar
  116. S. Vidhya, “Kernel trick in SVM,” 2019, https://medium.com/analytics-vidhya/how-to-classify-non-linear-data-to-linear-data-bb2df1a6b781. View at: Google Scholar
  117. R. Zuo and E. J. M. Carranza, “Support vector machine: a tool for mapping mineral prospectivity,” Computers & Geosciences, vol. 37, no. 12, pp. 1967–1975, 2011. View at: Publisher Site | Google Scholar
  118. B. W. Heumann, “An object-based classification of mangroves using a hybrid decision tree—Support vector machine approach,” Remote Sensing, vol. 3, no. 11, pp. 2440–2460, 2011. View at: Publisher Site | Google Scholar
  119. S. Li, H. Wu, D. Wan, and J. Zhu, “An effective feature selection method for hyperspectral image classification based on genetic algorithm and support vector machine,” Knowledge-Based Systems, vol. 24, no. 1, pp. 40–48, 2011. View at: Publisher Site | Google Scholar
  120. U. Maulik and D. Chakraborty, “Remote sensing image classification: a survey of support-vector-machine-based advanced techniques,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 1, pp. 33–52, 2017. View at: Publisher Site | Google Scholar
  121. L. Gao, J. Li, M. Khodadadzadeh et al., “Subspace-based support vector machines for hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 2, pp. 349–353, 2014. View at: Publisher Site | Google Scholar
  122. G. Cavallaro, M. Riedel, M. Richerzhagen, J. A. Benediktsson, and A. Plaza, “On understanding big data impacts in remotely sensed image classification using support vector machine methods,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 10, pp. 4634–4646, 2015. View at: Publisher Site | Google Scholar
  123. L. H. Thai, T. S. Hai, and N. T. Thuy, “Image classification using support vector machine and artificial neural network,” International Journal of Information Technology and Computer Science, vol. 4, no. 5, pp. 32–38, 2012. View at: Publisher Site | Google Scholar
  124. R. Sharma, A. Ghosh, and P. K. Joshi, “Decision tree approach for classification of remotely sensed satellite data using open source support,” Journal of Earth System Science, vol. 122, no. 5, pp. 1237–1247, 2013. View at: Publisher Site | Google Scholar
  125. K. Ennouri, R. Ben Ayed, S. Ercisli, S. Smaoui, M. Gouiaa, and M. A. Triki, “Variability assessment in Phoenix dactylifera L. accessions based on morphological parameters and analytical methods,” Acta Physiologiae Plantarum, vol. 40, no. 1, pp. 1–11, 2018. View at: Publisher Site | Google Scholar
  126. A. Baraldi, “Fuzzification of a crisp near-real-time operational automatic spectral-rule-based decision-tree preliminary classifier of multisource multispectral remotely sensed images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 2113–2134, 2011. View at: Publisher Site | Google Scholar
  127. X. Miao, J. S. Heaton, S. Zheng, D. A. Charlet, and H. Liu, “Applying tree-based ensemble algorithms to the classification of ecological zones using multi-temporal multi-source remote-sensing data,” International Journal of Remote Sensing, vol. 33, no. 6, pp. 1823–1849, 2012. View at: Publisher Site | Google Scholar
  128. L. Luo and G. Mountrakis, “Integrating intermediate inputs from partially classified images within a hybrid classification framework: an impervious surface estimation example,” Remote Sensing of Environment, vol. 114, no. 6, pp. 1220–1229, 2010. View at: Publisher Site | Google Scholar
  129. K. S. He, D. Rocchini, M. Neteler, and H. Nagendra, “Benefits of hyperspectral remote sensing for tracking plant invasions,” Diversity and Distributions, vol. 17, no. 3, pp. 381–392, 2011. View at: Publisher Site | Google Scholar
  130. D. J. Lary, A. H. Alavi, A. H. Gandomi, and A. L. Walker, “Machine learning in geosciences and remote sensing,” Geoscience Frontiers, vol. 7, no. 1, pp. 3–10, 2016. View at: Publisher Site | Google Scholar
  131. A. E. Maxwell, T. A. Warner, and F. Fang, “Implementation of machine-learning classification in remote sensing: an applied review,” International Journal of Remote Sensing, vol. 39, no. 9, pp. 2784–2817, 2018. View at: Publisher Site | Google Scholar
  132. J. L. Ding, M. C. Wu, and T. Tiyip, “Study on soil salinization information in arid region using remote sensing technique,” Agricultural Sciences in China, vol. 10, no. 3, pp. 404–411, 2011. View at: Publisher Site | Google Scholar
  133. M. S. Tehrany, B. Pradhan, and M. N. Jebur, “Spatial prediction of flood susceptible areas using rule based decision tree (DT) and a novel ensemble bivariate and multivariate statistical models in GIS,” Journal of Hydrology, vol. 504, pp. 69–79, 2013. View at: Publisher Site | Google Scholar
  134. S. Liaghat and S. K. Balasundram, “A review: the role of remote sensing in precision agriculture,” American Journal of Agricultural and Biological Sciences, vol. 5, no. 1, pp. 50–55, 2010. View at: Publisher Site | Google Scholar
  135. J. R. B. Bwangoy, M. C. Hansen, D. P. Roy, G. De Grandi, and C. O. Justice, “Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices,” Remote Sensing of Environment, vol. 114, no. 1, pp. 73–86, 2010. View at: Publisher Site | Google Scholar
  136. Y. Ran, X. Li, and L. Lu, “Evaluation of four remote sensing based land cover products over China,” International Journal of Remote Sensing, vol. 31, no. 2, pp. 391–401, 2010. View at: Publisher Site | Google Scholar
  137. A. Ghulam, I. Porton, and K. Freeman, “Detecting subcanopy invasive plant species in tropical rainforest by integrating optical and microwave (InSAR/PolInSAR) remote sensing data, and a decision tree algorithm,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 88, pp. 174–192, 2014. View at: Publisher Site | Google Scholar
  138. D. Tien Bui, B. Pradhan, O. Lofman, and I. Revhaug, “Landslide susceptibility assessment in Vietnam using support vector machines, decision tree, and Naive Bayes models,” Mathematical Problems in Engineering, vol. 2012, Article ID 974638, 26 pages, 2012. View at: Publisher Site | Google Scholar
  139. Towards Data Science, “Seeing the forest for the trees: an introduction to random forest,” 2019, https://towardsdatascience.com/seeing-the-forest-for-the-trees-an-introduction-to-random-forest-41a24fc842ac. View at: Google Scholar
  140. A. O. Ok, O. Akar, and O. Gungor, “Evaluation of random forest method for agricultural crop classification,” European Journal of Remote Sensing, vol. 45, no. 1, pp. 421–432, 2012. View at: Publisher Site | Google Scholar
  141. N. Horning, “Random Forests: an algorithm for image classification and generation of continuous fields data sets,” in Proceedings of the International Conference on Geoinformatics for Spatial Infrastructure Development in Earth and Allied Sciences, Osaka, Japan, 2010. View at: Google Scholar
  142. A. Mellor, A. Haywood, C. Stone, and S. Jones, “The performance of random forests in an operational setting for large area sclerophyll forest classification,” Remote Sensing, vol. 5, no. 6, pp. 2838–2856, 2013. View at: Publisher Site | Google Scholar
  143. V. F. Rodriguez-Galiano, M. Chica-Olmo, F. Abarca-Hernandez, P. M. Atkinson, and C. Jeganathan, “Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture,” Remote Sensing of Environment, vol. 121, pp. 93–107, 2012. View at: Publisher Site | Google Scholar
  144. M. Immitzer, C. Atzberger, and T. Koukal, “Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data,” Remote Sensing, vol. 4, no. 9, pp. 2661–2693, 2012. View at: Publisher Site | Google Scholar
  145. B. Ghimire, J. Rogan, and J. Miller, “Contextual land-cover classification: incorporating spatial dependence in land-cover classification models using random forests and the Getis statistic,” Remote Sensing Letters, vol. 1, no. 1, pp. 45–54, 2010. View at: Publisher Site | Google Scholar
  146. D. S. Chapman, A. Bonn, W. E. Kunin, and S. J. Cornell, “Random Forest characterization of upland vegetation and management burning from aerial imagery,” Journal of Biogeography, vol. 37, no. 1, pp. 37–46, 2010. View at: Publisher Site | Google Scholar
  147. C. Gómez, J. C. White, and M. A. Wulder, “Optical remotely sensed time series data for land cover classification: a review,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 116, pp. 55–72, 2016. View at: Publisher Site | Google Scholar

Copyright © 2021 Karim Ennouri et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
