Introduction
Weed-induced yield loss predictions have been attempted using a variety of approaches. In density-based models, weed density and yield loss follow a hyperbolic relationship: as weed density increases, each additional weed contributes less to yield loss, which approaches an asymptote (Cousens 1985; Spitters et al. 1989). Because density alone provides no information about relative weed size or time of emergence compared with the crop, its predictive power for yield loss may be low and variable (Ali et al. 2013; Kropff 1988). This problem is more evident when late-emerging cohorts of weeds are present, because the model intrinsically assumes that they have a competitive potential similar to early-emerging weeds (Ali et al. 2013; Jeschke et al. 2011).
In contrast, the relative weed leaf area compared with the crop (Lw) is a parameter that implicitly considers differences in plant size. Lw is defined as:
Lw = LAIw / (LAIc + LAIw)          (Equation 1)
where LAIw is the leaf area index (LAI) of the weeds and LAIc is the LAI of the crop for a given ground area and time. By combining the leaf area of the weeds and crop on a ground-area basis, Lw can provide a better representation of weed interference, especially interference through shading (Kropff 1988; Spitters and Aerts 1983). A single large, early-emerging weed may have the same leaf area as several smaller late-emerging weeds, which allows their competitive ability to be weighted proportionally (Spitters and Aerts 1983). Because LAIw will likely include weeds from different cohorts, Lw accounts for the relative time of weed emergence through the additive contribution of individuals to the total LAI (Kropff 1988; Spitters and Aerts 1983): early- and late-emerging weeds result in large and small plants, respectively, at the critical time of weed control.
Kropff and Spitters (1991) proposed using Lw to predict yield loss, because it directly influences growth rate through shading. If LAIw represents a small proportion of the total canopy, weed growth rates will be lower and crop growth rates higher. Conversely, as LAIw increases, crop growth is slowed by weed shading, causing greater yield losses. These weed–crop interactions are affected by the competitive ability of the weed population against the crop, so this ability must be weighted proportionally to its contribution to Lw. To do this, Kropff and Spitters (1991) incorporated Lw and a weed competition coefficient (q) into a model that predicts weed-induced yield loss:
YLk = (q × Lw) / [1 + (q − 1) × Lw]          (Equation 2)
where YLk is yield loss relative to the yield under weed-free conditions (Ywf) (Kropff and Spitters 1991). The q value represents the intensity of competitive interactions between the weed community and the crop: q < 1 indicates that the crop is more competitive than the weed community, and q > 1 indicates that the weeds are more competitive than the crop (Kropff and Spitters 1991).
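As a concrete illustration, Equation 2 can be written as a one-line function. This is a minimal sketch; the function name and example values are ours, not from the study:

```python
def kropff_spitters_yield_loss(lw: float, q: float) -> float:
    """Relative yield loss (Equation 2) from relative weed leaf area
    lw = LAIw / (LAIc + LAIw) and competition coefficient q."""
    return q * lw / (1 + (q - 1) * lw)

# With q = 1 (weeds and crop equally competitive), predicted yield loss
# simply equals the weeds' share of the canopy.
equal = kropff_spitters_yield_loss(0.3, 1.0)      # 0.3

# With q < 1 (crop more competitive), loss falls below that share.
crop_wins = kropff_spitters_yield_loss(0.3, 0.5)  # < 0.3
```

Note that the prediction is bounded: a pure weed canopy (lw = 1) gives a relative loss of exactly 1 regardless of q, consistent with the hyperbolic form of the model.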
While Lw is more biologically relevant for yield loss prediction than density, historically it has not been used in research or production, because leaf area could not be measured quickly. Recently, advances in digital image analysis have made it possible to quantify weed and crop leaf cover efficiently during the growing season (Campillo et al. 2008; Lee and Lee 2011). Thus, Equation 1 can be adapted to the relative weed leaf cover (LCw) by replacing LAI with the leaf cover index (LCI; total leaf cover per unit ground area) of the crop (LCIc) and weeds (LCIw). LCw has been shown to be analogous to Lw early in the growing season before canopy closure, when little leaf overlap occurs (Lotz et al. 1994; Ngouajio et al. 1999; Nielsen et al. 2012). Equation 2 is then modified for leaf cover by replacing Lw with LCw.
To further improve the simplicity and efficiency of LCw data collection, remote sensing tools such as unmanned aerial systems (UASs) can be used to measure LCIw and LCIc (Duan et al. 2017; Rasmussen et al. 2013). UASs are highly flexible tools: they can quickly capture imagery at the field scale when flying at high altitudes and capture high-resolution images at the plant scale at lower altitudes (i.e., <1 cm pixel−1 ground sample distance [GSD]). Determining optimal flight and equipment parameters is critical to accurately distinguish crops and weeds. These parameters include, but are not limited to, sensor type, sensor pixel density, atmospheric/weather conditions, and the crops in question (Mesas-Carrascosa et al. 2015). It is therefore necessary to compare LCw measured at multiple flight altitudes and with different sensor types to balance data-collection efficiency with image resolution.
UASs can be equipped with several sensor types, including red–green–blue (RGB) cameras, which capture light in the visual range of the electromagnetic spectrum (380 to 700 nm), and multispectral (MS) cameras, which sense light within and beyond the visual range in narrow, discrete wavelength ranges (bands) (NASA 2010). Through image analysis, crops can be distinguished from weeds based on differences in spectral reflectance using machine learning algorithms (Venkataraju et al. 2023). MS imagery has been shown to improve plant species differentiation compared with RGB imagery, especially in the near-infrared range (700 to 1,100 nm) (Mohidem et al. 2021). Additionally, MS imagery can provide more stable canopy cover estimation over time than RGB, as it is less sensitive to changes in environmental conditions such as cloud cover and sunlight angle; in other words, it generates less spectral noise (Ashapure et al. 2019; Zhou et al. 2020). While potentially more precise for vegetation differentiation and canopy cover estimation, MS sensors may be inaccessible to some users because they cost more than RGB sensors (Ashapure et al. 2019).
With UASs, implementing the Kropff-Spitters model in real time during the growing season before canopy closure might be feasible. We considered this possibility worth exploring to give growers more information about crop performance in relation to the efficacy of their weed management programs. It was hypothesized that weed-induced yield loss can be predicted with the Kropff-Spitters model adjusted for leaf cover (Equation 2), where maize (Zea mays L.) and weed leaf cover are measured with high-resolution UAS imagery. Additionally, it was hypothesized that prediction accuracy is higher with 15-m MS imagery, as it has a finer GSD and is less sensitive to spectral noise than RGB or 30-m MS imagery. Therefore, the goals of this study were (1) to evaluate the accuracy of the Kropff-Spitters model in predicting maize yield loss based on UAS-measured leaf cover and (2) to assess model performance at different UAS flight altitudes and with different sensor types.
Materials and Methods
Experimental Sites
Images were collected from four existing maize experiments during the summer 2023 growing season in Goldsboro, Clayton, and Lewiston, North Carolina, at the Cherry, Central Crops, and Peanut Belt Research Stations, respectively. In Goldsboro (35.3942°N, 78.04709°W), the soil is a Pantego loam (fine-loamy, siliceous, semiactive, thermic Umbric Paleaquult) with 1.9% organic matter. In Clayton (35.6760°N, 78.5141°W), the soil is a Dothan loamy sand (fine-loamy, kaolinitic, thermic Plinthic Kandiudult) with 1.25% organic matter. The soil of location 1 in Lewiston (Lewiston 1) (36.1332°N, 77.1772°W) is a mosaic of Goldsboro sandy loam (fine-loamy, siliceous, subactive, thermic Aquic Paleudult) and Norfolk sandy loam (fine-loamy, kaolinitic, thermic Typic Kandiudult). The soil of location 2 in Lewiston (Lewiston 2) (36.1343°N, 77.1772°W) is a Norfolk sandy loam (fine-loamy, kaolinitic, thermic Typic Kandiudult). Both Lewiston locations had 1.5% organic matter. The most common weed species were sicklepod [Senna obtusifolia (L.) H.S. Irwin and Barneby] and fall panicum (Panicum dichotomiflorum Michx.) in Goldsboro; Palmer amaranth (Amaranthus palmeri S. Watson), Texas millet [Urochloa texana (Buckley) R. Webster], and ivyleaf morningglory (Ipomoea hederacea Jacq.) in Clayton; and common ragweed (Ambrosia artemisiifolia L.), common lambsquarters (Chenopodium album L.), and U. texana in Lewiston. In Goldsboro, the experiment was planted with maize variety 'DKC63-57(VT2P)' in 18 plots measuring 12 by 30 m with 76-cm row spacing, while Clayton and Lewiston were planted in 54 and 72 plots, respectively, measuring 3.66 by 9.14 m with 'DeKalb 62-08' and 'Pioneer 1870' maize (Table 1). Experiments were arranged as a randomized complete block design with three replications in Goldsboro and four in Clayton and Lewiston, with standard fertilization according to North Carolina State University Extension recommendations (Crozier 2019).
Clayton and Lewiston were prepared with conventional tillage according to North Carolina State University Extension recommendations (Crozier 2019), while Goldsboro was managed under no-till practices. Maize yield was determined by harvesting the inner 12 rows in Goldsboro and all 4 rows per plot in Lewiston 2 using a 6-row commercial harvester, and the inner 2 rows in Clayton and Lewiston 1 with a small-plot harvester. Yields at all locations were adjusted to 15.5% moisture content.
Table 1. Maize planting and management programs for four locations in North Carolina.

a D1 and D2 represent the two maize planting densities in Goldsboro, which were included to produce additional variation in weed/maize leaf cover ratios.
b Two postemergence herbicide applications were included in Goldsboro to keep control plots weed-free.
Aerial Image Collection
At Goldsboro, aerial images were collected when maize was at the V3 and V5 stages (Ritchie et al. 1986). In Clayton, images were collected when maize was at V5 and V7, and at both Lewiston locations, images were collected at V8, just before canopy closure. Variation in sampling time was due to the need to work around local weed control operations and properly quantify LCw before weed removal. At all locations, RGB and four-band MS (red, green, red-edge, and near-infrared) images were collected at flight altitudes of 15 and 30 m using a DJI Mavic 3M (DJI, Shenzhen, China) (Table 2). GSD was 0.47 cm pixel−1 with RGB-15-m imagery, 0.87 cm pixel−1 with RGB-30-m imagery, 0.66 cm pixel−1 with MS-15-m imagery, and 1.45 cm pixel−1 with MS-30-m imagery. Image collection took place around solar noon (1200 to 1400 hours). For each location and date, images of each sensor and altitude treatment were individually stitched into maps (i.e., orthomosaics) using Agisoft Metashape v. 2.0.1 (Agisoft, St. Petersburg, Russia), producing a total of 32 unique orthomosaics. Each MS orthomosaic was a composite of all four bands collected at that image collection.
Table 2. DJI Mavic 3M RGB and MS sensor and RTK specifications.a

a Abbreviations: MS, multispectral; RGB, red–green–blue; RTK, real-time kinematic; CMOS, complementary metal-oxide-semiconductor.
Ground Sampling/Processing
To validate substituting Lw with LCw (Equations 1 and 2), LAIc and LCIc were compared for each aerial image collection. LAIc was measured twice per plot in Goldsboro and in a randomly selected third of the plots in Clayton and Lewiston with an LAI-2200C plant canopy analyzer (LI-COR, Lincoln, NE, USA) immediately following aerial image collection. In Goldsboro, LAIc was measured within 0.4-m2 quadrats from which weeds were removed. In Clayton and Lewiston, LAIc was measured below the maize canopy but above the weeds (50 cm) between the two middle rows of the plot. At all locations, a 90° view cap was used to exclude the operator from the LAI readings. LCIc was measured on a per plot basis for each orthomosaic following aerial data processing.
Aerial Data Processing
To distinguish between maize and weeds in the aerial imagery, a supervised object-based classification algorithm using a support vector machine (SVM) (Cortes and Vapnik 1995) was trained individually for each orthomosaic in ArcGIS Pro v. 3.1.1 (Esri, Redlands, CA, USA) (Figure 1). Training data for each SVM were developed by manually classifying the area within eight random 4-m2 quadrats on the corresponding orthomosaic as soil, maize, or weeds. Additional training samples were generated if irregular atmospheric conditions (e.g., intermittent cloud cover) caused classification errors. The area per plot classified as maize and weeds defined its LCIc and LCIw, which were used to calculate a per plot LCw (Equation 1) for each orthomosaic. To determine the accuracy of LCw estimation based on the SVM, confusion matrices were developed by manually labeling 500 random points on each RGB-15-m classified map as maize, weeds, or soil via the accuracy assessment tool in ArcGIS Pro.
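The per-orthomosaic workflow of classify-then-count can be sketched as follows. This is only an illustration: a nearest-centroid classifier stands in for the SVM to keep the sketch dependency-light, and the four-band reflectance values are fabricated, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated 4-band reflectance samples (rows = pixels, columns = green,
# red, red-edge, near-infrared); values are illustrative only.
soil  = rng.normal([0.30, 0.25, 0.35, 0.40], 0.02, (200, 4))
maize = rng.normal([0.05, 0.12, 0.30, 0.60], 0.02, (200, 4))
weeds = rng.normal([0.06, 0.10, 0.25, 0.50], 0.02, (200, 4))

# Class centroids play the role of the manually labeled training quadrats.
centroids = np.stack([soil.mean(0), maize.mean(0), weeds.mean(0)])

def classify(pixels):
    """Assign each pixel to the nearest centroid (0 soil, 1 maize, 2 weeds)."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Classify a hypothetical plot's pixels and derive its leaf cover indices.
plot = np.vstack([soil[:50], maize[:80], weeds[:20]])
labels = classify(plot)
lci_maize = np.mean(labels == 1)            # LCIc: maize cover fraction
lci_weeds = np.mean(labels == 2)            # LCIw: weed cover fraction
lcw = lci_weeds / (lci_maize + lci_weeds)   # relative weed leaf cover (LCw)
```

The final line mirrors Equation 1 with leaf cover in place of leaf area: LCw is the weed fraction of the total vegetated area in the plot.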

Figure 1. Example of output of maize and weed classification from red–green–blue (RGB)-15-m imagery using a supervised object-based classification algorithm via a support vector machine.
Yield Loss Prediction
To predict yield loss, Ywf was defined as the highest-yielding plot per block within each location with minimal to nondetectable weed cover. Observed yield loss (YLobs) was defined as the difference between Ywf and the observed yield (Yobs) within each block. The predicted yield loss (YLp) relative to Ywf was then calculated (in kg ha−1) as:
YLp = Ywf × (q × LCw) / [1 + (q − 1) × LCw]          (Equation 3)
To determine q for each location and stage, YLp was calculated iteratively with q values ranging from 0.1 to 1.5 using RGB-15-m-derived LCw. RGB-15-m data were used rather than MS or 30-m data because they present the widest range and best balance of spectral signal and optical resolution (pixels cm−1). The q that produced the smallest absolute difference between YLp and YLobs was taken as the optimum for each image collection, referred to hereafter as qo.
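The iterative search for qo can be sketched as a simple grid search. The grid step and all plot data below are our assumptions, fabricated so the optimum is known in advance:

```python
import numpy as np

def predict_yield_loss(lcw, q, y_wf):
    """Predicted yield loss (kg ha-1) from relative weed leaf cover,
    competition coefficient q, and weed-free yield (YLp formula)."""
    return y_wf * q * lcw / (1 + (q - 1) * lcw)

def optimize_q(lcw, yl_obs, y_wf, q_grid=None):
    """Return the q in [0.1, 1.5] minimizing the mean absolute
    difference between predicted and observed yield loss."""
    if q_grid is None:
        q_grid = np.arange(0.10, 1.51, 0.01)  # step size is our assumption
    errors = [np.mean(np.abs(predict_yield_loss(lcw, q, y_wf) - yl_obs))
              for q in q_grid]
    return q_grid[int(np.argmin(errors))]

# Hypothetical plots: LCw values and a weed-free yield of 12,000 kg ha-1.
lcw = np.array([0.10, 0.25, 0.40])
y_wf = 12000.0
yl_obs = predict_yield_loss(lcw, 0.5, y_wf)  # fabricated so the optimum is 0.5
q_o = optimize_q(lcw, yl_obs, y_wf)          # recovers q near 0.5
```

With real data, yl_obs would come from harvested yields rather than the model itself, and the recovered qo would vary by location and stage as reported in Table 3.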
Model Correction
A correction factor (c) was generated to compensate for changes in model performance due to the bias introduced when substituting Lw with LCw (Equation 2). Because q and LCw values for each image collection account for variation among locations and phenological stages (Kropff and Spitters 1991; Ngouajio et al. 1999), a single c was developed with RGB-15-m training data from the location with the greatest standard error in LCw and validated with the remaining locations. This was done to account for variation in crop–weed ratios without overfitting the model to all collected data. Where ΔYL is the difference between YLp and YLobs, c was calculated as the slope, when different from zero, of the model that best described the relationship between ΔYL and YLobs. For each plot, the corrected yield loss prediction (YLcorr) was calculated as:

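The estimation of c can be sketched as follows. The data are fabricated to mimic the over/underprediction pattern reported later, and the final correction step is one plausible reading (inverting the fitted linear bias), not necessarily the authors' exact Equation 4:

```python
import numpy as np

# Fabricated predicted/observed losses (kg ha-1) showing overprediction
# at low YLobs and underprediction at high YLobs.
yl_obs = np.array([500.0, 1500.0, 3000.0, 4500.0, 6000.0])
yl_p   = np.array([900.0, 1600.0, 2900.0, 3600.0, 4200.0])

delta_yl = yl_p - yl_obs                 # ΔYL for each plot
c, b = np.polyfit(yl_obs, delta_yl, 1)   # c = slope of ΔYL vs. YLobs

# Hypothetical correction (our reading, not the published formula):
# invert the fitted bias ΔYL ≈ c*YLobs + b to recover the observed scale.
yl_corr = (yl_p - b) / (1 + c)
```

On this fabricated set, the fitted slope is negative, matching the sign of the c reported in the study, and the corrected predictions sit closer to the observations than the raw ones.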
To assess the necessity of running the model with an individual qo for each image collection, an additional analysis was done with the average q value over all locations and crop phenological stages (qx̅). If qx̅ provided similar accuracy to qo for each location, then the former was considered robust and was chosen for the following analyses.
Statistical Analysis
Linear regression analyses were performed with JMP Pro 17 (SAS Institute, Cary, NC, USA) to compare model performance across locations, phenological stages, UAS imagery types, and weed competition measurements. Additional linear regressions were run to validate the use of LCI in place of LAI. Root mean-square error (RMSE), r2, and P-values were used to characterize regression fit and model accuracy.
Results and Discussion
UAS-derived Leaf Cover Estimation Accuracy
LAIc was positively related to LCIc across all locations and phenological stages with RGB-15-m imagery (r2 = 0.72; Figure 2). Classification accuracy of the SVM was 83%, 76%, and 96% for maize, weeds, and soil, respectively, and the kappa coefficient, which accounts for the possibility of chance agreement between classified and reference data, was 0.78 (Cohen 1960). Given the high accuracy for the three classes and the strong linear relationship with LAIc, LCI was used as a surrogate for LAI.
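Cohen's kappa follows directly from a confusion matrix. The sketch below uses a fabricated 500-point matrix, not the study's actual counts:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's (1960) kappa from a square confusion matrix
    (rows = reference labels, columns = classified labels)."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                                # observed agreement
    p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Fabricated 500-point accuracy assessment (maize, weeds, soil).
matrix = [[140, 15, 5],
          [20, 100, 10],
          [5, 5, 200]]
kappa = cohens_kappa(matrix)   # agreement well above chance
```

A kappa near 0 would mean agreement no better than chance, while a perfectly diagonal matrix yields kappa = 1, which is why kappa is a stricter summary than overall accuracy alone.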

Figure 2. Relation between unmanned aerial system (UAS) red–green–blue (RGB)-15-m-derived maize leaf cover index (LCI) and ground-measured maize leaf area index (LAI) in four locations in North Carolina.
Determining qo and qx̅ for the Kropff-Spitters Model
Weed competition was described by qo values that varied across locations and phenological stages, ranging from 0.2 at V5 in Clayton to 0.8 at V8 in Lewiston 2 (Table 3). The qx̅ across all 268 plots was 0.54, indicating that, on average, maize was more competitive than weeds regardless of early-season phenological stage, as qx̅ < 1 (Kropff and Spitters 1991). Compared with Goldsboro, the competition coefficients at Clayton V7 and Lewiston were 2.3 and 2.7 times greater, respectively (Kropff and Spitters 1991; Table 3). The stability of qo in Goldsboro across maize growth stages suggests that the weed community's competitive ability grew proportionally to that of maize, while in Clayton, the weed community progressively increased its competitive ability over maize. It is expected that as the growing season progresses and the weeds in each community get larger, their competitive ability will increase, as observed in Clayton. Additionally, Goldsboro had considerably less weed pressure and species diversity than the other locations, likely resulting in its lower qo (Table 3).
Table 3. Red–green–blue (RGB)-15-m-derived Kropff and Spitters model attributes and observed yield measurements for all locations and stages in North Carolina.

a Optimized competition coefficient.
b Relative weed leaf cover (Equation 2). Values in parentheses represent the standard error of the mean.
Yield Loss Prediction and Correction
There were no major differences between the qo and qx̅ models, with both having c = −0.69. Additionally, the slopes of the qo and qx̅ corrected models were 0.78 ± 0.01 and 0.75 ± 0.01, respectively, with intercepts of 214 ± 13 and 227 ± 13. Therefore, qx̅ was chosen for further analyses so that a single value applied to all locations and sampling times.
Running the model without correcting for deviation from YLobs resulted in low accuracy for YLp (Figure 3A). This was most evident when YLobs was less than and greater than 3,000 kg ha−1, where YLp was over- and underpredicted, respectively (Figure 4). This trend may be due to the use of LCI in place of LAI, as leaf cover cannot capture the full magnitude of competition when leaf overlap occurs, or at early stages when there is no overlap at all. To account for differences between the two, c was developed using RGB-15-m training data from Clayton (Figure 4). Clayton was selected because it had the highest overall variability in LCw of the four locations (Table 3). If the model measured YLp accurately, the slope in Figure 4 would be zero, indicating no difference between YLp and YLobs. However, the slope of the model (i.e., c) was −0.69 (Figure 4). Because the relationship was highly linear, incorporating c (Equation 4) dramatically reduced model error and brought corrected yield loss predictions (YLcorr) closer to YLobs than YLp (Figure 3A and 3B). When corrected for LCI (Equation 4), model performance increased in the validation data set (Goldsboro and Lewiston locations), with r2 increasing from 0.17 to 0.97 and RMSE decreasing from 705 kg ha−1 to 219 kg ha−1 (Figure 3).

Figure 3. Relationship between predicted (q x̅) and observed yield loss at validation locations (Lewiston 1 and 2 and Goldsboro) in North Carolina with all red–green–blue (RGB) and multispectral (MS) imagery taken at 15 m and 30 m pooled together. (A) Relationship between predicted and observed yield loss before incorporation of the correction factor c; (B) relationship after incorporation of c.

Figure 4. Relationship between Δ yield loss (difference between predicted and observed yield loss) and observed yield loss at Clayton V5 and V7 (red–green–blue [RGB]-15-m) in North Carolina using the q x̅ analysis. The red dashed line represents Δ yield loss = 0.
Flight altitude and image sensor had minimal effect on prediction accuracy (Table 4). Before correction, the relationship between YLobs and YLp was weak for all sensor–altitude combinations (r2 < 0.14) except RGB-15-m (r2 = 0.53). However, when YLcorr was estimated, all sensors and altitudes performed equally well, with r2 > 0.96. The correction also reduced the overall RMSE more than 3-fold, from 650 kg ha−1 to 202 kg ha−1 (Table 4).
Table 4. Best-fit linear model parameters for the relation between YLobs and YLp for different sensor–altitude combinations based on the qx̅ analysis for data collected in Goldsboro and Lewiston 1 and 2 in North Carolina.a

a Values in parentheses represent the standard error of the mean.
b For all models, P < 0.0001.
c RMSE, root mean-square error.
Caveats
Considerations for q Estimation
Weed communities will vary in their competitive ability relative to the crop due to environmental conditions, weed species composition, and crop species and planting density. Therefore, while a single value of q, like the qx̅ used in our study, would be ideal, it is unlikely that the value presented could accurately predict yield loss across all environmental conditions, management strategies, and growth stages. Instead, through iterative testing and validation, general values of q can be determined for weed communities with a few predominant species and at specific growth stages. This type of general descriptor is not unusual in agriculture; for example, individual evapotranspiration coefficients have been developed for multiple crops and are used across large areas and in different environments and soil types (Hargreaves and Samani 1985; Hatfield and Dold 2018).
Substitution of LCI for LAI
The estimation of total leaf area and LAI from aerial imagery is a difficult process. The equipment currently available for this purpose, such as light detection and ranging (LIDAR) or multisensor cameras, is expensive and technically complex, making it difficult to use in agricultural production at present (Liu et al. 2021). For this reason, we explored substituting total leaf area with leaf cover, which can be accurately estimated with UASs using low-cost sensors. It must be acknowledged that when using leaf cover in place of total leaf area, competition for light within the canopy and its impact on yield loss might not be fully captured due to occlusion among weeds and the crop. Leaf area is associated with physiological processes that can determine how competitive a plant will be in a stressful environment where resources such as water and nutrients are limited (Kropff 1988). For all row crops, a critical period of weed control (CPWC) exists early in the season, before crop canopy closure, when any weeds present have the greatest impact on yield (Gantoli et al. 2013; Hall et al. 1992). After the CPWC, the crop typically has grown enough to shade smaller late-emerging weeds, reducing their competitive ability (Gantoli et al. 2013; Hall et al. 1992). Furthermore, the CPWC is when growers can implement effective postemergence control (e.g., cultivation, herbicides) while minimizing crop injury. Thus, image collection for the Kropff and Spitters model is ideally done during the CPWC. Previous studies have shown that while some occlusion may still occur among leaves (Figure 1), leaf cover and leaf area are similar early in the season (Nielsen et al. 2012; Ramirez-Garcia et al. 2012). We contend that our results illustrate that UAS-derived leaf cover, collected at the right time, captures enough detail in LCw relative to LAIw to make valid yield loss predictions, especially with the incorporation of a correction factor. This is further supported by the fact that the correction factor developed with imagery from Clayton (Figure 4) successfully improved model performance in Lewiston and Goldsboro (Figure 3).
Machine learning and artificial intelligence approaches are starting to enable more accurate LAI estimation from aerial imagery, which might improve the robustness of models and algorithms predicting yield loss. For example, using data from multiple sensors (RGB and several MS sensors), Liu et al. (2021) predicted LAI from aerial images with shallow and deep machine learning in experimental maize plots, with accuracy estimated at r2 = 0.70 to 0.89 and relative RMSE = 13%. In our case, we related LCI from aerial images to ground-truth LAI at r2 = 0.72 and relative RMSE = 7% (Figure 2). Therefore, our simpler algorithm represented LAI in a manner similar to more complex machine learning approaches. Interestingly, Liu et al. (2021) also observed over- and underpredictions at low and high LAI, respectively, similar to the over- and underprediction pattern in our yield loss predictions (Figure 4). Therefore, even with more detailed image processing systems, leaf occlusion seems to be an intrinsic limitation of UAS image collection after the CPWC (i.e., after canopy closure).
Correcting for Differences in Predicted versus Observed Yield Loss
Substituting LAI with LCI introduced a bias into the Kropff-Spitters model (Equation 2), creating the need for a correction factor. When YLobs was <3,000 kg ha−1, the uncorrected model overpredicted the impact of weeds on the crop. Very low maize yield indicates suboptimal growing conditions (e.g., soil type, moisture, pest damage); therefore, it is possible that adding weed interference to a situation where other, more important limiting factors were present caused YLp overestimation. Conversely, when maize growth and yield potential are high, fast canopy closure can result in high levels of occlusion between weed and crop leaves, because UAS imagery cannot penetrate the canopy. In this case, LCw is underestimated, because weeds that shade maize within the canopy are out of view of UAS imagery. Also, the relation between leaf cover and belowground or noncompetitive interactions involved in yield loss is minimized, as the model cannot quantify these at low LCw levels (McKenzie-Gopsill et al. 2019; Stone et al. 1998).
The present study confirmed that leaf area–based prediction models, such as the one proposed by Kropff and Spitters (1991), can make use of UAS imagery as a viable and practical tool for estimating leaf cover. While LCw initially resulted in poor model performance, predictions were greatly improved through the incorporation of a correction factor. Furthermore, combined with UAS imagery, the predictive model can be used practically and efficiently at large spatial scales within commercial growing operations. Through automation of image analysis and optimization of the model, growers could generate yield loss predictions shortly after UAS image collection. This would allow growers to make management and financial decisions (e.g., additional postemergence herbicide or fertilizer applications before label cutoff times at canopy closure) as early as possible. Regarding model efficiency, because accuracy was similar between RGB and MS sensors regardless of altitude within the range evaluated, future optimization of the system could focus on RGB imagery and higher altitudes (≥30 m). This would allow UAS pilots to cover larger fields in less time and would increase computational efficiency. Ultimately, the end goal of systems like the one studied here is to give growers the power to make financial and management decisions as early as possible and to gain time to anticipate ways of ensuring the sustainability of their operations.
Acknowledgments
The authors thank the staff at the various research farms and Zachary Taylor for technical support.
Funding statement
This research was supported by the U.S. Department of Agriculture–Natural Resources Conservation Service Conservation Innovation grant no. NR213A750013G031, the U.S. Department of Agriculture Area-Wide Funds, Hatch Project NC02906, and the North Carolina Agricultural Foundation, Inc.
Competing interests
The authors declare no conflicts of interest.