Introduction
As the global population continues to grow, the increasing demand for food has made improving agricultural productivity and reducing resource waste one of the key challenges in agricultural development. In traditional farming, weeds not only compete with crops for nutrients and water but also provide habitats for various pests and diseases, leading to declines in crop yields. According to 2023 data from the Food and Agriculture Organization of the United Nations (FAO), more than 8,000 weed species have been identified worldwide, with more than 26% of them causing crop yield reductions. Given these detrimental effects, effectively controlling weed growth has become a critical aspect of crop cultivation (Deng et al. 2018).
To eliminate the impact of weeds on crops, various weed control methods have been explored. Traditional manual weeding methods, such as using hoes, sickles, or push mowers, are simple to operate but have high labor intensity, low efficiency, and high costs. The effectiveness of these methods often depends on the skill level of the workers, frequently leading to missed weeds. Although mechanical weeding improves efficiency, its application in large-scale fields remains limited (Bloomer et al. 2024). Chemical herbicide spraying is highly efficient, but it can result in herbicide waste, environmental pollution, and negative impacts on non-target plants and surrounding ecosystems.
In the face of these limitations of traditional weeding methods, and especially with the rapid 21st-century advances in sensor technology (Shaikh et al. 2022), machine learning algorithms (Liakos et al. 2018), artificial intelligence (AI) (Sharma et al. 2023), and drone technology (Wen et al. 2018), intelligent field-weeding robots have emerged. These robots, equipped with advanced image processing technology and AI algorithms, use vision sensors, GPS systems, robotic arms, laser tools, and automated control systems to accurately detect, locate, and eliminate weeds without harming crops. Based on different weeding techniques, intelligent weeding robots can be classified into precision-spraying robots, mechanical weeding robots, and thermal weeding robots (Hall et al. 2017; Hu et al. 2012; Quan et al. 2021; Xing et al. 2022). Among these, laser weeding technology, which allows for precise weed removal, represents a future trend. Precision weeding, which targets only specific weeds and avoids affecting crops and soil, relies on sensor technology and AI algorithms. Unlike traditional broad-spectrum weed control methods, precision weeding significantly improves resource efficiency and reduces environmental pollution, making it especially suitable for sustainable agriculture and organic crop production. Sustainable agriculture refers to conducting agricultural production in an eco-friendly and economically viable manner that meets current food demands while protecting the environment and natural resources, so that future generations can continue farming. It aims to reduce overreliance on land, energy, and water resources; minimize the excessive use of chemicals and harmful substances; and promote soil health, biodiversity, and ecosystem balance.
Compared with traditional weeding methods, intelligent field-weeding robots can significantly reduce labor costs, improve weeding efficiency, and minimize environmental impacts. Precision weeding in interrow and near-row areas will be a key area for technological breakthroughs in future farmland weed control. Under the broader context of smart agriculture, intelligent field-weeding robots are becoming a research hotspot in agricultural technology. A review of key technologies and research advances in intelligent field-weeding robots will not only provide valuable insights for researchers in related fields but also offer new perspectives for the intelligent development of agricultural production.
To understand the development process of intelligent weeding robots, this study used bibliometric and scientific mapping methods (Chen et al. 2015) to analyze literature on intelligent weeding from the core Social Sciences Citation Index database of the Web of Science (WOS) platform. Keywords such as "weeding robot," "weeder," and "robot platform for weeding" were set in the search interface, covering the years 2009 to 2023, and literature types including "article," "review article," "early access," and "book chapters" were selected. After removal of irrelevant and duplicate documents, a total of 385 relevant papers were identified. The number of publications, years, and keyword co-occurrence maps, as well as country, author, and institution maps, were obtained using CiteSpace 6.2.R4 (64-bit) Basic software (Xu et al. 2023).
As shown in Figure 1, the number of relevant publications has been increasing exponentially. From 2009 to 2015, the number of publications was relatively low and stable. After 2016, it rose steadily, indicating growing attention to the research theme of intelligent weeding.

Figure 1. Annual publication volume of research literature relevant to intelligent weeding.
Figure 2 shows the clustering based on keywords such as “weeding,” “recognition,” “deep learning,” “navigation,” and “weeding equipment.” The colors range from blue (weakest relevance) to red (strongest relevance). The clustering was based on the two key technologies of intelligent weeding and the current development status, selecting 15 major clusters with 277 nodes and 1,114 links, resulting in a network density of 0.0291, a Q value of 0.7189 (>0.3), and a mean silhouette value of 0.8921 (>0.4), indicating a reasonable clustering structure and good homogeneity within clusters. The diagram also shows that researchers focus on deep learning technology, machine vision, and the development of weeding robots, with strong interrelations between these content areas, consistent with practical applications in production.

Figure 2. Co-occurrence graph of key terms.
In recent years, researchers have optimized deep learning algorithms (Chong et al. 2023; Weyler et al. 2023) to achieve weed and crop recognition and localization within the machine vision field. They have also developed mobile robot platforms to plan navigation routes (Diao et al. 2023; Li et al. 2023c), thereby achieving automated precision weeding (Guo et al. 2023; Li et al. 2023a; Tran et al. 2023). Analysis of Figure 3 shows that institutions such as China Agricultural University (Li et al. 2023b), the University of California (Su et al. 2020), the Indian Council of Agricultural Research (ICAR) (Pandey et al. 2023), and the Consejo Superior de Investigaciones Científicas (CSIC) (Emmi et al. 2023) have produced significant research outcomes in recent years. Scholars from the United States, such as Fennimore and Slaughter (Raja et al. 2020; Su et al. 2020) and Johnson (Johnson et al. 2018); Chinese experts such as Cao, Tian, and Ge (Tian et al. 2021; Zhang et al. 2022); Spanish researchers such as Angela Ribeiro (Conesa-Muñoz et al. 2016) and Manuel Pérez-Ruiz (Aravind et al. 2017); and scholars from Germany, India, Australia, and Japan have all made notable contributions to international research.

Figure 3. Research country, author, and institutional affiliation map.
Overall, the field of intelligent weeding has been a focus of attention, with substantial research results from countries including the United States, China, Spain, Germany, India, Australia, and Japan. Concurrently, deep learning technology has been applied to address weed removal issues. Future research will continue to focus on the development of weeding robots.
Recognition based on machine vision is a prerequisite for effective weed removal, while navigation and localization technology determines the efficiency of precision weeding. These two aspects constitute the key technologies of intelligent weeding, as illustrated in Figure 4. This paper reviews the research status of intelligent weeding robots and summarizes their critical technologies, including an overview of some public datasets. It elaborates on the research progress of intelligent weeding robots categorized by weeding method. Finally, the paper concludes with a summary and a discussion of future development trends for intelligent weeding robots.

Figure 4. Key technologies of intelligent weed control.
Research Progress of Key Technologies in Intelligent Weeding Robots
Accurately and intelligently distinguishing between weeds and crops in the field is a prerequisite for the precise weeding operations of weeding robots. Navigation and localization technology, which determines the efficiency of precision weeding, is essential. These two aspects constitute the key technologies of intelligent weeding robots (Yuan et al. 2020). This paper reviews these two key technologies.
Recognition Technologies Based on Machine Vision
Research on the recognition of farmland weeds has been extensive, with methods including manual recognition, spectral analysis, spectral imaging, infrared recognition, and machine vision recognition (Chen et al. 2013). The proportions of these recognition methods in the WOS platform are shown in Figure 5. Manual recognition is inefficient, labor-intensive, and costly, with no recent references, indicating its eventual phaseout. With the continuous advancement of science and technology, computer vision technology has gradually been applied to various fields. In the 1980s, computer vision technology began to be used in agricultural applications (Wang et al. 2001). Currently, weed recognition mainly relies on machine vision technology, and the research and development of intelligent field-weeding robots cannot be separated from machine vision technology. As shown in Figure 5, other recognition methods represent a smaller proportion and are less commonly used in actual weeding operations.

Figure 5. The proportion of references on each recognition method.
Weeds are generally found in complex field environments, and any recognition technology must apply specific characteristics to the objects being identified. Weed recognition primarily involves extracting features such as morphology, color, and texture of crops and surrounding weeds. Researchers provide these extracted features to machine learning algorithms for recognition, as shown in Figure 6, which depicts the traditional machine learning–based recognition workflow. This stage of feature extraction is referred to as manual feature recognition, which includes recognition technologies based on color, shape, texture, and spectrum. As the cost of computer hardware decreases and central processing unit (CPU) computing power increases, deep learning, which requires extensive data computation, has gradually expanded into the agricultural field. Deep learning methods extract weed features more effectively than manual feature extraction, and this stage of feature extraction is referred to as deep learning recognition technology. The following sections will detail recognition technologies based on manual features and deep learning.

Figure 6. Recognition of traditional machine learning.
Color-based Recognition
Compared with other feature-based recognition methods, color features require less computational effort and are more effective for weed detection in fields with crops that have distinctive colors. Researchers have utilized color indices to segment weeds, crops, and soil, employing recognition methods based on red–green–blue (RGB) (Chen et al. 2009; Jafari et al. 2006; Nieuwenhuizen et al. 2007), hue–saturation–value (HSV) (Hamuda et al. 2017; Miao et al. 2020), and hue–saturation–intensity (HSI) color spaces (Li et al. 2016).
In the RGB color space, the green channel carries more useful information than the red channel, so threshold algorithms must be integrated to accomplish the segmentation task. However, segmenting plant pixels under very low or very bright lighting is challenging in this space, whereas the HSV and HSI color spaces are more robust to changes in lighting conditions. Table 1 summarizes selected color-based recognition algorithms and their recognition accuracies.
Table 1. Color-based recognition results.

a HSI, hue–saturation–intensity; HSV, hue–saturation–value; RGB, red–green–blue.
Color features are easy to compute and support quick decisions, making them suitable for real-time image processing. Ordinary cameras suffice for feature extraction, so the approach applies to various crops and weed types. However, variations in lighting and shadows can degrade recognition performance, and when plant colors are similar, relying solely on color features may not provide satisfactory separation. To improve recognition accuracy, color must be combined with other features.
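To make the thresholding approach described above concrete, the following is a minimal sketch of color-index segmentation, assuming an RGB field image; it uses the excess-green index (ExG = 2g − r − b) with Otsu thresholding, one common choice among the threshold algorithms cited earlier, and the parameter choices are illustrative rather than drawn from any specific study.

```python
import cv2
import numpy as np

def segment_vegetation(path):
    """Segment plant pixels from soil with an excess-green index + Otsu."""
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    total = b + g + r + 1e-6                     # avoid division by zero
    # Chromatic coordinates reduce sensitivity to overall brightness
    rn, gn, bn = r / total, g / total, b / total
    exg = 2.0 * gn - rn - bn                     # excess-green index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu picks a global threshold separating vegetation from background
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# mask = segment_vegetation("field.png")  # white = vegetation, black = soil
```

Such a mask separates vegetation from soil; distinguishing weeds from crops within the mask still requires the additional features discussed in the following subsections.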
Shape-based Recognition
Shape features are crucial morphological characteristics in biology and play a key role in distinguishing between crops and weeds. Researchers have combined these features with machine learning algorithms, such as artificial neural networks (ANN), morphological processing algorithms, and classification algorithms like support vector machines (SVM) (Bakhshipour and Jafari 2018; Murawwat et al. 2018). Some studies have integrated shape features with other features, such as color and spectral features, utilizing a comprehensive set of morphological characteristics for analysis (Hussin et al. 2013).
Murawwat et al. (2018) applied SVM and blob analysis techniques for weed recognition. In non-occluded scenarios, the recognition accuracy reached 100%; however, in complex scenes where weeds overlap with carrot (Daucus carota L.) plants, the accuracy dropped to 90%, as shown in the segmented image in Figure 7A. Bakhshipour and Jafari (2018) compared the performance of SVM and ANN in classifying sugar beet (Beta vulgaris L.) plants and weeds: SVM achieved a crop recognition accuracy of 96.67% and a weed accuracy of 93.33%, while ANN achieved 93.33% for crops and 92.5% for weeds. The classification results are shown in Figure 7B. Wong et al. (2013) proposed autonomous fine-tuning and feature selection using a genetic algorithm, tested under the assumption that weeds are young and non-occluded. The results show that shape solidity is the most prominent feature and alone achieved a 90% recognition rate, while the combination of shape and moment invariants achieved 100% recognition, as shown in the segmented image in Figure 7C. Kiani and Jafari (2012) combined discriminant analysis with back-propagation neural networks to classify maize (Zea mays L.) plants and weeds, achieving a maize recognition accuracy of 100% and a weed recognition accuracy of 96%, as depicted in Figure 7D.

Figure 7. Classification performance based on shape features.
Additionally, Jeon et al. (2011) conducted research on recognizing crops against the soil background using machine learning algorithms such as ANN, utilizing shape features to identify weeds. Li et al. (2010) employed morphological operations and distance transformation–based threshold segmentation to separate overlapping leaves. They then used the ant colony optimization algorithm and SVM classifiers for feature selection and classification, achieving a recognition accuracy of 95%.
Shape features are effective when plant leaves are intact and not overlapping. However, when there is significant overlap or damage to the leaves, extracting shape features becomes much more difficult. Furthermore, when multiple plant species with similar shapes are present in field images, classification based on shape features becomes highly complex.
Texture-based Recognition
Textural features represent the spatial arrangement of pixel grayscale levels in an image region, which are critical for recognizing objects or regions of interest in images. Researchers have used the gray-level co-occurrence matrix to extract textural features of crops and weeds (Mustafa et al. 2007; Wu et al. 2009) and employed supervised learning algorithms, such as SVM and ANN, for weed recognition. To address the challenge of significant leaf occlusion hindering effective textural feature extraction, some studies have adopted wavelet decomposition methods combined with supervised learning algorithms for recognition (Bakhshipour et al. 2017).
As shown in Figure 8, the horizontal axis represents crop/weed recognition methods based on textural features, and the vertical axis indicates the corresponding crop and weed recognition accuracy. PCA refers to principal component analysis; GLCM refers to gray-level co-occurrence matrix; and FFT refers to fast Fourier transform. The figure shows that crop recognition accuracy across the methods ranges from 89% to 92%, and weed recognition accuracy ranges from 85% to 98%. The method of Wu et al. (2009) achieves the highest recognition accuracy. Their image-segmentation process converts the original color image to grayscale based on the statistical values of the red, green, and blue components. The textural features of weeds and maize seedlings are then obtained using the GLCM and the statistical properties of the grayscale histogram, and these features are used in classification. PCA is employed to reduce the dimensionality of the feature space while retaining the textural features that contribute most to discrimination. SVM serves as the classification tool to identify weeds and maize seedlings in the early growth stages of a maize field. The results show that SVM classifiers with different feature selection strategies can successfully identify weeds and maize, achieving accuracies ranging from 92.31% to 100%.

Figure 8. Texture-based crop/weed recognition accuracy. ANN, artificial neural networks; FFT, fast Fourier transform; GLCM, gray-level co-occurrence matrix; PCA, principal component analysis; SVM, support vector machines.
Like shape features, the extraction of textural features is a computationally intensive image processing task. Typically, feature selection and dimensionality reduction algorithms are used to select the most contributory feature parameters for input into classifiers. Effective texture analysis requires a large amount of high-quality labeled data for training. The advantage of textural features lies in their stability when dealing with occluded leaves and in distinguishing between crops and weeds, even under varying lighting conditions.
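To make this pipeline concrete, the following is a minimal sketch of a GLCM-PCA-SVM workflow, assuming grayscale plant patches and binary crop/weed labels are already available; the offsets, quantization level, and component count are illustrative assumptions, not the parameters used by Wu et al. (2009).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 names
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(patch):
    """Texture descriptors from co-occurrence matrices of a uint8 patch."""
    # Quantize 0-255 grayscale to 32 levels; two offsets, two directions
    glcm = graycomatrix(patch // 8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=32, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(patches, labels):
    """Fit a PCA + RBF-SVM pipeline on GLCM features (0 = crop, 1 = weed)."""
    X = np.array([glcm_features(p) for p in patches])
    # PCA trims redundant texture dimensions before the SVM
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```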
Spectrum-based Recognition
The main challenge in classifying weeds and crops lies in their similar spectral characteristics. If the weed and crop leaf colors are different, this recognition technique can effectively distinguish them; if their colors are similar, other features such as shape must be included for efficient recognition. Researchers have used hyperspectral cameras to collect data and then integrated machine learning algorithms for recognition (Bai et al. 2013; Gao et al. 2018; Herrmann et al. 2013; Pantazi et al. 2016; Piron et al. 2008).
Gao et al. (2018) explored the feasibility of using a near-infrared snapshot mosaic hyperspectral camera for weed and maize classification. They tested random forest (RF) models with different spectral feature combinations, identifying an optimal RF model with 30 key spectral features. The average accuracy for maize, field bindweed (Convolvulus arvensis L.), Rumex spp., and Canada thistle [Cirsium arvense (L.) Scop.] was 1.0, 0.789, 0.691, and 0.752, respectively, as shown in Figure 9A. Pantazi et al. (2016) achieved optimal results with active learning using self-organizing map (SOM) and mixture of Gaussians (MOG) single-class classifiers. Crop recognition performance was 100% for both methods; the correct recognition rate for different weed species ranged from 31% to 98% for the MOG-based classifier and from 53% to 94% for the SOM-based classifier, as illustrated in Figure 9B. Zhao et al. (2013) proposed a multifeature weed recognition method based on multispectral imaging and data mining, in which multifeature recognition outperformed single-feature recognition; the combination of spectral, textural, and fractal dimension features yielded the highest recognition accuracy of 96.3%, as depicted in Figure 9C. Bai et al. (2013) used stepwise discriminant analysis to select spectral reflectance data at four key wavelengths (710, 755, 950, and 595 nm) for precise weed recognition. By determining prior probabilities based on category size, the Bayesian discriminant function model achieved a recognition accuracy of 98.89%, enabling precise and stable weed recognition during the early growth stage of winter canola (Brassica napus L.), as shown in Figure 9D. Herrmann et al. (2013) used ground-level image spectroscopy data, with high spectral and spatial resolutions, to detect annual grasses and broadleaf weeds in wheat (Triticum aestivum L.) fields. The image pixels were used to cross-validate partial least-squares discriminant analysis classification models, and the best model was chosen by comparing the cross-validation confusion matrices in terms of their variances and Cohen's kappa values. This best model used four classes (broadleaf weeds, grass weeds, soil, and wheat) and achieved a kappa of 0.79 and a total accuracy of 85%.

Figure 9. Classification performance based on spectral features. MOG, mixture of Gaussians; RF, random forest.
Hyperspectral cameras can capture subtle spectral differences between crops and weeds. However, pixel-based recognition is inefficient. Machine learning algorithms, such as SVM and RF, can be employed to build weed recognition classification models, significantly improving efficiency and accuracy in large-scale crop production. Nevertheless, these methods also face challenges, such as changing lighting conditions, the similarity of spectral features between crops and weeds, and the complexity of processing and analyzing image data.
Therefore, relying on a single feature for recognition often results in low accuracy and poor stability, as it fails to exploit multifeature information. It is essential to combine features, optimize model algorithms, and integrate other agricultural technologies to achieve more accurate and reliable weed detection and management. Optimizing feature fusion and balancing the trade-off between recognition accuracy and response time remain critical open issues.
Analysis of these references and comparison of various feature-based recognition methods indicates that techniques using color, shape, texture, and spectral features can achieve high recognition rates. However, the performance of these techniques in real-time weed detection is hindered by the complex field environment, as the recognition rate depends on image acquisition methods, preprocessing methods, and the quality of feature extraction.
Deep Learning–based Recognition
Deep learning algorithms effectively avoid the subjectivity introduced by the feature extraction process in traditional machine learning methods. They can automatically extract deep features from images, offering stronger representation capabilities and unique network feature structures, thereby improving weed recognition accuracy.
On the one hand, deep learning methods can extract weed features. For instance, Peng et al. (2019) proposed a two-stage algorithm based on faster region-based convolutional neural networks integrated with feature pyramid networks, which achieved good detection performance against complex backgrounds in cotton (Gossypium hirsutum L.) fields. Fawakherji et al. (2019) developed a model that accurately classified crops and weeds by generating patches from binary images for robotic use. dos Santos Ferreira et al. (2017) trained a neural network using the CaffeNet architecture, achieving 97% accuracy in weed detection.
On the other hand, deep learning algorithms can directly recognize weeds. Naveed et al. (2023) proposed a novel weed detection model that can be executed on CPU systems, reducing computational costs. Some researchers have optimized deep learning algorithms for better recognition performance (Bah et al. 2019; Krizhevsky et al. 2017; Sun et al. 2018). Other studies have utilized one-stage object detection algorithms from the YOLO series for weed detection (Sun et al. 2024; Ying et al. 2021; Zhang et al. 2023). In deep network research, Li et al. (2023a) proposed E2CropDet, a deep learning–based crop-row detection network that achieved end-to-end detection at 166 frames s−1, with a lateral deviation of 5.945 pixels in centerline extraction, surpassing semantic segmentation (7.153 pixels) and Hough transform–based methods (17.834 pixels). You et al. (2020) continuously improved a weed/crop segmentation network by integrating four additional components, reducing weed density. Some experts have achieved good recognition results with multistage algorithm designs (Adhikari et al. 2019; Huang et al. 2020). Table 2 provides information on deep learning–based seedling and weed recognition technologies.
Table 2. Experimental results of deep learning algorithms.

DIM, decisive input modulation; PC, predictive coding; BC, biased computation; CNN, convolutional neural networks.
Analysis of the experimental results of recognition algorithms indicates that this technology makes weed detection and classification more accurate in complex field environments.
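As an illustration of how the one-stage YOLO-series detectors cited above are typically applied at inference time, the following is a hedged sketch using the Ultralytics YOLO API; the checkpoint name "weeds.pt" and the class labels are hypothetical placeholders, not artifacts of the studies in Table 2.

```python
from ultralytics import YOLO

model = YOLO("weeds.pt")                  # hypothetical fine-tuned weights
results = model.predict("row_image.jpg", conf=0.5)  # confidence threshold

for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]           # e.g., "weed" or "crop"
        x1, y1, x2, y2 = box.xyxy[0].tolist()       # pixel coordinates
        print(f"{label}: ({x1:.0f}, {y1:.0f}) to ({x2:.0f}, {y2:.0f})")
```

The resulting box centers are what downstream modules (nozzle control, laser aiming) consume, which is why detection latency and frame rate matter as much as accuracy.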
Traditional feature-based recognition technologies primarily focus on image-level classification, while deep learning–based recognition focuses on pixel-level classification, where each pixel is segmented and labeled as either weed or crop. In recent years, some scholars have combined deep learning methods with traditional methods, proposing solutions for different processing steps in fruit and vegetable recognition against similar color backgrounds. This demonstrates that the integration of image processing technology and deep learning technology is a significant research direction for the future.
Navigation and Localization Technologies
Navigation and localization technologies are critical for intelligent weeding. After seedling and weed recognition, accurate localization of weeds is necessary to assist intelligent weeding devices in completing real-time calculations and weeding tasks. With the continuous development of AI technology in recent years, this key technology has been increasingly researched and improved by experts and scholars. Satellite navigation, visual navigation, and integrated navigation are the most widely used, and the following sections will introduce the research and development status of these three navigation and localization technologies.
Satellite Navigation and Localization
Currently, the application of Global Navigation Satellite System (GNSS) navigation is widespread and mature. Agricultural machinery equipped with GNSS can significantly improve operation quality and efficiency in the field, although GNSS signal loss can occur in complex environments, such as dense foliage.
Looking at the history of satellite navigation development, GNSS has been applied in three main ways.
GPS Localization: Stoll et al. (2000) used GPS as the sole localization sensor for autonomous driving experiments, achieving a standard deviation better than 100 mm under various conditions, with a lateral deviation range of 25 to 69 mm during straight-line driving. Corpe et al. (2013) developed a GPS-based agricultural robot equipped with multiple sensors for environmental information detection, considering complex field conditions.
Real-time kinematic (RTK)-GPS Localization: Kise et al. (2001) applied this localization method to a tractor control system, reducing heading response and error during trajectory-following operations. Researchers have used this method for intrarow weed control (Nørremark et al. 2012; Pérez-Ruiz et al. 2012).
RTK-DGPS Localization: Luo et al. (2009) achieved a maximum linear-tracking error of less than 0.15 m at a travel speed of 0.8 m s−1, with an average tracking error of less than 0.03 m using this method. Bakker et al. (2011) conducted autonomous navigation research in sugar beet fields using an RTK-DGPS-based agricultural robot platform, achieving centimeter-level precision in field trials. Subsequently, RTK-DGPS with centimeter-level localization accuracy has been widely used in agricultural machinery navigation systems (Hu et al. 2015). Li et al. (2017) combined dual-loop steering-control technology with this navigation method, achieving a path-tracking error average of less than 1.9 cm and a standard deviation of less than 4.1 cm. Additionally, some experts have combined satellite localization technology with other navigation techniques to achieve better localization accuracy.
Visual Navigation and Localization
In the 1980s, the United Kingdom and the United States were the first to research visual navigation systems. This localization technology has been a great success, and to this day, experts and scholars continue to use visual navigation systems for precise pesticide spraying, intelligent mechanical weeding, and physical weeding, despite some drawbacks during usage. This localization technology is often used in combination with intelligent robots.
Marchant et al. (1996) developed a weeding robot based on visual navigation and localization technology using a grayscale band-pass filter. At a traveling speed of 1.6 m s−1, the lateral localization error was 15.6 mm. Lee et al. (1999) developed an intelligent weeding robot based on machine vision, equipped with two cameras, one for navigation and the other for weed recognition. Kise et al. (2005) applied stereovision to the navigation system of agricultural vehicles in the field, enabling accurate localization of crop rows in weedy fields and guiding the tractor precisely along both straight and curved lines. Zhang and Ying (2006) also proposed a field automatic navigation system based on machine vision. Meng et al. (2013) used visual navigation and localization technology and proposed a crop-row centerline detection method constrained by linear correlation coefficients, addressing slow detection algorithms and susceptibility to external interference. García-Santillán et al. (2018) developed a system for detecting crop and weed rows in early growth stages of cornfields using a camera mounted at the front of a tractor, based on visual navigation and localization technology.
Some researchers have applied machine learning algorithms to visual navigation systems. Hiremath et al. (2014) proposed a vision-based navigation algorithm using particle filtering; experiments demonstrated good robustness, enabling accurate navigation in the field. Zhou et al. (2014) applied a self-learning visual navigation method to a wheeled mobile robot. Yao et al. (2016) proposed a navigation control algorithm based on binocular vision, and experiments showed that the system had a small navigation offset. Wang et al. (2019) applied deep learning algorithms to orchard navigation systems, extracting new orchard road navigation lines and reducing susceptibility to environmental interference.
Li et al. (2022) proposed an Aster-U-Net model to address issues such as complex image backgrounds in visual navigation systems in both field and greenhouse environments, as well as weed and light interference. Thakur et al. (2023) published an academic work aimed at using acquired knowledge to guide the construction of practical agricultural machine vision systems. This work thoroughly examined the components of machine vision systems; investigated image acquisition, processing, and classification techniques; and explored the methods adopted by each technology. Additionally, it studied how to integrate these processes to perform various agricultural activities, such as weeding, seeding, harvesting, fruit counting, overlapping, and sorting.
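To illustrate the crop-row centerline detection underlying many of these visual navigation systems, the following is a minimal sketch that extracts a guidance line from a binary vegetation mask (such as the ExG mask sketched earlier) with a probabilistic Hough transform; the parameter values are illustrative assumptions, not those of the studies above.

```python
import cv2
import numpy as np

def detect_row_line(mask):
    """Estimate a crop-row guidance line from a binary vegetation mask."""
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=mask.shape[0] // 3, maxLineGap=20)
    if lines is None:
        return None
    # Keep near-vertical segments: rows run along the direction of travel
    segs = [l[0] for l in lines
            if abs(l[0][2] - l[0][0]) < abs(l[0][3] - l[0][1])]
    if not segs:
        return None
    # Average the endpoints as a crude centerline estimate
    x1, y1, x2, y2 = np.mean(segs, axis=0).astype(int)
    return int(x1), int(y1), int(x2), int(y2)   # guidance line endpoints
```

The lateral offset between this line and the image center then feeds the steering controller.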
Integrated Navigation and Localization
A combined navigation system typically consists of two or more subsystems based on different navigation technologies. By leveraging the error characteristics and advantages of each navigation technology, a continuously operating combined navigation system can provide continuous and comprehensive navigation parameters. In recent years, researchers have mainly employed the following three methods to achieve integrated navigation and localization functions.
First, the combination of GPS navigation technology and machine vision navigation technology has been applied to weeding systems. Francisco et al. (2005) fused these two navigation technologies with a fuzzy logic model, utilizing the relative information from vision to correct GPS errors. Bakker (2009) combined these navigation technologies in a multifunctional automatic weeding robot, enabling row navigation and herbicide spraying. Zhang et al. (2015) designed a system that integrates these two navigation technologies and uses corn crop-row information captured by cameras for interrow mechanical weeding.
Second, the combination of laser navigation and inertial navigation systems has been explored. Kim et al. (2012) designed a paddy field-weeding robot based on multisensor fusion, combining laser navigation and inertial navigation systems, and achieved a maximum operational deviation of 6.2 cm.
Third, the GPS/dead reckoning (DR) integrated navigation system has been applied to weeding robots. Ding et al. (2006) applied a GPS/DR integrated navigation system to a weeding robot, improving navigation accuracy and addressing the issue of signal interruptions.
Additionally, Ding et al. (2015) combined GPS localization technology with a fuzzy control navigation system; simulation results showed that this method is feasible, with the system achieving rapid and stable performance. Currently, the most widely used integrated navigation system is the GNSS/inertial navigation system (INS) combination, which couples satellite and inertial navigation to achieve high-precision localization, speed measurement, attitude determination, and timing. Developing a highly reliable navigation system remains a challenge rather than a simple task. Furthermore, some researchers have developed autonomous robots with integrated navigation and localization systems based on total stations and 2D LiDAR laser scanners for plant phenotyping studies. Reiser et al. (2018) combined a 2D laser scanner with a four-wheel autonomous robot to navigate between corn rows with differential steering, mounting the scanner at a 30° downward angle and collecting concurrent timestamped data. Data fusion generated a 3D point cloud usable for various applications and navigation purposes, particularly phenotypic analysis, individual plant treatment, and precise weeding. As shown in Figure 10, the robot platform used for data collection is represented in the robot operating system (ROS) visualization tool rviz (Kam et al. 2015) during LiDAR data assembly. Reiser et al. (2019) also developed a rotary weeding implement for autonomous electric robots to address weeding between orchard and vineyard rows. This implement autonomously followed rows based on 2D LiDAR data at a forward speed of 0.16 m s−1 and a working depth of 40 mm. In the future, the combination of autonomous navigation and weeding can improve weeding quality and reduce power consumption.

Figure 10. Robot platform (left) and data visualization (right). Kinect v2, a sensor (Microsoft n.d., Redmond, WA, USA); MT900, machine target prism (Trimble n.d., Sunnyvale, CA, USA); Sick LMS111, 2D-LiDAR laser scanner (SICK n.d., Waldkirch, Germany); SPS 930, universal total station (Trimble).
It should be noted that high-precision GPS and increasingly popular LiDAR technology provide new options for field-weeding robot navigation systems. Combining machine vision with GPS or LiDAR to design efficient weeding robot navigation systems could be a significant trend in future developments.
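As a simplified illustration of how such integrated systems fuse sensors, the following is a one-dimensional Kalman-filter sketch in which dead-reckoned (INS-style) motion is corrected by periodic GNSS fixes; the noise parameters are illustrative assumptions, not values from any system described above.

```python
class GnssInsFuser:
    """Scalar Kalman filter: INS prediction corrected by GNSS measurements."""

    def __init__(self, q=0.05, r=0.5):
        self.x = 0.0   # fused along-track position estimate (m)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise: dead-reckoning drift per step
        self.r = r     # GNSS measurement noise variance

    def predict(self, velocity, dt):
        # Dead-reckoning step: integrate velocity; uncertainty grows
        self.x += velocity * dt
        self.p += self.q

    def update(self, gnss_pos):
        # GNSS fix pulls the drifting estimate back toward the measurement
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (gnss_pos - self.x)
        self.p *= 1.0 - k
        return self.x
```

In a full GNSS/INS system the same predict/correct structure operates on multidimensional position, velocity, and attitude states.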
Establishing Public Datasets
Images of crops and weeds are generally required to be acquired and processed in real time, with cameras mounted on mobile robots operating in the field. Most datasets comprise RGB images of crops and weeds taken with high-resolution digital cameras from around the world, with some datasets containing information on multiple weed species. Table 3 introduces the sources and contents of the obtained public datasets.
Table 3. Public datasets.

Progress in Research on Intelligent Weeding Robots
Precision spraying and physical weeding are currently the mainstream approaches to intelligent weeding. Organized around these two approaches, this section reviews intelligent spraying robots, mechanical weeding robots, and thermal weeding robots.
Precision-Spraying Weeding Robots
The Smart Sprayer combines sensors, AI algorithms, and automated control systems to optimize the use of pesticides and herbicides. In the late 20th century, Lee et al. (1999) developed a prototype robot for precision herbicide spraying on tomato (Solanum lycopersicum L.) plants based on a machine vision system, achieving real-time identification accuracy of 73.1% for tomatoes and 68.85% for weeds. Åstrand and Baerveldt (2002) developed an autonomous agricultural robot for mechanical weed control in outdoor environments, utilizing a grayscale vision system. This system could detect crop-row structures and guide the robot with an accuracy of ±2 cm. It also employed a color-based vision system capable of identifying single crops among weeds, allowing the robot to manage weeds within crop rows.
Subsequent designs focused on site-specific weed management for smart sprayers. Hussain et al. (2020) designed a variable-rate smart sprayer that achieved its highest accuracy with the YOLOv3-tiny model, saving 43% of the spray liquid in experiments detecting weeds and simulated diseased plants. Partel et al. (2019) developed a low-cost spraying system that utilized AI and YOLOv3 for weed recognition and classification, achieving 71% accuracy in detecting and locating weed species with an NVIDIA GTX 1070 GPU. Upadhyay et al. (2024) designed and developed a YOLOv4-based smart spraying system, achieving an average effective spray rate of 93.33%, with 100% precision and a recall rate of 92.8% in indoor experiments. Field trials showed a slightly lower spray rate of 90.6%, while maintaining 95.5% precision and an 89.47% recall rate.
The See and Spray robot, developed by John Deere and commercially promoted since 2021, combines a vision system with a precision-spraying system, achieving an identification accuracy greater than 98%. It classifies weeds and crops using vision technology. Powered by tractors, it can operate continuously for long periods, working up to 12 h in large-scale crop fields such as cotton and soybean [Glycine max (L.) Merr.] and covering 200 ha d−1 at 16 km h−1. This robot reduces herbicide usage by 50% to 90%, significantly minimizing environmental impact (Figure 11A).

Figure 11. Precision-spraying robots.
Figure 11B shows the Greeneye Technology weeding robot, which gained widespread attention around 2021; its core technology is an AI-based selective spraying system (SSP). It uses onboard cameras to capture real-time field images, and, combined with deep learning algorithms, the system accurately identifies and locates various types of weeds, enabling selective spraying of each plant. Compared with traditional weeding methods, SSP reduces herbicide usage by more than 87% on average.
The SprayBox robot, developed by Verdant Robotics and first appearing in 2022, is equipped with 50 nozzles and a sophisticated computer system that integrates computer vision technology. It targets individual weeds and crops 20 times per second, spraying herbicides or fertilizers with millimeter precision. The system can spray up to 1.52 ha h−1 and identify and process more than 500,000 plants, reducing chemical herbicide usage by approximately 95% compared with traditional spraying techniques. It has been scaled for use in carrot cultivation.
Demand-driven spraying (DoD), by contrast, is a novel approach that applies calculated doses of herbicide to target weeds. Utstumo et al. (2018) used DoD technology to apply 5.3 μg of glyphosate per droplet, reducing herbicide usage 10-fold. Spaeth et al. (2024) reported savings of 10% to 55% in herbicide usage through weed recognition based on digital image processing. Liu et al. (2021) integrated a deep learning model with a variable-rate sprayer for targeted weed control; VGG-16 performed best, achieving an F1 score of 0.88 in weed classification, and 86% of weed targets were completely sprayed under actual field conditions. Jin et al. (2023) presented a smart sprayer system with ResNet, achieving F1 scores of 92% or higher and enabling precise weed control in dormant bermudagrass [Cynodon dactylon (L.) Pers.] lawns.
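As a simplified illustration of the timing logic behind demand-driven spraying, the following sketch computes when and for how long to open a nozzle valve for a weed detected ahead of the nozzle; the flow rate, offset, and dose values are illustrative assumptions, not parameters of the systems cited above.

```python
def nozzle_schedule(weed_offset_m, speed_m_s, dose_ml, flow_ml_s=10.0):
    """Return (delay before opening the valve, valve-open duration) in seconds."""
    delay_s = weed_offset_m / speed_m_s   # travel time until nozzle reaches the weed
    open_s = dose_ml / flow_ml_s          # valve-open time to deliver the dose
    return delay_s, open_s

# Example: weed detected 0.40 m ahead, travel speed 1.5 m/s, 2-mL dose
delay, duration = nozzle_schedule(0.40, 1.5, 2.0)
print(f"open valve after {delay:.2f} s for {duration:.2f} s")
```

The turning problem noted later in this section follows directly from this arithmetic: if vehicle speed drops while flow stays fixed, the delivered dose per unit area changes.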
These spraying systems improve weed and crop identification accuracy, allowing for targeted herbicide application. EcoRobotix, a Swiss company, developed a solar-powered weeding robot, first appearing in 2017, that applies machine vision, GPS, and other sensors to autonomously track crop rows and detect weeds with 95% accuracy. It then uses a parallel robotic arm to quickly and precisely spray small doses of herbicide directly onto weeds, reducing pesticide usage 20-fold (Figure 12A).

Figure 12. Drone weeding and weeding robots.
The integration of remote sensing technology has undoubtedly enhanced the efficiency of precision spraying. Gerhards et al. (2022) used airborne and ground-based remote sensing to gather weed information and applied multifeature fusion to identify weeds, enabling precise herbicide application. The combination of sensors and drone technology effectively improves identification efficiency. Figure 12B shows the Precision AI weeding drone, first appearing in 2022, equipped with 0.5-mm-resolution cameras capable of distinguishing weeds in a short time and accurately spraying them with herbicide.
The application of smart sprayers in global agriculture is rapidly expanding. Precision-spraying equipment combined with AI technology provides farmers with efficient, low-consumption solutions. In addition to reducing chemical herbicide usage, these systems can increase crop yields and reduce soil and water contamination. As AI models continue to improve, smart sprayers are becoming adaptable to different crop types and climate conditions, providing greater flexibility and accuracy in real-world applications.
Excessive herbicide use causes soil and water pollution, and pesticide residues on cultivated crops have become hidden dangers to human health. The machine vision subsystem cannot always distinguish between plants with similar characteristics, resulting in misidentification and thus mis-spraying. During weeding operations, small weeds may later regenerate in the targeted spraying area. Although position error caused by changes in nozzle angle can be reduced through calibration, key factors such as the sensitivity and stability of the servomotor still need to be considered. When the robot turns, vehicle speed drops while the chemical flow remains unchanged, altering the applied dose and making it easier for weeds to develop resistance where the machine turns. The best time to apply herbicides is while the weed canopy is still developing; once this window is missed, weeds can tolerate larger doses, and the timing is difficult to judge. In addition, further advances in precision-spraying technology require high investment costs, and future technological improvements will need to reduce them.
Public concerns about the relationship between chemical herbicides and food safety, farm worker health, biodiversity, and the environment in general have renewed interest in alternative weed control measures, primarily physical weed control methods. The subsequent sections will review intelligent weeding robots that utilize physical weeding methods.
Mechanical Weeding Robots
In the past century, the commercialization of interrow mechanical weeding technology was limited by the continued dominance of cost-effective chemical weeding, which kept market demand for expensive intelligent interrow mechanical weeding equipment low. However, recent advances in electronic information, automatic control, and AI technologies have spurred extensive research into interrow mechanical weeding, driving the emergence of intelligent interrow mechanical weeding equipment. Table 4 details mechanical weeding robot actuators and their characteristics.
Table 4. Characteristics of mechanical weeding implementations.

Mechanical weeding robots face challenges in removing interrow weeds and eliminating perennial weeds. Low accuracy in weed and crop identification and positioning increases the risk of crop damage during weeding, necessitating further optimization of identification and positioning algorithms. High-efficiency weeding operations can cause severe soil disturbance that damages crops, so designs must balance operational speed, cost, and crop damage. Different soil conditions present varying levels of resistance, requiring weeding devices to adapt to different soil types; heavy clay soils, for example, often yield poor weeding and soil fragmentation results. Mechanical weeding also demands rapid tool movement, so the hardware needs a high response speed. During operation, weed entanglement around the weeding components can reduce efficiency, so further optimization of these components is necessary.
Thermal Weeding Robots
Modern physical weeding methods include flame weeding, laser weeding, steam weeding, infrared radiation weeding, and hot-water weeding, with laser weeding being the latest invention among thermal weeding methods.
Laser weeding is an effective physical weeding method in which high-energy laser beams are directed at weeds for a short period, transferring thermal energy to selectively heat plant tissue, raising the temperature of the water within plant cells and inhibiting weed growth. The penetration of specific-wavelength laser radiation into tissues, the thermal effects within irradiated tissues, and the associated damage mechanisms are critical for successful laser weed control. Hoki (2000) irradiated young rice (Oryza sativa L.) plants with lasers of different wavelengths (532 and 1,064 nm), finding wavelength- and dose-effect relationships that were neither uniform nor consistent; targeting stems can be challenging for some weed species. Mathiassen et al. (2006) studied the effects of lasers on the apical meristems of certain weed species at the cotyledon stage using a handheld system under three different potted weed conditions, testing two lasers and two spot sizes and applying different energy doses by varying the irradiation time.
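In engineering terms, these dose-effect studies reduce to simple energy arithmetic: the energy delivered to a weed is laser power multiplied by dwell time. The following worked sketch captures that relationship and its consequence for throughput; the power, dose, and aiming-time values are illustrative assumptions, not measurements from the studies cited above.

```python
def dwell_time_s(target_dose_j, laser_power_w):
    """Dwell time needed to deliver a target energy dose: E = P * t."""
    return target_dose_j / laser_power_w

def throughput_weeds_per_s(dwell_s, aim_s=0.05):
    """Upper bound on weeds treated per second (aiming time + dwell time)."""
    return 1.0 / (aim_s + dwell_s)

dwell = dwell_time_s(target_dose_j=24.0, laser_power_w=50.0)   # 0.48 s per weed
print(f"dwell {dwell:.2f} s, about {throughput_weeds_per_s(dwell):.1f} weeds per second")
```

The trade-off is immediately visible: higher laser power shortens the dwell time per weed and raises throughput, which is why whole-machine designs weigh laser power against energy consumption.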
In recent years, laser weeding technology has increasingly relied on the overall regulation of laser weeding equipment. Xiong et al. (2017) designed a prototype robot for indoor performance testing, achieving a hit rate of 97% with a laser penetration speed of 30 mm s−1 and a dwell time of 0.64 s weed−1. Exploiting the high dynamics of parallel mechanisms, Wang et al. (2022) proposed a novel laser weeding frame based on a two-degrees-of-freedom, five-rotation parallel mechanical arm for dynamic laser weeding. Fatima et al. (2023) designed a lightweight, deep learning–based weed detection system for a commercial autonomous laser weeding robot (Figure 13A). LaserWeeder, a weeding robot developed by Carbon Robotics (n.d.) in the United States and commercially promoted since 2022, uses lasers instead of herbicides; combined with AI and vision technology, it achieves a recognition accuracy of 99%. Consuming about 30 kWh d−1, it can work continuously for 8 to 10 h per charge at a travel speed of 1.5 to 3 km h−1, depending on weed density. Its CO2 laser module array fires once every 50 ms with an accuracy of 3 mm and can lase 8 targets simultaneously. It can handle 6 to 8 ha d−1, and the laser system can treat up to 100 weeds s−1 without chemical agents, making it particularly suitable for organic farmland that must avoid chemical residues (Figure 13B). The WeedBot laser weeder, first appearing in 2022 and developed by the European company WeedBot (n.d.), is designed for organic farming and high-precision weeding scenarios, ensuring healthy crop growth. It precisely targets weeds and eliminates them with lasers, with a recognition accuracy greater than 98%. Battery-powered, it consumes around 25 kWh d−1 and can work continuously for 8 h, covering 1.5 to 2 km h−1 and processing approximately 10 ha d−1, depending on terrain and weed density.

Figure 13. Laser weeding robots.
Additional references for laser weeding machines are listed in Table 5. Analysis of Table 5 reveals that the tight integration of laser generators and mechanical arms has been a research focus for laser weeding machines. Some scholars have also conducted research on laser generators themselves. Notably, in recent years, more experts and scholars have focused on whole-machine aspects of weeding machines.
Table 5. Circuit design and characteristics of laser weeding device.

The accuracy of weed centroid positioning is often inadequate, impacting the precision of weeding operations. The diversity in weed species and shapes makes detection challenging, leading to potential misidentifications. Robots may also miss some weeds, affecting the overall weeding effectiveness. During laser weeding, there is a risk of crop damage, especially when positioning is inaccurate. Optimizing laser energy usage and improving energy efficiency are significant technical challenges. Laser weeding may also cause reflection issues, increasing safety risks, and care must be taken to ensure that reflections do not harm crops or surrounding equipment. These challenges represent key technological hurdles for the future development of laser weeding robots. Future progress must address improvements in recognition accuracy, operational efficiency, environmental impact, and energy utilization across these systems.
Discussion
In the rapid development of smart agriculture today, intelligent weeding equipment, as an important component of intelligent agricultural machinery, is bound to undergo further reconstruction and upgrades with the promotion of new production operation models and the introduction of advanced intelligent technologies. Smart agriculture refers to the use of advanced technologies such as the Internet of Things (IoT), AI, big data, sensors, and robotics to enhance the efficiency, productivity, and sustainability of agricultural operations. It involves data-driven decision making, precision agriculture techniques, and real-time monitoring to optimize crop management, reduce resource waste, and improve farm management systems. Smart agriculture focuses on increasing yields, minimizing environmental impact, and enabling automation and remote control of agricultural processes.
Although weeding robots are still in the prototype development stage, companies like FarmWise (n.d.) and Carbon Robotics (n.d.) are gradually moving toward commercialization. This section reviews two major technical issues of weeding robots—weed detection and vision-based navigation—as well as mainstream weeding robots. Currently, intelligent weeding still requires in-depth research in the following areas.
Optimization of Recognition Algorithms and Precision Weeding Efficiency
To further improve the operational efficiency of intelligent weeding, advanced deep learning technologies need to be optimized, including data augmentation, feature extraction, attention mechanisms, and model simplification. These improvements are essential to address the challenges in recognizing overlapping stems or leaves between weeds and crops. Additionally, data annotation, particularly the labeling of massive weed datasets, deserves more attention. Researchers must enhance the robustness and generalization of deep learning algorithms. Reinforcement learning and transfer learning algorithms can be used to achieve better results with less data.
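To make the transfer learning idea concrete, the following is a minimal sketch in PyTorch/torchvision, assuming a small binary crop-versus-weed image dataset; the backbone choice, class count, augmentations, and hyperparameters are illustrative assumptions rather than a prescription from the literature reviewed here.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative data augmentation for field images (flips, rotations,
# lighting jitter) to improve robustness with limited labeled data.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Transfer learning: reuse an ImageNet-pretrained backbone and retrain
# only the classification head on the (much smaller) weed dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # 2 classes: crop vs. weed

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Freezing the backbone and training only the head is one common way to achieve better results with less data; fine-tuning deeper layers, or compressing the model for edge deployment, are natural next steps.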
The recognition of crop and weed characteristics such as color, shape, texture, and spectral features still requires an integrated approach that combines novel image processing techniques with AI. Current algorithms suffer from high computational complexity and long processing times, and future optimization is needed to overcome these drawbacks.
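As an illustration of combining hand-crafted color and shape features with standard image processing, the sketch below uses the widely cited Excess Green index (ExG = 2g − r − b) to segment vegetation and then computes simple shape descriptors per region with OpenCV; the threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_vegetation(bgr_image):
    """Segment green vegetation using the Excess Green color index."""
    img = bgr_image.astype(np.float32) / 255.0
    b, g, r = img[..., 0], img[..., 1], img[..., 2]   # OpenCV stores BGR
    exg = 2.0 * g - r - b
    return ((exg > 0.1).astype(np.uint8)) * 255       # illustrative threshold

def shape_features(mask):
    """Area and circularity per connected region, as simple shape cues."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)                # True = closed contour
        if perim > 0:
            feats.append({"area": area,
                          "circularity": 4.0 * np.pi * area / perim ** 2})
    return feats
```

Such features are cheap to compute, which is one route to shortening the processing times noted above, and they can be concatenated with learned features as input to a lightweight classifier.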
The emergence of new physical weeding technologies, such as laser weeding, offers a promising outlook for intelligent weeding. Intelligent weeding devices need to be closely integrated with AI technology, using different combinations of navigation technologies for different application scenarios, to further address the challenge of weed removal in interrow regions. The performance of various intelligent weeding equipment developed for different weed-handling conditions must be further improved to enhance operational efficiency. For instance, small- and medium-sized weeding robots need to improve in terms of cooperative operation, autonomy, and human–machine coordination.
Intelligent Sensing and Equipment Generalization
Sensors are required to acquire navigation data, image recognition data, and other information. In recent years, multimodal sensors, such as visual, infrared, and ultrasonic sensors, have developed rapidly, providing valuable assistance in obtaining comprehensive, real-time information from complex field environments. Future research should further explore multisensor fusion technology, machine vision, field navigation technology, and multidisciplinary integration to achieve intelligent sensing functions. Through intelligent sensing, crops and weeds can be identified and located efficiently, enabling intelligent weeding.
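As a toy illustration of multisensor fusion, the sketch below fuses wheel-odometry increments with GNSS position fixes using a one-dimensional Kalman filter; the noise parameters are illustrative assumptions, and a field-ready system would use a full multidimensional state.

```python
class Fusion1D:
    """Minimal 1-D Kalman filter: odometry drives the prediction step,
    GNSS fixes drive the correction step."""
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
        self.x, self.p = x0, p0   # position estimate and its variance
        self.q, self.r = q, r     # process and measurement noise (assumed)

    def predict(self, odom_dx):
        self.x += odom_dx         # move by the odometry increment
        self.p += self.q          # uncertainty grows while dead-reckoning

    def update(self, gnss_x):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (gnss_x - self.x)  # blend in the GNSS fix
        self.p *= (1.0 - k)              # uncertainty shrinks after a fix

f = Fusion1D()
f.predict(odom_dx=0.10)   # odometry says the robot advanced ~10 cm
f.update(gnss_x=0.12)     # GNSS reports 12 cm
print(round(f.x, 3))      # fused estimate lies between the two sources
```

The same predict/update pattern generalizes to fusing visual, infrared, and ultrasonic measurements once each sensor's noise characteristics are modeled.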
With the extensive application of AI, intelligent weeding devices are also evolving toward wide-area operations, group intelligence, and multifunctional operations. For example, implements for sowing, weeding, and fertilization could be swapped quickly on a common platform. Generalizing robotic platforms can lower production costs, and an open, compatible platform architecture would significantly increase operational efficiency. Intelligent weeding systems may also integrate crop disease and pest monitoring for pesticide management and, through intelligent sensing of crop growth and maturity, facilitate automated fertilization and harvesting.
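One plausible way to realize quick-swap implements in software is a common tool interface that the platform programs against; the sketch below is a hypothetical design, not an existing product API.

```python
from abc import ABC, abstractmethod

class ToolModule(ABC):
    """Hypothetical common interface for swappable implements, letting one
    robotic platform host sowing, weeding, or fertilizing tools."""
    @abstractmethod
    def attach(self) -> None: ...
    @abstractmethod
    def operate(self, target) -> None: ...
    @abstractmethod
    def detach(self) -> None: ...

class LaserWeederTool(ToolModule):
    def attach(self): print("laser module locked and calibrated")
    def operate(self, target): print(f"treating weed at {target}")
    def detach(self): print("laser module released")

class SeedDrillTool(ToolModule):
    def attach(self): print("seed drill locked")
    def operate(self, target): print(f"sowing at {target}")
    def detach(self): print("seed drill released")

def run_task(tool: ToolModule, targets):
    """The platform code never changes when the implement is swapped."""
    tool.attach()
    for t in targets:
        tool.operate(t)
    tool.detach()

run_task(LaserWeederTool(), [(1.0, 2.5), (1.2, 2.7)])
```

Because the platform depends only on the abstract interface, swapping the physical implement requires no change to the scheduling or navigation code, which is the software half of the generalization argument above.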
Integration of Agricultural Machinery and Agronomy
In some countries, fields have already achieved a leveled-furrow environment suitable for intelligent weeding equipment. Optimizing interrow spacing and leveling furrows can reduce crop–weed occlusion and clustering, which lowers the complexity of deep learning networks and facilitates the application of intelligent weeding technologies. Through the integration of agricultural machinery and agronomy, the weeding environment can be improved and operational efficiency increased. Rational close planting, intercropping, and mixed cropping can fully utilize solar energy and spatial structure, enhancing crop growth while controlling weed density and damage.
Further Integration of Drone Technology
The development of agricultural drones provides new solutions for smart agriculture and represents a major trend in agricultural equipment development. Drones have natural advantages, such as obtaining ultra–high resolution images at low altitudes, which allows for the detailed observation of crops and weeds. In addition, drones generate vast amounts of imagery during aerial photography, providing datasets for training and applying deep learning algorithm models. Equipped with different sensors and perception systems, drones can capture spectral information from crops and weeds, which, combined with machine learning algorithms, significantly improves weed identification accuracy. Drones also offer flexibility in scheduling flights and can generate digital surface models with 3D measurements. Currently, drones are widely used in field weed identification and intelligent spraying. Future integration of sensor, deep learning, communication, and drone technologies can achieve higher weed identification efficiency.
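As a small example of turning drone spectral data into a weed-relevant signal, the sketch below computes the Normalized Difference Vegetation Index (NDVI) from near-infrared and red bands; the arrays and threshold are stand-ins for real multispectral imagery.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red): vegetation scores high,
    bare soil near zero, so off-row high-NDVI pixels hint at weeds."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

# Stand-in bands; in practice these come from a drone multispectral camera.
nir_band = np.random.rand(64, 64)
red_band = np.random.rand(64, 64)
vegetation_mask = ndvi(nir_band, red_band) > 0.3   # illustrative threshold
```

An NDVI (or similar index) map can then be cross-referenced with crop-row geometry so that vegetation growing between rows is flagged for closer inspection by a weed classifier.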
Integration of 5G, Digital Twin Technology, and IoT Technologies
The integration of 5G, IoT, and digital twin technologies is rapidly driving weed control robots toward becoming smarter and more efficient. This convergence not only enhances the performance and decision-making capabilities of robots but also provides precise and visualized operational support for agricultural management, contributing to the overall intelligence level of farming operations.
Digital twin technology creates a digital replica of the physical weed control robot, enabling full life-cycle management through virtual–physical interaction. By building digital models that correspond to the physical robot and the farm environment, digital twins provide real-time status monitoring, simulation optimization, and predictive maintenance. In a virtual environment, robots can simulate path planning and weed control strategies to optimize paths, reduce energy consumption, and ensure crops are not damaged. Monitoring through the digital twin model allows real-time simulation and analysis of the operational status of the robot’s components. By combining historical data and algorithms, the system can predict when the robot may experience failure, enabling timely preventive maintenance and reducing downtime. Simultaneously, the farm environment, crop conditions, and the robot’s actual working status can be visually displayed. Operators can monitor the robot’s work process via a virtual interface, offering remote guidance and adjustments.
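A minimal sketch of the digital twin idea, under the assumption of a simple cumulative-wear model: a software object mirrors the robot's telemetry and predicts when maintenance is due. The thresholds and wear model are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RobotTwin:
    """Toy digital twin: mirrors telemetry from the physical robot and
    flags upcoming maintenance from a simple cumulative-wear model."""
    battery_pct: float = 100.0
    actuator_hours: float = 0.0                 # cumulative run time
    history: list = field(default_factory=list)

    def ingest_telemetry(self, battery_pct, run_hours):
        self.battery_pct = battery_pct
        self.actuator_hours += run_hours
        self.history.append((battery_pct, self.actuator_hours))

    def hours_until_service(self, service_interval=500.0):
        return max(0.0, service_interval - self.actuator_hours)

    def needs_maintenance(self):
        return self.hours_until_service() < 24.0  # flag a day in advance

twin = RobotTwin()
twin.ingest_telemetry(battery_pct=81.5, run_hours=6.0)
print(twin.hours_until_service(), twin.needs_maintenance())
```

A production twin would replace the fixed wear model with data-driven failure prediction and add the simulation and visualization layers described above.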
IoT facilitates the intelligent scheduling of weed control tasks and supports decision making by integrating climate conditions and weed growth patterns with agronomy to determine the optimal weeding time. Weed control robots can connect to sensors installed in the field, such as soil moisture, weather, and crop growth status sensors, to collect environmental and crop condition data. These data enable robots to more accurately identify weed growth areas and optimize weeding strategies. Under the IoT framework, basic data processing for weeding tasks can be handled by edge computing devices (e.g., local servers), while more complex analyses and model inference tasks are transferred to the cloud for computation. Through IoT networks, farm management systems can monitor the status of the weed control robots (battery life, mechanical wear, software condition, etc.) in real time and carry out equipment scheduling, fault alarms, and automatic maintenance when necessary.
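The edge/cloud split described above can be made concrete with a simple routing rule; the task names, thresholds, and tiers below are illustrative assumptions, not a standard IoT protocol.

```python
def route_task(task, cpu_load, bandwidth_mbps):
    """Decide whether a job runs on the edge device or is offloaded."""
    LIGHT_TASKS = {"soil_moisture_read", "row_following", "battery_check"}
    if task in LIGHT_TASKS:
        return "edge"            # cheap and latency-sensitive: keep local
    if task == "weed_detection" and cpu_load < 0.6:
        return "edge"            # run the on-board model when CPU is free
    if bandwidth_mbps > 10:
        return "cloud"           # heavy analytics and model inference offloaded
    return "edge-queue"          # defer until connectivity improves

print(route_task("weed_detection", cpu_load=0.4, bandwidth_mbps=3))    # edge
print(route_task("model_retraining", cpu_load=0.9, bandwidth_mbps=50)) # cloud
```

Real deployments would add message queuing, authentication, and retry logic, but the core decision, latency-sensitive work at the edge and computation-heavy work in the cloud, is the one sketched here.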
With its ultra-low latency, 5G technology ensures real-time remote operation of weed control robots over large farmlands, even supporting cross-regional control of multiple robots working in collaboration. Multiple weed control robots can share data via 5G networks to perform coordinated operations, avoiding repeated weeding or missed weeds, thereby improving efficiency. This technology supports real-time data transmission from robots using high-definition cameras or other sensors (e.g., LiDAR, depth cameras), enabling a central system to analyze and make decisions regarding weed control.
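To illustrate how shared connectivity prevents repeated or missed weeding, the sketch below keeps a field grid whose cells robots claim over a shared channel; the grid model and status strings are illustrative, and a real system would add locking, retries, and handover on robot failure.

```python
class SharedWeedMap:
    """Toy coordination layer: robots claim grid cells via a shared
    (e.g., 5G-backed) map so no cell is weeded twice or skipped."""
    def __init__(self, rows, cols):
        self.status = {(r, c): "pending"
                       for r in range(rows) for c in range(cols)}

    def claim_next(self, robot_id):
        for cell, state in self.status.items():
            if state == "pending":
                self.status[cell] = f"claimed:{robot_id}"
                return cell
        return None                    # the field is fully covered

    def mark_done(self, cell):
        self.status[cell] = "done"

field_map = SharedWeedMap(rows=2, cols=3)
cell = field_map.claim_next("robot-A")   # robot A takes the first free cell
field_map.mark_done(cell)
```

The low latency of 5G matters here because claims must propagate to all robots faster than the robots move between cells; otherwise two machines may race for the same area.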
The integration of 5G, IoT, and digital twin technologies significantly enhances the real-time performance and decision-making capabilities of weed control robots, enabling them to operate with higher precision and efficiency in complex farm environments. This reduces the risk of damaging crops or missing weeds. These technologies empower weed control robots with intelligent perception, remote control, and autonomous decision-making capabilities, supporting large-scale farm operations where robots can collaborate intelligently, achieving unmanned and automated weeding operations. Through continuous data collection and feedback, robotic systems can optimize their operational processes in different environments and crop conditions, providing personalized and precise weeding services.
However, these integrated smart field-weeding robots also face risks and challenges. The vast amount of data involved raises security and privacy concerns, necessitating robust cybersecurity measures. The interoperability between different IoT devices and systems is also a challenge, requiring the establishment of common standards and protocols. Furthermore, managing the complexity of these weeding robot systems and ensuring scalability will require ongoing innovation and investment.
Moreover, for the intended users, who are often non-technical personnel, operation should be sufficiently safe and simple to enable quick adoption and proficient use. After-sales and technical support services should also be provided.
Funding statement
The authors are grateful for the support of the National Key R&D Program (2023YFD2001205) and the Vegetable Industry Technology System Expert Position Project of Shandong Province (SDAIT-05-11).
Competing interests
The authors declare no conflicts of interest.