In this research, the deep-learning optimizers Adagrad, AdaDelta, Adaptive Moment Estimation (Adam), and Stochastic Gradient Descent (SGD) were applied to the deep convolutional neural networks AlexNet, GoogLeNet, VGGNet, and ResNet, which were trained to recognize weeds among alfalfa using photographic images at resolutions of 200×200, 400×400, 600×600, and 800×800 pixels. Increasing the input image size reduced the classification accuracy of all neural networks; the networks trained with 200×200-pixel images achieved better classification accuracy than those trained with the larger image sizes investigated here. AlexNet and GoogLeNet trained with AdaDelta and SGD outperformed the same networks trained with Adagrad and Adam; VGGNet trained with AdaDelta outperformed VGGNet trained with Adagrad, Adam, or SGD; and ResNet trained with AdaDelta and Adagrad outperformed ResNet trained with Adam or SGD. When the neural networks were trained with the best-performing input image size (200×200 pixels) and the best-performing deep-learning optimizer, VGGNet was the most effective neural network, with high precision and recall values (≥0.99) on both the validation and testing datasets. In contrast, ResNet was the least effective neural network at classifying images containing weeds. However, the neural networks did not differ in their ability to differentiate between broadleaf and grass weeds. The neural networks discussed herein may be used for scouting weed infestations in alfalfa and may be further integrated into the machine vision subsystem of smart sprayers for site-specific weed control.
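The architecture-by-optimizer comparison described above could be organized along the following lines; this is a minimal illustrative sketch using stock torchvision models, not the authors' code, and the class labels, learning rates, and training-loop details are assumptions for demonstration only.

# Minimal sketch (assumed hyperparameters and labels) of pairing each
# torchvision backbone with each optimizer evaluated in the study.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed labels: alfalfa, broadleaf weed, grass weed

def build_model(name: str) -> nn.Module:
    """Return an untrained torchvision backbone with NUM_CLASSES outputs."""
    if name == "alexnet":
        return models.alexnet(num_classes=NUM_CLASSES)
    if name == "googlenet":
        return models.googlenet(num_classes=NUM_CLASSES, aux_logits=False, init_weights=True)
    if name == "vgg16":
        return models.vgg16(num_classes=NUM_CLASSES)
    if name == "resnet50":
        return models.resnet50(num_classes=NUM_CLASSES)
    raise ValueError(f"unknown architecture: {name}")

# The four optimizers compared in the study; learning rates are assumed defaults.
OPTIMIZERS = {
    "adagrad":  lambda params: torch.optim.Adagrad(params, lr=0.01),
    "adadelta": lambda params: torch.optim.Adadelta(params, lr=1.0),
    "adam":     lambda params: torch.optim.Adam(params, lr=1e-3),
    "sgd":      lambda params: torch.optim.SGD(params, lr=0.01, momentum=0.9),
}

def train_one(model_name, optim_name, train_loader, epochs=10, device="cuda"):
    """Train one architecture/optimizer pair; images are assumed to be
    resized beforehand (e.g., to 200x200 pixels) by the data loader."""
    model = build_model(model_name).to(device)
    optimizer = OPTIMIZERS[optim_name](model.parameters())
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

Precision and recall on the validation and testing datasets would then be computed per architecture/optimizer/image-size combination to reproduce the comparisons summarized above.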