We will develop an image processing pipeline and deep learning algorithms for robust aphid recognition and counting in water trap field images. Specifically, the project will investigate two hypotheses: (1) that a deep convolutional neural network can be applied to aphid recognition and counting despite the challenges posed by objects of interest that are tiny, highly clustered, and/or similar in appearance to other objects; and (2) that synthetic images can improve the performance of deep learning-based models without manually labelling large numbers of images for training. The final prediction model built on these two hypotheses will recognise and count aphids automatically, allowing farmers and agronomists to quickly evaluate the risk of aphid infestation and, ultimately, of virus yellows outbreaks in sugar beet fields. To validate the hypotheses, this project will address three key research objectives.

(1) Synthetic image and mask generation. First, an image generation pipeline will be developed based on conventional image processing algorithms. This pipeline inherently produces a domain shift between synthetic images and real field images, so we will render the synthetic images more realistic using a generative adversarial network (detailed in WP3) that requires no paired images for training. This objective aims to deliver an image generation pipeline for training supervised deep learning algorithms without manual labelling, an approach that can be extended to other agricultural applications.

(2) Deep convolutional neural network (DCNN) development. The proposed network is based on Mask R-CNN, a standard network for instance segmentation (treating multiple objects of the same class as distinct individuals). The network will adapt its receptive fields to address the challenge of recognising and counting tiny, highly clustered aphids. The developed network will work hierarchically.
It first extracts object features from the input image and then generates proposals for regions that might contain an aphid. Finally, it outputs the aphid class, i.e. whether the individual belongs to Myzus persicae, and generates a pixel-level mask of each aphid (Myzus persicae).

(3) Validation of the effectiveness of synthetic images in the developed DCNN models. One current bottleneck is the large amount of annotated data required to fit deep learning-based models. To alleviate the manual labelling burden, we will investigate the effectiveness of synthetic images for training a deep convolutional neural network and quantify the optimal number of synthetic images needed to train a well-performing model. A comparison study with different datasets (synthetic images and real field images) will be carried out to validate the developed system, using evaluation metrics including counting accuracy and Intersection over Union (IoU).
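To illustrate the conventional compositing stage of objective (1): aphid cutouts can be pasted onto water-trap background images, and because we control where each cutout lands, the per-instance segmentation masks come for free, with no manual annotation. The sketch below is a minimal NumPy illustration of this idea under our own assumptions; the function names and the random-placement strategy are hypothetical, and the actual pipeline would additionally refine the composites with a GAN as described in WP3.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)


def composite(image, sprite, sprite_mask, top, left):
    """Paste `sprite` onto `image` where `sprite_mask` is True.

    Returns the composited image and a full-size boolean instance
    mask recording exactly which pixels the sprite occupies.
    """
    out = image.copy()
    h, w = sprite.shape[:2]
    region = out[top:top + h, left:left + w]
    region[sprite_mask] = sprite[sprite_mask]  # overwrite masked pixels only
    instance_mask = np.zeros(image.shape[:2], dtype=bool)
    instance_mask[top:top + h, left:left + w] = sprite_mask
    return out, instance_mask


def make_synthetic_sample(background, sprites, n_instances):
    """Scatter `n_instances` random sprites over the background.

    `sprites` is a list of (sprite, sprite_mask) pairs (e.g. aphid
    cutouts).  Returns the synthetic image and one boolean mask per
    pasted instance -- the training labels, obtained without any
    manual annotation.
    """
    img = background.copy()
    masks = []
    for _ in range(n_instances):
        sp, sp_mask = sprites[rng.integers(len(sprites))]
        top = int(rng.integers(0, background.shape[0] - sp.shape[0]))
        left = int(rng.integers(0, background.shape[1] - sp.shape[1]))
        img, m = composite(img, sp, sp_mask, top, left)
        masks.append(m)
    return img, masks
```

In this sketch later pastes may partially cover earlier ones, which is deliberate: overlapping instances mimic the highly clustered aphids that objective (2) targets.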
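The evaluation metrics named in objective (3) can be made concrete as follows. Mask IoU has a standard definition; the counting-accuracy formula shown here (one minus the relative counting error, clipped at zero) is only one common formulation, assumed for illustration rather than taken from the proposal.

```python
import numpy as np


def mask_iou(pred, gt):
    """Intersection over Union between two boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union


def counting_accuracy(pred_count, true_count):
    """1 - relative counting error, clipped at 0 (an assumed formulation)."""
    if true_count == 0:
        return 1.0 if pred_count == 0 else 0.0
    return max(0.0, 1.0 - abs(pred_count - true_count) / true_count)
```

For example, a model that predicts 8 aphids where 10 are present scores a counting accuracy of 0.8, while per-instance mask quality is assessed separately with IoU.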