dc.contributor.author | Anam Bibi, 01-243202-004 | |
dc.date.accessioned | 2022-12-22T06:42:25Z | |
dc.date.available | 2022-12-22T06:42:25Z | |
dc.date.issued | 2022 | |
dc.identifier.uri | http://hdl.handle.net/123456789/14496 | |
dc.description | Supervised by Dr Momina Moetesum | en_US |
dc.description.abstract | With the advent of deep learning, several computer vision problems have been revisited, object detection among them. The potential of deep object detectors for insect pest detection in crops under real field conditions remains relatively unexplored. Pest infestation detection is vital for crop health monitoring and precision agriculture; however, manual inspection is tedious and ineffective at providing complete situational awareness. Automated pest detection is an important application of AI-driven agriculture that can mitigate the negative impact of insect pest infestation on crop yield. Insect pests pose a major threat to crop productivity and cause considerable economic losses. Various automatic pest detection and recognition solutions have been presented, but most lack robustness against the challenges of an unconstrained in-field environment. Most prior work has focused only on pest classification, using images from fixed stationary cameras in controlled environments, whereas real field scenes involve illumination variations, camouflage, background clutter, low resolution, shape deformations, and both sparse and dense pest populations. To address these challenges, we propose a deep learning-based solution that employs the YOLOv5 architecture and ESRGAN to detect and classify various types of pests affecting crops. We used several data augmentation techniques to improve the network's learning capability. In addition, to improve the detection of small-sized pests, we used the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to convert low-resolution (LR) images into super-resolved images and trained the YOLOv5 network on these super-resolved images. Finally, we evaluated the trained models on the test image data. Our proposed model achieved 85.6% accuracy on corn and 60.1% accuracy on rice. To the best of our knowledge, these results are the highest among the state-of-the-art techniques reported on the same datasets, which shows the effectiveness of our proposed approach. Furthermore, the proposed model is capable of timely detection and recognition of pests of different sizes, types, and stages in overlapping, crowded (densely distributed), and sparsely distributed areas. Therefore, the proposed approach can be useful in crop health monitoring systems and can help farmers select the appropriate pesticides or other chemicals based on the type and population of the pests. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Computer Sciences | en_US |
dc.relation.ispartofseries | MS (CS);T-01888 | |
dc.subject | Health Monitoring | en_US |
dc.subject | Automated Insect Detection | en_US |
dc.title | Automated Insect Detection for Crop Health Monitoring | en_US |
dc.type | MS Thesis | en_US |
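The abstract above describes a two-stage pipeline: low-resolution field images are first super-resolved with ESRGAN, and the upscaled images are then passed to a YOLOv5 detector. The minimal Python sketch below illustrates that flow under stated assumptions; it is not code from the thesis. The upscale() helper, the input filename, and the stock yolov5s weights are placeholders, and a real pipeline would run a trained ESRGAN generator and custom pest-detection weights instead.

```python
# Minimal sketch of the ESRGAN -> YOLOv5 pipeline described in the abstract.
# Placeholders (not from the thesis): upscale(), 'field_image.jpg', yolov5s weights.
import torch
from PIL import Image

def upscale(image: Image.Image, factor: int = 4) -> Image.Image:
    """Placeholder for the ESRGAN super-resolution step.
    A real pipeline would run a trained ESRGAN generator here;
    a plain PIL resize is used only to keep the sketch runnable."""
    w, h = image.size
    return image.resize((w * factor, h * factor))

# Load a YOLOv5 model via torch.hub (the public ultralytics/yolov5 interface).
# A pest detector would instead load custom-trained weights, e.g.
# torch.hub.load('ultralytics/yolov5', 'custom', path='pest_best.pt').
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

img = Image.open('field_image.jpg')   # hypothetical low-resolution field image
sr_img = upscale(img)                 # super-resolve before detection
results = model(sr_img)               # run object detection on the upscaled image
results.print()                       # print a summary of detected objects
```

The same ordering applies at training time: the dataset images are super-resolved first, and YOLOv5 is then trained on the super-resolved copies rather than the original low-resolution frames.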