Abstract:
Wildfires are one of the costliest and deadliest natural disasters around the globe,
affecting millions of acres of forest resources and threatening the lives of humans and animals.
Thousands of forest fires across the globe result in serious damage to the environment.
Moreover, industrial explosions, domestic fires, farm fires, and wildfires are a major problem
with negative environmental effects, contributing significantly to climate change. The damage
caused by such incidents is time-sensitive and can be fatal, resulting in great loss of life and
property if not dealt with promptly. Recent advances in aerial imaging have shown that aerial
images can be beneficial in wildfire studies. Among different technologies and
methods for collecting aerial images, drones have been used recently for manual/automatic
monitoring of potential risk areas. Images received from the drones can be processed using
vision and machine learning techniques for automated and timely detection of fires, thus
shortening the response time and reducing the damage caused by the fire whilst minimizing
the cost of firefighting. Automated vision-based fire detection has therefore become an
important research topic in recent years. The desired properties of a good vision-based fire
detection system are a low false-alarm rate, a fast response time, and high accuracy.
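As an illustration of these properties, the sketch below computes frame-level accuracy, false-alarm rate, and related scores from hypothetical confusion-matrix counts; the metric names are standard, but the exact metric set evaluated in this thesis is not fixed by the abstract.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Frame-level fire-detection metrics from confusion-matrix counts.

    tp/fp/tn/fn are hypothetical counts of fire frames detected correctly,
    non-fire frames flagged as fire, non-fire frames passed correctly,
    and fire frames missed, respectively.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0            # detection rate
    false_alarm_rate = fp / (fp + tn) if fp + tn else 0.0  # lower is better
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "false_alarm_rate": false_alarm_rate, "f1": f1}
```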
This thesis presents a comprehensive literature review of recent vision-based approaches for
the automated detection of fire from images and videos. It also covers estimating the area
covered by fire and planning fire mitigation. The literature has broadly been categorized into classic
vision/machine learning-based approaches and deep learning-based approaches. Based on the
comparison of these approaches using a variety of datasets and performance metrics, it has
been observed that deep learning-based approaches generally yield better performance as
compared to classic vision/machine learning-based techniques. In this research, we further
explored various deep learning alternatives for accurate fire detection. A YOLOv5-based deep
learning model is proposed for efficient region-based detection and segmentation of fire.
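A minimal inference sketch for such a detector is given below; it assumes a fine-tuned checkpoint (here called fire_best.pt, a hypothetical file name) loaded through the standard ultralytics/yolov5 torch.hub interface, not the exact training setup used in this work.

```python
import torch

# Load a YOLOv5 model with custom (fire) weights via torch.hub.
# "fire_best.pt" is a hypothetical checkpoint name used for illustration.
model = torch.hub.load("ultralytics/yolov5", "custom", path="fire_best.pt")
model.conf = 0.4  # confidence threshold; raising it trades recall for fewer false alarms

results = model("drone_frame.jpg")   # accepts image paths, URLs, arrays, or PIL images
detections = results.xyxy[0]         # one row per detection: x1, y1, x2, y2, confidence, class
print(detections)
```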
Pixel-level segmentation is also performed using Mask R-CNN to estimate the area covered
by fire so that mitigation can be planned.
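The area estimate can be obtained from the predicted masks together with the drone's ground sampling distance (metres per pixel). The helper below is a sketch under that assumption; the mask and score arrays follow the usual Mask R-CNN output layout, and the GSD value is illustrative rather than taken from this thesis.

```python
import numpy as np

def fire_area_m2(masks: np.ndarray, scores: np.ndarray,
                 gsd_m: float = 0.05, score_thr: float = 0.5,
                 mask_thr: float = 0.5) -> float:
    """Estimate the burning area in square metres from instance masks.

    masks  : (N, H, W) soft masks predicted by Mask R-CNN for the fire class
    scores : (N,) confidence score per instance
    gsd_m  : ground sampling distance in metres/pixel (assumed known from
             the drone's altitude and camera model; 0.05 is illustrative)
    """
    keep = scores >= score_thr
    if not np.any(keep):
        return 0.0
    union = (masks[keep] >= mask_thr).any(axis=0)  # union of confident fire masks
    return float(union.sum()) * gsd_m * gsd_m      # each pixel covers gsd_m^2 square metres
```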
The limited availability of labeled training data, compared to the number of training samples
required by deep learning models, is mitigated through a variety of preprocessing and
augmentation techniques.
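One common way to realise such augmentation for detection data is an albumentations pipeline like the sketch below; the specific transforms, parameters, and file names are illustrative assumptions, not the exact recipe used in this work.

```python
import cv2
import albumentations as A

# Illustrative augmentation pipeline for fire images with YOLO-format boxes.
augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.GaussNoise(p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.cvtColor(cv2.imread("drone_frame.jpg"), cv2.COLOR_BGR2RGB)
boxes = [[0.48, 0.52, 0.20, 0.15]]            # one fire box: cx, cy, w, h (normalised)
out = augment(image=image, bboxes=boxes, class_labels=["fire"])
aug_image, aug_boxes = out["image"], out["bboxes"]
```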
Comparisons with existing vision-based fire segmentation approaches on publicly available
datasets show that the proposed approach outperforms its competitors.