| dc.description.abstract |
Unmanned Aerial Vehicles (UAVs), commonly known as drones, are in increasing demand, now and in the future, in many fields such as package delivery, surveillance, path planning, military combat, traffic monitoring, construction supervision, and search and rescue operations. In most of these fields, UAVs must navigate different environments carrying a camera underneath and avoid obstacles without human interaction, that is, navigate autonomously. This research proposes a model for navigation with static obstacle avoidance in a map-based environment. A new method is put forth to control a UAV autonomously so that it completes a target-area coverage mission with arbitrary start locations and multiple landing positions. Several tactics for UAV navigation have been proposed, but in this work an end-to-end reinforcement learning approach is selected. Reinforcement learning is used to teach the agent a UAV control policy. This policy generalizes to new maps with varying battery energy constraints. The maximum flying range of UAVs remains severely limited despite recent advancements in battery technology. To train a double deep Q-network (DDQN) to generate control decisions for the UAV, we use map-like input channels to convey spatial information through convolutional network layers to the UAV agent, while considering the limited power budget and the target-area coverage aim. The proposed approach can be useful in a wide range of environments for UAVs and ground robots. Moreover, the proposed model is designed to find the best path in the environment. Experiments are performed in different situations to verify that the proposed model improves coverage ratio, landing ratio, and movement-step results. |
en_US |