| dc.description.abstract |
Navigating non-holonomic robots is challenging because such systems cannot move directly in arbitrary directions; their motion is constrained by their kinematics. These constraints limit the effectiveness of traditional motion planning algorithms such as the Wavefront and A* algorithms. This work shows how reinforcement learning (RL), specifically Deep Q-Networks (DQN), can be employed to overcome these drawbacks and improve path planning in highly complex environments.
This project focuses on the design and performance analysis of an RL-based algorithm for navigating a non-holonomic robot in an 8×8 grid environment. Traditional methods such as Wavefront and A* were found to be ineffective because they neglect non-holonomic constraints and robot dynamics, and consequently fail to produce optimal, executable paths. DQN, a reinforcement learning method, was identified as a way to address these limitations.
The RL approach relies on sensor-based perception, a state-action mapping, and an idealized reward function, allowing the robot to navigate static obstacle configurations including sparse, clustered, and maze-like layouts. Performance was evaluated on metrics such as path length, traversal time, collision frequency, and adaptability. The results showed that the RL framework outperformed the traditional techniques, producing smooth, executable trajectories that respect the robot's non-holonomic constraints.
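The state-action mapping and reward design described above can be illustrated with a simplified tabular stand-in for the DQN agent (the approach described here approximates Q-values with a neural network; a lookup table is used below to keep the sketch self-contained). The grid layout, obstacle positions, reward magnitudes, and hyperparameters are hypothetical, and the non-holonomic constraint is modeled by including a heading in the state and allowing only forward motion and in-place rotations:

```python
import random

# Illustrative 8x8 grid with hypothetical obstacles; state = (row, col, heading).
# Headings: 0=N, 1=E, 2=S, 3=W. The action set respects a non-holonomic
# constraint: the robot may only move forward or rotate in place.
SIZE = 8
OBSTACLES = {(2, 2), (2, 3), (4, 1), (5, 5)}
GOAL = (7, 7)
ACTIONS = ("forward", "left", "right")
MOVES = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}

def step(state, action):
    """Apply one action; return (next_state, reward, done)."""
    r, c, h = state
    if action == "left":
        return (r, c, (h - 1) % 4), -1.0, False   # rotate in place
    if action == "right":
        return (r, c, (h + 1) % 4), -1.0, False
    dr, dc = MOVES[h]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < SIZE and 0 <= nc < SIZE) or (nr, nc) in OBSTACLES:
        return state, -10.0, False                # collision penalty, stay put
    if (nr, nc) == GOAL:
        return (nr, nc, h), 100.0, True           # goal reward
    return (nr, nc, h), -1.0, False               # small step cost

def train(episodes=4000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning; a neural net replaces this table in DQN."""
    rng = random.Random(seed)
    Q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = (0, 0, 1)                         # start at (0,0) facing East
        for _ in range(200):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
            state = nxt
            if done:
                break
    return Q

def rollout(Q, max_steps=200):
    """Follow the greedy policy; return the visited (row, col) path."""
    state, path = (0, 0, 1), [(0, 0)]
    for _ in range(max_steps):
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        state, _, done = step(state, action)
        path.append(state[:2])
        if done:
            break
    return path
```

Because rotations cost a step while contributing no progress, the learned policy naturally favors the smooth, low-turn paths that a holonomic planner such as A* would not be forced to produce.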
However, limitations were noted, including the significant computational cost of model training and a dependence on sensor precision. The study suggests future work on more advanced RL algorithms, multi-agent systems, and deployment on physical robots in real environments. This research highlights RL as a scalable approach for autonomous robots navigating complex environments and forms a basis for future development. |
en_US |