Learning-Based Multi-Robot Lane Navigation: Scalable Trajectory Prediction using Neural Networks

Problem Statement & Motivation

Accurate trajectory prediction is crucial for safe and efficient robot navigation. While high-fidelity simulators such as Webots produce reliable results, they do not scale to real-time deployment or multi-agent scenarios. The goal of this project is to implement a scalable neural network model that generates robot trajectories efficiently with acceptable precision.

The task involves navigating a cyclic path with a variable number of lanes, introducing additional planning complexity.

Our Method

We investigated a range of machine learning techniques to model robot motion:

  1. Environment Setup:
    • The navigation task was simulated in Webots (a minimal controller sketch follows this list).
    • The environment includes a cyclic lane path with adjustable width and complexity.
  2. Trajectory Prediction:
    • Input: the robot's initial position.
    • Output: the next positions, predicted incrementally by the neural network.
    • The model is autoregressive: each predicted step becomes the input for the next.
  3. Learning Approaches:
    • MLP, RNN, and GNN: struggled to maintain trajectory accuracy.
    • Reinforcement Learning (PPO): yielded stable behaviors.
    • Imitation Learning: learned from expert trajectories.
    • Combined RL + Imitation Learning: achieved the best performance in terms of accuracy and scalability (see the combined-training sketch after this list).
  4. Simulation Loop:
    • The predicted movement is integrated step-by-step (xₜ → xₜ₊₁).
    • The learned policy generalizes over multiple steps (see the rollout sketch after this list).
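
The post does not include code, so the following is a minimal sketch of what a Webots controller loop for this setup might look like, assuming a differential-drive robot with a GPS used to log positions; the device names ("left wheel motor", "right wheel motor", "gps") and the constant wheel speeds are illustrative assumptions, not the project's actual configuration.

```python
from controller import Robot  # Webots Python controller API

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Differential-drive motors, driven in velocity mode (device names assumed).
left = robot.getDevice("left wheel motor")
right = robot.getDevice("right wheel motor")
for motor in (left, right):
    motor.setPosition(float("inf"))
    motor.setVelocity(0.0)

# GPS used here to log ground-truth positions, e.g. for expert trajectories.
gps = robot.getDevice("gps")
gps.enable(timestep)

trajectory = []
while robot.step(timestep) != -1:
    x, y, z = gps.getValues()
    trajectory.append((x, y))
    # A learned policy would set the wheel speeds here; constants as a placeholder.
    left.setVelocity(2.0)
    right.setVelocity(2.0)
```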
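
To make the autoregressive prediction and the step-by-step integration (xₜ → xₜ₊₁) concrete, here is an illustrative rollout sketch in PyTorch; the network architecture, state dimension, and step count are assumptions for illustration, not the model actually trained in the project.

```python
import torch
import torch.nn as nn

class StepPredictor(nn.Module):
    """Predicts the displacement from x_t to x_{t+1} given the current position."""
    def __init__(self, state_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x):
        return self.net(x)

def rollout(model, x0, steps):
    """Integrate predictions step by step: x_{t+1} = x_t + f(x_t)."""
    x = x0
    trajectory = [x]
    with torch.no_grad():
        for _ in range(steps):
            x = x + model(x)  # each predicted step becomes the next input
            trajectory.append(x)
    return torch.stack(trajectory)

model = StepPredictor()
start = torch.tensor([0.0, 0.0])               # initial robot position
predicted = rollout(model, start, steps=100)   # (101, 2) predicted positions
```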
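
The post does not specify how reinforcement learning and imitation learning were combined; a common pattern is to pretrain the policy on expert trajectories with behavior cloning and then fine-tune it with PPO. The sketch below shows only the behavior-cloning stage, with random placeholder tensors standing in for expert state-action pairs collected in Webots; all dimensions and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Policy network: maps a state observation to wheel-velocity commands (sizes assumed).
policy = nn.Sequential(
    nn.Linear(4, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholders for expert data; in practice these would come from Webots expert runs.
expert_states = torch.randn(1024, 4)
expert_actions = torch.randn(1024, 2)

for epoch in range(50):
    pred = policy(expert_states)
    loss = loss_fn(pred, expert_actions)  # imitate the expert's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The pretrained policy could then be fine-tuned with an RL objective such as PPO.
```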

Evaluation & Results

  • The best-performing model used imitation learning guided by reinforcement learning, balancing precision with efficiency.

Observations

  • Training time is substantial and environment-specific.
  • The learned controller is scalable and can potentially extend to multi-agent settings with further training.