Adaptive Cruise Control (ACC) is one of the most common advanced driving features and aims to reduce driver fatigue. Many control approaches for ACC are available, such as Proportional-Integral-Derivative (PID) control, Model Predictive Control (MPC), and reinforcement learning (RL). Recently, a deep RL technique called the deep deterministic policy gradient (DDPG) has been widely used for velocity control. DDPG combines deep learning (DL) with RL, using neural networks (NN) to estimate its actions. In this work, we utilize an attention-based DDPG model. The attention mechanism increases the overall effectiveness of the model by reducing focus on the less important features. The network structure consists of one hidden layer with 30 neurons and was deployed in our previous work. The reward function is designed to emphasize overall safety and comfort. In this paper, we introduce a new criterion, weather conditions, which has not been used before in the literature. We trained our model on a publicly available dataset. For testing, we generated simulated sensor data in the Mississippi State University Autonomous Vehicular Simulator (MAVS). Because the sensors are often weather-sensitive, which can degrade the performance of the ACC system, we introduced various scenarios within the simulation environment with differing weather conditions to assess the performance of the ACC model. The primary objective of this study is to evaluate the performance of the attention DDPG-based ACC model under varying weather conditions. The results show that the agent can maintain safety across a range of weather conditions.
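To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of an actor network in the spirit of the attention DDPG model above: a per-feature attention layer that down-weights less informative inputs, followed by a single hidden layer of 30 neurons. The state layout, action dimension, and use of PyTorch are illustrative assumptions.

```python
# Minimal sketch of an attention-weighted DDPG actor (illustrative; assumed
# state layout and PyTorch usage, not the paper's exact implementation).
import torch
import torch.nn as nn


class AttentionActor(nn.Module):
    def __init__(self, state_dim: int = 4, action_dim: int = 1, hidden: int = 30):
        super().__init__()
        # Learns per-feature attention weights, normalized with softmax.
        self.attn = nn.Linear(state_dim, state_dim)
        self.hidden = nn.Linear(state_dim, hidden)   # single hidden layer, 30 neurons
        self.out = nn.Linear(hidden, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.attn(state), dim=-1)  # reduce focus on less important features
        x = torch.relu(self.hidden(state * weights))
        return torch.tanh(self.out(x))                     # bounded control command


if __name__ == "__main__":
    actor = AttentionActor()
    # Hypothetical state: [ego speed, lead speed, gap, relative speed]
    action = actor(torch.tensor([[25.0, 23.0, 40.0, -2.0]]))
    print(action)
```

In a full DDPG setup, this actor would be paired with a critic network and trained off-policy from replayed transitions; the sketch only shows how an attention layer can gate the state features before the 30-neuron hidden layer.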