Simulation study of multiple intelligent vehicle control using stochastic learning automata

An intelligent controller is described for an automated vehicle that plans its trajectory based on received sensor and communication data. The controller is designed using stochastic learning automata. Using data received from on-board sensors, two automata (one for lateral and one for longitudinal actions) learn the best possible actions for avoiding collisions. The system has the advantage of being able to work in unmodeled stochastic environments. Because the presence of a large number of vehicles makes the system highly complex, computer simulation is used to test the effectiveness of the learning automata method. Simulations of simultaneous lateral and longitudinal control of a single vehicle using this method yield encouraging results. Multiple-vehicle simulations are also presented, and the resulting complexity is discussed. Analysis of these situations is made possible by studying the interacting reward-penalty mechanisms in the individual vehicles. Simple scenarios consisting of multiple vehicles are defined as collections of discrete states, and each state is treated as a game of automata. Defining the physical environment as a series of discrete state transitions associated with a “stationary automata environment” is the key to this analysis and to the design of the intelligent controller. The aim is to obtain necessary and sufficient rules for the state transitions to reach the goal state.
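The reward-penalty mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a standard linear reward-penalty (L<sub>R-P</sub>) update scheme, and the class name, step sizes, and the example lateral action set are illustrative placeholders.

```python
import random

class LearningAutomaton:
    """Stochastic learning automaton with an assumed linear
    reward-penalty (L_R-P) update over a finite action set."""

    def __init__(self, actions, reward_step=0.1, penalty_step=0.05):
        self.actions = list(actions)
        self.reward_step = reward_step    # 'a' in the L_R-P scheme
        self.penalty_step = penalty_step  # 'b' in the L_R-P scheme
        n = len(self.actions)
        self.probs = [1.0 / n] * n        # start from a uniform distribution

    def choose(self):
        """Sample an action index according to the current probabilities."""
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r < acc:
                return i
        return len(self.probs) - 1

    def update(self, chosen, rewarded):
        """Reinforce the chosen action on reward; redistribute on penalty."""
        a, b, n = self.reward_step, self.penalty_step, len(self.probs)
        for j in range(n):
            if rewarded:
                # Move probability mass toward the rewarded action.
                if j == chosen:
                    self.probs[j] += a * (1.0 - self.probs[j])
                else:
                    self.probs[j] -= a * self.probs[j]
            else:
                # Move probability mass away from the penalized action.
                if j == chosen:
                    self.probs[j] *= (1.0 - b)
                else:
                    self.probs[j] = b / (n - 1) + (1.0 - b) * self.probs[j]
```

In the setting the abstract describes, one such automaton would govern lateral actions (e.g. lane changes) and a second would govern longitudinal actions (e.g. speed adjustments), with the reward/penalty signal derived from sensor feedback about collision risk; both automata interact only through the shared environment, which is what makes each discrete multi-vehicle state analyzable as a game of automata.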


Artificial Intelligence and Robotics | Controls and Control Theory | Systems and Communications | Transportation | Urban Studies and Planning


Use Find in Your Library, contact the author, or use interlibrary loan to obtain a copy of the article. The publisher's copyright policy allows the author to archive the post-print (the author's final manuscript). When the post-print becomes available or the publisher's policy changes, the article will be deposited.