Flow is a computational framework for deep reinforcement learning and control experiments in traffic microsimulation.
See our website for more information on the application of Flow to several mixed-autonomy traffic scenarios, as well as additional results and videos.
If you use Flow for academic research, you are highly encouraged to cite our paper:
C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, A. Bayen, "Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control," CoRR, vol. abs/1710.05465, 2017. [Online]. Available: https://arxiv.org/abs/1710.05465
If you use the benchmarks, you are highly encouraged to cite our paper:
Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, F., ... & Bayen, A. M. (2018). Benchmarks for reinforcement learning in mixed-autonomy traffic. In Conference on Robot Learning (pp. 399-409). Available: https://proceedings.mlr.press/v87/vinitsky18a.html
Contributors
Flow is supported by the Mobile Sensing Lab at UC Berkeley and by Amazon AWS Machine Learning research grants. The contributors are listed on the Flow Team Page.