
Welcome to ElegantRL!


ElegantRL is an open-source, massively parallel framework for deep reinforcement learning (DRL) algorithms implemented in PyTorch. We aim to provide a next-generation framework that embraces recent breakthroughs, e.g., massively parallel simulation, ensemble methods, and population-based training.

ElegantRL features strong scalability, elasticity, and lightweightness, allowing users to conduct efficient training on anything from a single GPU to hundreds of GPUs:

  • Scalability: ElegantRL fully exploits the parallelism of DRL algorithms at multiple levels, making it easily scale out to hundreds or thousands of computing nodes on a cloud platform, say, a SuperPOD platform with thousands of GPUs.

  • Elasticity: ElegantRL can elastically allocate computing resources on the cloud, which helps it adapt to available resources and prevents over- and under-provisioning.

  • Lightweightness: the core code is fewer than 1,000 lines (see elegantrl_helloworld).

  • Efficiency: in many test cases, it is more efficient than Ray RLlib.
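The parallelism that the features above rely on can be illustrated with a toy vectorized environment: a batch of simulations is stepped with a single array operation instead of a Python loop. This is a hypothetical sketch of the idea only (the `VecPointEnv` class is invented for illustration and is not part of ElegantRL's API):

```python
import numpy as np

class VecPointEnv:
    """Toy batch of 1-D point-mass environments stepped together.
    Illustrative only -- not ElegantRL's actual worker/learner code."""

    def __init__(self, num_envs: int):
        self.num_envs = num_envs
        self.pos = np.zeros(num_envs)  # one scalar state per environment

    def step(self, actions: np.ndarray):
        # A single vectorized operation advances every environment at once;
        # on a GPU tensor library the same pattern scales to thousands of envs.
        self.pos += actions
        rewards = -np.abs(self.pos)  # reward for staying near the origin
        return self.pos.copy(), rewards

envs = VecPointEnv(num_envs=1024)
obs, rew = envs.step(np.full(1024, 0.1))
```

The same batched-stepping pattern is what makes massively parallel simulation on GPUs effective: the per-step cost grows sub-linearly in the number of environments.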

ElegantRL implements the following DRL algorithms:

  • DDPG, TD3, SAC, A2C, PPO, REDQ for continuous actions

  • DQN, DoubleDQN, D3QN, PPO-Discrete for discrete actions

  • QMIX, VDN; MADDPG, MAPPO, MATD3 for multi-agent RL

For beginners, we maintain ElegantRL-HelloWorld as a tutorial. It is a lightweight version of ElegantRL with fewer than 1,000 lines of core code. More details are available here.


ElegantRL generally requires:

  • Python>=3.6

  • PyTorch>=1.0.2

  • gym, matplotlib, numpy, pybullet, torch, opencv-python, box2d-py.
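The dependency list above can be captured in a requirements.txt for reproducible setups. Only the PyTorch minimum version is stated in these docs; the remaining packages are left unpinned here rather than guessing versions:

```text
# requirements.txt (sketch based on the list above)
torch>=1.0.2
gym
matplotlib
numpy
pybullet
opencv-python
box2d-py
```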

You can simply install ElegantRL from PyPI with the following command:

pip3 install erl --upgrade

Or install the newest version from GitHub:

git clone https://github.com/AI4Finance-Foundation/ElegantRL.git
cd ElegantRL
pip3 install .


Indices and tables