# TorchElastic

TorchElastic allows you to launch distributed PyTorch jobs in a fault-tolerant and elastic manner. For the latest documentation, please refer to our website.

## Requirements

torchelastic requires:

* python3 (3.8+)
* torch
* etcd

## Installation

```bash
pip install torchelastic
```

## Quickstart

Fault-tolerant on 4 nodes, 8 trainers/node, total 4 * 8 = 32 trainers. Run the following on all nodes.

```bash
python -m torchelastic.distributed.launch
            --nnodes=4
            --nproc_per_node=8
            --rdzv_id=JOB_ID
            --rdzv_backend=etcd
            --rdzv_endpoint=ETCD_HOST:ETCD_PORT
            YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
```
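For reference, `YOUR_TRAINING_SCRIPT.py` is an ordinary `torch.distributed` script. Below is a minimal sketch, assuming the launcher populates the standard `MASTER_ADDR`/`MASTER_PORT`/`RANK`/`WORLD_SIZE` environment variables so that `init_process_group(init_method="env://")` works; the model, data, and loop are illustrative only.

```python
# Minimal sketch of a training script driven by the torchelastic launcher.
# Assumption: the launcher exports MASTER_ADDR, MASTER_PORT, RANK and
# WORLD_SIZE in the environment (check the docs for the exact contract
# of your torchelastic version).
import torch
import torch.distributed as dist
import torch.nn as nn


def main():
    # Join the process group via the env:// rendezvous set up by the launcher.
    # On GPU nodes you would typically read LOCAL_RANK, call
    # torch.cuda.set_device(local_rank) and use the "nccl" backend instead.
    dist.init_process_group(backend="gloo", init_method="env://")

    # Toy model wrapped in DistributedDataParallel; replace with your real model.
    model = nn.parallel.DistributedDataParallel(nn.Linear(10, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(100):
        inputs = torch.randn(32, 10)   # stand-in for your real data loader
        targets = torch.randn(32, 10)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        if dist.get_rank() == 0 and step % 10 == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```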

Elastic on 1 ~ 4 nodes, 8 trainers/node, total 8 ~ 32 trainers. The job starts as soon as 1 node is healthy; you may add up to 4 nodes.

```bash
python -m torchelastic.distributed.launch
            --nnodes=1:4
            --nproc_per_node=8
            --rdzv_id=JOB_ID
            --rdzv_backend=etcd
            --rdzv_endpoint=ETCD_HOST:ETCD_PORT
            YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
```
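Because the set of nodes can change while an elastic job is running, workers may be restarted mid-training. A common pattern (sketched below as an assumption, not something prescribed by this README) is to save a checkpoint periodically and reload the latest one whenever the script starts, so restarted workers resume instead of beginning from scratch; the path and layout here are hypothetical.

```python
# Illustrative checkpoint/resume helpers for elastic jobs (assumed pattern,
# not taken from this README): reload the latest checkpoint on startup so
# workers restarted after a membership change resume training.
import os

import torch

CHECKPOINT_PATH = "/tmp/checkpoint.pt"  # hypothetical path; use shared storage in practice


def load_checkpoint(model, optimizer):
    """Return the step to resume from, restoring state if a checkpoint exists."""
    if os.path.exists(CHECKPOINT_PATH):
        state = torch.load(CHECKPOINT_PATH, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        return state["step"] + 1
    return 0


def save_checkpoint(model, optimizer, step):
    """Persist model/optimizer state so a restarted worker can resume."""
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "step": step},
        CHECKPOINT_PATH,
    )
```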
## Contributing

We welcome PRs. See the CONTRIBUTING file.

## License

torchelastic is BSD licensed, as found in the LICENSE file.
