# Transformers Example with Ignite

Source code on GitHub.

In this example, we show how to use _Ignite_ to finetune a transformer model:

- on 1 or more GPUs or TPUs
- compute training/validation metrics
- log learning rate, metrics, etc.
- save the best model weights

Configurations:

- [x] single GPU
- [x] multi GPUs on a single node
- [x] TPUs on Colab

Requirements:

- pytorch-ignite: `pip install pytorch-ignite`
- transformers: `pip install transformers`
- datasets: `pip install datasets`
- tqdm: `pip install tqdm`
- tensorboardX: `pip install tensorboardX`
- fire: `pip install fire`
- clearml: `pip install clearml`

Alternatively, install all requirements with `pip install -r requirements.txt`.
Run the example on a single GPU:
```bash
python main.py run
```
If needed, adjust the batch size to your GPU device with the `--batch_size` argument.
The default model is `bert-base-uncased`. In case you need to change it, use the `--model` argument; for details on which models can be used, refer here.
Example:
```bash
# Using DistilBERT, which has 40% fewer parameters than bert-base-uncased
python main.py run --model="distilbert-base-uncased"
```
For details on accepted arguments:
```bash
python main.py run -- --help
```
#### Single node, multiple GPUs
Let's start training on a single node with 2 GPUs:

```bash
# using torch.distributed.launch
python -u -m torch.distributed.launch --nproc_per_node=2 --use_env main.py run --backend="nccl"
```

or

```bash
# using function spawn inside the code
python -u main.py run --backend="nccl" --nproc_per_node=2
```
##### Using Horovod as distributed backend
Please make sure Horovod is installed before running.
Let's start training on a single node with 2 GPUs:
`b...