Pre-trained adapters:
Adapter for bart-base in the Houlsby architecture trained on the MRPC dataset for 15 epochs with early stopping and a learning rate of 1e-4.
Adapter in the Pfeiffer architecture trained on the MRPC task for 20 epochs with early stopping and a learning rate of 1e-4. See https://arxiv.org/pdf/2007.07779.pdf.
Pfeiffer adapter trained on the MRPC dataset.
Adapter in the Houlsby architecture trained on the MRPC task for 20 epochs with early stopping and a learning rate of 1e-4. See https://arxiv.org/pdf/2007.07779.pdf.
MRPC adapter (with head) trained using the `run_glue.py` script with an extension that retains the best checkpoint (out of 30 epochs).
Adapter for gpt2 in the Houlsby architecture trained on the MRPC dataset for 10 epochs with a learning rate of 1e-4.
Adapter for distilbert-base-uncased in the Pfeiffer architecture trained on the MRPC dataset for 15 epochs with early stopping and a learning rate of 1e-4.
Adapter for bart-base in the Pfeiffer architecture trained on the MRPC dataset for 15 epochs with early stopping and a learning rate of 1e-4.
Adapter for gpt2 in the Pfeiffer architecture trained on the MRPC dataset for 10 epochs with a learning rate of 1e-4.
Adapter for distilbert-base-uncased in the Houlsby architecture trained on the MRPC dataset for 15 epochs with early stopping and a learning rate of 1e-4. (A minimal training sketch covering these recipes follows below.)
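The recipes above share one basic pattern: add a Houlsby- or Pfeiffer-style adapter plus a classification head to a frozen pre-trained model and fine-tune only the adapter on GLUE MRPC. Below is a minimal training sketch, assuming the adapter-transformers library (v3, which provides `AutoModelWithHeads` and `AdapterTrainer`) and the Hugging Face `datasets` package. The learning rate, early stopping, and best-checkpoint selection mirror the descriptions above; batch size, sequence length, early-stopping patience, and the epoch count (which ranges from 10 to 30 across the adapters) are illustrative assumptions, not the exact settings used.

```python
from datasets import load_dataset
from transformers import (
    AutoModelWithHeads,
    AutoTokenizer,
    EarlyStoppingCallback,
    TrainingArguments,
)
from transformers.adapters import AdapterTrainer

model_name = "bert-base-uncased"  # example base model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithHeads.from_pretrained(model_name)

# Add a fresh MRPC adapter (use config="pfeiffer" for the Pfeiffer variants) and a
# matching classification head, then freeze everything except the adapter.
model.add_adapter("mrpc", config="houlsby")
model.add_classification_head("mrpc", num_labels=2)
model.train_adapter("mrpc")

# GLUE MRPC sentence pairs.
raw = load_dataset("glue", "mrpc")

def encode(batch):
    return tokenizer(
        batch["sentence1"], batch["sentence2"],
        truncation=True, padding="max_length", max_length=128,
    )

dataset = raw.map(encode, batched=True)

training_args = TrainingArguments(
    output_dir="mrpc-adapter",
    learning_rate=1e-4,
    num_train_epochs=15,             # 10-30 epochs across the adapters above
    per_device_train_batch_size=32,  # assumed, not taken from the descriptions
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # retain the best checkpoint
    metric_for_best_model="loss",
)

trainer = AdapterTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # early stopping
)
trainer.train()

# Export only the adapter (and head) weights.
model.save_adapter("mrpc-adapter/final", "mrpc")
```

Swapping the base model name and the adapter config string should yield the other variants listed above; the MRPC adapter trained via `run_glue.py` likewise relies on keeping the best checkpoint, which `load_best_model_at_end=True` approximates here.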
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the...
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the...
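Each of these adapters ships with its classification head and can be loaded in a few lines. This is a minimal inference sketch, assuming the adapter-transformers library, which provides `AutoModelWithHeads` and `load_adapter`; the example sentence pair is purely illustrative.

```python
from transformers import AutoModelWithHeads, AutoTokenizer

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Load the MRPC adapter together with its classification head and activate it.
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf")
model.active_adapters = adapter_name

# Paraphrase detection on a sentence pair (MRPC labels: 1 = paraphrase, 0 = not).
inputs = tokenizer(
    "The company said the deal was completed on Friday.",
    "The deal was finalized on Friday, the company said.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```

The `roberta-base` variant should work the same way, with `roberta-base` as the base model and `AdapterHub/roberta-base-pf-mrpc` as the adapter id.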