AdapterHub

Task Adapters

Pre-trained models covered: bert, bart, xlm-roberta, distilbert, gpt2, roberta, mbart

QNLI

Question Natural Language Inference (QNLI) is a version of SQuAD converted into a binary classification task. The positive examples are (question, sentence) pairs in which the sentence contains the correct answer; the negative examples are (question, sentence) pairs from the same paragraph in which the sentence does not contain the answer.
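
As a minimal inference sketch for an adapter from this listing, assuming the `adapters` library (successor to `adapter-transformers`; older releases exposed the same calls via `transformers.AutoModelWithHeads`). The identifier `nli/qnli@ukp` and model names are taken from the entries below; identifier resolution and label order should be checked against the current library docs:

```python
# Hedged sketch: load a QNLI adapter plus its classification head and
# classify one (question, sentence) pair. Assumes `pip install adapters`.
import torch
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Downloads the QNLI adapter together with its prediction head.
adapter_name = model.load_adapter("nli/qnli@ukp")
model.set_active_adapters(adapter_name)

question = "When was the tower completed?"
sentence = "The tower was completed in March 1889."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Binary decision: does the sentence contain the answer to the question?
print(logits.argmax(dim=-1).item())
```
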
nli/qnli@ukp (gpt2)
1 version | Architecture: houlsby | Non-linearity: swish | Reduction factor: 16

Adapter for gpt2 in the Houlsby architecture, trained on the QNLI dataset for 10 epochs with a learning rate of 1e-4.
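
The architecture fields above map directly to an adapter config object. A sketch of how this entry's settings would be reproduced at load time, assuming the `adapters` API (`HoulsbyConfig` with its `non_linearity` and `reduction_factor` arguments):

```python
from adapters import AutoAdapterModel, HoulsbyConfig

model = AutoAdapterModel.from_pretrained("gpt2")
# swish non-linearity and reduction factor 16, as listed for this adapter;
# the config passed here must match the one the adapter was trained with.
config = HoulsbyConfig(non_linearity="swish", reduction_factor=16)
adapter_name = model.load_adapter("nli/qnli@ukp", config=config)
model.set_active_adapters(adapter_name)
```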

nli/qnli@ukp (roberta-large)
1 version | Architecture: pfeiffer

QNLI adapter (with head) trained using the `run_glue.py` script with an extension that retains the best checkpoint (out of 10 epochs).

nli/qnli@ukp (facebook/bart-base)
1 version | Architecture: houlsby | Non-linearity: swish | Reduction factor: 16

Adapter for bart-base in the Houlsby architecture, trained on the QNLI dataset for 15 epochs with early stopping and a learning rate of 1e-4.

nli/qnli@ukp (roberta-base)
1 version | Architecture: pfeiffer

QNLI adapter (with head) trained using the `run_glue.py` script with an extension that retains the best checkpoint (out of 15 epochs).

nli/qnli@ukp (distilbert-base-uncased)
1 version | Architecture: houlsby

Adapter for distilbert-base-uncased in the Houlsby architecture, trained on the QNLI dataset for 15 epochs with early stopping and a learning rate of 1e-4.

nli/qnli@ukp (bert-base-uncased)
1 version | Architecture: houlsby

Adapter in the Houlsby architecture, trained on the QNLI task for 20 epochs with early stopping and a learning rate of 1e-4. See https://arxiv.org/pdf/2007.07779.pdf.

nli/qnli@ukp (distilbert-base-uncased)
1 version | Architecture: pfeiffer

Adapter for distilbert-base-uncased in the Pfeiffer architecture, trained on the QNLI dataset for 15 epochs with early stopping and a learning rate of 1e-4.

nli/qnli@ukp (roberta-base)
1 version | Architecture: houlsby

QNLI adapter (with head) trained using the `run_glue.py` script with an extension that retains the best checkpoint (out of 15 epochs).

nli/qnli@ukp (roberta-large)
1 version | Architecture: houlsby

QNLI adapter (with head) trained using the `run_glue.py` script with an extension that retains the best checkpoint (out of 15 epochs).

nli/qnli@ukp (gpt2)
1 version | Architecture: pfeiffer | Non-linearity: relu | Reduction factor: 16

Adapter for gpt2 in the Pfeiffer architecture, trained on the QNLI dataset for 10 epochs with a learning rate of 1e-4.
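
The Pfeiffer entries map to a config object in the same way as the Houlsby sketch above; under the same assumptions:

```python
from adapters import PfeifferConfig

# relu non-linearity and reduction factor 16, as listed for this adapter.
config = PfeifferConfig(non_linearity="relu", reduction_factor=16)
```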

nli/qnli@ukp (facebook/bart-base)
1 version | Architecture: pfeiffer | Non-linearity: relu | Reduction factor: 16

Adapter for bart-base in the Pfeiffer architecture, trained on the QNLI dataset for 15 epochs with early stopping and a learning rate of 1e-4.

nli/qnli@ukp (bert-base-uncased)
1 version | Architecture: pfeiffer

Adapter in the Pfeiffer architecture, trained on the QNLI task for 20 epochs with early stopping and a learning rate of 1e-4. See https://arxiv.org/pdf/2007.07779.pdf.

AdapterHub/bert-base-uncased-pf-qnli (bert-base-uncased)
Hosted on huggingface.co

# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the...

AdapterHub/roberta-base-pf-qnli (roberta-base)
Hosted on huggingface.co

# Adapter `AdapterHub/roberta-base-pf-qnli` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the...
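
The two huggingface.co-hosted entries are loaded by their Hub repo ID rather than the `@ukp` identifier. A sketch, again assuming the `adapters` library (the `source="hf"` argument stems from adapter-transformers and may be optional in newer releases):

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
# Repo ID taken from the listing above; the prediction head is included.
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qnli", source="hf")
model.set_active_adapters(adapter_name)
```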
