AdapterHub

Task Adapters

Pre-trained model architectures: bert, bart, xlm-roberta, distilbert, gpt2, roberta, mbart

SQuAD 2.0

SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
Website · 🤗 huggingface.co
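To make the abstention requirement concrete, here is a minimal sketch using the Hugging Face `transformers` question-answering pipeline; the checkpoint name is an illustrative assumption, and any model fine-tuned on SQuAD 2.0 would work:

```python
from transformers import pipeline

# Checkpoint name is an illustrative choice; any SQuAD 2.0-trained QA model works.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "The Eiffel Tower was completed in 1889 and stands in Paris."

# Answerable: the span is supported by the paragraph.
print(qa(question="When was the Eiffel Tower completed?", context=context))

# Unanswerable: with handle_impossible_answer=True the pipeline may return
# an empty answer instead of forcing an unsupported span.
print(qa(question="Who designed the Sydney Opera House?",
         context=context,
         handle_impossible_answer=True))
```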
qa/squad2@lohfink-rossi · facebook/bart-large
1 version · Architecture: lohfink-rossi-leaveout · Non-linearity: relu · Reduction factor: 16

Adapter for bart-large using a custom architecture (Lohfink-Rossi-Leaveout), trained on the SQuAD 2.0 dataset for 15 epochs with a cosine-with-restarts learning rate scheduler and a learning rate of 0.001.

qa/squad2@ukp · bert-base-uncased
1 version · Architecture: pfeiffer

Adapter for bert-base-uncased in Pfeiffer architecture trained on the SQuAD 2.0 dataset for 15 epochs with early stopping and a learning rate of 1e-4.
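Adapters listed with hub identifiers like `qa/squad2@ukp` can be pulled down by name. A minimal sketch, assuming the classic `adapter-transformers` API (`AutoModelWithHeads`; the newer `adapters` package renames this to `AutoAdapterModel`):

```python
from transformers import AutoModelWithHeads, AutoTokenizer

# Base model with adapter and prediction-head support (adapter-transformers API).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Resolve the hub identifier and download the Pfeiffer-architecture adapter.
adapter_name = model.load_adapter("qa/squad2@ukp", config="pfeiffer")

# Activate it so every forward pass routes through the adapter.
model.active_adapters = adapter_name
```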

qa/squad2@ukp · roberta-base
1 version · Architecture: pfeiffer

Adapter for roberta-base in Pfeiffer architecture trained on the SQuAD 2.0 dataset for 15 epochs with early stopping and a learning rate of 1e-4.

qa/squad2@ukp · roberta-base
1 version · Architecture: houlsby

Adapter for roberta-base in Houlsby architecture trained on the SQuAD 2.0 dataset for 15 epochs with early stopping and a learning rate of 1e-4.

qa/squad2@ukp · bert-base-uncased
1 version · Architecture: houlsby

Adapter for bert-base-uncased in Houlsby architecture trained on the SQuAD 2.0 dataset for 15 epochs with early stopping and a learning rate of 1e-4.

AdapterHub/bert-base-uncased-pf-squad_v2 · bert-base-uncased
Hosted on huggingface.co

# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the...
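These Hugging Face-hosted adapters load the same way, with `source="hf"` pointing `load_adapter` at the Model Hub. A minimal sketch following the pattern shown on the adapter's model card:

```python
from transformers import AutoModelWithHeads

# Base model with adapter support (adapter-transformers API).
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Download the adapter, including its question-answering head, from the HF Hub.
adapter_name = model.load_adapter(
    "AdapterHub/bert-base-uncased-pf-squad_v2", source="hf"
)

# Activate the adapter for inference.
model.active_adapters = adapter_name
```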

AdapterHub/roberta-base-pf-squad_v2 · roberta-base
Hosted on huggingface.co

# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the...
