AdapterHub

Task Adapters

Pre-trained model: All architectures (bert, bart, xlm-roberta, distilbert, gpt2, roberta, mbart)

SWAG

Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations) is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning.

The dataset consists of 113k multiple-choice questions about grounded situations (73k training, 20k validation, 20k test). Each question is a video caption from LSMDC or ActivityNet Captions, with four answer choices about what might happen next in the scene. The correct answer is the (real) video caption for the next event in the video; the three incorrect answers are adversarially generated and human-verified, so as to fool machines but not humans.

SWAG aims to be a benchmark for evaluating grounded commonsense NLI and for learning representations. The full data contain more information, but the regular configuration is more interesting for modeling (note that the regular data are shuffled). The test set for leaderboard submission uses the regular configuration.
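For concreteness, here is a small sketch of what these examples look like when loaded with the 🤗 `datasets` library. The `regular` configuration and the field names (`sent1`, `sent2`, `ending0`–`ending3`, `label`) follow the dataset card; treat the snippet as illustrative.

```python
# A minimal sketch of inspecting SWAG with the 🤗 `datasets` library,
# assuming the `regular` configuration and its documented field names.
from datasets import load_dataset

swag = load_dataset("swag", "regular")
example = swag["train"][0]

# Each example is a grounded context plus four candidate endings.
context = f"{example['sent1']} {example['sent2']}"
endings = [example[f"ending{i}"] for i in range(4)]

print(context)
for i, ending in enumerate(endings):
    marker = "->" if i == example["label"] else "  "  # `label` indexes the real next event
    print(f"{marker} {i}: {ending}")
```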
`AdapterHub/bert-base-uncased-pf-swag` (for `bert-base-uncased`, hosted on huggingface.co)

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the...

`AdapterHub/roberta-base-pf-swag` (for `roberta-base`, hosted on huggingface.co)

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/)...
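Either adapter can be plugged into its base model at inference time. The following is a minimal sketch using the `adapters` library (the maintained successor to `adapter-transformers`); exact method names can vary between library versions, so treat it as illustrative rather than canonical.

```python
# A minimal sketch of loading a SWAG adapter, assuming the `adapters`
# library (https://docs.adapterhub.ml); API details may vary by version.
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Download the adapter (with its multiple-choice head) from the Hub
# and activate it for inference.
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-swag")
model.set_active_adapters(adapter_name)
```

Once the adapter is active, the model scores each of a question's four candidate endings, and the highest-scoring ending is taken as the prediction.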
