
Task Adapters

Available pre-trained model architectures: bart, xlm-roberta, bert, roberta, distilbert, gpt2, mbart, t5, xmod

MLKI_TS

Enhances sentence-level factual triple knowledge in language models; suitable for language modeling tasks.
mlki/ts@mlki for bert-base-multilingual-cased
1 version · Architecture: pfeiffer · Non-linearity: relu · Reduction factor: 16 · Head: none

Knowledge adapter set for multilingual knowledge graph integration. This adapter provides sentence-level factual triple enhancement and was trained on triples from T-REx covering 84 languages.
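
For reference, the metadata above corresponds to a standard Pfeiffer bottleneck adapter configuration. A minimal sketch of reconstructing that configuration, assuming the adapter-transformers library (newer releases of the adapters package call the same layout SeqBnConfig):

    # Adapter config implied by the card metadata above.
    # Assumption: adapter-transformers API; names differ in newer releases.
    from transformers.adapters import PfeifferConfig

    config = PfeifferConfig(
        reduction_factor=16,   # bottleneck width = hidden_size / 16
        non_linearity="relu",  # activation inside the bottleneck
    )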

mlki/ts@mlki for xlm-roberta-base
1 version · Architecture: pfeiffer · Non-linearity: relu · Reduction factor: 16 · Head: none

Knowledge adapter set for multilingual knowledge graph integration. This adapter provides sentence-level factual triple enhancement and was trained on triples from T-REx covering 84 languages.

mlki/ts@mlki for xlm-roberta-large
1 version · Architecture: pfeiffer · Non-linearity: relu · Reduction factor: 16 · Head: none

Knowledge adapter set for multilingual knowledge graph integration. This adapter provides sentence-level factual triple enhancement and was trained on triples from T-REx covering 84 languages.
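
To use one of these adapters, load it onto the matching backbone model and activate it. A minimal sketch, assuming the adapter-transformers library (class and flag names may differ in newer releases of the adapters package):

    # Load the sentence-level triple-knowledge adapter onto its backbone.
    # Assumption: adapter-transformers API; "ah" selects the AdapterHub source.
    from transformers import AutoAdapterModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")

    # Fetch mlki/ts@mlki from AdapterHub and register it on the model.
    adapter_name = model.load_adapter("mlki/ts@mlki", source="ah")

    # Activate the adapter so it runs in every forward pass.
    model.set_active_adapters(adapter_name)

    inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
    outputs = model(**inputs)

The same pattern applies to the xlm-roberta-base and xlm-roberta-large checkpoints; only the backbone name changes.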
