Pre-trained adapters:
Pfeiffer adapter trained with masked language modelling on Mingrelian Wikipedia articles for 50k steps with a batch size of 64.
Houlsby adapter trained with masked language modelling on Mingrelian Wikipedia articles for 50k steps with a batch size of 64.
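For illustration, here is a minimal sketch of loading and activating one of these language adapters with the AdapterHub `adapters` library. The base model name (`bert-base-multilingual-cased`) and the adapter identifier (`xmf/wiki@ukp`, using the ISO 639-3 code for Mingrelian) are assumptions for the example, not confirmed identifiers; substitute the actual values from the hub entry.

```python
# A minimal sketch, assuming the AdapterHub `adapters` library and the
# hypothetical adapter identifier "xmf/wiki@ukp" (xmf = ISO 639-3 code
# for Mingrelian). Substitute the real identifier from the hub entry.
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

# Load the multilingual base model the adapter is assumed to sit on.
model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Load the Pfeiffer-style language adapter; pass config="houlsby"
# to select the Houlsby variant instead.
adapter_name = model.load_adapter("xmf/wiki@ukp", config="pfeiffer")

# Activate the adapter so it is applied on every forward pass.
model.set_active_adapters(adapter_name)
```

The `config` argument distinguishes the two architectures listed above: Pfeiffer adapters insert a single bottleneck after the feed-forward block of each layer, while Houlsby adapters add one after the attention block as well.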