Language models


To evaluate the performance of a specific language model, browse the available models below.

Available language models
| Name | Description | Owner | Language | Author | Date added | Docker image | Status | Average performance |
|---|---|---|---|---|---|---|---|---|
| Transformer XL | — | Jon G | English | Zihang Dai et al. | 2020-01-21 | `cpllab/language-models:transformer-xl` | Validated | 76.81% |
| JRNN | — | Jon G | English | Jozefowicz et al. | 2020-01-21 | `cpllab/language-models:jrnn` | Validated | 76.09% |
| Vanilla LSTM | — | Jon G | English | Hochreiter & Schmidhuber | 2020-01-30 | `cpllab/language-models:vanilla-lstm` | Validated | 65.59% |
| RNNG | — | Jon G | English | Dyer et al. | 2020-01-30 | `cpllab/language-models:rnng` | Validated | 74.22% |
| Ordered Neurons | — | Jon G | English | Shen et al. | 2020-01-30 | `cpllab/language-models:ordered-neurons` | Validated | 72.47% |
| GPT-2 | — | Jon G | English | Radford et al. (OpenAI) | 2020-01-21 | `cpllab/language-models:gpt2` | Validated | 84.93% |
| TinyLSTM | — | Jon G | English | Hochreiter & Schmidhuber | 2020-07-06 | `cpllab/language-models:tinylstm` | Validated | 63.19% |
| BERT | — | Héctor Javier | English | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 2020-09-25 | `phenompeople/bert-server:UNCASED_EN_BASE` | Not validated | — |
| GPT-2 XL | — | Jon G | English | Radford et al. (OpenAI) | 2020-01-21 | `cpllab/language-models:gpt2-xl` | Validated | 89.97% |
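Each model in the table ships as a Docker image, so obtaining one is a standard `docker pull` of the listed image reference. A minimal sketch, using the GPT-2 image from the table (the image's entrypoint and runtime flags are not documented here, so the `docker run` invocation is an assumption; consult the project docs for the actual interface):

```shell
# Image reference taken from the "Docker image" column above.
IMAGE="cpllab/language-models:gpt2"

if command -v docker >/dev/null 2>&1; then
  # Fetch the image from Docker Hub.
  docker pull "$IMAGE"
  # Illustrative run only: flags and arguments depend on the image's
  # entrypoint, which is not specified in this listing.
  docker run --rm "$IMAGE"
else
  echo "docker not installed; would pull $IMAGE"
fi
```

The same pattern applies to any other row: substitute the image reference from the "Docker image" column.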