ALBERT, a deep learning model for natural language processing

Google has released “A Lite BERT (ALBERT)” as open source. It is positioned as a lightweight version of “BERT”, Google’s deep learning model for natural language processing. Google says that ALBERT achieves accuracy comparable to BERT with far fewer parameters.

ALBERT (A Lite BERT) is the lightweight version of “BERT (Bidirectional Encoder Representations from Transformers)”, the self-supervised language-representation method that Google announced in 2018. Google reduced the number of parameters in BERT, a deep learning model for natural language processing (NLP), chiefly by factorizing the embedding parameters and sharing parameters across the Transformer layers. By allocating the model’s capacity more efficiently, they were able to make dramatic improvements and make the model less constrained by memory limitations.
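To give a rough sense of the embedding factorization, here is a back-of-the-envelope calculation in Python. The sizes used (a 30,000-token vocabulary, 768-dimensional hidden layers, 128-dimensional embeddings) are close to the published base configurations but should be treated as illustrative assumptions, not Google’s exact numbers:

```python
# Illustrative arithmetic: factorizing the vocabulary embedding matrix
# (one V x H matrix) into a V x E matrix plus an E x H projection.
V = 30_000  # vocabulary size (assumed, close to the base configs)
H = 768     # hidden size of the Transformer layers
E = 128     # ALBERT's smaller embedding size

bert_style_params = V * H            # embed tokens directly at hidden size
albert_style_params = V * E + E * H  # embed small, then project up to H

print(f"BERT-style embeddings:   {bert_style_params:,} parameters")
print(f"ALBERT-style embeddings: {albert_style_params:,} parameters")
# BERT-style embeddings:   23,040,000 parameters
# ALBERT-style embeddings: 3,938,304 parameters
```

The same idea motivates cross-layer sharing: if all layers reuse one set of weights, the layer parameters no longer grow with network depth.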

Despite having fewer parameters, ALBERT is said to retain BERT’s accuracy, and it has been reported that ALBERT even outperformed BERT on 12 NLP tasks, including the Stanford Question Answering Dataset (SQuAD) v1.1 and v2.0 and the RACE benchmark.

Google has already released ALBERT as an open-source implementation for the deep learning framework TensorFlow. There are four model sizes: Base, Large, Xlarge, and Xxlarge. At the end of December 2019, Google released version 2 of the models along with Chinese-language models. At the beginning of January, the version 2 TensorFlow Hub modules were updated so that they can also be used with TensorFlow 1.15.
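As a minimal sketch of how one of these TensorFlow Hub releases might be loaded under TensorFlow 1.15: the module URL below and the “tokens” signature (three int32 tensors, with pooled_output and sequence_output results) follow the convention of Google’s BERT Hub modules and are assumptions to verify against the ALBERT repository’s README:

```python
import tensorflow as tf        # written against the TF 1.x graph API
import tensorflow_hub as hub

# Assumed module path; check the ALBERT repository for the current URL.
ALBERT_URL = "https://tfhub.dev/google/albert_base/2"

albert = hub.Module(ALBERT_URL, trainable=False)

# Inputs in the BERT-module convention: [batch_size, seq_length] int32 tensors.
input_ids = tf.placeholder(tf.int32, [None, 128], name="input_ids")
input_mask = tf.placeholder(tf.int32, [None, 128], name="input_mask")
segment_ids = tf.placeholder(tf.int32, [None, 128], name="segment_ids")

outputs = albert(
    inputs=dict(input_ids=input_ids,
                input_mask=input_mask,
                segment_ids=segment_ids),
    signature="tokens",
    as_dict=True,
)
pooled = outputs["pooled_output"]      # [batch, hidden]: sentence-level vector
sequence = outputs["sequence_output"]  # [batch, seq_length, hidden]: per token

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    # sess.run(pooled, feed_dict={...}) with tokenized text would yield embeddings.
```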

ALBERT
https://github.com/google-research/ALBERT