PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data
Abstract
In natural language processing (NLP), there is a need for more resources in Portuguese, since much of the data used in state-of-the-art research is in other languages. In this paper, we pretrain a T5 model on the BrWac corpus, an extensive collection of web pages in Portuguese, and evaluate its performance against other Portuguese pretrained models and multilingual models on three different tasks. We show that our Portuguese pretrained models perform significantly better than the original T5 models. Moreover, we demonstrate the positive impact of using a Portuguese vocabulary. Our code and models are available at https://github.com/unicamp-dl/PTT5.
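The pretraining mentioned in the abstract uses T5's span-corruption (denoising) objective: random spans of the input are replaced with sentinel tokens, and the model learns to reconstruct them. The sketch below is an illustrative simplification in pure Python, not the authors' code; the `span_corrupt` helper and the fixed span positions are hypothetical, chosen only to show the input/target format.

```python
def span_corrupt(tokens, spans):
    """Replace each (start, end) span with a sentinel token and build the
    matching target sequence, mimicking T5's denoising objective."""
    inp, tgt = [], []
    prev = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[prev:start])  # unmasked tokens pass through
        inp.append(sentinel)            # masked span becomes one sentinel
        tgt.append(sentinel)            # target repeats the sentinel...
        tgt.extend(tokens[start:end])   # ...followed by the hidden tokens
        prev = end
    inp.extend(tokens[prev:])
    tgt.append(f"<extra_id_{len(spans)}>")  # final sentinel closes the target
    return inp, tgt

# Example with a Portuguese sentence and two (arbitrary) masked spans
tokens = "o modelo foi treinado em dados em português".split()
inp, tgt = span_corrupt(tokens, [(1, 2), (5, 7)])
# inp: ['o', '<extra_id_0>', 'foi', 'treinado', 'em', '<extra_id_1>', 'português']
# tgt: ['<extra_id_0>', 'modelo', '<extra_id_1>', 'dados', 'em', '<extra_id_2>']
```

In the actual pipeline the tokens come from a SentencePiece tokenizer (the paper's point being that a vocabulary trained on Portuguese text segments Portuguese words into fewer, more meaningful pieces than T5's original English vocabulary).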