Text Classification Using Neural Network Language Model (NNLM) and BERT: An Empirical Comparison
Document Type
Conference Proceeding
Publication Date
1-1-2022
Publication Title
Lecture Notes in Networks and Systems
Publisher
Springer
Publisher Location
New York, NY
Volume
296
First Page Number
175
Last Page Number
189
Abstract
Text classification is one of the most widely studied applications of Natural Language Processing. Automatic classification can reduce the cost of manual effort while improving the accuracy of the task. With the many advances in language modeling techniques over the last two decades, a number of word embedding models have been proposed. In this study, we discuss two of the most recent models, the Neural Network Language Model (NNLM) and BERT, and present an empirical comparison of them on the task of text classification.
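To illustrate the kind of pipeline such a comparison involves, below is a minimal sketch (not the authors' code) of a text classifier built on a pretrained NNLM sentence embedding from TensorFlow Hub; the module handle nnlm-en-dim50, the network shape, and the toy data are assumptions for illustration. The BERT arm of such a comparison would swap a BERT encoder in place of the embedding layer.

```python
# Minimal sketch: text classification on top of a pretrained NNLM
# sentence embedding. The TF Hub handle, layer sizes, and toy data
# below are illustrative assumptions, not the paper's setup.
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained NNLM embedding: maps raw strings to 50-d vectors.
embed = hub.KerasLayer(
    "https://tfhub.dev/google/nnlm-en-dim50/2",
    input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    embed,                                           # string -> 50-d vector
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary label
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Toy training data; a real comparison would use a labeled corpus.
texts = tf.constant(["great product", "terrible service",
                     "loved it", "would not recommend"])
labels = tf.constant([1, 0, 1, 0])
model.fit(texts, labels, epochs=5, verbose=0)
print(model.predict(tf.constant(["really enjoyable"])))
```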
Keywords
BERT; Language model; Natural Language Processing; NLP; Text classification; Transformers; Word embedding
Disciplines
Other Computer Sciences | Programming Languages and Compilers
Repository Citation
Esmaeilzadeh, A., & Taghva, K. (2022). Text Classification Using Neural Network Language Model (NNLM) and BERT: An Empirical Comparison. Lecture Notes in Networks and Systems, 296, 175-189. New York, NY: Springer. http://dx.doi.org/10.1007/978-3-030-82199-9_12