Dealing with a small amount of data : developing Finnish sentiment analysis
Year of publication
2022
Authors
Toivanen, Ida; Lindroos, Jari; Räsänen, Venla; Taipale, Sakari
Abstract
Sentiment analysis has become increasingly prominent among natural language processing tasks. It entails the extraction of opinions, emotions, and sentiments from text. In this paper, we aim to develop and test language models for the low-resource language Finnish. We use the term "low-resource" to describe a language lacking available resources for language modeling, especially annotated data. We investigate four models: the state-of-the-art FinBERT [1] and the competitive alternative BERT models Finnish ConvBERT [2], Finnish Electra [3], and Finnish RoBERTa [4]. This comparative framework of multiple BERT variants complements the additional methods we implement to counteract the lack of annotated data. Basing our sentiment analysis on partly annotated survey data collected from eldercare workers, we supplement our training data with additional data sources. Beyond the non-annotated section of our survey data, we focus on additional data (an external in-domain dataset and an open-source news corpus) to determine how training data can be expanded with methods such as pretraining (masked language modeling) and pseudo-labeling. Pretraining and pseudo-labeling, often classified as semi-supervised learning methods, make it possible to utilize unlabeled data either by initializing the model or by assigning seemingly real labels to unlabeled samples prior to actual model training. Our results suggest that, of the single BERT models, FinBERT performs best for our use case. Moreover, applying ensemble learning to combine multiple models further improves performance and predictive power, outperforming a single FinBERT model. Both pseudo-labeling and ensemble learning proved valuable for extending training data for low-resource languages such as Finnish. However, with pseudo-labeling, proper regularization methods should be considered to prevent confirmation bias from degrading model performance.
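
The pseudo-labeling step described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python example assuming the Hugging Face transformers library and the public TurkuNLP FinBERT checkpoint ("TurkuNLP/bert-base-finnish-cased-v1"); the three-class label scheme and the 0.9 confidence threshold are illustrative assumptions, not details taken from the paper.

    # Minimal pseudo-labeling sketch (illustrative, not the authors' exact pipeline).
    # Assumes: pip install torch transformers. The checkpoint name is the public
    # TurkuNLP FinBERT release; num_labels and THRESHOLD are assumptions.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_NAME = "TurkuNLP/bert-base-finnish-cased-v1"  # FinBERT [1]
    THRESHOLD = 0.9  # assumed confidence cutoff, guarding against confirmation bias

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # num_labels=3 assumes a negative/neutral/positive sentiment scheme. In
    # practice the classification head would first be fine-tuned on the
    # annotated portion of the survey data before pseudo-labeling.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)
    model.eval()

    def pseudo_label(unlabeled_texts):
        """Assign 'seemingly real' labels to unlabeled samples, keeping only
        predictions the current model is confident about."""
        accepted = []
        with torch.no_grad():
            for text in unlabeled_texts:
                inputs = tokenizer(text, truncation=True, return_tensors="pt")
                probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
                confidence, label = probs.max(dim=-1)
                if confidence.item() >= THRESHOLD:
                    accepted.append((text, int(label)))
        return accepted  # merged into the labeled training set for the next round

In the same spirit, the ensemble step could be realized by averaging the softmax outputs of several such classifiers (e.g., FinBERT, ConvBERT, Electra, RoBERTa) before taking the argmax; the confidence threshold above is one simple form of the regularization the abstract recommends against confirmation bias.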
Publication type
Publication form
Article
Type of parent publication
Conference
Article type
Other article
Target audience
Scientific
Peer-reviewed
Peer-reviewed
Ministry of Education and Culture publication type
A4 Article in a conference publication
Publication channel information
Name of parent publication
2022 BESC : 9th International Conference on Behavioural and Social Computing
Publisher
ISBN
Publication forum
Publication forum level
1
Open access
Open access in the publisher's service
No
Self-archived
Yes
Other information
Fields of science
Computer and information sciences; Languages
Keywords
Country of publication
United States (USA)
Internationality of the publisher
International
Language
English
International co-publication
No
Co-publication with a company
No
DOI
10.1109/besc57393.2022.9995536
The publication is included in the Ministry of Education and Culture's data collection
Yes