Cross-Lingual Sentiment Analysis using modified BRAE
1. Komachi Lab
M1 Ryosuke Miyazaki
2015/10/16
Cross-Lingual Sentiment Analysis using modified BRAE
Sarthak Jain and Shashank Batra
EMNLP 2015
EMNLP 2015 reading group
※ All figures in these slides are taken from the original paper
2. Abstract
✤ To perform Cross-Lingual Sentiment Analysis,
- they use a parallel corpus that pairs a
resource-rich language (English) with a resource-poor one (Hindi)
✤ They create a new Movie Reviews Dataset in Hindi
for evaluation
✤ Their model significantly outperforms the state of the art,
especially when labeled data is scarce
4. BRAE Model
Bilingually Constrained Recursive Auto-encoder
First, consider a standard Recursive Auto-encoder for each language separately:
it composes two child vectors into a parent vector, then reconstructs the children from the parent
Minimize the reconstruction error (Euclidean distance)
c: child vector
y, p: parent vector
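The compose/reconstruct step above can be sketched as a single RAE node. This is a minimal numpy sketch under my own assumptions: the weight names (`W_enc`, `W_dec`), the tanh nonlinearity, and the toy dimension are illustrative, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (the paper uses 80)

# Encoder/decoder weights (hypothetical names for the RAE's W, b)
W_enc = rng.normal(scale=0.1, size=(d, 2 * d))
b_enc = np.zeros(d)
W_dec = rng.normal(scale=0.1, size=(2 * d, d))
b_dec = np.zeros(2 * d)

def rae_node(c1, c2):
    """Compose two child vectors into a parent, then reconstruct the children."""
    p = np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)  # parent vector
    y = np.tanh(W_dec @ p + b_dec)                         # reconstructed children
    c1_hat, c2_hat = y[:d], y[d:]
    # Euclidean reconstruction error, minimized during training
    e_rec = 0.5 * (np.sum((c1 - c1_hat) ** 2) + np.sum((c2 - c2_hat) ** 2))
    return p, e_rec

c1, c2 = rng.normal(size=d), rng.normal(size=d)
p, e = rae_node(c1, c2)
```

Applying the node bottom-up over a phrase yields a single phrase vector; training would backpropagate `e_rec` through the tree.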
5. BRAE Model
Loss Function
They also produce a representation from the other language
Assumption
A phrase and its correct translation should
share the same semantic meaning
Loss function for the source language
Transformation loss
Likewise, they define one for the target language
Objective function
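The equations on this slide are figures from the original paper. As a sketch, the objective in the original BRAE formulation (Zhang et al., 2014), which this model builds on, has the following shape; the exact modified form used here may differ:

```latex
% Transformation (semantic) loss: the source phrase vector should be
% recoverable from its translation's vector via a learned transform
E_{sem}(s \mid t) = \tfrac{1}{2} \left\lVert p_s - f\!\left(W^{ts} p_t + b^{ts}\right) \right\rVert^2
% and likewise E_{sem}(t \mid s) for the target side.
% Joint objective per phrase pair, weighted by \alpha (0.2 in these experiments):
E(s, t; \theta) = \alpha \, E_{rec}(s, t; \theta) + (1 - \alpha) \, E_{sem}(s, t; \theta)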
6. Training (Unsupervised)
✤ Word embeddings are pre-trained by Word2Vec
✤ 1st: Pre-train ps and pt separately with monolingual RAEs
✤ 2nd: Fix pt and train ps on BRAE
- then vice versa (fix ps and train pt)
- Set ps = p’s, pt = p’t upon reaching a local minimum
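The alternation schedule above can be sketched as follows. This only illustrates the fix-one-side-train-the-other control flow: the transformation is omitted and the gradient steps are stand-ins, not the paper's actual BRAE updates.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Pretrained phrase vectors from the monolingual RAEs (random stand-ins)
p_s = rng.normal(size=d)  # source-side phrase vector
p_t = rng.normal(size=d)  # target-side phrase vector

lr = 0.1
for epoch in range(50):
    # Step 1: fix p_t, move p_s toward it (transform omitted for brevity)
    p_s = p_s - lr * (p_s - p_t)
    # Step 2: fix the updated p_s, move p_t toward it
    p_t = p_t - lr * (p_t - p_s)

# After alternating to a (local) minimum, the two sides agree: p_s ≈ p_t
```

The point is only the schedule: each half-step optimizes one language's parameters while the other language's representation is held fixed.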
7. Training (Supervised)
✤ Modification for Classifying Sentiment
✤ They add softmax and cross-entropy error functions
only on the source-language (resource-rich) side
✤ In this phase, a penalty term is included in the reconstruction error
✤ The transformation weights (θts, θst) are not updated in this phase
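Attaching a sentiment head to a source-language phrase vector might look like the following minimal sketch; `W_label`, the toy sizes, and the one-hot gold label are my assumptions for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(pred, gold_onehot):
    return -np.sum(gold_onehot * np.log(pred + 1e-12))

rng = np.random.default_rng(2)
d, n_classes = 4, 2          # toy sizes; the paper uses 80-dim vectors
W_label = rng.normal(scale=0.1, size=(n_classes, d))  # hypothetical label weights

p_s = rng.normal(size=d)           # source-language phrase vector
pred = softmax(W_label @ p_s)      # predicted sentiment distribution
gold = np.array([1.0, 0.0])        # gold label (positive), one-hot
loss = cross_entropy(pred, gold)   # added only on the resource-rich side
```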
8. Training (Supervised)
✤ 1st: only update the resource-rich-related parameters
ce: cross entropy
✤ 2nd: only update the resource-poor-related parameters
- Since the gold labels are associated only with the resource-rich language,
they use the transformation to obtain a sentiment distribution
✤ Predict the overall sentiment associated with the resource-poor language
- concatenate pt and p’s, then
train a softmax regression with a weight matrix
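The cross-lingual prediction step (concatenate pt with the projected p’s, then apply softmax regression) might look like this sketch; the transform `W_ts` and all weights are untrained stand-ins, and the names are mine.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(3)
d, n_classes = 4, 2

p_t = rng.normal(size=d)                    # resource-poor phrase vector
W_ts = rng.normal(scale=0.1, size=(d, d))   # learned transform t -> s (stand-in)
p_s_prime = np.tanh(W_ts @ p_t)             # projected source-side view p's

features = np.concatenate([p_t, p_s_prime])           # concat pt and p's
W_cls = rng.normal(scale=0.1, size=(n_classes, 2 * d))  # softmax-regression weights
sentiment = softmax(W_cls @ features)                 # overall sentiment distribution
```

Concatenating both views lets the classifier use the target-language vector and its source-side projection jointly.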
10. Experimental Settings
✤ HindMonoCorp 0.5 (44.49M sentences) and
English Gigaword Corpus for word embeddings
✤ Bilingual sentence-aligned data from HindEnCorp
(273.9k sentence pairs)
For the unsupervised phase
For the supervised phase (MOSES is used to obtain bilingual phrase pairs)
✤ IMDB11 dataset (25000 pos, 25000 neg)
✤ Rotten Tomatoes Review dataset (4 documents, {0, 1, 2, 3})
11. Experimental Settings
✤ Rating-Based Hindi Movie Review Dataset (2945 movie reviews, ratings {1, 2, 3, 4})
- they created this new dataset for evaluation
✤ Standard Movie Reviews Dataset (125 positive, 125 negative)
Evaluation datasets
✤ learning rate: 0.05
✤ word vector dimension: 80
✤ joint error of BRAE (α): 0.2
✤ λL: 0.001
✤ λBRAE: 0.0001
Tuned by grid search with cross-validation
✤ κ: 0.2, η: 0.35
✤ λp: 0.01
✤ λS: 0.1
✤ λT: 0.04
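For reference, the tuned values above can be collected into one config mapping; the key names are mine (the deck uses Greek symbols).

```python
# Hyperparameters as listed on the slide; key names are illustrative.
config = {
    "learning_rate": 0.05,
    "word_vector_dim": 80,
    "alpha_brae_joint_error": 0.2,
    "lambda_L": 0.001,
    "lambda_BRAE": 0.0001,
    "kappa": 0.2,
    "eta": 0.35,
    "lambda_p": 0.01,
    "lambda_S": 0.1,
    "lambda_T": 0.04,
}
```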
12. Results
✤ BRAE-U: neither includes the penalty term nor fixes the transformation weights
✤ BRAE-P: includes only the penalty term
✤ BRAE-F: includes both
[Tables: monolingual vs. cross-lingual accuracy for each setting; confusion matrix for BRAE-F]
13. Results
[Figure: accuracy vs. amount of labeled training data used]
✤ Their model achieves the best performance even when trained
with 50% less labeled data than the other methods
[Figure: accuracy vs. amount of unlabeled training data used]
14. Analysis
✤ Since movement in the semantic vector space is constrained,
their model has an advantage on unknown words
“Her acting of a schizophrenic mother made our hearts weep”
the baseline classifies it as negative due to “weep”, but their model correctly predicts positive
Example:
✤ Their model was able to correctly infer word sense for polysemous words
15. Error Analysis
✤ conflicting sentiments about two different aspects of the same object
✤ presence of subtle contextual references
Difficult situations
✤ “His poor acting generally destroys a movie, but this time it didn’t”
- the correct label is positive, but the predicted rating is 2
✤ “This movie made his last one look good”
- incorrectly predicted as rating 3
Example of the latter case