Korean meets NLTK and Gensim (한국어와 NLTK, Gensim의 만남)
PyCon Korea 2015
2015-08-29
Eunjeong (Lucy) Park
1 / 69
KoNLPy: Korean text preprocessing
NLTK: data exploration
Gensim: document embedding
This talk: representing text as bag-of-words and as document embeddings, with KoNLPy, NLTK, and Gensim.
KoNLPy: http://konlpy.org
NLTK: http://nltk.org
Gensim: http://radimrehurek.com/gensim
2 / 69
Eunjeong (Lucy) Park (a.k.a. lucypark, echojuliett, e9t)
CAVEAT: one of the people behind KoNLPy
Just another yak shaver...
11:49 <@sanxiyn> Once again, the mysterious world of yak shaving
11:51 <@sanxiyn> You all know what yak shaving is, right?
11:51 <디토군> Just looked it up
11:51 <@mana> (quietly waiting for the explanation)
11:51 <@sanxiyn> You're trying to cut down a tree
11:52 <@sanxiyn> and while you're swinging the axe
11:52 <@sanxiyn> you figure a sharper axe would cut the tree more easily
11:52 <@sanxiyn> so you start honing the axe blade
11:52 <@sanxiyn> then you figure a better whetstone would hone the blade faster
11:52 <@sanxiyn> so you ask around for where to find a good whetstone
11:52 <@mana> …
11:52 <&홍민희> That is exactly how I behave
11:52 <@sanxiyn> someone says the world's best whetstone is found somewhere far away
11:52 <@sanxiyn> so you set out to ride a yak there
11:52 <@mana> I couldn't type because that's what I always do
11:52 <@sanxiyn> and you end up shaving the yak's hair…
11:52 <@sanxiyn> etc.
3 / 69
What this talk covers (and doesn't): a hands-on tour of KoNLPy, NLTK, and Gensim.
4 / 69
1997
20 ...
5 / 69
IBM .
6 / 69
-- Bengio et al., Deep Learning, Ch. 1, book in preparation for MIT Press, 2015.
7 / 69
" "
Representations of text
8 / 69
9 / 69
10 / 69
Binary representations: a document as a set of words, a word as a one-hot vector (a.k.a. 1-of-V)
$$
w_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad
w_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad
w_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix},\quad \dots,\quad
w_{|V|} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}
$$

Here $w_i$ is the vector for the $i$-th word, and $|V|$ is the size of the vocabulary.
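As a quick illustration (my addition; the tiny vocabulary is made up), one-hot vectors are easy to build by hand:

import numpy as np

vocab = ['movie', 'actor', 'plot', 'music']  # hypothetical vocabulary
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    # a |V|-dimensional vector with a single 1 at the word's index
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

print(one_hot('actor'))
# => [0. 1. 0. 0.]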
11 / 69
One-hot vectors carry no notion of similarity:
ex: any two distinct words have dot product 0, so related and unrelated word pairs look exactly alike.

$$w_i^\top w_j = 0 \quad \text{for all } i \neq j$$

* One-hot and BOW are discrete "local representations"; contrast with distributed representations. 12 / 69
Counting-based representations: bag of words (BOW)
word presence/absence (a.k.a. "term existence")
word counts (a.k.a. term frequency (TF))
weighted counts, ex: term frequency–inverse document frequency (TF-IDF)

$$
d_1^{\text{binary}} = \begin{bmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad
d_1^{\text{tf}} = \begin{bmatrix} 3 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
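For reference (my addition; this is the standard textbook weighting, not taken from the slides), TF-IDF scales a term's count by how rare the term is across the corpus:

$$\text{tfidf}(t, d) = \text{tf}(t, d) \times \log \frac{N}{\text{df}(t)}$$

where $\text{tf}(t, d)$ is the count of term $t$ in document $d$, $N$ is the number of documents, and $\text{df}(t)$ is the number of documents containing $t$.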
13 / 69
But these vectors are huge and sparse, and high dimensionality hurts: the curse of dimensionality.*
* Figure: Vincent Spruyt, The Curse of Dimensionality in classification, 2014. 14 / 69
" ?"
(WordNet) X is-a Y (hypernym)
(DAG; directed acyclic graph) . ( : .)
from nltk.corpus import wordnet as wn
wn.synsets('computer') # "computer"
#=> [Synset('computer.n.01'), Synset('calculator.n.01')]
: ( ) /
15 / 69
Problems with thesauri:
1. They miss new words.
from nltk.corpus import wordnet as wn
wn.synsets('nginx')
#=> []
2. They miss slang and neologisms.
wn.synsets('yolo')  # yolo: you only live once
#=> []
3. NLTK's WordNet doesn't cover Korean... (yet!)
wn.synsets('컴퓨터', lang='kor')  # any Korean word raises the same error
#=> WordNetError: Language is not supported.
16 / 69
17 / 69
18 / 69
You shall know a word by the company it keeps.
-- J.R. Firth (1957)
19 / 69
Quiz 1: Guess the word that fills the blank in each sentence.
20 / 69
Quiz 2: What does "___" mean?
21 / 69
22 / 69
23 / 69
A word can be represented by the contexts it appears in.
context := the words inside a window around the target word
Window sizes of 1-8 words on each side appear in the literature, up to 3-17 for more semantic relations.*
ex: Judging by contexts, is "PyCon" closer to "Python" or to "R"?
* Jurafsky, Martin, Speech and Language Processing, 3rd Ed., Ch. 19.1, book in preparation, 2015. 24 / 69
"Co-occurrence"
25 / 69
Co-occurrence counts

documents = [[" ", " ", " ", " "],
             [" ", "R", " ", " "]]

1. Term-document matrix (documents as columns): simple to compute.

x_td = [[1, 1],  #
        [1, 0],  #
        [0, 1],  # R
        [1, 1],  #
        [1, 1]]  #

2. Term-term matrix (window size == 5): captures syntactic and semantic relations.

x_tt = [[0, 1, 1, 2, 0],  #
        [1, 0, 0, 1, 1],  #
        [1, 0, 0, 1, 1],  # R
        [2, 1, 1, 0, 2],  #
        [0, 1, 1, 2, 0]]  #

$X_{td} \in \mathbb{R}^{|V| \times M}$, where $M$ is the number of documents; $X_{tt} \in \mathbb{R}^{|V| \times |V|}$.
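A small sketch (my addition, with made-up toy sentences) of how such a term-term matrix can be counted with a symmetric window:

from collections import defaultdict

def cooccurrence(documents, window=2):
    # counts[w1][w2] = how often w2 appears within `window` words of w1
    counts = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        for i, word in enumerate(doc):
            for j in range(max(0, i - window), min(len(doc), i + window + 1)):
                if j != i:
                    counts[word][doc[j]] += 1
    return counts

toy_docs = [['I', 'love', 'Python'], ['I', 'use', 'R']]
print(dict(cooccurrence(toy_docs)['I']))
# => {'love': 1, 'Python': 1, 'use': 1, 'R': 1}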
26 / 69
Co-occurrence matrix
Rows of a co-occurrence matrix can be compared with, e.g., cosine similarity.
import math

def dot_product(v1, v2):
    return sum(map(lambda x: x[0] * x[1], zip(v1, v2)))

def cosine_measure(v1, v2):
    prod = dot_product(v1, v2)
    len1 = math.sqrt(dot_product(v1, v1))
    len2 = math.sqrt(dot_product(v2, v2))
    return prod / (len1 * len2)
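For example, applied to the term-term matrix x_tt from the previous slide (this usage example is my addition):

print(cosine_measure(x_tt[0], x_tt[1]))
# => 0.471...  (= 2 / sqrt(18))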
Problems:
raw counts are heavily skewed (a few words dominate)
frequent words are not discriminative
Can we do better?
27 / 69
Neural embeddings
Idea: learn the representation itself, with a neural network!
Train a language model: predict a word from its neighbors.
ex: after four given words, which token comes next: "!", "." or "?"
Neural network language model (NNLM)*
* Bengio, Ducharme, Vincent, A Neural Probabilistic Language Model, {NIPS, 2000; JMLR, 2003}. 28 / 69
word2vec*
T. Mikolov's word2vec made neural embeddings fast and popular.
Dimensionality reduction! (n dimensions -> m dimensions, with m << n)
* Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their
Compositionality. In Proceedings of NIPS, 2013. 29 / 69
Example: word2vec applied to Korean text*
* , , , (1946-2015) , 49 2 , 2015. 30 / 69
Can we apply neural embeddings to documents, not just words?
31 / 69
doc2vec*
Idea: embed whole documents (not just words) as vectors!
Document vectors are trained jointly with word vectors.
The resulting dimensionality is far smaller than |V|.
* Le, Mikolov, Distributed Representations of Sentences and Documents, ICML, 2014. (Called doc2vec in Gensim, paragraph vectors in the paper.) 32 / 69
Example: term frequency -> each document becomes a |V|-dimensional count vector.

documents = [
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' ']
]
words = set(sum(documents, []))
print(words)
# => {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}

def term_frequency(document):
    # do something
    return vector

vectors = [term_frequency(doc) for doc in documents]
print(vectors)
# => {
#     'S1': [1, 1, 0, 1, 0, 0, 1, 1],
#     'S2': [1, 1, 0, 1, 0, 0, 1, 1],
#     'S3': [1, 0, 0, 1, 1, 1, 1, 0],
#     'S4': [1, 1, 0, 1, 1, 1, 0, 0],
#     'S5': [1, 0, 1, 1, 1, 0, 1, 0],
#     'S6': [1, 1, 1, 1, 1, 0, 0, 0]
# }
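A minimal way to fill in the "do something" above (my assumption of the intended body): fix a word order, then count each vocabulary word.

word_list = sorted(words)  # fix an order for the |V| dimensions

def term_frequency(document):
    # one count per vocabulary word, in the fixed order
    return [document.count(word) for word in word_list]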
33 / 69
Example: doc2vec -> each document becomes a dense, low-dimensional vector (no longer |V| dimensions).

documents = [
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' '],
    [' ', ' ', ' ', ' ', ' ']
]
words = set(sum(documents, []))
print(words)
# => {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}

def doc2vec(document):
    # do something else (use Gensim for this!)
    return vector

vectors = [doc2vec(doc) for doc in documents]
print(vectors)
# => {
#     'S1': [-0.01599941, 0.02135301, 0.0927715],
#     'S2': [0.02333261, 0.00211833, 0.00254255],
#     'S3': [-0.00117272, -0.02043504, 0.00593186],
#     'S4': [0.0089237, -0.00397806, -0.13199195],
#     'S5': [0.02555059, 0.01561624, 0.03603476],
#     'S6': [-0.02114407, -0.01552016, 0.01289922]
# }
34 / 69
To summarize:

Sparse, long vectors                          Dense, short vectors
one-hot vectors                               word2vec
bag-of-words (term existence, TF, TF-IDF)     doc2vec
35 / 69
Sentiment analysis for Korean movie reviews
36 / 69
37 / 69
38 / 69
Naver sentiment movie corpus v1.0
Modeled after Maas et al., 2011
Specs:
Reviews of at most 140 characters (Naver Movies "one-line reviews")
200K reviews in total, sampled from a pool of about 640K
ratings_train.txt: 150K reviews, ratings_test.txt: 50K reviews
Balanced positive/negative labels (i.e., random guess yields 50% accuracy)
Size: 19MB
URL: http://github.com/e9t/nsmc/
39 / 69
README coming soon
40 / 69
$ cat ratings_train.txt | head -n 25
id document label
9976970 .. 0
3819312 ... .... 1
10265843 0
9045019 .. .. 0
6483659 !
5403919 3 1 8 . ... . 0
7797314 . 0
9443947 ..
7156791 1
5912145 ? .. ?
9008700 . ♥ 1
10217543 90 !! ~
5957425 0
8628627 . . .
9864035
6852435 1
9143163 .
4891476 0
7465483 !!♥
3989148 , . . 1
4581211 . 1
2718894 1
9705777 . ....
471131 . 1
41 / 69
So, how do we classify the reviews?
1. Preprocess the data with KoNLPy
2. Explore the data with NLTK
3. Classify with bag-of-words (term existence) features
4. Classify with Gensim's doc2vec
42 / 69
1. Data preprocessing (feat. KoNLPy)
def read_data(filename):
    with open(filename, 'r') as f:
        data = [line.split('\t') for line in f.read().splitlines()]
        data = data[1:]  # remove the header row
    return data

train_data = read_data('ratings_train.txt')
test_data = read_data('ratings_test.txt')

NOTE: If you prefer pandas.read_csv(), set the quoting parameter to csv.QUOTE_NONE (3), because some reviews contain stray quote characters (see the sketch below).
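A sketch of that pandas route (my addition; the column names match the file header shown earlier):

import csv
import pandas as pd

# quoting=csv.QUOTE_NONE stops stray quote characters inside reviews from breaking the parse
train_df = pd.read_csv('ratings_train.txt', sep='\t', quoting=csv.QUOTE_NONE)
print(list(train_df.columns))
# => ['id', 'document', 'label']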
43 / 69
1. Data preprocessing (feat. KoNLPy)
# row, column
print(len(train_data)) # nrows: 150000
print(len(train_data[0])) # ncols: 3
print(len(test_data)) # nrows: 50000
print(len(test_data[0])) # ncols: 3
44 / 69
1. Data preprocessing (feat. KoNLPy)
from konlpy.tag import Twitter
pos_tagger = Twitter()
def tokenize(doc):
    # norm and stem are optional
    return ['/'.join(t) for t in pos_tagger.pos(doc, norm=True, stem=True)]
train_docs = [(tokenize(row[1]), row[2]) for row in train_data]
test_docs = [(tokenize(row[1]), row[2]) for row in test_data]
# inspect a tokenized sample
from pprint import pprint
pprint(train_docs[0])
# => [([' /Exclamation',
# ' /Noun',
# '../Punctuation',
# ' /Noun',
# ' /Noun',
# ' /Verb',
# ' /Noun'],
# '0')]
45 / 69
1. Data preprocessing (feat. KoNLPy)
" ?"
, x
*
" (POS) ?"
ex: ' /Josa' vs ' /Noun'
* https://github.com/karpathy/char-rnn 46 / 69
2. Data exploration (feat. NLTK)
Look at the data before modeling!
Collect all tokens in the training data:
tokens = [t for d in train_docs for t in d[0]]
print(len(tokens))
# => 2194536
47 / 69
2. Data exploration (feat. NLTK)
nltk.Text() makes exploration convenient! (And exploration matters.)
It wraps a list of tokens with utilities for counting, plotting, and browsing a corpus.
import nltk
text = nltk.Text(tokens, name='NMSC')
print(text)
# => <Text: NMSC>
48 / 69
2. Data exploration (feat. NLTK)
Count tokens
print(len(text.tokens)) # returns number of tokens
# => 2194536
print(len(set(text.tokens))) # returns number of unique tokens
# => 48765
pprint(text.vocab().most_common(10)) # returns frequency distribution
# => [('./Punctuation', 68630),
# (' /Noun', 51365),
# (' /Verb', 50281),
# (' /Josa', 39123),
# (' /Verb', 34764),
# (' /Josa', 30480),
# ('../Punctuation', 29055),
# (' /Josa', 27108),
# (' /Josa', 26696),
# (' /Josa', 23481)]
49 / 69
2. Data exploration (feat. NLTK)
Plot tokens by frequency
text.plot(50) # Plot sorted frequency of top 50 tokens
...but the Korean labels all come out broken;;
50 / 69
2. Data exploration (feat. NLTK)
Fix the fonts, and the Korean shows up!
Troubleshooting: For those who see rectangles instead of letters in the saved plot file,
include the following configurations before drawing the plot:
Some example fonts:
Mac OS: /Library/Fonts/AppleGothic.ttf
from matplotlib import font_manager, rc
font_fname = 'c:/windows/fonts/gulim.ttc' # A font of your choice
font_name = font_manager.FontProperties(fname=font_fname).get_name()
rc('font', family=font_name)
51 / 69
2. Data exploration (feat. NLTK)
52 / 69
2. Data exploration (feat. NLTK)
Collocations: words that frequently occur together (ex: "text" + "mining")

text.collocations()
# => /Determiner /Noun; /Suffix /Josa; /Determiner /Noun; /Noun
# /Verb; /Noun /Josa; 10/Number /Noun; /Noun /Suffix; /Noun
# /Adjective; /Noun /Josa; /Noun /Josa; /Noun /Josa; /Suffix
# /Josa; /Noun /Noun; /Noun /Josa; /Suffix /Josa; /Noun
# /Josa; /Noun /Josa; /Suffix /Josa; /Noun /Suffix; /Noun
# /Josa
NOTE: nltk.Text() has many more features; check the source code and API docs.
53 / 69
3. Sentiment classification with term existence*
Use the presence or absence of each term as a feature.

# use only the top 2000 most frequent terms
# WARNING: the dict-of-features approach below is not time/memory efficient
selected_words = [f[0] for f in text.vocab().most_common(2000)]

def term_exists(doc):
    return {'exists({})'.format(word): (word in set(doc)) for word in selected_words}

# use only part of the training corpus, for speed
train_docs = train_docs[:10000]
train_xy = [(term_exists(d), c) for d, c in train_docs]
test_xy = [(term_exists(d), c) for d, c in test_docs]

Try this (a sketch follows below):
Use more or fewer features; what happens with only the top 50 terms?
What about TF or TF-IDF features?
Try a sparse matrix, or scikit-learn's TfidfVectorizer()!
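A sketch of the TF-IDF route (my addition; scikit-learn handles the sparse matrix for you). Since the documents here are already token lists, an identity analyzer can be passed:

from sklearn.feature_extraction.text import TfidfVectorizer

# the analyzer receives each document (a token list) and returns its tokens as-is
vectorizer = TfidfVectorizer(analyzer=lambda tokens: tokens)
train_tfidf = vectorizer.fit_transform([d for d, c in train_docs])  # sparse matrix
test_tfidf = vectorizer.transform([d for d, c in test_docs])
print(train_tfidf.shape)  # (number of documents, vocabulary size)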
* Natural Language Processing with Python, Ch. 6: Learning to classify text. 54 / 69
3. Sentiment classification with term existence
Now, train a classifier!
There are many to choose from. NLTK provides, among others:
Naive Bayes Classifiers
Decision Tree Classifiers
Maximum Entropy Classifiers
You can also plug in classifiers from scikit-learn.
Here, let's try NLTK's Naive Bayes Classifier!
55 / 69
3. Sentiment classification with term-existance
Naive Bayes classifier
# train and evaluate the classifier
classifier = nltk.NaiveBayesClassifier.train(train_xy)
print(nltk.classify.accuracy(classifier, test_xy))
# => 0.80418
classifier.show_most_informative_features(10)
# => Most Informative Features
# exists( /Noun) = True 1 : 0 = 38.0 : 1.0
# exists( /Noun) = True 0 : 1 = 30.1 : 1.0
# exists(♥/Foreign) = True 1 : 0 = 24.5 : 1.0
# exists( /Noun) = True 0 : 1 = 22.1 : 1.0
# exists( /Noun) = True 0 : 1 = 19.5 : 1.0
# exists( /Noun) = True 0 : 1 = 19.4 : 1.0
# exists( /Noun) = True 1 : 0 = 18.9 : 1.0
# exists( /Noun) = True 0 : 1 = 16.9 : 1.0
# exists( /Noun) = True 1 : 0 = 16.9 : 1.0
# exists( /Noun) = True 1 : 0 = 15.9 : 1.0
The corpus is balanced positive/negative, so baseline accuracy is 0.50.
With only 2000 term-existence features, we reach accuracy 0.80.
Can we do better?
56 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
doc2vec learns document and word vectors jointly.
Gensim ships a doc2vec implementation.*
Author: Radim Rehurek
* The code below was run against Gensim 0.12.0; the API may differ in other versions! 57 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
58 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
First, reshape the documents into the form doc2vec expects.

# gensim's own class is fine too: from gensim.models.doc2vec import TaggedDocument
# but a namedtuple with the same fields is enough:
from collections import namedtuple
TaggedDocument = namedtuple('TaggedDocument', 'words tags')

# 150K training documents
tagged_train_docs = [TaggedDocument(d, [c]) for d, c in train_docs]
tagged_test_docs = [TaggedDocument(d, [c]) for d, c in test_docs]
59 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
from gensim.models import doc2vec

# initialize the model
doc_vectorizer = doc2vec.Doc2Vec(size=300, alpha=0.025, min_alpha=0.025, seed=1234)
doc_vectorizer.build_vocab(tagged_train_docs)

# Train document vectors!
for epoch in range(10):
    doc_vectorizer.train(tagged_train_docs)
    doc_vectorizer.alpha -= 0.002  # decrease the learning rate
    doc_vectorizer.min_alpha = doc_vectorizer.alpha  # fix the learning rate, no decay

# To save the model:
# doc_vectorizer.save('doc2vec.model')

Try this:
Vary the vector size and the number of epochs.
60 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Let's poke at the learned vectors! (1/3)
pprint(doc_vectorizer.most_similar(' /Noun'))
# => [(' /Noun', 0.5669919848442078),
# (' /Noun', 0.5522832274436951),
# (' /Noun', 0.5021427869796753),
# (' /Noun', 0.5000861287117004),
# (' /Noun', 0.4368450343608856),
# (' /Noun', 0.42848479747772217),
# (' /Noun', 0.42714330554008484),
# (' /Noun', 0.41590073704719543),
# (' /Noun', 0.41056352853775024),
# (' /Noun', 0.4052993059158325)]
61 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Let's poke at the learned vectors! (2/3)
pprint(doc_vectorizer.most_similar(' /KoreanParticle'))
# => [(' /KoreanParticle', 0.5768033862113953),
# (' /KoreanParticle', 0.4822016954421997),
# ('!!!!/Punctuation', 0.4395076632499695),
# ('!!!/Punctuation', 0.4077949523925781),
# ('!!/Punctuation', 0.4026390314102173),
# ('~/Punctuation', 0.40038347244262695),
# ('~~/Punctuation', 0.39946430921554565),
# ('!/Punctuation', 0.3899948000907898),
# ('^^/Punctuation', 0.3852730989456177),
# ('~^^/Punctuation', 0.3797937035560608)]
These neighbors look quite good!
62 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Famously, v(KING) – v(MAN) + v(WOMAN) = v(QUEEN).
Do word analogies like that work here too?
63 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Let's poke at the learned vectors! (3/3)
pprint(doc_vectorizer.most_similar(positive=[' /Noun', ' /Noun'], negative=[' /Noun']))
# => [(' /Noun', 0.32974398136138916),
# (' /Noun', 0.305545836687088),
# (' /Noun', 0.2899821400642395),
# (' /Noun', 0.2856029272079468),
# (' /Noun', 0.2840743064880371),
# (' /Noun', 0.28247544169425964),
# (' /Noun', 0.2795347571372986),
# (' /Noun', 0.2794691324234009),
# (' /Noun', 0.27567577362060547),
# (' /Noun', 0.2734225392341614)]
Hmm...? Not so convincing?
64 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Well, a single vector per word can blur several senses together;;
Looking at the actual contexts with a concordance shows what got mixed in.*
text.concordance(' /Noun', lines=10)
# => Displaying 10 of 145 matches:
# Josa /Noun /Josa ,,/Punctuation /Noun /Noun ...../Punctuation /Noun
# /Noun /Noun ../Punctuation /Noun /Noun /Noun /Noun /Noun /Josa /Nou
# nction /Noun /Josa /Adjective /Noun /Verb /Verb /Noun /Josa
# /Noun /Noun /Noun ?/Punctuation /Noun /Noun ./Punctuation /Noun /No
# b /Noun /Noun /KoreanParticle /Noun /Noun /Adjective ./Punctuation
# osa /Noun /Josa /Noun /Josa /Noun /Josa /Noun /Noun /Josa /Nou
# /Noun /Josa /Noun ./Punctuation /Noun ,/Punctuation /Noun /Noun /Suf
# Josa /Noun /Noun /Noun /Noun /Noun /Josa /Noun /Suffix /Josa /
# /Verb ./Punctuation /Adjective /Noun /Noun .../Punctuation /Noun
# tive /Noun /Noun .../Punctuation /Noun /Noun ../Punctuation /Noun /J
* Huang, Socher, Improving Word Representations via Global Context and Multiple Word Prototypes, ACL, 2012. 65 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Finally, extract document vectors and train a classifier on them.
train_x = [doc_vectorizer.infer_vector(doc.words) for doc in tagged_train_docs]
train_y = [doc.tags[0] for doc in tagged_train_docs]
len(train_x)  # same number of training documents as with term existence
# => 150000
len(train_x[0])
# => 300
test_x = [doc_vectorizer.infer_vector(doc.words) for doc in tagged_test_docs]
test_y = [doc.tags[0] for doc in tagged_test_docs]
len(test_x)
# => 50000
len(test_x[0])
# => 300
66 / 69
4. Sentiment classification with doc2vec (feat. Gensim)
Train scikit-learn's LogisticRegression() on the document vectors.
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=1234)
classifier.fit(train_x, train_y)
classifier.score(test_x, test_y)
# => 0.78246000000000004
Not bad! Accuracy is 0.78.
That is about 2 points below the term-existence accuracy, but:
term existence used 2000 sparse features, while doc2vec needs only 300 dense dimensions!
67 / 69
Wrapping up.
We preprocessed Korean text with KoNLPy, explored it with NLTK, and embedded documents with Gensim.
We classified sentiment with term-existence features and with document vectors.
Try applying them to your own tasks!*
* SNUDM Technical Reports, Apr 14, 2015. 68 / 69
Thanks! :D
http://lucypark.kr
@echojuliett
69 / 69
