Sundarapandian et al. (Eds) : ACITY, AIAA, CNSA, DPPR, NeCoM, WeST, DMS, P2PTM, VLSI - 2013
pp. 85–94, 2013. © CS & IT-CSCP 2013 DOI : 10.5121/csit.2013.3410
ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING
Pravesh Kumar Singh¹, Mohd Shahid Husain²
¹ M.Tech, Department of Computer Science and Engineering, Integral University, Lucknow, India
erpraveshkumar@gmail.com
² Assistant Professor, Department of Computer Science and Engineering, Integral University, Lucknow, India
siddiquisahil@gmail.com
ABSTRACT
Although opinion mining is still at a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract people's opinions based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering, support vector machines, etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses the various techniques, and the second part makes a detailed appraisal of the major techniques used for feature extraction.
KEYWORDS
Opinion Mining, Feature Extraction, Clustering, Support Vector Machines.
1. INTRODUCTION
People are generally interested in others' opinions, and when it comes to a company's products this becomes even more important. This information-gathering behaviour drives us to collect and understand others' views. Increasing networking capabilities provide access to a surprisingly large multitude of resources that contain opinions, such as blogs, reviews, etc. As a result, businesses now try to analyse these floating opinions for competitive advantage. With the burst of activity in the area of sentiment analysis, opinions are treated as first-class objects, and interest in opinion-mining systems keeps increasing.

A fundamental step in opinion-mining and sentiment-analysis applications is feature extraction. The combined opinion mining procedure is illustrated in Figure 1. Several methods are used for feature extraction, among which the following are the important ones:
1) Naïve Bayes Classifier (NB)
2) Support Vector Machine (SVM)
[Figure 1: Opinion Mining Process. Opinions → Sentence Extraction → Feature Extraction → Sentence Opinion → Sentence Analysis → Overall Review Score Calculation → Results and Reviews]
3) Multi-Layer Perceptron (MLP)
4) Clustering Classifier
In this paper, we categorize the work done on feature extraction and classification in opinion
mining and sentiment analysis. We have also tried to assess the performance, advantages and
disadvantages of the different techniques.
2. DATA SETS
This section provides brief details of the datasets used in our experiments.
2.1 Product Review Dataset
This dataset is taken from the work of Blitzer and has multi-domain characteristics. It consists of product reviews taken from amazon.com belonging to a total of 25 categories such as toys, videos, etc. From five randomly selected domains, 4000 positive and 4000 negative reviews are randomly sampled.
2.2 Movie Review Dataset
The second dataset is taken from the work of Pang & Lee (2004). It contains 1000 positive and 1000 negative processed movie reviews.
Table 1: Results of simple n-gram features (accuracy, %)

                      Movie Reviews   Product Reviews
  Unigram only        64.1            42.91
  Bigram              76.15           69.62
  Trigram             76.1            71.37
  (uni+bi)-gram       77.15           72.94
  (uni+bi+tri)-gram   80.15           78.67
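The n-gram feature sets behind Table 1 can be produced with standard tooling. A minimal sketch with scikit-learn follows; the toy sentences are illustrative assumptions, not the actual corpora used in the experiments:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for the review corpora described above (hypothetical examples).
reviews = [
    "the movie was great and moving",
    "poor plot and terrible acting",
]

# ngram_range=(1, 2) builds the combined (uni+bi)-gram feature set from Table 1.
vectorizer = CountVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(reviews)

print(vectorizer.get_feature_names_out())  # unigrams and bigrams
print(features.toarray())                  # document-term count matrix
```

Setting ngram_range=(1, 3) would give the (uni+bi+tri)-gram variant in the last row of the table.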
3. CLASSIFICATION TECHNIQUES
3.1 Naïve Bayes Classifier
This classifier is based on Bayes' theorem. The theorem gives the conditional probability of an event E1 occurring when E2 has already occurred (the reverse can be calculated in the same way) by the following mathematical statement:
P(E1 | E2) = P(E2 | E1) · P(E1) / P(E2)
This basically helps in deciding the polarity of the data, whereby opinions / reviews / arguments can be classified as positive or negative, facilitated by a collection of positive and negative examples that has already been fed in.

The Naïve Bayes algorithm is implemented to estimate the probability of a piece of data being negative or positive. This probability is calculated by studying positive and negative examples and then computing the frequency of each polarity, which is technically termed learning. This learning is supervised, since labelled examples exist¹. Thus, the conditional probability of a word having a positive or negative meaning is calculated from a plethora of positive and negative examples by computing the frequency of each class.
P(Sentiment | Sentence) = P(Sentiment) · P(Sentence | Sentiment) / P(Sentence)

So,

P(Word | Sentiment) = (Number of occurrences of the word in the class + 1) / (Number of words belonging to the class + Total number of words)
The algorithm, shown in Figure 2, has the following steps (the identifiers are the Romanian names used in the source implementation [1]):

S1: Initialize P(positive) ← num_propozitii(positive) / num_total_propozitii
S2: Initialize P(negative) ← num_propozitii(negative) / num_total_propozitii
S3: Convert sentences into words;
    for each class in {positive, negative}:
        for each word in {phrase}:
            P(word | class) ← (num_aparitii(word | class) + 1) / (num_cuv(class) + num_total_cuvinte)
            P(class) ← P(class) * P(word | class)
Return max {P(positive), P(negative)}
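A minimal runnable sketch of these steps in Python follows. The whitespace tokenizer and the toy training data are assumptions for illustration; they are not from the paper's experiments:

```python
import math
from collections import Counter

# Hypothetical labelled training sentences (stand-ins for the real corpus).
train = [
    ("the product is great", "positive"),
    ("i love this movie", "positive"),
    ("terrible and disappointing", "negative"),
    ("the plot is awful", "negative"),
]

# Class priors and per-class word counts (steps S1 to S3).
class_counts = Counter(label for _, label in train)
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def classify(sentence):
    scores = {}
    for cls in ("positive", "negative"):
        # log P(class); logs avoid underflow when multiplying many probabilities
        score = math.log(class_counts[cls] / len(train))
        total = sum(word_counts[cls].values())
        for word in sentence.split():
            # add-one smoothed P(word | class), as in the formula above
            score += math.log((word_counts[cls][word] + 1) / (total + len(vocab)))
        scores[cls] = score
    return max(scores, key=scores.get)  # returns max{P(pos), P(neg)}

print(classify("great movie"))  # -> positive
```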
3.1.1. Evaluation of Algorithm
The measures used for algorithm evaluation are:
Accuracy
Precision
Recall
Relevance
Contingency table for analysis of the algorithm:

                          Relevant               Irrelevant
  Detected opinions       True Positive (tp)     False Positive (fp)
  Undetected opinions     False Negative (fn)    True Negative (tn)
Now,

Precision = tp / (tp + fp);   Recall = tp / (tp + fn)

Accuracy = (tp + tn) / (tp + tn + fp + fn)

F = (2 · Precision · Recall) / (Precision + Recall)
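These measures translate directly into code. A small sketch follows; the counts passed in the example are made up for illustration:

```python
def evaluate(tp, fp, fn, tn):
    """Compute the four evaluation measures from contingency-table counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure

# Example with hypothetical counts:
print(evaluate(tp=80, fp=20, fn=10, tn=90))
```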
¹ The idea is, in effect, that each and every word of a labelled sentence is treated as already having appeared in the learning set.
[Figure 2: the Naïve Bayes classification process. A classifier is trained on positive and negative sentences from the training set and then applied to review sentences (e.g., book reviews).]
3.1.2. Accuracy
Ion Smeureanu and Cristian Bucur [1] trained the naïve Gauss algorithm on 5000 sentences and obtained an accuracy of 0.7994, where the number of groups (n) is 2.
3.1.3. Advantages of Naïve Bayes Classification Methods
1. The model is easy to interpret.
2. Computation is efficient.
3.1.4. Disadvantage of Naïve Bayes Classification Methods
It assumes the attributes are independent, which may not necessarily be valid.
3.2 Support Vector Machine (SVM)
The basic goal of a support vector machine is to find a decision boundary between two classes that is maximally far away from any point in the training data².

SVM constructs a hyperplane or a set of hyperplanes in a high- or infinite-dimensional space. The hyperplanes act as the decision surface, and the distance from the decision surface to the closest data point determines the margin of the classifier. Because the margin is set by the closest data points, the resulting classifier is robust: a slight error will not cause a misclassification.
For a training dataset D, a set of n points:

D = {(xi, ci) | xi ∈ R^p, ci ∈ {−1, 1}}, i = 1, …, n   .......(1)

where xi is a p-dimensional real vector, find the maximum-margin hyperplane that splits the points having ci = 1 from those having ci = −1. Any hyperplane can be written as the set of points x satisfying:

w · x − b = 0   .......(2)

Finding the maximum-margin hyperplane reduces to finding the pair (w, b) such that the distance between the margin hyperplanes is maximal while still separating the data. These hyperplanes (Figure 3) are described by:

w · x − b = 1   and   w · x − b = −1

The distance between the two hyperplanes is 2/‖w‖, and therefore ‖w‖ needs to be minimized. Equivalently: minimize ‖w‖ over (w, b) subject to ci(w · xi − b) ≥ 1 for every i = 1, …, n.

Using Lagrange multipliers αi, this optimization problem can be expressed as:
[Figure 3: Principle of the support vector machine. A mapping φ takes the input space to a feature space in which a separating hyperplane is found.]
min over (w, b) of max over α of:  (1/2)‖w‖² − Σ_{i=1..n} αi [ci(w · xi − b) − 1]   .......(3)
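In practice this optimization is handed to a library solver. A minimal sketch with scikit-learn follows; the toy points are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D training points with labels ci in {-1, 1} (hypothetical data).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
c = np.array([1, 1, -1, -1])

# A linear SVM solves the margin-maximization problem in equation (3).
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, c)

print(clf.coef_, clf.intercept_)   # the learned (w, b)
print(clf.predict([[1.0, 1.5]]))   # side of the separating hyperplane
```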
3.2.1. Extensions of SVM
To make SVM more robust and more adaptable to real-world problems, several extensions exist, some of which include the following:
1. Soft Margin Classification
For very high-dimensional problems, common in text classification, the data are sometimes linearly separable. But in most cases the optimal solution is the one that classifies most of the data correctly while ignoring some outliers or noisy data. If the training set D cannot be separated cleanly, the solution is to allow a fat decision margin and accept some mistakes.

Mathematically, non-zero slack variables ξi are introduced, which allow xi to not meet the margin requirement at a cost proportional to ξi.
2. Non-linear Classification
The basic linear SVM was given by Vapnik (1963); later, Bernhard Boser, Isabelle Guyon and Vapnik (1992) paved the way for non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The only difference from the kernel trick given by Aizerman is that every dot product is replaced by a non-linear kernel function.
The effectiveness of SVM in this case lies in the selection of the kernel and the soft-margin parameters.
3. Multiclass SVM
Generally, SVM is applicable to two-class tasks, but multiclass problems can be dealt with by multiclass SVM. In this case, labels are assigned to instances drawn from a finite set of elements. The problem is reduced to binary classifiers, built either by distinguishing one label from all the others (one-versus-all) or between every pair of classes (one-versus-one); a sketch follows.
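A minimal sketch of the one-versus-all reduction with scikit-learn (the three-class toy data are an assumption for illustration):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Hypothetical 3-class toy data.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [0.0, 5.0], [0.1, 5.2]])
y = np.array([0, 0, 1, 1, 2, 2])

# One binary SVM per class, each separating that class from all the rest.
clf = OneVsRestClassifier(LinearSVC()).fit(X, y)
print(clf.predict([[4.8, 5.1], [0.0, 4.8]]))  # -> [1 2]
```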
3.2.2. Accuracy
Pang classified Usenet reviews using the accompanying numerical ratings as ground truth. Various learning methods were used, but unigrams gave the best output in a presence-based frequency model run with SVM. The accuracy achieved in the process was 82.9%.
3.2.3. Advantages of Support Vector Machine Method
1. Very good performance on experimental results
2. Low dependency on data set dimensionality.
3.2.4. Disadvantages of Support Vector Machine Method
1. Categorical or missing values need to be pre-processed.
2. Difficult interpretation of resulting model.
² Possibly discounting some points as outliers or noise.
3.3 Multi-Layer Perceptron (MLP)
An MLP is a feed-forward neural network with one or more layers between input and output. Feed-forward means that data flows in one direction, from the input layer to the output layer. This ANN, called a multilayer perceptron, starts with an input layer in which each node or neuron represents a predictor variable. The input neurons are connected to each neuron in the first hidden layer, and the neurons of one hidden layer are in turn connected to the neurons of the next. The output layer is made up of one neuron in the case of binary prediction, or more than one neuron in the case of non-binary prediction. This arrangement makes a streamlined flow of information from the input layer to the output layer.
The MLP technique is quite popular owing to the fact that it can act as a universal function approximator. A "back-propagation" network with at least one hidden layer of many non-linear units can learn any function or relationship between a group of input variables (discrete and/or continuous) and an output variable (discrete and/or continuous). This makes MLP a quite general, flexible and non-linear tool.
For classification, the output layer has as many nodes as there are classes, and the node with the highest value gives the output, i.e., the estimate of the class for the input. In the special case of two classes, there is generally a single node in the output layer, and classification between the two classes is done by applying a cut-off point to the node value.
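A minimal sketch of such a network with scikit-learn follows; the architecture and toy data are illustrative assumptions, not the configuration used in any cited experiment:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy binary task (hypothetical features extracted from review text).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([1, 1, 0, 0])  # 1 = positive review, 0 = negative review

# One hidden layer of non-linear units, trained by back-propagation.
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    max_iter=2000, random_state=0)
mlp.fit(X, y)
print(mlp.predict([[0.85, 0.15]]))  # -> [1]
```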
An advantage of this technique, compared to classical modelling techniques, is that it does not impose any restriction on the starting data (the type of functional relationship between variables), nor does it usually start from specific assumptions (such as the type of distribution the data follow). Another virtue of the technique lies in its capacity to estimate good models even in the presence of noise in the information analysed, as occurs when there are omitted values or outlier values in the distribution of the variables. Hence it is a robust technique for dealing with noisy information; however, this does not mean that the cleaning criteria for the data matrix should be relaxed.
3.3.1. Accuracy
Ludmila I. Kuncheva reported the accuracy of the multilayer perceptron (MLP) on health-care data as 84.25-89.50%.
3.3.2. Advantages of MLP
1) Capable of acting as a universal function approximator.
2) Capability to learn almost any relationship between input and output variables.
3.3.3. Disadvantages of MLP
1) Its flexibility requires sufficient training data, and it needs more execution time than other techniques.
2) It is somewhat considered a complex "black box".
3.4 Clustering Classifier
Identifying the prominent features of humans and objects and recognizing them by type requires object clustering. Clustering is basically an unsupervised learning technique.
The objective of clustering is to determine a fresh or different set of classes or categories; the new groups are of interest in themselves, and their valuation is intrinsic. In this method, data objects or instances are grouped into subsets in such a way that similar instances are grouped together while different instances belong to different groups. Because clustering is an unsupervised learning task, no class values representing an a-priori grouping of the data instances are given (as in the situation of supervised learning). The objects are thereby organized into efficient representations that characterize the population being sampled.
A clustering is denoted as a set of subsets C = C1, …, Ck of S such that

S = ⋃_{i=1..k} Ci   and   Ci ∩ Cj = ∅ for i ≠ j.

Therefore, any object in S belongs to exactly one and only one subset.
For example, consider Figure 4, where the data set has three natural clusters. Now consider some real-life examples illustrating clustering:

Example 1: Group people of similar size together to make small and large shirts.
1. Tailor-made for each person: expensive.
2. One-size-fits-all: does not fit all.

Example 2: In advertising, segment consumers according to their similarities, in order to do targeted advertising.

Example 3: To create a topic hierarchy, take a collection of texts and organize them according to their content similarity.
There are two main kinds of measures used to estimate this relation:
1. Distance measures and
2. Similarity measures
Distance Measures
To determine the similarity or difference between a pair of instances, many clustering approaches use distance measures.

It is convenient to denote the distance between two instances xi and xj as d(xi, xj). A valid distance measure should be symmetric and attain its minimum value (usually zero) for identical vectors.

A distance measure is known as a metric distance measure if it satisfies the following properties:
1. Triangle inequality: d(xi, xk) ≤ d(xi, xj) + d(xj, xk)   ∀ xi, xj, xk ∈ S
2. d(xi, xj) = 0 ⇒ xi = xj   ∀ xi, xj ∈ S
There are variations in distance measures depending upon the attribute in question.
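As an illustration of these properties, a small Python sketch of the Euclidean distance, a standard metric distance measure, follows; the sample points are arbitrary:

```python
import math

def euclidean(x, y):
    """Euclidean distance: symmetric, zero only for identical vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

xi, xj, xk = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)
assert euclidean(xi, xj) == euclidean(xj, xi)                      # symmetry
assert euclidean(xi, xi) == 0.0                                    # identity
assert euclidean(xi, xk) <= euclidean(xi, xj) + euclidean(xj, xk)  # triangle inequality
print(euclidean(xi, xj))  # -> 5.0
```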
3.4.1. Clustering Methods
A number of clustering algorithms are becoming popular. The basic reason for this multitude of clustering methods is that "cluster" is not precisely defined (Estivill-Castro, 2000). As a result, many clustering methods have been developed, each using a different induction principle. Fraley and Raftery (1998) classified clustering methods into two main groups: hierarchical and partitioning methods. Han and Kamber (2001) gave a different categorization: density-based methods, model-based clustering and grid-based methods. A totally different categorization, based on the induction principle, is given by Estivill-Castro (2000).
1. Hierarchical Methods
According to this method clusters are created by recursive partitioning of the instances in either a
top-down or bottom-up way.
2. Partitioning Methods
Here, starting from an initial partition, instances are rearranged by moving them from one cluster to another. The basic assumption is that the number of clusters is set by the user beforehand. To achieve global optimality in partition-based clustering, an exhaustive enumeration of all possible partitions would be required; in practice, iterative algorithms such as k-means are used (a sketch follows this list).
3. Density-based Methods
This method is based on probability. It assumes that the instances belonging to each cluster are drawn from a specific probability distribution, so the entire distribution of the data is supposed to be a mixture of several distributions. The objective of these methods is to identify the clusters and their distribution parameters. Such methods are designed to discover clusters of arbitrary shape, which are not necessarily convex.
4. Grid-based Methods
In grid-based methods, all clustering operations are performed on a grid structure obtained by dividing the available space into a fixed number of cells. The main advantage of this approach is its speed (Han and Kamber, 2001).
5. Soft-computing Methods
This includes techniques like that of neural networks.
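As a concrete example of the partitioning family above, a minimal k-means sketch with scikit-learn follows; the number of clusters and the toy points are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D points forming three natural groups (as in Figure 4).
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.8], [9.0, 1.0], [8.8, 1.2]])

# The user fixes the number of clusters beforehand, as noted above.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment of each point
print(kmeans.cluster_centers_)  # the learned cluster centres
```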
3.4.2. Evaluation Criteria Measures for Clustering Technique
Generally, these criteria are split into two groups named Internal and External.
1. Internal Quality Criteria
These criteria generally measure the compactness of clusters using similarity measures, taking into consideration intra-cluster homogeneity, inter-cluster separability, or a combination of the two. They do not use any external information besides the data itself.
2. External Quality Criteria
These are useful for examining how well the structure of the clusters matches some predefined classification of the objects.
3.4.3. Accuracy
Several different classifiers were used, and the accuracy of the best classifier varied from 65.33% to 99.57%, depending on the data.
3.4.4. Advantages of Clustering Method
The main advantage of this method is that it offers groups that (approximately) satisfy an optimality measure.
3.4.5. Disadvantages of Clustering Method
1. There is no learning set of labelled observations.
2. Number of groups is usually unknown.
3. Implicitly, the user has already chosen the appropriate features and distance measure.
4. CONCLUSION
What people think has always been an important piece of information to gather. The rising accessibility of opinion-rich resources such as online review websites and blogs means one can easily search for and recognize the opinions of others. People can express their ideas and opinions concerning goods and services. These views and thoughts are subjective data that signify someone's opinions, sentiments, emotional state or evaluation. In this paper, we present different methods for data (feature or text) extraction; every method has benefits and limitations, and one can choose among these methods according to the situation for feature and text extraction.
Based on the survey, the accuracy of the different methods on the different data sets using the n-gram feature is shown in Table 2.
Table 2: Accuracy of different methods (%)

                    Movie Reviews            Product Reviews
                    NB     MLP    SVM        NB     MLP    SVM
  N-gram feature    75.50  81.05  81.15      62.50  79.27  79.40
According to our survey, the accuracy of MLP is better than that of the other methods when the n-gram feature is used.

The four methods discussed in the paper are applicable in different areas: clustering, for instance, is applied to movie reviews, while SVM techniques are applied to biological reviews and analysis. Although the field of opinion mining is new, the diverse methods available can be implemented in various programming languages such as PHP and Python, with innumerable applications as the outcome. From a convergent point of view, Naïve Bayes is best suited for textual classification, clustering for consumer services, and SVM for biological reading and interpretation.
ACKNOWLEDGEMENTS
Every good piece of writing requires the help and support of many people for it to be truly good. I take this opportunity to thank all those who extended a helping hand whenever I needed one.

I offer my heartfelt gratitude to Mr. Mohd Shahid Husain, who encouraged, guided and helped me a lot in the project. I extend my thanks to Miss Ratna Singh (fiancée) for her incandescent help in completing this paper.

A vote of thanks to my family for their moral and emotional support. Above all, utmost thanks to the Almighty God for the divine intervention in this academic endeavour.
REFERENCES
[1] Ion Smeureanu, Cristian Bucur, "Applying Supervised Opinion Mining Techniques on Online User Reviews", Informatica Economică, vol. 16, no. 2, 2012.
[2] Bo Pang and Lillian Lee, "Opinion Mining and Sentiment Analysis", Foundations and Trends in Information Retrieval, Vol. 2, Nos. 1-2, 2008.
[3] A. Abbasi, "Affect intensity analysis of dark web forums", in Proceedings of Intelligence and Security Informatics (ISI), pp. 282-288, 2007.
[4] K. Dave, S. Lawrence & D. Pennock, "Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews", Proceedings of the 12th International Conference on World Wide Web, pp. 519-528, 2003.
[5] B. Liu, "Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data", Opinion Mining, Springer, 2007.
[6] B. Pang & L. Lee, "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", Proceedings of the Association for Computational Linguistics (ACL), pp. 115-124, 2005.
[7] Nilesh M. Shelke, Shriniwas Deshpande, Vilas Thakre, "Survey of Techniques for Opinion Mining", International Journal of Computer Applications (0975-8887), Volume 57, No. 13, November 2012.
[8] Nidhi Mishra and C. K. Jha, "Classification of Opinion Mining Techniques", International Journal of Computer Applications 56(13):1-6, October 2012, Published by Foundation of Computer Science, New York, USA.
[9] Oded Z. Maimon, Lior Rokach, "Data Mining and Knowledge Discovery Handbook", Springer, 2005.
[10] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan, "Sentiment classification using machine learning techniques", in Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79-86.
[11] "Towards Enhanced Opinion Classification using NLP Techniques", IJCNLP 2011, pages 101-107, Chiang Mai, Thailand, November 13, 2011.
Author
Pravesh Kumar Singh is a fine blend of strong scientific orientation and editing. He is a Computer Science (Bachelor in Technology) graduate from a renowned gurukul in India called Dr. Ram Manohar Lohia Awadh University, with excellence not only in academics but also a flagship in choreography. He completed his master's in Computer Science and Engineering from Integral University, Lucknow, India. Currently he heads the MCA (Master in Computer Applications) department at Thakur Publications and also works in the capacity of Senior Editor.
Weitere ähnliche Inhalte

Was ist angesagt?

Statistical Pattern recognition(1)
Statistical Pattern recognition(1)Statistical Pattern recognition(1)
Statistical Pattern recognition(1)
Syed Atif Naseem
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
cscpconf
 
Support Vector Machines
Support Vector MachinesSupport Vector Machines
Support Vector Machines
nextlib
 

Was ist angesagt? (16)

Statistical Pattern recognition(1)
Statistical Pattern recognition(1)Statistical Pattern recognition(1)
Statistical Pattern recognition(1)
 
Implementing the Perceptron Algorithm for Finding the weights of a Linear Dis...
Implementing the Perceptron Algorithm for Finding the weights of a Linear Dis...Implementing the Perceptron Algorithm for Finding the weights of a Linear Dis...
Implementing the Perceptron Algorithm for Finding the weights of a Linear Dis...
 
Implementing Minimum Error Rate Classifier
Implementing Minimum Error Rate ClassifierImplementing Minimum Error Rate Classifier
Implementing Minimum Error Rate Classifier
 
Multiclass Recognition with Multiple Feature Trees
Multiclass Recognition with Multiple Feature TreesMulticlass Recognition with Multiple Feature Trees
Multiclass Recognition with Multiple Feature Trees
 
Implementation of K-Nearest Neighbor Algorithm
Implementation of K-Nearest Neighbor AlgorithmImplementation of K-Nearest Neighbor Algorithm
Implementation of K-Nearest Neighbor Algorithm
 
Fuzzy entropy based optimal
Fuzzy entropy based optimalFuzzy entropy based optimal
Fuzzy entropy based optimal
 
Using Deep Learning to Find Similar Dresses
Using Deep Learning to Find Similar DressesUsing Deep Learning to Find Similar Dresses
Using Deep Learning to Find Similar Dresses
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
 
15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learning15857 cse422 unsupervised-learning
15857 cse422 unsupervised-learning
 
Lesson 33
Lesson 33Lesson 33
Lesson 33
 
Lecture 02: Machine Learning for Language Technology - Decision Trees and Nea...
Lecture 02: Machine Learning for Language Technology - Decision Trees and Nea...Lecture 02: Machine Learning for Language Technology - Decision Trees and Nea...
Lecture 02: Machine Learning for Language Technology - Decision Trees and Nea...
 
Designing a Minimum Distance classifier to Class Mean Classifier
Designing a Minimum Distance classifier to Class Mean ClassifierDesigning a Minimum Distance classifier to Class Mean Classifier
Designing a Minimum Distance classifier to Class Mean Classifier
 
Support Vector Machines
Support Vector MachinesSupport Vector Machines
Support Vector Machines
 
DCSM report2
DCSM report2DCSM report2
DCSM report2
 
Feed forward neural network for sine
Feed forward neural network for sineFeed forward neural network for sine
Feed forward neural network for sine
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
 

Andere mochten auch

Text mining by examples, By Hadi Mohammadzadeh
Text mining by examples, By Hadi MohammadzadehText mining by examples, By Hadi Mohammadzadeh
Text mining by examples, By Hadi Mohammadzadeh
Hadi Mohammadzadeh
 
Sentiment analysis of tweets
Sentiment analysis of tweetsSentiment analysis of tweets
Sentiment analysis of tweets
Vasu Jain
 
Information extraction for Free Text
Information extraction for Free TextInformation extraction for Free Text
Information extraction for Free Text
butest
 

Andere mochten auch (10)

JAVA 2013 IEEE DATAMINING PROJECT Comparable entity mining from comparative q...
JAVA 2013 IEEE DATAMINING PROJECT Comparable entity mining from comparative q...JAVA 2013 IEEE DATAMINING PROJECT Comparable entity mining from comparative q...
JAVA 2013 IEEE DATAMINING PROJECT Comparable entity mining from comparative q...
 
Dan Sullivan - Data Analytics and Text Mining with MongoDB - NoSQL matters Du...
Dan Sullivan - Data Analytics and Text Mining with MongoDB - NoSQL matters Du...Dan Sullivan - Data Analytics and Text Mining with MongoDB - NoSQL matters Du...
Dan Sullivan - Data Analytics and Text Mining with MongoDB - NoSQL matters Du...
 
Text mining by examples, By Hadi Mohammadzadeh
Text mining by examples, By Hadi MohammadzadehText mining by examples, By Hadi Mohammadzadeh
Text mining by examples, By Hadi Mohammadzadeh
 
A Survey on Sentiment Analysis and Opinion Mining
A Survey on Sentiment Analysis and Opinion MiningA Survey on Sentiment Analysis and Opinion Mining
A Survey on Sentiment Analysis and Opinion Mining
 
Textmining Introduction
Textmining IntroductionTextmining Introduction
Textmining Introduction
 
Sentiment analysis of tweets
Sentiment analysis of tweetsSentiment analysis of tweets
Sentiment analysis of tweets
 
Information extraction for Free Text
Information extraction for Free TextInformation extraction for Free Text
Information extraction for Free Text
 
Opinion Mining Tutorial (Sentiment Analysis)
Opinion Mining Tutorial (Sentiment Analysis)Opinion Mining Tutorial (Sentiment Analysis)
Opinion Mining Tutorial (Sentiment Analysis)
 
Big Data & Text Mining
Big Data & Text MiningBig Data & Text Mining
Big Data & Text Mining
 
Introduction to Text Mining and Semantics
Introduction to Text Mining and SemanticsIntroduction to Text Mining and Semantics
Introduction to Text Mining and Semantics
 

Ähnlich wie ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING

Huong dan cu the svm
Huong dan cu the svmHuong dan cu the svm
Huong dan cu the svm
taikhoan262
 
Lecture7 cross validation
Lecture7 cross validationLecture7 cross validation
Lecture7 cross validation
Stéphane Canu
 
IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292
HARDIK SINGH
 
SVM Tutorial
SVM TutorialSVM Tutorial
SVM Tutorial
butest
 
SPEECH CLASSIFICATION USING ZERNIKE MOMENTS
SPEECH CLASSIFICATION USING ZERNIKE MOMENTSSPEECH CLASSIFICATION USING ZERNIKE MOMENTS
SPEECH CLASSIFICATION USING ZERNIKE MOMENTS
cscpconf
 

Ähnlich wie ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING (20)

Methodological Study Of Opinion Mining And Sentiment Analysis Techniques
Methodological Study Of Opinion Mining And Sentiment Analysis Techniques  Methodological Study Of Opinion Mining And Sentiment Analysis Techniques
Methodological Study Of Opinion Mining And Sentiment Analysis Techniques
 
IMAGE CLASSIFICATION USING DIFFERENT CLASSICAL APPROACHES
IMAGE CLASSIFICATION USING DIFFERENT CLASSICAL APPROACHESIMAGE CLASSIFICATION USING DIFFERENT CLASSICAL APPROACHES
IMAGE CLASSIFICATION USING DIFFERENT CLASSICAL APPROACHES
 
Guide
GuideGuide
Guide
 
Huong dan cu the svm
Huong dan cu the svmHuong dan cu the svm
Huong dan cu the svm
 
Application of combined support vector machines in process fault diagnosis
Application of combined support vector machines in process fault diagnosisApplication of combined support vector machines in process fault diagnosis
Application of combined support vector machines in process fault diagnosis
 
SVM - Functional Verification
SVM - Functional VerificationSVM - Functional Verification
SVM - Functional Verification
 
Machine Learning and Artificial Neural Networks.ppt
Machine Learning and Artificial Neural Networks.pptMachine Learning and Artificial Neural Networks.ppt
Machine Learning and Artificial Neural Networks.ppt
 
Lecture7 cross validation
Lecture7 cross validationLecture7 cross validation
Lecture7 cross validation
 
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...
 
61_Empirical
61_Empirical61_Empirical
61_Empirical
 
Machine Learning Algorithms for Image Classification of Hand Digits and Face ...
Machine Learning Algorithms for Image Classification of Hand Digits and Face ...Machine Learning Algorithms for Image Classification of Hand Digits and Face ...
Machine Learning Algorithms for Image Classification of Hand Digits and Face ...
 
Classifiers
ClassifiersClassifiers
Classifiers
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
 
IRJET- Evaluation of Classification Algorithms with Solutions to Class Imbala...
IRJET- Evaluation of Classification Algorithms with Solutions to Class Imbala...IRJET- Evaluation of Classification Algorithms with Solutions to Class Imbala...
IRJET- Evaluation of Classification Algorithms with Solutions to Class Imbala...
 
IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292
 
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERSFIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS
 
SVM Tutorial
SVM TutorialSVM Tutorial
SVM Tutorial
 
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
 
SPEECH CLASSIFICATION USING ZERNIKE MOMENTS
SPEECH CLASSIFICATION USING ZERNIKE MOMENTSSPEECH CLASSIFICATION USING ZERNIKE MOMENTS
SPEECH CLASSIFICATION USING ZERNIKE MOMENTS
 

Kürzlich hochgeladen

EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
giselly40
 

Kürzlich hochgeladen (20)

GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 

ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING

  • 1. Sundarapandian et al. (Eds) : ACITY, AIAA, CNSA, DPPR, NeCoM, WeST, DMS, P2PTM, VLSI - 2013 pp. 85–94, 2013. © CS & IT-CSCP 2013 DOI : 10.5121/csit.2013.3410 ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING Pravesh Kumar Singh1 , Mohd Shahid Husain2 1 M.Tech, Department of Computer Science and Engineering, Integral University, Lucknow, India erpraveshkumar@gmail.com 2 Assistant Professor, Department of Computer Science and Engineering, Integral University, Lucknow, India siddiquisahil@gmail.com ABSTRACT Although opinion mining is in a nascent stage of development but still the ground is set for dense growth of researches in the field. One of the important activities of opinion mining is to extract opinions of people based on characteristics of the object under study. Feature extraction in opinion mining can be done by various ways like that of clustering, support vector machines etc. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses various techniques and second part makes a detailed appraisal of the major techniques used for feature extraction. KEYWORDS Opinion Mining, Feature Extraction, Clustering, Support Vector Machines. 1. INTRODUCTION People are generally more interested in other’s opinion and when it comes to company product then it becomes even more important. So, the information gathering behaviour makes us to collect and understand other’s views. Increasing networking capabilities provide a way for surprisingly large multitude of resources that contains options like blogs, reviews etc. As a result, now businesses are trying to analyse these floating opinions for competitive advan- tages. Now with the burst of activities in the area of sen- timent analysis they deal with opinions in the first class object as there is increasing interest in the systems of opinion mining. A fundamental step in opinion-mining and sentiment- analysis applications is feature extraction. The combined opinion mining procedure is illustrated in figure 1. Several methods are used for feature extraction among which, fol- lowing are the important ones: 1) Naïve –Bayes Classifier (NB) 2) Support Vector Machine (SVM) Opinions Feature Extraction Sentence Opinion Sentence Analysis Overall Review Score Calculation Results and Reviews Figure 1: Opinion Mining Process Sentence Extraction
  • 2. 86 Computer Science & Information Technology (CS & IT) 3) Multi Layer Perceptron (MLP) 4) Clustering Classifier In this paper, I have categorized the work done for feature extraction and classification in opinion mining and sentiment analysis. We have also tried to find out the performance, advantages and disadvantages of different techniques. 2. DATA SETS This section provides brief details of datasets used by us in our experiments. 2.1 Product Review Dataset The following dataset is taken from the work of blitzer and is of multi-domain characteristics. It consists of product reviews taken from amazon.com which belongs to a total of 25 categories like toys, videos, reviews etc. From the randomly selected five domains 4000 +ve and 4000-ve re- views are randomly samples. 2.2 Movie Review Dataset The second dataset is taken from the work of pang & lee (2004). It contains movie review with of the feature of 1000+ve and 1000-ve processed movie reviews. Table 1: Result of Simple n-gram Movie Review Product Reviews Unigram only 64.1 42.91 Bigram 76.15 69.62 Trigram 76.1 71.37 (uni+bi) gram 77.15 72.94 (uni+bi+tri) gram 80.15 78.67 3. CLASSIFICATION TECHNIQUES 3.1 Naïve Bayes Classifier This classifier is based on the probability statement that was given by Bayes. This theorem pro- vides conditional probability of occurrence of event E1 when E2 has already occurred, the vice versa can also be calculated by following mathematical statement. 2 112 21 E )E(P)E|E(P )E|E(P = This basically helps in deciding the polarity of data in which opinions / reviews / arguments can be classified as positive or negative which is facilitated by collection of positive or negative ex- amples already fed. Naïve Bayes algorithm is implemented to estimate the probability of a data to be negative or posi- tive. The aforesaid probability is calculated by studying positive and negative examples & then calculating the frequency of each pole which is technically termed as learning. This learning is actually supervised as there is an existence of examples1 . Thus, the conditional probability of a word with positive or negative meaning is calculated in view of a plethora of positive and negative ex- amples & calculating the frequency of each of class.
  • 3. Computer Science & Information Technology (CS & IT) 87 )Sentence(P )Sentiment|Sentence(P)Sentiment(P )Sentence|Sentiment(P = So, WordofnosTotalclassatobelongingwordsofNumber 1classinoccurencewordofNumber )Sentiment|Word(P + + = 1 Algorithm is shown in figure 2: Algorithm has following steps S1: Initialize P(positive) ← num − popozitii (positive)/ num_total_propozitii S2: Initialize P(negative) ← num − popozitii (negative) / num_total_propozitii S3: Convert sentences into words for each class of {positive, negative}: for each word in {phrase} P(word | class) < num_apartii (word | class) 1 | num_cuv (class) + num_total_cuvinte P (class) ←P (class) * P (word | class) Returns max {P(pos), P(neg)} 3.1.1. Evaluation of Algorithm The measures used for algorithm evaluation are: Accuracy Precision Recall Relevance Contingency table for analysis of algorithm: Relevant Irrelevant Detected Opinions True Positive (tp) False Positive (fp) Undetected Opinions False Negative (fn) True Negative (tn) Now, Precision = fp+tp tp Accuracy = F, fn+fp+tn+tp tn+tp = callRe+ecisionPr callRe*ecisionPr*2 ; Recall = fn+tp tp 1 The concept is to actually a fact that each & every word in a label is already referred in learning set. Classifier Classifier Training Set +ve Sentence −ve Sentence Classifier Sentence Review Book Review Figure 2
  • 4. 88 Computer Science & Information Technology (CS & IT) 3.1.2. Accuracy [1] Ion SMEUREANU, Cristian BUCUR, training the naïve gauss algorithm on 5000 sentences and get 0.79939209726444 accuracy where no of groups (n) is 2. 3.1.3. Advantages of Naïve Bayes Classification Methods 1. Model is easy to interpret 2. Efficient computation. 3.1.4. Disadvantage of Naïve Bayes Classification Methods Assumptions of attributes being independent, which may not be necessarily valid. 3.2 Support Vector Machine (SVM) The basic goal of support vector machine is to search a decision boundary between two classes that is excellently far away from any point in the training data2 . SVM develops a hyper planes or a set of hyper planes in infinite dimension space. This distance from decision surface to closest data point determines the margin of classifier. So the hyper planes act as decision surface which act as criteria to decide the distance of any data point from it. The margin of classifier is calculated by the distance from the closest data point. This success- fully creates a classification but a slight error will not cause a misclassification. For a training data set D, a set of n points: ( ) { }{ } )1.......(1,1¬εc,Rεxc,x=D n 1=ii p iii Where, xi is a p-dimensional real vector. Find the maximum-margin hyper plane i.e. splits the points having ci = 1 from those having ci = -1. Any hyperplane can be written as the set of points satisfying: )........(21=b-x•w Finding a maximum margin hyperplane, reduces to finding the pair w and b, such that the dis- tance between the hyperplanes is maximal while still separating the data. These hyperplanes (fig. 1) are described by: 1b-xw =• and 1-b-xw =• The distance between two hyperplanes is w b and therefore w needs to be minimized. Or we can say that minimize w in w, b subject to ci (w.xi - b) ≥ 1 for any i = 1, … , n. Using Lagranges multipliers (αi) this optimization problem can be expressed as: Input Space Feature Space Figure 3: Principle of Support Vector Machine φ Separating Hyperplane
  • 5. Computer Science & Information Technology (CS & IT) 89 ( )3.....}]1-)b-x.w(c[α-w 2 1 { αb,w maxmin ∑ n 1=i iii 2 2 3.2.1. Extensions of SVM To make SVM more robust and also more adaptable to real world problems, it has some exten- sions, some of which include following: 1. Soft Margin Classification For very high dimensional problems, common in text classification, sometimes, data are linearly separable. For multidimensional problems like in classification of text, data are linearly separable sometimes. But in most cases, the opinion solution is the one that classifies most of the data and ignore some outliers or noisy data. If the training set D cannot be separated clearly then the solu- tion is to have fat decision classifiers and make some mistake. Mathematically, a stack variable ξi are introduced that are not equal to zero which allow xi to not meet the margin requirements with a cost i.e., proportional to ξ. 2. Non-linear Classification The basic linear SVM was given by Vapnik (1963) later on Bernhard Boser, Isabelle Guyon and Vapnik in 1992 pavd a way for non-linear classifiers by using kernel to max. margin hyper planes. The only difference from th kernel trick given by Aizerman is that every dot product is replaced by non-linear kernel function. The effectiveness of SVM in this case lies in the selection of the kernel and soft margin parame- ters. 3. Multiclass SVM Generally SVM is applicable for two class tasks. But multiclass problems can be deal with multi- class SVM. In this case labels are designed to instances which are drawn from a finite set of vari- ous elements. These binary classifiers can be built by two classifiers like by either distinguish one versus all labels or between every pair of classes one versus one. 3.2.2. Accuracy Usenet reviews were classified by pang by using numerical ratings which were accompanied as basic truth. Various learning methods are used but unigrams gave best outputs in a presence based frequency model run by SVM. The accuracy undergone is the process was 82.9%. 3.2.3. Advantages of Support Vector Machine Method 1. Very good performance on experimental results 2. Low dependency on data set dimensionality. 3.2.4. Disadvantages of Support Vector Machine Method 1. Categorical or missing values need to be pre-processed. 2. Difficult interpretation of resulting model. 2 Possibly discounting some points as outliers or noise.
  • 6. 90 Computer Science & Information Technology (CS & IT) 3.3 Multi-Layer Perceptron (MLP) MLP is a neural network which is feed forward with one or more layers between input and out- put. Feed forward implies that, data flows in one direction i.e., from input layer to output layer (i.e., in forward direction). This ANN which multilayer perceptron starts with input layer where each node or neuron means a predicator variable. Input neurons are connected with each neuron in hidden layers. The neurons in hidden layer are in turn connected to neuron in other hidden lay- ers. The output layer is made up of one neuron in case of binary prediction or more than one neu- ron in case of non-binary prediction. Such arrangement makes a streamlined flow of information from input layer to output layer. MLP technique is quite popular owing to the fact that it can act as universal function approxima- tor. A “back propagation” network has at least one hidden layer with many non-linear units that can learn any function or relationship between group of input variable whether discrete and for continuous and output variable whether discrete and for continuous. This makes the technique of MLP quite general, flexible ad non-linear tools. When output layers is to be classified that has total number of nodes as total number of classes and the node having highest value then it gives the output i.e., estimate of a class for which an in- put is made. If there is a special case of two classes than, generally there is a node in output layer and classification is carried between two classes and is done by applying cut off point to node value. An advantages of this technique, compared to classical modelling techniques, is that it does not impose any sort of restriction with respect to the starting data (type of functional relationship be- tween variables), neither does it usually start from specific assumptions (like the type of distribu- tion the data follow). Another virtue of the technique lies in its capacity to estimate good models even despite the existence of noise in the information analyzed, as occurs when there is a pres- ence of omitted values or outlier values in the distribution of the variables. Hence, it is a robust technique when dealing with problems of noise in the information presented; however, this does not mean that the cleaning criteria of the data matrix should be relaxed. 3.3.1. Accuracy Ludmila I. Kuncheva, Member, IEEE on health care data calculated accuracy of Multilayer perceptron (MLP) as 84.25–89.50%. 3.3.2. Advantages of MLP 1) Capable of acting as a universal function approximator. 2) Capability to learn almost any relationship between input and output variables. 3.3.3. Disadvantages of MLP 1) Flexibility lies in the need to have sufficient training data and that it requires more time for its execution than other techniques. 2) It is somewhat considered as complex “black box”. 3.4 Clustering Classifier To identifying the prominent features of human and objects and recognizing them with a type one can require the object clustering. Basically clustering is unsupervised learning technique.
  • 7. Computer Science & Information Technology (CS & IT) 91 The objective of clustering is to determine a fresh or different set of classes or categories, the new groups are of concern in them- selves, and their valuation is intrinsic. In this method, data objects or instances groups into subset in such a method that similar in- stances are grouped together and various different instances be- long to the different groups. Clustering is an unsupervised learning task, so no class values representing a former combination of the data instances are given (situation of supervised learning). Clustering basically assembles data objects or instances into subset in such a method that similar objects are assembled together, while different objects belong to different groups. The objects are thereby prepared or organized into efficient representations that characterize the population being sampled. Clustering organization is denoted as a set of subsets C = C1 . . . Ck of S, such that: φ=∩= = ji k 1i i CCandCS U for ji ≠ . Therefore, any object in S related to exactly one and only one subset. For example, consider the figure 4 where data set has three normal clusters. Now consider the some real-life examples for illustrating clustering: Example 1: Consider the people having similar size together to make small and large shirts. 1. Tailor-made for each person: expensive 2. One-size-fits-all: does not fit all. Example 2: In advertising, segment consumers according to their similarities: To do targeted ad- vertising. Example 3: To create a topic hierarchy, we can take a group of text and organize those texts ac- cording to their content matches. There are two main types of measures used to estimate this relation: distance measures and simi- larity measures. Basically following are two kinds of measures used to guesstimate this relation 1. Distance measures and 2. Similarity measures Distance Measures To conclude the similarity and difference between the pair of instances, many clustering ap- proaches usage the distance measures. It is convenient to represent the distance between two instances let say xi and xj as: d (xi,xj). A valid distance measure should be symmetric and gains its minimum value (usually zero) in case of identical vectors. Distance measures method is known as metric distance measure if follow the following proper- ties: Figure 4
  • 8. 92 Computer Science & Information Technology (CS & IT) S∈x,x∀ x=x⇒0=)x,x(d.2 S∈x,x,x∀ )x,x(d+)x,x(d≤)x,x(dinequalityTriangle.1 ji jiji kji kjjiki There are variations in distance measures depending upon the attribute in question. 3.4.1. Clustering Methods A number of clustering algorithms are getting popular. The basic reason of a number of clustering methods is that “cluster” is not accurately defined (Estivill-Castro, 2000). As a result many clus- tering methods have been developed, using a different induction principle. Farley and Raftery (1998) gave a classification of clustering methods into two main groups: hierarchical and parti- tioning methods. Han and Kamber (2001) gave a different categorization: density-based methods, model-based clustering and gridbased methods. A totally different categorization based on the in- duction principle is given by (Estivill-Castro, 2000). 1. Hierarchical Methods According to this method clusters are created by recursive partitioning of the instances in either a top-down or bottom-up way. 2. Partitioning Methods Here, starting from the first (starting) partition, instances are rearranged by changing the position of instances from one cluster to another cluster. The basic assumption is that the number of clusters will be set by the user before. In order to get global optimality in partitioned-based clustering, an exhaustive enumeration process of all possi- ble partitions is done. 3. Density-based Methods The basis of this method is probability. It suggests that the instances that belong to each cluster are taken from a specific probability distribution. The entire distribution of the data is supposed to be a combination of numerous disseminations. The objective of these methods is to identify the clusters and their distribution parameters. This particular method is designed in order to discover clusters of arbitrary shape which are not necessarily convex. 4. Grid-based Methods In grid-based method all the operations for clustering are performed in a grid structure and for performing in the grid structure all the available space is divided into a fixed number of cells. The basic main advantage of this method is its speed (Han and Kamber, 2001). 5. Soft-computing Methods This includes techniques like that of neural networks. 3.4.2. Evaluation Criteria Measures for Clustering Technique Generally, these criteria are split into two groups named Internal and External. 1. Internal Quality Criteria This criteria generally measures compactness of clusters using similarity measures. It generally takes into consideration intra-cluster homogeneity, the inter-cluster separability or a combination of these two. It doesn’t use any exterior information beside the data itself.
3.4.2. Evaluation Criteria for Clustering Techniques

Generally, these criteria are split into two groups: internal and external.

1. Internal Quality Criteria
These criteria measure the compactness of the clusters using similarity measures. They typically consider intra-cluster homogeneity, inter-cluster separability, or a combination of the two, and they use no external information besides the data itself.

2. External Quality Criteria
These criteria are useful for examining how well the structure of the clusters matches some predefined classification of the objects.
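As an illustration of an external quality criterion, the sketch below (our own example, with made-up label vectors) compares cluster assignments against a predefined classification using the adjusted Rand index from scikit-learn; a score of 1.0 means the clustering reproduces the reference classes exactly, while a score near 0.0 indicates chance-level agreement.

# External evaluation: compare cluster labels against known classes.
from sklearn.metrics import adjusted_rand_score

true_classes   = [0, 0, 0, 1, 1, 1]  # hypothetical predefined classification
cluster_labels = [1, 1, 1, 0, 0, 0]  # hypothetical labels from a clustering run

# Only the grouping matters, not the label names: this prints 1.0.
print(adjusted_rand_score(true_classes, cluster_labels))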
3.4.3. Accuracy
Several different classifiers were used, and the accuracy of the best classifier varied from 65.33% to 99.57%, depending on the data.

3.4.4. Advantages of the Clustering Method
The main advantage of this method is that it offers groups that (approximately) satisfy an optimality measure.

3.4.5. Disadvantages of the Clustering Method
1. There is no learning set of labelled observations.
2. The number of groups is usually unknown.
3. Implicitly, the user has already chosen the appropriate features and distance measure.

4. CONCLUSION

What other people think has always been an important piece of information. The rising availability of opinion-rich resources such as online review websites and blogs makes it easy to search for and identify the opinions of others. People express their ideas and opinions concerning goods and services; these views and thoughts are subjective data which represent someone's opinions, sentiments, emotional states or evaluations. In this paper we have presented different methods for data (feature or text) extraction; every method has some benefits and limitations, and one can choose among these methods according to the situation at hand.

Based on the survey, the accuracy achieved by the different methods on the two data sets, using the n-gram feature, is shown in Table 2.

Table 2: Accuracy of Different Methods

                    Movie Reviews              Product Reviews
                    NB      MLP     SVM        NB      MLP     SVM
N-gram feature      75.50   81.05   81.15      62.50   79.27   79.40

According to our survey, the accuracy of MLP is better than that of the other methods when the n-gram feature is used.

The methods discussed in this paper are applicable in different areas: clustering, for example, has been applied to movie reviews, and SVM techniques to biological reviews and analysis. Although the field of opinion mining is new, the diverse methods available can be implemented in various programming languages such as PHP and Python, with innumerable applications as the outcome. From a convergent point of view, Naïve Bayes is best suited for textual classification, clustering for consumer services, and SVM for biological reading and interpretation.

ACKNOWLEDGEMENTS

Every good piece of writing requires the help and support of many people for it to be truly good. I take this opportunity to thank all those who extended a helping hand whenever I needed one. I offer my heartfelt gratitude to Mr. Mohd Shahid Husain, who encouraged, guided and helped me a lot in this project. I extend my thanks to Miss Ratna Singh (fiancée) for her incandescent help in completing this paper. A vote of thanks goes to my family for their moral and emotional support. Above all, utmost thanks to the Almighty God for the divine intervention in this academic endeavour.

REFERENCES

[1] Ion Smeureanu and Cristian Bucur, "Applying Supervised Opinion Mining Techniques on Online User Reviews", Informatica Economică, vol. 16, no. 2, 2012.
[2] Bo Pang and Lillian Lee, "Opinion Mining and Sentiment Analysis", Foundations and Trends in Information Retrieval, vol. 2, nos. 1–2, 2008.
[3] A. Abbasi, "Affect intensity analysis of dark web forums", in Proceedings of Intelligence and Security Informatics (ISI), pp. 282–288, 2007.
[4] K. Dave, S. Lawrence and D. Pennock, "Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews", in Proceedings of the 12th International Conference on World Wide Web, pp. 519–528, 2003.
[5] B. Liu, Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data, Springer, 2007.
[6] B. Pang and L. Lee, "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", in Proceedings of the Association for Computational Linguistics (ACL), pp. 115–124, 2005.
[7] Nilesh M. Shelke, Shriniwas Deshpande and Vilas Thakre, "Survey of Techniques for Opinion Mining", International Journal of Computer Applications (0975–8887), vol. 57, no. 13, November 2012.
[8] Nidhi Mishra and C. K. Jha, "Classification of Opinion Mining Techniques", International Journal of Computer Applications, vol. 56, no. 13, pp. 1–6, October 2012, Foundation of Computer Science, New York, USA.
[9] Oded Z. Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, Springer, 2005.
[10] Bo Pang, Lillian Lee and Shivakumar Vaithyanathan, "Sentiment classification using machine learning techniques", in Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 79–86, 2002.
[11] "Towards Enhanced Opinion Classification using NLP Techniques", IJCNLP 2011, pp. 101–107, Chiang Mai, Thailand, November 13, 2011.

Author

Pravesh Kumar Singh is a fine blend of strong scientific orientation and editing. He completed his Bachelor of Technology in Computer Science at a renowned gurukul in India, Dr. Ram Manohar Lohia Awadh University, excelling not only in academics but also holding a flagship in choreography. He obtained his Master's degree in Computer Science and Engineering from Integral University, Lucknow, India. Currently he heads the MCA (Master in Computer Applications) department at Thakur Publications and also works in the capacity of Senior Editor.