How to create a Word Cloud in R

Mona Swarnakar
Date: 9th January 2014
Email id: s.mona8@gmail.com
Blog: http://monaprofile.blogspot.in/
I have seen word clouds in many places, such as magazines, websites, and blogs, but I had never thought of making one myself until I learned how in R. I am not aware of any other free options for creating word clouds. First, a quick word about R: it is a free, open-source environment that is very useful for statistical analysis, and it can be used for many purposes, from data mining to data visualization. A word cloud is a visualization that highlights the words used most often in a text. If you know the correct procedure, building a word cloud in RStudio is simple; a package called "wordcloud" does most of the work. You can follow my four simple steps below to create one.
If you are new to R or word clouds, I suggest first installing RStudio from rstudio.com.
The following packages are also required to create a word cloud in R, so install and load them as well:
library(twitteR)
library(tm)
library(SnowballC)
library(wordcloud)
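Note that library() only loads a package that is already installed. A small sketch of a one-time installation check (the package names are taken from the list above):

```r
# Install any of the required packages that are missing, then load with library().
# install.packages() downloads from CRAN; library() only loads.
pkgs <- c("twitteR", "tm", "SnowballC", "wordcloud")
to_install <- setdiff(pkgs, rownames(installed.packages()))
if (length(to_install) > 0) install.packages(to_install)
```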

Note: In RStudio, the Packages pane on the right-hand side lets you browse and install the packages you need.
Step 1->
First, load the twitteR package:
library(twitteR)
Once that is done, load the tweets data that you saved earlier (adjust the path to wherever your .RData file lives; R paths use forward slashes, even on Windows):
> load("F:/Mona/Mona R/Tweets.RData")

To convert the tweets into a data frame, we write the following:
>df=do.call("rbind",lapply(tweets, as.data.frame))
>dim(df)
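To see what do.call("rbind", lapply(...)) is doing, here is a small self-contained sketch with made-up tweet records standing in for the real tweets object (which comes from the twitteR package):

```r
# Hypothetical stand-in for the loaded tweets list: each element becomes
# a one-row data frame, and rbind stacks them into one data frame.
tweets <- list(list(text = "learning R", user = "a"),
               list(text = "word clouds", user = "b"))
df <- do.call("rbind", lapply(tweets, as.data.frame))
dim(df)  # 2 rows, 2 columns
```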

Step 2 ->
Now load the package below:
library(tm)
A corpus is a collection of text documents. VectorSource() is a very useful command that builds a source from a character vector, from which we can create a corpus:
>mydata=Corpus(VectorSource(df$text))

Transformations: Once we have a corpus, we can modify its documents (for example, stopword removal, stemming, etc.). Transformations are done via the tm_map() function, which applies a given transformation to every document in the corpus. The commands used to clean the data are listed below.
To eliminate extra white space:

> mydata=tm_map(mydata, stripWhitespace)

To convert to lower case (on newer versions of tm, tolower may need to be wrapped as content_transformer(tolower)):
>mydata=tm_map(mydata, tolower)

To remove punctuation:
>mydata=tm_map(mydata,removePunctuation)

To remove numbers:

>mydata=tm_map(mydata, removeNumbers)
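For intuition, the effect of each transformation can be sketched in base R on a single string (a toy example, not how tm applies them internally):

```r
# Toy sketch of the cleaning steps applied to one string.
x <- "  Learning R, in 2014!!  "
x <- gsub("\\s+", " ", trimws(x))  # strip extra whitespace
x <- tolower(x)                    # convert to lower case
x <- gsub("[[:punct:]]", "", x)    # remove punctuation
x <- gsub("[[:digit:]]", "", x)    # remove numbers
trimws(x)  # "learning r in"
```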
Stopwords: A further preprocessing step is the removal of stopwords. Stopwords are words so common in a language that their information value is almost zero; in other words, their entropy is very low, so it is usual to remove them before further analysis. First we set up a small stopword list, adding "R" and "online" to the standard English list:
>my_stopwords=c(stopwords('english'),c('R','online'))
>mydata=tm_map(mydata, removeWords, my_stopwords)
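What removeWords amounts to can be sketched on a plain token vector (removeWords itself operates on whole documents inside the corpus; this token-level toy just shows the filtering idea):

```r
# Filtering tokens against a stopword list (toy example).
my_stopwords <- c("the", "is", "r", "online")
tokens <- c("r", "is", "great", "online", "wordclouds")
tokens[!tokens %in% my_stopwords]  # "great" "wordclouds"
```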

Stemming: Stemming is the process of removing suffixes from words to reduce them to a common root. For example, stripping suffixes such as -ing and -ed lets us count "stopped" and "stopping" as the same word, both derived from "stop".
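A crude suffix-stripping sketch in base R gives the flavour (the real stemming in Step 3 uses the Snowball algorithm via the SnowballC package, which handles many more cases correctly):

```r
# Naive suffix stripping, for illustration only; not a real stemmer.
words <- c("stopped", "stopping", "stops")
sub("(ed|ing|s)$", "", words)  # "stopp" "stopp" "stop"
```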

Step 3 ->
Now load the package below:
library(SnowballC)
>mydata=tm_map(mydata, stemDocument)

Term-Document Matrix: A common approach in text mining is to create a term-document matrix from a corpus. In the tm package, the classes TermDocumentMatrix (terms as rows, documents as columns) and DocumentTermMatrix (the transpose) employ sparse matrices to store corpora efficiently.
>tdm<-TermDocumentMatrix(mydata)
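To see what a term-document matrix contains, here is one built by hand in base R from two tiny made-up documents (tdm_toy is a hypothetical name for this sketch):

```r
# Rows are terms, columns are documents; entries are term counts.
docs  <- list(c("data", "mining", "data"), c("data", "visualization"))
terms <- sort(unique(unlist(docs)))
tdm_toy <- sapply(docs, function(d) table(factor(d, levels = terms)))
tdm_toy  # e.g. "data" occurs twice in document 1, once in document 2
```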

Frequent Terms: Now we can look at the popular words in the term-document matrix. findFreqTerms() returns the terms that occur at least lowfreq times, and rowSums() over those rows gives each term's total frequency:
>freqterms=findFreqTerms(tdm, lowfreq=70)
>wordFreq=rowSums(as.matrix(tdm[freqterms,]))
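The same frequency filtering can be seen on a small hand-made matrix (names and numbers hypothetical):

```r
# Keep only terms whose total count across documents reaches a threshold.
tdm_toy <- matrix(c(5, 3, 1, 4, 0, 2), nrow = 3,
                  dimnames = list(c("data", "research", "rare"), NULL))
wordFreq <- rowSums(tdm_toy)  # data=9, research=3, rare=3
wordFreq[wordFreq >= 5]       # only "data" survives
```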
Step 4 ->

Word Cloud: After building the term-document matrix and computing the term frequencies, we can show the importance of words with a word cloud.
Now load the packages below:
library(wordcloud)
library(RColorBrewer)
pal2 <- brewer.pal(8,"Dark2")

There are three options; you can apply any one of them for a different word-cloud colour scheme:

>wordcloud(words=names(wordFreq), freq=wordFreq, min.freq=5, max.words=50, random.order=F, colors="red")
>wordcloud(words=names(wordFreq), freq=wordFreq, scale=c(5,.2), min.freq=3, max.words=200, random.order=F, rot.per=.15, colors=brewer.pal(8, "Dark2"))
>wordcloud(words=names(wordFreq), freq=wordFreq, scale=c(5,.2), min.freq=3, max.words=Inf, random.order=F, rot.per=.15, random.color=TRUE, colors=rainbow(7))

To get multiple colours in the word cloud we pass a palette, e.g. pal2 <- brewer.pal(8, "Dark2"), to the colors option; if you want only one colour, you can simply pass "red" or "blue" instead.

The resulting word cloud clearly shows that "data", "example" and "research" are the three most prominent words, confirming that these words were used most often in the tweets.

words: the words to plot
freq: their frequencies
scale: a vector of length 2 indicating the range of word sizes
min.freq: words with frequency below min.freq will not be plotted
max.words: maximum number of words to be plotted; the least frequent terms are dropped
random.order: plot words in random order; if FALSE, they are plotted in decreasing frequency
random.color: choose colors randomly from the supplied colors; if FALSE, the color is chosen based on frequency
rot.per: proportion of words with 90-degree rotation
colors: colors for the words, from least to most frequent
ordered.colors: if TRUE, colors are assigned to words in order

Hope this helps.

Thanks for reading!
