DISCOVER . LEARN . EMPOWER
More on Pandas
INSTITUTE - UIE
DEPARTMENT- ACADEMIC
UNIT-1
Bachelor of Engineering (Computer Science &
Engineering)
Disruptive Technologies – I
Prepared By: Dr. Anuj Kumar Goel
1
2
More on Pandas
Course Outcome
CO Number | Title | Level
CO1 | Grasp the characteristics of disruptive technologies and understand the building blocks of artificial intelligence, data science and cloud computing. | Remember
CO2 | Develop a simple intelligent system using available tools and techniques of AI to analyze and interpret domain knowledge. | Understand
CO3 | Build effective data visualizations, and learn to work with data through the entire data science process. | Apply
CO4 | Deploy, build, and monitor cloud-based applications. | Analyze and evaluate
CO5 | Work in a team that can propose, design, implement and report on their selected domain. | Create
Python for Data Analysis
Tutorial
Content
4
Overview of Python Libraries for Data
Scientists
Reading Data; Selecting and Filtering the Data; Data manipulation,
sorting, grouping, rearranging
Plotting the data
Descriptive statistics
Inferential statistics
Python Libraries for Data Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
Visualization libraries
• matplotlib
• Seaborn
and many more …
5
All these libraries are
installed on the SCC (Shared Computing Cluster)
Python Libraries for Data Science
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as
functions that make it easy to perform advanced mathematical and statistical
operations on those objects
 provides vectorization of mathematical operations on arrays and matrices
which significantly improves the performance
 many other python libraries are built on NumPy
6
Link: http://www.numpy.org/
Python Libraries for Data Science
SciPy:
 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more
 part of SciPy Stack
 built on NumPy
7
Link: https://www.scipy.org/scipylib/
Python Libraries for Data Science
Pandas:
 adds data structures and tools designed to work with table-like data (similar
to Series and Data Frames in R)
 provides tools for data manipulation: reshaping, merging, sorting, slicing,
aggregation etc.
 allows handling missing data
8
Link: http://pandas.pydata.org/
Link: http://scikit-learn.org/
Python Libraries for Data Science
SciKit-Learn:
 provides machine learning algorithms: classification, regression, clustering,
model validation etc.
 built on NumPy, SciPy and matplotlib
9
matplotlib:
 Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats
 a set of functionalities similar to those of MATLAB
 line plots, scatter plots, bar charts, histograms, pie charts etc.
 relatively low-level; some effort is needed to create advanced visualizations
Link: https://matplotlib.org/
Python Libraries for Data Science
10
Seaborn:
 based on matplotlib
 provides high level interface for drawing attractive statistical graphics
 Similar (in style) to the popular ggplot2 library in R
Link: https://seaborn.pydata.org/
Python Libraries for Data Science
11
Start Jupyter notebook
# On the Shared Computing Cluster
[scc1 ~] jupyter notebook
12
Figure 1. Starting Jupyter notebook
In [ ]:
Loading Python Libraries
13
#Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
Press Shift+Enter to execute the jupyter cell
Figure 2. Loading Python Libraries
In [ ]:
Reading data using pandas
14
#Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
There are a number of pandas commands to read other data formats:
pd.read_excel('myfile.xlsx',sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5','df')
Note: The above command has many optional arguments to fine-tune the data import process.
Figure 3. Reading data files in Pandas
In [3]:
Exploring data frames
15
#List first 5 records
df.head()
Out[3]:
Figure 4. Exploring a Data Frame with head()
Hands-on exercises
16
 Try to read the first 10, 20, 50 records;
 Can you guess how to view the last few records? Hint: use the tail() method. A possible solution sketch follows below.
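A possible solution sketch for these exercises, assuming the Salaries.csv data frame df read earlier:
#Read the same dataset as above, then vary head()/tail()
import pandas as pd
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
df.head(10)   # first 10 records
df.head(20)   # first 20 records
df.head(50)   # first 50 records
df.tail()     # last 5 records (default)
df.tail(3)    # last 3 records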
Data Frame data types
Pandas Type | Native Python Type | Description
object | string | The most general dtype. Assigned to a column if the column has mixed types (numbers and strings).
int64 | int | Numeric values. 64 refers to the number of bits allocated to hold each value.
float64 | float | Numeric values with decimals. If a column contains numbers and NaNs (see below), pandas will default to float64, in case your missing value has a decimal.
datetime64, timedelta[ns] | N/A (but see the datetime module in Python's standard library) | Values meant to hold time data. Look into these for time series experiments.
17
Table 1. Data Types
In [4]:
Data Frame data types
18
#Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
In [5]: #Check types for all the columns
df.dtypes
Out[5]: rank          object
        discipline    object
        phd            int64
        service        int64
        sex           object
        salary         int64
        dtype: object
Figure 5. Column-wise data types
Data Frames attributes
19
Python objects have attributes and methods.
df.attribute description
dtypes list the types of the columns
columns list the column names
axes list the row labels and column names
ndim number of dimensions
size number of elements
shape return a tuple representing the dimensionality
values numpy representation of the data
Table 2. Data Frames attributes
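A short, hedged sketch of these attributes applied to the Salaries data frame df loaded earlier:
#Inspect the data frame through its attributes
df.dtypes    # type of each column
df.columns   # column names (rank, discipline, phd, service, sex, salary)
df.axes      # row labels and column names together
df.ndim      # 2 for a data frame
df.shape     # (number of rows, number of columns)
df.size      # total number of elements
df.values    # NumPy representation of the data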
Hands-on exercises
20
 Find how many records this data frame has;
 How many elements are there?
 What are the column names?
 What types of columns do we have in this data frame?
Data Frames methods
21
df.method() description
head( [n] ), tail( [n] ) first/last n rows
describe() generate descriptive statistics (for numeric columns only)
max(), min() return max/min values for all numeric columns
mean(), median() return mean/median values for all numeric columns
std() standard deviation
sample([n]) returns a random sample of the data frame
dropna() drop all the records with missing values
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
Table 3. Data frame
methods
Hands-on exercises
22
 Give the summary for the numeric columns in the dataset
 Calculate standard deviation for all numeric columns;
 What are the mean values of the first 50 records in the dataset? Hint: use the
head() method to subset the first 50 records and then calculate the mean. A possible solution sketch follows below.
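One possible solution sketch, assuming the same Salaries data frame df (numeric_only is passed so that text columns are skipped on recent pandas versions):
df.describe()                        # summary of the numeric columns
df.std(numeric_only=True)            # standard deviation of all numeric columns
df.head(50).mean(numeric_only=True)  # mean values of the first 50 records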
Selecting a column in a Data Frame
Method 1: Subset the data frame using column name:
df['sex']
Method 2: Use the column name as an attribute:
df.sex
Note: pandas data frames already define rank (the DataFrame.rank() method), so to select a column with the name
"rank" we should use method 1.
23
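A minimal sketch contrasting the two methods on the Salaries data frame df:
df['sex']    # bracket notation always returns the column
df.sex       # attribute notation works when the name does not clash
df['rank']   # brackets required: df.rank refers to the DataFrame.rank() method, not the column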
Hands-on exercises
24
 Calculate the basic statistics for the salary column;
 Find how many values there are in the salary column (use the count method);
 Calculate the average salary (a possible solution sketch follows below);
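A possible solution sketch, assuming the Salaries data frame df:
df['salary'].describe()  # basic statistics for the salary column
df['salary'].count()     # number of (non-missing) salary values
df['salary'].mean()      # average salary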
Data Frames groupby method
25
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to dplyr() function in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
Figure 6. Groupby method in Pandas(a)
Data Frames groupby method
26
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object.
When double brackets are used, the output is a Data Frame.
Figure 6. Groupby method in Pandas(b)
Data Frames groupby method
27
groupby performance notes:
- no grouping/splitting occurs until it's needed. Creating the groupby object
only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation. You may
want to pass sort=False for potential speedup:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby(['rank'], sort=False)[['salary']].mean()
Figure 6. Groupby method in Pandas(c)
Data Frame: filtering
28
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Select the rows where salary is greater than $120K:
df_sub = df[ df['salary'] > 120000 ]
In [ ]:#Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
Any Boolean operator can be used to subset the data:
> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
Figure 7. Filtering method in Pandas
Data Frames: Slicing
29
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Rows and columns can be selected by their position or label
Data Frames: Slicing
30
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']
When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets:
In [ ]: #Select columns rank and salary:
df[['rank','salary']]
Figure 8. Data Slicing in Pandas
Data Frames: Selecting rows
31
If we need to select a range of rows, we can specify the range using ":"
In [ ]: #Select rows by their position:
df[10:20]
Notice that the first row has position 0, and the last value in the range is not included:
so for the 0:10 range, the first 10 rows are returned, with positions starting at 0
and ending at 9.
Data Frames: method loc
32
If we need to select a range of rows using their labels, we can use the loc method:
In [ ]: #Select rows by their labels:
df_sub.loc[10:20,['rank','sex','salary']]
Out[ ]:
Figure 9. selecting rows through labels
Data Frames: method iloc
33
If we need to select a range of rows and/or columns using their positions, we can
use the iloc method:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]
Out[ ]:
Figure 10. selecting rows / columns through position
Data Frames: method iloc (summary)
34
df.iloc[0] # First row of a data frame
df.iloc[i] #(i+1)th row
df.iloc[-1] # Last row
df.iloc[:, 0] # First column
df.iloc[:, -1] # Last column
df.iloc[0:7] #First 7 rows
df.iloc[:, 0:2] #First 2 columns
df.iloc[1:3, 0:2] #Second through third rows and first 2 columns
df.iloc[[0,5], [1,3]] #1st and 6th rows and 2nd and 4th columns
Figure 11. selecting rows/column through position (summary)
Data Frames: Sorting
35
We can sort the data by a value in a column. By default the sorting occurs in
ascending order and a new data frame is returned.
In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()
Out[ ]:
Figure 12. Data sorting(a)
Data Frames: Sorting
36
We can sort the data using 2 or more columns:
In [ ]: df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)
Out[ ]:
Figure 12. Data sorting(b)
Missing Values
37
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
In [ ]: # Select the rows that have at least one missing value
flights[flights.isnull().any(axis=1)].head()
Out[ ]:
Figure 13. Selecting rows with missing values
Missing Values
38
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations
dropna(how='all') Drop observations where all cells are NA
dropna(axis=1, how='all') Drop column if all the values are missing
dropna(thresh = 5) Drop rows that contain less than 5 non-missing values
fillna(0) Replace missing values with zeros
isnull() returns True if the value is missing
notnull() Returns True for non-missing values
Table 4. Methods for handling missing values
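A minimal sketch of these methods applied to the flights data frame read above (each call returns a new data frame or Series; the original is unchanged):
flights.dropna()                   # drop rows with at least one missing value
flights.dropna(how='all')          # drop rows where every cell is NA
flights.dropna(axis=1, how='all')  # drop columns where every value is missing
flights.dropna(thresh=5)           # keep rows with at least 5 non-missing values
flights.fillna(0)                  # replace missing values with zeros
flights.isnull().sum()             # count missing values per column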
Missing Values
39
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve them in
the resulting arrays
• Missing values in the GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. It is set to True by default (unlike R); see the sketch below
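A small sketch of this behaviour on a toy Series (hypothetical values, not the course data):
import pandas as pd
import numpy as np
s = pd.Series([1.0, np.nan, 3.0])
s.sum()               # 4.0 -- NaN treated as zero
s.sum(skipna=False)   # nan -- the missing value propagates
s.cumsum()            # 1.0, NaN, 4.0 -- NaN ignored in the running sum but preserved in the output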
Aggregation Functions in Pandas
40
Aggregation - computing a summary statistic for each group, e.g.
• compute group sums or means
• compute group sizes/counts
Common aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
Aggregation Functions in Pandas
41
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
Out[ ]:
Figure 14. Aggregation functions
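agg() can also be combined with groupby; a hedged sketch using the Salaries data frame df from earlier:
#Several statistics per professor rank in one call
df.groupby('rank')['salary'].agg(['min', 'mean', 'max'])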
Basic Descriptive Statistics
42
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)
min, max Minimum and maximum values
mean, median, mode Arithmetic average, median and mode
var, std Variance and standard deviation
sem Standard error of mean
skew Sample skewness
kurt kurtosis
Table 5. Available statistics methods
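A quick sketch applying a few of these methods to the salary column of df:
df['salary'].mean()    # arithmetic average
df['salary'].median()  # median
df['salary'].var()     # variance
df['salary'].sem()     # standard error of the mean
df['salary'].skew()    # sample skewness
df['salary'].kurt()    # kurtosis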
Graphics to explore the data
43
To show graphs within a Jupyter notebook, include the inline directive:
In [ ]: %matplotlib inline
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization
Graphics
44
seaborn plot function description
distplot histogram
barplot estimate of central tendency for a numeric variable
violinplot similar to boxplot, also shows the probability density of the
data
jointplot Scatterplot
regplot Regression plot
pairplot Pairplot
boxplot boxplot
swarmplot categorical scatterplot
factorplot General categorical plot
Table 6. Data plotting
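A minimal plotting sketch using the Salaries data frame df (note: recent seaborn versions replace distplot with histplot and factorplot with catplot):
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.boxplot(x='rank', y='salary', data=df)     # salary distribution per professor rank
plt.figure()                                   # start a new figure for the next plot
sns.regplot(x='service', y='salary', data=df)  # salary vs. years of service with a regression line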
Basic statistical Analysis
45
statsmodels and scikit-learn both provide a number of functions for statistical analysis.
The first is mostly used for conventional statistical analysis with R-style formulas, while scikit-learn is
more tailored for machine learning; short sketches of both follow below.
statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testing
• many more ...
scikit-learn:
• k-means clustering
• support vector machines
• random forests
• many more ...
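Two minimal, hedged sketches of the kind of analysis mentioned above, using the Salaries data frame df; the chosen formula and feature columns are illustrative only, not part of the slides:
#statsmodels: linear regression with an R-style formula
import statsmodels.formula.api as smf
model = smf.ols('salary ~ service + phd', data=df).fit()
print(model.summary())

#scikit-learn: k-means clustering on two numeric columns
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3, n_init=10).fit(df[['service', 'salary']])
print(km.labels_[:10])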
References
• http://www.numpy.org/
• https://www.scipy.org/scipylib/
• http://pandas.pydata.org/
• https://matplotlib.org/
46
THANK YOU
For queries
Email: anuj.e10160@cumail.in
47