Resume

PRAVEEN REDDY GAJJALA
prawin1044@gmail.com
+91-8886312512
Professional Summary:
• 2 years of overall IT experience.
• 2 years of experience in Big Data and Hadoop and its components: HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, and HBase.
• Good knowledge of MapReduce programming in Java.
• Extensive knowledge of Hive installation and configuration, setting up the Metastore, and UDF creation (a minimal UDF sketch follows this summary).
• Good at combining technical skills, customer support, and training to meet client needs and delight customers.
• Involved in schema design and Hive installation.
• Involved in writing Pig scripts to reduce job execution time.
• Exceptional ability to learn new concepts.
• Hard-working and enthusiastic.
• Knowledge of Flume and NoSQL databases.
• Experience with UNIX, Linux, and Windows operating systems.
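As a minimal illustration of the Hive UDF creation mentioned above (a sketch only, assuming the classic org.apache.hadoop.hive.ql.exec.UDF API of that Hive generation; the class name and logic are hypothetical, not from an actual project):

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical Hive UDF that trims and lower-cases a string column.
    public class NormalizeString extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }

Such a UDF would be packaged in a JAR, added to the Hive session with ADD JAR, and registered with CREATE TEMPORARY FUNCTION normalize_string AS 'NormalizeString' (names illustrative).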
Work Experience:
• Working as a Hadoop Developer at Wipro, Hyderabad, from Dec 2012 to date.
Academic Profile:
Bachelor of Engineering in Electrical and Electronics (2008-2012),
KORM College of Engineering, Kadapa (JNTU Anantapur).
Technical Skills:
Languages: Core Java, MapReduce, Pig, Sqoop, Hive, HBase
Web Technologies: HTML, Hadoop API
Frameworks: Hadoop
Java IDEs: Eclipse
Databases: Oracle, MySQL
Operating Systems: Windows 7, Windows XP, Windows 2000/2003, UNIX, Linux
Project:
Client: Sears
Environment: Hadoop, Apache Pig, Hive, HBase, Oozie, Java, Linux, SQL, Eclipse, Oracle 10g, MapReduce, HDFS
Duration: Jan 2013 to date
Role: Hadoop Developer
Description:
The purpose of the project is to improve the customer's shopping experience with Sears. Clickstream data is collected from Sears websites and mobile apps, shopping patterns are analyzed, and customer-facing applications are customized so that customers can reach products in fewer clicks and get a personalized shopping experience. Hadoop is used to collect and store data from the various data points and to run analyses on the data with MapReduce jobs and Pig and Hive scripts. Aggregated results are then exported to a downstream RDBMS for Business Intelligence reporting.
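As an illustration of the kind of MapReduce analysis described above, here is a minimal sketch of a job that counts clicks per product, assuming a hypothetical input format in which each clickstream record is a tab-separated line whose first field is a product ID. The class names, input layout, and logic are illustrative assumptions, not the project's actual code; the API shown is the standard org.apache.hadoop.mapreduce API of the CDH3 era.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Counts clicks per product from tab-separated clickstream records (illustrative only).
    public class ProductClickCount {

        public static class ClickMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            private final Text productId = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Assumes the first tab-separated field is the product ID.
                String[] fields = value.toString().split("\t");
                if (fields.length > 0 && !fields[0].isEmpty()) {
                    productId.set(fields[0]);
                    context.write(productId, ONE);
                }
            }
        }

        public static class ClickReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "product-click-count"); // new Job(conf, name) matches the CDH3-era API
            job.setJarByClass(ProductClickCount.class);
            job.setMapperClass(ClickMapper.class);
            job.setCombinerClass(ClickReducer.class);
            job.setReducerClass(ClickReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Such a job would be packaged into a JAR and launched with hadoop jar, reading raw clickstream files from HDFS and writing per-product click counts back to HDFS.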
Roles and Responsibilities:
• Worked on a live Hadoop production CDH3 cluster with 35 nodes.
• Worked with highly unstructured and semi-structured data, 25 TB in size.
• Gained good experience benchmarking the Hadoop cluster.
• Used Sqoop to import data from a DB2 system into HDFS.
• Developed custom MapReduce programs in Java (see the driver sketch after this list).
• Designed and developed Pig data transformation scripts against unstructured data from various data points and created a baseline.
• Created and optimized Hive scripts for data analysts based on their requirements.
• Worked with SequenceFiles and compressed file formats (also illustrated in the sketch after this list).
• Investigated performance issues and tuned Pig and Hive scripts.
• Exported the analyzed data to relational databases using Sqoop for visualization and report generation for the BI team.
• Wrote script files for processing data and loading it into HDFS.
• Worked with the infrastructure and admin teams to set up monitoring probes to track node health.
• Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
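The bullets above mention custom MapReduce programs and compressed SequenceFile output; the following is a minimal, illustrative driver sketch showing how a job producing block-compressed SequenceFile output might be configured, assuming CDH3-era Hadoop APIs and reusing the hypothetical mapper and reducer from the earlier sketch. It demonstrates the general technique, not the project's actual code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    // Illustrative driver that writes block-compressed SequenceFile output.
    public class CompressedOutputDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "clickstream-compressed-output");
            job.setJarByClass(CompressedOutputDriver.class);

            // Mapper and reducer from the earlier ProductClickCount sketch (hypothetical).
            job.setMapperClass(ProductClickCount.ClickMapper.class);
            job.setReducerClass(ProductClickCount.ClickReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);

            // Write block-compressed SequenceFiles instead of plain text.
            job.setOutputFormatClass(SequenceFileOutputFormat.class);
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
            SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Block-compressed SequenceFiles keep intermediate results compact on HDFS while remaining splittable, so downstream MapReduce, Pig, or Hive jobs can still read them in parallel.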
