Ramez Rangrez
SQL, Big Data and ecosystem: Apache Hadoop including Hive, Pig, HBase, Sqoop, Flume, HDFS,
MapReduce, Cloudera and AWS Cloud.
Working knowledge of JIRA ticketing tool.
Working knowledge of Salesforce (creating SFDC cases and escalating them to the appropriate team).
Hands-on expertise in Hadoop administration, Linux, and configuration management at large scale.
Results-oriented, with a laser-sharp delivery focus.
Good exposure to technical approaches, strategic analysis and leadership.
Innovative and resourceful multitasker with strong communication and technical writing skills.
Persistent Systems Limited, Pune.
September 26, 2016 – till date.
Configuring, deploying, maintaining and monitoring Hadoop clusters; performing various operations while storing, processing
and analyzing Big Data on the cloud.
Integrating with system hardware in data centers at AWS locations.
Creating data pipelines from object storage to our compute servers in the cloud.
Troubleshooting, tuning, and resolving Hadoop issues via the CLI or the web UI.
Moving/migrating customer data from one data center to another.
Supporting the Business Intelligence team in processing data.
Providing L1.5 Hadoop/Linux support for a major client in the US.
CloudAge, Pune.
January 1, 2013 – September 23.
Configured and deployed a fully distributed Apache Hadoop cluster in the cloud. Deployed new Hadoop
infrastructure; performed Hadoop cluster upgrades, cluster maintenance, troubleshooting, capacity planning, JVM
performance tuning and resource optimization.
Troubleshooting and debugging Hadoop ecosystem runtime issues; recovering from node failures and
troubleshooting common Hadoop cluster issues.
Working knowledge of Cloudera Manager, Cloudera Director and AWS CloudWatch.
Commissioning and decommissioning of nodes, both in the cloud and in-house.
Hands-on experience with the Hadoop stack: HDFS, MapReduce, HBase, Pig, Hive, Oozie. Data ingestion
into HDFS and trash configuration.
Generating reports on running nodes using various benchmarking operations.
Experienced in installing Hadoop on the Amazon cloud and using Amazon cloud products such as EC2, EMR,
S3 and EBS. Connecting S3 to EC2 via Data Pipeline from the CLI.
Overriding default Hadoop configurations for customization. Configuring Linux security services such as
SSH (Secure Shell).
Evaluation of Hadoop infrastructure requirements and design/deployment of solutions (high availability, big data).
Working with the team to provide hardware architectural guidance, plan and estimate cluster
capacity, and create roadmaps for Hadoop cluster deployment. Preparation of architecture and design documents.
Expertise in performance tuning and storage capacity management.
Documenting all production scenarios, issues and resolutions. Experience with versioning, change
control and problem management.
Title: Customer churn analysis
Data analysis serves as the background application motivating many problems. Customer churn analysis is
performed on telecom data that is large in both volume and complexity. We use a cloud-based system, TeleData, which
combines data mining, social network analysis and statistical analysis with the MapReduce framework for
knowledge discovery in telecommunications.
• Analysis of specifications provided by the client.
• Involved in requirement analysis and understanding the functionalities.
• Interaction with the team and business users.
Education and Certification
HSC – SSA Junior College of Science, Solapur (2008)
BE Electronics and Telecommunication – BMIT, Solapur (2012)
CDAC in Embedded Systems and VLSI Design
Big Data Administrator – Big Data University
CloudU certification from Rackspace.
Hadoop Fundamentals and Foundation – Big Data University.
Preparing for Cloudera certification.
Birth Date : 13 October 1990
Nationality : Indian
Marital Status : looking
Languages Known : English, Marathi, Hindi and Arabic
Passport : No
Present Address : B-09, Rohan Enclave, Opp. CME, Old Pune–Mumbai Highway, Dapodi, Pune
Exploring the logic behind Android game mechanics, playing cricket