Email: hari.duche@gmail.com Mobile: +91.9890653659
HARI ARJUN DUCHE
HIGHLIGHTS
 Over 5 years of experience in database internals and implementation.
 Over 5 years of experience in data warehousing, business intelligence, and data engines.
 Specialization in database internals and their implementation in C and C++.
 Strong experience with the internals of RDBMSs such as Netezza and PostgreSQL.
 Four patents published in my name.
EMPLOYMENT HISTORY (TOTAL 12 YEARS’ EXPERIENCE)
 Currently working as a TECHNICAL SPECIALIST with Persistent Systems Limited,
Pune, India, from Sept 2014 till date.
 Worked with IBM India Software Labs (ISL) as a SYSTEM SOFTWARE ARCHITECT from
June 2008 till Aug 2014 (6.2 years).
 Worked with Persistent Systems from January 2004 to 26 June 2008 (4.6 years).
TECHNICAL SKILLS:
Languages C, C++, Python
RDBMS Netezza Performance Server, PostgreSQL, Oracle, SQL Server 2008, DB2
Operating Systems UNIX/Linux, Windows XP
Tools GDB, RTC, IBM Cognos Report Studio, IBM Cognos Framework Manager
WORK DETAILS:
Organization: Persistent Systems Limited
Project Title : Netezza
Scope of the Project : The Netezza Data Engine (NDE) is a single-focus computer system
specifically designed to support very large decision support
databases. It combines massively parallel processing and a
hierarchical model to produce a highly scalable yet straightforward,
fault-tolerant design.
The NPS appliance manages extremely fast data loads and delivers
10 to 50 times the performance at less than half the cost of
comparable data warehouse platforms.
Duration : 17 months
Team Size : 70
Role/Responsibilities : I am responsible for the design and development of new product
features/enhancements, in addition to handling critical customer
escalations. I have developed the following features:
1. “CASE EXPR” improvement
2. Delete row performance improvement
3. Oracle to Netezza data migration tool
4. Adding actual row count information to plan file
5. Snippet result cache instrumentation project
6. Supporting MAX number of tables creation
I am a technical specialist.
Software Tools : C, C++, GDB
Organization: IBM India Software Lab (Cognos project), Pune
Project Title : IBM – StoredIQ
Scope of the Project : The biggest source of unstructured growth is coming from data inside
the enterprise. Behind every corporate firewall lie petabytes of
unstructured and unmanaged business-critical data. IBM StoredIQ
provides the first solution to identify, analyze and act on unstructured
data in-place, without moving your data to a repository or specialty
application. It’s a different approach to solving Big Data problems.
We call it Active Information Management.
The difference is a powerful, open platform that dynamically indexes
and analyzes data in-place. It’s a practical approach that dramatically
improves the speed and reliability of information management.
With StoredIQ, we can take Big Data head on to:
1. Gain insight and control over corporate data environments
2. Lower legal discovery cost and risk
3. Apply policies and govern data according to regulatory and
corporate mandates
4. Spot patterns and trends to optimize storage resources
Active Information Management turns petabytes of raw data into
smaller, structured ‘information sets’ that improve every business
process in your organization. It’s an approach that allows you to
reduce the time, cost and resources required to manage and use
unstructured data by more than 95%!
Duration : 6 months
Team Size : 10
Role/Responsibilities : I was the senior developer on the StoredIQ Chennai team, responsible
for envisioning and implementing new features in the product.
Below are some key features I implemented in StoredIQ.
1. Customizable report framework
2. Ability to export all available attributes of an Infoset to CSV.
3. Serving requested aggregates on the fly. This feature is generic
enough that we can serve any type of aggregation request on the fly,
e.g.:
1. Duplicates information
2. Term-hit information
Software Tools : Python, PDB
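The on-the-fly aggregation feature above can be sketched as a registry of aggregation handlers served on request; the names (`register`, `serve`, the example aggregations) are hypothetical illustrations, not StoredIQ's actual API:

```python
# Illustrative sketch of a generic on-the-fly aggregation service:
# handlers are registered by name, so any registered aggregation can be
# served on request without precomputation. Names are hypothetical.

AGGREGATORS = {}

def register(name):
    """Decorator that records an aggregation handler under a name."""
    def wrap(fn):
        AGGREGATORS[name] = fn
        return fn
    return wrap

@register("duplicates")
def duplicates(items):
    """Return the sorted set of items that occur more than once."""
    seen, dups = set(), set()
    for it in items:
        (dups if it in seen else seen).add(it)
    return sorted(dups)

@register("term_hits")
def term_hits(items, term="report"):
    """Count items containing the given term."""
    return sum(1 for it in items if term in it)

def serve(name, items, **kw):
    """Dispatch an aggregation request to its registered handler."""
    return AGGREGATORS[name](items, **kw)
```

With this shape, adding a new aggregation type is just registering one more handler; the serving path never changes.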
Project Title : IBM Cognos – PowerPlay
Scope of the Project : IBM Cognos PowerPlay lets you identify and analyse trends in
business and financial performance for better business decisions.
IBM Cognos PowerPlay lets you analyse large volumes of
dimensionally modelled data with sub-second response times using
either a Windows client or Web browser. It allows viewing data from
any angle and in any combination to identify and analyse the driving
factors behind your business results.
It is a Web-based business intelligence solution with integrated
reporting and data exploration features. It is used to create and view
reports that are based on PowerCube data sources. IBM Cognos
PowerPlay Studio lets you view, explore, format, and distribute
reports.
Duration : 43 months
Team Size : 4
Role/Responsibilities : The ISL PowerPlay team has worldwide responsibility for this product. I was
the senior developer on this team, responsible for envisioning and
implementing new features in the product. I was also responsible for
feature enhancements, bug fixing, and helping customers and the 3LS support
team resolve customer issues. Below are some key features I implemented in
PowerPlay.
1. Implemented the “Export to Excel 2007” feature in PowerPlay.
2. Implemented opening a PowerPlay report in “Business Insight
Advanced”.
3. Provided IPv6 support for PowerPlay.
4. Performance improvements in PowerPlay.
5. “Drill through to a macro” feature support.
6. Implemented the urgent enhancement “Add ability to not pass default
MEASURE to target report in PowerPlay”, which saved the customer
2,000 hours of work per month.
Software Tools : C, C++
Project Title : Cognos – Universal Data Access (UDA)
Scope of the Project : I worked on a component called UDA (Universal Data Access, a
Cognos-aware component). UDA provides a uniform method for accessing
heterogeneous database management systems. It also provides a call-level
interface for query execution and metadata access. A query in Cognos-SQL
form, generated by Cognos products, is consumed by UDA. UDA acquires
knowledge of all the capabilities exposed by the underlying database
management systems. Considering these capabilities, UDA decomposes the
Cognos-SQL with the goal of pushing as much of the computational load as
possible to the database server. The rest of the processing (which the
underlying DBMS is incapable of) takes place on the UDA side; we call
this local processing.
Duration : 27 months
Team Size : 4
Role/Responsibilities : 1. Studied the different features provided by databases such as Netezza
and Oracle, and integrated those features in UDA so that UDA can
retrieve data efficiently.
2. Supported the Informix database in UDA on Linux and HP Itanium
platforms.
3. Supported the PostgreSQL database in UDA.
4. Supported the DataDirect driver in UDA.
5. Certified new releases of DB2, Oracle, MySQL and Netezza.
6. Fixed many core UDA bugs.
7. Used IBM Cognos Report Studio, Framework Manager (metadata
modeling tool) and Metric Studio. Also completed three customer POCs
using these tools.
Software Tools : C, C++
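The capability-aware decomposition described above (push as much of the query as possible down to the DBMS, run the remainder locally) can be sketched as follows; the operation names and the `SUPPORTED` set are illustrative assumptions, not UDA's actual internals:

```python
# Minimal sketch of capability-aware query decomposition: operations the
# backend advertises as supported are pushed down to the DBMS; everything
# from the first unsupported operation onward runs locally, since later
# operations depend on its output. Names here are illustrative.

SUPPORTED = {"filter", "project", "aggregate"}  # assumed backend capabilities

def decompose(ops):
    """Split an ordered pipeline of operations into a pushdown prefix
    (executed by the database server) and a local suffix (executed at
    the access layer)."""
    for i, op in enumerate(ops):
        if op not in SUPPORTED:
            return ops[:i], ops[i:]
    return ops, []

pushed, local = decompose(["filter", "project", "median", "project"])
# pushed == ["filter", "project"]; local == ["median", "project"]
```

The design goal is the same as described above: the more of the pipeline the backend accepts, the less data crosses the wire for local processing.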
Organization: Persistent Systems Private Limited
Project Title : Netezza
Scope of the Project : The Netezza Data Engine (NDE) is a single-focus computer system
specifically designed to support very large decision support
databases. It combines massively parallel processing and a
hierarchical model to produce a highly scalable yet straightforward,
fault-tolerant design.
The NPS appliance manages extremely fast data loads and delivers
10 to 50 times the performance at less than half the cost of
comparable data warehouse platforms.
Duration : 36 months
Team Size : 70
Role/Responsibilities : Extensively involved in bug fixing of Netezza database internals.
Following are the key areas I worked in:
1. Zonemap: a new indexing technique developed at Netezza
that improves performance dramatically
2. Designed and developed modules for the date-type conversion
mechanism
3. Support for internationalization
4. SQL query parsing
5. Code generation for SQL queries
6. Transformation of the Postgres-generated query execution plan into
DBOS (a module) aware structures
I also worked on many performance-related issues and am familiar with
all the features of the Netezza database.
I was a Module Leader of the PostgreSQL team, leading a team of three
people.
Software Tools : C, C++, GDB
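The zone-map idea mentioned above can be illustrated with a short sketch: keep min/max metadata per storage block so a scan can skip blocks that cannot contain matching rows. The structure and names here are illustrative, not Netezza's implementation:

```python
# Illustrative zone-map sketch: per-block (min, max) metadata lets a
# point or range scan skip blocks whose value range cannot match,
# turning a full scan into a scan of a few candidate blocks.

def build_zone_map(blocks):
    """blocks: list of lists of comparable values, one list per block."""
    return [(min(b), max(b)) for b in blocks]

def blocks_to_scan(zone_map, value):
    """Indices of blocks whose [min, max] range may contain value."""
    return [i for i, (lo, hi) in enumerate(zone_map) if lo <= value <= hi]

blocks = [[1, 5, 9], [10, 14, 19], [20, 25, 29]]
zm = build_zone_map(blocks)
assert blocks_to_scan(zm, 14) == [1]  # only the middle block needs scanning
```

The technique pays off most when data is clustered (e.g. loaded in date order), so block ranges rarely overlap and most blocks are skipped.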
Organization: Persistent Systems Private Limited
Project Title : Solidcore
Scope of the Project : SolidOS:
SolidOS is a product of Solidcore Systems Inc.
(http://www.solidcore.com). It is a server integrity and security solution
that provides two important benefits:
1. No unauthorized code can run on the system
2. Data is changed only when you want it to
The threats to the integrity and security of a system can come from both
external sources (e.g. a buffer overflow attack over the network) and internal
sources (a user of the system attempting an exploit). The Solidcore “Server
Integrity Suite”, a.k.a. SIS (of which the SolidOS product is a part), attempts
to address the above-mentioned problems.
The “code” that runs on a system could be
1. Executable code (e.g. binary executable and shared libraries)
2. Scripts (e.g. Perl)
3. Application specific (e.g. PL/SQL code running inside Oracle
server or Java code running inside a J2EE server)
SolidOS protects against the above types of code attacks.
SolidDB
SolidDB is a change control enforcement product developed by
Solidcore systems for enterprise database environment.
SolidDB top level feature list
SolidDB code protection
1. Modes of operation
a. Production mode: Any change in the supported executable
is not allowed. This mode will prevent the
create/modify/drop of any of the supported database
executable. All attempts to change the database executable
are logged.
b. Maintenance mode: In this mode, any user code except the
Solidcore administrator code can be
created/modified/dropped. All activities of changing the
supported executable are logged.
c. Disable mode: In this mode, SolidDB is not in the path at
all; it is uninstalled. No SolidDB user triggers are
present, and hence any code can be
created/modified/dropped. No logging happens in this
mode.
2. Creating, modifying or dropping the supported database executables
is strictly prohibited in production mode.
3. Any database executable other than those owned by the SolidDB
administrator user can be modified in maintenance mode.
4. Traces generated for all activities.
SolidDB data protection:
1. Modes of operation
a. Production mode: Any change in the protected tables is not
allowed. This mode will prevent the
create/modify/drop/insert/update/delete of any of the
protected database tables. All attempts to change the
protected tables are logged.
b. Maintenance mode: This mode will prevent the
create/modify/drop of any of the protected database tables.
In this mode, any user tables except the Solidcore
administrator tables allow DML operations. All activities of
performing DML operations on protected tables are logged.
c. Disable mode: In this mode, SolidDB is not in the path at
all; it is uninstalled. No SolidDB user triggers are
present, and hence any table can be
created/modified/dropped/inserted/updated/deleted. No
logging happens in this mode.
Duration : 16 months
Team Size : 33
Role/Responsibilities : • Involved in design, development, testing and bug fixing of SolidDB
product.
Software Tools : C, C++, Oracle, PL/SQL
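The three operating modes described above follow one enforcement rule per mode, which can be sketched as a single change-control gate; this is a simplified illustration (names are hypothetical, not Solidcore's actual API), and it collapses the code/data distinction into one `is_admin_object` flag:

```python
# Simplified sketch of SolidDB-style mode-based change control:
# production blocks and logs every change to protected objects;
# maintenance allows changes except to administrator-owned objects,
# logging each decision; disable enforces and logs nothing.

log = []  # audit trail of (mode, actor, decision) tuples

def allow_change(mode, actor, is_admin_object):
    """Return True if the change is permitted under the given mode."""
    if mode == "disable":
        return True                      # no enforcement, no logging
    if mode == "production":
        log.append((mode, actor, "blocked"))
        return False                     # all changes blocked and logged
    if mode == "maintenance":
        allowed = not is_admin_object    # admin-owned objects stay frozen
        log.append((mode, actor, "allowed" if allowed else "blocked"))
        return allowed
    raise ValueError(f"unknown mode: {mode}")
```

Keeping the decision and the audit record in one gate mirrors the requirement above that every attempted change in production and maintenance modes is logged, while disable mode leaves no trace.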
COLLABORATIONS:
 Coordinated and integrated Informix and Red Brick database features in IBM Cognos UDA
(Universal Data Access layer)
 Created Classic and Advanced reports on the IM tool using IBM Cognos.
 Created “ISL Level Attrition Analysis” tool using IBM Cognos Framework Manager, Data Manager
ETL tool and Report Studio.
CUSTOMER ENGAGEMENTS:
 I have done POCs for Reliance Health Insurance, TSS and CRIS using IBM Cognos.
 Created Classic and Advanced reports on the IM tool using IBM Cognos.
ACHIEVEMENTS:
 I have four patents published in my name.
 Received the IBM “Eminence and Excellence” award with a $500 cash prize.
 Received an award from IBM Cognos VP Eric Yau for excellent work and all-round performance in
UDA (Universal Data Access layer).
 Received an award from IBM Country Manager Bhushan Kelkar for remote mentoring work done as
part of the IBM “Be a Start Be a Guide” program.
 Got two “You Made A Difference” awards from Persistent Systems Ltd.
CERTIFICATIONS:
 I am an IBM certified “IBM Cognos FM Metadata Model” Developer.
 I have cleared Persistent exams for JavaScript, HTML5 and AngularJS technologies.
PERSONAL DETAILS
 Date of Birth: 28 Sept 1980
 Contact Number: +91.9890653659
 Residence Address: Aranyeshwar Nagar, 47/26, Pune-9 (Parvati), Maharashtra.