Percona Live Santa Clara 2014
Be the hero of the day with Data recovery for InnoDB
Marco Tusa – Aleksandr Kuzminsky
April 2014
Who
• Marco “The Grinch”
• Manager Rapid Response
• Former Pythian MySQL cluster technical leader
• Former MySQL AB PS (EMEA)
• Love programming
• History of religions
• Ski, snowboard, scuba diving, mountain trekking
What we will cover
• Recovery toolkit introduction
• How to extract data from an .ibd data file
• Attaching .ibd files after ibdata corruption
• Recovering deleted records
• Recovering dropped tables
• Recovering from a bad update
What is the Percona Data Recovery Tool for InnoDB?
• A set of open source tools
• They work directly on the data files
• Goal: recover lost data when no backup is available
• Wrappers (you can help)
What files?
• Server-wide files
– <Table>.frm
• InnoDB files
– ibdata1
• InnoDB dictionary
• UNDO segment
• All tables, if innodb_file_per_table=OFF
– <Table>.ibd
– Raw partitions (the tools can read them directly)
InnoDB file formats
• Antelope
– REDUNDANT (4.x versions)
– COMPACT (5.x)
• Barracuda
– still supports REDUNDANT and COMPACT
– new row formats in Barracuda:
– COMPRESSED
What is an InnoDB tablespace?
A tablespace consists of pages.
• InnoDB page
– 16KB by default
– page_id is the file offset in 16KB chunks
– Records are never fragmented across pages
– Types:
• FIL_PAGE_INDEX
• FIL_PAGE_TYPE_BLOB
• FIL_PAGE_UNDO_LOG
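The page math above can be sketched in a few lines of Python. This is illustrative only: the FIL header offsets and page-type codes are taken from the InnoDB source headers, not from these slides.

```python
import struct

# FIL header layout (assumption: values from the InnoDB source;
# 16KB default page size, as on the slide)
PAGE_SIZE = 16 * 1024
FIL_PAGE_OFFSET = 4    # 4-byte page number, big-endian
FIL_PAGE_TYPE = 24     # 2-byte page type, big-endian
PAGE_TYPES = {
    17855: "FIL_PAGE_INDEX",
    10: "FIL_PAGE_TYPE_BLOB",
    2: "FIL_PAGE_UNDO_LOG",
}

def page_at(data: bytes, page_id: int) -> bytes:
    """page_id is simply the file offset in 16KB chunks."""
    return data[page_id * PAGE_SIZE:(page_id + 1) * PAGE_SIZE]

def page_info(page: bytes):
    """Return (page_number, type_name) for one 16KB page image."""
    page_no = struct.unpack_from(">I", page, FIL_PAGE_OFFSET)[0]
    ptype = struct.unpack_from(">H", page, FIL_PAGE_TYPE)[0]
    return page_no, PAGE_TYPES.get(ptype, "UNKNOWN(%d)" % ptype)
```

This is the same arithmetic the recovery tools use to walk a tablespace file without any help from the server.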
InnoDB index (B+ Tree)
Requirements and minimal skills to use the tool
• You need to know how to compile (make)
• MySQL – you know what it is, right?
• How to import data from a tab-separated file
Show the tool in action - Process
The process:
1. Extract data from ibdataX
2. Read the SYS_* tables
3. Generate the table filter files
4. Extract data from the .ibd tablespaces
5. Validate the data
6. Import the data back
7. Final cleanup – production ready
Show the tool in action - page_parser
• Extracts pages from InnoDB files
– (with innodb_file_per_table=0 it also extracts the real data)
– page_parser -f ibdata1 (or a tablespace file like employees.ibd)
Extract data from ibdata
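What page_parser does can be sketched as: walk the file in 16KB steps and bucket INDEX pages by their index id, mirroring the FIL_PAGE_INDEX/<high>-<low> output directories. The PAGE_INDEX_ID byte offset (66) is an assumption taken from the InnoDB page layout, not something the slides state.

```python
import struct

PAGE_SIZE = 16 * 1024
FIL_PAGE_TYPE = 24      # 2-byte page type in the FIL header
FIL_PAGE_INDEX = 17855
PAGE_INDEX_ID = 66      # 8-byte index id: FIL header (38) + 28 into the page header

def bucket_index_pages(data: bytes):
    """Group page numbers by index id, like page_parser's
    FIL_PAGE_INDEX/<high>-<low> output directories."""
    buckets = {}
    for page_no in range(len(data) // PAGE_SIZE):
        page = data[page_no * PAGE_SIZE:(page_no + 1) * PAGE_SIZE]
        if struct.unpack_from(">H", page, FIL_PAGE_TYPE)[0] != FIL_PAGE_INDEX:
            continue  # skip BLOB, UNDO and other non-index pages
        index_id = struct.unpack_from(">Q", page, PAGE_INDEX_ID)[0]
        key = "FIL_PAGE_INDEX/%d-%d" % (index_id >> 32, index_id & 0xFFFFFFFF)
        buckets.setdefault(key, []).append(page_no)
    return buckets
```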
Show the tool in action – constraints_parser
• Extracts data from InnoDB pages
• e.g. SYS_TABLES/SYS_INDEXES/SYS_COLUMNS
– bin/constraints_parser.SYS_TABLES -4Uf FIL_PAGE_INDEX/0-1
– bin/constraints_parser.SYS_INDEXES -4Uf FIL_PAGE_INDEX/0-3
Extract data from ibdata
Show the tool in action - constraints_parser
Output:
SYS_TABLES "employees/salaries" 811
…
SYS_INDEXES 811 1889 "PRIMARY"
SYS_INDEXES 811 1890 "emp_no"
(811 = table ID; 1889 and 1890 = index IDs)
Read SYS_TABLES/SYS_INDEXES
Show the tool in action - sys_parser
Why: lost .frm file.
Two possible ways:
• Easy: copy it from a slave/backup
• Less easy:
– Run sys_parser on a newly created instance (the info is not directly accessible – it requires loading the dictionary tables)
Lost FRM and IBD (1)
Show the tool in action - sys_parser
Output:
./sys_parser -h192.168.0.35 -u stress -p tool -d <database>
employees/salaries
CREATE TABLE `salariesR`(
`emp_no` INT NOT NULL,
`salary` INT NOT NULL,
`from_date` DATE NOT NULL,
`to_date` DATE NOT NULL,
PRIMARY KEY (`emp_no`, `from_date`)) ENGINE=InnoDB;
Lost FRM and IBD (2)
Show the tool in action - ibdconnect
Use cases:
• Accidental removal of the ibdata file
• ibdata tablespace corruption
• Only the file-per-table tablespace is available (e.g. employees.ibd)
Attach Table from another source (1)
Show the tool in action - ibdconnect
What to do?
1. Start a new, clean MySQL instance
2. Create the empty structure (same table definitions)
3. Copy over the tablespaces
4. Run ibdconnect
5. Run innochecksum_changer
Attach Table from another source (2)
./ibdconnect -o ibdata1 -f salaries.ibd -d employees -t salaries
salaries.ibd belongs to space #15
Initializing table definitions...
Updating employees/salaries (table_id 797)
SYS_TABLES is updated successfully
Initializing table definitions...
Processing table: SYS_TABLES
…
Processing table: SYS_INDEXES
Setting SPACE=15 in SYS_INDEXES for TABLE_ID = 797
Show the tool in action – fix filters
Table filters are used to:
• Identify the data inside the .ibd
• Filter out the damaged records
They are bound to the table definition:
• You must recompile for each table definition
• Generated with the create_defs.pl tool
Generate Table filters (1)
Show the tool in action - page_parser filters
• Generated with the create_defs.pl tool:
create_defs.pl --db=$schema --table=$table > $myPath/include/table_defs.${schema}.$table.defrecovery
• Create a symbolic link to include/table_defs.h
• Compile again
Generate Table filters (2)
Show the tool in action - constraints_parser
The data is extracted by the tool, specifying the tablespace and, if present, the BLOB directory:
./constraints_parser -5Uf FIL_PAGE_INDEX/0-${INDEXID} -b FIL_PAGE_TYPE_BLOB/ > $DESTDIR/${SCHEMA}_${TABLE}.csv
FIL_PAGE_INDEX/0-${INDEXID} is the directory for the PK index ID
FIL_PAGE_TYPE_BLOB is the directory containing the BLOB pages
Extract data from Table space (1)
Show the tool in action - constraints_parser
Example of the data:
-- Page id: 4, Format: COMPACT, Records list: Valid, Expected records: (164 164)
00000000150B 88000002260084 employees 10001 "1953-09-02" "G" "eorgiF" "(null)" "12722-11-12"
00000000150B 88000002260091 employees 10002 "1964-06-02" "B" "ezalelS" "(null)" "14006-11-05"
00000000150B 8800000226009E employees 10003 "1959-12-03" "P" "artoB" "(null)" "14003-03-15"
00000000150B 880000022600AB employees 10004 "1954-05-01" "C" "hirstianK" "(null)" "12598-03-09"
00000000150B 880000022600B8 employees 10005 "1955-01-21" "K" "yoichiM" "(null)" "13876-11-14"
<snip>
00000000150B 880000022608EE employees 10164 "1956-01-19" "J" "agodaB" "(null)" "12474-11-14"
-- Page id: 4, Found records: 164, Lost records: NO, Leaf page: YES
(Note the shifted name columns and the impossible dates – this is why the extracted data must be validated.)
Validate data
Show the tool in action - LOAD DATA INFILE
How to import the data back?
As easy as:
LOAD DATA INFILE 'PLMC_employees/employees' REPLACE INTO TABLE `employees` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'employees\t' (`emp_no`, `birth_date`, `first_name`, `last_name`, `gender`, `hire_date`);
Done
Import data back
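The import statement can be generated mechanically from the dump file, table name, and column list. A small Python sketch of that (the build_load_data helper is mine, not part of the toolkit):

```python
def build_load_data(dump_file: str, table: str, columns):
    """Build the LOAD DATA INFILE statement used to re-import a
    constraints_parser dump: tab-separated fields, each line prefixed
    with the table name and a tab."""
    cols = ", ".join("`%s`" % c for c in columns)
    return (
        "LOAD DATA INFILE '%s' REPLACE INTO TABLE `%s` "
        "FIELDS TERMINATED BY '\\t' OPTIONALLY ENCLOSED BY '\"' "
        "LINES STARTING BY '%s\\t' (%s);" % (dump_file, table, table, cols)
    )
```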
How to recover deleted records
Identify the records (just for this exercise):
select count(emp_no) from employeesR where hire_date > '1999-08-24';
+---------------+
| count(emp_no) |
+---------------+
| 279 |
+---------------+
1 row in set (0.23 sec)
And bravely delete them:
delete from employeesR where hire_date > '1999-08-24';
Query OK, 279 rows affected (0.55 sec)
Delete records
How to recover deleted records
To recover deleted records we must use the -D flag:
constraints_parser -5Df /FIL_PAGE_INDEX/0-1975 -b /FIL_PAGE_TYPE_BLOB/ > employees_employeesDeleted.csv
cat employees_employeesDeleted.csv | grep -i -v -e "-- Page id" | wc -l
55680  Too many, because the records are still unfiltered
Recover deleted records
How to recover deleted records
name: "employeesR",
{
    { /* int(11) */
        name: "emp_no",
        type: FT_INT,
        fixed_length: 4,
        has_limits: TRUE,
        limits: {
            can_be_null: FALSE,
            int_min_val: 10001,
            int_max_val: 499999
        },
Use filters to clean up results
name: "first_name",
type: FT_CHAR,
min_length: 0,
max_length: 42,
has_limits: TRUE,
limits: {
    can_be_null: FALSE,
    char_min_len: 3,
    char_max_len: 42,
    char_ascii_only: TRUE
},
can_be_null: FALSE
},
name: "last_name",
type: FT_CHAR,
min_length: 0,
max_length: 48,
has_limits: TRUE,
limits: {
    can_be_null: FALSE,
    char_min_len: 3,
    char_max_len: 48,
    char_ascii_only: TRUE
},
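At run time the compiled filters boil down to per-column limit checks. A Python sketch of the same checks as the table_defs.h fragments above (the row tuple format is hypothetical):

```python
def passes_filters(row):
    """Apply the limits from the table_defs.h fragments:
    emp_no in [10001, 499999]; first_name/last_name 3..42 / 3..48
    characters of printable ASCII only."""
    emp_no, first_name, last_name = row
    if not (10001 <= emp_no <= 499999):
        return False
    for name, max_len in ((first_name, 42), (last_name, 48)):
        if not (3 <= len(name) <= max_len):
            return False
        if not all(32 <= ord(c) < 127 for c in name):  # ascii_only
            return False
    return True
```

Tight limits like these are what shrink the 55680 raw candidate records down to the 279 real ones.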
How to recover deleted records
Now let us recompile and run the extraction again:
cat employees_employeesDeleted.csv | grep -i -v -e "-- Page id" | wc -l
279 <------ Bingo!
Check if it fits and reload
How to recover dropped tables
• The method differs depending on innodb_file_per_table=[0|1]
– Act fast, because the files and pages can be overwritten quickly
• With innodb_file_per_table=0 the pages are marked free for reuse
• With innodb_file_per_table=1 the file is removed, and we need to scan the device
How to recover dropped tables
What do we need then?
• The table definition
• The PK index ID
– Parse the dictionary with the -D flag
• constraints_parser.SYS_TABLES -4D
• Extract the InnoDB pages
How to recover dropped tables
Method when not using innodb_file_per_table:
1. Extract the ibdataX pages as usual
2. Run constraints_parser
constraints_parser -5Uf ./FIL_PAGE_INDEX/0-1975 -b ./FIL_PAGE_TYPE_BLOB/ > employees_employeesDroped.csv
cat employees_employeesDroped.csv | grep -i -v -e "-- Page id" | wc -l
300024 <---- done
Not using file per table
How to recover dropped tables
innodb_file_per_table=1 method:
What more do we need?
• To know which device contained the dropped table
Using file per table
How to recover dropped tables
Identify the PK ID from the dictionary:
cat SYS_TABLE.csv | grep employeesR
SYS_TABLES "employees/employeesR" 855
cat SYS_INDEX.csv | grep 855
SYS_INDEXES 855 1979 "PRIMARY"
Using file per table
How to recover dropped tables
This time we must run page_parser against the device, not the file, using the -T option:
-T -- retrieves only pages with index id = N:M (N - high word, M - low word of the id)
page_parser -t 100G -T 0:1979 -f /dev/mapper/EXT_VG-extlv
Parsing in this case takes longer.
Run the page extraction
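The -T argument is simply the 64-bit index id split into two 32-bit words. A quick sketch of the mapping (the helper names are mine):

```python
def index_id_to_T(index_id: int) -> str:
    """Format a 64-bit index id as page_parser's -T N:M argument
    (N = high 32-bit word, M = low 32-bit word)."""
    return "%d:%d" % (index_id >> 32, index_id & 0xFFFFFFFF)

def T_to_index_id(arg: str) -> int:
    """Reverse mapping: '0:1979' -> 1979."""
    high, low = (int(x) for x in arg.split(":"))
    return (high << 32) | low
```

So index id 1979 from the dictionary becomes `-T 0:1979` on the command line.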
How to recover wrong updates
Recovering with the tool is only possible when the new data is larger than the original and does not fit in the original page; otherwise the old record is overwritten in place, and the only way is to parse the UNDO segment.
Tools: s_indexes and s_tables recover the dictionary from the UNDO segment.
How to recover wrong updates
Other method:
It is possible to use the binary log for that, when it is in ROW format.
How to recover wrong updates
You can use the binary log to recover your data if:
• binlog_format = ROW
• binlog_row_image = FULL (from 5.6 you can change it)
Prerequisite
How to recover wrong updates
Assume a table like:
+--------+------------+------------+-----------+--------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date |
+--------+------------+------------+-----------+--------+------------+
| 10001 | 1953-09-02 | Georgi | Facello | M | 1986-06-26 |
| 10002 | 1964-06-02 | Bezalel | Simmel | F | 1985-11-21 |
| 10003 | 1959-12-03 | Parto | Bamford | M | 1986-08-28 |
+--------+------------+------------+-----------+--------+------------+
And a wrong action like:
update employeesR set last_name="WRONG-NAME" where emp_no < 10010;
Scenario (1)
How to recover wrong updates
You will have to recover something like:
+--------+------------+------------+------------+--------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date |
+--------+------------+------------+------------+--------+------------+
| 10001 | 1953-09-02 | Georgi | WRONG-NAME | M | 1986-06-26 |
| 10002 | 1964-06-02 | Bezalel | WRONG-NAME | F | 1985-11-21 |
| 10003 | 1959-12-03 | Parto | WRONG-NAME | M | 1986-08-28 |
…
Scenario (2)
How to recover wrong updates
With a simple command like:
mysqlbinlog -vvv logs/binlog.000034 --start-datetime="2014-03-19 11:00:07" | grep -e "@1" -e "@4" | awk -F '/*' '{print $1}' | awk '{print $2}'
@1=10001
@4='Facello'
@1=10001
@4='WRONG-NAME'
@1=10002
@4='Simmel'
@1=10002
@4='WRONG-NAME'
…
Recover from binary log (2)
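The grep output above alternates the before image (WHERE clause, old value) and the after image (SET clause, new value) of each updated row. A sketch that pairs them back up to recover the old values (the helper name and line-pairing assumption are mine; mysqlbinlog -vvv emits the WHERE image before the SET image for every UPDATE row):

```python
def recover_old_values(lines):
    """From alternating @1=pk / @4='last_name' lines (before image,
    then after image, per row), return {pk: old_last_name}."""
    rows = []
    it = iter(lines)
    for pk_line, name_line in zip(it, it):  # pair @1 with its @4
        pk = int(pk_line.split("=", 1)[1])
        name = name_line.split("=", 1)[1].strip("'")
        rows.append((pk, name))
    old = {}
    # even entries = before images, odd entries = after images
    for (pk_before, name_before), (pk_after, _) in zip(rows[0::2], rows[1::2]):
        assert pk_before == pk_after
        old[pk_before] = name_before
    return old
```

From the recovered map it is then a matter of generating the compensating UPDATE statements.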
Reference and repositories
Main Percona branch:
bzr branch lp:percona-data-recovery-tool-for-innodb
Marco branch:
https://github.com/Tusamarco/drtools
Q&A
Contacts
To contact Marco
marco.tusa@percona.com
marcotusa@tusacentral.net
To follow me
http://www.tusacentral.net/
https://www.facebook.com/marco.tusa.94
@marcotusa
http://it.linkedin.com/in/marcotusa/
To contact Aleksandr
aleksandr.kuzminsky@percona.com
To follow him
http://www.mysqlperformanceblog.com/author/akuzminsky/
https://www.linkedin.com/in/akuzminsky

More Related Content

What's hot

MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016
MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016
MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016Dave Stokes
 
Managing Exadata in the Real World
Managing Exadata in the Real WorldManaging Exadata in the Real World
Managing Exadata in the Real WorldEnkitec
 
Indexing in Exadata
Indexing in ExadataIndexing in Exadata
Indexing in ExadataEnkitec
 
Embracing Database Diversity: The New Oracle / MySQL DBA - UKOUG
Embracing Database Diversity: The New Oracle / MySQL DBA -   UKOUGEmbracing Database Diversity: The New Oracle / MySQL DBA -   UKOUG
Embracing Database Diversity: The New Oracle / MySQL DBA - UKOUGKeith Hollman
 
MySQL Advanced Administrator 2021 - 네오클로바
MySQL Advanced Administrator 2021 - 네오클로바MySQL Advanced Administrator 2021 - 네오클로바
MySQL Advanced Administrator 2021 - 네오클로바NeoClova
 
Mysql database basic user guide
Mysql database basic user guideMysql database basic user guide
Mysql database basic user guidePoguttuezhiniVP
 
Advanced MySQL Query Optimizations
Advanced MySQL Query OptimizationsAdvanced MySQL Query Optimizations
Advanced MySQL Query OptimizationsDave Stokes
 
MySQL 8 Tips and Tricks from Symfony USA 2018, San Francisco
MySQL 8 Tips and Tricks from Symfony USA 2018, San FranciscoMySQL 8 Tips and Tricks from Symfony USA 2018, San Francisco
MySQL 8 Tips and Tricks from Symfony USA 2018, San FranciscoDave Stokes
 
MariaDB 10.5 binary install (바이너리 설치)
MariaDB 10.5 binary install (바이너리 설치)MariaDB 10.5 binary install (바이너리 설치)
MariaDB 10.5 binary install (바이너리 설치)NeoClova
 
Understanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleUnderstanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleGuatemala User Group
 
Highload Perf Tuning
Highload Perf TuningHighload Perf Tuning
Highload Perf TuningHighLoad2009
 
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...Dave Stokes
 
MySQL as a Document Store
MySQL as a Document StoreMySQL as a Document Store
MySQL as a Document StoreDave Stokes
 
MySQL database replication
MySQL database replicationMySQL database replication
MySQL database replicationPoguttuezhiniVP
 
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...Marco Tusa
 
MySQL Indexierung CeBIT 2014
MySQL Indexierung CeBIT 2014MySQL Indexierung CeBIT 2014
MySQL Indexierung CeBIT 2014FromDual GmbH
 
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sad
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sadDevelopers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sad
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sadmCloud
 
Dbvisit replicate: logical replication made easy
Dbvisit replicate: logical replication made easyDbvisit replicate: logical replication made easy
Dbvisit replicate: logical replication made easyFranck Pachot
 
Meb Backup & Recovery Performance
Meb Backup & Recovery PerformanceMeb Backup & Recovery Performance
Meb Backup & Recovery PerformanceKeith Hollman
 

What's hot (20)

MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016
MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016
MySQL Utilities -- Cool Tools For You: PHP World Nov 16 2016
 
Managing Exadata in the Real World
Managing Exadata in the Real WorldManaging Exadata in the Real World
Managing Exadata in the Real World
 
Indexing in Exadata
Indexing in ExadataIndexing in Exadata
Indexing in Exadata
 
Embracing Database Diversity: The New Oracle / MySQL DBA - UKOUG
Embracing Database Diversity: The New Oracle / MySQL DBA -   UKOUGEmbracing Database Diversity: The New Oracle / MySQL DBA -   UKOUG
Embracing Database Diversity: The New Oracle / MySQL DBA - UKOUG
 
MySQL Advanced Administrator 2021 - 네오클로바
MySQL Advanced Administrator 2021 - 네오클로바MySQL Advanced Administrator 2021 - 네오클로바
MySQL Advanced Administrator 2021 - 네오클로바
 
Mysql database basic user guide
Mysql database basic user guideMysql database basic user guide
Mysql database basic user guide
 
Advanced MySQL Query Optimizations
Advanced MySQL Query OptimizationsAdvanced MySQL Query Optimizations
Advanced MySQL Query Optimizations
 
MySQL 8 Tips and Tricks from Symfony USA 2018, San Francisco
MySQL 8 Tips and Tricks from Symfony USA 2018, San FranciscoMySQL 8 Tips and Tricks from Symfony USA 2018, San Francisco
MySQL 8 Tips and Tricks from Symfony USA 2018, San Francisco
 
MariaDB 10.5 binary install (바이너리 설치)
MariaDB 10.5 binary install (바이너리 설치)MariaDB 10.5 binary install (바이너리 설치)
MariaDB 10.5 binary install (바이너리 설치)
 
Understanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleUnderstanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle
Understanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle
 
Highload Perf Tuning
Highload Perf TuningHighload Perf Tuning
Highload Perf Tuning
 
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ...
 
MySQL as a Document Store
MySQL as a Document StoreMySQL as a Document Store
MySQL as a Document Store
 
MySQL database replication
MySQL database replicationMySQL database replication
MySQL database replication
 
Postgresql
PostgresqlPostgresql
Postgresql
 
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
 
MySQL Indexierung CeBIT 2014
MySQL Indexierung CeBIT 2014MySQL Indexierung CeBIT 2014
MySQL Indexierung CeBIT 2014
 
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sad
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sadDevelopers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sad
Developers’ mDay 2021: Bogdan Kecman, Oracle – MySQL nekad i sad
 
Dbvisit replicate: logical replication made easy
Dbvisit replicate: logical replication made easyDbvisit replicate: logical replication made easy
Dbvisit replicate: logical replication made easy
 
Meb Backup & Recovery Performance
Meb Backup & Recovery PerformanceMeb Backup & Recovery Performance
Meb Backup & Recovery Performance
 

Similar to Plmce 14 be a_hero_16x9_final

Open sql2010 recovery-of-lost-or-corrupted-innodb-tables
Open sql2010 recovery-of-lost-or-corrupted-innodb-tablesOpen sql2010 recovery-of-lost-or-corrupted-innodb-tables
Open sql2010 recovery-of-lost-or-corrupted-innodb-tablesArvids Godjuks
 
Postgresql Database Administration Basic - Day2
Postgresql  Database Administration Basic  - Day2Postgresql  Database Administration Basic  - Day2
Postgresql Database Administration Basic - Day2PoguttuezhiniVP
 
DB2UDB_the_Basics Day2
DB2UDB_the_Basics Day2DB2UDB_the_Basics Day2
DB2UDB_the_Basics Day2Pranav Prakash
 
Optimizing InnoDB bufferpool usage
Optimizing InnoDB bufferpool usageOptimizing InnoDB bufferpool usage
Optimizing InnoDB bufferpool usageZarafa
 
PL/SQL New and Advanced Features for Extreme Performance
PL/SQL New and Advanced Features for Extreme PerformancePL/SQL New and Advanced Features for Extreme Performance
PL/SQL New and Advanced Features for Extreme PerformanceZohar Elkayam
 
Advanced PLSQL Optimizing for Better Performance
Advanced PLSQL Optimizing for Better PerformanceAdvanced PLSQL Optimizing for Better Performance
Advanced PLSQL Optimizing for Better PerformanceZohar Elkayam
 
data loading and unloading in IBM Netezza by www.etraining.guru
data loading and unloading in IBM Netezza by www.etraining.gurudata loading and unloading in IBM Netezza by www.etraining.guru
data loading and unloading in IBM Netezza by www.etraining.guruRavikumar Nandigam
 
MySQL innoDB split and merge pages
MySQL innoDB split and merge pagesMySQL innoDB split and merge pages
MySQL innoDB split and merge pagesMarco Tusa
 
MariaDB Optimizer
MariaDB OptimizerMariaDB Optimizer
MariaDB OptimizerJongJin Lee
 
ExtBase workshop
ExtBase workshop ExtBase workshop
ExtBase workshop schmutt
 
Apache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the CoversApache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the CoversScyllaDB
 
pg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLpg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLCommand Prompt., Inc
 
pg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLpg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLMark Wong
 
Advanced PL/SQL Optimizing for Better Performance 2016
Advanced PL/SQL Optimizing for Better Performance 2016Advanced PL/SQL Optimizing for Better Performance 2016
Advanced PL/SQL Optimizing for Better Performance 2016Zohar Elkayam
 
Machine Learning Game Changer for IT - Maartens Lourens
Machine Learning Game Changer for IT - Maartens LourensMachine Learning Game Changer for IT - Maartens Lourens
Machine Learning Game Changer for IT - Maartens LourensOpenCredo
 
What's new in Redis v3.2
What's new in Redis v3.2What's new in Redis v3.2
What's new in Redis v3.2Itamar Haber
 
Geek Sync I Polybase and Time Travel (Temporal Tables)
Geek Sync I Polybase and Time Travel (Temporal Tables)Geek Sync I Polybase and Time Travel (Temporal Tables)
Geek Sync I Polybase and Time Travel (Temporal Tables)IDERA Software
 

Similar to Plmce 14 be a_hero_16x9_final (20)

Undrop for InnoDB
Undrop for InnoDBUndrop for InnoDB
Undrop for InnoDB
 
Open sql2010 recovery-of-lost-or-corrupted-innodb-tables
Open sql2010 recovery-of-lost-or-corrupted-innodb-tablesOpen sql2010 recovery-of-lost-or-corrupted-innodb-tables
Open sql2010 recovery-of-lost-or-corrupted-innodb-tables
 
Postgresql Database Administration Basic - Day2
Postgresql  Database Administration Basic  - Day2Postgresql  Database Administration Basic  - Day2
Postgresql Database Administration Basic - Day2
 
DB2UDB_the_Basics Day2
DB2UDB_the_Basics Day2DB2UDB_the_Basics Day2
DB2UDB_the_Basics Day2
 
Optimizing InnoDB bufferpool usage
Optimizing InnoDB bufferpool usageOptimizing InnoDB bufferpool usage
Optimizing InnoDB bufferpool usage
 
PL/SQL New and Advanced Features for Extreme Performance
PL/SQL New and Advanced Features for Extreme PerformancePL/SQL New and Advanced Features for Extreme Performance
PL/SQL New and Advanced Features for Extreme Performance
 
Advanced PLSQL Optimizing for Better Performance
Advanced PLSQL Optimizing for Better PerformanceAdvanced PLSQL Optimizing for Better Performance
Advanced PLSQL Optimizing for Better Performance
 
data loading and unloading in IBM Netezza by www.etraining.guru
data loading and unloading in IBM Netezza by www.etraining.gurudata loading and unloading in IBM Netezza by www.etraining.guru
data loading and unloading in IBM Netezza by www.etraining.guru
 
MySQL innoDB split and merge pages
MySQL innoDB split and merge pagesMySQL innoDB split and merge pages
MySQL innoDB split and merge pages
 
MariaDB Optimizer
MariaDB OptimizerMariaDB Optimizer
MariaDB Optimizer
 
ExtBase workshop
ExtBase workshop ExtBase workshop
ExtBase workshop
 
Hive in Practice
Hive in PracticeHive in Practice
Hive in Practice
 
Lecture 9.pptx
Lecture 9.pptxLecture 9.pptx
Lecture 9.pptx
 
Apache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the CoversApache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the Covers
 
pg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLpg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQL
 
pg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQLpg_proctab: Accessing System Stats in PostgreSQL
pg_proctab: Accessing System Stats in PostgreSQL
 
Advanced PL/SQL Optimizing for Better Performance 2016
Advanced PL/SQL Optimizing for Better Performance 2016Advanced PL/SQL Optimizing for Better Performance 2016
Advanced PL/SQL Optimizing for Better Performance 2016
 
Machine Learning Game Changer for IT - Maartens Lourens
Machine Learning Game Changer for IT - Maartens LourensMachine Learning Game Changer for IT - Maartens Lourens
Machine Learning Game Changer for IT - Maartens Lourens
 
What's new in Redis v3.2
What's new in Redis v3.2What's new in Redis v3.2
What's new in Redis v3.2
 
Geek Sync I Polybase and Time Travel (Temporal Tables)
Geek Sync I Polybase and Time Travel (Temporal Tables)Geek Sync I Polybase and Time Travel (Temporal Tables)
Geek Sync I Polybase and Time Travel (Temporal Tables)
 

More from Marco Tusa

My sql on kubernetes demystified
My sql on kubernetes demystifiedMy sql on kubernetes demystified
My sql on kubernetes demystifiedMarco Tusa
 
Comparing high availability solutions with percona xtradb cluster and percona...
Comparing high availability solutions with percona xtradb cluster and percona...Comparing high availability solutions with percona xtradb cluster and percona...
Comparing high availability solutions with percona xtradb cluster and percona...Marco Tusa
 
Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...Marco Tusa
 
Best practice-high availability-solution-geo-distributed-final
Best practice-high availability-solution-geo-distributed-finalBest practice-high availability-solution-geo-distributed-final
Best practice-high availability-solution-geo-distributed-finalMarco Tusa
 
Robust ha solutions with proxysql
Robust ha solutions with proxysqlRobust ha solutions with proxysql
Robust ha solutions with proxysqlMarco Tusa
 
Fortify aws aurora_proxy_2019_pleu
Fortify aws aurora_proxy_2019_pleuFortify aws aurora_proxy_2019_pleu
Fortify aws aurora_proxy_2019_pleuMarco Tusa
 
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...Marco Tusa
 
Are we there Yet?? (The long journey of Migrating from close source to opens...
Are we there Yet?? (The long journey of Migrating from close source to opens...Are we there Yet?? (The long journey of Migrating from close source to opens...
Are we there Yet?? (The long journey of Migrating from close source to opens...Marco Tusa
 
Improve aws withproxysql
Improve aws withproxysqlImprove aws withproxysql
Improve aws withproxysqlMarco Tusa
 
Fortify aws aurora_proxy
Fortify aws aurora_proxyFortify aws aurora_proxy
Fortify aws aurora_proxyMarco Tusa
 
Mysql8 advance tuning with resource group
Mysql8 advance tuning with resource groupMysql8 advance tuning with resource group
Mysql8 advance tuning with resource groupMarco Tusa
 
Proxysql sharding
Proxysql shardingProxysql sharding
Proxysql shardingMarco Tusa
 
Geographically dispersed perconaxtra db cluster deployment
Geographically dispersed perconaxtra db cluster deploymentGeographically dispersed perconaxtra db cluster deployment
Geographically dispersed perconaxtra db cluster deploymentMarco Tusa
 
Sync rep aurora_2016
Sync rep aurora_2016Sync rep aurora_2016
Sync rep aurora_2016Marco Tusa
 
Proxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynoteProxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynoteMarco Tusa
 
Empower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instrumentsEmpower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instrumentsMarco Tusa
 
Galera explained 3
Galera explained 3Galera explained 3
Galera explained 3Marco Tusa
 
Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2Marco Tusa
 

More from Marco Tusa (18)

My sql on kubernetes demystified
My sql on kubernetes demystifiedMy sql on kubernetes demystified
My sql on kubernetes demystified
 
Comparing high availability solutions with percona xtradb cluster and percona...
Comparing high availability solutions with percona xtradb cluster and percona...Comparing high availability solutions with percona xtradb cluster and percona...
Comparing high availability solutions with percona xtradb cluster and percona...
 
Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...
 
Best practice-high availability-solution-geo-distributed-final
Best practice-high availability-solution-geo-distributed-finalBest practice-high availability-solution-geo-distributed-final
Best practice-high availability-solution-geo-distributed-final
 
Robust ha solutions with proxysql
Plmce 14 be a_hero_16x9_final

  • 1. Percona Live Santa Clara 2014 Be the hero of the day with Data recovery for InnoDB Marco Tusa – Aleksandr Kuzminsky April 2014
  • 2. Who • Marco “The Grinch” • Manager Rapid Response • Former Pythian MySQL cluster technical leader • Former MySQL AB PS (EMEA) • Love programming • History of religions • Ski; Snowboard; scuba diving; Mountain trekking 2
  • 3. What we will cover • Recovery toolkit introduction • Show how to extract data from IBD data file • Attach ibd files after IBDATA corruption • Recover deleted records • Recover drop table • Bad update 3
  • 4. What is Percona Data recovery Tool for InnoDB • Set of open source tools • Work directly on the data files • Recover lost data (no backup available) • Wrappers (you can help) 4 A
  • 5. What files? • Server wide files – <Table>.frm • InnoDB files – ibdata1 • InnoDB dictionary • UNDO segment • All tables if innodb_file_per_table=OFF – <Table>.ibd – Reads raw partition 5 A
  • 6. InnoDB file format • Antelope – REDUNDANT (versions 4.x) – COMPACT (5.X) • Barracuda – REDUNDANT, COMPACT – New time formats in BARRACUDA – COMPRESSED 6 A
  • 7. What is an InnoDB table space? A tablespace consists of pages • InnoDB page – 16k by default – page_id is the file offset in 16k chunks – Records are never fragmented – Types: • FIL_PAGE_INDEX • FIL_PAGE_TYPE_BLOB • FIL_PAGE_UNDO_LOG 7 A
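The page layout described above can be sketched in a few lines of Python: every 16k chunk starts with the FIL header, whose page number sits at bytes 4-8 and whose 2-byte type code sits at bytes 24-26. This is a minimal illustration, not part of the toolkit; the synthetic in-memory buffer stands in for a real ibdata1 or .ibd file.

```python
import struct

PAGE_SIZE = 16 * 1024
# Known InnoDB page type codes (FIL_PAGE_TYPE, bytes 24-26 of the FIL header)
FIL_PAGE_INDEX = 17855
FIL_PAGE_UNDO_LOG = 2
FIL_PAGE_TYPE_BLOB = 10
TYPE_NAMES = {FIL_PAGE_INDEX: "FIL_PAGE_INDEX",
              FIL_PAGE_UNDO_LOG: "FIL_PAGE_UNDO_LOG",
              FIL_PAGE_TYPE_BLOB: "FIL_PAGE_TYPE_BLOB"}

def scan_pages(data):
    """Yield (page_id, page_type_name) for every 16k chunk of a tablespace."""
    for offset in range(0, len(data) - PAGE_SIZE + 1, PAGE_SIZE):
        page = data[offset:offset + PAGE_SIZE]
        # FIL header: 4-byte checksum, then the 4-byte page number;
        # the 2-byte page type is at bytes 24-26 (big-endian).
        page_no = struct.unpack(">I", page[4:8])[0]
        page_type = struct.unpack(">H", page[24:26])[0]
        yield page_no, TYPE_NAMES.get(page_type, "OTHER")

# Build one synthetic 16k page marked as an index page for demonstration.
fake = bytearray(PAGE_SIZE)
fake[4:8] = struct.pack(">I", 3)              # page_id 3
fake[24:26] = struct.pack(">H", FIL_PAGE_INDEX)
print(list(scan_pages(bytes(fake))))          # -> [(3, 'FIL_PAGE_INDEX')]
```

This is essentially what page_parser does at scale: walk the file in 16k steps and sort each page into a per-type directory.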
  • 8. InnoDB index (B+ Tree) 8 A
  • 9. Requirements and minimal skill to use the tool • You need to know how to compile (make) • MySQL – you know what it is, right? • How to import data from a tab-separated file 9 M
  • 10. Show the tool in action - Process Process: Extract data from ibdataX → Read SYS_X tables → Generate table filter files → Extract data from ibd tablespaces → Validate data → Import data back → Final clean-up → Production ready 10 data extraction
  • 11. Show the tool in action - page_parser • Extract pages from InnoDB files – (In case of innodb_file_per_table=0 it also extracts real data) – page_parser -f ibdata1 (or a table space like employees.ibd) 11 Extract data from ibdata
  • 12. Show the tool in action – constraints_parser • Extract data from InnoDB pages • i.e. SYS_TABLES/SYS_INDEXES/SYS_COLUMNS – bin/constraints_parser.SYS_TABLES -4Uf FIL_PAGE_INDEX/0-1 – bin/constraints_parser.SYS_INDEXES -4Uf FIL_PAGE_INDEX/0-3 12 Extract data from ibdata
  • 13. Show the tool in action - constraints_parser Output SYS_TABLES "employees/salaries" 811  … SYS_INDEXES 811 1889 "PRIMARY"  SYS_INDEXES 811 1890 "emp_no" Table ID Index Id 13 Read SYS_Tables/Indexes
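The lookup shown on this slide — table name → table id → PRIMARY index id — can be prototyped as a simple join over the two dictionary dumps. A minimal sketch; the row values come from the slide, but the in-memory tuple representation (instead of parsing the raw tab-separated files) is an assumption for illustration.

```python
# Dictionary rows as dumped by constraints_parser (values from the slide).
sys_tables_rows = [
    ("SYS_TABLES", "employees/salaries", 811),
]
sys_indexes_rows = [
    ("SYS_INDEXES", 811, 1889, "PRIMARY"),
    ("SYS_INDEXES", 811, 1890, "emp_no"),
]

def primary_index_id(table_name):
    """Map a schema/table name to its PRIMARY index id via the dictionary dumps."""
    # SYS_TABLES gives the table id for the name...
    table_id = next(r[2] for r in sys_tables_rows if r[1] == table_name)
    # ...and SYS_INDEXES maps that table id to its PRIMARY index id.
    return next(r[2] for r in sys_indexes_rows
                if r[1] == table_id and r[3] == "PRIMARY")

print(primary_index_id("employees/salaries"))  # -> 1889
```

The resulting index id (1889 here) is the `0-<id>` directory that holds the table's clustered-index pages after page_parser runs.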
  • 14. Show the tool in action - sys_parser Why: Lost FRM file Two possible ways: • Easy: copy it from a slave/backup • Less easy: – Run sys_parser on a newly created instance (info not accessible – requires loading the dictionary tables) 14 Lost FRM and IBD (1)
  • 15. Show the tool in action - sys_parser Output: ./sys_parser -h192.168.0.35 -u stress -p tool -d <database> employees/salaries CREATE TABLE `salariesR`( `emp_no` INT NOT NULL, `salary` INT NOT NULL, `from_date` DATE NOT NULL, `to_date` DATE NOT NULL, PRIMARY KEY (`emp_no`, `from_date`)) ENGINE=InnoDB; 15 Lost FRM and IBD (2)
  • 16. Show the tool in action - ibdconnect • Accidental removal of the IBDATA file • IBDATA table space corruption • Only the file-per-table tablespace available (i.e. employees.ibd) 16 Attach Table from another source (1)
  • 17. Show the tool in action - ibdconnect What to do? 1. Start a new clean MySQL 2. Create empty structure (same table definitions) 3. Copy over the table spaces 4. Run ibdconnect 5. Run innochecksum_changer 17 Attach Table from another source (2) ./ibdconnect -o ibdata1 -f salaries.ibd -d employees -t salaries salaries.ibd belongs to space #15 Initializing table definitions... Updating employees/salaries (table_id 797) SYS_TABLES is updated successfully Initializing table definitions... Processing table: SYS_TABLES … Processing table: SYS_INDEXES Setting SPACE=15 in SYS_INDEXES for TABLE_ID = 797
  • 18. Show the tool in action – fix filters Table filters are used to: • Identify the data inside the ibd • Filter out the damaged records • Bound to the table definition • Must recompile for each table definition • Generated with the tool create_defs.pl 18 Generate Table filters (1)
  • 19. Show the tool in action - page_parser filters • Generated with the tool create_defs.pl create_defs.pl --db=$schema --table=$table > $myPath/include/table_defs.${schema}.$table.def • Create a symbolic link to include/table_defs.h • Compile again 19 Generate Table filters (1)
  • 20. Show the tool in action - constraints_parser The data is extracted by the tool specifying the table space and a possible BLOB directory. ./constraints_parser -5Uf FIL_PAGE_INDEX/0-${INDEXID} -b FIL_PAGE_TYPE_BLOB/ > $DESTDIR/${SCHEMA}_${TABLE}.csv FIL_PAGE_INDEX/0-${INDEXID} is the ID of the PK FIL_PAGE_TYPE_BLOB is the directory containing the BLOBs 21 Extract data from Table space (1)
  • 21. Show the tool in action - constraints_parser Example of the data: -- Page id: 4, Format: COMPACT, Records list: Valid, Expected records: (164 164) 00000000150B 88000002260084 employees 10001 "1953-09-02" "G" "eorgiF" "(null)" "12722-11-12" 00000000150B 88000002260091 employees 10002 "1964-06-02" "B" "ezalelS" "(null)" "14006-11-05" 00000000150B 8800000226009E employees 10003 "1959-12-03" "P" "artoB" "(null)" "14003-03-15" 00000000150B 880000022600AB employees 10004 "1954-05-01" "C" "hirstianK" "(null)" "12598-03-09" 00000000150B 880000022600B8 employees 10005 "1955-01-21" "K" "yoichiM" "(null)" "13876-11-14" <snip> 00000000150B 880000022608EE employees 10164 "1956-01-19" "J" "agodaB" "(null)" "12474-11-14" -- Page id: 4, Found records: 164, Lost records: NO, Leaf page: YES 22 Validate data
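Validation can be scripted: the page header announces how many records are expected, and the page footer reports how many were actually found and whether any were lost. A small sketch that parses those markers from the dump; the parsing is an assumption based on the sample output shown on this slide, not a toolkit feature.

```python
import re

# Shortened constraints_parser page dump, following the slide's format.
sample = """-- Page id: 4, Format: COMPACT, Records list: Valid, Expected records: (164 164)
00000000150B\t88000002260084\temployees\t10001
00000000150B\t88000002260091\temployees\t10002
-- Page id: 4, Found records: 164, Lost records: NO, Leaf page: YES"""

def check_page(text):
    """Return (expected, found, lost) parsed from a constraints_parser page dump."""
    expected = int(re.search(r"Expected records: \((\d+)", text).group(1))
    found = int(re.search(r"Found records: (\d+)", text).group(1))
    lost = re.search(r"Lost records: (\w+)", text).group(1)
    return expected, found, lost

print(check_page(sample))  # -> (164, 164, 'NO')
```

A page where expected and found disagree, or where `Lost records` is not `NO`, is a candidate for tightening the table filters and re-running the extraction.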
  • 22. Show the tool in action - LOAD DATA INFILE How to import the data back? Easy as: LOAD DATA INFILE 'PLMC_employees/employees' REPLACE INTO TABLE `employees` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'employees\t' (`emp_no`, `birth_date`, `first_name`, `last_name`, `gender`, `hire_date`); Done 23 Import data back
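If you post-process the recovered rows before importing, the file must keep the shape the LOAD DATA statement above expects: each line prefixed with the table name, fields tab-separated, strings optionally double-quoted. A sketch of a writer for that format; the function name and the sample row are hypothetical.

```python
import csv
import io

def write_recovery_tsv(table, rows, fileobj):
    """Write rows in the layout LOAD DATA ... LINES STARTING BY '<table>\\t'
    expects: table-name prefix, then tab-separated fields."""
    w = csv.writer(fileobj, delimiter="\t", quotechar='"',
                   quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    for row in rows:
        w.writerow([table, *row])

buf = io.StringIO()
write_recovery_tsv("employees",
                   [(10001, "1953-09-02", "Georgi", "Facello", "M", "1986-06-26")],
                   buf)
print(buf.getvalue())  # one line, starting with "employees" and a real tab
```

The `REPLACE` keyword in the statement makes re-imports idempotent on the primary key, which is convenient when extraction has to be re-run with tighter filters.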
  • 23. How to recover deleted records - Identify the records, just for this exercise: select count(emp_no) from employeesR where hire_date > '1999-08-24'; +---------------+ | count(emp_no) | +---------------+ | 279 | +---------------+ 1 row in set (0.23 sec) And bravely delete them: delete from employeesR where hire_date > '1999-08-24'; Query OK, 279 rows affected (0.55 sec) 24 Delete records
  • 24. How to recover deleted records - To recover deleted records we must use the -D flag: constraints_parser -5Df /FIL_PAGE_INDEX/0-1975 -b /FIL_PAGE_TYPE_BLOB/ > employees_employeesDeleted.csv cat employees_employeesDeleted.csv |grep -i -v -e "-- Page id"|wc -l 55680  Too many, because unfiltered records are included 25 Recover deleted records
  • 25. How to recover deleted records - name: "employeesR", { { /* int(11) */ name: "emp_no", type: FT_INT, fixed_length: 4, has_limits: TRUE, limits: { can_be_null: FALSE, int_min_val: 10001, int_max_val: 499999 }, 26 Use filters to clean up results name: "first_name", type: FT_CHAR, min_length: 0, max_length: 42, has_limits: TRUE, limits: { can_be_null: FALSE, char_min_len: 3, char_max_len: 42, char_ascii_only: TRUE }, can_be_null: FALSE }, name: "last_name", type: FT_CHAR, min_length: 0, max_length: 48, has_limits: TRUE, limits: { can_be_null: FALSE, char_min_len: 3, char_max_len: 48, char_ascii_only: TRUE },
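Before recompiling with new limits, the same sanity checks can be prototyped on a handful of candidate rows. A minimal sketch mirroring the filter above (int range on emp_no, ASCII-only names of 3-42 and 3-48 characters); the sample rows are hypothetical.

```python
def row_passes(emp_no, first_name, last_name):
    """Mirror the table filter: int_min_val/int_max_val on emp_no,
    char_min_len/char_max_len and char_ascii_only on the names."""
    if not (10001 <= emp_no <= 499999):
        return False
    for name, lo, hi in ((first_name, 3, 42), (last_name, 3, 48)):
        if not (lo <= len(name) <= hi and name.isascii()):
            return False
    return True

rows = [(10001, "Georgi", "Facello"),   # valid
        (7, "Georgi", "Facello"),       # emp_no below int_min_val
        (10002, "B", "Simmel")]         # first_name shorter than char_min_len
print([row_passes(*r) for r in rows])   # -> [True, False, False]
```

Tightening the limits this way is what cuts the 55680 unfiltered candidates down to the records that actually belong to the table.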
  • 26. How to recover deleted records - Now let us recompile and run the extraction again: cat employees_employeesDeleted.csv |grep -i -v -e "-- Page id"|wc -l 279 <------ Bingo! 27 Check if it fits and reload
  • 27. How to recover dropped tables - • Different methods depending on innodb_file_per_table=[0|1]. – Must act fast because files can be overwritten quickly • In the first case pages are marked free for reuse • In the second the file is removed and we need to scan the device 28
  • 28. How to recover dropped tables - What do we need then? • The table definition • The PK index – Parse the dictionary with the "D" flag • constraints_parser.SYS_TABLES -4D • Extract the InnoDB pages 29
  • 29. How to recover dropped tables - Not using innodb_file_per_table, the method is: 1. Extract the ibdataX as usual 2. Run constraints_parser constraints_parser -5Uf ./FIL_PAGE_INDEX/0-1975 -b ./FIL_PAGE_TYPE_BLOB/ > employees_employeesDroped.csv cat employees_employeesDroped.csv |grep -i -v -e "-- Page id"|wc -l 300024 <---- done 30 Not using file per table
  • 30. How to recover dropped tables - innodb_file_per_table=1 method: What more do we need? • To know which device contained the dropped table 31 Using file per table
  • 31. How to recover dropped tables - Identify the PK id from the dictionary: cat SYS_TABLE.csv|grep employeesR SYS_TABLES "employees/employeesR" 855 cat SYS_INDEX.csv|grep 855 SYS_INDEXES 855 1979 "PRIMARY" 32 Using file per table
  • 32. How to recover dropped tables - This time we must run page_parser against the device, not the file, using the -T option. -T -- retrieves only pages with index id = N:M (N - high word, M - low word of the id) page_parser -t 100G -T 0:1979 -f /dev/mapper/EXT_VG-extlv Parsing in this case takes longer. 33 Run the page extraction
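The -T argument is the 64-bit index id split into its high and low 32-bit words. A one-liner sketch of that split, using index id 1979 found in the dictionary on the previous slide:

```python
def index_id_words(index_id):
    """Split a 64-bit InnoDB index id into the high:low 32-bit words for -T."""
    return index_id >> 32, index_id & 0xFFFFFFFF

high, low = index_id_words(1979)
print(f"page_parser -t 100G -T {high}:{low} -f <device>")  # -> ... -T 0:1979 ...
```

Small index ids always have a high word of 0, which is why the slide's command reads `-T 0:1979`.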
  • 33. How to recover wrong updates - Recovering with the tool is possible only when the new data is larger than the original and does not fit in the original page; otherwise the old record is overwritten in place, and the only way is to parse the UNDO segment. Tools: s_indexes, s_tables recover the dictionary from the undo. 34
  • 34. How to recover wrong updates - Another method: it is possible to use the binary log, when it is in ROW format. 35
  • 35. How to recover wrong updates - You can use the binary log to recover your data if: • binlog_format = ROW • binlog_row_image = FULL (from 5.6 you can change it) 36 Prerequisites
  • 36. How to recover wrong updates - Assume a table like: +--------+------------+------------+-----------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+-----------+--------+------------+ | 10001 | 1953-09-02 | Georgi | Facello | M | 1986-06-26 | | 10002 | 1964-06-02 | Bezalel | Simmel | F | 1985-11-21 | | 10003 | 1959-12-03 | Parto | Bamford | M | 1986-08-28 | +--------+------------+------------+-----------+--------+------------+ And an action like: update employeesR set last_name="WRONG-NAME" where emp_no < 10010; 37 Scenario (1)
  • 37. How to recover wrong updates - You will have to recover something like: +--------+------------+------------+------------+--------+------------+ | emp_no | birth_date | first_name | last_name | gender | hire_date | +--------+------------+------------+------------+--------+------------+ | 10001 | 1953-09-02 | Georgi | WRONG-NAME | M | 1986-06-26 | | 10002 | 1964-06-02 | Bezalel | WRONG-NAME | F | 1985-11-21 | | 10003 | 1959-12-03 | Parto | WRONG-NAME | M | 1986-08-28 | … 38 Scenario (2)
  • 38. How to recover wrong updates - With a simple command like: mysqlbinlog -vvv logs/binlog.000034 --start-datetime="2014-03-19 11:00:07"|grep -e "@1" -e "@4"|awk -F '/*' '{print $1}'|awk '{print $2}' @1=10001 @4='Facello' @1=10001 @4='WRONG-NAME' @1=10002 @4='Simmel' @1=10002 @4='WRONG-NAME'.. 39 Recover from binary log (2)
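mysqlbinlog -vvv prints the before-image of each updated row and then its after-image, so the cleaned output above alternates old and new values. A sketch of pairing them up to generate the reverse UPDATE statements; the exact pairing logic and statement template are assumptions for illustration.

```python
# Cleaned lines, as produced by the slide's mysqlbinlog | grep | awk pipeline:
# before-image (@1 = PK, @4 = old last_name) then after-image for each row.
lines = ["@1=10001", "@4='Facello'",
         "@1=10001", "@4='WRONG-NAME'",
         "@1=10002", "@4='Simmel'",
         "@1=10002", "@4='WRONG-NAME'"]

def reverse_updates(lines):
    """Pair @1/@4 lines into rows, then keep each before-image to undo the change."""
    rows = [(lines[i].split("=")[1], lines[i + 1].split("=", 1)[1])
            for i in range(0, len(lines), 2)]
    stmts = []
    # rows alternate before-image / after-image; the before-image holds the fix.
    for (emp_no, old_name), _after in zip(rows[0::2], rows[1::2]):
        stmts.append(f"UPDATE employeesR SET last_name={old_name} "
                     f"WHERE emp_no={emp_no};")
    return stmts

for s in reverse_updates(lines):
    print(s)
```

Running the generated statements restores 'Facello' and 'Simmel' for emp_no 10001 and 10002, reversing the bad update without touching the data files.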
  • 39. Reference and repositories Main Percona branch: bzr branch lp:percona-data-recovery-tool-for-innodb Marco's branch: https://github.com/Tusamarco/drtools 40
  • 41. Contacts 42 To contact Marco marco.tusa@percona.com marcotusa@tusacentral.net To follow me http://www.tusacentral.net/ https://www.facebook.com/marco.tusa.94 @marcotusa http://it.linkedin.com/in/marcotusa/ To contact Aleksander aleksandr.kuzminsky@percona.com To follow me http://www.mysqlperformanceblog.com/author/akuzminsky/ https://www.linkedin.com/in/akuzminsky