How to Recover Data using the InnoDB Recovery Tool
The following shows how to recover data using the InnoDB Recovery Tool.
Source: http://www.chriscalender.com/?p=49
As you may or may not know, there is a tool called the InnoDB Recovery Tool which can allow you to recover data from InnoDB tables that you cannot otherwise get the data from.
“This set of tools could be used to check InnoDB tablespaces and to recover data from damaged tablespaces or from dropped/truncated InnoDB tables.”
This is a very handy tool, however, the documentation on how to use it is a bit limited when it comes to actually recovering the data, so I thought I’d post a step-by-step tutorial on how to use this tool.
1. Download the InnoDB Recovery Tool (latest version is 0.3)
2. Unpack the download to the location of your choice
3. Create your table_defs.h file using the create_defs.pl script. Note that the command below creates a table_defs.h file based on only one table, t1, from the database named test:
cd innodb-recovery-tool-0.3/
./create_defs.pl --user=root --password=mysql --db=test --table=t1 > table_defs.h
4. Copy the newly created table_defs.h file to the innodb-recovery-tool-0.3/include/ directory.
5. Now it is time to build/compile the InnoDB Recovery Tool:
cd innodb-recovery-tool-0.3/mysql-source/
./configure
cd ..
make
At this point, you’re almost ready to begin recovering the data. However, let me point out a couple of items at this stage. The InnoDB Recovery Tool documentation says you can use the page_parser program to split up the tablespace. Since the page_parser program is now created (after compilation), you can use it to break apart the tablespace. However, in my case, page_parser didn’t work as well as I expected, possibly due to the corruption in the tablespace files (ibdata1 and ibdata2). So I simply ran the recovery against the entire ibdata files instead, and found that I recovered much more data that way than by running it against the split-up pages. If you opt for this method, you can skip steps 6, 7, and 8.
6. Should you want to use the page_parser, here is how you run it:
cd innodb-recovery-tool-0.3/
./page_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5
Note that the -f indicates the file(s) to use, and the -5 indicates the ibdata files are from MySQL version 5.0.
7. Should you use the page_parser, you must also load the ibdata file(s) and capture the InnoDB tablespace monitor output. This part is described on the InnoDB Tools how-to.
8. After running the above, you’ll want to capture all of the primary key index positions for each table you want to recover. For instance, you might see something like "0 135" for the index position of a primary key. This corresponds to the folder named "0-135" that page_parser creates.
9. Now you are ready to recover the data for the first table.
(Note that you could create a table_defs.h file based on all of the tables you want to recover, and then recover all of the data at once. The problem with this is that the data is all mixed together in one big file, so you might have a row for one table followed by a row from another table. If you’re good with sed/awk, this might not be a problem, as you can split it apart afterwards. However, it might be easier to create a separate table_defs.h file for each table, and then recover the data table-by-table.)
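If you do recover everything into one combined dump, the tab-delimited layout makes it straightforward to split apart afterwards, since each row begins with its table name. Here is a minimal sketch; the file name all_tables.txt and the sample rows are just stand-ins for a real combined dump:

```shell
# Stand-in for a combined constraints_parser dump (tab-delimited, table name first):
printf 't1\t402667648\t"5"\nt2\t100\t"x"\nt1\t536884352\t"t"\n' > all_tables.txt

# Split into one file per table, keyed on the first tab-delimited field:
awk -F'\t' '{ print > ($1 ".dump") }' all_tables.txt
```

After this, t1.dump holds the two t1 rows and t2.dump holds the single t2 row, ready for per-table cleanup.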
If you want to recover the data based on the page_parser output, then you would use the following command:
./constraints_parser -f /home/chris/Desktop/innodb-recovery-0.3/pages-1239037839/0-135/50-00000050.page -5 -V
Note that the -V is for verbose mode. It is best to use this initially to make sure the data being recovered looks to be correct. Once you’ve verified it looks correct, then simply run the above command without the -V and pipe the output to a text file.
Should you not want to use the page_parser, and just run constraints_parser directly against the ibdata file(s), then issue the following command instead:
./constraints_parser -f /home/chris/Desktop/test/ibdata1 /home/chris/Desktop/test/ibdata2 -5 > output.txt
As for the recovered data itself, note that the InnoDB Recovery Tool dumps it in a tab-delimited text format (the default, and not yet configurable).
For instance, here is a sample of data recovered for the t1 table:
t1 128992703 84118144 301989888 224000 33558272 268435456 ""
t1 0 0 34796032 0 530 838926338 ""
t1 1886545261 268455808 256 497 880803840 2949392 ""
t1 1398034253 1953654117 1952672116 2037609569 1952801647 1970173042 ""
t1 402667648 755047491 1431524431 1296388657 825372977 825308725 "5"
t1 536884352 755050563 1431524431 1296388658 842150450 842162531 "t"
t1 671103872 755053635 1431524431 1296388663 926365495 926365495 "77"
t1 524288 0 755056707 1431524431 1296388705 1668573558 ""
t1 524288 0 755059779 1431524431 1296388705 1668573558 ""
t1 524288 0 755062851 1431524431 1296388705 1668573558 ""
t1 525312 0 755065923 1431524431 1296388705 1668573558 ""
t1 524288 0 755068995 1431524431 1296388705 1668573558 ""
t1 524288 0 755072067 1431524431 1296388705 1668573558 ""
t1 524288 0 755075139 1431524431 1296388705 1668573558 ""
t1 525312 0 755078211 1431524431 1296388705 1668573558 ""
t1 524288 0 755081283 1431524431 1296388705 1668573558 ""
t1 524288 0 755084355 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 524288 0 755047491 1431524431 1296388705 1668573558 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
t1 0 0 0 0 0 0 ""
You can see each line is pre-pended with the table name (followed by a tab).
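Because the table name is the first tab-delimited field, you can also get a quick sanity check of how many rows were recovered per table. A small sketch (the printf line just fabricates a stand-in dump; in practice output.txt is the constraints_parser output):

```shell
# Stand-in dump file; in practice this is the constraints_parser output:
printf 't1\t1\t""\nt1\t2\t""\nt2\t3\t""\n' > output.txt

# Count recovered rows per table (table name is the first tab-delimited field):
cut -f1 output.txt | sort | uniq -c
```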
You can also see at the end of the above output that there are a number of empty rows. These are just garbage rows, and can be deleted before or after you import. You’ll see similar rows in most of the recovered tables’ data as well. However, don’t just delete from the end of the file, as actual data rows are scattered throughout the files.
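Since the garbage rows are scattered, filtering by pattern is safer than deleting by position. A sketch that drops the all-zero rows shown in the t1 sample above (the pattern assumes rows of zeros ending in an empty quoted string; adjust it to whatever your garbage rows actually look like, and note the printf line only fabricates a stand-in dump):

```shell
# Stand-in for a recovered dump with real rows and all-zero garbage rows:
printf 't1\t402667648\t"5"\nt1\t0\t0\t0\t0\t0\t0\t""\nt1\t536884352\t"t"\n' > t1.dump

# Keep only rows that are NOT "table name, all zeros, empty string":
awk '!/^t1(\t0)+\t""$/' t1.dump > t1.clean
```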
I’d also suggest creating some temporary tables using the same CREATE TABLE commands but without any keys or indexes. This will allow you to import the data more easily, and then you can clean it up with simple SQL commands. After that, you can simply add back your primary keys, indexes, and referential keys.
Should you follow my approach and do this per-table, then you just need to create your new table_defs.h file, recompile (make), and re-run constraints_parser just as you did above. Since it is built with the new table_defs.h file, it will now extract the data for that table; no other changes need to be made.
10. Format the dump file(s) so that they can be imported into the appropriate table(s).
11. Import the data, and clean up the garbage rows.
12. Re-create any needed indexes and/or referential keys.
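Steps 10 through 12 can be scripted. A minimal sketch that writes the import and cleanup SQL to a file; the table name t1_tmp, its columns, and the @tbl user variable (which swallows the leading table-name column of the dump) are hypothetical and must be adapted to your actual schema:

```shell
# Write the import SQL to a file (t1_tmp and its columns are examples only):
cat > import.sql <<'EOF'
-- Load the tab-delimited dump into a key-less temporary copy of the table;
-- @tbl discards the leading table-name column of the dump
LOAD DATA LOCAL INFILE 't1.clean' INTO TABLE t1_tmp
  FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"'
  (@tbl, id, val);
-- Once the garbage rows are cleaned up, add the keys back
ALTER TABLE t1_tmp ADD PRIMARY KEY (id);
EOF

# Then run it against the server, e.g.:
#   mysql --local-infile=1 -u root -p test < import.sql
```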