15. Mirroring rpool
rpool is the storage pool the OS is installed on.
Check the partition layout of the disk the OS was
installed on (c3t0d0), displayed in sectors.
# parted /dev/dsk/c3t0d0p0 u s p
Model: Generic Ide (ide)
Disk /dev/dsk/c3t0d0p0: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start      End          Size         Type     File system  Flags
 1      64260s     81867239s    81802980s    primary  solaris      boot
 2      81867240s  3907024064s  3825156825s  primary
・c3t0d0p0 is working normally.
・The disk to be mirrored has no partitions yet, so note
down the layout from the disk the OS is installed on.
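Instead of noting the geometry by hand, the slice table can also be copied in one step with `prtvtoc` and `fmthard`, a common Solaris idiom. A sketch, assuming the slides' device names; `s2` is the conventional whole-disk slice:

```shell
# Sketch, assuming the slides' device names: copy the VTOC (slice table)
# from the healthy OS disk straight onto the new mirror disk.
SRC=/dev/rdsk/c3t0d0s2   # source: disk the OS is installed on
DST=/dev/rdsk/c3t1d0s2   # destination: new, empty disk
copy_vtoc() {
  # prtvtoc prints the slice table; fmthard -s - writes it from stdin
  prtvtoc "$SRC" | fmthard -s - "$DST"
}
# Run on the Solaris host only:
# copy_vtoc
```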
16. Mirroring rpool
Using the sector values noted earlier, create the
partition on the new disk with parted (c3t1d0s0)
# parted /dev/dsk/c3t1d0p0
GNU Parted 2.3.0
Using /dev/dsk/c3t1d0p0
Welcome to GNU Parted! Type 'help' to view a list of
commands.
(parted) u s
us
(parted) mkpart
mkpart
Partition type? primary/extended? primary
primary
File system type? [ext2]? solaris
solaris
Start? 64260s
64260s
End? 81867239s
81867239s
Warning: The resulting partition is not properly aligned for best
performance.
Ignore/Cancel? I
・If the sector boundaries do not match the original
disk, the mirror may not build correctly(?)
17. Mirroring rpool
Create the second partition on the new disk from the
sector values noted earlier (c3t1d0p2)
(parted) mkpart
mkpart
Partition type? primary/extended? primary
primary
File system type? [ext2]?
Start? 81867240s
81867240s
End? 3907024064s
3907024064s
Warning: The resulting partition is not properly aligned for best
performance.
Ignore/Cancel? i
I
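The interactive sessions above can also be scripted. A sketch, assuming GNU Parted 2.x: `-s` runs without prompts, and `-a none` (an assumption here) keeps the exact sector boundaries instead of realigning them, which also avoids the alignment question:

```shell
# Sketch of the same partitioning, non-interactively (device name and
# sector numbers are the ones noted from the healthy disk).
DISK=/dev/dsk/c3t1d0p0
make_partitions() {
  parted -s -a none "$DISK" unit s mkpart primary 64260s 81867239s
  parted -s -a none "$DISK" unit s mkpart primary 81867240s 3907024064s
  parted -s "$DISK" set 1 boot on   # boot flag on the rpool partition
}
# make_partitions   # run on the Solaris host
```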
19. Mirroring rpool
Verify the newly created partitions
(parted) p
p
Model: Generic Ide (ide)
Disk /dev/dsk/c3t1d0p0: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start      End          Size         Type     File system  Flags
 1      64260s     81867239s    81802980s    primary               boot
 2      81867240s  3907024064s  3825156825s  primary
(parted) quit
・OK if it matches c3t0d0.
20. Mirroring rpool
Attach c3t1d0s0 to rpool as a mirror of the existing c3t0d0s0
# zpool attach -f rpool c3t0d0s0 c3t1d0s0
Make sure to wait until resilver is done before rebooting.
# zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered.
The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu Nov 24 01:16:34 2011
513M scanned out of 9.50G at 8.56M/s, 0h17m to go
513M resilvered, 5.28% done
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0
c3t1d0s0 ONLINE 0 0 0 (resilvering)
errors: No known data errors
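`zpool attach` warns not to reboot before the resilver finishes; a simple poll loop (a sketch) can wait for that:

```shell
# Sketch: block until the named pool's resilver has finished,
# by polling the "resilver in progress" line in zpool status.
wait_for_resilver() {
  pool=$1
  while zpool status "$pool" | grep -q 'resilver in progress'; do
    sleep 60   # check again in a minute
  done
}
# wait_for_resilver rpool
```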
On Solaris 11, file systems are created on ZFS by default.
21. Creating a zpool
Create a storage pool (tank) with raidz1
# zpool create -f tank raidz c3t0d0p2 c3t1d0p2 c3t2d0 c3t3d0
# zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
c3t0d0p2 ONLINE 0 0 0
c3t1d0p2 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
errors: No known data errors
22. Adding a ZIL
Add a separate ZFS intent log (ZIL) device to tank
# zpool add tank log c3t5d0
# zpool status tank
pool: tank
state: ONLINE
scan: resilvered 864G in 31h39m with 0 errors on Fri Feb 24 10:21:44 2012
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
c3t0d0p2 ONLINE 0 0 0
c3t1d0p2 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
logs
c3t5d0 ONLINE 0 0 0
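A single log device is a single point of failure for recently written synchronous data; `zpool add` also accepts a mirrored log. A sketch, where c3t6d0 is a hypothetical second fast device, not one from the slides:

```shell
# Sketch: add a mirrored log instead of a single device
# (c3t6d0 is hypothetical; any second fast device works).
add_mirrored_log() {
  pool=$1; shift
  zpool add "$pool" log mirror "$@"
}
# add_mirrored_log tank c3t5d0 c3t6d0
```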
25. Replacing the FC driver
Switch from the qlc driver to the qlt driver
# update_drv -d -i 'pciex1077,2432' qlc
Cannot unload module: qlc
Will be unloaded upon reboot.
# update_drv -a -i 'pciex1077,2432' qlt
devfsadm: driver failed to attach: qlt
Warning: Driver (qlt) successfully added to system but failed to attach
Using the qlt (target-mode) driver allows the HBA to be
used as an FC target.
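As the output above shows, qlc cannot be unloaded while attached, so the new binding only takes effect after a reboot. A sketch of verifying it afterwards, assuming `prtconf -D` (which lists the bound driver per device node):

```shell
# Sketch: after the reboot, confirm the HBA is now bound to qlt.
check_qlt_binding() {
  prtconf -D | grep qlt
}
# check_qlt_binding
```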
30. Creating an LU
Create an LU backed by the zvol created earlier
# stmfadm create-lu /dev/zvol/rdsk/tank/fc_esx01
Logical unit created:
600144F0AEAE0F0000004ECD1BA10001
# stmfadm list-lu -v
LU Name: 600144F0AEAE0F0000004ECD1BA10001
Operational Status: Online
Provider Name : sbd
Alias : /dev/zvol/rdsk/tank/fc_esx01
View Entry Count : 0
Data File : /dev/zvol/rdsk/tank/fc_esx01
Meta File : not set
Size : 1099511627776
Block Size : 512
Management URL : not set
Vendor ID : SUN
Product ID : COMSTAR
Serial Num : not set
Write Protect : Disabled
Writeback Cache : Enabled
Access State : Active
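"View Entry Count : 0" above means no initiator can see the LU yet; a view entry has to be added before a host will discover it. A sketch using the default all-hosts view (in production you would restrict it with host and target groups via `-h`/`-t`):

```shell
# Sketch: expose the LU to initiators and confirm the view entry.
LU=600144F0AEAE0F0000004ECD1BA10001   # GUID from create-lu above
export_lu() {
  stmfadm add-view "$LU"       # default view: all hosts, all targets
  stmfadm list-view -l "$LU"   # should now show one view entry
}
# export_lu
```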
35. Handling a failed HDD
Check the zpool (rpool)
# zpool status rpool
pool: rpool
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
scan: resilvered 10.7G in 0h11m with 0 errors on Thu Feb 2 00:37:41 2012
config:
NAME          STATE     READ WRITE CKSUM
rpool         DEGRADED     0     0     0
  mirror-0    DEGRADED     0     0     0
    c3t0d0s0  FAULTED      4    63     0  too many errors
    c3t1d0s0  ONLINE       0     0     0
errors: No known data errors
・The state is DEGRADED: c3t0d0s0 has failed!
・Maybe Seagate drives just lack stamina, because this one broke.
36. Handling a failed HDD
Check the zpool (tank)
# zpool status tank
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
scan: resilvered 851G in 25h40m with 0 errors on Fri Feb 3 02:29:42 2012
config:
NAME          STATE     READ WRITE CKSUM
tank          DEGRADED     0     0     0
  raidz1-0    DEGRADED     0     0     0
    c3t0d0p2  FAULTED      6    10     0  too many errors
    c3t1d0p2  ONLINE       0     0     0
    c3t2d0    ONLINE       0     0     0
    c3t3d0    ONLINE       0     0     0
logs
  c3t5d0      ONLINE       0     0     0
errors: No known data errors
・The state is DEGRADED: c3t0d0p2 has failed!
・Naturally this pool fails too, since it sits on the same physical disk.
37. Handling a failed HDD
Check the SATA status
# cfgadm
Ap_Id Type Receptacle Occupant Condition
Slot128 unknown empty unconfigured unknown
Slot129 unknown connected configured ok
sata6/0 sata-port disconnected unconfigured failed
sata6/1::dsk/c3t1d0 disk connected configured ok
sata6/2::dsk/c3t2d0 disk connected configured ok
sata6/3::dsk/c3t3d0 disk connected configured ok
sata6/4 sata-port empty unconfigured ok
sata6/5::dsk/c3t5d0 disk connected configured ok
sata6/0 shows disconnected; the HDD is no longer
visible.
38. Handling a failed HDD
Hot-swap the failed HDD and bring the new one online
# cfgadm -f -c configure sata6/0
# cfgadm
Ap_Id Type Receptacle Occupant Condition
Slot128 unknown empty unconfigured unknown
Slot129 unknown connected configured ok
sata6/0::dsk/c3t0d0 disk connected configured ok
sata6/1::dsk/c3t1d0 disk connected configured ok
sata6/2::dsk/c3t2d0 disk connected configured ok
sata6/3::dsk/c3t3d0 disk connected configured ok
sata6/4 sata-port empty unconfigured ok
sata6/5::dsk/c3t5d0 disk connected configured ok
sata6/0 is now connected and the new HDD is
recognized.
39. Handling a failed HDD
Check the partition layout on the healthy side of the
mirror (c3t1d0), displayed in sectors
# parted /dev/dsk/c3t1d0p0 u s p
Model: Generic Ide (ide)
Disk /dev/dsk/c3t1d0p0: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 64260s 81867239s 81802980s primary solaris boot
2 81867240s 3907024064s 3825156825s primary
・c3t1d0p0 is working normally.
・The replacement HDD has no partitions yet, so note the
layout from the healthy disk.
40. Handling a failed HDD
Using the sector values noted earlier, create the
partition on the new disk with parted (c3t0d0s0)
# parted /dev/dsk/c3t0d0p0
GNU Parted 2.3.0
Using /dev/dsk/c3t0d0p0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) u s
us
(parted) mkpart
mkpart
Partition type? primary/extended? primary
primary
File system type? [ext2]? solaris
solaris
Start? 64260s
64260s
End? 81867239s
81867239s
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? I
・If the sector boundaries do not match, the
mirror may not build correctly(?)
41. Handling a failed HDD
Create the second partition on the new disk from the
values noted earlier (c3t0d0p2)
(parted) mkpart
mkpart
Partition type? primary/extended? primary
primary
File system type? [ext2]?
Start? 81867240s
81867240s
End? 3907024064s
3907024064s
Warning: The resulting partition is not properly aligned for best
performance.
Ignore/Cancel? i
I
42. Handling a failed HDD
Set the boot flag
(parted) set
set
Partition number? 1
1
Flag to Invert? boot
boot
New state? [on]/off?
43. Handling a failed HDD
Verify the newly created partitions
(parted) p
p
Model: Generic Ide (ide)
Disk /dev/dsk/c3t0d0p0: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 64260s 81867239s 81802980s primary boot
2 81867240s 3907024064s 3825156825s primary
(parted) quit
・OK if it matches c3t1d0.
44. Handling a failed HDD
Repair the zpool (tank)
# zpool replace tank c3t0d0p2
# zpool status tank
pool: tank
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu Feb 23 02:42:40 2012
773G scanned out of 3.38T at 10.2M/s, 74h52m to go
193G resilvered, 22.37% done   (repair is complete at 100%)
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
replacing-0 DEGRADED 0 0 0
c3t0d0p2/old FAULTED 6 10 0 too many errors
c3t0d0p2 ONLINE 0 0 0 (resilvering)
c3t1d0p2 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
logs
c3t5d0 ONLINE 0 0 0
errors: No known data errors
45. Handling a failed HDD
Repair the zpool (rpool)
# zpool replace -f rpool c3t0d0s0
# zpool status rpool
pool: rpool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Fri Feb 24 23:51:16 2012
189M scanned out of 12.9G at 8.20M/s, 0h26m to go
188M resilvered, 1.43% done   (repair is complete at 100%)
config:
NAME STATE READ WRITE CKSUM
rpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
replacing-0 DEGRADED 0 0 0
c3t0d0s0/old FAULTED 4 63 0 too many errors
c3t0d0s0 ONLINE 0 0 0 (resilvering)
c3t1d0s0 ONLINE 0 0 0
errors: No known data errors
・For a mirror, the replace may fail unless it is forced with -f.
46. Handling a failed HDD
Repair grub
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
Solaris fdisk partition is inactive.
stage2 written to partition 0, 282 sectors starting at 50 (abs 64310)
stage1 written to partition 0 sector 0 (abs 64260)