###volume management###
Solaris volume management permits the
creation of 5 object types:
1. volumes - RAID 0 (concatenation or
stripe) / RAID 1 (mirror) / RAID 5
(striping with parity)
2. soft partitions - permit the
creation of very large storage devices
3. hot spare pools - facilitate
provisioning of spare storage for use
when a RAID-1/5 volume component has
failed
i.e. Mirror
 -Disk1
 -Disk2
 -Disk3 -spare
4. state database replicas - must be
created prior to volumes
 -contain configuration & status of
all managed objects (volumes/hot spare
pools/soft partitions/etc.)
5. disk sets - used when clustering
Solaris in failover mode
Note: volume management facilitates
the creation of virtual disks
Note: virtual disks are accessible
via /dev/md/dsk & /dev/md/rdsk
Rules regarding volumes:
1. state database replicas are required
2. volumes can be created using
dedicated slices
3. volumes can be created on slices
with state database replicas
4. volumes created by volume manager
cannot be managed using 'format';
however, they can be managed using
CLI tools (metadb, metainit) and the
GUI tool (SMC)
5. you may use tools such as
'mkfs', 'newfs', 'growfs'
6. you may grow volumes using 'growfs'
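The rules above can be sketched with the SVM CLI tools. A minimal example (slice names such as c0t1d0s3 are illustrative, not prescriptive):

```
# state database replicas must exist before any volume is created
metadb -a -f -c 2 c0t1d0s3 c0t2d0s3

# create a simple one-slice concatenation volume d0
metainit d0 1 1 c0t1d0s0

# the virtual disk is now accessible via /dev/md/dsk/d0 (block)
# and /dev/md/rdsk/d0 (raw)
```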
###state database replicas###
Note: at least 3 replicas are required
for a consistent, functional,
multi-user Solaris system.
3 - yields at least 2 surviving
replicas in the event of a failure
Note: if replicas are on the same
slice or media and are lost, then
volume management will fail, causing
loss of data.
Note: Max of 50 replicas per disk set
Note: volume management relies upon a
majority consensus algorithm (MCA) to
determine the consistency of the
volume information
MCA quorum = half the replicas,
rounded down, plus 1:
3 replicas = 1.5 (half) = 1 (rounded
down) + 1 = 2
Note: try to create an even number of
replicas
4 replicas = 2 (half) + 1 = 3
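The quorum arithmetic above is just integer division plus one, as this small shell sketch shows:

```shell
# MCA quorum: half the replicas, rounded down, plus one
quorum() {
  echo $(( $1 / 2 + 1 ))
}

quorum 3   # prints 2
quorum 4   # prints 3
```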
state database replicas are
approximately 4MB each by default -
for local storage
Rules regarding storage location of
state database replicas:
1. dedicated partition/slice - e.g.
c0t1d0s3
2. local partition that is to be used
in a volume (RAID 0/1/5)
3. UFS logging devices
4. '/', '/usr', 'swap', and other UFS
partitions cannot be used to store
state database replicas
###configure slices to accommodate
state database replicas###
c0t1d0s0 -
c0t2d0s0 -
RAID 0 (stripe) - 60G - /dev/md/dsk/d0
Note: volumes can be created using
slices from a single disk or from
multiple disks
Note: state database replicas serve
all volumes managed by volume manager
Note: RAID 0 concatenation - exhausts
disk1 before writing to disk2
Note: RAID 0 stripe - distributes
data evenly across members
Note: use same-size slices when
using RAID 0 with striping
Note: after defining the volume,
create a file system:
newfs /dev/md/rdsk/d0
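The stripe described above might be built like this (a sketch; the slice names follow the example layout):

```
# one stripe of two slices = RAID 0 striping
metainit d0 1 2 c0t1d0s0 c0t2d0s0

# for a concatenation instead, use two stripes of one slice each:
#   metainit d0 2 1 c0t1d0s0 1 c0t2d0s0

newfs /dev/md/rdsk/d0
```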
###suggested layout for creating
volumes using volume manager###
Server
disk0 -system disk
volume manager secondary disks
-disk1 -secondary disk
-disk2 -secondary disk
###RAID-1 configuration###
Note: RAID-1 relies upon sub-mirrors,
i.e. existing RAID-0 volumes
c0t1d0s0 - /dev/md/dsk/d0
c0t2d0s0 - /dev/md/dsk/d1
/dev/md/dsk/d2 - mirror
d0 - source sub-mirror
d1 - destination sub-mirror
create a file system on the mirrored
volume '/dev/md/dsk/d2':
newfs /dev/md/rdsk/d2
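The mirror layout above could be created roughly as follows (slice and volume names follow the example):

```
# create the two RAID-0 sub-mirrors
metainit d0 1 1 c0t1d0s0
metainit d1 1 1 c0t2d0s0

# create a one-way mirror d2 from the source sub-mirror d0
metainit d2 -m d0

# attach the destination sub-mirror; resync starts automatically
metattach d2 d1

newfs /dev/md/rdsk/d2
```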
###RAID-5 configuration###
steps:
1. ensure that at least 3 components
(slices/disks) are available for
configuration
2. ensure that components are
identical in size
slices for RAID-5:
c0t1d0s0 - 10G
c0t2d0s0 - 10G
c0t3d0s0 - 10G
/dev/md/dsk/d0 = RAID-5 = 20GB (one
component's worth of space holds
parity)
Note: you may attach additional
components to a RAID-5 volume, but
they will not store parity
information; however, their data
will be protected.
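A RAID-5 volume matching the example can be sketched like this (slice names are illustrative):

```
# -r builds a RAID-5 volume across three same-size components
metainit d0 -r c0t1d0s0 c0t2d0s0 c0t3d0s0

newfs /dev/md/rdsk/d0

# a later-attached component carries no parity but is protected:
#   metattach d0 c0t4d0s0
```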
###using growfs to extend volumes###
growfs extends mounted or unmounted
UFS file systems on volumes
steps to grow a mounted/unmounted
file system:
1. find free slices to add as
components to the volume using SMC or
the metattach CLI
2. add the component slice - wait for
initialization (concatenation) to
complete
3. execute 'growfs -M /d0
/dev/md/rdsk/d0'
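The three steps above reduce to two commands (assuming d0 is mounted at /d0 and c0t2d0s1 is a free slice; both names are illustrative):

```
# concatenate the free slice onto the volume
metattach d0 c0t2d0s1

# grow the mounted UFS file system to fill the enlarged volume
growfs -M /d0 /dev/md/rdsk/d0
```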
Note: once you've extended a volume,
you cannot decrease its size.
Note: concatenating onto RAID-1/5
volumes yields an untrue RAID-1/5
volume.
slice1
slice2
slice3
slice4 - concatenated - not a true
RAID-1/5 member (no parity is stored)
Note: when extending RAID-1 volumes,
extend each sub-mirror first; Solaris
will then automatically extend the
RAID-1 volume. then run 'growfs'
Reposted from: https://blog.51cto.com/johnnyxing/290463