SAN
Storage Area Networks
A SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility, which may comprise many storage devices, including disk, tape, and optical storage. The storage utility may also be located far from the servers that use it.
A SAN allows any-to-any connection across the network, using interconnect elements such as routers, gateways, hubs, switches, and directors. It eliminates the traditional dedicated connection between a server and its storage, and the notion that the server effectively owns and manages the storage devices.
EMC
1.1 Adding HBA access to Symmetrix devices
1.2 Removing HBA access to Symmetrix devices
1.3 Mapping Symmetrix devices to a director and port
PowerPath CLI commands
powermt command options
powermt command examples
powermt command examples with outputs
Symcli commands
To list all the Symmetrix systems
# symcfg list
To list all the Logical volumes assigned to director 9b
# symcfg -dir 9b -address -available list
# symcfg -sid 0039 -sa all list
# symcfg -sid 0880 -fa 16A -p 1 -address -available list
Adding HBA Access to Symmetrix Devices
01. List devices mapped to the FA director that you will be configuring (for example, director 16A).
# symcfg -sid 0280 list -FA 16A -addr
02. Make an entry for the HBA-to-FA connection
in the VCMDB, specifying the devices that the HBA can access. For example,
add a range of devices (0030 through 0034, and 0038) to the VCMDB on the
Symmetrix array (sid 814), specifying the HBA's WWN and the FA
director/port that the HBA connects to.
# symmask -sid 814 -wwn 20000000c920b484 add devs 0030:0034,0038 -dir 16A -p 0
03. Back up the revised VCMDB to a file (for example, a file called MyDevMaskBackup).
# symmask -sid 814 backup -file MyDevMaskBackup
04. Refresh the WWN-related profile tables in the Symmetrix cache with the latest VCMDB data.
# symmask -sid 814 refresh
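To verify the new entry, list the devices the VCMDB now shows for that WWN. This reuses the symmaskdb form shown later on this page, with the example sid and WWN from above:
# symmaskdb -sid 814 -wwn 20000000c920b484 list devs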
Removing HBA Access to Symmetrix Devices
To remove devices 0031 and 0033 that were added previously:
# symmask -sid 814 -wwn 20000000c920b484 remove devs 0031,0033 -dir 16A -p 0
To remove the remaining devices in the
0030-to-0034 device range, you can specify individual devices or the
range with an option (force) that allows you to remove a noncontiguous
range. For example:
# symmask -sid 814 -wwn 20000000c920b484 remove devs 0030:0034 -dir 16A -p 0 -force
To remove the entire set of devices that an HBA
can access, use symmask delete and specify the WWN of the HBA. The
delete action removes the HBA entry completely, including any attributes
set previously. For example:
# symmask -sid 814 delete -wwn 20000000c920b484
Mapping Symmetrix devices to a director and port
1a. Obtain a list of used addresses, including the next available address
# symcfg list -SA all -address -available
1b. Obtain the list of used addresses, including the next available, mapped to director 13B
# symcfg -sid 1188 list -dir 13b -address -available
02. Create a mapfile (map_1) with the list of devices to be mapped to director 13B port 0 (13bA)
map dev 02c7 to dir 13B:0 target=0 lun=0f8;
map dev 02c8 to dir 13B:0 target=0 lun=0f9;
map dev 02c9 to dir 13B:0 target=0 lun=0fa;
03. Run the symconfigure command
# symconfigure -sid 1188 -f map_1 commit
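As a quick check (a sketch reusing the command from step 1b), re-list the addresses on director 13B; LUN addresses 0f8 through 0fa should now show as used:
# symcfg -sid 1188 list -dir 13b -address -available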
Other commands
To list all the LUNs accessible by an HBA with WWN 10...xxxxxx
# symmaskdb -sid 0665 -wwn 10...xxxxxx list devs
To delete a client HBA from the VCMDB
# symmask -sid 0665 -wwn 10.....xxxxxx delete
# symmask -sid 0665 refresh
To list all the HBAs logged in to director 4b, port 0 (4bA)
# symmask -sid 0280 -dir 4b -p 0 list logins
fpath commands
# fpath adddev
# fpath chgname
To refresh the Volume Logix database
# fpath refresh
To back up the Volume Logix database
# fpath backupdb
PowerPath
PowerPath CLI Commands
Command | Description |
--- | --- |
powermt | Manages a PowerPath environment |
powercf | Configures PowerPath devices |
emcpreg -install | Manages PowerPath license registration |
emcpminor | Checks for free minor numbers |
emcpupgrade | Converts PowerPath configuration files |
Command | Description |
--- | --- |
powermt check | Checks for, and optionally removes, dead paths. |
powermt check_registration | Checks the state of the PowerPath license. |
powermt config | Configures logical devices as PowerPath devices. |
powermt display, powermt watch | Displays the state of HBAs configured for PowerPath (powermt watch is deprecated). |
powermt display options | Displays the periodic autorestore setting. |
powermt load | Loads a PowerPath configuration. |
powermt remove | Removes a path from the PowerPath configuration. |
powermt restore | Tests and restores paths. |
powermt save | Saves a custom PowerPath configuration. |
powermt set mode | Sets paths to active or standby mode. |
powermt set periodic_autorestore | Enables or disables periodic autorestore. |
powermt set policy | Changes the load balancing and failover policy. |
powermt set priority | Sets the I/O priority. |
powermt version | Returns the number of the PowerPath version for which powermt was created. |
powermt command examples
powermt display:
# powermt display paths class=all
# powermt display ports dev=all
# powermt display dev=all
powermt set:
To disable an HBA from passing I/O
# powermt set mode=standby adapter=<adapter#>
To enable an HBA to pass I/O
# powermt set mode=active adapter=<adapter#>
To set or validate the Load balancing policy
To see the current load-balancing policy and queued I/Os, run the following command:
# powermt display dev=<device>
- so = Symmetrix Optimization (default)
- co = Clariion Optimization
- li = Least I/Os (queued)
- lb = Least Blocks (queued)
- rr = Round Robin (one path after another)
- re = Request (failover only)
- nr = No Redirect (no load-balancing or failover)
To set to no load balancing
# powermt set policy=nr dev=<device>
To set the policy to the default Symmetrix Optimization
# powermt set policy=so dev=<device>
To set the policy to CLARiiON Optimization
# powermt set policy=co dev=<device>
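A policy changed with powermt set is generally not persistent until the configuration is saved with powermt save (listed in the command table above). A minimal sketch using the round-robin policy from the list above:
# powermt set policy=rr dev=all
# powermt display dev=all
# powermt save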
pprootdev
To bring the rootvg devices under PowerPath control
# pprootdev on
To bring the rootvg disks back to hdisk control
# pprootdev off
To temporarily bring the rootvg disks to hdisk control for running "bosboot"
# pprootdev fix
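A typical bosboot run with pprootdev fix might look like the sketch below; bosboot and bootlist are standard AIX commands, and hdisk0/hdisk1 are placeholder boot disks (assumptions, substitute your own):
# pprootdev fix
# bosboot -ad /dev/ipldevice
# bootlist -m normal hdisk0 hdisk1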
powermt command examples with output
To validate the installation
# powermt check_registration
Key B3P3-HB43-CFMR-Q2A6-MX9V-O9P3
  Product: PowerPath
  Capabilities: Symmetrix CLARiiON
To display each device's path, state, policy and average I/O information
# powermt display dev=emcpower6a
Pseudo name=emcpower6a
Symmetrix ID=000184503070
Logical device ID=0021
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
---------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
###  HW Path              I/O Paths     Interf.    Mode    State   Q-IOs Errors
  0  sbus@2,0/fcaw@2,0    c4t25d225s0   FA 13bA    active  dead        0      1
  1  sbus@6,0/fcaw@1,0    c5t26d225s0   FA 4bA     active  alive       0      0
To show the paths and dead paths to the storage port
# powermt display paths
Symmetrix logical device count=20
----- Host Bus Adapters ---------  ------ Storage System -----   - I/O Paths -
###  HW Path                       ID            Interface       Total   Dead
  0  sbus@2,0/fcaw@2,0             000184503070  FA 13bA            20     20
  1  sbus@6,0/fcaw@1,0             000184503070  FA 4bA             20      0

CLARiiON logical device count=0
----- Host Bus Adapters ---------  ------ Storage System -----   - I/O Paths -
###  HW Path                       ID            Interface       Total   Dead
To display the storage ports information
# powermt display ports
Storage class = Symmetrix
----------- Storage System ---------------   -- I/O Paths --   --- Stats ---
ID            Interface       Wt_Q           Total    Dead      Q-IOs  Errors
000184503070  FA 13bA          256              20      20          0      20
000184503070  FA 4bA           256              20       0          0       0

Storage class = CLARiiON
----------- Storage System ---------------   -- I/O Paths --   --- Stats ---
ID            Interface       Wt_Q           Total    Dead      Q-IOs  Errors
Powerpath on HP-UX
# powermt display dev=all
CLARiiON ID=APM00080702201 [AEMSAQC1]
Logical device ID=6006016035901E000ABE8A31B53CDD11 [LUN 12]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
 10  0/3/1/0.1.0.0.0.0.1   c10t0d1      SP A5      unlic   alive       0      0
 11  0/3/1/0.2.0.0.0.0.1   c11t0d1      SP B5      unlic   alive       0      0
 12  0/7/1/0.1.0.0.0.0.1   c12t0d1      SP A4      active  alive       0      0
 13  0/7/1/0.2.0.0.0.0.1   c13t0d1      SP B4      active  alive       0      0

# powermt set policy=co dev=all

# powermt display dev=all
CLARiiON ID=APM00080702201 [AEMSAQC1]
Logical device ID=6006016035901E000ABE8A31B53CDD11 [LUN 12]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
 10  0/3/1/0.1.0.0.0.0.1   c10t0d1      SP A5      active  alive       0      0
 11  0/3/1/0.2.0.0.0.0.1   c11t0d1      SP B5      active  alive       0      0
 12  0/7/1/0.1.0.0.0.0.1   c12t0d1      SP A4      active  alive       0      0
 13  0/7/1/0.2.0.0.0.0.1   c13t0d1      SP B4      active  alive       0      0
CLARiiON Storage
SnapView Clones and SnapView Snapshots
EMC SnapView is a storage-system-based software application that allows you to create a copy of a LUN by using either clones or snapshots. A clone is an actual copy of a LUN and takes time to create, depending on the size of the source LUN. A snapshot is a virtual point-in-time copy of a LUN which tracks differences to your original data, and takes only seconds to create.
SnapView has the following important benefits:
- It allows full access to a point-in-time copy of your production data with modest impact on performance and without modifying the actual production data.
- For decision support or revision testing, it provides a coherent, readable and writable copy of real production data.
- For backup, it practically eliminates the time that production data spends offline or in hot backup mode. And it offloads the backup overhead from the production server to another server.
- It provides instantaneous data recovery if the source LUN becomes corrupt. You can perform a recovery operation on a clone by initiating a reverse synchronization and on a snapshot session by initiating a rollback operation.
Clones
A clone is a complete copy of a source LUN. You specify a source LUN when you create a clone group. The copy of the source LUN begins when you add a clone LUN to the clone group. The software assigns each clone a clone ID. This ID remains with the clone until you remove the clone from its group.
While the clone is part of the clone group and
unfractured, any production write requests made to the source LUN are
simultaneously copied to the clone. Once the clone contains the desired
data, you can fracture the clone. Fracturing the clone separates it from
its source LUN, after which you can make it available to a secondary
server.
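Clone groups can also be managed from the host with naviseccli instead of Navisphere Manager. The sketch below is only an outline from memory: the snapview subcommands (-createclonegroup, -addclone, -fractureclone) exist in the SnapView CLI, but the exact flags, the SP address spa_ip, the group name DB_CG, and the LUN numbers are assumptions; verify with naviseccli snapview -help on your release.
# naviseccli -h spa_ip snapview -createclonegroup -name DB_CG -luns 20
# naviseccli -h spa_ip snapview -addclone -name DB_CG -luns 55
# naviseccli -h spa_ip snapview -fractureclone -name DB_CG -cloneid <clone_id>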

Snapshot
A snapshot is a virtual LUN that allows a secondary server to view a point-in-time copy of a source LUN. You determine the point in time when you start a SnapView session. The session keeps track of the source LUN’s data at a particular point in time. A snapshot is a composite of the unchanged data chunks on the source LUN and data chunks on the reserved LUN.
During a session, the production server can still
write to the source LUN and modify data. When this happens, the software
stores a copy of the original point-in-time data on a reserved LUN in
the reserved LUN pool. This operation is referred to as
copy-on-first-write because it occurs only when a data chunk is first
modified on the source LUN.

Create and activate a snapshot
01. Select the source LUN for which the snapshot is to be created. Start a SnapView session and give it a unique name. Once the session is started, the original data is copied to the reserved LUN pool whenever data on the source LUN changes.
02. Create a snapshot for the LUN and give it a unique name. This snapshot is a virtual LUN that allows a secondary
server to view a SnapView session. An active snapshot is a composite of a
source LUN and reserved LUN data that lasts until you destroy the
snapshot.
03. Add this snapshot to the appropriate storage group so that the hosts can access the snapshot.
04. Activate the snapshot. The Navisphere Manager
activate option maps the snapshot to a SnapView session. The secondary
server must be rebooted (or rescan its devices by some other means) so that it
recognizes the new device created when the SnapView session started.
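The same start/create/activate flow can be scripted with the SnapView CLI. This is a hedged sketch: the subcommand names (-startsession, -createsnapshot, -activatesnapshot) and the example values (spa_ip, backup_sess, LUN 20, lun20_snap) are assumptions to confirm against your naviseccli version.
# naviseccli -h spa_ip snapview -startsession backup_sess -lun 20
# naviseccli -h spa_ip snapview -createsnapshot 20 -snapshotname lun20_snap
# naviseccli -h spa_ip snapview -activatesnapshot backup_sess -snapshotname lun20_snap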
Deactivating a snapshot:
The deactivate option unmaps a snapshot from a SnapView session and destroys any secondary server writes made to the snapshot. The snapshot and session still exist but are not visible from the secondary server. The deactivate function is available only when a snapshot is active.
Stopping a SnapView session
Stopping a SnapView session ends the session’s point-in-time copy. Stopping the last SnapView session of a source LUN frees the reserved LUN(s) used by the session and any SP memory used to maintain the session image. The newly freed reserved LUN(s) becomes available for another session. Stopping a session also changes the snapshots' status from active to inactive.
Destroying a snapshot
If the snapshot is inactive, the software destroys only the selected snapshot.
If the snapshot is active, a warning message appears
indicating that you should deactivate the snapshot before destroying it.
If you accept the warning message, the software deactivates the
snapshot, and destroys it (the snapshot) and any server writes made to
the snapshot.
If the snapshot belongs to a storage group(s), an
error message appears indicating that you cannot destroy a snapshot that
is in a storage group. Remove the snapshot from the storage group(s),
and then destroy the snapshot.
Navisphere Agent
Creating the Navisphere Agent file: agentID.txt
If you have a multihomed host and are running one of the following operating systems, create an agentID.txt file so that Navisphere Agent registers the host through the correct HBA/NIC:
- IBM AIX,
- HP-UX,
- Linux,
- Solaris,
- VMware ESX Server (2.5.0 or later), or
- Microsoft Windows
About the agentID.txt file:
This file, agentID.txt (case sensitive), ensures that the Navisphere Agent binds to the correct HBA/NIC for registration and therefore registers the host with the correct storage system. The agentID.txt file must contain the following two lines:
Line1: Fully-qualified hostname of the host
Line 2: IP address of the HBA/NIC port that you want Navisphere Agent to use
For example, if your host is named host28 on the domain
mydomain.com and your host contains two HBAs/NICs, HBA/NIC1 with IP
address 192.111.222.2 and HBA/NIC2 with IP address 192.111.222.3, and
you want the Navisphere Agent to use NIC 2, you would configure
agentID.txt as follows:
host28.mydomain.com
192.111.222.3
To create the agentID.txt file, continue with the appropriate procedure for your operating system:
For IBM AIX, HP-UX, Linux, and Solaris:
- Using a text editor that does not add special formatting, create or edit a file named agentID.txt in either / (root) or in a directory of your choice.
- Add the hostname and IP address lines as described above. This file should contain only these two lines, without formatting.
- Save the agentID.txt file.
- If you created the agentID.txt file in a directory other than root, for Navisphere Agent to restart after a system reboot using the correct path to the agentID.txt file, set the environment variable EV_AGENTID_DIRECTORY to point to the directory where you created agentID.txt.
- If a HostIdFile.txt file is present in the directory shown for
your operating system, delete or rename it. The HostIdFile.txt file is
located in the following directory for your operating system:
AIX :- /etc/log/HostIdFile.txt
HP-UX :- /etc/log/HostIdFile.txt
Linux :- /var/log/HostIdFile.txt
Solaris :- /etc/log/HostIdFile.txt
- Stop and then restart the Navisphere Agent.
NOTE: Navisphere may take some time to update; however, it should update within 10 minutes.
- Once the Navisphere Agent has restarted, verify that Navisphere
Agent is using the IP address that is entered in the agentID.txt file.
To do this, check the new HostIdFile.txt file. You should see the IP
address that is entered in the agentID.txt file. The HostIdFile.txt file
is in the following directory for your operating system:
AIX :- /etc/log/HostIdFile.txt
HP-UX :- /etc/log/HostIdFile.txt
Linux :- /var/log/HostIdFile.txt
Solaris :- /etc/log/HostIdFile.txt
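As a rough sketch for the UNIX platforms above (the agent start script name and location vary by platform and are assumptions here), the file can be created in / and picked up like this:
# cat > /agentID.txt << EOF
host28.mydomain.com
192.111.222.3
EOF
# rm /etc/log/HostIdFile.txt
# /etc/init.d/naviagent stop
# /etc/init.d/naviagent start
Use /var/log/HostIdFile.txt on Linux, as noted above.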
For VMware ESX Server 2.5.0 and later
- Confirm that Navisphere agent is not installed.
- Using a text editor that does not add special formatting, create or edit a file named agentID.txt in either / (root) or in a directory of your choice.
- Add the hostname and IP address lines as described above. This file should contain only these two lines, without formatting.
- Save the agentID.txt file.
- If you created the agentID.txt file in a directory other than root, for subsequent Agent restarts to use the correct path to the agentID.txt file, set the environment variable EV_AGENTID_DIRECTORY to point to the directory where you created agentID.txt.
- If a HostIdFile.txt file is present in the /var/log/ directory, delete or rename it.
- Reboot the VMware ESX server.
- Install and start Navisphere Agent and confirm that it has started.
NOTE: Navisphere may take some time to update; however, it should update within 10 minutes.
- Once the Navisphere Agent has restarted, verify that Navisphere Agent is using the IP address that is entered in the agentID.txt file. To do this, check the new HostIdFile.txt file, which is located in the /var/log/ directory. You should see the IP address that is entered in the agentID.txt file.
For Microsoft Windows:
- Using a text editor that does not add special formatting, create a file named agentID.txt in the directory C:\Program Files\EMC\Navisphere Agent.
- Add the hostname and IP address lines as described above. This file should contain only these two lines, without formatting.
- Save the agentID.txt file.
- If a HostIdFile.txt file is present in the C:\Program Files\EMC\Navisphere Agent directory, delete or rename it.
- Restart the Navisphere Agent
- Once the Navisphere Agent has restarted, verify that Navisphere Agent is using the correct IP address that is entered in the agentID.txt file. Either:
- In Navisphere Manager, verify that the host IP address is the same as the IP address that you entered in the agentID.txt file. If the address is the same, the agentID.txt file is configured correctly.
- Check the new HostIdFile.txt file. You should see the IP address that is entered in the agentID.txt file.
Celerra
$ nas_server -list
id   type  acl  slot  groupID  state  name
1       1    0     2              0   server_2
2       4    0     3              0   server_3
server_export: Exports file systems, and manages access on the specified Data Mover(s) for NFS and CIFS clients
$ server_export server_2 -o anon=0 /dir1
$ server_export server_2 -i -o anon=0 /tools/site/tools/kickstart /tools
Note: -i (ignore) overwrites the previous options and comments
$ server_export server_2
export "/dir1" anon=0
export "/tools" rw=allhosts root=server1:server2 access=allhosts
To extend a filesystem
$ nas_fs -xtend tools size=50G pool=clarata_archive
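To confirm the new size after the extension, something like the following should work (a sketch; the nas_fs -size option and the server_df argument form may vary by DART release):
$ nas_fs -size tools
$ server_df server_2 /tools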
Quota
To edit the default user and group quota for a filesystem
$ nas_quotas -edit -config -fs home1
The system opens an edit session as shown in the output that follows. Edit it as required.
File System Quota Parameters:
fs "home1"
Block Grace: (1.0 weeks)    Inode Grace: (1.0 weeks)
* Default Quota Limits:
User:  block (soft = 13670400, hard = 15728640)  inodes (soft = 0, hard = 0)
Group: block (soft = 0, hard = 0)                inodes (soft = 0, hard = 0)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
  Quota check starts: (no)
  Quota check ends: (no)
  soft quota crossed: (no)
  hard quota crossed: (yes)
To edit the quota for user ID 500 for file system fs1 (the command below opens a vi session; edit it as required)
$ nas_quotas -edit -user -fs fs1 500
Userid : 500
fs "fs1"
blocks (soft = 29360128, hard = 31457280)
inodes (soft = 0, hard = 0)
To view the quota usage of user ID 501 for file system /home
$ nas_quotas -report -user -fs home2 501
Report for user quotas on filesystem home2 mounted on /home
+------------+-----------------------------------+-------------------------------+
|User        |          Bytes Used (1K)          |             Files             |
+------------+--------+--------+--------+--------+--------+------+------+--------+
|            |  Used  |  Soft  |  Hard  |Timeleft|  Used  | Soft | Hard |Timeleft|
+------------+--------+--------+--------+--------+--------+------+------+--------+
|#501        | 3960316|29360128|31457280|        |     188|     0|     0|        |
+------------+--------+--------+--------+--------+--------+------+------+--------+
To list quota usage for all users
$ nas_quotas -report -user -fs home all
IBM
Subsystem Device Driver (SDD)
To remove hdisks corresponding to IBM ESS devices
# lsdev -Ct2105* -Fname | xargs -n1 rmdev -dl
To remove all the SDD disks (Vpaths)
# rmdev -dl dpo -R
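After removing the vpaths, a typical follow-up is to rediscover and verify them; cfgmgr is standard AIX, and lsvpcfg / datapath query device are SDD utilities (output details vary by SDD version):
# cfgmgr
# lsvpcfg
# datapath query device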
DS8000 Terms and Concepts
Storage complex
A storage complex is a group of DS8000s managed by a single S-HMC (Storage Hardware Management Console). It may consist of a single DS8000 storage unit. A storage complex is sometimes referred to as a storage-plex.
Storage facility
Storage facility refers to a single DS8000 unit (including the base frame and the optional expansion frames). A storage facility is also referred to as a storage unit. As an example, if your organization has one DS8000, then you have a single storage complex that contains a single storage unit.
Processor complex
Processor complex refers to one of the Power5 servers which runs, controls, and delivers all the services of the DS8000. There are two processor complexes in one storage facility: processor complex 0 and processor complex 1. Each processor complex can support one or more LPARs concurrently.
Logical partition (LPAR)
An LPAR uses software and firmware to logically partition the resources on a system. An LPAR consists of processors, memory, and I/O slots available in one processor complex.
Storage facility image (SFI)
A storage facility image consists of two LPARs, one on each processor complex in a storage facility. A storage facility image is capable of performing all functions of a storage server from the host's perspective. More than one SFI can be configured on a storage facility. A storage facility image might also be referred to as a storage image.
Storage facility image server
One SFI consists of two LPARs; each LPAR hosts a storage facility image server running a specific AIX instance. Thus, one SFI has two storage facility image servers, often referred as Server 0 and Server 1.
Array site
An array site is a group of 8 DDMs selected by the DS8000 server algorithm in a storage facility image. An array site is managed by one storage facility image.
Array
Each array site can be individually formatted by the user to a specific RAID format. A formatted array site is called an array. The supported RAID formats are RAID-5 and RAID-10. The process of selecting the RAID format for an array is also called defining an array.
Rank
A rank is defined by the user. The user selects an array and defines the storage format for the rank, which is either Count Key Data (CKD) or Fixed Block (FB) data. One rank will be assigned to one extent pool by the user.
Extents
The available space on each rank is divided into extents. The extents are the building blocks of the logical volumes. The characteristic of the extent is its size, which depends on the specified device type when defining a rank:
For fixed block format, the extent size is 1 GB. For CKD format, the extent size is 0.94 GB (the capacity of a 3390 Model 1 volume).
Extent pools
An extent pool refers to a logical construct to manage a set of extents. The user defines extent pools by selecting one to N ranks managed by one storage facility image. The user defines which storage facility image server (Server 0 or Server 1) will manage the extent pool. All extents in an extent pool must be of the same storage type (CKD or FB). Extents in an extent pool can come from ranks defined with arrays of different RAID formats, but the same RAID configuration within an extent pool is recommended. The minimum number of extent pools in a storage facility image is two (each storage facility image server manages a minimum of one extent pool).
Rank groups
Ranks are organized in two rank groups:
Rank group 0 is controlled by server 0. Rank group 1 is controlled by server 1.
Logical volume
A logical volume is composed of a set of extents from one extent pool.
A logical volume composed of fixed block extents is called a LUN. A logical volume composed of CKD extents is referred to as a CKD volume or logical device.
Logical subsystem
A logical subsystem (LSS) is a logical construct grouping logical volumes. One LSS can group up to 256 logical volumes from extent pools. The user can define up to 255 LSSs in a storage facility image with the following restriction: the logical volumes in one LSS must be of extent pools with identical extent types and from the same rank pool in one storage facility image. As a result, LSSs are either CKD or FB and have affinity with one storage facility image server. Up to 128 LSSs can be managed by Server 0 and up to 127 LSSs can be managed by Server 1 (one LSS address is reserved).
Address group
An address group refers to a group of LSSs. Up to 16 LSSs can be grouped into one address group. All LSSs in an address group must be of the same format (CKD or FB). The address groups are defined by the user. A storage facility image can manage up to 16 address groups.
Host attachment
One host attachment is a named group of World Wide Port Names (WWPNs) defined by the user. The definition of host attachment is necessary to manage the LUN masking. One WWPN can be defined in only one host attachment. The user assigns one host attachment to one volume group. Each WWPN in the host attachment will get access to all of the LUNs defined in the volume group.
Volume group
The user gathers LUNs into volume groups. The definition of volume groups is necessary to manage the LUN masking. One LUN can be defined in several volume groups. One volume group can be assigned to several host attachments.
Volume groups are assigned to host attachments (host ports), and host ports are assigned to I/O ports.
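The concepts above map roughly onto the DS CLI as sketched below. The command names (mkarray, mkrank, mkextpool, mkfbvol, mkvolgrp, mkhostconnect) are real dscli commands, but the flag names and the example IDs, names, and WWPN are assumptions from memory; verify against your DS8000 code level.
dscli> mkarray -raidtype 5 -arsite S1
dscli> mkrank -array A0 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool_0
dscli> mkfbvol -extpool P0 -cap 100 -name aixvol 1000-1003
dscli> mkvolgrp -type scsimask -volume 1000-1003 aix_vg
dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries -volgrp V0 aixhost1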
HP
AutoPath
To remove hdisks corresponding to Hitachi Lightning and HP XP devices
# lsdev -CtHitachi* -Fname | xargs -n1 rmdev -dl
To remove all the dlm drives from the system
# dlmrmdev
To get the detailed information of all the LUNS
# xpinfo -l
Scanning disk devices...
Device File : /dev/rhdisk17 Model : XP1024
Port : CL1E Serial # : 00040318
Host Target : --- Code Rev : 2108
Array LUN  : 00                 Subsystem   : 0004
CU:LDev    : 00:69              CT Group    : ---
Type       : OPEN-V             CA Volume   : SMPL
Size       : ---                BC0 (MU#0)  : SMPL
ALPA       : e1                 BC1 (MU#1)  : SMPL
Loop Id    : 04                 BC2 (MU#2)  : SMPL
SCSI Id    : 0x610813           RAID Level  : RAID5
FC-LUN     : 0000000000000000   RAID Group  : 2-9
Port WWN   : 50060e80039d7e04   ACP Pair    : 2
Disk Mechs : R1408 R1508 R1608 R1708
Device File : /dev/rhdisk2 Model : XP1024
Port : CL2E Serial # : 00040318
Host Target : --- Code Rev : 2108
Array LUN  : 00                 Subsystem   : 0004
CU:LDev    : 00:69              CT Group    : ---
Type       : OPEN-V             CA Volume   : SMPL
Size       : ---                BC0 (MU#0)  : SMPL
ALPA       : c9                 BC1 (MU#1)  : SMPL
Loop Id    : 14                 BC2 (MU#2)  : SMPL
SCSI Id    : 0x620813           RAID Level  : RAID5
FC-LUN     : 0000000000000000   RAID Group  : 2-9
Port WWN   : 50060e80039d7e14   ACP Pair    : 2
Disk Mechs : R1408 R1508 R1608 R1708
There are two ways to limit the disks managed by the HDLM (DLM) driver:
- Define the disks (hdisk) that you would like the DLM driver to recognize in the
/usr/DynamicLinkManager/drv/dlmfdrv.conf file.
- Define the disks that you would not like the DLM driver to recognize in the
/usr/DynamicLinkManager/drv/dlmfdrv.unconf file.
A specification in the dlmfdrv.unconf file has
priority over a specification in the dlmfdrv.conf file. Therefore, if
the same disk is defined in both the dlmfdrv.conf and dlmfdrv.unconf
files, the DLM driver will not recognize the defined disk.
To start or stop the HDLM Manager
# startsrc -s DLMManager
# stopsrc -s DLMManager
To list all the HDLM drivers
# lsdev -C | grep dlm
dlmadrv Available DLM Alert Driver
dlmfdrv Available DLM Driver
dlmfdrv5 Available DLM Driver
- dlmfdrv is the driver instance for internal management.
- dlmfdrv5 (the 5 indicates the driver instance number)
- dlmadrv is the file name of the DLM alert driver.
HDLM Commands operation
# dlnkmgr operation-name [parameter [parameter-value]]
To clear statistics such as the path error count
# dlnkmgr clear -pdst
To make an online path to offline
# dlnkmgr offline -pathid 1 -s
KAPL01022-I 1 path(s) were processed. Operation name = offline
KAPL01001-I The DLM command completed successfully. Operation name =
offline
To set various options
# dlnkmgr set <parameters>
Parameters for the Set Command
-lb{on|off} Enables or disables the load balancing function. Default = on.
-ellv log-level The level of error information you want to collect
in the error log. Default = 3
-afb {on [-intvl execution-interval]|off} Enables (on) or disables (off) automatic failback.
The execution interval is in minutes and can be set from 1 to 1440 minutes.
To display the path or drive details
# dlnkmgr view -path
# dlnkmgr view -drv
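To confirm settings changed with dlnkmgr set (the load balancing function, error log level, automatic failback, and so on), the system parameters can be displayed; -sys is a standard dlnkmgr view option:
# dlnkmgr view -sys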
Problem: In AIX, hdisk devices are getting PVIDs instead of the dlmfdrv devices.
Solution: Make sure the hdisk names that are to be controlled by DLM are listed in the
/usr/DynamicLinkManager/drv/dlmfdrv.conf file. If any of the hdisk names are missing, add them there.
HP StorageWorks XP series storage
HP XP 1024 - Creating Business copy
Creating LUSE Volume
Log in to the StorageWorks Command View GUI
Click on LUN and VOL management
Click on Vol Management ICON
Expand LDEV
Select the appropriate CU unit (for example, CU-8) that has enough free LDEVs
Select the starting LDEV and the number of LDEVs (for the total size), then click Set and Apply
The above step creates the LUSE volume
Assigning a LUSE Volume Path
Click on LUN management ICON
Expand the Fibre
Expand the appropriate controller (for example, CL1-R)
Select the appropriate system
In the LDEV section, select the CU (for example, CU-8)
Select the LUSE name created in the LDEV section
Go to the LUN section, select the last empty field, and click "Add LU Path"
Copy the path and paste it to the alternate-path control unit (for example, CL2-R)
Click Apply
This process creates the two paths to the created LUN
Creating Business copy
Click on the BC tab
Select the system LUN for which you want to create the BC (CL1-A -> systemname001 -> LUSE name)
Right click on the LUSE and select create pair
Select the port (CL1-R)
Select the LUSE created in the above process and click Set and Apply
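The same Business Copy pair can also be driven from the host with RAID Manager (HORCM) once a horcm instance and a device group have been configured. A hedged sketch, where BC_GRP is a hypothetical group name and the flags should be checked against your RAID Manager version:
# paircreate -g BC_GRP -vl
# pairevtwait -g BC_GRP -s pair -t 3600
# pairdisplay -g BC_GRP -fcx
# pairsplit -g BC_GRP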
Switches
Brocade switches
Some useful Brocade switch commands
Command | Description |
--- | --- |
aliadd | Add a member to a zone alias |
alicreate | Create a zone alias |
alidelete | Delete a zone alias |
aliremove | Remove a member from a zone alias |
alishow | Print zone alias information |
cfgadd | Add a member to a configuration |
cfgclear | Clear all zone configurations |
cfgcreate | Create a zone configuration |
cfgdelete | Delete a zone configuration |
cfgdisable | Disable a zone configuration |
cfgenable | Enable a zone configuration |
cfgremove | Remove a member from a configuration |
cfgsave | Save zone configurations in flash |
cfgshow | Print zone configuration information |
cfgsize | Print size details of zone database |
cfgtransabort | Abort zone configuration transaction |
cfgtransshow | Print zone configurations in transaction buffer |
fabportshow | Display contents of a particular port's data |
fabricshow | Print fabric membership info |
fabstateclear | Clear the fabric state information |
fabstateshow | Display the fabric state information |
fabstatsshow | Display the fabric statistics information |
fabswitchshow | Display fabric switch state information |
nsaliasshow | Display local Name Server information with aliases |
nsallshow | Print global Name Server information |
nscamshow | Print local Name Server cache information |
nsshow | Print local Name Server information |
nszonemember | Display the information of all the online devices |
switchshow | Print switch and port status |
switchuptime | Display the amount of time for which the switch is up |
portdisable | Disable a specified port |
portenable | Enable a specified port |
zoneadd | Add a member to a zone |
zonecreate | Create a zone |
zonedelete | Delete a zone |
zonehelp | Print zoning help info |
zoneremove | Remove a member from a zone |
zoneshow | Print zone information |
Cookbook to create a zone and add it to a config
01. Create the aliases for the device WWNs
swd77:admin> alicreate "ET_CM0_CA0_P1", "21:40:00:0b:5d:6a:05:82"
swd77:admin> alicreate "test1_fcd0", "50:01:43:80:03:3a:86:ca"
02. Create a new zone called test1
swd77:admin> zonecreate "test1", "ET_CM0_CA0_P0;test1_fcd0"
03. Create a config and add the zone to the SAN0 config
swd77:admin> cfgcreate "SAN0", "test1" ## To create a config the first time
swd77:admin> cfgadd "SAN0", "test1"
04. Save the zone to flash memory
swd77:admin> cfgsave
05. Enable the modified configuration
swd77:admin> cfgenable SAN0
06. Verify the zone
swd77:admin> zoneshow
Defined configuration:
 cfg:   SAN0            aembwpd1; aemtest3
 zone:  aembwpd1        Aembwpd1_P0; ET_CM0_CA0_P0
 zone:  test1           ET_CM0_CA0_P0; aemtest3_fcd0
 alias: Aembwpd1_P0     50:01:43:80:02:9a:92:f0
 alias: ET_CM0_CA0_P0   20:40:00:0b:5d:6a:05:82
 alias: ET_CM0_CA0_P1   21:40:00:0b:5d:6a:05:82
 alias: test1_fcd0      50:01:43:80:03:3a:86:ca
Effective configuration:
 cfg:   SAN0
 zone:  aembwpd1        50:01:43:80:02:9a:92:f0
                        20:40:00:0b:5d:6a:05:82
 zone:  test1           20:40:00:0b:5d:6a:05:82
                        50:01:43:80:03:3a:86:ca
switchshow command
switchshow [-portcount | -iscsi]

swd77:admin> switchshow -portcount
FC ports = 24

swd77:admin> switchshow
switchName:     swd77
switchType:     71.2
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:05:1e:9b:db:72
zoning:         ON (SAN0)
switchBeacon:   OFF
Area Port Media Speed State     Proto
=====================================
  0   0   id    N4   Online     F-Port  50:01:43:80:02:9a:92:f0
  1   1   id    N4   Online     F-Port  50:01:43:80:02:9a:92:4c
  2   2   id    N4   Online     F-Port  50:01:43:80:03:3a:86:ca
  3   3   id    N4   No_Light
  4   4   id    N4   No_Light
  5   5   id    N4   No_Light
  6   6   id    N4   No_Light
  7   7   id    N4   Online     F-Port  20:40:00:0b:5d:6a:05:82
  8   8   id    N4   No_Light
  9   9   id    N4   No_Light
 10  10   id    N4   No_Light
 11  11   id    N4   No_Light
 12  12   id    N4   No_Light
 13  13   id    N4   No_Light
 14  14   id    N4   No_Light
 15  15   id    N4   Online     F-Port  21:40:00:0b:5d:6a:05:82
 16  16   --    N8   No_Module  (No POD License) Disabled
 17  17   --    N8   No_Module  (No POD License) Disabled
 18  18   --    N8   No_Module  (No POD License) Disabled
 19  19   --    N8   No_Module  (No POD License) Disabled
 20  20   --    N8   No_Module  (No POD License) Disabled
 21  21   --    N8   No_Module  (No POD License) Disabled
 22  22   --    N8   No_Module  (No POD License) Disabled
 23  23   --    N8   No_Module  (No POD License) Disabled
To delete an alias
Switch1:admin> alidelete "test1_fcd0"
To Delete a Zone
Switch1:admin> zonedelete "test01"
To remove test4 zone from configuration SAN1
Switch2:admin> cfgremove "SAN1", "test4"
McData Switches:
Fabric Manager:
Create a zone
Add it to a zoneset
Activate the zoneset
Cisco Switches
To show all fcalias names:
# show fcalias
fcalias name BC20-io1-0 vsan 101
  pwwn 20:00:00:c0:dd:0d:72:bd
fcalias name BC20-qlogic3-0 vsan 101
  pwwn 20:00:00:c0:dd:0d:73:a5
fcalias name BC21-io1-0 vsan 101
  pwwn 20:00:00:c0:dd:0d:77:7c
.....
.....
To show the fcalias of angel alone
# show fcalias name angel-fcs0
fcalias name angel-fcs0 vsan 201
  pwwn 10:00:00:00:c9:78:11:47
To rename the fcalias name
# config t
# fcalias rename old_name new_name vsan vsan_number
Zones
To list the zone(s) that an fcalias is part of
# show zone member fcalias <fcalias_name>
fcalias <alias_name> vsan 202
  zone <zone_name>
To rename a zone
# config t
# zone rename old_name new_name vsan vsan_number
Zoneset:
# show zoneset
zoneset name svcprod101 vsan 101
  zone name svcprod101 vsan 101
    fcalias name svcprod101-2-4-prod0 vsan 101
      pwwn 50:05:07:68:01:40:36:ea
    fcalias name svcprod101-6-4-prod2 vsan 101
      pwwn 50:05:07:68:01:40:2c:a0
    fcalias name svcprod101-7-4-backup vsan 101
      pwwn 50:05:07:68:01:40:37:43
    fcalias name svcprod101-7-3-backup vsan 101
      pwwn 50:05:07:68:01:30:37:43
    fcalias name svcprod101-2-3-prod0 vsan 101
      pwwn 50:05:07:68:01:30:36:ea
    fcalias name svcprod101-6-3-prod2 vsan 101
      pwwn 50:05:07:68:01:30:2c:a0
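After zones or fcaliases are edited, the zoneset must be activated for the change to take effect, and the configuration saved; using the zoneset name shown above:
# config t
# zoneset activate name svcprod101 vsan 101
# end
# copy running-config startup-config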
Switch Configuration
To list which WWNs are logged in to the MDS switch
# ssh admin@san03a show flogi database
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/1            2001    0xc70018  50:05:07:68:01:10:5d:e5  50:05:07:68:01:00:5d:e5
fc1/2            4001    0xe8000e  50:05:07:68:01:10:5f:09  50:05:07:68:01:00:5f:09
fc1/3            2001    0xc70004  50:05:07:68:01:10:5d:af  50:05:07:68:01:00:5d:af
fc1/4            4001    0xe80019  50:05:07:68:01:10:5e:2a  50:05:07:68:01:00:5e:2a
fc1/5            2001    0xc7000f  50:05:07:68:01:10:5b:25  50:05:07:68:01:00:5b:25
fc1/6            4001    0xe8001a  50:05:07:68:01:10:5f:3f  50:05:07:68:01:00:5f:3f
fc1/7            2001    0xc70008  50:05:07:68:01:10:5e:02  50:05:07:68:01:00:5e:02
fc1/8            4001    0xe8001b  50:05:07:68:01:10:5f:0e  50:05:07:68:01:00:5f:0e
# show fcns database
VSAN 1:
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
0xb70037 N 21:00:00:24:ff:39:fe:a2 scsi-fcp:target
0xb70038 N 21:00:00:24:ff:39:f6:a0 scsi-fcp:target
0xb70039 N 21:00:00:24:ff:39:f5:c2 scsi-fcp:target
Total number of entries = 3
VSAN 1001:
--------------------------------------------------------------------------
FCID         TYPE  PWWN                     (VENDOR)      FC4-TYPE:FEATURE
0xce0000 N 50:0a:09:81:8d:55:7f:20 (NetApp) scsi-fcp
0xce0001 N 50:0a:09:83:9d:55:7f:20 (NetApp) scsi-fcp
0xce0002 N 50:0a:09:82:8d:55:7f:20 (NetApp) scsi-fcp
0xce0003 N 50:0a:09:84:9d:55:7f:20 (NetApp) scsi-fcp
0xce0004 N 50:0a:09:85:8d:55:7f:20 (NetApp) scsi-fcp
0xce0005 N 50:0a:09:86:9d:55:7f:20 (NetApp) scsi-fcp
0xce0006 N 10:00:00:00:c9:a1:24:86 (Emulex) scsi-fcp:init
0xce0007 N 10:00:00:00:c9:a1:24:78 (Emulex) scsi-fcp:init
0xce0008 N 10:00:00:00:c9:b5:02:b5 (Emulex) scsi-fcp:init
0xce0009 N 10:00:00:00:c9:b2:7d:2f (Emulex) scsi-fcp:init
0xce000a N 10:00:00:00:c9:a1:7a:7a (Emulex) scsi-fcp:init
0xce000d N 10:00:00:00:c9:a8:98:bf (Emulex) scsi-fcp:init
0xce000e N 10:00:00:00:c9:f1:ab:b1 (Emulex) scsi-fcp:init
Displaying Switch Configuration:
switch# show interface fc1/1
fc1/1 is Down (Administratively down)
    Hardware is Fibre Channel, SFP is long wave laser
    Port WWN is 20:00:00:0d:ec:19:cb:0e
    Admin port mode is auto
    Receive data field Size is 2112
    Beacon is turned off
    5 minutes input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    5 minutes output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      0 frames input, 0 bytes
        0 discards, 0 errors
        0 CRC
        0 too long, 0 too short
      0 frames output, 0 bytes
        0 errors
      0 input OLS, 0 LRR, 0 loop inits
      5 output OLS, 0 LRR, 1 loop init
Displaying Version:
switch# show version
Cisco MDS 9000 FabricWare
Copyright (C) 2002-2005, by Cisco Systems, Inc. and its suppliers.
All rights reserved. Copyrights to certain works contained herein are owned by
third parties, and used and distributed under license. Portions of this software
are governed by the GNU Public License, which is available at
http://www.gnu.org/licenses/gpl.html.
Software
  system: 2.1(2)
  system compile time: Thu Apr 21 12:48:49 2005
Hardware
  switch uptime is 0 days 11 hours 34 minute(s) 3 second(s)
Last reset at 41643 usecs after Mon Apr 25 11:01:12 2005
  Reason: PowerUp
Display Running Config
switch# show running
ip default-gateway 10.20.83.1
logging level fcdomain 2
logging level fspf 2
logging level fcns 2
logging level fcs 2
logging level port 2
logging level zone 2
.....
.....
Displays the Difference Between the Running and Startup Configuration
switch# show running diff
switchname rtp-9020-top
+ ip default-gateway 172.18.172.1
ssh server enable
logging level fcdomain 2
logging level fspf 2
logging level fcns 2
logging level fcs 2
logging level port 2
logging level zone 2
logging level auth 2
.....
.....
.....
Saving Configuration
switch# copy running-config startup-config
User Management
To create a user or change the password of a user
switch# config t
switch(config)# user <user> password <password>
switch# copy running-config startup-config
[########################################] 100%