Sun
Welcome to Sun Solaris Documents home!!
More sun tips
http://sysunconfig.net/unixtips/
Sun Free software sites
Solaris Operating System - Freeware
http://www.sun.com/software/solaris/freeware/
Boot
To reboot the system in single user mode
# reboot -- -s
Note: the options after -- are passed to the boot command at the OK prompt.
A JumpStart server in Solaris enables installation of the OS over the network to a wide range of servers differing in architecture, software, packages, disk capacity, and so on. The process consists of setting up a boot server/install server with an OS media image and JumpStart client configurations, and connecting the clients over the network.
Cook Books
To create an identical copy of a boot disk in Solaris
Using ufsdump and ufsrestore, we are going to make a copy of disk c0t0d0 to c0t1d0 including boot block
01. Partition the second disk same as first disk
prtvtoc /dev/dsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
02. Create New filesystems on all the required slices on the second disk
newfs -v /dev/rdsk/c0t1d0s0
Continue to create filesystems on the remaining slices.
03. Create new filesystems mount directory under /mnt directory
mkdir /mnt/root
Continue to create a directory for each filesystem to be copied.
04. Mount the newly created filesystems on the new mount points just created
mount /dev/dsk/c0t1d0s0 /mnt/root
Continue mounting the filesystems for the remaining slices.
05. Dump the data from the old disk to the newly created filesystems
ufsdump 0f - /dev/rdsk/c0t0d0s0 | (cd /mnt/root; ufsrestore xf -)
Continue dumping data for the remaining slices.
06. Change the /etc/vfstab entry in the newly created filesystems
sed < /mnt/root/etc/vfstab -e s/c0t0d0/c0t1d0/g > /tmp/vfstab.new
mv /tmp/vfstab.new /mnt/root/etc/vfstab
07. Install the boot block in the second disk
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
08. Unmount the filesystems and remove the directories we created under /mnt.
umount /mnt/root
rmdir /mnt/root
Continue for the remaining filesystems.
09. Change the bootlist from the OK prompt.
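For example, a minimal sketch from the OK prompt (the alias for the second disk varies by hardware, so confirm it with devalias first):
OK devalias
OK setenv boot-device disk1 disk
OK boot disk1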
How to replace an internal FibreChannel drive that is under VERITAS Volume Manager (tm) control
Details:This specific procedure must be used when replacing one of the internal fibre drives within the following servers and/or arrays:
Sun Fire 280R, V480, and V880. SENA A5X00 Arrays.
Note: Failure to follow this procedure could
result in a duplicate device entry for the replaced disk in Volume
Manager. This is most notable when running a vxdisk list command.
Example:
# vxdisk list
DEVICE       TYPE      DISK      GROUP     STATUS
c1t0d0s2     sliced    rootdisk  rootdg    online
c1t1d0s2     sliced    -         -         error
c1t1d0s2     sliced    -         -         error
1. Select vxdiskadm option 4 - Select the Volume Manager disk to be replaced
2. luxadm -e offline <device_path> - detach ssd instance
Use luxadm to get this disk out of the Solaris kernel
configuration. The device path should end in ",raw" (for example,
pci@1f,0/ide@d/dad@0,0:a,raw). This is the path from the /devices
directory, not /dev/rdsk/c?t?d?s?.
- If the disk is multipathed, run the luxadm -e offline on the second path as well
3. devfsadm -C
The -C option cleans up the /dev directory, and removes any lingering logical links to the device link names. It should remove all the device paths for this particular disk. This can be verified with:
# ls -ld /dev/dsk/c1t1d*
This should return no device entries for c1t1d*.
4. The drive can now be pulled physically
5. luxadm insert_device
This is an interactive command. It will go through the
steps to insert the new device and create the necessary entries in the
Solaris device tree.
6. vxdctl enable
This is for Volume Manager to rescan the disks. It
should pick up the new disk with an "error" status. If not in error, the
disk might contain some Volume Manager information, and might need to
be formatted.
7. Select vxdiskadm option 5
This will start the recovery process (if needed).
The example shown is for c1t0d0s2; substitute your own c#t#d#s# as appropriate.
# vxdisk list
c1t0d0s2     sliced    -          -        error
c1t0d0s2     sliced    -          -        error
c1t1d0s2     sliced    disk01     rootdg   online
-            -         root-disk  rootdg   removed was:c1t0d0s2
Remove c1t0d0s2 entries from vxvm control.
Do "vxdisk -f rm c1t0d0s2" for all the duplicate entries.
[Since there is no telling which one is the valid one, do it for all; there can be more than 2 duplicate entries.]
# vxdisk rm c1t0d0s2
# vxdisk rm c1t0d0s2    <-- do it again to remove all the entries.
2. Remove the disk c1t0d0s2 using luxadm.
Remove device c1t0d0s2 using the "luxadm remove_device" command [luxadm remove_device cxtxdxsx]:
# luxadm remove_device /dev/rdsk/c1t0d0s2
Pull the disk out as per the luxadm instructions.
3. Run command "devfsadm -C"
4. Run command "vxdctl enable"
[Up to this point, we have removed the dev_t corresponding to the physical disk. Now we will remove all the stale dev_t's.] Please loop as per the instructions below.
LOOP :
[notice that there is one entry less, since we can have more than 2 duplicate
entries]
# vxdisk list
c1t0d0s2     sliced    c1t0d0     -        error
c1t1d0s2     sliced    disk01     rootdg   online
-            -         root-disk  rootdg   removed was:c1t0d0s2
5. Again remove *ALL* duplicate c1t0d0s2 entries from vxvm control.
# vxdisk rm c1t0d0s2
6. Run command "luxadm -e offline <device path>" on *ALL THE PATHS* to the disk. [This removes the stale dev_t. The test machine has 2 paths to the disk, one through controller c1 and the other through controller c2.]
# luxadm -e offline /dev/dsk/c1t0d0s2
# luxadm -e offline /dev/dsk/c2t0d0s2
7. Run command "devfsadm -C"
8. Run command "vxdctl enable"
goto LOOP:
[Continue this process until there are no more entries in vxdisk list of corresponding disk c1t0d0s2 ]
Result:
# vxdisk list
DEVICE       TYPE      DISK       GROUP    STATUS
c1t0d0s2     sliced    root-disk  rootdg   online
c1t1d0s2     sliced    disk01     rootdg   online
Now both OS device tree and VxVM are in a clean state corresponding to disk c1t0d0s2.
Managing Dump and Swap Spaces
A crash dump is a disk copy of the physical memory of the
computer at the time of a fatal system error. When a fatal operating
system error occurs, a message describing the error is printed to
the console. The operating system then generates a crash dump by writing
the contents of physical memory to a predetermined dump
device,which is typically a local disk partition. The dump device can
be configured by way of dumpadm. Once the crash dump has been written to
the dump device, the system will reboot.
Following an operating system crash, the savecore
utility is executed automatically during boot to retrieve the crash
dump from the dump device, and write it to a pair of files in your file
system named unix.X and vmcore.X, where X is an integer identifying
the dump. Together, these data files form the saved crash dump. The
directory in which the crash dump is saved on reboot can also be
configured using dumpadm.
By default, the dump device is configured to be an appropriate swap partition.
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /var/crash/systemA
  Savecore enabled: yes
The options for the dumpadm command are:
-d <dump_device>
-s <savecore_dir>
-u    # forcibly update the kernel dump configuration based on /etc/dumpadm.conf
-n    # modify the dump configuration so that savecore is not run automatically on reboot
-y    # modify the dump configuration to automatically run savecore on reboot
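For example, to point crash dumps at a dedicated slice and save cores under a host-specific directory (the device and directory names are placeholders):
# dumpadm -d /dev/dsk/c0t1d0s1 -s /var/crash/`hostname`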
Use mdb (or crash, up to Solaris 8) to analyze the crash dump.
Swap
swap -a swapname    # To add swap space
swap -d swapname    # To delete swap space
swap -l             # To list all the swap spaces
swap -s             # To print summary information about swap usage and availability
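For example, a sketch of adding a 1 GB swap file (the path is only an example):
# mkfile 1024m /export/swapfile
# swap -a /export/swapfile
# swap -l
To make it permanent, add a line like the following to /etc/vfstab:
/export/swapfile  -  -  swap  -  no  -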
Hardware
Drive Replacement Procedures
ASSUMPTIONS
- Servers are manufactured by Sun Microsystems, running Solaris 2.6-10.
- Storage is manufactured by Sun Microsystems.
- JBOD is defined as Just a Bunch Of Disks. No hardware RAID. RAID is achieved using software such as Veritas Volume Manager or Solaris Volume Manager.
- HW-RAID is defined as any storage device that has a built in controller with RAID built in. No software RAID required.
- This procedure is intended to locate the proper drive for replacement, not how to replace said drive.
- This document is not intended as a replacement for understanding and documenting the system configuration.
- This document only covers servers/arrays that support Hot-Plugging.
- Any procedure that removes the drive from Solaris requires that any other applications that might be using the drive be disabled on that drive. For example, Veritas must have already disabled the drive, and all mounts must have been unmounted.
For additional information on any specific platform, see
http://sunsolve.sun.com/handbook_pub/
1. General Procedure for all storage other than HW-RAID
01. Obtain the Serial Number of the disk drive from Solaris:
a. Determine which drive is failing per messages files or console messages.
Example Message:
Mar 24 23:50:17 wpsz36 sendmail[29001]: [ID 801593 mail.info] k2P7oGp28967: to=\root, ctladdr=root (0/1), delay=00:00:01, xdelay=00:00:00, mailer=local, pri=120252, relay=local, dsn=2.0.0, stat=Sent
Mar 25 04:54:22 wpsz36 scsi: [ID 107833 kern.warning] WARNING: /pci@6,4000/scsi@3,1/sd@2,0 (sd47):
Mar 25 04:54:22 wpsz36 Error for Command: read(10) Error Level: Retryable
Mar 25 04:54:22 wpsz36 scsi: [ID 107833 kern.notice] Requested Block: 2219209 Error Block: 2219217
Mar 25 04:54:22 wpsz36 scsi: [ID 107833 kern.notice] Vendor: FUJITSU Serial Number: 0145022958
Mar 25 04:54:22 wpsz36 scsi: [ID 107833 kern.notice] Sense Key: Media Error
Mar 25 04:54:22 wpsz36 scsi: [ID 107833 kern.notice] ASC: 0x11 (<vendor unique code 0x11>), ASCQ: 0x1, FRU: 0x0
b. Using the example from a. above, determine which drive is failed/failing:
In this example, "sd47" is the failed/failing drive.
Using the example from a. above determine the serial number using the
iostat command in Solaris:
# iostat -E | more
Search for "sd47". From this you would find something similar to:
sd47  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: FUJITSU  Product: MAJ3364M SUN36G  Revision: 5804  Serial No: 0145022958
RPM: 10025  Heads: 27  Size: 36.42GB <36418595328 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c. From here you can see that the serial number is:
Serial Number = 0145022958
02. Provide the Serial Number of the drive to the Engineer physically replacing the drive.
In most cases the Serial Number and/or WWN is printed on the handle of the drive. The engineer can then replace that drive with confidence.
03. If the Serial Number is not available for some
reason, then follow the specific procedures for each device as outlined
in the following procedures:
A5000/A5100/A5200
The following procedure will blink the light on the specified drive, using either the enclosure/device nomenclature, or the device path.
# luxadm led_blink enclosure,dev
# luxadm led_blink box1,f3
  box1 = The name of the enclosure
  f3 = Front of the array, the third drive
OR
# luxadm led_blink pathname
# luxadm led_blink /sbus@6,0/SUNW,socal@d,10000/sf@0,0/ssd@w220000203767d881,0
D1000
There are two versions of D1000's, an 8 drive version and a 12 drive version. Depending on your type and how the target id switch is flipped the slot may be different for the target id's. Also, the locations may depend on whether the tray is “split” or not. Many times if the failure is hard, then an amber light will be lit on the drive, making it easy to determine the failed component.
The target id can be determined by the cxtxdxsx
nomenclature used by Solaris, where c0t2d0s0 would mean that t2 is equal
to target 2 on the tray going from left to right.
8-Slot D1000
Option Switch 1=UP and 2=DOWN - DEFAULT
Slot 0 1 2 3 8 9 10 11
Option Switch 1=UP and 2=UP
Slot 8 9 10 11 8 9 10 11
Option Switch 1=DOWN and 2=UP
Slot 8 9 10 11 0 1 2 3
Option Switch 1=DOWN and 2=DOWN
Slot 8 9 10 11 8 9 10 11
12-Slot D1000
Option Switch 1=UP and 2=DOWN – DEFAULT
Slot 0 1 2 3 4 5 8 9 10 11 12 13
Option Switch 1=UP and 2=UP
Slot 8 9 10 11 12 13 8 9 10 11 12 13
Option Switch 1=DOWN and 2=UP
Slot 8 9 10 11 12 13 0 1 2 3 4 5
Option Switch 1=DOWN and 2=DOWN
Slot 8 9 10 11 12 13 8 9 10 11 12 13
D2
D2 is like the D1000, but only comes in a 12-Slot configuration. In this case, however, the target ids for the slots are dependent on whether the tray is Split or not. The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t2d0s0 would mean that t2 is equal to slot 2 on the tray going from left to right.
Split-Bus
0 1 2 3 4 5 8 9 10 11 12 13
Single-Bus
8 9 10 11 12 13 8 9 10 11 12 13
A3500/A3000/RSM2000/A1000
These arrays are controlled by the host based software called “RM6”. This GUI based software should be used to maintain these storage arrays. When a drive fails, this software will illuminate the drive in question. If for some reason the LED is not lit, then the software can be used to manually illuminate it.
D240
On this boot device, only the two center drives are Hot Swappable. If the Optical Media or Tape drives are replaced with disk drives, then those disk drives are NOT hot swappable. The D240 can be in a split-bus configuration, which changes the target id's for each slot. The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t1d0s0 would mean that t1 is in slot t1 or the upper center of the tray.
Split-Bus
Left Slot = t6
Center Top = t0
Center Bottom = t0
Right Slot = t6
Single-Bus
Left Slot = t6
Center Top = t1
Center Bottom = t0
Right Slot = t4
D130
The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t3d0s0 would mean that t3 is in slot t3. Depending on configuration, the SCSI id's on this three drive enclosure are from left to right when looking at the front of the array:
3, 4, or 5 OR 10, 11, 12
SE6020/SE6120/T3/3310/3320/A3500/A3500FC
This hardware-raid controller based system is controlled by accessing the controller via serial port or network port. For this system the drive led should be illuminated, however if for some reason it is not, then the drive has not been failed by the system and may still be active. Please work with 1-800-USA-4SUN to determine the correct course of action.
S1
The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t3d0s0 would mean that t3 is in slot t3. On this array there is a toggle switch that allows you to set the Target id of the first drive in the tray. The first drive in the tray is always that far left drive when looking at the front of the tray. For example, if you set the base target to 2, then the next target is 3 and so on.
Multipack/Unipack
The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t3d0s0 would mean that t3 is in slot t3. The Unipacks are simple to determine the correct drive, as there is only one drive in the enclosure. The Multipack has a toggle switch which tells you which targets it will use. There are several types of Multipacks but they all follow the same basic process for setting target id's. The SCSI switch, located on the back, will determine which target id's are used. The actual target id's are silk screened on the slots.
V880/V890/280R/480R/V490
To hot-plug a drive on these FCAL-based systems you must follow the correct procedures. By following the correct procedure, the correct hot-plug LED will illuminate on the appropriate drive.
To hot-plug a drive in a V880/V890:
# luxadm remove_device -F /dev/rdsk/c#t#d#s2
This will illuminate the Hot-Plug LED next to
the correct drive and prepare the OS for a new drive. There are also
other commands to initiate the new drive into the configuration, but
that is beyond the scope of the document.
E450
This procedure definitively maps a logical drive to a physical drive for the E450.
01. Determine the UNIX physical device name from the
SCSI error message. SCSI error messages are typically displayed in the
system console and logged in the /usr/adm/messages file.
WARNING: /pci@6,4000/scsi@4,1/sd@3,0 (sd228)
Error for Command: read(10)  Error level: Retryable
Requested Block: 3991014  Error Block: 3991269
Vendor: FUJITSU  Serial Number: 9606005441
Sense Key: Media Error
ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0x0
In the example SCSI error message above, the UNIX physical device name is /pci@6,4000/scsi@4,1/sd@3.
02. Determine the UNIX logical device name by
listing the contents of the /dev/rdsk directory. Use the grep command to
filter the output for any occurrence of the UNIX physical device name
determined in Step 1:
% ls -l /dev/rdsk | grep /pci@6,4000/scsi@4,1/sd@3
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s0 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:a,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s1 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:b,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s2 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:c,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s3 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:d,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s4 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:e,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s5 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:f,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s6 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:g,raw
lrwxrwxrwx 1 root root 45 Jan 30 09:07 c12t3d0s7 -> ../../devices/pci@6,4000/scsi@4,1/sd@3,0:h,raw
The resulting output indicates the associated UNIX logical device name. In this example, the logical device name is c12t3d0.
03. Determine the disk slot number using the prtconf
command. Substitute the string disk@ for sd@ in the physical device name
determined in Step 1. The result in this example is
/pci@6,4000/scsi@4,1/disk@3.
Use the grep command to find this name in the output of the prtconf command:
% prtconf -vp | grep /pci@6,4000/scsi@4,1/disk@3
slot#11: '/pci@6,4000/scsi@4,1/disk@3'
NOTE! When using the "format" command you will need to type "/sd@3,0" in place of "/disk@3".
# format /pci@6,4000/scsi@4,1/sd@3,0
The resulting output indicates the corresponding disk slot number. In this example, the disk slot number is 11.
V440/220R/420R
The internal drives correspond to the SCSI target ID. The target id can be determined by the cxtxdxsx nomenclature used by Solaris, where c0t3d0s0 would mean that t3 is in slot 3.
V210/V240
The drive must be removed from Solaris prior to removing from the hardware. The following commands will achieve disabling the drive from Solaris and illuminating the LED next to the drive.
1. Determine the correct Ap_Id of the drive prior to replacing it:
# cfgadm -al
Ap_Id                Type       Receptacle  Occupant      Condition
c0                   scsi-bus   connected   configured    unknown
c0::dsk/c0t0d0       CD-ROM     connected   configured    unknown
c1                   scsi-bus   connected   configured    unknown
c1::dsk/c1t0d0       disk       connected   configured    unknown
c1::dsk/c1t1d0       disk       connected   configured    unknown
c2                   scsi-bus   connected   unconfigured  unknown
2. Unconfigure the drive from Solaris:
# cfgadm -c unconfigure c1::dsk/c1t1d0
3. Verify that the drive is disabled:
# cfgadm -al | grep c1t1d0
c1::dsk/c1t1d0       unavailable  connected  unconfigured  unknown
The Drive LED should now be lit.
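Once the replacement drive has been inserted, a sketch of bringing it back under Solaris control:
# cfgadm -c configure c1::dsk/c1t1d0
# cfgadm -al | grep c1t1d0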
E250
The E250, like other servers, has different target id's for different slot numbers. In the case of the E250, however, they do not necessarily match up easily. Here is the slot to target layout of the E250:
Disk Slot Number   Logical Device Name   Physical Device Name
Slot 0             c0t0d0                /devices/pci@1f,4000/scsi@3/sd@0,0
Slot 1             c0t8d0                /devices/pci@1f,4000/scsi@3/sd@8,0
Slot 2             c0t9d0                /devices/pci@1f,4000/scsi@3/sd@9,0
Slot 3             c0t10d0               /devices/pci@1f,4000/scsi@3/sd@a,0
Slot 4             c0t11d0               /devices/pci@1f,4000/scsi@3/sd@b,0
Slot 5             c0t12d0               /devices/pci@1f,4000/scsi@3/sd@c,0
Jumpstart
Three specific types of services are required for a
successful JumpStart installation. These services, which can be provided
from one or multiple servers, are boot, profile, and install.
JumpStart Boot Server
The JumpStart boot server provides the services most critical to a successful JumpStart installation and supplies the following information:
- IP address of the client
- IP address(es) of both the JumpStart profile and install servers
A JumpStart boot server does not have to be a
separate server from the profile and install servers. However, it may
have to be a separate server when using RARP to provide the IP address
to a JumpStart client. The RARP protocol is not a routed protocol, so
RARP requests are not forwarded by routers between subnets.
JumpStart Profile Server
One of the major benefits of JumpStart technology is the ability to automate system installations so the installation occurs without any human intervention.
This configuration information can be provided through several different mechanisms. The minimum amount of information required to automate a JumpStart installation is the following (a sample sysidcfg illustrating these items follows the list):
- System locale
- Time zone
- Netmask
- IPv6
- Terminal type
- Security policy
- Name service
- Timeserver
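A minimal sysidcfg sketch covering the items above (all values are examples only):
system_locale=en_US
timezone=US/Eastern
terminal=vt100
security_policy=NONE
name_service=NONE
timeserver=localhost
network_interface=primary {netmask=255.255.255.0 protocol_ipv6=no}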
A JumpStart profile server provides the configuration
information needed by a JumpStart client so that the JumpStart
installation can continue without any requests for additional
information. With the add_install_client command, specify the profile
server to be used and a fully qualified path to the sysidcfg file. This command runs on the JumpStart boot server.
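A sketch of registering a client from the install server's Tools directory (host names, paths and the MAC address are placeholders):
# cd /export/install/Solaris_9/Tools
# ./add_install_client -e 8:0:20:ab:cd:ef \
    -s installserver:/export/install \
    -c profileserver:/export/jumpstart \
    -p profileserver:/export/jumpstart \
    jumpclient01 sun4u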
Solaris Flash Update
flarcreate
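For example, to create a compressed flash archive of the running system, excluding the directory that will hold the archive itself (the name and paths are examples):
# flarcreate -n "golden-image" -c -x /flash /flash/golden-image.flar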
Live Upgrade
luactivate
lucancel
lucompare
lucreate
lucurr
ludelete
ludesc
lufslist
lumake
lumount
lurename
lustatus
luumount
luupgrade
luxadm
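A typical Live Upgrade sequence, sketched with example boot environment names and an example target slice:
# lucreate -c sol9 -n sol10 -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n sol10 -s /cdrom/cdrom0
# luactivate sol10
# init 6
lustatus then confirms which boot environment is active and which will be active after the next reboot.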
Networking
Table of Contents
01. Change or bind new IP address to an interface
02. Set multiple IP addresses for a NIC in Solaris
03. Change the default Route in Solaris systems
04. Change the network speed and Duplex settings in Solaris
05. Determine the current speed and duplex setting of a NIC
06. Capture and inspect network packets
07. Rename the hostname
08. IP Multipathing
Change or bind new IP address to an interface
To change the IP address of a NIC
- Change the IP address in /etc/hosts associated with the hostname for the IP address change to take effect after reboot.
- Change /etc/defaultrouter with the address of the host's default gateway, if applicable.
- Change /etc/netmasks file if the new IP address is in different subnet.
- Change the IP address and default route (if on a different subnet) for the changes to take effect immediately.
ifconfig hme0 down
ifconfig hme0 unplumb
ifconfig hme0 plumb
ifconfig hme0 <IP address> netmask <subnet mask> up
route delete default <old-gateway IP>
route add default <new-gateway IP>
- Also, check the /etc/inet/ipnodes file on Solaris 10.
To bind an IP address to a network interface card
# ifconfig qfe0 plumb
# ifconfig qfe0 <IP Address> netmask <subnet> up
- Create a file in the /etc directory - hostname.qfe0 - with the hostname entry
- Add entry on /etc/netmasks if IP address is on different subnet
- Add entry on /etc/inet/hosts file with IP address and hostname
To set multiple IP addresses for interface hme0, run the following commands
ifconfig hme0:1 plumb
ifconfig hme0:1 <ip-address> netmask <sub-net> up
or
ifconfig hme0 addif 192.168.1.10/24 up    (it will add the next pseudo interface, like hme0:1)
Removing the pseudo interface and associated address is done with
ifconfig hme0:1 0.0.0.0 down
ifconfig hme0:1 unplumb
or
ifconfig hme0:1 down unplumb
As with physical interfaces, all you need to do is make the appropriate /etc/hostname.IF:X file.
Another example:
ifconfig hme1 plumb
ifconfig hme1 inet xxx.xxx.xxx.xxx netmask yyy.yyy.yyy.yyy broadcast + up
Where x and y are the numbers for your IP address and netmask. The + sign after broadcast tells ifconfig to calculate the broadcast address from the IP and netmask.
Solaris allows setting the maximum number of virtual interfaces per physical interface, up to a hard maximum of 8192. This value can be set using the ndd command.
/usr/sbin/ndd -set /dev/ip ip_addrs_per_if 4000
To change the default route permanently, modify the /etc/defaultrouter file and add the default router there.
Solaris is often unable to correctly auto-negotiate
duplex settings with a link partner. You can force the NIC into 100Mbit
full-duplex by disabling auto-negotiation and 100Mbit half-duplex
capability.
Example with hme0:
01. To make the changes to the running system, run the following commands.
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_autoneg_cap 0
02. To preserve the speed and duplex settings after a reboot, add the following lines to /etc/system file.
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1
Note: the /etc/system change affects all hme
interfaces if multiple NICs are present (ex. hme0, hme1). In case you
have multiple instances, you need to select the specific hme instance
first, e.g., use the following to select hme1:
ndd -set /dev/hme instance 1
03. To see which parameters are supported by the hme driver
ndd /dev/hme \?
More information about setting network speed in solaris can be found here
I have a script which can be used to determine the
network speed in all types of Ethernet cards. That script can be
downloaded here.
ndd -get /dev/hme link_mode     # 0 = half duplex; 1 = full duplex
ndd -get /dev/hme link_speed    # 0 = 10 Mbps; 1 = 100 Mbps
'snoop' command captures packets from the network and
displays their contents. Captured packets can be displayed as they are
received, or saved to a file for later inspection.
To monitor hme0 port of packets coming from IP address 202.40.224.14
# snoop -d hme0 | grep 202.40.224.14
To monitor hme1 ports of all packets
# snoop -d hme1
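To save a capture for later inspection and read it back (the file name and filter are examples):
# snoop -d hme0 -o /tmp/capture.snoop host 202.40.224.14
# snoop -i /tmp/capture.snoop -v | more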
The following files must be modified in order to rename the hostname:
1) /etc/hosts
2) /etc/net/ticlts/hosts
3) /etc/net/ticots/hosts
4) /etc/nodename
5) /etc/hostname.hmeX
6) /etc/net/ticotsord/hosts
IP Multipathing:
http://docsun.cites.uiuc.edu/sun_docs/C/solaris_9/SUNWaadm/IPNETMPADMIN/p1.html
http://docs.sun.com/source/820-0184-10/link_aggregation.html
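A minimal link-based IPMP sketch for Solaris 10 (interface names, hostnames and the group name are examples); the ifconfig options go in the interface hostname files:
/etc/hostname.ce0:  myhost group ipmp0 netmask + broadcast + up
/etc/hostname.ce1:  group ipmp0 standby up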
Adding default route
http://www.cyberciti.biz/faq/howto-configuring-solaris-unix-static-routes/
NFS and NIS
NFS
NFS shares are defined in /etc/dfs/dfstab file. The dfstab file lists all the file systems that your server shares with its clients. This file also controls which clients can mount a file system.
Contents of /etc/dfs/dfstab file
share [-F nfs] [-o specific-options] [-d description] pathname
share -F nfs -o ro,log=global /export/ftp
To share the contents of /etc/dfs/dfstab file
# shareall
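To verify what is currently shared and mount it from a client (host names are examples):
# share
# dfshares
client# mount -F nfs server01:/export/ftp /mnt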
NIS
All the NIS configuration information is in /var/yp/binding/<name_of_domain>
OK Prompt
How to get into OK prompt
From a Sun keyboard connected: STOP + A
From a serial console: Ctrl + ] or Break key or Ctrl + L or ~break or Alt + B in TeraTerm
Serial console connected to a console server:
  telnet <consoleserver> <connectedportnumber>   (i.e. telnet consolsrv01 10001)
  Press Ctrl + ] - this will get into the "telnet>" prompt
  Type "send break" from the "telnet>" prompt
Disable Stop + A
Adding the line "set abort_enable=0" to /etc/system will disable Stop + A from dropping the system to the OK prompt.
eeprom:
eeprom displays or changes the values of parameters in the EEPROM (OpenBoot configuration settings).
# eeprom
tpe-link-test?=true
scsi-initiator-id=7
keyboard-click?=false
keymap: data not available.
ttyb-rts-dtr-off=false
ttyb-ignore-cd=true
ttya-rts-dtr-off=false
ttya-ignore-cd=true
ttyb-mode=9600,8,n,1,-
ttya-mode=9600,8,n,1,-
pcia-probe-list=1,2,3
.......
........
To set the auto-boot? parameter to true
# eeprom auto-boot?=true
To set the devalias for disks
# eeprom "nvramrc=devalias rootdisk /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w220000203714f2ee,0:a devalias mirrordisk /sbus@6,0/SUNW,socal@d,10000/sf@1,0/ssd@w2200002037093df0,0:a"
Make sure to also set use-nvramrc? to true; these device aliases will then be restored after a system reset.
# eeprom "use-nvramrc?=true"
Some useful OK prompt commands
OK show-devs
OK show-disks
To temporarily create a device alias'''
OK devalias <alias> <path>
To search the scsi devices attached to the primary scsi controller
OK probe-scsi
To search all the scsi devices
OK probe-scsi-all
To view the current NVRAM settings
OK printenv
To set the environment variables
setenv <env> <value>
To set the Open Boot prompt settings to the factory default
OK set-defaults
To set the device alias permanently to NVRAM
OK nvalias <alias> <path>
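For example (the device path is hypothetical; obtain the real path with show-disks):
OK nvalias mydisk /pci@1f,0/pci@1/scsi@8/disk@1,0
OK boot mydisk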
To start nvramrc line editor using a temporary edit buffer
OK nvedit
To save the contents of the nvedit buffer into NVRAM
OK nvstore
To remove the nvalias 'cdrom1' from NVRAMRC
OK nvunalias cdrom1
To find out the Open boot prompt version
OK .version
To find out the ethernet MAC address
OK .enet_addr
To find out the CPU and PCI bus speeds
OK .speed
To display the Model, Architecture, processor, openboot version, ethernet address, host id and etc...
OK banner
To reboot the system from OK Prompt
OK reset-all
Package and Patch Management
To list the files in a installed package
# pkgchk -v <package_name>
OR
# grep <package_name> /var/sadm/install/contents
To list the files in a non-installed package file
# pkgchk -vd <device_name> <package_name>
To install a package
# pkgadd -d <source_dir> package_name
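For example, installing from a package datastream file, verifying it, and removing it again (the package name is a placeholder):
# pkgadd -d /tmp/SUNWexample.pkg SUNWexample
# pkginfo -l SUNWexample
# pkgrm SUNWexample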
Patch Management
To list the installed patches
# showrev -p
OR
# patchadd -p
To install a patch
# patchadd Patch_DIR/PATCH_ID
To install a patch unconditionally
# patchadd -u Patch_DIR/PATCH_ID
To install Multiple patches
# patchadd -M <patch dir> <1st_patch-ID> <2nd_patch_ID> etc..
Some patches installations may give warnings/errors:
- "return code 8" means that the patch in question cannot be applied because the relevant Solaris modules are not installed. This error can be ignored.
- "return code 2" indicates patches are already installed. This error can be ignored.
- "return code 25" means a patch requires another patch that is not yet installed. Try re-installing the bundle when finished; it may be a problem with the order of the installation of patches.
- Other errors should be investigated. Check the patch installation log:
more /var/sadm/install_data/Solaris_*Recommended_log
Performance Monitoring and Tuning
mpstat
mpstat - report per-processor statistics
mpstat 1 2
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
6 93 57 617 414 302 452 14 73 25 0 284 2 3 46 48
7 83 49 444 557 542 452 18 73 23 0 285 2 5 44 49
prstat
http://developers.sun.com/solaris/articles/prstat.html
The prstat -s cpu -n 3 command is used to list the three processes that are consuming the most CPU resources
$ prstat -s cpu -n 3
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
13974 kincaid 888K 432K run 40 0 36:14.51 67% cpuhog/1
27354 kincaid 2216K 1928K run 31 0 314:48.51 27% server/5
14690 root 136M 46M sleep 59 0 0:00.59 2.3% Xsun/1
sar
-u for processor statistics
-r for memory
-k Kernel memory use
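For example, sampling each report every 5 seconds, 3 times:
# sar -u 5 3
# sar -r 5 3
# sar -k 5 3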
vmstat
# vmstat 1 4
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr f0 s0 s1 -- in sy cs us sy id
0 0 0 2708524 711520 0 1 0 0 0 0 0 0 0 0 0 342 100 123 0 2 98
0 0 0 2691220 693244 1 44 0 0 0 0 0 0 0 0 0 355 202 150 0 1 99
0 0 0 2691220 693244 0 0 0 0 0 0 0 0 0 0 0 346 140 125 0 1 99
0 0 0 2691220 693244 0 0 0 0 0 0 0 0 0 0 0 350 141 129 1 0 99
The fields for vmstat's display are
kthr Number of kernel threads in each of the three following states
r run queue
b blocked kernel threads waiting for IO
w idle processes that have been swapped at some time
Memory Report on usage of virtual and real Memory
swap available swap space in KB
free size of the free list in KB
Page
re pages reclaimed from free list
pi pages paged in from filesystem or swap device
po pages paged out to filesystem or swap device
fr pages that have been freed or destroyed
de pages freed after writes
CPU Percentage usage of CPU
us user time
sy system time
id idle time
truss
The truss utility executes the specified command and produces a trace of the system calls it performs, the signals it receives, and the machine faults it incurs. Each line of the trace output reports either the fault or signal name or the system call name with its arguments and return value(s).
-o outfile -- Send the output to outfile instead of standard out
-e shows the environment strings that are passed in each exec call
-d includes time stamps along with each line of trace output
-c counts system calls, faults and signals instead of displaying them
-f follows child processes created by fork
-p <pid> to attach to one or more process already running
To attach the truss to already running process
# truss -p <pid>
Memory and Paging spaces
To find out the total memory
# prtconf | grep Mem
Memory size: 8192 Megabytes
sar -r reports unused memory pages (freemem; multiply by the page size to get bytes) and the number of 512-byte disk blocks available for page swapping (freeswap).
# pagesize
8192
# sar -r 1 4
14:51:30 freemem freeswap
14:51:31 537978 37760301
14:51:32 537978 37760301
14:51:33 538037 37760824
14:51:34 538037 37760824
Average 538007 37760563
Swap spaces
Swap space usage: With the exception
of 'swap -l', most utilities on Solaris use 'swap' to mean 'Virtual
memory', not the space on disk dedicated for backing store. So, swap -s,
df -k and vmstat commands calculate swap space like below.
swap = physical memory + swap space - memory used by the OS/kernel
To list all the swap spaces
- swap -l
/dev/dsk/c0t1d0s4 32,12 16 4198368 4191696
/dev/dsk/c0t1d0s5 32,13 16 4198368 4191888
/dev/dsk/c0t1d0s1 32,9 16 4198368 4190192
/dev/dsk/c0t1d0s3 32,11 16 4198368 4192000
- swap -s
Printing
lp
lpadmin
enable
disable
accept
reject
lpstat
lpmove
lpsched
lpshut
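A sketch of setting up and testing a simple local printer (the printer name and device are examples):
# lpadmin -p labprinter -v /dev/term/a
# accept labprinter
# enable labprinter
# lpstat -p labprinter -l
# lp -d labprinter /etc/hosts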
SAN
To find out the wwn of SUN/Qlogic Fibre cards
First way:
# prtpicl -v | grep wwn
:node-wwn  20 00 00 e0 8b 10 22 1a
:port-wwn  21 00 00 e0 8b 10 22 1a
:node-wwn  20 00 00 e0 8b 81 75 ed
:port-wwn  21 00 00 e0 8b 81 75 ed
:node-wwn  50 06 0e 80 03 9d 7e 06
:port-wwn  50 06 0e 80 03 9d 7e 06
:node-wwn  50 06 0e 80 03 9d 7e 06
:port-wwn  50 06 0e 80 03 9d 7e 06
:node-wwn  50 06 0e 80 03 9d 7e 06
:port-wwn  50 06 0e 80 03 9d 7e 06
:node-wwn  20 00 00 e0 8b a1 75 ed
:port-wwn  21 01 00 e0 8b a1 75 ed
:node-wwn  50 06 0e 80 03 9d 7e 16
:port-wwn  50 06 0e 80 03 9d 7e 16
:node-wwn  50 06 0e 80 03 9d 7e 16
:port-wwn  50 06 0e 80 03 9d 7e 16
:node-wwn  50 06 0e 80 03 9d 7e 16
:port-wwn  50 06 0e 80 03 9d 7e 16
Second Way:
# prtconf -vp | grep -i wwn
port-wwn:  210000e0.8b10221a
node-wwn:  200000e0.8b10221a
port-wwn:  210000e0.8b8175ed
node-wwn:  200000e0.8b8175ed
port-wwn:  210100e0.8ba175ed
node-wwn:  200000e0.8ba175ed
Third Way:
# luxadm qlgc
Found Path to 3 FC100/P, ISP2200, ISP23xx Devices
Opening Device: /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2200 Host Adapter Driver: 1.14.02 05/28/03
Opening Device: /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04
Opening Device: /devices/pci@1f,4000/SUNW,qlc@4,1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04
Complete
# luxadm -e dump_map /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0:devctl
To list all the Fibre channel devices
- cfgadm -al -o show_FCP_dev
Ap_Id                          Type       Receptacle  Occupant      Condition
c2                             fc-fabric  connected   configured    unknown
c2::50060e800329d500,0         disk       connected   configured    unknown
c2::50060e800329d500,1         disk       connected   configured    unknown
c2::50060e800329d500,2         disk       connected   configured    unknown
c2::50060e800329d500,3         disk       connected   configured    unknown
c3                             fc-fabric  connected   configured    unknown
c3::50060e800329d500,0         disk       connected   configured    unknown
c3::50060e800329d500,1         disk       connected   configured    unknown
c3::50060e800329d500,2         disk       connected   configured    unknown
c3::50060e800329d500,3         disk       connected   configured    unknown
c4                             fc         connected   unconfigured  unknown
c7                             fc         connected   unconfigured  unknown
On solaris 10
# fcinfo hba-port
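On Solaris 10, fcinfo can also show link statistics and the remote ports visible through an HBA port (the WWN below is a placeholder):
# fcinfo hba-port -l
# fcinfo remote-port -p 10000000c9492847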
Run the following to get newly added LUNs detected.
# luxadm -e port | grep -v NOT | cut -f1 -d: | while read FIL
do
   luxadm -e forcelip $FIL
done
# devfsadm
Emulex Fibre cards
On rare occasions it may be necessary to reset an Emulex HBA to re-establish connectivity to fabric services, and to find new targets and LUNs that have been added to the fabric. This can be accomplished with a system reboot (which takes a good deal of time, and is not an option when availability is the primary service driver), or with the Emulex hbacmd(1m) or lputil(1m) utilities. To reset an adaptor with hbacmd(1m), the WWPN is passed as an option:
$/usr/sbin/hbanyware/hbacmd Reset 10:00:00:00:c9:49:2c:b4
Reset HBA 10:00:00:00:c9:49:2c:b4
To reset an HBA through the lputil(1m) text-base menu,
you can select option 4 from the main menu, and pick the adaptor to
reset:
$ /usr/sbin/lputil/lputil
LightPulse Common Utility for Solaris/SPARC. Version 2.0a5 (4/7/2005).
Copyright (c) 2005, Emulex Corporation
Emulex Fibre Channel Host Adapters Detected: 3
Host Adapter 0 (lpfc3) is an LP9802 (Ready Mode)
Host Adapter 1 (lpfc4) is an LP9802 (Ready Mode)
Host Adapter 2 (lpfc5) is an LP9802 (Ready Mode)
MAIN MENU
1. List Adapters
2. Adapter Information
3. Firmware Maintenance
4. Reset Adapter
5. Persistent Bindings
0. Exit
Enter choice => 4
0. lpfc3
1. lpfc4
2. lpfc5
Select an adapter => 0
MAIN MENU
1. List Adapters
2. Adapter Information
3. Firmware Maintenance
4. Reset Adapter
5. Persistent Bindings
0. Exit
Enter choice => 0
Once the adaptor is reset, you should see a message similar to the following in the system logfile:
Sep 7 15:23:49 tiger lpfc: [ID 728700 kern.warning] WARNING: lpfc3:1303:LKe:Link Up Event x1 received Data: x1 x0 x8 x14
Emulex makes a killer HBA, and provides several awesome software utilities to manage host side SAN connectivity.
To view all of the Emulex HBAs installed in a server, hbacmd(1m) can be invoked with the “listhbas” option:
$ hbacmd listhbas
Manageable HBA List
Port WWN : 10:00:00:00:c9:49:28:42
Node WWN : 20:00:00:00:c9:49:28:42
Fabric Name: 10:00:00:60:69:80:2d:ee
Flags : 8000f980
Host Name : server01
Mfg : Emulex Corporation
Port WWN : 10:00:00:00:c9:49:28:47
Node WWN : 20:00:00:00:c9:49:28:47
Fabric Name: 10:00:00:60:69:80:0e:fc
Flags : 8000f980
Host Name : fraudmgmt01
Mfg : Emulex Corporation
[ ..... ]
To list firmware versions, serial numbers, WWN and a variety of model specific information, the hbacmd(1m) utility can be invoked with the “hbaattrib” option and the WWN to probe:
$ hbacmd HBAAttrib 10:00:00:00:c9:49:28:47
HBA Attributes for 10:00:00:00:c9:49:28:47
Host Name : server01
Manufacturer : Emulex Corporation
Serial Number : MS51403247
Model : LP9802
Model Desc : Emulex LightPulse LP9802 2 Gigabit PCI Fibre Channel Adapter
Node WWN : 20 00 00 00 c9 49 28 47
Node Symname : Emulex LP9802 FV1.91A1 DV6.02f
HW Version : 2003806d
Opt ROM Version: 1.50a4
FW Version : 1.91A1 (H2D1.91A1)
Vender Spec ID : 80F9
Number of Ports: 1
Driver Name : lpfc
Device ID : F980
HBA Type : LP9802
Operational FW : SLI-2 Overlay
SLI1 FW : SLI-1 Overlay 1.91a1
SLI2 FW : SLI-2 Overlay 1.91a1
IEEE Address : 00 00 c9 49 28 47
Boot BIOS : Fcode Firmware1.50a4
Driver Version : 6.02f; HBAAPI(I) v2.0.e, 11-07-03
To view host port information (e.g., port speed, device paths) and fabric parameters (e.g., fabric ID (S_ID), # of ports zoned along with this port), hbacmd(1m) can be invoked with the “portattrib” option:
$ hbacmd PortAttrib 10:00:00:00:c9:49:28:47
Port Attributes for 10:00:00:00:c9:49:28:47
Node WWN : 20 00 00 00 c9 49 28 47
Port WWN : 10 00 00 00 c9 49 28 47
Port Symname :
Port FCID : 6D0900
Port Type : Fabric
Port State : Operational
Port Service Type : 6
Port Supported FC4 : 00 00 01 20 00 00 00 01
                      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Port Active FC4     : 00 00 01 20 00 00 00 01
                      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Port Supported Speed: 2 GBit/sec.
Port Speed : 2 GBit/sec.
Max Frame Size : 2048
OS Device Name : /devices/pci@84,2000/lpfc@1
Num Discovered Ports: 3
Fabric Name : 10 00 00 60 69 80 0e fc
In addition to providing super high performance HBAs, Emulex has done a great job with their CLI (their GUI is actually pretty good as well) management utilities!
HP storage specific
dlmsetconf dlmcfgmgr /kernel/drv/sd.conf devfsadm -c sysdef
Security
To display user's login status
# logins -x -l rimmer
rimmer  500  staff  10  Annalee J. Rimmer
  /export/home/rimmer
  /bin/sh
  PS 010103 10 7 -1
The last field specifies the password aging information:
* Last date that the password was changed
* Number of days that are required between changes
* Number of days before a change is required
* Warning period
To display users without passwords
# logins -p
To find out the password age status
passwd -r files -sa
To force the user "user1" to change his password during next login
# passwd -f user1
To disable password aging
Change the parameters below in /etc/default/passwd:
MAXWEEKS=
MINWEEKS=
To Temporarily Disable User Logins
a. Create the /etc/nologin file in a text editor. Include a message about system availability.
b. Close and save the file.
To monitor failed login attempts
Note:The loginlog file contains one
entry for each failed attempt. Each entry contains the user's login
name, tty device, and time of the failed attempt. If
a person makes fewer than five unsuccessful attempts, no failed
attempts are logged. This procedure does not capture failed logins from a
CDE or GNOME login attempt
1. Create the loginlog file in the /var/adm directory
   # touch /var/adm/loginlog
2. Change the permission of the file to be read/write only by root
   # chmod 600 /var/adm/loginlog
3. Change the group membership to sys
   # chgrp sys /var/adm/loginlog
4. Verify that the log works. Log in with a wrong password 5 times and check the /var/adm/loginlog file
   # more /var/adm/loginlog
   jdoe:/dev/pts/2:Tue Nov 4 10:21:10 2003
   jdoe:/dev/pts/2:Tue Nov 4 10:21:21 2003
   jdoe:/dev/pts/2:Tue Nov 4 10:21:30 2003
   jdoe:/dev/pts/2:Tue Nov 4 10:21:41 2003
   jdoe:/dev/pts/2:Tue Nov 4 10:21:51 2003
To monitor all the failed login attempts
01. Check the /etc/default/login file and make sure the following lines are there:
    SYSLOG=YES
    SYSLOG_FAILED_LOGINS=0
02. Create the /var/adm/authlog file and set up appropriate permissions:
    # touch /var/adm/authlog
    # chmod 600 /var/adm/authlog
    # chgrp sys /var/adm/authlog
03. Edit the /etc/syslog.conf file to log failed password attempts:
    auth.info    /var/adm/authlog
    auth.notice  /var/adm/authlog
04. Restart the syslog daemon.
05. Verify the log works:
    more /var/adm/authlog
To close the connection after three login failures
Uncomment the RETRIES entry in the /etc/default/login file, then set the value of RETRIES to 3. Your edits take effect immediately. After three login retries in one session, the system closes the connection.
Password Aging, Length
New Accounts
/etc/default/passwd is the file related to password aging on new accounts (a sample follows the parameter list below).
- MAXWEEKS= is the maximum number of weeks a password may be used.
- MINWEEKS= is the minimum number of weeks allowed between password changes.
- WARNWEEKS= (not present by default) is the number of weeks' warning given before a password expires.
- PASSLENGTH= is the Minimum password length
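For example, a hypothetical /etc/default/passwd enforcing 8-character passwords that expire every 12 weeks:
MAXWEEKS=12
MINWEEKS=1
WARNWEEKS=2
PASSLENGTH=8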
Existing Accounts
/usr/bin/passwd is used to modify password aging on existing accounts. passwd does not update the last password change field (field 3) in /etc/shadow, so passwords could expire immediately after running it.
Example
User user1 was already created with no password aging (MAXWEEKS= in /etc/default/passwd). To configure the following:
- A minimum of 7 days between password changes.
- Password expiration after 90 days.
- Begin warning about password expiration 14 days in advance.
# /usr/bin/passwd -n 7 -w 14 -x 90 user1
Sun Fire
Sun fire midrange system controller command reference
To connect to system controller
Telnet Hostname-sc and select the 0 option for the platform shell
System Controller 'it-sun99-sc':
Type 0 for Platform Shell
Type 1 for domain A console
Type 2 for domain B console
Type 3 for domain C console
Type 4 for domain D console
Input:
To go from the domain console to the domain shell
#. or Ctrl + Break or Ctrl + ], and from the telnet prompt type send break (e.g. telnet> send break)
To go to the domain console from a domain shell, type "resume" from the domain shell
hostname-sc:A> resume
To go to the main system controller from a domain shell, type "disconnect"
hostname-sc:A> disconnect
To get a OK prompt of a domain, from that domain shell, type "break"
hostname-sc:A> break
To show the boards and status
hostname-sc:SC> showboards
To find the firmware version of the CPU boards
hostname-sc:SC>showboard -proms
To copy the firmware from one CPU board (SB0) to another (SB2)
hostname-sc:SC>flashupdate -c sb0 sb2
To enable a disabled component in SB0
hostname-sc:SC> enablecomponent /N0/SB0/P2
Add the SB2 board to domain A
hostname-sc:SC> addboard -d A SB2
To go to the domain A console from the platform shell
hostname-sc:SC> cons -d A
Power on the domain A
hostname-sc:A> setkeyswitch on
Powering boards on ...
To see the all the boards
hostname-sc:SC> showboards
Find the details in board SB2
hostname-sc:SC> showcomponent -v SB2
To setup the system in single or Dual partition mode
sun3800-sc:SC> setupplatform -p partition
Partition Mode
--------------
Configure chassis for single or dual partition mode? [dual]: dual
sun3800-sc:SC>
4800/6800 System controller Password unlock procedure
For Firmware version 5.11.3
If the platform administrator's password is lost, the following procedure can be used to clear the password.
1. Connect the serial console in the SC card and
reboot the System Controller (SC) by pressing the reset button in the SC
controller.
2. The normal sequence of a System Controller
rebooting is for SCPOST to run, then ScApp. You'll need to wait for
ScApp to start loading and show the 'Copyright 2001 Sun Microsystems, Inc. All rights reserved' message, then
Hit Control-A
It will display "-> Task not found" and continue to load until the SC login shell.
Press Enter; it will drop to the prompt below:
->
3. Make a note of the current boot flags settings. This will be used to restore the boot flags to the original value.
-> getBootFlags()
value = 48 = 0xC = '0'
- Save the 0x number for # 7 below.
4. Change the boot flags to disable autoboot.
-> setBootFlags (0x10)
5. Reboot the System Controller. Once reset, it will stop at the -> prompt. Use CONTROL-X or the reboot command to do this.
6. Enter the following commands at the -> prompt.
-> kernelTimeSlice 5
-> javaConfig
-> javaClassPathSet "/sc/flash/lib/scapp.jar:/sc/flash/lib/jdmkrt.jar"
-> javaLoadLibraryPathSet "/sc/flash"
-> java "-Djava.compiler=NONE -Dline.separator=\r\n sun.serengeti.cli.Password"
Wait for the following System Controller
messages to display. Your prompt will come back right away, but it'll
take about 10 seconds for these messages to show up:
Clearing SC Platform password... Done. Reboot System Controller.
7. After the above messages are displayed, restore the bootflags to the original value using the setBootFlags() command.
-> setBootFlags (0xC)
* Use the value returned from #3 above.
8. Reboot the System Controller using CONTROL-X
or the reboot command. Once rebooted, the platform administrator's
password will be cleared.
FOR Firmware version 5:17.x and above
1. Connect the serial console in the System controller
2. Press the Reset button in the SC
3. Wait for ScApp to start loading; when you see the Copyright message "Copyright 2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms."
4. Press Ctrl + A - you will see these messages:
Task not found
Spawning new shell..
->
It will continue to load the ScApp until the menu is displayed.
5. Press Enter from the platform login menu.
6. Make note of the current BootFlag value
getBootFlags()
value = 13 = 0xd
Note: Save the 0x number for step 13 below.
7. Change the boot flags to disable autoboot
-> setBootFlags (0x10)
Value = 13 = 0xd
8. Verify the changed value
-> getBootFlags()
value = 16 = 0x10
9. Reboot the SC using either “Ctrl + x” or press the reset
10. If the firmware is between 5.17.x and 5.18.x then
-> ld 1,0,"/sc/flash/vxAddOn.o"
-> uncompressJVM("/sc/flash/JVM.zip","/sc/flash/JVM");
If the firmware is 5.19.x and later then
-> ld 1,0,"/sc/flash/vxAddOn.o"
-> uncompressFile("/sc/flash/JVM.zip","/sc/flash/JVM");
11. Enter the following commands at the -> prompt
-> kernelTimeSlice 5
-> javaConfig
-> javaClassPathSet "/sc/flash/lib/scapp.jar:/sc/flash/lib/jdmkrt.jar"
-> javaLoadLibraryPathSet "/sc/flash"
-> java "-Djava.compiler=NONE -Dline.separator=\r\n sun.serengeti.cli.Password"
12. Wait for the following message to display (it may take up to 10 seconds after the prompt returns to ->):
Clearing SC Platform password … Done. Reboot System Controller.
13. After the above message is displayed, restore the boot flags to the original value:
-> setBootFlags(0xd)
14. Reboot the SC using Ctrl + X or the reboot command.
15. If the system has more than one system controller configured for failover, run the following command from the platform shell:
Hostname-sc:SC> setfailover on
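To confirm the failover state afterwards, the showfailover command can be run from the platform shell (a quick check; this assumes a firmware release that supports SC failover):
hostname-sc:SC> showfailover -v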
Sun Volume Manager
01. State Database and Replicas
1.1 Creating or removing state database replicas
1.2 Displaying the replica's status
1.3 Deleting state replicas
02. Stripe and Concatenation
03. Volume Tasks
3.1 Creating a Stripe - (RAID 0)
3.2 Creating a Concatenation - (RAID 0)
3.3 Creating a simple mirror from new partitions
3.4 Mirroring Partitions with data which can be unmounted
3.5 Mirroring Partitions with data which can not be unmounted - root and /usr
3.6 Creating RAID 5
3.7 Removing a Volume
04. Hotspare Pool
4.1 Associating a Hot Spare Pool with Submirrors
4.2 Associating or changing a Hot Spare Pool with a RAID5 Metadevice
4.3 Adding a Hot Spare Slice to All Hot Spare Pools
05. TransMeta Device
5.1 TransMeta device for unmountable partition
5.2 TransMeta device for non unmountable partition
5.3 TransMeta device using Mirror
04. Cookbooks
4.1 Recovering from a Boot Disk Failure with Solstice
State Database and Replicas:
The Solaris Volume Manager state database contains configuration and status information for all volumes, hot spares, and disk sets. Solaris Volume Manager maintains multiple copies (replicas) of the state database to provide redundancy and to prevent the database from being corrupted during a system crash.
If all state database replicas are lost, you could lose all data that is stored on your Solaris Volume Manager volumes.
For this reason, it is good practice to create enough state database
replicas on separate drives and across controllers to prevent
catastrophic failure. You can add additional state database replicas to
the system at any time.
If your system loses a state database replica, SVM
must figure out which state database replicas still contain valid data.
SVM determines this information by using a majority consensus algorithm.
This algorithm requires that a majority (half + 1) of the state
database replicas be available and in agreement before any of them are
considered valid. You must create at least three state database replicas when you set up your disk configuration. A consensus can be reached as long as at least two of the three state database replicas are available.
The majority consensus algorithm provides the following:
- The system will stay running if at least half of the state database replicas are available.
- The system will panic if fewer than half of the state database replicas are available
- The system will not reboot into multiuser mode unless a majority (half + 1) of the total number of state database replicas is available.
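As a sketch of a layout that satisfies the majority rule (the device names here are only examples), six replicas spread across three disks on two controllers leave four of six replicas available if any single disk fails:
# metadb -a -f -c 2 c0t0d0s7
# metadb -a -c 2 c0t1d0s7
# metadb -a -c 2 c1t0d0s7
# metadb
(All replicas should show the 'a' and 'u' flags, i.e. active and up to date.)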
Each state database replica occupies 4 Mbytes (8192 disk sectors) by default. Replicas can be stored on the following devices:
- a dedicated local disk partition
- a local partition that will be part of a volume
- a local partition that will be part of a UFS logging device
Note –
Replicas cannot be stored on the root (/), swap, or /usr slices, or on
slices that contain existing file systems or data. After the replicas
have been stored, volumes or file systems can be placed on the same
slice. When a state database replica is placed on a slice that
becomes part of a volume, the capacity of the volume is reduced by the
space that is occupied by the replica(s).
# metadb -a -c n -l nnnn -f ctds-of-slice
    -a              specifies to add a state database replica
    -f              specifies to force the operation, even if no replicas exist
    -c n            specifies the number of replicas to add to the specified slice
    -l nnnn         specifies the size of the new replicas, in blocks
    ctds-of-slice   specifies the name of the component that will hold the replica

# metadb -a -f -c2 c0t1d0s7
# metadb -a -f -c2 c0t0d0s7
# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0t1d0s7
     a        u         8208            8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c0t0d0s7
     a        u         8208            8192            /dev/dsk/c0t0d0s7
If you are replacing existing state database
replicas, you might need to specify a replica size. Particularly if you
have existing state database replicas (on a system upgraded from
Solstice DiskSuite, perhaps) that share a slice with a file system, you
must replace existing replicas with other replicas of the same size or
add new replicas in a different location.
Caution – Do not replace
default-sized (1034 block) state database replicas from Solstice
DiskSuite with default-sized Solaris Volume Manager replicas on a slice
shared with a file system. If you do, the new replicas will overwrite
the beginning of your file system and corrupt it.
# metadb -a -c 3 -l 1034 c0t0d0s7
# metadb
        flags           first blk       block count
...
     a        u         16              1034            /dev/dsk/c0t0d0s7
     a        u         1050            1034            /dev/dsk/c0t0d0s7
     a        u         2084            1034            /dev/dsk/c0t0d0s7
# metadb -i
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0t1d0s7
     a        u         8208            8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c0t0d0s7
     a        u         8208            8192            /dev/dsk/c0t0d0s7
 r - replica does not have device relocation information
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors
# metadb -d -f ctds-of-slice
    -d              specifies to delete a state database replica
    -f              specifies to force the operation, even if no replicas exist
    ctds-of-slice   specifies the name of the component that holds the replica

# metadb -d -f c0t0d0s7
RAID 0 (Stripe and Concatenation)
RAID 0 (Stripe) Volume: A RAID 0 (stripe) volume spreads data equally across all components in the stripe, forming one logical storage unit. These segments are interleaved. When you create a stripe, you can set the interlace value or use the Solaris Volume Manager default interlace value of 16 Kbytes. Once you have created the stripe, you cannot change the interlace value.
RAID 0 (Concatenation) Volume: A concatenated volume writes data to the first available component until it is full,
and then moves to the next available component. The data for a
concatenated volume is organized serially and adjacently across disk
slices, forming one logical storage unit. A concatenation enables
you to dynamically expand storage capacity and file system sizes
online. A concatenation can also expand any active and mounted UFS file
system without having to bring down the system. You can also create a
concatenation from a single component. Later, when you need more
storage, you can add more components to the concatenation.
Note – To
increase the capacity of a stripe, you need to build a concatenated
stripe. You must use a concatenation to encapsulate root (/), swap,
/usr, /opt, or /var when mirroring these file systems.
RAID 0 (Concatenated Stripe) Volume: A concatenated stripe is a stripe that has been expanded by adding additional components (stripes).
The following example uses the metainit command to create a striped volume named /dev/md/rdsk/d10 from two slices.
metainit {vol-name} {number-of-stripes} {components-per-stripe} {component-names…} [-i interlace-value]
01. Create Stripe:
# metainit d10 1 2 c0t1d0s0 c0t1d0s1
d10: Concat/Stripe is setup
02. Use the metastat command to query your new volume
# metastat d10
d10: Concat/Stripe
    Size: 4194288 blocks (2.0 GB)
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block   Dbase   Reloc
        c0t1d0s0         0        No      Yes
        c0t1d0s1         0        No      Yes

Device Relocation Information:
Device    Reloc   Device ID
c0t1d0    Yes     id1,dad@AST38410A=5CS09PSH
03. Create file system on it
# newfs /dev/md/rdsk/d10
04. Mount the file system
# mount /dev/md/dsk/d10 /mnt
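To have the striped volume mounted automatically at boot, an entry along the following lines could be added to /etc/vfstab (the /mnt mount point is just for illustration):
/dev/md/dsk/d10   /dev/md/rdsk/d10   /mnt   ufs   2   yes   -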
The method used for creating a concatenated volume is very similar to that used for creating a striped volume - both use the metainit command (with different options) and the same method for creating and mounting a UFS file system.
Creating RAID 0 (Concatenation) Volumes:
metainit {volume-name} {number-of-stripes} { [components-per-stripe] | [component-names]…}
Creating a Concatenation of One Slice:
# metainit d25 1 1 c0t1d0s2
d25: Concat/Stripe is setup
This example shows the creation of a
concatenation, d25, that consists of one stripe (the first number 1)
made of a single slice (the second number 1 in front of the slice). The
system verifies that the volume has been set up.
Note – This example shows a concatenation that can safely encapsulate existing data.
Create concatenate volume with 2 stripes:
# metainit d20 2 1 c0t1d0s3 1 c0t1d0s4
d20: Concat/Stripe is setup
Expanding a nonmetadevice slice Filesystem:
# metainit d25 2 1 c0t1d0s2 1 c0t2d0s2
d25: Concat/Stripe is setup
This example shows the creation of a
concatenation called d25 out of two slices, /dev/dsk/c0t1d0s2 (which
contains a file system) and /dev/dsk/c0t2d0s2. The file system must
first be unmounted
Caution – The first slice in the metainit command must be the slice that contains the file system. If not, you will corrupt your data.
Expanding an existing RAID 0 volume filesystem:
01. Concatenate existing stripes using metattach command:
# metattach d20 c0t1d0s5
d20: component is attached
02. Extend the filesystem
# growfs -M /mnt /dev/md/rdsk/d20
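As a quick check (assuming the file system is still mounted on /mnt), df should now report the larger size:
# df -k /mnt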
Creating a simple mirror from new partitions
1. Create two stripes for the two submirrors, d21 & d22
# metainit d21 1 1 c0t0d0s2
d21: Concat/Stripe is setup
# metainit d22 1 1 c1t0d0s2
d22: Concat/Stripe is setup
2. Create a mirror device (d20) using one of the submirror (d21)
# metainit d20 -m d21 d20: Mirror is setup
3. Attach the second submirror (d22) to the main mirror device (d20)
# metattach d20 d22
d20: Submirror d22 is attached
4. Make file system on new metadevice
# newfs /dev/md/rdsk/d20
Edit /etc/vfstab to mount /dev/md/dsk/d20 on a mount point.
Mirroring a partition with data which can be unmounted (example: /local)
# metainit -f d1 1 1 c1t0d0s0
d1: Concat/Stripe is setup
# metainit d2 1 1 c2t0d0s0
d2: Concat/Stripe is setup
# metainit d0 -m d1
d0: Mirror is setup
# umount /local
(Edit the /etc/vfstab file so that the file system references the mirror)
# mount /local
# metattach d0 d2
d0: Submirror d2 is attached
Mirroring /usr filesystem
# metainit -f d12 1 1 c0t3d0s6
d12: Concat/Stripe is setup
# metainit d22 1 1 c1t0d0s6
d22: Concat/Stripe is setup
# metainit d2 -m d12
d2: Mirror is setup
(Edit the /etc/vfstab file so that /usr references the mirror)
# reboot
...
# metattach d2 d22
d2: Submirror d22 is attached
Mirroring / (root) filesystem
# metainit -f d11 1 1 c0t3d0s0
d11: Concat/Stripe is setup
# metainit d12 1 1 c1t3d0s0
d12: Concat/Stripe is setup
# metainit d10 -m d11
d10: Mirror is setup
# metaroot d10
# lockfs -fa
# reboot
...
# metattach d10 d12
d10: Submirror d12 is attached
Make Mirrored disk bootable
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
Create an alternate name for the mirrored boot disk
a.) Find physical path name for the second boot disk
# ls -l /dev/rdsk/c1t3d0s0
lrwxrwxrwx 1 root root 55 Sep 12 11:19 /dev/rdsk/c1t3d0s0 -> ../../devices/sbus@1,f8000000/esp@1,200000/sd@3,0:a
b.) Create an alias for booting from disk2
ok> nvalias bootdisk2 /sbus@1,f8000000/esp@1,200000/sd@3,0:a
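To have the OpenBoot PROM fall back to the mirror automatically if the primary disk fails, the boot-device list can be extended with the new alias (a sketch; 'disk' is assumed to be the existing alias for the primary boot disk):
ok> setenv boot-device disk bootdisk2
ok> boot bootdisk2      (to test booting from the mirror manually)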
Creating RAID 5
The system must contain at least three state database replicas before you can create RAID5 metadevices.
A RAID5 metadevice can only handle a single slice failure. A RAID5 metadevice can be grown by concatenating additional slices to the metadevice. The new slices do not store parity information, but they are parity protected. The resulting RAID5 metadevice continues to handle a single slice failure. Creating a RAID5 metadevice from a slice that contains an existing file system will erase the data during the RAID5 initialization process. The interlace value is key to RAID5 performance. It is configurable at the time the metadevice is created; thereafter, the value cannot be modified. The default interlace value is 16 Kbytes, which is reasonable for most applications.
A RAID level 5 metadevice is defined using the -r
option with an interlace size of 20 Kbytes. The data and parity segments
are striped across the slices c1t1d0s2, c1t2d0s2, and c1t3d0s2.
To setup raid5 on three slices of different disks
# metainit d10 -r c1t1d0s2 c1t2d0s2 c1t3d0s2 -i 20k
d10: RAID is setup
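Parity initialization takes a while after metainit; progress can be watched with metastat (a quick check; the volume typically shows an initializing state until parity generation completes):
# metastat d10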
Removing a Volume
01. Unmount the filesystem
# umount /mnt
02. Remove the volume using the 'metaclear' command.
# metaclear d20
d20: Concat/Stripe is cleared
HotSpare Pool
A hot spare pool is a collection of slices reserved by DiskSuite to be automatically substituted in case of a slice failure in either a submirror or a RAID5 metadevice. A hot spare cannot be a metadevice, and it can be associated with multiple submirrors or RAID5 metadevices. However, a submirror or RAID5 metadevice can only be associated with one hot spare pool. Replacement is based on a first fit for the failed slice, and failed slices need to be replaced with repaired or new slices. Hot spare pools may be allocated, deallocated, or reassigned at any time unless a slice in the hot spare pool is being used to replace a damaged slice of its associated metadevice.

Associating a hot spare pool with submirrors:
# metaparam -h hsp100 d10
# metaparam -h hsp100 d11
# metastat d0
d0: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d11
      State: Okay
...
d10: Submirror of d0
    State: Okay
    Hot spare pool: hsp100
...
d11: Submirror of d0
    State: Okay
    Hot spare pool: hsp100
Associating a hot spare pool with a RAID5 metadevice:
# metaparam -h hsp001 d10
# metastat d10
d10: RAID
    State: Okay
    Hot spare pool: hsp001
Adding a hot spare slice to all hot spare pools:
# metahs -a all /dev/dsk/c3t0d0s2
hsp001: Hotspare is added
hsp002: Hotspare is added
hsp003: Hotspare is added
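To verify the pool afterwards, metastat can be run against the hot spare pool name (a quick check; hsp001 is the pool from the example above):
# metastat hsp001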
Creating a Trans Meta Device
Trans metadevices enable UFS logging. There is one logging device and one master device; all file system changes are written to the logging device and later posted to the master device. This greatly reduces fsck time for very large file systems, because fsck has to check only the logging device, which is usually 64 MB maximum. The logging device should preferably be mirrored and located on a different drive and controller than the master device.
UFS logging cannot be done for the root (/) partition.
For /home2 Filesystem
1. Setup the metadevice
# umount /home2
# metainit d63 -t c0t2d0s2 c2t2d0s1
d63: Trans is setup
Logging becomes effective for the file system when it is remounted
2. Change vfstab entry & reboot
from:  /dev/md/dsk/d2    /dev/md/rdsk/d2    /home2  ufs  2  yes  -
to:    /dev/md/dsk/d63   /dev/md/rdsk/d63   /home2  ufs  2  yes  -
# mount /home2
Next reboot displays the following message for logging device
# reboot
...
/dev/md/rdsk/d63: is logging
For /usr Filesystem
1.) Setup the metadevice
# metainit -f d20 -t c0t3d0s6 c1t2d0s1
d20: Trans is setup
2.) Change vfstab entry & reboot:
from:  /dev/dsk/c0t3d0s6   /dev/rdsk/c0t3d0s6   /usr  ufs  1  no  -
to:    /dev/md/dsk/d20     /dev/md/rdsk/d20     /usr  ufs  1  no  -
# reboot
TransMeta device using a mirror (for /home2)
1.) Setup the metadevice
# umount /home2
# metainit d64 -t d30 d12
d64: Trans is setup
2.) Change vfstab entry & reboot:
from:  /dev/md/dsk/d30   /dev/md/rdsk/d30   /home2  ufs  2  yes  -
to:    /dev/md/dsk/d64   /dev/md/rdsk/d64   /home2  ufs  2  yes  -
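As a side note, on Solaris 7 and later the same effect can usually be achieved without a trans metadevice by using the UFS 'logging' mount option; a sketch based on the /home2 example above:
/dev/md/dsk/d30   /dev/md/rdsk/d30   /home2   ufs   2   yes   logging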
Cookbooks
Recovering from a Boot Disk Failure with Solstice on Solaris
Recovering from a failed boot disk is not a very
difficult procedure using Solstice DiskSuite when the system has been
properly setup and documented initially. The first step is obviously to
identify which piece of hardware failed. In this example it will be the
boot disk, which is /dev/dsk/c0t0d0. Once the failed disk has been
identified it is important to boot up the system from the second half of
the mirror before the failed device is replaced.
1. Boot from the secondary disk
- Identify the failed disk (the examples in this document use /dev/dsk/c0t0d0).
- Boot from the secondary disk before replacing the failed disk:
ok boot altdisk
If no alternate boot alias is available, try to
boot off of one of the built-in alternate devices provided in the Open
Boot PROM. These are numbered disk0 through disk6 and generally only
apply to disks on the system's internal SCSI controller. If all else
fails, use probe-scsi-all to determine the device path to the secondary
disk, and then create an alias from which to boot.
2. When the system comes up, it will complain about
the stale database replicas and will only allow booting into
single-user mode. In single user mode, use the metadb command without
any arguments to list the replicas that have failed. Delete the stale
replicas using metadb -d:
# metadb
        flags           first blk       block count
     a m  p  luo        16              1034            /dev/dsk/c0t0d0s7
     a    p  luo        1050            1034            /dev/dsk/c0t0d0s7
     a    p  luo        2084            1034            /dev/dsk/c0t0d0s7
     a m  p  luo        16              1034            /dev/dsk/c0t1d0s7
     a    p  luo        1050            1034            /dev/dsk/c0t1d0s7
     a    p  luo        2084            1034            /dev/dsk/c0t1d0s7
# metadb -d /dev/dsk/c0t0d0s7
3. Shut down the system and replace the failed disk
4. Partition the replacement disk identically to its
mirror. Use prtvtoc to print out the volume table of contents (VTOC) of
the good disk, and then use the fmthard command to write the table to
the new disk:
# prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/format.out
# fmthard -s /tmp/format.out /dev/rdsk/c0t0d0s2
or
# prtvtoc -h /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
Example prtvtoc -h output
# prtvtoc -h /dev/rdsk/c0t1d0s2
       0      0    00          0   5121944   5121943
       1      0    00    5121944   3072224   8194167
       2      5    00          0  35368272  35368271
       3      0    00    8194168   4099440  12293607
       4      0    00   12293608   7171664  19465271
       5      0    00   19465272  15794624  35259895
       7      0    00   35259896    108376  35368271
5. Recreate the deleted database records with the metadb -a command:
# metadb -a /dev/dsk/c0t0d0s7
6. Detach the failed submirrors to stop read and
write operations to that half of the mirror when activity occurs on the
metadevice. The detach must be forced:
In this case the mirrors are defined as:
d2: Mirror
    Submirror 0: d0
    Submirror 1: d1
d12: Mirror
    Submirror 0: d10
    Submirror 1: d11
d22: Mirror
    Submirror 0: d20
    Submirror 1: d21
d32: Mirror
    Submirror 0: d30
    Submirror 1: d31
d42: Mirror
    Submirror 0: d40
    Submirror 1: d41

# metadetach -f d2 d0    (/ filesystem)
# metadetach -f d12 d10  (swap partition)
# metadetach -f d22 d20  (/var filesystem)
# metadetach -f d32 d30  (/export/home filesystem)
# metadetach -f d42 d40  (/u01 filesystem)
7. Clear the failed submirrors:
# metaclear d0
# metaclear d10
# metaclear d20
# metaclear d30
# metaclear d40
8. Recreate the submirrors using the metainit command:
# metainit d0 1 1 c0t0d0s0
# metainit d10 1 1 c0t0d0s1
# metainit d20 1 1 c0t0d0s3
# metainit d30 1 1 c0t0d0s4
# metainit d40 1 1 c0t0d0s5
9. Reattach the submirrors using the metattach command:
# metattach d2 d0
# metattach d12 d10
# metattach d22 d20
# metattach d32 d30
# metattach d42 d40
The submirrors will automatically start to resynchronize.
10. Monitor resynchronization using the metastat
command. (Resynchronization time depends on the amount of data on the
partitions):
# metastat | more
11. Check the dump device and reboot
# dumpadm -d swap
# init 6
After resynchronization, reboot the system and ensure that it now boots properly from its primary boot disk and that everything is operating normally.
Gathering Solaris system information
A UNIX administrator may be asked to gather system
information about his/her Solaris systems. Here are the commands used on
a Solaris system to gather various system information.
To find out the WWN of Fibre cards
# prtpicl -v | grep wwn
PICL provides a method to publish
platform-specific information for clients to access in a way that is not
specific to the platform. The Solaris PICL framework provides
information about the system configuration which it maintains in the
PICL tree.
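On Solaris 10 with the Sun FC stack, fcinfo can also report the HBA port WWNs (a sketch; availability depends on the HBA and driver in use):
# fcinfo hba-port | grep -i wwn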
The prtdiag command can be used to display the system configuration and diagnostics information. prtdiag is in the '/usr/platform/platform-name/sbin' directory.
To display the partition information of a hard drive
# prtvtoc /dev/rdsk/c0t0d0s2
# /usr/platform/sun4u/sbin/prtdiag
System Configuration: Sun Microsystems sun4u Sun (TM) Enterprise 250 (2 X UltraSPARC-II 400MHz)
System clock frequency: 100 MHz
Memory size: 512 Megabytes

========================= CPUs =========================
                    Run   Ecache   CPU    CPU
Brd  CPU  Module    MHz    MB     Impl.   Mask
---  ---  -------  -----  ------  ------  ----
SYS   0      0      400    2.0    US-II   10.0
SYS   1      1      400    2.0    US-II   10.0

========================= Memory =========================
       Interlv.  Socket   Size
Bank    Group     Name    (MB)   Status
----    -----    ------   ----   ------
 0      none     U0701     128     OK
 0      none     U0801     128     OK
 0      none     U0901     128     OK
 0      none     U1001     128     OK

========================= IO Cards =========================
     Bus   Freq
Brd  Type  MHz   Slot  Name                              Model
---  ----  ----  ----  --------------------------------  ----------------------
SYS  PCI    33    3    pciclass,068000
SYS  PCI    33    3    pciclass,020000                   SUNW,pci-qfe
SYS  PCI    33    3    pciclass,068000
SYS  PCI    33    3    pciclass,020000                   SUNW,pci-qfe
SYS  PCI    33    3    pciclass,068000
SYS  PCI    33    3    pciclass,020000                   SUNW,pci-qfe
SYS  PCI    33    3    pciclass,068000
SYS  PCI    33    3    pciclass,020000                   SUNW,pci-qfe

No failures found in System
===========================
Processors details:
The psrinfo utility displays
processor information. When run in verbose mode, it lists the speed of
each processor and when the processor was last placed on-line (generally
the time the system was started unless it was manually taken off-line).
/usr/sbin/psrinfo -v
Status of processor 1 as of: 12/12/02 09:25:50
  Processor has been on-line since 11/17/02 21:10:09.
  The sparcv9 processor operates at 400 MHz,
        and has a sparcv9 floating point processor.
Status of processor 3 as of: 12/12/02 09:25:50
  Processor has been on-line since 11/17/02 21:10:11.
  The sparcv9 processor operates at 400 MHz,
        and has a sparcv9 floating point processor.
The psradm utility can enable or disable a specific processor
To disable a processor:
/usr/sbin/psradm -f processor_id
To enable a processor:
/usr/sbin/psradm -n processor_id
The psrinfo utility will display the processor_id when run in either standard or verbose mode.
Memory details:
prtconf utility will display the system configuration, including the amount of physical memory.
/usr/sbin/prtconf | grep Memory
Memory size: 4096 Megabytes
Processor and kernel bits
Determine bits of processor:
# isainfo -bv
64-bit sparcv9 applications
Determine bits of Solaris kernel:
# isainfo -kv
64-bit sparcv9 kernel modules
Network Cards
To find out the network cards available on the system
# grep -i network /etc/path_to_inst
"/pci@8,600000/network@1" 0 "ge"
"/pci@9,700000/network@1,1" 0 "eri"
"/pci@9,600000/pci@1/network@0" 0 "ce"
"/pci@9,600000/pci@1/network@1" 1 "ce"
We can also run the following command
cat /etc/path_to_inst | egrep -i 'eri|ge|ce|qfe|hme'
"/pci@8,600000/network@1" 0 "ge"
"/pci@9,700000/ebus@1/serial@1,400000" 0 "se"
"/pci@9,700000/network@1,1" 0 "eri"
"/pci@9,600000/pci@1/network@0" 0 "ce"
"/pci@9,600000/pci@1/network@1" 1 "ce"
"prtdiag" command lists all the add-on network cards available on the system.
To get the serial number and model number of drives
# iostat -En
If Veritas Volume Manager is installed, then try:
# /usr/lib/vxvm/diag.d/vxdmpinq /dev/rdsk/cNtNdNs2
To find out the pagesize value
# pagesize
4096
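On Solaris 9 and later, all page sizes supported by the hardware can also be listed:
# pagesize -a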
http://www.science.uva.nl/pub/solaris/solaris2/
http://www.brandonhutchinson.com/
http://www.samag.com/topics/os/solaris/
http://cspry.co.uk/table_of_contents.html
Solaris 10
A Solaris 10 system was not starting up after an abnormal system shutdown. It kept asking for the root password for system maintenance. This happens because the boot archive may become out of sync with the root filesystem.
Solution: run "svcadm clear system/boot-archive". This will clear the boot-archive service state and bring up the system.
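As an alternative sketch (assuming Solaris 10 and a writable root file system), the boot archive can also be rebuilt by hand before clearing the service:
# bootadm update-archive
# svcadm clear system/boot-archive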
Service Management Facility (SMF)
svcs – detailed state information about all service instances
svcadm – common service management tasks (enable, disable, ...)
svccfg – manipulate the SMF repository
svcprop – view SMF repository data
inetadm – view and configure inetd managed services
inetconv – convert an inetd.conf(4) to an SMF manifest
# svcs -l ssh
fmri         svc:/network/ssh:default
name         SSH server
enabled      true
state        online
next_state   none
state_time   Wed Aug 08 00:04:31 2007
logfile      /var/svc/log/network-ssh:default.log
restarter    svc:/system/svc/restarter:default
contract_id  58
dependency   require_all/none svc:/system/filesystem/local (online)
dependency   optional_all/none svc:/system/filesystem/autofs (online)
dependency   require_all/none svc:/network/loopback (online)
dependency   require_all/none svc:/network/physical (online)
dependency   require_all/none svc:/system/cryptosvc (online)
dependency   require_all/none svc:/system/utmp (online)
dependency   require_all/restart file://localhost/etc/ssh/sshd_config (online)
To enable a service
# svcadm enable network/login:rlogin
# svcs -l network/login:rlogin
fmri         svc:/network/login:rlogin
name         The remote login service.
enabled      true
state        online
next_state   none
restarter    svc:/network/inetd:default
- Use svcadm enable -r to enable a service plus all of its dependencies
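For example, to bring up the NFS server together with the services it depends on (a sketch; the short FMRI assumes the standard Solaris 10 manifest):
# svcadm enable -r network/nfs/server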
To restart a service
# svcadm restart nfs/server
To re-read the configuration file
# svcadm refresh nfs/server
To disable telnet
# svcadm disable telnet
To display the services which are enabled but not running, or which are preventing another enabled service from running
# svcs -xv
ZFS (Zettabyte File System)
ZFS can be administered using Java Web console by using
https://hostname:6789/zfs
If you type the appropriate URL and are unable to
reach the ZFS Administration console, the server might not be started.
To start the server, run the following command:
# /usr/sbin/smcwebserver start
If you want the server to run automatically when the system boots, run the following command:
# /usr/sbin/smcwebserver enable
zpool --> To manage ZFS pool
zfs --> To manage the ZFS file systems
Create a ZFS pool called mypool-1 consisting of two mirrored drives
# zpool create mypool-1 mirror c1t1d0s1 c1t1d0s0
# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
mypool-1               3.5G    24K   3.5G     1%    /mypool-1
To destroy a ZFS File system
# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank      1.15T   395G  24.5K  /tank
tank/JRP  1.15T   395G  1.15T  /oracle/JRP_SAN
# zfs destroy tank/JRP
To destroy the pool tank
# zpool list
NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
tank   1.56T  7.12M   1.56T    0%   ONLINE   -
# zpool destroy tank
# zpool list
no pools available
Change a mount point
# zfs set mountpoint=/jeeva mypool-1
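Other dataset properties are managed the same way with zfs get/set; for example (the property values shown are only illustrations):
# zfs get mountpoint mypool-1
# zfs set compression=on mypool-1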
Zones
The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris Operating System. When you create a zone, you produce an application execution environment in which processes are isolated from the rest of the system. This isolation prevents processes that are running in one zone from monitoring or affecting processes that are running in other zones. Even a process running with superuser credentials cannot view or affect activity in other zones.
Each zone that requires network connectivity has one
or more dedicated IP addresses. A process assigned to a zone can
manipulate, monitor, and directly communicate with other processes that
are assigned to the same zone. The process cannot perform these
functions with processes that are assigned to other zones in the system
or with processes that are not assigned to a zone.
Each zone also has a node name that is completely
independent of the zone name. The node name is assigned by the
administrator of the zone. Each zone has a path to its root directory
that is relative to the global zone's root directory.
Every Solaris system contains a global zone.
Global Zone
- The global zone always has the name global and assigned ID 0 by the system
- Provides the single instance of the Solaris kernel that is bootable and running on the system
- Contains a complete installation of the Solaris system software packages
- Can contain additional software packages or additional software, directories, files, and other data not installed through packages
- Provides a complete and consistent product database that contains information about all software components installed in the global zone
- Holds configuration information specific to the global zone only, such as the global zone host name and file system table
- Is the only zone that is aware of all devices and all file systems
- Is the only zone with knowledge of non-global zone existence and configuration
- Is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled
Non-Global Zone
- Is assigned a zone ID by the system when the zone is booted
- Shares operation under the Solaris kernel booted from the global zone
- Contains an installed subset of the complete Solaris Operating System software packages
- Contains Solaris software packages shared from the global zone
- Can contain additional installed software packages not shared from the global zone
- Can contain additional software, directories, files, and other data created on the non-global zone that are not installed through packages or shared from the global zone
- Has a complete and consistent product database that contains information about all software components installed on the zone, whether present on the non-global zone or shared read-only from the global zone
- Is not aware of the existence of any other zones
- Cannot install, manage, or uninstall other zones, including itself
- Has configuration information specific to that non-global zone only, such as the non-global zone host name and file system table
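A minimal sketch of the workflow for creating, installing and booting a non-global zone (the zone name, zonepath, IP address and interface below are all hypothetical):
# zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> add net
zonecfg:webzone:net> set address=192.168.1.50
zonecfg:webzone:net> set physical=ce0
zonecfg:webzone:net> end
zonecfg:webzone> verify
zonecfg:webzone> commit
zonecfg:webzone> exit
# zoneadm -z webzone install
# zoneadm -z webzone boot
# zlogin -C webzone        (console login to answer the first-boot sysid questions)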
To identify the multipathing support
# mpathadm list mpath-support mpath-support: libmpscsi_vhci.so
Tips
To disable Sun terminals from going to the OK prompt during a power failure of terminal servers
01. Edit the /etc/default/kbd file, look for the line KEYBOARD_ABORT=enable and change it as follows
KEYBOARD_ABORT=alternate
02. Run the kbd -i command to make the changes immediate.
03. The alternate sequence for going to the OK prompt is ~ followed by Ctrl + B
fsstat reports kernel file operation activity by the
file system type (fstype) or by the path name, which is converted to a
mount point.
To mount the cdrom in solaris
mount -F hsfs -o ro /dev/dsk/c0t2d0s2 /mnt
To change the Time zone in Solaris
a) Edit /etc/TIMEZONE (NOTE: the man page incorrectly states this file is called /etc/timezone)
b) Reboot your server with shutdown or init.
Examples:
US/Eastern
US/Central
US/Mountain
US/Pacific
For the full list, look in: /usr/share/lib/zoneinfo/
System Admin Commands list
http://docs.sun.com/app/docs/doc/816-0211/6m6nc66m6?a=expand
Big Admin home page
http://www.sun.com/bigadmin/home/index.html
Setting the path variable in Solaris
/etc/default/login
/etc/default/su
/etc/profile
(e.g.) PATH=/opt/csw/bin:/usr/sbin:/usr/bin:/usr/dt/bin:/usr/openwin/bin:/usr/ccs/bin
taking backup and copying directly to other system
ufsdump 0f - /dev/rdsk/c0t0d0s7 | rsh <remote> "cd /home; ufsrestore xf -"
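Where rsh is not available, the same pipeline can be run over ssh (a sketch; <remote> is a placeholder for the target host):
ufsdump 0f - /dev/rdsk/c0t0d0s7 | ssh <remote> "cd /home; ufsrestore xf -"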
01. Login to DCM and find out the SMC for the partition
02. ssh as root to the SMC system
03. Find out the actual name of the partition by viewing either the /etc/hosts file or /etc/FJSVscstargets
04. Run the following command to get connected to the partition console
# /opt/FJSVcsl/bin/get_console -w -n <partition_name>
05. If you want to get into OK prompt
a. Press Ctrl+] to get the telnet prompt
b. From the telnet prompt, type "send break" to get the OK prompt
01. Login to DCM and find out the SMC for the partition
02. ssh as root to the SMC system
03. This method uses GUI tools, so you need to run X server software such as Exceed on your PC. If Exceed is installed on your system, start it now
04. Export the DISPLAY in the SMC server to your local PC
# DISPLAY=IP_ADDRESS_OF_YOUR_PC:0
# export DISPLAY
05. Run the following command to get the consoles. You will be presented with a screen listing all available partitions. You can choose the right partition from there:
# /opt/FJSVscsl/bin/rcmanager
Main menu:
/opt/FJSVscsl/bin/mainmenu