Sunday, January 25, 2015

Red Hat Linux Home

New in RHEL 6.0

  • ext4 is the default file system
  • UUIDs are used by default in the /etc/fstab file
  • Upstart has replaced init
  • Mounting via NFS defaults to NFSv4
  • In addition to the /etc/sysctl.conf file, there is also an /etc/sysctl.d directory. Instead of modifying /etc/sysctl.conf directly, files can be placed under the /etc/sysctl.d directory (a sketch follows this list).
  • /etc/modprobe.d directory instead of /etc/modprobe.conf
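For example, a minimal sketch of a drop-in file under /etc/sysctl.d (the file name and value are illustrative):
 # cat /etc/sysctl.d/90-ipforward.conf
 net.ipv4.ip_forward = 1
 # sysctl -p /etc/sysctl.d/90-ipforward.conf    # apply without rebooting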
Some useful links for Red Hat Linux / CentOS


Boot

GRUB - GRand Unified Bootloader
  • The default boot loader for Redhat Linux and Fedora
  • Supports MD5 password protection
  • /boot/grub/grub.conf is the configuration file for Grub
/boot/grub/grub.conf
  • Changes to grub.conf take effect immediately
  • GRUB reads the configuration file at boot time, so the grub.conf file must be on a filesystem GRUB understands. These include ext2/ext3, FAT, ReiserFS, Minix and FFS.
  • If the MBR becomes corrupted and GRUB needs to be re-installed, it can be done with the command /sbin/grub-install.
To re-install GRUB on MBR of /dev/hda
 # /sbin/grub-install /dev/hda
To create encrypted passwords
 # /sbin/grub-md5-crypt
 # cat grub.conf
 default=0
 timeout=5
 splashimage=(hd0,0)/grub/splash.xpm.gz
 password --md5 $1$NonT51$hBjZLFkXmEQiXDjX.a77t0
 hiddenmenu
 title Fedora Core (2.6.12-1.1447_FC4)
       root (hd0,0)
       kernel /vmlinuz-2.6.12-1.1447_FC4 ro root=LABEL=/
       initrd /initrd-2.6.12-1.1447_FC4.img
More information about GRUB can be found at the link below
http://www.dedoimedo.com/computers/grub-2.html
/etc/rc.d/rc.sysinit
  • Runs once at boot time
  • Sets kernel parameters from /etc/sysctl.conf
  • Sets the system clock
  • Loads keymaps
  • Enables swap partitions
  • Sets the hostname
  • Checks the root filesystem and mounts it
  • Adds RAID devices, if any
  • Enables disk quotas
  • Checks and mounts other filesystems
/etc/rc.d/rc.local
  • Runs each time the system enters a run level
  • Runs after the run level specific scripts
  • Common place for custom modifications
Controlling Services
The following commands are used to control the services
  1. ntsysv
  2. chkconfig
  3. redhat-config-services
  4. service
ntsysv: It is a console-based interactive utility that allows you to control which services run when entering a given run level. It configures the current run level by default. By using the --level option, you can configure other run levels.
chkconfig: chkconfig provides a simple command-line tool for maintaining the /etc/rc[0-6].d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories.
usage: chkconfig --list [name]
          chkconfig --add <name>
          chkconfig --del <name>
          chkconfig [--level <levels>] <name> <on|off|reset>
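For example, to enable a service in run levels 3 and 5 and verify the result (vsftpd is used here as an illustrative service name):
 # chkconfig --level 35 vsftpd on
 # chkconfig --list vsftpd
 vsftpd          0:off   1:off   2:off   3:on    4:off   5:on    6:off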
redhat-config-services: It is an X client that presents a display of each of the services that are started and stopped at each run level. Services can be added, deleted, or re-ordered in run levels 3 through 5 with this utility.
service: It is used to start or stop a standalone service immediately. Most services accept the arguments start, stop, restart, reload, condrestart and status.
 usage:  service <service_name> [start|stop|restart|status]
         service vsftpd restart
Boot into single user mode
    When the boot prompt shows, hit the space bar
    Select the boot kernel entry
    Press "e" to edit the line
    Append the word "single" to the end of the kernel line
      or
    Append init=/bin/bash to the end of the kernel line
    Hit Enter and press "b" to boot; this will land in single user mode
       If the root filesystem is not mounted read-write, use the following commands
       mount -o remount,rw /
       mount -v    (check the mounted file systems)

Disable SELINUX

Turning ON/OFF SELinux temporarily
Disabling SELinux temporarily is the easiest way to determine whether the problem you are experiencing is related to your SELinux settings. To turn it off, you will need to become the root user on your system and execute the following command:
 echo 0 > /selinux/enforce
This temporarily turns off SELinux until it is either re-enabled or the system is rebooted. To turn it back on you simply execute this command:
  echo 1 > /selinux/enforce
Configuring SELinux to log warnings instead of blocking
You can also configure SELinux to give you a warning message instead of actually prohibiting the action. This is known as permissive mode. To change SELinux's behavior to permissive mode you need to edit the configuration file. On RHEL systems that file is located at /etc/selinux/config. You need to change the SELINUX option to permissive like so:
 $ sudo vi /etc/selinux/config
 SELINUX=permissive
Note that these changes will not take effect until the system is rebooted.
To completely disable SELinux instead of setting the configuration file to permissive mode :
 $ sudo vi /etc/selinux/config
 SELINUX=disabled
You will need to reboot your system, or temporarily set SELinux to non-enforcing mode as shown above, for the change to take effect.
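On systems that provide the getenforce and setenforce utilities, the same temporary toggle can be done with:
 # getenforce
 Enforcing
 # setenforce 0    # permissive until reboot or until set back
 # setenforce 1    # enforcing again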


Kernel

Kernel Modules

Many components of the kernel can be compiled as dynamically loadable modules. This allows for increased kernel functionality without increasing the size of the kernel image loaded at boot time.
Good candidates for modularization are:
  • Components that need not be resident in the kernel for all configurations and hardware
  • Peripheral device drivers
  • Supplementary filesystems
The kernel modules reside in /lib/modules/<kernel_version> directory
Controlling modules
  • lsmod lists the modules currently resident in the kernel
  • insmod is used to load a particular module
  • rmmod unloads a module if it is idle
Controlling modules intelligently
Many modules depend on other modules being present.
  • A database of module dependencies can be generated using depmod command
  • modinfo shows the information about a linux kernel module
  • modprobe can be used to load kernel modules
The advantages of using modprobe over insmod are:
  • modprobe will load any underlying modules required by a given module
  • modprobe will consult /etc/modprobe.conf (/etc/modules.conf in old systems) for default module configurations
    modprobe -v module_name
Module configuration
Many modules accept parameters that can be specified at load time. Default values for various parameters can be specified in /etc/modprobe.conf (/etc/modules.conf on old systems). When modprobe loads a module, it consults this file for appropriate defaults.
Examples:
# lsmod
Module                  Size  Used by
md5                     4161  1 
ipv6                  268865  14 
sunrpc                168453  1 
dm_mod                 58613  0 
uhci_hcd               35281  0 
i2c_piix4               8913  0 
i2c_core               21825  1 i2c_piix4
......
......

# modinfo md5
filename:       /lib/modules/2.6.12-1.1447_FC4/kernel/crypto/md5.ko
license:        GPL
description:    MD5 Message Digest Algorithm
vermagic:       2.6.12-1.1447_FC4 686 REGPARM 4KSTACKS gcc-4.0
depends:        
srcversion:     70F89E71D506DA03F4BDCA3


# cat /etc/modprobe.conf
alias eth0 e100
alias eth1 3c59x
alias snd-card-0 snd-cs4236
options snd-card-0 index=0
options snd-cs4236 index=0 
Tip: If a kernel module is not loading (check it using lsmod), run the depmod command first. It will rebuild the dependency information for all modules. Then load the module using the modprobe -v <module_name> command.

The /proc filesystem

  • /proc is a virtual filesystem containing information about the running kernel
  • It is a map to the running kernel
  • Contents of “files” under /proc can be viewed using the ‘cat’ command
  • Provides information on system hardware, networking settings and activity, memory usage and more.
  • /proc has a number of subdirectories
  • The /proc/sys subdirectory allows administrators to modify certain parameters of a running kernel
Some of the key files in top-level directory include:
/proc/interrupts, /proc/dma, /proc/ioports - IRQ, DMA and I/O settings
/proc/cpuinfo - Information about the system CPUs
/proc/meminfo - Information on available and free memory, swap, cached memory
/proc/version - Information on Kernel version, build host, build date, etc.
Some important subdirectories include:
/proc/scsi, /proc/ide -- information about SCSI and IDE devices
/proc/net -- Network activity and configuration
/proc/sys -- Kernel configuration parameters
/proc/<PID> -- information about process ID
The /proc/sys subdirectory allows administrators to modify certain parameters of a running kernel.
echo “1” > /proc/sys/net/ipv4/ip_forward # Turn on IP forwarding.

Changing Kernel Parameters (/proc/sys configuration) using sysctl :

  • /proc/sys modifications are temporary and not saved at system shutdown
  • The sysctl command manages such settings in a static and centralized fashion using /etc/sysctl.conf file
  • sysctl is called at boot time by rc.sysinit and uses settings in /etc/sysctl.conf
With sysctl, you specify a dotted path to the variable, with /proc/sys as the base.
For example, to view /proc/sys/net/ipv4/ip_forward, use the following command:
  # sysctl net.ipv4.ip_forward
  net.ipv4.ip_forward = 1
To then update this variable temporarily, use the -w (write) option:
  # sysctl -w net.ipv4.ip_forward="0"
  net.ipv4.ip_forward = 0
The same thing can be done by modifying /proc/sys/net/ipv4/ip_forward file
  # echo "0" > /proc/sys/net/ipv4/ip_forward
To make it permanent, edit /etc/sysctl.conf file. Add or modify the "net.ipv4.ip_forward" line
   net.ipv4.ip_forward=0
To re-read (make the changes to take effect) the /etc/sysctl.conf file, use the -p option
 # sysctl -p 
 net.ipv4.ip_forward = 0
 net.ipv4.conf.default.rp_filter = 1 









Installing Linux using Kickstart and Cobbler


Red Hat Linux operating system installations can be done via a network connection using a Kickstart server. It is frequently much faster than using CDs and the process can be automated.
Example Kickstart usage:
Get the kickstart cfg from an HTTP server and start the install
 boot: linux ks=http://server.com/path/to/kickstart/file
Get the kickstart cfg from an NFS server and start the install
 boot: linux ks=nfs:server:/path/to/kickstart/file
Serving the kickstart file from an NFS server through DHCP (/etc/dhcpd.conf)
 next-server 10.10.10.100;
 filename "/pxelinux.0";

Setup a Kickstart Server

01. Install and configure the DHCPD server
02. Install tftp server and enable TFTP service
a. yum install tftp-server
b. Enable the TFTP server: vi /etc/xinetd.d/tftp and change disable to 'no'
c. service xinetd restart
03. Install syslinux if not already installed
a. yum install syslinux
04. Copy needed files from syslinux to the tftpboot directory
 cp /usr/lib/syslinux/pxelinux.0 /tftpboot
 cp /usr/lib/syslinux/menu.c32 /tftpboot
 cp /usr/lib/syslinux/memdisk /tftpboot
 cp /usr/lib/syslinux/mboot.c32 /tftpboot
 cp /usr/lib/syslinux/chain.c32 /tftpboot
05. Create the directory for your PXE menus
mkdir /tftpboot/pxelinux.cfg
06. For each "Release" and "ARCH", copy vmlinuz and initrd.img from the /images/pxeboot/ directory on "disc 1" of that $RELEASE/$ARCH to /tftpboot/images/RHEL/$ARCH/$RELEASE
 mkdir -p /tftpboot/images/RHEL/i386/4.3
 mkdir -p /tftpboot/images/RHEL/i386/5.5
 mkdir -p /tftpboot/images/RHEL/x86_64/4.3
 mkdir -p /tftpboot/images/RHEL/x86_64/5.5
For RHEL 5.5 x86_64, do the following
 mount /dev/cdrom /cdrom
 cd /cdrom/images/pxeboot
 cp vmlinuz initrd.img /tftpboot/images/RHEL/x86_64/5.5
Do the above for all releases and ARCH you want to kickstart from this server.
07. Add this to your existing or new /etc/dhcpd.conf.
Note: xxx.xxx.xxx.xxx is the IP address of your PXE server
 allow booting;
 allow bootp;
 option option-128 code 128 = string;
 option option-129 code 129 = text;
 next-server xxx.xxx.xxx.xxx;
 filename "/pxelinux.0";
08. Restart the DHCP service
# service dhcpd restart
09. Create a simple or multilevel PXE menu. Create a file called "default" in the /tftpboot/pxelinux.cfg directory. A sample file named "isolinux.cfg" is found on the boot installation media in the "isolinux" directory. Copy this file as "default" and edit it as per requirement. A sample default file is given below.
 default menu.c32
 prompt 0
 timeout 300
 ONTIMEOUT local
 MENU TITLE PXE Menu

 LABEL Pmagic
   MENU LABEL Pmagic
   kernel images/pmagic/bzImage
   append noapic initrd=images/pmagic/initrd.gz root=/dev/ram0 init=/linuxrc ramdisk_size=100000

 LABEL Dos Bootdisk
   MENU LABEL ^Dos bootdisk
   kernel memdisk
   append initrd=images/622c.img

 LABEL RHEL 5 x86 eth0
   MENU LABEL RHEL 5 x86 eth0
   KERNEL images/RHEL/x86/5.5/vmlinuz
   APPEND initrd=images/RHEL/x86/5.5/initrd.img ramdisk_size=10000 ks=nfs:xx.xx.xx.xxx:/<path_to_kickstart_config_file> ksdevice=eth1

 LABEL RHEL 5 x86_64 eth0
   MENU LABEL RHEL 5 x86_64 eth0
   KERNEL images/RHEL/x86_64/5.5/vmlinuz
   APPEND initrd=images/RHEL/x86_64/5.5/initrd.img ramdisk_size=10000 ks=nfs:xx.xx.xx.xxx:/<path_to_kickstart_config_file> ksdevice=eth1
10. Install the Kickstart Configurator tool. This tool is helpful for creating the kickstart configuration file.
yum install system-config-kickstart
11. Create the kickstart config file. This file can be created using the Kickstart Configurator tool. A sample file, anaconda-ks.cfg, based on the current installation of the system is placed in the /root directory. This /root/anaconda-ks.cfg can also be used as the configuration file. Copy the file to the location specified in the default file. Make sure the directory is NFS exported if you are using NFS for installing the OS.
12. Modify the kickstart configuration file as per requirement. If you are using NFS for installation, make sure to copy the ISO images of the Linux discs to an NFS server and NFS export the directory. These server/directory details need to be specified in the kickstart configuration file.
13. After creating the kickstart configuration files and copying the ISO images, the installation can be started.
A sample kickstart configuration file can be found here.
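For orientation, a minimal kickstart sketch for a network install (the URL, password, time zone and partition sizes below are placeholders, not values from any real setup):
 install
 url --url http://kickserver.example.com/rhel5/x86_64
 lang en_US.UTF-8
 keyboard us
 rootpw changeme
 timezone America/New_York
 bootloader --location=mbr
 clearpart --all --initlabel
 part /boot --fstype ext3 --size=200
 part swap --size=2048
 part / --fstype ext3 --size=1 --grow
 reboot
 %packages
 @base
 @core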

Kickstart using Cobbler

Installing Cobbler
The latest stable releases of Cobbler and Koan are included in the EPEL (Extra Packages for Enterprise Linux) repository.
 yum install cobbler       # on the boot server
 yum install cobbler-web   # on the boot server (optional)
 yum install koan          # on target systems (optional)
On the cobbler server:
 # Review the procedure below. The command arguments "--foo=bar" will need to vary locally.
 # Install cobbler. The command below assumes a Redhat-like OS.
 yum install cobbler
 # Show what aspects might need attention
 cobbler check
 # Act on the 'check' above, then re-check until satisfactory.
 # Import a client OS from a DVD. This automatically sets up a "distro" and names it.
 cobbler import --path=/mnt --name=rhel5 --arch=x86_64
 # Create a profile (e.g. "rhel5_workstation") and associate it with that distro
 cobbler profile add --name=rhel5_workstation --distro=rhel5
 # Set up a kickstart file.
 # Associate a kickstart file with this profile
 cobbler profile edit --name=rhel5_workstation --kickstart=/path/to/kick.ks
 # Register a client machine (e.g. "workstation1") and its network details
 # and associate it with a profile
 cobbler system add --name=workstation1 \
   --mac=AA:BB:CC:DD:EE:FF --ip=III.JJJ.KKK.LLL \
   --profile=rhel5_workstation \
   --netboot-enabled=true
 # Get a detailed report of everything in cobbler
 cobbler report
 # Get cobbler to act on all the above (set up DHCP, etc.)
 cobbler sync

Cobbler Import

The purpose of "cobbler import" is to set up a network install server for one or more distributions. This mirrors content of a DVD image, an ISO file, a tree on a mounted filesystem, an external rsync mirror or SSH location to the local storage.
  • Cobbler import uses rsync to mirror the content to the disk. By default, the rsync operations will exclude content of certain architectures, debug RPMs, and ISO images -- to change what is excluded during an import, see /etc/cobbler/rsync.exclude.
  • Mirrored content is saved automatically in /var/www/cobbler/ks_mirror.
Import from inserted DVD
 cobbler import --path=/media/dvd --name=centos6
...or import from a mounted ISO:
 cobbler import --path=/somedir --name=centos6
To mirror from a public rsync server:
 cobbler import --path=rsync://servergoeshere/path/to/distro --name=centos6
To set up from an existing filesystem:
 cobbler import --path=/path/where/filer/is/mounted --name=centos6 --available-as=nfs://nfsserver:/is/mounted/here
Once imported, run a "cobbler list" or "cobbler report" to see what you've added.
 # cobbler list
 distros:
    Centos62-x86_64
 profiles:
    Centos62-x86_64
 systems:
    centos03
 repos:
 images:

Snippet

Kickstart Snippets
Kickstart Snippets are a way of reusing common blocks of code between kickstarts. For instance, the default Cobbler installation has a snippet called "$SNIPPET('func_register_if_enabled')" that can help set up the application called Func. This means that every time that this SNIPPET text appears in a kickstart file it is replaced by the contents of /var/lib/cobbler/snippets/func_register_if_enabled. This allows this block of text to be reused in every kickstart template. You may think of snippets, if you like, as templates for templates!
Snippets are implemented using a Cheetah function. The preferred syntax is:
$SNIPPET('snippet_name_here')
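As a minimal sketch (the snippet name and its contents are made up for illustration), a custom snippet is just a file under /var/lib/cobbler/snippets that a kickstart template references:
 # cat /var/lib/cobbler/snippets/my_motd
 echo "Installed via cobbler on $(date)" > /etc/motd

 # ...and inside the kickstart template, e.g. in the %post section:
 %post
 $SNIPPET('my_motd')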

Manage yum repos

Yum Repos
If you manage a large number of machines and they are (A) not allowed to get to the outside world, (B) bandwidth constrained, or (C) wanting to get access to 3rd party packages including custom yum repositories, we can setup a local repo using cobbler.
To add a repo
First, follow the setup for DVD / ISO import
Once the import is complete, we will add the mirrors
 cobbler repo add --mirror=http://download.fedora.redhat.com/pub/fedora/linux/updates/12/ --name=f12-i386-updates
 cobbler repo add --mirror=http://download.fedora.redhat.com/pub/fedora/linux/releases/12/Everything/i386/ --name=f12-i386-everything
 cobbler repo add --mirror=http://download.fedoraproject.org/pub/epel/6/x86_64 --name=epel6x86_64
Now that we've added the mirrors, let's pull down the content. This will take a little while, but subsequent updates won't take nearly as long.
cobbler reposync
Now that the repositories are mirrored locally, let's create a cobbler profile that will be able to automatically install from the above repositories and also configure clients to use the new mirror.
cobbler profile add --name=f12-i386-test --repos="f12-i386-updates f12-i386-everything" --distro=F12-i386 --kickstart=/etc/cobbler/sample_end.ks
Now, any machines installed from this mirror won't have to hit the outside world for any content they may need during install or with yum. They'll ask for content from the cobbler server instead. Cool.

Koan

Perhaps it stands for "kickstart over a network". Koan is a client-side helper program for use with Cobbler. koan allows for both network provisioning of new virtualized guests (Xen, QEMU/KVM, VMware) and re-installation of an existing system.
When invoked, koan requests install information from a remote cobbler boot server, it then kicks off installations based on what is retrieved from cobbler and fed in on the koan command line. The examples below show the various use cases.
INSTALL KOAN ON A CLIENT
On the client,
yum install koan
LISTING REMOTE COBBLER OBJECTS
To browse remote objects on a cobbler server and see what you can install using koan, run one of the following commands:
 koan --server=cobbler.example.org --list=profiles
 koan --server=cobbler.example.org --list=systems
 koan --server=cobbler.example.org --list=images
LEARNING MORE ABOUT REMOTE COBBLER OBJECTS
To learn more about what you are about to install, run one of the following commands:
 koan --server=cobbler.example.org --display --profile=name
 koan --server=cobbler.example.org --display --system=name
 koan --server=cobbler.example.org --display --image=name
REINSTALLING EXISTING SYSTEMS
If you want to install Fedora 12 on a system (replacing whatever it is running now), you can do this:
Using --replace-self will reinstall the existing system the next time you reboot.
 koan --server=bootserver.example.com --list=profiles
 koan --server=cobbler.example.org --replace-self --profile=F12-i386
 koan --server=cobbler.example.org --replace-self --system=name
 /sbin/reboot
The system will install the new operating system after rebooting, hands off, no interaction required.
Notice in the above example "F12-i386" is just one of the boring default profiles cobbler created for you. You can also create your own, for instance "F12-webservers" or "F12-appserver" -- whatever you would like to automate.
Additionally, adding the flag --add-reinstall-entry will make it add the entry to grub for reinstallation but will not make it automatically pick that option on the next boot.
Also the flag --kexec can be appended, which will launch the installer without needing to reboot. Not all kernels support this option.

 

Logical Volume Manager (LVM) & File Systems

Refer to the RHEL LVM Administrator Guide for a complete reference.
Logical Volume Manager
To list all Volume Groups
 # vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vggrd" using metadata type lvm2
  Found volume group "rootvg" using metadata type lvm2
To list all physical volumes
 # pvscan
  PV /dev/sdb    VG vggrd    lvm2 [410.12 GB / 123.12 GB free]
  PV /dev/sda2   VG rootvg   lvm2 [136.00 GB / 76.00 GB free]
  Total: 2 [546.12 GB] / in use: 2 [546.12 GB] / in no VG: 0 [0   ]
To list all Logical Volumes
 # lvscan
  ACTIVE            '/dev/vggrd/sapgrd' [8.00 GB] inherit
  ACTIVE            '/dev/vggrd/sapmntgrd' [4.00 GB] inherit
  ACTIVE            '/dev/vggrd/oracle' [7.00 GB] inherit
  ACTIVE            '/dev/vggrd/oraclnt' [1.00 GB] inherit
  ACTIVE            '/dev/vggrd/archdata' [6.00 GB] inherit
Scan for all disks/multiple devices/partitions usable by LVM.
  # lvmdiskscan
 lvmdiskscan -- reading all disks / partitions (this may take a while...)
 lvmdiskscan -- /dev/hda1  [ 31.35 MB] Primary  LINUX native partition [0x83]
 lvmdiskscan -- /dev/hda2  [  5.98 GB] DOS extended partition [0x05]
 lvmdiskscan -- /dev/hda5  [  2.50 GB] Extended LINUX native partition [0x83]
 lvmdiskscan -- /dev/hda6  [517.69 MB] Extended LINUX native partition [0x83]
 lvmdiskscan -- /dev/hda7  [258.83 MB] Extended LINUX swap partition [0x82]
 lvmdiskscan -- 1 disk
 lvmdiskscan -- 0 whole disks
 lvmdiskscan -- 0 loop devices
 lvmdiskscan -- 0 multiple devices 
 lvmdiskscan -- 0 network block devices
 lvmdiskscan -- 5 partitions
 lvmdiskscan -- 0 LVM physical volume partitions
Cookbook
To list all the disks
 # lvmdiskscan
Prepare the disk /dev/sdb for LVM
 # pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
Create a volume group vg01 on /dev/sdb with a PE size of 64MB
 # vgcreate -s 64M vg01 /dev/sdb
  Volume group "vg01" successfully created
Verify the Volume group properties
 # vgdisplay vg01
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.94 GB
  PE Size               64.00 MB
  Total PE              159
  Alloc PE / Size       0 / 0
  Free  PE / Size       159 / 9.94 GB
  VG UUID               B1NYxs-wpxk-S4Ug-vS9Y-7x99-kPUp-4HTdjI
Create a Logical volume datalv with the size of 5GB on vg01 volume group
  # lvcreate -L 5G -n datalv vg01
  Logical volume "dalalv" created
Create the file system on datalv
 # mkfs -t ext3 /dev/vg01/datalv
Mount the filesystem
 # mount /dev/vg01/datalv /mnt
Creating Snapshot of one volume
 # lvcreate -L5G -s -n datalvsnapshot /dev/vg01/datalv
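The snapshot can then be mounted read-only (for a backup, for example) and removed once it is no longer needed; the mount point below is illustrative:
 # mkdir /mnt/snap
 # mount -o ro /dev/vg01/datalvsnapshot /mnt/snap
 # umount /mnt/snap
 # lvremove /dev/vg01/datalvsnapshot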
Extending a Logical Volume with LVM
If the kernel version is at least 2.6 and the file system type is ext3, it is possible to extend the size of a mounted file system online, without unmounting it, using resize2fs. On older systems, the ext2online command can be used to extend the file system size online.
 # umount /mnt
 # lvextend -L+6G /dev/volgroup/logicalvol

 # e2fsck -f /dev/volgroup/logicalvol
 # resize2fs /dev/volgroup/logicalvol
 # mount /dev/volgroup/logicalvol /mnt
Extending File system online
 [root@dc7700 ~]# lvextend -L+2G /dev/oravg/lv01
  Extending logical volume lv01 to 12.00 GB
  Logical volume lv01 successfully resized

 [root@dc7700 ~]# resize2fs /dev/oravg/lv01
 resize2fs 1.39 (29-May-2006)
 Filesystem at /dev/oravg/lv01 is mounted on /oracle; on-line resizing required
 Performing an on-line resize of /dev/oravg/lv01 to 3145728 (4k) blocks.
 The filesystem on /dev/oravg/lv01 is now 3145728 blocks long.
Reducing a file system size under LVM control:
 lvreduce -L 50g -r <lv_name>  # -r resizes the underlying filesystem as well
Shrinking a Logical Volume with LVM
 # umount /mnt
 # lvmdiskscan
 # e2fsck -f /dev/volgroup/logicalvol
 # resize2fs /dev/volgroup/logicalvol 40000 
 # mount /dev/volgroup/logicalvol /mnt
 # df
 # umount /mnt
 # lvreduce -L -8G /dev/volgroup/logicalvol
 # mount /dev/volgroup/logicalvol /mnt
 # vgreduce volgroup /dev/sdg
 # pvscan
Creating ext3 filesystem
 #  mkfs -t ext3 /dev/oravg/lv01

To resize the pv under lvm control

 # pvresize /dev/<disk_name>

To remove a VG from a system

 # vgremove <vg_name>

To import a VG from the cloned disks which has the same PVID of the original disks:

 # vgimportclone /dev/<cloned_disk(s)>

To mark a partition as an LVM partition in parted

 (parted) mkpart primary ext2 0 4000
 (parted) set 1 lvm on

File Systems

To change the label on ext2/ext3 Filesystems
 # e2label /dev/hdc1 /data2
To find out the super block and block group Information.
 dumpe2fs - dump ext2/ext3 filesystem information
To find more details about a file system, including its UUID
 tune2fs -l <device_name>
To find out the UUID of all the file systems
 blkid 
To find out the UUID of file systems using dumpe2fs command
 dumpe2fs <device_name> | grep UUID
The /etc/fstab file
The /etc/fstab file is divided into 6 columns. They are as follows:
  1. Filesystem - This is the device file. It could be your hard disk, cdrom, floppy, or a partition in your hard disk.
  2. Mount point - This is the place (usually a directory) where you want your data from the device file to be made available.
  3. Type of filesystem - It could be any filesystem like ext3, ntfs, ufs2 and so on.
  4. Mount Options - This column decides what permissions are to be given and to whom for accessing the device
  5. Dump frequency - This is usually zero, which means never dump. Other values are 1 for every day, 2 for every other day, and so on.
  6. fsck order
Sample entry
 /dev/mapper/data01vg-data01lv /data1       ext4    defaults        0       0
 /dev/mapper/data02vg-data02lv /data2       ext4    defaults        0       0
To back up the root disk with tar
  1. Unmount non-root filesystems
  2. cd /
  3. tar -c --exclude tmp/rootbackup.tar --exclude proc --exclude tmp/log --exclude tmp/.font-unix/fs7100 -f tmp/rootbackup.tar *
  4. Move tmp/rootbackup.tar to other media, such as a USB disk

Repair file system using alternate super block

Find out the alternate super blocks
  # dumpe2fs /dev/testvg/lv01 | grep superblock
   dumpe2fs 1.41.12 (17-May-2010)
    Primary superblock at 0, Group descriptors at 1-2
    Backup superblock at 32768, Group descriptors at 32769-32770
    Backup superblock at 98304, Group descriptors at 98305-98306
    Backup superblock at 163840, Group descriptors at 163841-163842
    Backup superblock at 229376, Group descriptors at 229377-229378
    Backup superblock at 294912, Group descriptors at 294913-294914
Check the file system using superblock at 98304
  # e2fsck -f -b 98304 /dev/testvg/lv01
Or mount the file system using superblock 32768
  # mount -o sb=32768 /dev/testvg/lv01 /mountdir

swap

swapon, swapoff - enable/disable devices and files for paging and swapping
 swapon -a  # All  devices  marked as swap devices in /etc/fstab are made available
 swapon -s  #  Display swap usage summary by device

 swapoff <swapfile>  # disables swapping on the specified devices and files
 swapoff -a # All devices marked as swap devices in /etc/fstab are disabled
To increase the swap size:
 swapoff -a  # To disable swap space
 lvextend -L 20g <lv_name>
 mkswap <lv_name>
 swapon -a  
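As an alternative sketch, swap can also be added with a plain file instead of a logical volume (the path and size are illustrative):
 dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB file
 chmod 600 /swapfile
 mkswap /swapfile
 swapon /swapfile
 # add an entry to /etc/fstab to make it persistent:
 # /swapfile   swap   swap   defaults   0 0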
 
 

NFS and SAMBA

Network File system (NFS)

The Network File System (NFS) was developed to allow machines to mount a disk partition on a remote machine as if it were on a local hard drive.
/etc/exports - The configuration file
  • Exported directories are defined in /etc/exports
  • Each entry specifies the hosts to which the filesystem is exported plus associated permissions and options
An entry in /etc/exports will typically look like this:
directory machine1(option11,option12) machine2(option21,option22)
/export/vmware 192.168.123.0/255.255.255.0(rw,sync,insecure,no_root_squash,no_subtree_check)
The latest nfs-utils introduced /etc/exports.d; the files under this directory are loaded as if they are part of /etc/exports. Adding or removing an export point becomes easier; just put or remove a file under the directory.
The following options can be used:
  • ro: The directory is shared read only; the client machine will not be able to write to it. This is the default
  • rw: The client machine will have read and write access to the directory
  • no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories.
  • no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
  • sync: By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, has been written to stable storage - when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this.
Examples:
To give read/write or read-only permissions to specific systems:
  /home        192.168.0.1(rw) 192.168.0.2(ro)
To give read/write access to all the systems in the 192.168.0 network:
 /home      192.168.0.0/255.255.255.0(rw)
To give read/write permission to all systems in test.com:
 /home      *.test.com(rw)
To export to all systems:
 /home     *(rw)
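After editing /etc/exports, re-export the list on the server and mount from a client; the server name below is illustrative:
 # exportfs -ra                              # on the NFS server
 # mount -t nfs nfsserver:/home /mnt/home    # on the client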
NFS Daemons
  • portmap maps calls made from other machines to the correct RPC service
  • rpc.nfsd, which does most of the work
  • rpc.lockd and rpc.statd, which handle file locking
  • rpc.mountd, which handles the initial mount requests
  • rpc.rquotad, which handles user file quotas on exported volumes
Note: In recent linux releases, lockd is called by nfsd upon demand, so you do not need to worry about starting it yourself.
To unexport all the exported directories
 # exportfs -ua
To re-read /etc/exports file
 # exportfs -ra
To un-export a share (/data) which is exported to all the hosts
 # exportfs -u *:/data
syntax: exportfs -u hostname:/<dir_name>
Problem: df: `/dir1': Stale NFS file handle
Solution: Unmount using the umount -f option
 umount -f /dir1
Problem:
Could not mount an NFS share exported from Linux on an AIX server.
Solution: export the filesystem on Linux with the insecure option
 /stage 10.253.1.0/24(sync,insecure)
 
 

Network

Network Configuration and Configuration files and tools

Interface configuration files:
  • ifcfg-xxx located in /etc/sysconfig/network-scripts/ directory
Red Hat Linux stores network interface configuration files in the /etc/sysconfig/network-scripts directory. The file names are prefixed with ifcfg-. For example, the file for the first ethernet interface would be ifcfg-eth0.
For an interface with the static IP address configuration, the following is required.
 DEVICE=eth0
 IPADDR=xxx.xxx.xxx.xxx
 NETMASK=xxx.xxx.xxx.xxx
 TYPE=Ethernet
 PEERDNS=yes
 BOOTPROTO=static     # dhcp for an interface under DHCP control
 ONBOOT=yes
 USERCTL=no
 GATEWAY=xxx.xxx.xxx.xxx
If the ifcfg-eth0 file is modified, the network service can be restarted using the following command
# service network restart
To start or stop a particular network interface
 # ifup eth0      # To bring up interface eth0
 # ifdown eth0    # To bring down interface eth0
Configuration utilities:
netconfig: netconfig is a curses-based tool used to configure network interfaces, either as a DHCP client or with a static IP address, nameserver, and gateway. By default it modifies the settings for the first ethernet interface (eth0), but the --device argument can be used to set up other network interfaces.
# netconfig --device eth2
redhat-config-network: The Red Hat Network Administration tool is an X-based utility that can be used to set up Ethernet, PPP, ISDN or wireless network interfaces. It can be started by running the redhat-config-network command.
The redhat-config-network tool creates an alternate file hierarchy under /etc/sysconfig/networking. Modifying an interface with these tools will update two configuration files, one in /etc/sysconfig/network-scripts and another in /etc/sysconfig/networking/profiles/<profile>. Both files will have the same name (e.g. ifcfg-eth0) and are hard linked to each other.

IP Aliasing

1. Go to the directory /etc/sysconfig/network-scripts/
2. To add an IP alias to interface eth0, copy file ifcfg-eth0 file to ifcfg-eth0:1 (and to ifcfg-eth0:2 if you want to add more than one alias).
3. Now edit each of the files you have just created
a. Change the DEVICE setting to match the name specified in the name of this file, e.g. DEVICE=eth0:1
b. Change the IPADDR setting to the address you want to add.
c. If necessary, change the NETMASK setting.
d. You can delete any other settings that are the same as in the original file. Their values will default from there. (E.g. settings for eth0:1 will default from the file ifcfg-eth0.)
A sample ifcfg-eth0:1 file:
 # cat ifcfg-eth0:1
 DEVICE=eth0:1
 ONBOOT=yes
 BOOTPROTO=static
 IPADDR=10.5.5.5
 NETMASK=255.0.0.0
4. Restart the network services to make the changes take effect immediately.

Global Network Parameters:
Global network parameters such as the hostname, gateway, NIS domain, etc. are stored in the /etc/sysconfig/network file.
 # cat /etc/sysconfig/network
 NETWORKING=yes
 HOSTNAME=rhl.mydomain.com
 GATEWAY=<Gateway IP>
 NISDOMAIN=<NIS domain name>

Routing in Linux

To add or delete routes
 route del default gw 0.0.0.0
 route add default gw <new gateway>
 route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.123.1
Make the Static route persistent after reboot
On Red Hat systems older than Red Hat 9, the /etc/sysconfig/static-routes file is used
 # cat static-routes
 eth0 net 100.2.0.0 netmask 255.255.0.0 gw 100.4.1.3
 eth0 net 100.8.0.0 netmask 255.255.0.0 gw 100.4.1.3
 eth0 net 100.16.0.0 netmask 255.255.0.0 gw 100.4.1.3
 eth0 net 100.34.0.0 netmask 255.255.0.0 gw 100.4.1.3
On newer systems, a route-eth# file can be used instead
 # cat /etc/sysconfig/network-scripts/route-eth0
 ADDRESS0=100.2.0.0
 NETMASK0=255.255.0.0
 GATEWAY0=100.4.1.3
 ADDRESS1=100.8.0.0
 NETMASK1=255.255.0.0
 GATEWAY1=100.4.1.3

Finding and changing Network card speed settings

ethtool can be used to find out the current speed and duplex settings or to change the speed settings
 # ethtool eth0
 Settings for eth0:
         Supported ports: [ FIBRE ]
         Supported link modes:   1000baseT/Full
         Supports auto-negotiation: Yes
         Advertised link modes:  1000baseT/Full
         Advertised auto-negotiation: Yes
         Speed: 1000Mb/s
         Duplex: Full
         Port: FIBRE
         PHYAD: 2
         Transceiver: internal
         Auto-negotiation: on
         Supports Wake-on: d
         Wake-on: d
         Link detected: yes
To Change the Network Speed settings:
Download and install the ethtool package from www.rpmfind.net
To set the network speed to 100Mbps full Duplex and autonegotiation off:
# ethtool -s eth0 speed 100 duplex full autoneg off
To force the above settings in system Startup, add the following line at the end of /etc/sysconfig/network-scripts/ifcfg-eth0 file.
ETHTOOL_OPTS="speed 100 duplex full autoneg off"

Cisco VPN Client for Linux (Centos and Redhat)

The Cisco VPN client, vpnc, enables your Linux workstation to connect to a Cisco 3000 series VPN concentrator or PIX firewall.
01. Configure epel repository if not already configured
02. Install the VPNC software
 # yum install vpnc.x86_64 NetworkManager-vpnc.x86_64 vpnc-consoleuser.x86_64
03. Configure vpnc
a. Go to the /etc/vpnc directory
b. Create a configuration file for vpnc
 # cat /etc/vpnc/test.conf
 IPSec gateway <VPN server IP address>
 IPSec ID <group name>
 IPSec secret <group_password>
 Xauth username <user_name>
 Xauth password <password>
If you have the encrypted group password and do not know the actual group password, use the following link to decrypt the group password
http://www.unix-ag.uni-kl.de/~massar/bin/cisco-decode
04. Start the VPN
$ sudo /usr/sbin/vpnc test.conf
05. To stop the vpn
# sudo vpnc-disconnect

Make your RHEL system as router

01. Make sure the system has at least two Network Cards. The following assumptions are made
a. Internal network is 192.168.1.0/24
b. The system is connected to the Internet using eth1
c. The system is connected to the internal network using eth0
02. Turn on IP forwarding. Edit the /etc/sysctl.conf file and change the value of "net.ipv4.ip_forward" from "0" to "1".
 net.ipv4.ip_forward=1
03. Make the change take effect using the sysctl command
# sysctl -p
04. Set IP tables so that the internal network can route packets to the Internet.
a. Route packets to the Internet (ISP connection) using NAT
 # iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
b. Accept all connections to/from the internal network
 # iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
c. Drop connections that are not from the 192.168.1.0 network
 # iptables -A FORWARD -s ! 192.168.1.0/24 -j DROP
05. Save the Changes made to iptables configuration file
# iptables-save > /etc/sysconfig/iptables
06. Restart the network and iptables services
# service network restart # service iptables restart
Load balancing and redundancy for the Internet connection
If the system is connected to two ISPs and you want to provide redundancy and load balancing for the Internet connection using this router, perform the following additional steps.
07. Connect system to the second ISP using eth2.
08. Let us assume the IP address of eth1 (first ISP) is 202.61.19.29 with netmask 255.255.255.0 and IP address of eth2 (second ISP) is 202.63.89.45 with netmask 255.255.255.248
09. Configure Route Failover
a. Add the default routes provided by the ISPs
 # route add default gw 202.61.19.1 dev eth1
 # route add default gw 202.63.89.1 dev eth2
Add the appropriate entries to the config files, or add the above two lines to /etc/rc.d/rc.local, so that these routes are configured even after a reboot.
b. Finally, open the /proc/sys/net/ipv4/route/gc_timeout file, change the value from 300 to 10 and save the file. The gc_timeout file contains a timeout value after which the kernel declares a route to be dead and automatically switches to the other route. Your system will now automatically switch to the second route every time the primary route fails. Add the appropriate line to sysctl.conf to make it permanent.
10. Configure load balancing (skip step 09 if you use this instead)
 # ip route del default
 # ip route add default equalize nexthop via 202.61.19.1 dev eth1 nexthop via 202.63.89.1 dev eth2
Add these commands to the /etc/rc.d/rc.local file, otherwise the route will vanish every time you reboot the system. As in step 09, reduce /proc/sys/net/ipv4/route/gc_timeout from 300 to 10 so that a dead route is detected quickly and the kernel switches to the remaining route.
To load balance outbound network connections from the internal network, the CONFIG_IP_ROUTE_MULTIPATH kernel option is used, which allows multiple default gateways. It is set up by removing the default gateway from the /etc/sysconfig/network file and setting up the default gateways using the advanced routing command issued above.

Link Aggregation and High Availability with Bonding

Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding.
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.

Step #1: Create a Bond0 Configuration File

Red Hat Enterprise Linux (and its clone such as CentOS) stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
 # vi /etc/sysconfig/network-scripts/ifcfg-bond0
 DEVICE=bond0
 IPADDR=192.168.1.20
 NETWORK=192.168.1.0
 NETMASK=255.255.255.0
 USERCTL=no
 BOOTPROTO=none
 ONBOOT=yes

Step #2: Modify eth0 and eth1 config files

 # vi /etc/sysconfig/network-scripts/ifcfg-eth0
 DEVICE=eth0
 USERCTL=no
 ONBOOT=yes
 MASTER=bond0
 SLAVE=yes
 BOOTPROTO=none

 # vi /etc/sysconfig/network-scripts/ifcfg-eth1
 DEVICE=eth1
 USERCTL=no
 ONBOOT=yes
 MASTER=bond0
 SLAVE=yes
 BOOTPROTO=none

Step # 3: Load bond driver/module

Make sure bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify kernel modules configuration file:
 # vi /etc/modprobe.conf
Append the following two lines:
 alias bond0 bonding
 options bond0 mode=balance-alb miimon=100

Step # 4: Test configuration

First, load the bonding module, enter:
 # modprobe bonding
Restart the networking service in order to bring up the bond0 interface:
# service network restart
Make sure everything is working. Type the following cat command to query the current status of Linux kernel bounding driver, enter:
 # cat /proc/net/bonding/bond0
Sample output:
 Bonding Mode: load balancing (round-robin)
 MII Status: up
 MII Polling Interval (ms): 100
 Up Delay (ms): 200
 Down Delay (ms): 200

 Slave Interface: eth0
 MII Status: up
 Link Failure Count: 0
 Permanent HW addr: 00:0c:29:c6:be:59

 Slave Interface: eth1
 MII Status: up
 Link Failure Count: 0
 Permanent HW addr: 00:0c:29:c6:be:63

Removing bonding in LINUX

 ifconfig bond0 down
 echo "-eth0" > /sys/class/net/bond0/bonding/slaves
 echo "-eth1" > /sys/class/net/bond0/bonding/slaves
 echo "-bond0" > /sys/class/net/bonding_masters
 rmmod bonding

Replacing the NIC cards in Linux

Once you add a NIC card using the system-config-network command, an eth0 or eth1 entry is added. If NIC card eth0 goes bad and is replaced by a new NIC card, the entry for the old NIC card will still be there, and the new NIC card will appear as eth1 (or eth2) instead of eth0. To remove eth0 completely and make the new card eth0, edit /etc/udev/rules.d/70-persistent-net.rules
 $ sudo vi /etc/udev/rules.d/70-persistent-net.rules
 # This file was automatically generated by the /lib/udev/write_net_rules
 # program, run by the persistent-net-generator.rules rules file.
 #
 # You can modify it, as long as you keep each rule on a single
 # line, and change only the value of the NAME= key.

 # PCI device 0x8086:0x100f (e1000)
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:90:e1:e0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

 # PCI device 0x8086:0x100f (e1000)
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:91:e2:f0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
Now, remove the line which has NAME="eth0"
Change NAME="eth1" to NAME="eth0"
Reboot the system. After reboot, the newly replaced NIC card will have the name of eth0.
The updated /etc/udev/rules.d/70-persistent-net.rules file will look like:
 $ cat /etc/udev/rules.d/70-persistent-net.rules
 # This file was automatically generated by the /lib/udev/write_net_rules
 # program, run by the persistent-net-generator.rules rules file.
 #
 # You can modify it, as long as you keep each rule on a single
 # line, and change only the value of the NAME= key.

 # PCI device 0x8086:0x100f (e1000)
 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:91:e2:f0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

Other Networking Tips

To analyze network traffic
use tcpdump or wireshark (tshark) or ethereal
To capture all the packets thru eth0
# tcpdump -i eth0
To capture packets for specific ports (port 2001 in these examples)
 # tcpdump -i eth0 port 2001
 # tcpdump -i eth0 port 2001 or port 2050
 # tcpdump -i eth0 tcp port 2001 or udp port 2000
 # tcpdump -i eth0 src port 2001
 # tcpdump -i eth0 dst port 2001
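Captures can also be written to a file and read back later (for example in wireshark/tshark); the file name and host below are illustrative:
 # tcpdump -i eth0 -w /tmp/capture.pcap host 192.168.1.10
 # tcpdump -r /tmp/capture.pcap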
To list all the listening ports
 # netstat -tap
To scan a host for open ports
 # nmap <hostname_or_IP>

 

Rescue

Creation of a Recovery Disk:
mkbootdisk creates a boot floppy appropriate for the running system. The boot disk is entirely self-contained, and includes an initial ramdisk image which loads any necessary SCSI modules for the system. The created boot disk looks for the root filesystem on the device suggested by /etc/fstab. The only required argument is the kernel version to put onto the boot floppy.
 # /sbin/mkbootdisk --device <device_name> <kernel_version>
Example:
 # /sbin/mkbootdisk --device /dev/fd0 2.4.9-e.3
To obtain the kernel version, see your /etc/grub.conf file or the name of the kernel modules library directory (ls /lib/modules/ and use the directory name of the appropriate kernel; i.e. if the directory name is /lib/modules/2.4.9-34/ then use 2.4.9-34).
Usage:
Boot the computer with the floppy in its drive.
It will boot the kernel from the floppy but then try to use your hard drive. If successful, it will fully boot from your hard drive. It's painless!

tar options:
 -A   append tar files to an archive
 -c   create a new archive
 -d   find differences between archive and file system
 -r   append files to the end of an archive
 -t   list the contents of an archive
 -u   only append files that are newer than copy in archive
 -x   extract files from an archive
 -l   stay in local file system when creating an archive
 -M   create/list/extract multi-volume archive
 -v   verbose mode
 -j   filter the archive through bzip2 
 -z   filter the archive through gzip
 -Z   filter the archive through compress
 -f <file_name>   Name of the archive/Device 
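For example, to create a gzip-compressed archive, list its contents, and extract it somewhere else (paths are illustrative):
 # tar -czvf /tmp/etc-backup.tar.gz /etc
 # tar -tzvf /tmp/etc-backup.tar.gz
 # tar -xzvf /tmp/etc-backup.tar.gz -C /tmp/restore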
 
 

Security

To check the failed login attempts
 # lastb
 www      ssh:notty    116.193.40.59    Thu Sep 20 06:46 - 06:46  (00:00)
 cyrus    ssh:notty    116.193.40.59    Thu Sep 20 06:46 - 06:46  (00:00)
 cyrus    ssh:notty    116.193.40.59    Thu Sep 20 06:46 - 06:46  (00:00)
lastb searches /var/log/btmp file for failed login attempts and the command may fail if /var/log/btmp file does not exist.
Another command to test the failed login attempts
# faillog -a
Login       Failures Maximum   Latest    On

root          0        0           12/31/69  16:00:00 -0800
bin           0        0            12/31/69    16:00:00 -0800
daemon        0        0            12/31/69    16:00:00 -0800 
faillog --help
Usage: faillog [options]

Options:
  -a, --all   display faillog records for all users
  -h, --help   display this help message and exit
  -l, --lock-time SEC  after failed login lock account for SEC seconds
  -m, --maximum MAX  set maximum failed login counters to MAX
  -r, --reset   reset the counters of login failures
  -t, --time DAYS  display faillog records more recent than DAYS
  -u, --user LOGIN  display faillog record or maintains failure counters
    and limits (if used with -r, -m or -l options) only
    for user with LOGIN
The last command is used to list the last logged in users. It reads the /var/log/wtmp file.
To check the listing of last logged in users
 # last 
 reboot   system boot  2.6.18-8.1.8.el5 Fri Sep 21 02:07          (09:44)
 reboot   system boot  2.6.18-8.1.8.el5 Thu Sep 20 21:11          (14:40)
 root     tty1                          Thu Sep  6 19:26 - 19:26  (00:00)
Password aging Policies:
chage: used to change the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change their password.
Usage: chage [-m mindays] [-M maxdays] [-d lastday] [-I inactive] [-E expiredate] [-W warndays] user
The chage command can also be used to find out the last password change, the password expiration date, the number of days between password changes, etc.
  # chage -l root
  Last password change                                    : Aug 18, 2005
  Password expires                                        : never
  Password inactive                                       : never
  Account expires                                         : never
  Minimum number of days between password change          : 0
  Maximum number of days between password change          : 99999
  Number of days of warning before password expires       : 7
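For example, to enforce a 90-day maximum password age with a 7-day warning for a user and then verify it (the username and values are illustrative):
  # chage -m 1 -M 90 -W 7 user1
  # chage -l user1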
Set default password expiry policy for all new users
/etc/login.defs defines the password expiry policy for all the newly created users. The following policies can be defined in /etc/login.defs
 #   PASS_MAX_DAYS   Maximum number of days a password may be used.
 #   PASS_MIN_DAYS   Minimum number of days allowed between password changes.
 #   PASS_MIN_LEN    Minimum acceptable password length.
 #   PASS_WARN_AGE   Number of days warning given before a password expires.
 #   MAIL_DIR 
 #   MAIL_FILE
 #   UID_MIN            Minimum UID number to use for new users
 #   UID_MAX            Maximum UID number to use for new users
 #   GID_MIN            Minimum GID number to use for new Groups
 #   GID_MAX            Maximum GID number to use for new Groups
 #   CREATE_HOME yes    If useradd should create home directories for users by default
 #   UMASK 077
 #   ENCRYPT_METHOD MD5
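A sketch of typical values in /etc/login.defs (the numbers are illustrative; the real values are site policy):
 PASS_MAX_DAYS   90
 PASS_MIN_DAYS   1
 PASS_MIN_LEN    8
 PASS_WARN_AGE   7
 UID_MIN         500
 UMASK           077
 CREATE_HOME     yes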
Setting limits for each user using PAM
Limits can be set to each user by modifying /etc/security/limits.conf file. This file is the configuration file for the pam_limits module.
 oracle              soft    nproc   2047           ## Max. Number of Processes
 oracle              hard    nproc   16384
 oracle              soft    nofile  1024          ## Max. Number of open files
 oracle              hard    nofile  65536
The following line needs to be added to the /etc/pam.d/login file for the limits to take effect
 session    required     pam_limits.so
To limit the root logins:
Add all the ttys from where you want to login directly as root to /etc/securetty file. The /etc/securetty file governs the consoles from where you can log into Linux as the root user.

Setting Password Complexity Rules

Both pam_cracklib and pam_passwdqc are modules used to enforce password length and complexity, though pam_passwdqc is more powerful.
To set the password complexity with the following rules:
  1. Minimum password length 8 characters (minlen)
  2. Minimum one lower case letter (lcredit)
  3. Minimum one upper case letter (ucredit)
  4. Minimum one numeric character (dcredit)
  5. Minimum one special character (ocredit)
  6. Minimum three characters different from the previous password (difok)
Edit /etc/pam.d/system-auth to include a line like the one below
 password    requisite     pam_cracklib.so retry=3 minlen=8 lcredit=-1 ucredit=-1 dcredit=-1 ocredit=-1 difok=3
In the above line, "retry=3" means that users get three chances to pick a good password before the passwd program aborts.

Password History

pam_cracklib is capable of consulting a user's password "history" and not allowing them to re-use old passwords. However, the functionality for actually storing the user's old passwords is enabled via the pam_unix module.
The first step is to create an empty /etc/security/opasswd file for storing old user passwords. If you forget to do this before enabling the history feature in the PAM configuration file, then all user password updates will fail because the pam_unix module will constantly be returning errors from the password history code due to the file being missing.
Treat your opasswd file like your /etc/shadow file because it will end up containing user password hashes (albeit for old user passwords that are no longer in use):
 # touch /etc/security/opasswd
 # chown root:root /etc/security/opasswd
 # chmod 600 /etc/security/opasswd
Once you've got the opasswd file set up, enable password history checking by adding the option "remember=<x>" to the pam_unix configuration line in the /etc/pam.d/system-auth file (/etc/pam.d/common-password on Debian). The value of the "remember" parameter is the number of old passwords you want to store for a user.
 password    requisite      pam_cracklib.so retry=3 minlen=6 lcredit=-1 ucredit=-1 ocredit=-1
 password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=5

 password required pam_cracklib.so retry=3 minlen=12 difok=4
 password required pam_unix.so md5 remember=12 use_authtok
Once you've enabled password history, the opasswd file starts filling up with user entries that look like this:
 user1:1000:<n>:<hash1>,<hash2>,...,<hashn>
The first two fields are the username and user ID. The <n> in the third field represents the number of old passwords currently being stored for the user--this value is incremented by one every time a new hash is added to the user's password history until <n> ultimately equals the value of the "remember" parameter set on the pam_unix configuration line. <hash1>,<hash2>,...,<hashn> are actually the MD5 password hashes for the user's old passwords.

Password Expiration

At this point you may be wondering how to get the system to automatically force users to change their password after some period of time. This is not actually the job of pam_cracklib. Instead, these parameters are set in the /etc/login.defs file on most Linux systems. PASS_MAX_DAYS is how often users have to change their passwords. PASS_MIN_DAYS is how long a user is forced to live with their new password before they're allowed to change it again. PASS_WARN_AGE is the number of days before the password expiration date that the user is warned that their password is about to expire. The choice of values for these parameters is entirely dependent on site policy. Note that these parameters are only applied to new accounts created with the default system useradd program.

Account Lockout

Account lockout after a number of unsuccessful authentication attempts may be enabled using pam_tally. In this example, accounts are locked out after 5 failed login attempts. Twice an hour, the failed login counter is reset. The failed login counter is also reset with each successful authentication (reset option in PAM configuration).
Create the pam_tally store for failed login attempts.
 # touch /var/log/faillog
 # chown root:root /var/log/faillog
 # chmod 600 /var/log/faillog
Add the pam_tally lines to the /etc/pam.d/system-auth file as shown below
 auth        required      pam_env.so
 auth        required      pam_tally.so onerr=fail deny=5 unlock_time=1800
 auth        sufficient    pam_unix.so nullok try_first_pass
 auth        requisite     pam_succeed_if.so uid >= 500 quiet
 auth        required      pam_deny.so

 account     required      pam_unix.so
 account     sufficient    pam_succeed_if.so uid < 500 quiet
 account     required      pam_permit.so
 account     required      pam_tally.so
The pam_tally options are:
  • onerr=[fail|succeed] ---> If something weird happens (like being unable to open the file), return with PAM_SUCCESS if onerr=succeed is given, else with the corresponding PAM error code.
  • deny=n ---> Deny access if tally for this user exceeds n.
  • lock_time=n ---> Always deny for n seconds after failed attempt.
  • unlock_time=n ---> Allow access after n seconds after failed attempt. If this option is used the user will be locked out for the specified amount of time after he exceeded his maximum allowed attempts. Otherwise the account is locked until the lock is removed by a manual intervention of the system administrator.
Unlock user account / Reset tally count
The users locked by tally can be unlocked or tally count can be reset using pam_tally command.
 pam_tally [--file /path/to/counter] [--user username] [--reset[=n]] [--quiet]

 # pam_tally
 User user1   (500)   has 8
 # pam_tally --user user1 --reset   
 User user1   (500)   had 8
 # pam_tally --user user1 --reset=3

 # pam_tally --user user1 --reset=2
 User user1   (500)   had 0
 # pam_tally
 User user1   (500)   has 2
RHEL 3/4 Example
 auth        required      /lib/security/$ISA/pam_env.so
 auth        required      /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
 auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
 auth        required      /lib/security/$ISA/pam_deny.so

 account     required      /lib/security/$ISA/pam_unix.so
 account     sufficient    /lib/security/$ISA/pam_succeed_if.so uid < 100 quiet
 account     required      /lib/security/$ISA/pam_permit.so
 account     required      /lib/security/$ISA/pam_tally.so deny=5 no_magic_root reset
  • pam_tally on RHEL 3/4 does not support the unlock_time parameter.

The pam_passwdqc module can also be used to set password complexity requirements:
http://www.openwall.com/passwdqc/

 

Scheduling

CRON

crontab files are stored in /var/spool/cron.
Crontab command is used to install, remove, list or edit the crontab files.
 # crontab -u <username> <file_name>
 # crontab -e  # To edit the crontab files
 # crontab -l # to display the contents of crontab file
 # crontab -r # to remove the crontab file
The fields of a crontab file are:
 Minute Hour Day_of_Month Month Day_of_week Command
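For example, a crontab entry that runs a (hypothetical) backup script at 02:30 every Sunday would look like this:
 # minute hour day_of_month month day_of_week command
 30 2 * * 0 /usr/local/bin/backup.sh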

at

 # at 9:00am April 5
 at> rm /tmp/*
 at> <EOT>   # press Ctrl-D to submit the job

 # atq  # lists the pending jobs scheduled with at
 # at -l # same as above
 # atrm <job_number> # cancels a job that is pending 
 # at -d <job_number> # same as above




System crontab files

  • Different format than user crontab files (each entry carries an extra user field; see the example after this list)
  • Master crontab file /etc/crontab runs executables in the following directories
    • /etc/cron.hourly
    • /etc/cron.daily
    • /etc/cron.weekly
    • /etc/cron.monthly
  • /etc/cron.d/ directory contains additional system crontab files. 
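As noted above, system crontab entries carry an extra user field before the command. A hypothetical /etc/cron.d entry that runs a cleanup script as root at 03:15 every day:
 # minute hour day_of_month month day_of_week user command
 15 3 * * * root /usr/local/bin/cleanup.sh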

RPM

Software Management

The following commands are mainly used to install and maintain packages/software on systems running Red Hat or Fedora.

RPM: Package Manager


The RPM system consists of a local database, the rpm executable, rpm package files and Internet accessible metadirectories.
The local RPM database is maintained in /var/lib/rpm. The database stores information about installed packages such as file attributes and package prerequisites. Package files are named using the following format:
name-version-release.architecture.rpm
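For example, in the package file zip-2.3-8.i386.rpm (which also appears in the rpm2cpio example later in this section), the name is zip, the version is 2.3, the release is 8 and the architecture is i386.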
RPM installing and removing software:
  • Install: rpm -i, --install -v gives detailed output. -h for print hash marks
  • upgrade: rpm -U, --upgrade
  • Freshen: rpm -F, --freshen. It upgrades only if the package is currently installed.
  • Erase: rpm -e, --erase
To install rpm
 rpm -ivh <package_file_name ....>
When installing an rpm package, rpm consults the local database to ensure that prerequisites are met and that files do not conflict with other packages. These checks can be skipped with the --nodeps or --replacefiles command-line switches, respectively (the --force switch combines --replacepkgs, --replacefiles and --oldpackage, but does not skip dependency checks).
To Upgrade RPM
 rpm -Uvh <package_file_name ....>
 rpm -F <package_file_name ....> 
rpm can be used to upgrade already installed software with the -U (--upgrade) switch. When upgrading with -U, the package will be installed whether or not it is already installed. Freshening is almost identical to upgrading, but with freshening, the package will be ignored if not already installed.
To uninstall an RPM
 rpm -e <package_name ...>
To uninstall multiple copy of same RPM
 rpm -e --allmatches <package_name>
Software is removed from system using the -e (--erase) switch.

Updating the Kernel RPM


Do not use rpm -U while updating the kernel rpm.
Only use
  rpm -ivh  <kernel-version.arch.rpm> 
It is possible to install multiple versions of the kernel package. If you use rpm -ivh instead of -U, the new kernel will be added to the system but the old kernel will still remain on it as well. In case of any problem with the new kernel, we can still boot from the old kernel by modifying the /boot/grub/grub.conf file, as shown below.
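A minimal sketch of that change, assuming the older kernel is the second title entry in grub.conf (entries are counted from 0, top to bottom):
 # /boot/grub/grub.conf -- boot the second (older) kernel entry on the next reboot
 default=1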

RPM Queries:

rpm -qa # to get list of all installed software
rpm -qi <package name> # gives detailed output of package information
rpm -ql <package name> # gives list of files contained in the package
rpm -qf <file name> # gives package that owns the file
rpm -q --requires <package name> # package prerequisites
rpm -q --scripts <package name> # scripts run upon installation and removal
rpm -q --provides <package name> # capabilities provided by the package

rpm queries and verification:

rpm -V <package name> # verifies the installed package against the RPM database
rpm -Va # to verify all the packages
rpm -Vf /usr/bin/vim # To verify a package containing a particular file

Other rpm utilities:

rpm2cpio: This command allows the files contained within a package file to be converted into cpio stream
 rpm2cpio zip-2.3-8.i386.rpm | cpio --extract --make-directories "*bin*"

RPM signature verification:

Red Hat signs all packages with its private GPG key. The corresponding public key is shipped with every Red Hat distribution (1st CD). The integrity of a package can be checked using the rpm --checksig option. To verify the integrity of any package, the Red Hat public key should be imported first.
gpg --import /mnt/cdrom/RPM-GPG-KEY # gpg is an encryption and signing tool
rpm --import /mnt/cdrom/RPM-GPG-KEY
rpm -qa gpg-pubkey
rpm --checksig zip-2.3-14.i386.rpm

up2date

up2date is no longer available from RHEL 5
rhn_register
Note:- Starting from RHEL 5, red-hat has replaced up2date with yum. There is no up2date command in RHEL 5.x. Use the “rhn_register” command to register the system with Red Hat Network.
up2date utility will query RHN (Red Hat Network) for any relevant updates published, download the RPM package files to /var/spool/up2date directory and optionally upgrade the packages as well.
-u, --update # update according to the default configuration
-l, --list # list relevant updates only
-d, --download # download relevant updates only (no update)
-i, --install # download and install relevant updates
-p, --packages # resync the RHN profile with the currently installed rpms
To change the configuration file of up2date
 /usr/sbin/up2date-config 
The first time the up2date utility is run, it will prompt for the RHN user name and password (if it is the Red Hat Advanced Server edition). After giving the user name and password, the registration utility creates a certificate in /etc/sysconfig/rhn/systemid that serves to identify the machine to the Red Hat Network. To regenerate a new certificate for the system, simply delete this file and re-run the up2date utility.
Note: if the system is behind a firewall and needs to reach RHN through a proxy, export the proxy setting using the following command
 export http_proxy="http://10.82.128.128:8080"
Remote administration of up2date
Webbased administration https://rhn.redhat.com
RHN can be used to perform remote administration of collections of machines. First, actions for the machine such as specific package installations or upgrades are queued for the machine using the RHN account.
Client machines use the rhnsd daemon to poll RHN periodically for queued actions. By default, it polls every 2 hours, but this can be adjusted in /etc/sysconfig/rhn/rhnsd. The rhnsd daemon uses the /usr/sbin/rhn_check command to actually perform the poll and administer any queued actions.

yum - Yellowdog Updater Modified

  • yum is an interactive, automated update program which can be used for maintaining systems using rpm.
  • Additional yum repositories can be added by adding entries in /etc/yum.conf or putting separate files for each repo in /etc/yum.repos.d/ directory.
  • The log file for yum is stored in /var/log/yum.log file
The options of yum are
  • install --> To install the latest version of a package or group of packages while ensuring that all dependencies are satisfied.
  • update --> If run without any packages, update will update every currently installed package. If one or more packages are specified, Yum will only update the listed packages.
  • check-update --> Implemented so you could know if your machine had any updates that needed to be applied without running it interactively. Returns an exit value of 100 if there are packages available for an update, along with a list of the packages to be updated in list format. Returns 0 if no packages are available for update.
  • upgrade --> Is the same as the update command with the --obsoletes flag set. See update for more details.
  • remove or erase --> Are used to remove the specified packages from the system as well as removing any packages which depend on the package being removed.
  • list --> Is used to list various information about available packages
  • provides or whatprovides --> Is used to find out which package provides some feature or file.
  • search --> Is used to find any packages matching a string in the description, summary, packager and package name fields of an rpm. Useful for finding a package you do not know by name but know by some word related to it.
  • deplist --> Produces a list of all dependencies and what packages provide those dependencies for the given packages.
  • clean --> Is used to clean up various things which accumulate in the yum cache directory over time.
  • grouplist --> To list the available installable groups (like GNOME Desktop, X WINDOWS)
  • groupinstall --> To install all the rpms in a group (eg X Windows)
  • --enablerepo=<RepoName> Enables specific repositories by id or glob that have been disabled in the configuration file using the enabled=0 option.
  • --disablerepo=<RepoName> Disables specific repositories by id or glob.

Examples

To install a package or group of packages (-y used to answer yes to all questions)
 yum -y install <package1 package2 ...>  
To install X-Windows on a system
 yum groupinstall "X Window System" "GNOME Desktop Environment"
To view available groups for installation
 yum grouplist
To update every currently installed package
 yum -y update 
To update a single package or group of packages
 yum -y update <package1 package2 ...>
To remove a package
 yum remove <package1>
      or
 yum erase <package>
To clean up the yum cache directory
 yum clean all
To search for some packages matching a string in the description, summary, and package name fields of an rpm.
 yum search <search_string>
To check for updates
 yum check-update
To search only rpmforge repo for package 'mailscan'
 yum search --disablerepo="*" --enablerepo=rpmforge mailscan

Adding more repos:

You have two ways to add more Centos repositories:
  • Drop .repo files into your /etc/yum.repos.d/ directory
  • Add a repository entry in your /etc/yum.conf file
EPEL (Extra Packages for Enterprise Linux) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on packages that complement the Fedora-based Red Hat Enterprise Linux (RHEL) and its compatible spinoffs, such as CentOS and Scientific Linux.
To install EPEL Repo
 # su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm'
 # su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm' 
 # su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-3.noarch.rpm'
 # su -c 'yum -y install foo'
To install rpmforge Repo
 # su -c 'rpm -Uvh http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm'
 # su -c 'rpm -Uvh http://apt.sw.be/redhat/el5/en/x86_64/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm'
To enable or disable a Repo using command line
 # yum-config-manager --enable repository…
Additional third-party repositories
Add the lines below to the /etc/yum.conf file
 
[dries]
name=Extra Fedora rpms dries - $releasever - $basearch
baseurl=http://ftp.belnet.be/packages/dries.ulyssis.org/redhat/el4/en/i386/dries/RPMS
gpgcheck=1
enabled=1

[dag]  ## Large collection 
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1

[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://mirror.centos.org/centos/5.1/extras/i386/
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

[kbs-CentOS-Extras]
name=CentOS.Karan.Org-EL$releasever - Stable
gpgcheck=1
gpgkey=http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
enabled=1
baseurl=http://centos.karan.org/el$releasever/extras/stable/$basearch/RPMS/

[kbs-CentOS-Misc]
name=CentOS.Karan.Org-EL$releasever - Stable
gpgkey=http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
gpgcheck=1
enabled=1
baseurl=http://centos.karan.org/el$releasever/misc/stable/$basearch/RPMS/

[kbs-CentOS-Misc-Testing]
name=CentOS.Karan.Org-EL$releasever - Testing
gpgkey=http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
gpgcheck=1
enabled=1
baseurl=http://centos.karan.org/el$releasever/misc/testing/i386/RPMS/

[atrpms]
name=Fedora Core $releasever - $basearch - ATrpms
baseurl=http://dl.atrpms.net/el$releasever-$basearch/atrpms/stable
gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms
gpgcheck=1
enabled=1

#[fedora7]
#name=Fedora  - 7 - i386
#baseurl=http://mirrors.kernel.org/fedora/releases/7/Everything/i386/os/
#gpgcheck=1
#gpgkey=http://mirrors.kernel.org/fedora/releases/7/Everything/i386/os/RPM-GPG-KEY-Fedora

To use the above repositories, you need to add their GPG key for authentication:
 
$ su -
# rpm --import http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
# rpm --import http://dries.ulyssis.org/rpm/RPM-GPG-KEY.dries.txt
# rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt 
# rpm --import http://ATrpms.net/RPM-GPG-KEY.atrpms

Graphical Tools for software

  • pirut is a graphical frontend to yum. Pirut can install, remove, or update software packages. Pirut also allows searching, or viewing and installing software package groups.
  • pup provides a handsome graphical front end for installing software updates via yum
  • puplet is a service that runs during a desktop session, and appears in the system tray when software updates are available, allowing you to launch pup 

Storage Management

To check the Link status of an HBA Card
 # cd /sys/class/fc_host/host#
 # cat port_state
 Online
To find out the WWPN
 # cat /sys/class/fc_host/host#/port_name
 0x10000000c9b0ed59

Scanning Storage Interconnects

The following commands can be used to scan storage interconnects.

echo "1" > /sys/class/fc_host/hosth/issue_lip

This operation performs a Loop Initialization Protocol (LIP) and then scans the interconnect and causes the SCSI layer to be updated to reflect the devices currently on the bus. A LIP is, essentially, a bus reset, and will cause device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect.
Bear in mind that issue_lip is an asynchronous operation. The command may complete before the entire scan has completed. You must monitor /var/log/messages to determine when it is done.
The lpfc (emulex) and qla2xxx (Qlogic) drivers support issue_lip

/usr/bin/rescan-scsi-bus.sh

This script is included in Red Hat Enterprise Linux 5.4 and all future updates as part of sg3_utils RPM. By default, this script scans all the SCSI buses on the system, updating the SCSI layer to reflect new devices on the bus. The script provides additional options to allow device removal and the issuing of LIPs.

echo "- - -" > /sys/class/scsi_host/hosth/scan

This command is used to add a storage device or path. In the above command, the Channel Number, SCSI Target ID, and LUN values are replaced by wildcards. This procedure will add LUNs, but not remove them.

rmmod driver-name or modprobe driver-name

These commands completely re-initialize the state of all interconnects controlled by the driver. Although this is extreme, it may be appropriate in some situations. This may be used, for example, to re-start the driver with a different module parameter value.
More information about storage reconfiguration can be found from the following Redhat Guide.
Storage Reconfiguration Guide
If the above methods are not working, try installing the latest driver by downloading it from Emulex or Qlogic web sites.
If emulex cards are used, One Connect Manager can be installed on the system to manage the LUNs and cards.

Storage Adding examples.

To configure the newly added LUNs on RHEL:
 # ls /sys/class/fc_host
 host0  host1

 # fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
 # echo "1" > /sys/class/fc_host/host0/issue_lip
 # echo "- - -" > /sys/class/scsi_host/host0/scan
 # echo "1" > /sys/class/fc_host/host1/issue_lip
 # echo "- - -" > /sys/class/scsi_host/host1/scan

 # cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
 # fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
To scan new LUNs on Linux operating system which is using QLogic driver
You need to find out the driver proc file /proc/scsi/qlaXXX. For example, on my system it is /proc/scsi/qla2300/0.
Once the file is identified, type the following commands (logged in as the root user):
 # echo "scsi-qlascan" > /proc/scsi/qla2300/0
 # cat /proc/scsi/qla2300/0
Now use the script rescan-scsi-bus.sh to configure new LUN as a device. Run script as follows:
 # ./rescan-scsi-bus.sh -l -w
       or
 # /bin/rescan-scsi-bus.sh -w --hosts=4-5 --luns=0-31
For older systems that do not ship rescan-scsi-bus.sh with the sg3_utils package, the script has to be obtained separately.
How to differentiate local storage from SAN LUN
The output of ls -l /sys/block/*/device should give you an idea about how each device is connected to the system.
In the example below, cciss!c0d0 is a virtual disk on an internal RAID controller, hda is a CD-ROM connected via an IDE controller, and the rest are SAN-connected SCSI disks where "hostN" refers to the instance of the Host Bus Adapter they are connected to.
 # ls -l /sys/block/*/device
 lrwxrwxrwx  1 root root 0 Sep 19 02:11 /sys/block/cciss!c0d0/device -> ../../devices/pci0000:00/0000:00:04.0/0000:0d:00.0/disk0
 lrwxrwxrwx  1 root root 0 Sep 19 02:11 /sys/block/hda/device -> ../../devices/pci0000:00/0000:00:1f.1/ide0/0.0
 lrwxrwxrwx  1 root root 0 Sep 18 14:58 /sys/block/sda/device -> ../../devices/pci0000:00/0000:00:02.0/0000:13:00.0/host0/target0:0:0/0:0:0:0
 lrwxrwxrwx  1 root root 0 Sep 18 14:58 /sys/block/sdb/device -> ../../devices/pci0000:00/0000:00:02.0/0000:13:00.0/host0/target0:0:0/0:0:0:1
 lrwxrwxrwx  1 root root 0 Sep 18 14:58 /sys/block/sdc/device -> ../../devices/pci0000:00/0000:00:02.0/0000:13:00.0/host0/target0:0:0/0:0:0:64
 lrwxrwxrwx  1 root root 0 Sep 18 14:58 /sys/block/sdd/device -> ../../devices/pci0000:00/0000:00:02.0/0000:13:00.0/host0/target0:0:0/0:0:0:120
 ...

WWID and UUID

The World Wide Identifier (WWID) can be used in reliably identifying devices. It is a persistent, system-independent ID that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device.
This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). The mappings from these WWIDs to the current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory.
For example, a device with a page 0x83 identifier would have:
 scsi-3600508b400105e210000900000490000 -> ../../sda
Or, a device with a page 0x80 identifier would have:
 scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda
Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/name to reference the data on the disk, even if the path to the device changes, and even when accessing the device from different systems.
If there are multiple paths from a system to a device, device-mapper-multipath uses the WWID to detect this. Device-mapper-multipath then presents a single "pseudo-device" in /dev/mapper/wwid, such as /dev/mapper/3600508b400105df70000e00000ac0000.
The command multipath -l shows the mapping to the non-persistent identifiers:
Host:Channel:Target:LUN, /dev/sd name, and the major:minor number.
 3600508b400105df70000e00000ac0000 dm-2 vendor,product
 [size=20G][features=1 queue_if_no_path][hwhandler=0][rw]
 \_ round-robin 0 [prio=0][active]
 \_ 5:0:1:1 sdc 8:32 [active][undef]
 \_ 6:0:1:1 sdg 8:96 [active][undef]
 \_ round-robin 0 [prio=0][enabled]
 \_ 5:0:0:1 sdb 8:16 [active][undef]
 \_ 6:0:0:1 sdf 8:80 [active][undef]
Device-mapper-multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding /dev/sd name on the system. These names are persistent across path changes, and they are consistent when accessing the device from different systems.
When the user_friendly_names feature (of device-mapper-multipath) is used, the WWID is mapped to a name of the form /dev/mapper/mpathn. By default, this mapping is maintained in the file /var/lib/multipath/bindings. These mpathn names are persistent as long as that file is maintained.

UUID and Other Persistent Identifiers

If a storage device contains a filesystem, then that filesystem may provide one or both of the following:
  • Universally Unique Identifier (UUID)
  • Filesystem label
These identifiers are persistent, and based on metadata written on the device by certain applications.
They may also be used to access the device via the symlinks maintained by the operating system in the /dev/disk/by-label/ (e.g. boot -> ../../sda1) and /dev/disk/by-uuid/ (e.g. f8bf09e3-4c16-4d91-bd5e-6f62da165c08 -> ../../sda1) directories.
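For instance, a hypothetical /etc/fstab entry can mount a filesystem by its UUID instead of its /dev/sd name (reusing the example UUID shown above; the mount point is illustrative):
 UUID=f8bf09e3-4c16-4d91-bd5e-6f62da165c08  /boot  ext3  defaults  1 2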

Removing a Storage Device from system

To ensure a Clean Device Removal:
  1. Use umount to unmount any file systems that mounted the device
  2. Remove the device from any md and LVM volume using it
  3. If the device uses multipathing, run multipath -l and note all the paths to the device. Afterwards, remove the multipathed device using multipath -f device.
    # multipath -l
    # multipath -f mpath1
  4. Run blockdev --flushbufs device to flush any outstanding I/O to all paths to the device.
    # blockdev --flushbufs /dev/sdc
    # blockdev --flushbufs /dev/sdd
  5. Finally, remove each path to the device from the SCSI subsystem
    # echo 1 > /sys/block/sdc/device/delete
    # echo 1 > /sys/block/sdd/device/delete

Re-read The Partition Table Without Rebooting

If a new partition is created on the boot disk using fdisk, the Linux system normally needs to be rebooted for the new partition to be recognized. However, with the partprobe command you should be able to create a new file system on it without rebooting the box. partprobe is a program that informs the operating system kernel of partition table changes by requesting that the operating system re-read the partition table.
After the fdisk session (which makes the changes to the partition table), just type the following command:
  # partprobe /dev/sdX

Multipath in RHEL

For detailed information about setting up DM-Multipath in RHEL, refer to the official guide linked below.
RHEL Device Mapper Multipath Configuration and Administration Guide
  • dm-multipath kernel module: It reroutes I/O and supports failover for paths and path groups.
  • multipath command: Lists and configures multipath devices. Normally started up with /etc/rc.sysinit, it can also be started up by a udev program whenever a block device is added, or it can be run by the initramfs file system.
  • multipathd daemon: Monitors paths; as paths fail and come back, it may initiate path group switches. Provides for interactive changes to multipath devices. This must be restarted for any changes to the /etc/multipath.conf file to take effect.
  • kpartx command: Creates device mapper devices for the partitions on a device. It is necessary to use this command for DOS-based partitions with DM-MP. kpartx is provided in its own package, but the device-mapper-multipath package depends on it.

Multipath Device Identifiers

Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally unique and unchanging. By default, the name of a multipath device is set to its WWID. Alternately, you can set the user_friendly_names option in the multipath configuration file, which sets the alias to a node-unique name of the form mpathn.
You can also set the name of a multipath device to a name of your choosing by using the alias option in the multipaths section of the multipath configuration file.
For example, a node with two HBAs attached to a storage controller with two ports via a single unzoned FC switch sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM-Multipath creates a single device with a unique WWID that reroutes I/O to those four underlying devices according to the multipath configuration. When the user_friendly_names configuration option is set to yes, the name of the multipath device is set to mpathn.
When new devices are brought under the control of DM-Multipath, the new devices may be seen in three different places under the /dev directory: /dev/mapper/mpathn, /dev/mpath/mpathn, and /dev/dm-n.
  • The devices in /dev/mapper are created early in the boot process. Use these devices to access the multipathed devices, for example when creating logical volumes.
  • The devices in /dev/mpath are provided as a convenience so that all multipathed devices can be seen in one directory. These devices are created by the udev device manager and may not be available on startup when the system needs to access them. Do not use these devices for creating logical volumes or filesystems.
  • Any devices of the form /dev/dm-n are for internal use only and should never be used.

Setting up DM-Multipath (Device Mapper Multipath)

01. Install the device-mapper-multipath package if not already installed
    # yum install device-mapper-multipath
02. Edit the multipath.conf file
  • comment out the default blacklist
    blacklist {
    devnode "*"
    }
  • change any of the existing defaults as needed
  • save the configuration file
03. Start the multipath daemons and create the multipath device with the multipath command.
 # modprobe dm-multipath
 # service multipathd start
 # multipath -v2
 # chkconfig multipathd on
Since the value of user_friendly_names is set to yes in the configuration file, the multipath devices will be created as /dev/mapper/mpathn.
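As a quick (optional) check that the devices were created, list the current multipath topology and the device nodes:
 # multipath -ll
 # ls /dev/mapper/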

Systems with separate /var filesystem

Problem: When using the user_friendly_names feature the device-mapper-multipath package stores a persistent database of device name to WWID mappings in the /var file system. This is problematic for systems that have been configured to mount a separate file system on the /var directory since the database will not be available during booting until this file system has been mounted.
This may lead to inconsistent device naming, configuration, and in some circumstances data corruption. This is due to devices being mis-identified during the boot process.
To avoid these problems it is required to relocate the bindings file database to a path within the root file system.
Solution: To relocate the bindings file on a system using multipath and having a separate /var file system, perform the following steps:
01. On systems that have been configured with a separate /var file system, the following configuration directives should be added to the defaults section of the multipath.conf configuration file:
 ## Use user friendly names, instead of using WWIDs as names.
 defaults {
      user_friendly_names     yes
      bindings_file           /etc/multipath/bindings
 }
Use of a location within the /etc directory ensures that the bindings database is always available since this directory is required to be part of the root file system.
Note: Use of the bindings_file directive to specify a database path within the root file system is mandatory for systems configured with a separate /var file system.
02. Copy the existing bindings database to the new location
If multipath has already been configured on the system and device aliases have already been stored in the database, the existing bindings file should be copied to the new location
 # cp /var/lib/multipath/bindings /etc/multipath/bindings
03. Flush and reconfigure all multipath devices or reboot
 # service multipathd reload
 # multipath -F
 # multipath 
 
 

Systeminfo

Memory and swap
The free command is used to find out the physical memory installed on a system; 'free -m' gives the sizes in MB.
  # free -m
             total       used       free     shared    buffers     cached
  Mem:         48298      38577       9721          0        360      34202
  -/+ buffers/cache:       4014      44284
  Swap:        20479        365      20114
The second line starting with -/+ buffers/cache: tells us how much of the memory in the buffers/cache is used by the applications and how much is free. Keep in mind that in general the cache is filled with disk IO cached data. The cache can be very easily reclaimed by the OS for applications. Linux will always try to use free RAM for caching IO, so "free" will almost always be very low. Therefore the line "-/+ buffers/cache:" is shown, because it shows how much memory is free when ignoring caches; caches will be freed automatically if memory gets scarce, so they do not really matter.
Let BUFFERS + CACHED from first line be value X.
X subtracted from the USED memory from the first line gives how much RAM is used by applications (USED value on second line)
X added to the FREE memory on the first line gives how much RAM applications can still request from the OS.
A Linux system is really low on memory if the free value in "-/+ buffers/cache:" gets low.
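As a worked example using the free output above: X = 360 + 34202 = 34562. Memory used by applications = 38577 - 34562 = 4015 (shown as 4014 on the second line because free -m rounds each value separately), and memory still available to applications = 9721 + 34562 = 44283 (shown as 44284 for the same reason).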
Also we can use ‘cat /proc/meminfo’ to find out the memory usage
To find out current swap usage:
  # swapon -s
  Filename                                Type            Size    Used    Priority
  /dev/sda6                               partition       17125248        0       -1
  /dev/sdb1                               partition       143364020       0       -2
To find out the swap space used by each process
#!/bin/bash
# Report per-process swap usage by summing the Swap: fields in /proc/<pid>/smaps
swapusage () {
swap_total=0
echo "Process_Name PID Swap_Used"
for i in /proc/[0-9]*; do
  pid=$(echo $i | sed -e 's/\/proc\///g')
  # Sum the Swap: lines for this process; errors from processes that exit mid-scan are ignored
  swap_pid=$(awk 'BEGIN{total=0}/^Swap:/{total+=$2}END{print total}' /proc/$pid/smaps 2>/dev/null)
  if [ -n "$swap_pid" ] && [ "$swap_pid" -gt 0 ]; then
    name=$(grep ^Name: /proc/$pid/status | awk '{print $2}')
    echo "${name} ${pid} ${swap_pid} kB"
    let swap_total+=$swap_pid
  fi
done
echo -e "=================================================================="
echo "Total: ${swap_total} kB"
}
swapusage | awk '{printf " %-40s%-15s%s \n", $1,$2,$3}'
echo -e "=================================================================="
To make memory that has been dynamically added to a Linux VM available to the OS without rebooting, use the following script:
#!/bin/bash
if [ "$UID" -ne "0" ]
 then
  echo -e "You must be root to run this script.\nYou can 'sudo' to get root access"
  exit 1
fi


for MEMORY in $(ls /sys/devices/system/memory/ | grep memory)
do
 SPARSEMEM_DIR="/sys/devices/system/memory/${MEMORY}"
 echo "Found sparsemem: \"${SPARSEMEM_DIR}\" ..."
 SPARSEMEM_STATE_FILE="${SPARSEMEM_DIR}/state"
 STATE=$(cat "${SPARSEMEM_STATE_FILE}" | grep -i online)
 if [ "${STATE}" == "online" ]; then
  echo -e "\t${MEMORY} already online"
 else
  echo -e "\t${MEMORY} is new memory, onlining memory ..."
  echo online > "${SPARSEMEM_STATE_FILE}"
 fi
done

List all PCI devices

Use the lspci command to list all PCI devices. Use the command lspci -v for more verbose information or lspci -vv for very verbose output. For example, lspci can be used to determine the manufacturer, model, and memory size of a system's video card:
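a sketch is shown below; the bus ID 01:00.0 is only a placeholder, so substitute whatever ID lspci reports for the VGA controller on your system.
 # lspci | grep -i vga          # find the video card and note its bus ID
 # lspci -v -s 01:00.0          # verbose details for that device, including its memory regions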

Processor Info

To find out the number of processors installed and processor details:
cat /proc/cpuinfo

[root@Jeeva root]# cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 6
model           : 8
model name      : AMD Athlon(tm) XP 2100+
stepping        : 1
cpu MHz         : 1728.451
cache size      : 256 KB
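To simply count the logical processors listed in /proc/cpuinfo, a one-liner such as the following can be used:
 # grep -c ^processor /proc/cpuinfo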

Hardware Resources

lshw is a very good utility which can be installed from the dag repo. It gives detailed information about the current hardware configuration.
dmesg and /var/log/dmesg: When booting, the kernel generates messages cataloging detected hardware and stores them in an internal kernel buffer. This buffer can be examined using the dmesg command. The buffer contents are also copied to the /var/log/dmesg file.
/var/log/dmesg
kudzu: kudzu maintains a database of detected hardware in the /etc/sysconfig/hwconf file. When run, kudzu compares the currently detected hardware against the stored database. If any changes are found, kudzu will automatically attempt to reconfigure the system. Kudzu uses catalogs of known hardware in the /usr/share/hwdata directory.
/etc/sysconfig/hwconf
/usr/share/hwdata
/proc filesystem: This filesystem contains pseudo-files which provide detailed hardware information. The meminfo, cpuinfo, interrupts, ioports and iomem files and the bus, scsi and ide directories are a few good examples.
How to find the OS release
 cat /etc/redhat-release
How to find the OS kernel release version
 uname -r
To find out whether the running kernel is 32-bit or 64-bit
 $ uname -a
 Linux ora100 2.6.5-7.252-smp #1 SMP Tue Feb 14 11:11:04 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux
The x86_64 confirms you can run 64 bit apps.
How to view the full command path and details in process listing
 ps -ef --cols 5555  | grep openview

dmidecode

dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-readable format. This table contains a description of the system's hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision.
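A couple of typical invocations (the exact output depends entirely on the hardware and BIOS):
 # dmidecode -t system                  # dump only the System Information records
 # dmidecode -s system-serial-number    # print just the system serial number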


HA and DRBD in Linux

# cat drbd.conf

# please have a look at the example configuration file in
# /usr/share/doc/drbd/drbd.conf
#
resource r0 {
        protocol C;
        startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }
        disk {
    on-io-error   detach;
  }
        net {
  }
syncer {
        rate 20M;
  }
  on IDCeLearning01 {
    device    /dev/drbd0;
    disk      /dev/cciss/c0d0p3;
    address   192.168.100.10:7789;
    meta-disk internal;
  }
  on IDCeLearning02 {
    device    /dev/drbd0;
    disk      /dev/cciss/c0d0p3;
    address   192.168.100.20:7789;
    meta-disk internal;
  }
}


# cat /etc/ha.d/ha.cf
debugfile       /var/log/ha-debug
logfile         /var/log/ha-log
logfacility     local0
keepalive       2

deadtime        30
warntime        10
initdead        120
udpport         694

ucast           eth1 10.82.130.91
auto_failback   off
node            IDCeLearning01 IDCeLearning02


# cat /etc/ha.d/haresources
IDCeLearning01 192.168.100.15/32/192.168.100.15 drbddisk::r0 Filesystem::/dev/drbd0::/mnt/shared::ext3 nfs






User Admin

User, Group, Quota, PAM and File Security

To add, modify, and remove users, the useradd, usermod, and userdel commands are used.
userdel -r deletes the home directory as well.
We can also add/delete/modify users by manually editing /etc/passwd, /etc/shadow and /etc/group files.
When a user is created, the default .profile/.bash_profile files are copied from /etc/skel directory.
groupadd, groupmod and groupdel commands are used to create/modify/delete groups.
Password aging Policies:
chage: used to change the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change her password.
Usage: chage [-m mindays] [-M maxdays] [-d lastday] [-I inactive] [-E expiredate] [-W warndays] user
The chage command can also be used to find out the last password change, the password expiration date, the number of days between password changes, etc.
  # chage -l root
  Last password change                                    : Aug 18, 2005
  Password expires                                        : never
  Password inactive                                       : never
  Account expires                                         : never
  Minimum number of days between password change          : 0
  Maximum number of days between password change          : 99999
  Number of days of warning before password expires       : 7
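chage can also set the aging policy; for example, to give user1 a 7-day minimum, 90-day maximum and 14 days of warning (illustrative values, not a recommendation):
  # chage -m 7 -M 90 -W 14 user1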
The setgid Access Mode:
When a file is created in a directory, it belongs to the primary group of the user that created the file. However, if the setgid bit is set for the directory, new files that are created in the directory have their group ownership set to the same group as the owner of the directory.
To set gid to a directory
 chmod g+s <directory>
 chmod 2770 <directory> 
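A quick way to confirm the bit is set is to list the directory; the 's' in the group execute position indicates setgid (the directory name and ownership below are hypothetical):
 # chmod g+s /data/project
 # ls -ld /data/project
 drwxrwsr-x  2 root project 4096 Jan 25 10:00 /data/project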
User Environment:
  • /etc/skel # default template for a newly added user’s home directory
  • /etc/profile # First script executed when a user logs in to the system
  • ~/.bash_profile # executed after /etc/profile
  • ~/.bashrc # Called by ~/.bash_profile. This file contains user’s aliases and settings. It runs whenever a user starts up a non-login interactive shell and the default users ~/.bash_profile also calls it whenever the user logs in.
  • /etc/bashrc # Usually called by ~/.bash_profile. Contains global aliases and settings. It allows the system administrator to set aliases for every user, like c for clear and h for history.
  • /etc/profile.d # This directory contains initialization scripts specific to software packages installed by RPM. These scripts are called by /etc/profile on login, or by /etc/bashrc if called from a non-login interactive shell.

PAM: Pluggable Authentication Module

Redhat Linux uses PAM system to check for authorized users. PAM includes a group of dynamically loadable library modules that govern how individual applications verify their users. You can modify PAM configuration files to suit your needs. PAM modules are documented in the /usr/share/doc/pam-0.XX directory
Applications call libpam.so for authentication. An application <service> linked against libpam.so will look up /etc/pam.d/<service> for configuration. If this file does not exist in /etc/pam.d, PAM will default to /etc/pam.d/other. Based on the configuration file, additional libraries are called to determine the overall success or failure of the service access.
Each line of the config file has the following syntax:
module-type control-flag module-path arguments
Example:
auth required pam_unix.so nullok
01. First field indicates the type of library module. (auth, account, password, session)
  • Authentication management (auth) Establishes the identity of a user. For example, a PAM auth command decides whether to prompt for a username and or a password.
  • Account management (account) Allows or denies access according to the account policies. For example, a PAM account command may deny access according to time, password expiration, or a specific list of restricted users.
  • Password management (password) Manages other password policies. For example, a PAM password command may limit the number of times a user can try to log in before a console is reset.
  • Session management (session) Applies settings for an application. For example, the PAM session command may set default settings for a login console.
02. The second field determines the effect an individual library has on the overall result.
  • required -- success is required; failure will still call the remaining modules, but the command will ultimately fail.
  • requisite -- failure will immediately terminate the authentication process.
  • sufficient -- success bypasses the remaining modules; failure is ignored.
  • optional -- the result is ignored.
Normally, each application that uses PAM has its own configuration file. Redhat uses /usr/lib/security/pam_stack.so and /etc/pam.d/system-auth to configure global or default tests. /usr/lib/security/pam_stack.so calls another PAM service much like a function.
Example:
/etc/pam.d/login:
 auth       required     pam_securetty.so
 auth       required     pam_stack.so service=system-auth
 auth       required     pam_nologin.so
/etc/pam.d/system-auth:
 auth       required      /lib/security/$ISA/pam_env.so
 auth       sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
 auth       required      /lib/security/$ISA/pam_deny.so
In the above example, the login service calls system-auth through pam_stack.so. pam_stack will check all auth calls for this extra service. The value of the $ISA variable is usually empty.
 Core PAM Modules
 pam_unix:  Standard Authentication
 pam_env:  Sets Environment Variable
 pam_securetty:  limit root login to secure terminals
 pam_stack:  calls another PAM service
 pam_nologin:  tests for /etc/nologin
 pam_console:  privileges for users at the console

NIS Client Configuration

NIS client requires ypbind and portmap daemons
Can define NIS server binding manually by
  • Define NISDOMAIN in /etc/sysconfig/network
  • Define the NIS domain and server specification in /etc/yp.conf
  • Update the /etc/nsswitch.conf file
Also, system can be configured as NIS client using ‘/usr/sbin/authconfig’ utility.

Linux Quota System:

The quota system enables the administrator to limit disk usage for every user. Because resource accounting must occur with every file creation, quotas must be implemented within the kernel. Quotas are enabled on a per-filesystem basis.
In order for a partition to implement quotas, it must be mounted with the usrquota or grpquota options. These options can be added to the appropriate entries in /etc/fstab. After editing, the options can be made to take effect immediately by remounting the filesystem.
To implement quota for /home file system.
01. Modify the /etc/fstab for /home file systems as follows
Device         Mount point  Filesys  Options                          dump  Fsck 
LABEL=/        /            ext3     defaults                           1   1 
LABEL=/boot    /boot        ext3     defaults                           1   2 
/dev/hdd1      /home        ext3     exec,dev,suid,rw,usrquota,grpquota 1   2 
02. Mount /home filesystem
 mount -o remount /home
Starting and Stopping quotas:
Quotas are turned on or off by running the quotaon and quotaoff commands. These commands rarely need to be run manually, because they are included in the default Red Hat Linux script /etc/rc.d/rc.sysinit.
  quotaon /home     # To turn on quotas for the /home filesystem
     (or)
  quotaon -a        # To turn on quotas for all filesystems with quota options in /etc/fstab
Editing user policies:
User policies are implemented with the edquota command. This command invokes an editor and loads a template, which can then be edited to establish the appropriate values. These values are committed to the database upon exiting the editor.
To implement a quota policy for a user
edquota <user name>
To define the grace period for the quota
edquota -t
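When edquota opens the editor for a user, the template looks roughly like this (the filesystem matches the earlier fstab example; the numbers are purely illustrative, and 0 means no limit):
 Disk quotas for user user1 (uid 500):
   Filesystem       blocks    soft    hard   inodes   soft   hard
   /dev/hdd1         24000   50000   55000     1200      0      0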
Generating quota reports: Users can inspect their disk usage and quotas by issuing /usr/bin/quota command. An administrator can generate a report of disk usages by all users with the /usr/sbin/repquota command. Users over their quota can be warned by placing /usr/sbin/warnquota in the cron job.
Create and update the quota database: The disk usage database is stored in specially named binary files within a partition's top-level directory, aquota.user and aquota.group. These files may have to be created manually using the touch command at the beginning. If the database becomes corrupted or out of sync with the actual state of the partition, it can be brought up to date by running the quotacheck command.
quotacheck -cm /home # -c Don't read existing quota files; perform a new scan and save it to disk

Using OpenLDAP for User Authentication

http://linsec.ca/usermgmt/openldap.php

 
 

VNC

VNC is used to display an X windows session running on another computer. Unlike a remote X connection, the xserver is running on the remote computer, not on your local workstation. Your workstation ( Linux or Windows ) is only displaying a copy of the display ( real or virtual ) that is running on the remote machine.
01. Install the vnc-server rpm if not already installed
    # yum install vnc-server
02. Make sure to install a window manager in order to get a normal GUI desktop, if not already installed
    # yum groupinstall "GNOME Desktop Environment" 
03. Create VNC users
    # useradd user1
    # useradd user2
    # passwd user1
    # passwd user2
04. Create VNC password for each user. This will create the .vnc directory in the respective home directory
    # su - user1
    # vncpasswd
    # su - user2
    # vncpasswd
05. Edit /etc/sysconfig/vncservers, and add the following to the end of the file
    VNCSERVERS="1:user1 2:user2"
    VNCSERVERARGS[1]="-geometry 1024x768"
    VNCSERVERARGS[2]="-geometry 1280x1024"
User1 will have a 1024x768 screen and user2 will have a 1280x1024 screen.
06. Create xstartup scripts by starting and stopping the vncserver as root
    # /sbin/service vncserver start
    # /sbin/service vncserver stop
Login to each user and edit the xstartup script. To use user1 as an example, first login as user1
    $ cd .vnc

    $ cat xstartup
    #!/bin/sh
    # Uncomment the following two lines for normal desktop:
    # unset SESSION_MANAGER
    # exec /etc/X11/xinit/xinitrc
    [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &
    xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
    twm &
Add the line indicated below to ensure that an xterm is always present, and uncomment the two lines as directed if you wish to run the user's normal desktop window manager in the VNC session. Note that in the likely reduced resolution and color depth of a VNC window the full desktop will be rather cramped and look a bit odd. If you do not uncomment the two lines you will get a gray speckled background in the VNC window.
 #!/bin/sh
 # Add the following line to ensure you always have an xterm available.
 ( while true ; do xterm ; done ) & 
 # Uncomment the following two lines for normal desktop:
 unset SESSION_MANAGER 
 exec /etc/X11/xinit/xinitrc 
 [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
 [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
 xsetroot -solid grey
 vncconfig -iconic &
 xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
 twm & 
07. Start the VNC Server
 # service vncserver start
08a. Test the VNC Server
Let us assume that mymachine has an IP address of 192.168.0.10. The URL to connect to each of the users will be:
 user1 is http://192.168.0.10:5801
 user2 is http://192.168.0.10:5802
Connect to http://192.168.0.10:5801. A java applet window will pop up showing a connection to your machine at port 1. Click the [OK] button. Enter user1's VNC password, and a 1024x768 window should open using the default window manager selected for user1. The above ports 5801 and 5802 must be open in the firewall (iptables) for the source IP addresses or subnets of a given client.
08b. Testing with a VNC Client
 For user1: vncviewer 192.168.0.10:1
 For user2: vncviewer 192.168.0.10:2
To test user1 using vncviewer, vncviewer 192.168.0.10:1. Enter user1's VNC password, and a 1024x768 window should open using user1's default window manager. The vncviewer client will connect to port 590X where X is an offset of 1,2 for user1 and user2 respectively, so these ports must be open in the firewall for the IP addresses or subnets of the clients.
09. Start the VNC Server during boot
 # chkconfig vncserver on 
10. VNC encrypted through an ssh tunnel
You will be connecting through an ssh tunnel. You will need to be able to ssh to a user on the machine. For this example, the user on the vncserver machine is user1.
10a. Edit /etc/sysconfig/vncservers, and add the option -localhost.
     VNCSERVERS="1:user1 2:user2 3:user3"
     VNCSERVERARGS[1]="-geometry 1024x768 -localhost"
     VNCSERVERARGS[2]="-geometry 1280x1024 -localhost"
     VNCSERVERARGS[3]="-geometry 800x600 -localhost"
10b. Restart vncserver
     /sbin/service vncserver restart
10c. Go to another machine with vncserver and test the VNC.
       a. vncviewer -via user1@192.168.0.10 localhost:1
       b. vncviewer -via user2@192.168.0.10 localhost:2
       c. vncviewer -via user3@192.168.0.10 localhost:3 
By default, many vncviewers will disable compression options for what it thinks is a "local" connection. Make sure to check with the vncviewer man page to enable/force compression. If not, performance may be very poor!
11. Recovery from a logout
If you logout of your desktop manager, it is gone.
  • We added a line to xstartup to give us an xterm where we can restart our window manager.
   For gnome, enter gnome-session.
   For kde, enter startkde. 

To use different desktop managers instead of the default

If you install VNC on RH Linux / Solaris, the default windowing manager is twm, which isn’t very sexy. Here is how you can switch to something a bit more interesting:
1. vi .vnc/xstartup in your home directory.
2. Comment out xterm and twm by adding a “#” character at the beginning of the line.
3a. For CDE, add the following line:
 /usr/dt/bin/dtwm
3b. For GNOME, add the following lines:
 gnome-session&
 gnome-terminal --geometry 80x24+10+10 --title="My Desktop" &
3c. For KDE, try the following. (Note: I haven’t tried this yet, so let me know if you have comments about this step.)
 startkde&
To run vncserver at the resolution you like, try the following on the command-line:
 vncserver -geometry 1250x680
The above resolution configuration assumes you have a wide monitor. Please note that vncserver can run at just about any interesting resolution you want, so set it to fit within your local monitor.

 

New

Some Useful Linux Commands:
To limit the root logins:
Add all the ttys from where you want to login directly as root to /etc/securetty file. The /etc/securetty file governs the consoles from where you can log into Linux as the root user.
To change to Language and Font Variable:
Edit the /etc/sysconfig/i18n
Acrobat 5: works fine only if LANG variable is set to en_US
tune2fs: Adjust tunable filesystem parameters on second extended filesystems
 -c max-mount-counts 
    adjust the max. mounts count between two  filesystem  checks
 -C mount-count  
    Set the number of times the filesystem has been mounted
 -i interval-between-checks[d|m|w]
    Adjust the maximal time between two filesystem checks.
 -l List the contents of the filesystem superblock
To list the open ports in Linux:
 # nmap hostname
strace
In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option
 -p pid   Attach  to  the  process with the process ID pid and begin tracing
 -e is a qualifying expression which modifies which events to trace or how to trace them 
 -o filename    Write  the  trace output to the file filename rather than to stderr
 -f    Trace child processes as they are  created  by  currently  traced  processes 
       as a result of the fork system call
  Replace -p [pid] with [command] to trace a specific command

 # strace -fe verbose=all -e write=all -o /tmp/strace.log -p [pid]
To enable XDMCP broadcast :
In Red Hat Linux 7.1
Edit /etc/X11/gdm/gdm.conf and look for the [XDMCP] section and change Enable=false to Enable=true
In Mandrake 8.1
edit the file /usr/share/config/kdm/kdmrc and locate:
[Xdmcp]
Enable=false
change the false to true
save the file and then restart the X server (logout and pick menu -> Restart Xserver)


Tips

How to scan the SCSI bus with a 2.6 kernel

If you are playing with SCSI devices (like Fibre Channel, SAS, ..) you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone. Well, this is the way to do it in CentOS with versions that have a 2.6 kernel. This means CentOS 5 and CentOS 4 (starting from update 3).
   1. Find what's the host number for the HBA:

          ls /sys/class/fc_host/

      (You'll have something like host1 or host2, I'll refer to them as host$NUMBER from now on)

   2. Ask the HBA to issue a LIP signal to rescan the FC bus:

          echo 1 >/sys/class/fc_host/host$NUMBER/issue_lip

   3. Wait around 15 seconds for the LIP command to have effect

   4. Ask Linux to rescan the SCSI devices on that HBA:

          echo "- - -"  >/sys/class/scsi_host/host$NUMBER/scan

      The wildcards "- - -" mean to look at every channel, every target, every lun.
To source the .bashrc in gnome
 # cat $HOME/.gnomerc
 source $HOME/.bashrc
To support Windows TrueType fonts in Linux
  1. Create a directory called TTF in /usr/share/fonts directory
  2. Copy all the windows TTF fonts to /usr/share/fonts/TTF directory
  3. Restart the xfs service
To install TTF fonts on a per-user basis in Linux
  1. Create .fonts directory in $HOME
  2. copy the .ttf font file to $HOME/.fonts directory
To disable the colors in directory listings
  1. Open /etc/DIR_COLORS.xterm file in vi
  2. Search for 'COLOR tty' line
  3. Change it as 'COLOR none'
  4. Save the file and quit
  5. Logout and Login back

screen in Linux

Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells).
To start a screen
 # screen
To list the current screen sessions
 # screen -ls
 There is a screen on:
         13086.pts-2.ae  (Attached)
To resume detached session 13086.pts-2.ae
  # screen -r 13086.pts-2.ae
To re-attach to session 13086.pts-2.ae, detaching it first if necessary
 # screen -d -r 13086.pts-2.ae
To attach to an already attached screen
 # screen -x 13086.pts-2.ae
To detach a screen
 Ctrl+a then d
or by using the following screen command
 # screen -d 13086.pts-2.ae
To start a screen session with the session name "testsession"
 # screen -S testsession
To autostart a program after logging into gnome
Applications --> Preferences --> More preferences --> Sessions --> Startup Programs --> Add
To start a gnome session from xterm session
 gnome-session start
To start a kde session, from xterm session
 startkde

Resolution problem on Wide screen displays with intel 9xx series display adapters
01. download and install 915resolution rpm package
02. List all the resolution supported using "915resolution -l" command
03. If the required resolution is not listed, override some mode (eg 4b) with the required resolution
    915resolution 4b 1680 1050 24
04. add the line "/usr/sbin/915resolution 4b 1680 1050 24" to /etc/rc.local and reboot the system

To make Dell wireless work
01. Install ndiswrapper
a. Install the dev packages
b. Install the kernel source
    yum install kernel-devel
c. Download and install ndiswrapper
    wget http://internap.dl.sourceforge.net/sourceforge/ndiswrapper/ndiswrapper-1.41.tar.gz
    tar -xzvf ndiswrapper-1.41.tar.gz
    make uninstall
    make
    make install
d. Download the dell 1390 WLAN driver
  wget http://ftp.us.dell.com/network/R151517.EXE
  unzip -a R151517.EXE
  cd DRIVERS
e. ndiswrapper -i bcmwl5.inf
  installing bcmwl5 ...
f. [root@dhcp51-52 DRIVER]# ndiswrapper -l
  bcmwl5 : driver installed
  device (14E4:4311) present
g.[root@dhcp51-52 DRIVER]# ndiswrapper -m
 module configuration already contains alias directive
h. [root@dhcp51-52 DRIVER]# modprobe ndiswrapper
i. add the following line to /etc/modprobe.conf
   alias wlan0 ndiswrapper
j. To automatically load the ndiswrapper module at boot, create a file called ndiswrapper.modules with the following line in it. Also make it executable using the chmod 755 /etc/sysconfig/modules/ndiswrapper.modules command
   cat /etc/sysconfig/modules/ndiswrapper.modules
   modprobe ndiswrapper
 -- Reboot your system now --
Bash Shell help
 http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_01.html

 # updatedb             # build/refresh the locate file-name database
 # locate <filename>    # search the database for matching file names
 # makewhatis           # build the whatis database used by man -k
 # man -k <keyword>     # search man page descriptions for a keyword

To install Java Plugin for Firefox

01. Install the Java JRE
02. Create the link in the Firefox plugins directory
    # ln -s /usr/java/latest/lib/amd64/libnpjp2.so libnpjp2.so
or create the link in the user's Firefox plugins directory.
Note: The path to libnpjp2.so will change with each version of Java
   # cd $HOME/.mozilla/plugins
   # ln -s <path_to_libnpjp2.so> .
