Friday, December 31, 2010
DHCP - Dynamic Host Configuration Protocol
DHCP is a very common protocol and we often hear about it, yet it is more complex than it looks. The DHCP IP address assignment process goes through a few steps, explained in this article.
DHCP stands for Dynamic Host Configuration Protocol and is used to automatically assign IP configuration to hosts connecting to a network. It provides a framework for passing configuration information to hosts on a TCP/IP network and is based on the Bootstrap Protocol (BOOTP). A DHCP client makes a request to a DHCP server, which may or may not reside on the same subnet. The automatic distribution of IP configuration information to hosts eases the administrative burden of maintaining IP networks. In its simplest form, DHCP distributes the IP address, subnet mask and default gateway to a host, but it can include other configuration parameters such as name servers and NetBIOS configuration.
A DHCP client goes through six stages during the DHCP process. These stages are:
- Initializing
- Selecting
- Requesting
- Binding
- Renewing
- Rebinding
The DHCP client starts the DHCP process by issuing a DHCPDISCOVER message on its local subnet to UDP port 67. Since the client does not yet know what subnet it belongs to, a general broadcast is used (destination address 255.255.255.255). If the DHCP server is located on a different subnet, a DHCP relay agent must be used. The DHCP relay agent can take several forms; the ip helper-address IOS command is used to set up a DHCP relay agent on a Cisco router.
The DHCP relay agent forwards the DHCPDISCOVER message to a subnet that contains a DHCP server. Once the DHCP server receives the DHCPDISCOVER message, it replies with a DHCPOFFER message, which contains the IP configuration information for the client. The DHCPOFFER message is sent as a broadcast on UDP port 68. The client knows that the DHCPOFFER message is intended for it because the client's MAC address is included in the message.
If the client is on a different subnet than the server, the message is sent unicast to the DHCP-relay agent on UDP port 67. The DHCP-relay agent broadcasts the DHCPOFFER on the client's subnet on UDP port 68.
After the client receives the DHCPOFFER, it sends a DHCPREQUEST message to the server. The DHCPREQUEST message informs the server that it accepts the parameters offered in the DHCPOFFER message. The DHCPREQUEST is a broadcast message, but it includes the server identifier (the IP address of the chosen server), so that other DHCP servers on the network know which server is serving the client.
The DHCP server sends a DHCPACK message to the client to acknowledge the DHCPREQUEST. The DHCPACK message contains all the configuration information requested by the client. After the client receives the DHCPACK, it binds the IP address and is ready to communicate on the network. If the server is unable to provide the requested configuration, it sends a DHCPNAK message to the client, and the client resends the DHCPREQUEST message. If the DHCPREQUEST message does not return a DHCPACK after four attempts, the client starts the DHCP process from the beginning and sends a new DHCPDISCOVER message. There is a great diagram of the DHCP process at the "Understanding DHCP" link at the end of this article.
After the client receives the DHCPACK, it sends out an ARP request for the assigned IP address. If it gets a reply to the ARP request, the IP address is already in use on the network; the client then sends a DHCPDECLINE to the server and issues a new DHCPREQUEST. This step is optional and often not performed.
Since DHCP relies on broadcasts, two PCs on different networks (or VLANs) cannot reach the same DHCP server directly. Does that mean we should have one dedicated DHCP server in each VLAN? No: on Cisco devices, the ip helper-address command relays DHCP broadcasts from one VLAN to another.
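On a Cisco router, the relay agent mentioned above is a single interface-level command. A minimal sketch (the VLAN interface name and the server address 10.1.1.5 are illustrative assumptions):

```
! Interface facing the DHCP clients; 10.1.1.5 is an assumed DHCP server address
interface Vlan10
 ip helper-address 10.1.1.5
```

With this in place, the router forwards the clients' DHCPDISCOVER broadcasts as unicasts to 10.1.1.5 and relays the replies back onto the clients' VLAN.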
Tuesday, April 27, 2010
Linux Startup Scripts
Boot sequence summary
- BIOS
- Master Boot Record (MBR)
- Kernel
- init
BIOS
Load boot sector from one of:
- Floppy
- CDROM
- SCSI drive
- IDE drive
Master Boot Record
- MBR (loaded from /dev/hda or /dev/sda) contains:
- lilo
- load kernel (image=), or
- load partition boot sector (other=)
- DOS
- load "bootable" partition boot sector (set with fdisk)
- lilo
- partition boot sector (eg /dev/hda2) contains:
- DOS
- loadlin
- lilo
- kernel
- DOS
LILO
One minute guide to installing a new kernel
- edit /etc/lilo.conf
- duplicate image= section, eg:
image=/bzImage-2.2.12
label=12
read-only
- man lilo.conf for details
- run /sbin/lilo
- (copy modules)
- reboot to test
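Putting the steps together, a minimal /etc/lilo.conf might look like this (the disk, kernel paths and labels are illustrative assumptions):

```
boot=/dev/hda
prompt
timeout=50
default=12
# old, known-good kernel kept as a fallback
image=/bzImage-2.2.10
    label=10
    root=/dev/hda2
    read-only
# newly installed kernel
image=/bzImage-2.2.12
    label=12
    root=/dev/hda2
    read-only
```

Remember that /sbin/lilo must be re-run after every edit, since the boot map is not re-read automatically.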
Kernel
- initialise devices
- (optionally loads initrd, see below)
- mount root FS
- specified by lilo or loadlin
- kernel prints:
- VFS: Mounted root (ext2 filesystem) readonly.
- run /sbin/init, PID 1
- can be changed with init=
- init prints:
- INIT: version 2.76 booting
initrd
Allows setup to be performed before root FS is mounted
- lilo or loadlin loads ram disk image
- kernel runs /linuxrc
- load modules
- initialise devices
- /linuxrc exits
- "real" root is mounted
- kernel runs /sbin/init
Details in /usr/src/linux/Documentation/initrd.txt
/sbin/init
- reads /etc/inittab
- runs script defined by this line:
- si::sysinit:/etc/init.d/rcS
- switches to runlevel defined by
- id:3:initdefault:
sysinit
- debian: /etc/init.d/rcS which runs
- /etc/rcS.d/S* scripts
- symlinks to /etc/init.d/*
- /etc/rc.boot/* (deprecated)
- /etc/rcS.d/S* scripts
- redhat: /etc/rc.d/rc.sysinit script which
- load modules
- check root FS and mount RW
- mount local FS
- setup network
- mount remote FS
Example Debian /etc/rcS.d/ directory
README
S05keymaps-lct.sh -> ../init.d/keymaps-lct.sh
S10checkroot.sh -> ../init.d/checkroot.sh
S20modutils -> ../init.d/modutils
S30checkfs.sh -> ../init.d/checkfs.sh
S35devpts.sh -> ../init.d/devpts.sh
S35mountall.sh -> ../init.d/mountall.sh
S35umsdos -> ../init.d/umsdos
S40hostname.sh -> ../init.d/hostname.sh
S40network -> ../init.d/network
S41ipmasq -> ../init.d/ipmasq
S45mountnfs.sh -> ../init.d/mountnfs.sh
S48console-screen.sh -> ../init.d/console-screen.sh
S50hwclock.sh -> ../init.d/hwclock.sh
S55bootmisc.sh -> ../init.d/bootmisc.sh
S55urandom -> ../init.d/urandom
Run Levels
- 0 halt
- 1 single user
- 2-4 user defined
- 5 X11
- 6 Reboot
- Default in /etc/inittab, eg
- id:3:initdefault:
- Change using /sbin/telinit
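The default runlevel is just a field in /etc/inittab, so it can be read with a one-liner. A small sketch (it writes a sample file instead of reading the real /etc/inittab):

```shell
# Write a sample inittab entry, then print field 2 of the line whose
# action field (field 3) is "initdefault" -- that field is the default runlevel.
printf 'id:3:initdefault:\n' > /tmp/inittab.sample
awk -F: '$3 == "initdefault" { print $2 }' /tmp/inittab.sample   # prints 3
```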
Run Level programs
- Run programs for specified run level
- /etc/inittab lines:
- 1:2345:respawn:/sbin/getty 9600 tty1
- Always running in runlevels 2, 3, 4, or 5
- Displays login on console (tty1)
- 2:234:respawn:/sbin/getty 9600 tty2
- Always running in runlevels 2, 3, or 4
- Displays login on console (tty2)
- l3:3:wait:/etc/init.d/rc 3
- Run once when switching to runlevel 3.
- Uses scripts stored in /etc/rc3.d/
- ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
- Run when control-alt-delete is pressed
Typical /etc/rc3.d/ directory
When changing runlevels, /etc/init.d/rc 3:
- Kills K## scripts
- Starts S## scripts
K25nfs-server -> ../init.d/nfs-server
K99xdm -> ../init.d/xdm
S10sysklogd -> ../init.d/sysklogd
S12kerneld -> ../init.d/kerneld
S15netstd_init -> ../init.d/netstd_init
S18netbase -> ../init.d/netbase
S20acct -> ../init.d/acct
S20anacron -> ../init.d/anacron
S20gpm -> ../init.d/gpm
S20postfix -> ../init.d/postfix
S20ppp -> ../init.d/ppp
S20ssh -> ../init.d/ssh
S20xfs -> ../init.d/xfs
S20xfstt -> ../init.d/xfstt
S20xntp3 -> ../init.d/xntp3
S89atd -> ../init.d/atd
S89cron -> ../init.d/cron
S99rmnologin -> ../init.d/rmnologin
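The kill-then-start behaviour of /etc/init.d/rc can be sketched as a small shell loop (a simplification; the real script also consults the previous runlevel before deciding what to run):

```shell
#!/bin/sh
# Simplified sketch of /etc/init.d/rc entering a runlevel: stop every K##
# script, then start every S## script, in lexical (number) order.
rc_switch() {
    rcdir="$1"                                # e.g. /etc/rc3.d
    for s in "$rcdir"/K[0-9][0-9]*; do
        [ -x "$s" ] && "$s" stop
    done
    for s in "$rcdir"/S[0-9][0-9]*; do
        [ -x "$s" ] && "$s" start
    done
    return 0
}
```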
Boot Summary
- lilo
- /etc/lilo.conf
- debian runs
- /etc/rcS.d/S* and /etc/rc.boot/
- /etc/rc3.d/S* scripts
- redhat runs
- /etc/rc.d/rc.sysinit
- /etc/rc.d/rc3.d/S* scripts
Tuesday, April 6, 2010
Create bootable USB drive
Here are quick steps on a CentOS 5.3 box (should be identical on any RH-based distro) to create a bootable USB stick of the latest Fedora 11 distribution:
1. Check whether required tools are already installed or not:
# rpm -q livecd-tools
2. Install tools:
# yum install livecd-tools
3. Insert your USB stick into one of the USB ports; it should get automatically detected and mounted. Make sure your stick has at least 1 GB free space. Jump to step #7, as it's absolutely not necessary to format it; but if there's no worthy data on it and you are willing to clean it completely before moving forward, here is the way to proceed after unmounting it:
# fdisk -l /dev/sda
Disk /dev/sda: 4043 MB, 4043309056 bytes
125 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 7750 * 512 = 3968000 bytes
Device Boot Start End Blocks Id System
4. Proceed to format:
# fdisk /dev/sda
Command (m for help): d
No partition is defined yet!
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1018, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1018, default 1018):
Using default value 1018
Command (m for help): a
Partition number (1-4): 1
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 6
Changed system type of partition 1 to 6 (FAT16)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
## Here, after starting fdisk on the USB device, i.e. /dev/sda (it may be different on your machine, so please make sure you choose the correct one), we deleted (d) any existing partitions, then created a new one (n) of type primary (p) using all available space. Then we made this partition active (a) and assigned (t) it the FAT16 filesystem (6). Finally we saved (wrote) these changes to the device by pressing w.
5. Issue partprobe to detect the new changes and check:
# partprobe
# fdisk -l /dev/sda
Disk /dev/sda: 4043 MB, 4043309056 bytes
125 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 7750 * 512 = 3968000 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 1018 3944719 6 FAT16
6. Format USB stick partition (sda1) with FAT file system and mount it:
# mkdosfs -n usbdisk /dev/sda1
mkdosfs 2.11 (12 Mar 2005)
# mount /dev/sda1 /mnt/cam
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda2 43G 17G 24G 42% /
/dev/hda5 51G 20G 29G 41% /var
tmpfs 248M 0 248M 0% /dev/shm
/dev/sda1 3.8G 4.0K 3.8G 1% /mnt/cam
7. The USB stick is ready for use. You should have the ISO image of Fedora 11 on your machine to proceed. If you didn't download it yet, get it from here. I'm a KDE fan; if you are too, grab it from here.
8. Start the actual process of creating the bootable stick and transferring files now. The command syntax is: livecd-iso-to-disk <iso-file> <usb-partition>
# livecd-iso-to-disk F11-i686-Live.iso /dev/sda1
Verifying image...
F11-i686-Live.iso: f21debace1339dbdefff323064d40164
Fragment sums: c22bcc22b29728f2a7136396121621caf6c18169f3326e5c7e66153cd57e
Fragment count: 20
Percent complete: 100.0% Fragment[20/20] -> OK
100.0
The supported flag value is 0
The media check is complete, the result is: PASS.
It is OK to install from this media.
Copying live image to USB stick
Updating boot config file
Installing boot loader
USB stick set up as live image!
9. All done! Grab any PC available nearby and restart it (after saving your fellow's work). Go to the BIOS menu, change the boot option from HDD/CD to USB drive, insert the USB stick into one of the available USB ports, start the PC and enjoy!!
How to install, set up and configure HAProxy load balancer for content switching
Sometimes we have different servers with different contents, such as one set of servers with all the static contents (HTML, image files) of a website, while another set of servers has the dynamic contents (CGI, Perl, PHP scripts). This type of config is beneficial in situations where you want to serve your static data directly from a CDN for faster response and dynamic contents from your own servers.
While deploying a load balancer, we need some mechanism to tell the load balancer to forward requests to different sets of servers based on specified conditions. Here I'm using the HAProxy load balancer on a CentOS 5 box. This is a very small test setup in the Amazon EC2 environment with 3 small instances.
Let's start the action. Log in to the server where you want to install HAProxy, then download and extract it. You can download the source and compile it, but as it's a single executable file, I prefer to download the precompiled file, for being lazy.
# mkdir /usr/local/haproxy
# cd /usr/local/haproxy
# wget http://haproxy.1wt.eu/download/1.3/bin/haproxy-1.3.15.2-pcre-40kses-splice-linux-i586.notstripped.gz
# gunzip haproxy-1.3.15.2-pcre-40kses-splice-linux-i586.notstripped.gz
# mv haproxy-1.3.15.2-pcre-40kses-splice-linux-i586.notstripped haproxy
# chmod 700 haproxy
As an example, we have two backend servers/domains, one to serve static contents and other for dynamic contents. Let’s create config file:
# vi haproxy.cfg
defaults
    balance roundrobin
    cookie SERVERID insert indirect

frontend www 10.252.130.162:80
    mode http
    acl dyn_content url_sub cgi-bin
    use_backend dyn_server if dyn_content
    default_backend stat_server

backend dyn_server
    mode http
    server dserver1 dynamic.example.com:80 cookie A check

backend stat_server
    mode http
    server sserver1 static.example.com:80 cookie B check
Here, 10.252.130.162 is the IP of your load balancer server. An HAProxy configuration file has several sections, called defaults, listen, frontend and backend. We used a cookie to forward all subsequent requests from the same user to the same backend. The main thing here is the acl in the frontend section, which states that if the word "cgi-bin" appears in the user's URL then the dyn_server backend is used; otherwise the default backend, stat_server, is used. You should refer to the HAProxy documentation for further configuration information.
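The routing decision the acl makes is simple substring matching; here is a plain-shell sketch of the equivalent logic (pick_backend is a hypothetical helper for illustration, not part of HAProxy):

```shell
#!/bin/sh
# Mimic the "url_sub cgi-bin" ACL: any URL containing "cgi-bin" is sent to
# dyn_server; everything else falls through to the default backend, stat_server.
pick_backend() {
    case "$1" in
        *cgi-bin*) echo dyn_server ;;
        *)         echo stat_server ;;
    esac
}
pick_backend /cgi-bin/search.cgi   # prints dyn_server
pick_backend /images/logo.png      # prints stat_server
```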
Save the file and check it for syntax:
# ./haproxy -f ./haproxy.cfg -c
It will throw warnings about missing timeouts in some sections; you can ignore these warnings. If there's any error, check the config file again.
Run HAProxy:
# ./haproxy -f ./haproxy.cfg -p /var/run/haproxy.pid
Put the load balancer's public IP/domain name in your browser and test this setup; it should work as expected.
Bash script to backup essential log files of Linux Server
Here's a small bash script to back up important log files from a server to a backup server. You should customize it for your environment. I've deployed this script on some hosts and it's working fine for me, but I'm not making any guarantee that it will work for you as well.
Task: The two most important log files in any Red Hat based distro are /var/log/secure and /var/log/messages. These are the basic log files, and there are more when your server performs additional roles such as database server, web server, mail server etc. You can look at the log files of other installed software as well and add them to this script. I have a separate backup server where I want to transfer my log files after compressing them. You can transfer them to some local location in case you don't have a separate backup host or environment.
#!/bin/bash
##
## hostlogBackup.sh: perform backup of essential log files. Developed by Jagbir Singh (contact AT jagbir DOT info)
## You are free to use or distribute it in whatever means but I'll be happy if you send me a copy of updated one.
##
## create some variables
yesterDate=`date -d "-1 day" +%d-%b-%y` ## yesterday's date
toDay=`date +%u`; ## day of week in numeric
bakServer="backup-user@server-ip" ## backup server address user@hostname, use directory name if backup in same host
bakHost="$bakServer:/backup/host/firsthost" ## specify directory where log files will be copied
bakHostDaily="$bakHost/daily/" ## directory for daily backup files
cd /var/log ## change directory where important log file resides
# compress messages log file
cp messages messages-log
/bin/tar czf messages_$toDay.tgz messages-log
# compress secure log file
cp secure secure-log
/bin/tar czf secure_$toDay.tgz secure-log
# compress mysqld log file. comment out the following 2 lines if you are not using mysql
cp mysqld.log mysqld-log
/bin/tar czf mysqld_$toDay.tgz mysqld-log
# compress apache log files. uncomment if your server runs the apache service.
#cp httpd/access_log ./access-log
#cp httpd/error_log ./error-log
#/bin/tar czf httpd_$toDay.tgz access-log error-log
# copy all compressed files to the backup server. set up key-based authentication
# for passwordless scp, otherwise you will have to enter a password
/usr/bin/scp *_$toDay.tgz $bakHostDaily
# remove all temp files
rm -f *-log
rm -f *_$toDay.tgz
# Apart from the daily backup, take a weekly backup on Monday for files which get rotated weekly.
if [ "$toDay" = "1" ]; then
    # take backup of messages log file
    if [ -f messages.1 ]; then
        /bin/tar czf message_$yesterDate.tgz messages.1
        /usr/bin/scp message_$yesterDate.tgz $bakHost
        rm -f message_$yesterDate.tgz
    fi
    # take backup of secure log file
    if [ -f secure.1 ]; then
        /bin/tar czf secure_$yesterDate.tgz secure.1
        /usr/bin/scp secure_$yesterDate.tgz $bakHost
        rm -f secure_$yesterDate.tgz
    fi
fi
Again, I'm stressing the point that this is a very basic script and doesn't handle any unforeseen situations, like a file that doesn't exist, or what happens if compression or copying to the other server fails. You have to handle that yourself.
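To illustrate the kind of checking the script leaves out, each compress step could be wrapped so a failure is reported instead of silently ignored (backup_one is a hypothetical helper, not part of the script above):

```shell
#!/bin/sh
# Hypothetical wrapper: compress one log file, reporting problems on stderr
# and returning a non-zero status so the caller can react.
backup_one() {
    src="$1"; archive="$2"
    [ -f "$src" ] || { echo "backup_one: missing $src" >&2; return 1; }
    /bin/tar czf "$archive" "$src" 2>/dev/null || {
        echo "backup_one: tar failed for $src" >&2; return 1; }
}
```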
The point of taking a weekly backup is that the rotated file combines a week's log in a single file, which is easy to retain. The daily backup files here get overwritten, but I want to retain the weekly files for a longer duration.
Now you should run this script daily through cron, at around 4:30 am. Why 4:30? Because the daily cron jobs that rotate logs normally run shortly after 4:00 am, and you want to copy the freshly rotated files.
$ crontab -l
# backup logs to backup server daily
30 4 * * * /bin/bash /root/logBackup/hostlogBackup.sh
That’s all we need to do. Let me know your views about it.
Top 5 most useful commands or tools for Linux administrators
There are plenty of tools which are very useful for Linux admins. Here I am just trying to pick 5 of them, as used by a typical Linux administrator in day-to-day operations. A tool I find most useful may not fit your usage, and it's definitely possible you know some awesome tool I forgot to include here; in that case, please mention it in the comments. One more thing: I am listing tools which are somewhat optional rather than absolutely required, and excluding tools which have no viable alternative and which every Linux admin has to use, such as SSH, SCP etc.
#5. head/tail
Most of the time, the sole purpose of logging in to a server is to diagnose some issue, and the common way to start is to look at logs: logs of different applications like Apache, MySQL, mail logs etc. What do you use to look at logs? Isn't that tail? Similarly, we sometimes use head to check the first few lines of a file.
Few examples:
* Continuously check Apache error log file:
$ tail -f /var/log/httpd/error_log
* View the first 15 lines of the MySQL log:
$ head -15 /var/log/mysqld.log
#4. vi/nano/emacs
A text editor is frequently needed, mainly to create or update config files. I prefer vim, simply because I am very comfortable with it and remember some of its useful commands for quick editing.
A few examples of working with vi. Open a file with vi and, without going into insert mode, here are useful keys you can press:
=> jump to end of line
$
=> start of line
0
=> Delete rest of line
D
=> Repeat the last command given:
. (dot)
=> add 'maal' to the end of every line. 1 is line 1, $ is the last line
:1,$ s/$/maal/
=> put 'bingo' at the start of lines 5-10
:5,10 s/^/bingo/
=> change foo to bar for all occurrences in the rest of the file from the cursor position
:.,$ s/foo/bar/g
=> Delete current line and go into insert mode.
cc
=> Remove the ^M from files that came from windows:
:se ff=unix
=> Turn on/off display of line numbers:
:set nu
:set nonu
=> if you want actual line numbers in your file:
:%!cat -n
=> find the word under cursor
* (star)
#3. screen
screen is one of the most underutilized commands in the *nix world. Take a scenario: when was the last time you issued a command on a remote server and found out that it would take hours to complete? Or you needed to log in to 10 servers, check something, copy files among them... and voila, your internet connection got reset and your ssh session was terminated. Here comes screen; once you start using it, you will get hooked. Screen is a terminal multiplexer that allows you to manage many processes (like ssh sessions) through one physical terminal. Each process gets its own virtual window, and you can bounce between virtual windows, interacting with each process.
Let me give you more insight. Suppose you have many servers and, ideally, you restrict ssh (port 22) access to selected IPs only. So you log in to one server which allows access from remote IPs. You can start screen there by typing 'screen' (all major Linux distributions have screen installed already). You will see a status bar. Create new screen windows by pressing Ctrl+a c, and switch between them with Ctrl+a n (next) and Ctrl+a p (previous). It offers very useful features like remote terminal session management (detaching or sharing terminal sessions), unlimited windows (unlike the hardcoded number of Linux virtual consoles), copy/paste between windows, notification of either activity or inactivity in a window, splitting the terminal (horizontally and vertically) into multiple regions, sharing terminals etc.
You can save your preferences in .screenrc. Here's my .screenrc, where I've redefined the status bar look and feel and assigned key F5 (previous window) and F6 (next window):
$ cat ~/.screenrc
# no annoying audible bell, please
vbell on
# detach on hangup
autodetach on
# don't display the copyright page
startup_message off
# emulate .logout message
pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended."
# advertise hardstatus support to $TERMCAP
termcapinfo xterm* ti@:te@
# make the shell in every window a login shell
shell -$SHELL
defscrollback 10000
# Extend the vt100 description by some sequences.
termcap vt* AF=\E[3%dm:AB=\E[4%dm
caption always
caption string '%{= wk}[ %{k}%H %{k}][%= %{= wk}%?%-Lw%?%{r}(%{r}%n*%f%t%?(%u)%?%{r})%{k}%?%+Lw%?%?%= %{k}][%{b} %d/%m %{k}%c %{k}]'
# keybindings
bind -k F5 prev
bind -k F6 next
#2. netstat/nmap
These are very useful commands for diagnosing network issues. Of course, ping/traceroute may be the most commonly used ones, but usefulness-wise, nmap and netstat go well beyond a basic ping. netstat stands for network status; nmap is a security/port scanner, or you could call it a network exploration command.
few examples of netstat:
* Display the total number of connections to port 80 (HTTP):
$ netstat -an |grep :80 |wc -l
* Display all ports your machine is listening on:
$ netstat -ant | grep LISTEN
* Scan a machine on your LAN with nmap and find out which ports are open on it:
$ nmap <ip>
#1. find and grep
A list of some routine tasks: which files are consuming most of the disk space? Delete all temporary files older than 2 days. Find out how many files have the old server name written in them, which is causing an issue. Rename all '.list' files to '.txt'. The find and grep commands are your best friends here.
The find command is used to search for files. You can specify many options with it, like files created today or files larger than a size you specify. Normally we also combine find with xargs or exec to issue commands on the files returned by find.
examples of find command:
* find top 10 largest files in /var:
$ find /var -type f -ls | sort -k 7 -r -n | head -10
* find all files having size more than 5 GB in /var/log/:
$ find /var/log/ -type f -size +5120M -exec ls -lh {} \;
* find all of today's files and copy them to another directory:
$ find /home/me/files -ctime 0 -print -exec cp {} /mnt/backup/ \;
* find all temp files older than a week and delete them:
$ find /temp/ -mtime +7 -type f | xargs /bin/rm -f
* find and rename all mp3 files, changing their uppercase names to lowercase:
$ find /home/me/music/ -type f -name '*.mp3' -exec rename 'y/[A-Z]/[a-z]/' '{}' \;
some examples of grep command:
* Print Apache’s documentroot directory name:
$ grep -i documentroot /etc/httpd/conf/httpd.conf
* View file contents without comments and empty lines:
$ grep -Ev '^$|^#' /etc/my.cnf
* print only IP address assigned to the interface:
$ ifconfig eth0 | grep 'inet addr:' | cut -d':' -f2 | awk '{ print $1}'
* How many email messages were sent on a particular date:
$ cat /var/log/maillog | grep "status=sent" | grep "May 25" | wc -l
* Find out a running process/daemon from process list (thanks to staranneph for recalling this):
ps -ef | grep mysql
* You can also note CPU/memory usage using the above. In the command output below, you can see that Plesk's statistics process alone is utilizing more than 18% CPU:
[root@myserver ~]# ps aux | grep statistics
root 8183 18.4 0.0 58384 2848 ? D 04:05 3:00 /usr/local/psa/admin/sbin/statistics
I would like to know your thoughts; is there any command or tool you think should be included in the top 5 here?