The Linux syslogd uses synchronous writes by default, which is very expensive. For services such as mail it is recommended that you disable synchronous logfile writes by editing /etc/syslog.conf and prepending a ''-'' to the logfile name. Prefixing an entry with the minus ''-'' sign omits syncing the file after every log message. Note that you might lose information if the system crashes immediately after a write attempt. Nevertheless this can give you back considerable performance, especially if you run programs that log very verbosely.
mail.*    -/var/log/mail.log
Send a "kill -HUP" to the syslogd to
make the change effective.
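For example, assuming the stock Red Hat PID file /var/run/syslogd.pid (adjust the path if your distribution uses a different one):
kill -HUP $(cat /var/run/syslogd.pid)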
The Linux command-line tool chkconfig updates and queries runlevel information for system services.
chkconfig --list [name]
chkconfig --add name
chkconfig --del name
chkconfig [--level levels] name <on|off|reset>
chkconfig [--level levels] name
chkconfig provides a simple command-line tool for maintaining the /etc/rc.d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories.
$ chkconfig --list
amd         0:off  1:off  2:off  3:off  4:on   5:off  6:off
httpd       0:off  1:off  2:off  3:on   4:on   5:on   6:off
apmd        0:off  1:off  2:on   3:off  4:on   5:off  6:off
arpwatch    0:off  1:off  2:off  3:off  4:off  5:off  6:off
atd         0:off  1:off  2:off  3:on   4:on   5:on   6:off
autofs      0:off  1:off  2:off  3:off  4:off  5:off  6:off
named       0:off  1:off  2:off  3:on   4:off  5:off  6:off
bootparamd  0:off  1:off  2:off  3:off  4:off  5:off  6:off
keytable    0:off  1:off  2:on   3:on   4:on   5:on   6:off
crond       0:off  1:off  2:on   3:on   4:on   5:on   6:off
syslog      0:off  1:off  2:on   3:on   4:on   5:on   6:off
netfs       0:off  1:off  2:off  3:off  4:on   5:on   6:off
$ chkconfig --level 3 inet on
LSOF is a utility that lists
information about files opened by processes. An open file may be a regular file, a
directory, a block special file, a character special file, an executing text reference, a
library, a stream or a network file (Internet socket, NFS file or UNIX domain
socket).
We think this utility is very handy, so you will find some examples below. For a more extensive and more fully documented set of examples, see the documentation that ships with lsof. Most of the information in this tip is from Vic Abell <abe@purdue.edu>.
Finding Uses of a Specific Open File
Finding Open Files Filling a File System
Finding Processes Blocking Umount
Finding Listening Sockets
Finding a Particular Network Connection
Finding Files Open to a Particular Command
Listing Open NFS Files
Listing Files Open by a Specific Login
Often you're interested in knowing who is using a
specific file. You know the path to it and you want lsof to tell you the processes that
have open references to it. Simple -- execute lsof and give it the path name of the file
of interest. This only works if lsof has permission to get the status.
$ lsof /etc/passwd
Oh! Oh! /tmp is filling and ls doesn't show that any
large files are being created. Can lsof help? Maybe. If there's a process that is writing
to a file that has been unlinked, lsof may be able to discover the process for you. You
ask it to list all open files on the file system where /tmp
is located. Sometimes /tmp is a file system by itself. In that case
$ lsof /tmp
is the appropriate command. If, however, /tmp is part of another file system, typically /, then you may have to ask lsof to list all files open on the containing file system and locate the offending file and its process by inspection:
$ lsof / | more
or
$ lsof / | grep ...
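A hedged shortcut, assuming your lsof build includes the optional +L (link count) support: files that have been unlinked but are still held open have a link count of zero, so you can ask for open files with fewer than one link, optionally ANDed (-a) with a file system:
$ lsof +L1
$ lsof +aL1 /tmp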
When you need to unmount a file system with the umount command, you may find the operation blocked by a process that has a file open on that file system. Lsof may be able to help you find the process:
$ lsof <file_system_name>
Sooner or later you may wonder if someone has
installed a network server that you don't know about. Lsof can list for you all the
network socket files open on your machine with:
$ lsof -i
The -i option without further qualification lists
all open Internet socket files. You can add network names or addresses, protocol names,
and service names or port numbers to the -i option to refine the search. (See the next
section.)
When you know the source or destination of a network
connection whose open files and process you'd like to identify, the -i option may help.
If, for example, you want to know what process has a connection open to or from the
Internet host named aaa.bbb.ccc, you can ask
lsof to search for it with:
$ lsof -i@aaa.bbb.ccc
If you're further interested in a particular
protocol -- TCP or UDP -- and a specific port number or service name, you can add those
discriminators to the -i information:
$ lsof -iTCP@aaa.bbb.ccc:ftp-data
When you want to look at the files open to a
particular command, you can look up the PID of the process running the command and use
lsof's -p option to specify it.
$ lsof -p <PID>
However, there's a quicker way, using lsof's -c
option, provided you don't mind seeing output for every process running the named
command.
$ lsof -c <first_characters_of_command_name_that_interest_you>
The lsof -c option is useful when you want to see
how many instances of a given command are executing and what their open files are. One
useful example is for the sendmail command.
$ lsof -c sendmail
Lsof will list all files open on remote file
systems, supported by an NFS server. Just use:
$ lsof -N
Note, however, that when run on an NFS server, lsof
will not list files open to the server from one of its clients. That's because lsof can
only examine the processes running on the machine where it is called -- i.e., on the NFS
server. If you run lsof on the NFS client, using the -N option, it will list files open
by processes on the client that are on remote NFS file systems.
If you're interested in knowing what files the
processes owned by a particular login name have open, lsof can help.
$ lsof -u<login>
or
$ lsof -u<User ID number>
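A small, hedged extension: lsof ORs its selection options by default, but the -a flag ANDs them, so a login selection can be combined with other filters. The login name is of course a placeholder:
$ lsof -a -u <login> -i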
Here are some guidelines to make Linux more resistant to attacks.
Even if you choose a minimal software set during installation, many services will still have to be disabled manually with chkconfig.
Active services can be examined
with:
$ chkconfig --list
For firewall and DMZ systems you may switch off basically everything except SSH:
$ chkconfig httpd off
$ chkconfig apmd off
$ chkconfig atd off
$ chkconfig xfs off
$ chkconfig pcmcia off
$ chkconfig lpd off
$ chkconfig nfs off
$ chkconfig gpm off
$ chkconfig linuxconf off
$ chkconfig identd off
$ chkconfig portmap off
$ chkconfig rhnsd off
$ chkconfig sendmail off
$ chkconfig xinetd off
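The same thing can be scripted; a minimal sketch (the service names are the ones listed above and may differ on your installation):
for svc in httpd apmd atd xfs pcmcia lpd nfs gpm linuxconf identd portmap rhnsd sendmail xinetd
do
    chkconfig $svc off
done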
The list of open tcp/udp ports is
now VERY small (in fact only SSH is listening):
# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address      Foreign Address    State
tcp        0      0 *:ssh              *:*                LISTEN
raw        0      0 *:icmp             *:*                7
raw        0      0 *:tcp              *:*                7
- The "init level" should be set to 3 (command line login), rather
than 5 (graphical login). If a GUI is needed, it can always be started manually with
startx.
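On Red Hat this is controlled by the initdefault entry in /etc/inittab:
id:3:initdefault: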
- To allow logins via serial port A on x86 hardware, which is useful for troubleshooting, installations and getting to know the command line, add the following to /etc/inittab:
con:23:respawn:/sbin/getty ttyS0 VC
To allow root to login via this serial port, add
ttyS0 to /etc/securetty
echo "ttyS0" >> /etc/securetty
- Environment files such as ~/.cshrc, ~/.profile, ~/.bashrc, /etc/profile and /etc/bashrc: set aliases and variables (such as VISUAL and EDITOR) for your favourite shell, and make sure PATH does not include ".". Set umask to 077 or 027.
Disk mounting
- To reduce the risk of trojan horses and unauthorised modifications, mount /var and other data disks with "nosuid" in /etc/fstab.
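A hedged /etc/fstab example; the device name and filesystem type are placeholders for whatever your system actually uses:
/dev/sda6    /var    ext3    defaults,nosuid    1 2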
- Configure /etc/hosts with a list of critical machines
(which you don't want resolved via DNS).
- DNS client (avoid if not needed): add the domain name & DNS servers to /etc/resolv.conf. Add "dns" to the "hosts" entry in /etc/nsswitch.conf (and remove the nis and nisplus entries).
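A minimal sketch of the two files (addresses and domain are placeholders):
# /etc/resolv.conf
search akadia.ch
nameserver 192.168.136.1

# /etc/nsswitch.conf
hosts: files dns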
Keyboard security
- If your hosts are in secured rooms, it might be desirable to
disable certain key functions such as the following. To disable hotkey interactive
startup, set PROMPT=no in /etc/sysconfig/init.
- On x86: ctrl-alt-delete shuts down the system through an entry in /etc/inittab like the following:
# Trap CTRL-ALT-DELETE
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
To disable it, comment the entry out and reboot, or run "killall -HUP init" to activate the change.
- Use default routes: add the IP address of the default gateway (router) to /etc/sysconfig/network.
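On Red Hat this is the GATEWAY variable; a minimal sketch (the address is a placeholder):
# /etc/sysconfig/network
GATEWAY=192.168.136.1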
- In /etc/inetd.conf, all services should be disabled: reopen
very specific services only if absolutely needed, and adapt /etc/hosts.allow and
/etc/hosts.deny.
- If a sensitive host is to be administered by several people,
consider
using a tool such as sudo.
If user accounts will be allowed on the system, consider restricting access to:
- cron: via /etc/cron.allow and cron.deny
- at: via /etc/at.allow and at.deny (see the example after this list)
- ftp: Disallowed users are listed in /etc/ftpusers
- SSH: See /etc/ssh/sshd_config (look for AllowUsers and DenyUsers entries) and /etc/hosts.allow
- General inetd services: /etc/hosts.allow and hosts.deny
- File system groups: /etc/group, and use file and directory permissions accordingly.
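A minimal sketch of restricting cron and at to root, as mentioned in the list above (the allow files simply list the permitted users, one per line):
echo root > /etc/cron.allow
echo root > /etc/at.allow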
- Setup a syslog server to receive messages from all
clients.
- To configure syslog to send all messages to the syslog server, and keep a local copy too:
1. Add an entry to /etc/hosts for the "loghost"
192.168.136.3    loghost    loghost.akadia.ch
2. Add the following as the first line of /etc/syslog.conf (important: the two fields are separated by a TAB):
*.* @loghost
3. Restart syslog: either reboot, or kill -1
SYSLOG_PID
- Mount filesystems that do not contain system programs, such as /var, with nosuid (in /etc/fstab).
- What SUID files are on the system? The find command can be used to
list all
SUID or SGID files:
find / -perm -u+s -type f -ls
find / -perm -g+s -type f -ls
Why bother touching them? Because they are often a source of weaknesses, and if problems are found in the future, we won't be exposed and we won't have to rush and install patches.
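A hedged follow-up: it can be worth saving the list so that future runs can be diffed against a known-good baseline (the file name is only a suggestion):
find / -perm -u+s -type f -ls > /root/suid-baseline.$(date +%Y%m%d)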
- rlogin/rsh/rcp are not needed, since we use SSH. Let's restrict access to root & remove the SUID bit:
chmod 700 /usr/bin/rcp /usr/bin/rlogin /usr/bin/rsh
- We only allow root to use cron and at, and to bring network interfaces up or down:
chmod 700 /usr/bin/crontab /usr/bin/at /usr/sbin/usernetctl
- Users don't need to be able to mount any devices (non-root users can't use floppies or CDs after this):
chmod ug-s /bin/umount /bin/mount
- On sensitive systems, only root needs access to account
management
and network debugging
chmod ug-s /usr/bin/chage /usr/bin/gpasswd
chmod ug-s /usr/bin/chsh /usr/bin/chfn
chmod ug-s /usr/sbin/traceroute /bin/ping
- If no SUID perl scripts are needed, we can remove the SUID bit
from perl itself:
chmod ug-s /usr/bin/suidperl /usr/bin/sperl*
- Delete any indications of the system version from
/etc/issue and put in a warning about unauthorised use of the system.
mv /etc/issue /etc/issue.orig
- The other warning banner, /etc/motd, is empty on RH7, but the same applies.
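A minimal sketch of a replacement banner (the wording is only an example):
cat > /etc/issue <<'EOF'
Authorised users only. All activity may be monitored and reported.
EOF
cp /etc/issue /etc/motd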
There are both free and commercial versions. Red Hat x86 is the only Linux for which the commercial version is officially available (and it is included for free in RH7). The free version can be tricky to get working correctly and has a few bugs; source code is provided. The commercial version is a bit pricey (for non-Linux users), the reports are too verbose (you may need filter scripts), and more configuration examples should be provided. It is more stable than the free version, also runs on UNIX and NT, and offers enhanced security by cryptographically signing policy and configuration files. Support (even when paid for) is not great.
PGP can also be used, by signing the files to be protected (creating lots of signature files) and then writing a script to check the validity of the signatures. This will not catch permission, link, inode or modification date changes though.
MD5 signatures could also be used in a similar way, but the list of
MD5 signatures should not be stored on the system being monitored, unless it is PGP
signed or encrypted.
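A minimal sketch of the MD5 approach with standard tools; the paths are examples, and the resulting list must be kept off the monitored host or be signed:
# build the baseline
find /bin /sbin /usr/bin /usr/sbin -type f | xargs md5sum > /tmp/md5.baseline

# later: verify against the baseline and show only the differences
md5sum -c /tmp/md5.baseline | grep -v ': OK$'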
Uncontrolled growth of log files may fill up important filesystems like /var. One often overlooked feature under Linux is logrotate, which belongs to the standard Red Hat Linux distribution.
Logrotate is designed to ease
administration of systems that generate large numbers of log files. It allows automatic
rotation, compression, removal, and mailing of log files. Each log file may be handled
daily, weekly, monthly, or when it grows too large. Normally, logrotate is run as a daily
cron job. It will not modify a log multiple times in one day unless the criterion for that log is based on the log's size and logrotate is being run multiple times each day, or unless the -f or --force option is used.
Any number of config files may be given
on the command line. Later config files may override the options given in earlier files,
so the order in which the logrotate config files are listed is important. Normally, a
single config file which includes any other config files which are needed should be
used.
The configuration file for logrotate
can be found in /etc/logrotate.conf
# rotate log files daily
daily

# keep 1 day of backlogs
rotate 1

# send errors to
errors martin dot zahn at akadia dot ch

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
compress

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d

# no packages own lastlog or wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    create 0664 root utmp
    rotate 1
}
Included Files from /etc/logrotate.d (e.g. apache)

/var/log/httpd/akadia.log {
    missingok
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}

/var/log/httpd/error.log {
    missingok
    postrotate
        /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
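To check a configuration without touching any logs, logrotate can be run in debug mode, and a rotation can be forced for testing:
# dry run: show what would be done
logrotate -d /etc/logrotate.conf

# force a rotation now, regardless of the rotation criteria
logrotate -f /etc/logrotate.conf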
For more information type the following
commands:
# man logrotate
# man logrotate | col -b > logrotate.txt
# man -t logrotate > logrotate.ps
Overview
A driver disk adds support for hardware that is
not otherwise supported by the installation program. The driver disk could be
produced by Red Hat, it could be a disk you make yourself from drivers found on the
Internet, or it could be a disk that a hardware vendor (e.g. Adaptec) includes with a
piece of hardware.
There is no need to use a driver disk unless you
need a particular device in order to install Red Hat Linux. Driver disks are most often
used for non-standard or very new CD-ROM drives, SCSI adapters, or NICs. These are
the only devices used during the installation that might require drivers not included on
the Red Hat Linux CD-ROMs.
How to obtain a Driver Disk for SCSI Ultra 320
Support
Download the image file from www.adaptec.com (e.g. aic79xx-1.3.5-i686-rh80.img) for Red Hat 8.0, or from http://people.freebsd.org/~gibbs/linux/DUD/aic79xx/ for other Linux distributions.
Creating a Driver Disk from the Image File
If you have the image that you need to write to a
floppy disk, this can be done from within DOS or Red Hat Linux.
To create a driver disk from a driver disk image using
Red Hat Linux:
Insert a blank, formatted floppy disk into the first
floppy drive. From the same directory containing the driver disk image do as
root:
dd if=aic79xx-1.3.5-i686-rh80.img of=/dev/fd0
To create a driver disk from a driver disk image using
DOS:
Insert a blank, formatted floppy disk into the a:
drive. From the same directory containing the driver disk image do:
rawrite aic79xx-1.3.5-i686-rh80.img a:
Using a Driver Disk During Installation
Having a driver disk is not enough; you must
specifically tell the Red Hat Linux installation program to load that driver disk and use
it during the installation process.
A driver disk is different from a boot disk. If you
require a boot disk to begin the Red Hat Linux installation, you will still need to
create that floppy and boot from it before using the driver disk.
Once you have created your driver disk, begin the
installation process by booting from the Red Hat Linux CD-ROM 1 (or the installation boot
disk). At the boot: prompt, enter:
linux dd
The Red Hat Linux installation program will ask you
to insert the driver disk. Once the driver disk is read by the installer, it can apply
those drivers to hardware discovered on your system later in the installation
process.
The default Apache access for <Directory /> is Allow from All. This means that Apache will serve any file mapped from a URL. It is recommended to limit the directories that can be accessed. The best way to do this is to stop Apache accessing any directory at all, and then to enable only the directories we want it to be able to access.
We can do this with the following Directory directive in the httpd.conf file:
# First, we configure the "default" to be a very
# restrictive set of permissions. Also, for security,
# we disable indexes globally.
<Directory />
    Options None
    AllowOverride None
    <IfModule mod_access.c>
        Order deny,allow
        Deny from all
    </IfModule>
</Directory>
This tells Apache to stop access to all directories
below /. The next task is to allow access to the document root.
DocumentRoot /home/webmail

<Directory /home/webmail>
    Options -All -MultiViews +FollowSymLinks +Indexes
    AllowOverride None
    <IfModule mod_access.c>
        Order allow,deny
        Allow from all
    </IfModule>
</Directory>
First we specify where the document root is. Then we use the
Directory directive to tell Apache that it is accessible by everyone.
Finally we need to make sure that the .htaccess files cannot be
viewed.
# AccessFileName: The name of the file to look for in
# each directory for access control information
AccessFileName .htaccess

#
# The following lines prevent .htaccess files from being viewed by
# Web clients. Since .htaccess files often contain authorization
# information, access is disallowed for security reasons. Comment
# these lines out if you want Web visitors to see the contents of
# .htaccess files. If you change the AccessFileName directive above,
# be sure to make the corresponding changes here.
#
# Also, folks tend to use names such as .htpasswd for password
# files, so this will protect those as well.
<IfModule mod_access.c>
    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
    </Files>
</IfModule>
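After changing httpd.conf it is worth verifying the syntax and reloading Apache; a minimal sketch using the standard apachectl wrapper:
# check the configuration for syntax errors
apachectl configtest

# restart without dropping active connections
apachectl graceful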
These are only very basic hardenings; more can be found in the book "Hardening Apache".
In the Linux ext2 and ext3 filesystems, there are a
number of additional file attributes that are available beyond the standard bits
accessible through chmod. If you haven't seen it already,
take a look at the manpages for chattr and its
companion, lsattr.
One of the very useful attributes is i, the immutable flag, set with chattr +i. With this bit set, attempts to unlink, rename, overwrite, or append to the file are forbidden. Even making a hard link is denied (so you can't make a hard link and then edit the link). And having root privileges makes no difference when immutable is in effect:
# lsattr immutable
------------- immutable
# chattr +i immutable
# lsattr immutable
----i-------- immutable
Now we have a read-only file. First try to delete the whole directory:
# cd ..
# rm -rf test
rm: cannot remove 'test/immutable': Operation not
permitted
Now try to delete the file itself (as root):
# cd test
# rm immutable
rm: remove write-protected regular empty file 'immutable'?
y
rm: cannot remove `immutable': Operation not permitted
Let's try emptying the file instead of deleting it:
# cp /dev/null immutable
cp: cannot create regular file 'immutable': Permission
denied
# > immutable
-bash: immutable: Permission denied
# Try to create a hard link
# ln immutable link-immutable
ln: creating hard link 'link-immutable' to
'immutable': Operation not permitted
Clear the flag:
# chattr -i immutable
# rm immutable
This could be very useful for adding an extra security step on
files you know you'll never want to change. While little will help you on a box that has
been rooted, immutable files probably aren't vulnerable to simple overwrite attacks from
other processes, even if they are owned by root.
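A related, hedged example: ext2/ext3 also offer an append-only attribute, a, which can be useful for log files because existing contents can no longer be overwritten or truncated, only appended to:
# chattr +a /var/log/secure
# echo test > /var/log/secure     # fails: truncating is not permitted
# echo test >> /var/log/secure    # appending is still allowed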
Overview
Accessing MS/Windows file servers from Linux is possible using the SMBFS filesystem, which is part of the Samba package. The Linux kernel must be compiled with SMBFS support. With smbmount you can mount a Windows share. It is usually invoked as mount.smbfs by the mount command when using the "-t smbfs" option.
Example
The following example shows how you can mount the share \\XEON\Akadia (on a W2K server) under the Linux mount point /mnt/xeon/Akadia.
First verify which shares are accessible from Linux:
zahn@linux> smbclient --user=zahn --list=xeon
Password:
Domain=[XEON] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]
        Sharename      Type      Comment
        ---------      ----      -------
        E$             Disk      Default share
        Photos         Disk
        IPC$           IPC       Remote IPC
        D$             Disk      Default share
        print$         Disk      Printer Drivers
        Archive        Disk
        Users          Disk
        Akadia         Disk
        F$             Disk      Default share
        ADMIN$         Disk      Remote Admin
        C$             Disk      Default share
        HPColorL       Printer   HP LaserJet 2500 PCL 6
        Pmz            Disk
        Vorlagen       Disk

Domain=[XEON] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

        Server               Comment
        ---------            -------

        Workgroup            Master
        ---------            -------
As you can see, the Share Akadia is accessible.
Mount Share with smbmount
root@linux> mkdir -p /mnt/xeon/Akadia
root@linux> smbmount //xeon/Akadia /mnt/xeon/Akadia/ \
            -o username=zahn,password=mypassword
Password:
root@linux> df -k
Filesystem     1K-blocks     Used    Avail Use% Mounted on
/dev/sda7       64342248  2885788 58188040   5% /
/dev/sda1         489992    11448   453244   3% /boot
/dev/sda6       10072456  4537756  5023032  48% /var
none              253724        0   253724   0% /var/tempfs
//xeon/Akadia   25671680  2201088 23470592   9% /mnt/xeon/Akadia
Note that the given password is the Windows password. Check that this password is not shown when listing the Linux processes!
root@linux> ps -ef
......
root smbmount ..... -o username zahn password XXXXXXXX
root [smbiod]
......
Permanently mount with /etc/fstab
If you want to mount the Windows Share when the
Linux System boots do the following steps:
- Create the File $HOME/.smbpassword
username = zahn
password = mypassword
- Set correct Access Rights
chmod 400 $HOME/.smbpassword
- Add an Entry in /etc/fstab (All
in one line)
//xeon/Akadia /mnt/xeon/Akadia smbfs \
credentials=/home/zahn/.smbpassword, \
workgroup=AKADIA,uid=500,gid=500 0 0
- Mount the Share
mount -a
The files in the Share are now mapped to the Linux User with
UID=500, GID=500 which is zahn. The Share is mounted with mount.smbfs. You have read/write access to this Share.
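To unmount the share again, use smbumount (the SMBFS counterpart of smbmount) or plain umount as root:
root@linux> smbumount /mnt/xeon/Akadia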
The more traditional traceroute sends out either UDP or ICMP ECHO
packets with a TTL of one, and increments the TTL until the destination has been reached.
By printing the gateways that generate ICMP time exceeded messages along the way, it is
able to determine the path packets are taking to reach the destination.
The problem is that with the widespread use of firewalls on the
modern Internet, many of the packets that traceroute sends out end up being
filtered, making it impossible to completely trace the path to the destination.
However, in many cases, these firewalls will permit inbound TCP packets to specific ports
that hosts sitting behind the firewall are listening for connections on. By sending out
TCP SYN packets instead of UDP or ICMP ECHO packets, tcptraceroute is able to bypass the
most common firewall filters.
It is worth noting that tcptraceroute never completely establishes
a TCP connection with the destination host. If the host is not listening for incoming
connections, it will respond with an RST indicating that the port is closed. If the host
instead responds with a SYN|ACK, the port is known to be open, and an RST is sent by the
kernel tcptraceroute is running on to tear down the connection without completing the
three-way handshake. This is the same half-open scanning technique that nmap uses when
passed the -sS flag.
Download
Download it from http://michael.toren.net/code/tcptraceroute/
Manual Page
tcptraceroute [-nNFSAE] [ -i interface ] [ -f first
ttl ]
[ -l length ] [ -q number of queries ] [ -t tos ]
[ -m max ttl ] [ -p source port ] [ -s source address ]
[ -w wait time ] host [ destination port ] [ length ]
-n   Display numeric output, rather than doing a reverse DNS lookup for each hop. By
     default, reverse lookups are never attempted on RFC1918 address space, regardless
     of the -n flag.
-N   Perform a reverse DNS lookup for each hop, including RFC1918 addresses.
-f   Set the initial TTL used in the first outgoing packet. The default is 1.
-m   Set the maximum TTL used in outgoing packets. The default is 30.
-p   Use the specified local TCP port in outgoing packets. The default is to obtain a
     free port from the kernel using bind(2). Unlike with traditional traceroute(8),
     this number will not increase with each hop.
-s   Set the source address for outgoing packets. See also the -i flag.
-i   Use the specified interface for outgoing packets.
-q   Set the number of probes to be sent to each hop. The default is 3.
-w   Set the timeout, in seconds, to wait for a response for each probe. The default is 3.
-S   Set the TCP SYN flag in outgoing packets. This is the default, if neither -S nor -A
     is specified.
-A   Set the TCP ACK flag in outgoing packets. By doing so, it is possible to trace
     through stateless firewalls which permit outgoing TCP connections.
-E   Send ECN SYN packets, as described in RFC2481.
-t   Set the IP TOS (type of service) to be used in outgoing packets. The default is not
     to set any TOS.
-F   Set the IP "don't fragment" bit in outgoing packets.
-l   Set the total packet length to be used in outgoing packets. If the length is greater
     than the minimum size required to assemble the necessary probe packet headers, this
     value is automatically increased.
-d   Enable debugging, which may or may not be useful.
Examples
To trace the path to a web server listening for connections on port
80:
tcptraceroute www.akadia.com
Selected device eth0, address 192.168.138.28,
port 32852 for outgoing packets
Tracing the path to www.akadia.com (62.2.210.215) on TCP port 80, 30 hops max
1 192.168.138.1 (192.168.138.1) 0.928 ms 0.876 ms 0.874 ms
2 bwadf2zhb.bluewin.ch (195.186.252.131) 12.416 ms 11.497 ms 13.179 ms
3 net300.bwrt1zhb.bluewin.ch (195.186.120.129) 13.353 ms 13.322 ms 13.486 ms
4 195.186.120.177 (195.186.120.177) 13.201 ms 14.332 ms 14.289 ms
5 net125.bwrt1inb.bluewin.ch (195.186.125.71) 12.125 ms 13.699 ms 13.334 ms
6 if114.ip-plus.bluewin.ch (195.186.0.114) 13.478 ms 13.357 ms 13.334 ms
7 i79tix-005-pos2-0.bb.ip-plus.net (138.187.130.163) 14.290 ms 12.638 ms 13.507 ms
8 cabcom-00-ser0.ce.ip-plus.net (164.128.22.10) 13.247 ms 13.990 ms 14.962 ms
9 tengig-2-4.mlrZHZ006.gw.cablecom.net (62.2.33.2) 14.561 ms 14.545 ms 14.955 ms
10 62-2-210-210.webcom.cablecom.ch (62.2.210.210) 25.312 ms 26.091 ms 26.373 ms
11 62-2-210-215.webcom.cablecom.ch (62.2.210.215) [open] 23.411 ms 24.017 ms 26.060 ms
To trace the path to a mail server listening for connections on
port 25:
tcptraceroute mail.akadia.com 25
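A hedged variation: if a stateless packet filter drops incoming SYNs but lets through packets that look like replies to outgoing connections, the -A flag described above may still get the probes through:
tcptraceroute -A www.akadia.com 80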
Expect is a tool primarily for automating interactive applications
such as telnet, ftp, passwd, fsck, rlogin, tip, etc.
Expect really makes this stuff trivial. Expect is also useful for testing these same
applications. Expect is described in many books, articles, papers, and FAQs. There is an
entire book on it available from O'Reilly.
Get Started With Expect
The three commands send, expect, and spawn are the building blocks of Expect. The send command sends strings to a process, the expect command waits for strings from a process, and the spawn command starts a process.
The send Command
- If Expect is already interacting with a program, the string will be sent to that program. But initially, send will send to the standard output. Here is what happens when I type this to the Expect interpreter interactively:
expect
expect1.1> send "hello world"
hello worldexpect1.2> exit
- The send command does not format the string in any way, so after it is printed the next Expect prompt gets appended to it without any space. To make the prompt appear on a different line, put a newline character at the end of the string. A newline is represented by "\n". The exit command gets you out of the Expect interpreter.
expect1.1> send "hello world\n"
hello world
expect1.2> exit
- If these commands are stored in a file, speak.exp, the script can be executed from the UNIX command line:
#!/usr/bin/expect
send "hello world\n"

./speak.exp
The expect Command
- The expect command waits for a response, usually from a process. Expect can wait for a specific string or any string that matches a given pattern. Like send, the expect command initially waits for characters from the keyboard. To see how the expect command works, create a file response.exp that reads:
#!/usr/bin/expect
expect "hi\n"
send "hello there!\n"
- When I make response.exp executable and run it, the interaction looks like this:
./response.exp
hi
hello there!
- If you get an error like couldn't read file " ": No such file or directory, it may be because there are non-printable characters in your file. This is true if you do cut-and-paste from Netscape to your file. To solve this problem, try deleting trailing spaces at the end of each command line (even if there seems to be nothing there) in the script and follow the above steps again.
- If expect reads characters that do not match the expected string, it continues waiting for more characters. If I had typed hello instead of hi followed by a return, expect would continue to wait for "hi\n". Finding unexpected data in the input does not bother expect. It keeps looking until it finds something that matches. If no input is given, the expect command eventually times out and returns. By default, after 10 seconds expect gives up waiting for input that matches the pattern. This default value can be changed by setting the variable timeout using the Tcl set command. For example, the following command sets the timeout to 60 seconds:
set timeout 60
- A timeout of -1 signifies that expect should wait forever and a timeout of 0 indicates that expect should not wait at all.
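A small, hedged illustration of the timeout in action: expect also accepts the special pattern timeout as part of a pattern-action list, so a script can react when nothing matching arrives in time (the messages are just examples):
#!/usr/bin/expect
set timeout 5
expect {
    "hi\n"  { send "hello there!\n" }
    timeout { send_user "gave up waiting after 5 seconds\n" }
}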
- To prevent expect from matching unexpected data, expect patterns can include regular expressions. The caret ^ is a special character that only matches the beginning of the input; it cannot skip over characters to find a valid match. For example, the pattern ^hi matches if I enter "hiccup" but not if I enter "sushi". The dollar sign ($) is another special character. It matches the end of the data. The pattern hi$ matches if I enter "sushi" but not if I enter "hiccup". And the pattern ^hi$ matches neither "sushi" nor "hiccup". It matches "hi" and nothing else.
- Patterns that use ^ or $ are said to be anchored. When patterns are not anchored, patterns match beginning at the earliest possible position in the string.
- Expect also allows an association between a command and a pattern. The association is made by listing the action (also known as the command) immediately after the pattern in the expect command itself. Here is an example of pattern-action pairs:
#!/usr/bin/expect -f
expect "hi" { send "You said hi\n" } \
"hello" { send "Hello yourself\n" } \
"bye" { send "Good-bye cruel world\n" }
- This command looks for "hi", "hello", and "bye". If any of the three patterns are found, the action immediately following it gets executed. If there is no match and the default timeout expires, expect stops waiting and execution continues with the next command in the script.
The spawn Command
- The spawn command starts another program. The first argument of the spawn command is the name of a program to start. The remaining arguments are passed to the program. For example:
spawn ftp ftp.uu.net
- This command spawns an ftp process, and ftp.uu.net is the argument to the ftp process.
Example: Anonymous FTP
- To partially automate an anonymous FTP session, create a file aftp.exp that looks like this:
#!/usr/bin/expect -f
spawn ftp $argv
expect "Name"
send "anonymous\r"
expect "Password:"
send "martin dot zahn at akadia dot ch\r"
interact
./aftp.exp ftp.uu.net
spawn ftp ftp.uu.net
Connected to ftp.uu.net.
220 FTP server ready.
Name (ftp.uu.net:root): anonymous
530 Please login with USER and PASS.
SSL not available
331 Guest login ok, send your complete e-mail address as password.
Password:
230-
230- Welcome to the UUNET archive.
230- A service of UUNET Technologies Inc, Falls Church, Virginia
230- For information about UUNET, call +1 703 206 5600, or see the files
230- in /uunet-info
230-
230- Please see http://www.us.uu.net/support/usepolicy/ for Acceptable
230- Use Policy
230-
230- Access is allowed all day. Current time is Tue Jun 28 13:18:19 2005 GMT.
230-
230- All transfers are logged with your host name and email address.
230- If you don't like this policy, disconnect now!
230-
230- If your FTP client crashes or hangs shortly after login, try using a
230- dash (-) as the first character of your password. This will turn off
230- the informational messages which may be confusing your ftp client.
230-
230-
230-Please read the file /info/README.ftp
230- it was last modified on Fri Jun 29 00:54:02 2001 - 1459 days ago
230-Please read the file /info/README
230- it was last modified on Fri Jun 29 00:54:02 2001 - 1459 days ago
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
Notice that each send command in the script ends
with \r and not \n (\r denotes a return character while \n denotes a linefeed character).
Interact is an Expect command that turns control from the
script over to you. When this command is executed, Expect stops reading commands from the
script and instead begins reading from the keyboard.
Example: Download RFC
- With the script ftp-rfc you can download an RFC article:
#!/bin/sh
# \
exec expect "$0" ${1+"$@"}
# ftp-rfc <rfc-number>
# ftp-rfc -index
# retrieves an rfc (or the index) from uunet
exp_version -exit 5.0
if {$argc!=1} {
send_user "usage: ftp-rfc \[#] \[-index]\n"
exit
}
set file "rfc$argv.Z"
set timeout 60
spawn ftp ftp.uu.net
expect "Name*:"
send "anonymous\r"
expect "Password:"
send "martin.zahn@akadia.ch\r"
expect "ftp>"
send "binary\r"
expect "ftp>"
send "cd inet/rfc\r"
expect "550*ftp>" exit "250*ftp>"
send "get $file\r"
expect "550*ftp>" exit "200*226*ftp>"
close
wait
send_user "\nuncompressing file - wait...\n"
exec uncompress $file
Example: Telnet Session
#!/usr/bin/expect
set timeout 20
set name [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]
spawn telnet $name
expect "login:"
send "$user\r"
expect "Password:"
send "$password\r"
interact
./logon.exp telnet-host name password
Example: Talking to a Mail Server
#!/usr/bin/expect
set timeout 20
set mailserver [lindex $argv 0]
spawn telnet $mailserver 25
expect "*Postfix*"
send "helo $mailserver\r"
expect "*250*"
send "mail from: <martin.zahn@akadia.ch>\r"
expect "*250*"
send "rcpt to: <martin dot zahn at akadia dot ch>\r"
expect "*250*"
send "data\r"
expect "*354*"
send "hello\n.\n"
expect "*250*"
send "quit\r"
./smtp mail-server
spawn telnet rabbit.akadia.com 25
Trying 62.2.210.211...
Connected to rabbit.akadia.com.
Escape character is '^]'.
220 rabbit.akadia.com ESMTP Postfix
helo rabbit.akadia.com
250 rabbit.akadia.com
mail from: <martin.zahn@akadia.ch>
250 Ok
rcpt to: <martin dot zahn at akadia dot ch>
250 Ok
data
354 End data with <CR><LF>.<CR><LF>
hello
.
250 Ok: queued as 9B0BD3BA3D7
Example: Check if Mail Server is up and running
Script: check_smtp.exp
#!/usr/bin/expect
set timeout 2
spawn telnet mailserver 25
expect "220 mailserver ESMTP Postfix"
send "quit\r"
In the calling script:
# Check if SMTP is up
check_smtp.exp 1>/dev/null 2>&1
if [ $? = 0 ]
then
  echo "OK, running"
else
  echo "NOT OK, probably down"
fi
fi