Setting Up a Dial-Up Modem in Ubuntu Linux

Although high-speed DSL, cable modem, and wireless LAN hardware have become
widely available, there may still be times when a phone line and a modem are your
only way to get on the Internet. Linux offers both graphical and command line tools
for configuring and communicating with modems.

As with other network connections in Ubuntu, dial-up modem connections can be configured using the Network Configuration window. Most external serial modems will
work with Linux without any special configuration. Most hardware PCI modems will
also work. However, many software modems (also sometimes called Winmodems)
often will not work in Linux (although some can be configured with special drivers,
and are therefore referred to as Linmodems).

Instead of describing the contortions you must go through to get some Winmodems
working in Linux, we recommend that you purchase either a modem that connects
to an external serial port or a hardware modem. If you want to try configuring your
Winmodem yourself, refer to the Linmodems site.

If you are not able to get your modem working from the Network Configuration window, there are some commands you can try. First, try the wvdialconf command to scan any modems connected to your serial ports and create a configuration file:

$ sudo wvdialconf /etc/wvdial.conf Scan serial ports, create config file
Scanning your serial ports for a modem.

ttyS0: ATQ0 V1 E1 -- OK
ttyS0: ATQ0 V1 E1 Z -- OK

In this example, a modem was found on the COM1 port (serial port /dev/ttyS0).
Further output should show which speeds are available and various features that are
supported. The configuration information that results is, in this case, written to the file /etc/wvdial.conf. Here’s an example of what that file might look like:

[Dialer Defaults]
Modem = /dev/ttyS0
Baud = 115200
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 S11=55 +FCLASS=0
;Phone =
;Username =
;Password =

Open wvdial.conf in a text editor and remove the comment characters (;) from in
front of the Phone, Username, and Password entries. Then add the phone number you need to dial to reach your ISP’s bank of dial-in modems. Next add the user name
and password you need to log in to that modem connection.
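With those edits made, the file might look something like the following (the phone number and credentials shown here are placeholders, not values from your ISP):

```
[Dialer Defaults]
Modem = /dev/ttyS0
Baud = 115200
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 S11=55 +FCLASS=0
Phone = 5551212
Username = yourname
Password = yourpassword
```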

To use the dial-up entry you just configured, you can use the wvdial command:

$ sudo wvdial Dial out and connect to your ISP
--> WvDial: Internet dialer version 1.54.0
--> Initializing modem.
--> Sending: ATZ
--> Modem initialized.

After the connection is established between the two modems, a Point-to-Point Protocol
(PPP) connection is created between the two points. After that, you should be able to
start communicating over the Internet.

If you find that you are not able to communicate with your modem, there are some ways of querying your computer’s serial ports to find out what is going wrong. The first thing to check at the low level is that your /dev/ttyS? device talks to the hardware serial port.

By default, the Linux system knows of four serial ports: COM1 (/dev/ttyS0),
COM2 (/dev/ttyS1), COM3 (/dev/ttyS2), and COM4 (/dev/ttyS3). To see a
listing of those serial ports, use the setserial command (from the setserial package)
with the -g option, as follows:

$ setserial -g /dev/ttyS0 /dev/ttyS1 /dev/ttyS2 /dev/ttyS3 See port info
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
/dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
/dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3

To see more detailed information on your serial ports, use the -a option:

$ setserial -a /dev/ttyS0 View serial port details
/dev/ttyS0, Line 0, UART: 16550A, Port: 0x03f8, IRQ: 4
Baud_base: 115200, close_delay: 50, divisor: 0
closing_wait: 3000
Flags: spd_normal skip_test

$ setserial -ga /dev/ttyS0 /dev/ttyS1 Check multiple port details

The setserial command can also be used to re-map physical serial ports to logical
/dev/ttyS? devices. Unless you’re running kernel 2.2 with a jumper-configured ISA
serial card, you won’t need this. Modern Linux systems running on modern hardware
make COM1 and COM2 serial ports work right out of the box, so we won’t cover these
options. The stty command is another command you can use to work with serial ports. To view the current settings for the COM1 port (ttyS0), type the following:

$ stty -F /dev/ttyS0 -a View tty settings for serial port
speed 9600 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S;
susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

The dialer will typically change these settings as needed, although you can use the
stty command to change these settings as well. Refer to the stty man page (man
stty) for descriptions of any of the tty settings.

You can talk directly to the modem or other serial devices using the minicom command (from the minicom package). In fact, it can be useful to troubleshoot dialing by issuing AT commands to the modem using minicom. The first time you run minicom, use -s to enter setup mode:

$ minicom -s Create your modem settings
| Filenames and paths |
| File transfer protocols |
| Serial port setup |
| Modem and dialing |
| Screen and keyboard |
| Save setup as dfl |
| Save setup as.. |
| Exit |
| Exit from Minicom |

Let’s forget about modems for a moment and assume you want to use COM1 to connect
to a Cisco device at 9600 baud. Use the arrow keys to navigate to Serial port setup
and press Enter to select it. Press a to edit the serial device and change that device to /dev/ttyS0. Next, press e for port settings and when the Comm Parameters screen appears, press e for 9600 baud. To toggle off hardware flow control, press f. Press Enter to return to the configuration screen.

To change modem parameters, select Modem and dialing. Then clear the init, reset,
connect, and hangup strings (which are not appropriate for the Cisco device we’re
configuring). When that’s done, select Save setup as dfl (default) from the configuration screen and choose Exit (not Exit from Minicom). You’re now in the minicom terminal.

To learn more about how to use minicom, press Ctrl+a, then z for help. When you are done, press Ctrl+a, then x to exit from minicom.

WARNING! Do not run minicom inside screen with the default key bindings!
Otherwise, Ctrl+a gets intercepted by screen! If you do so by mistake, go to
another screen window and type: killall minicom.

How to troubleshoot wireless networking in Ubuntu Linux

If you need help determining exactly what wireless card you have, type the following:

$ lspci | grep -i wireless Search for wireless PCI cards
01:09.0 Network controller: Broadcom Corporation BCM4306 802.11b/g
Wireless LAN Controller (rev 03)

Assuming that your wireless card is up and running, there are some useful commands
in the wireless-tools package you can use to view and change settings for your wireless cards. In particular, the iwconfig command can help you work with your wireless LAN interfaces. The following scans your network interfaces for supported wireless cards and lists their current settings:

$ iwconfig
eth0 no wireless extensions.
eth1 IEEE 802.11-DS ESSID:"" Nickname:"BHARATHVN"
Mode:Managed Frequency:2.457 GHz Access Point: Not-Associated
Bit Rate:11 Mb/s Tx-Power=15 dBm Sensitivity:1/3
Retry limit:4 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off

Wireless interfaces may be named wlanX or ethX, depending on the hardware and
driver used. You may be able to obtain more information after setting the link up on
the wireless interface:

$ ip link set eth1 up

$ iwconfig eth1
eth1 IEEE 802.11-DS ESSID:"" Nickname:"BHARATHVN"
Mode:Managed Frequency:2.457 GHz Access Point: None
Bit Rate:11 Mb/s Tx-Power=15 dBm Sensitivity:1/3
Retry limit:4 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
Link Quality=0/92 Signal level=134/153 Noise level=134/153
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

The settings just shown can be modified in a lot of ways. Here are some ways to use
iwconfig to modify your wireless interface settings. In the following examples, we operate on a wireless interface named wlan0. These operations may or may not be supported, depending on which wireless card and driver you are using.

$ sudo iwconfig wlan0 essid “MyWireless” Set essid to MyWireless

$ sudo iwconfig wlan0 channel 3 Set the channel to 3

$ sudo iwconfig wlan0 mode Ad-Hoc Change from Managed to Ad-Hoc mode

$ sudo iwconfig wlan0 ap any Use any access point available

$ sudo iwconfig wlan0 sens -50 Set sensitivity to -50

$ sudo iwconfig wlan0 retry 20 Set MAC retransmissions to 20

$ sudo iwconfig wlan0 key 1234-5555-66 Set encryption key to 1234-5555-66

The essid is sometimes called the Network Name or Domain ID. Use it as the common
name to identify your wireless network. Setting the channel lets your wireless
LAN operate on that specific channel.

With Ad-Hoc mode, the network is composed of only interconnected clients with no central access point. In Managed/Infrastructure mode, by setting ap to a specific MAC address, you can force the card to connect to the access point at that address, or you can set ap to any and allow connections to any access point.

If you have performance problems, try adjusting the sensitivity (sens) to either a negative value (which represents dBm) or a positive value (which is either a percentage or a sensitivity value set by the vendor). If you get retransmission failures, you can increase the retry value so your card can send more packets before failing.

Use the key option to set an encryption key. You can enter hexadecimal digits (XXXXXXXX-XXXX-XXXX or XXXXXXXX). By adding an s: in front of the key, you can enter an ASCII string as the key (as in s:My927pwd).

Using lspci to Poke Hardware in Linux

If you just generally want to find out more about your computer’s hardware, you can
use the following commands. The lspci command lists information about PCI devices on
your computer:

$ lspci List PCI hardware items
00:00.0 Host bridge: VIA Technologies, Inc. VT8375 [KM266/KL266] Host Bridge
00:01.0 PCI bridge: VIA Technologies, Inc. VT8633 [Apollo Pro266 AGP]
00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1
00:11.0 ISA bridge: VIA Technologies, Inc. VT8235 ISA Bridge
00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II]
01:00.0 VGA compatible controller: S3 Inc. VT8375 [ProSavage8 KM266/KL266]

$ lspci -v List PCI hardware items with more details

$ lspci -vv List PCI hardware items with even more details

Using the dmidecode command, you can display information about your computer’s hardware components, including information about what features are supported in the BIOS. Here is an example:

$ sudo dmidecode | less List hardware components

# dmidecode 2.7
SMBIOS 2.3 present.
32 structures occupying 919 bytes.
Table at 0x000F0100.
Handle 0x0000, DMI type 0, 20 bytes.
BIOS Information
Vendor: Award Software International, Inc.
Version: F2
Release Date: 10/06/2003
Processor Information
Socket Designation: Socket A
Type: Central Processor
Family: Athlon
Manufacturer: AMD
ID: 44 06 00 00 FF FB 83 01
Signature: Family 6, Model 4, Stepping 4
FPU (Floating-point unit on-chip)
VME (Virtual mode extension)
DE (Debugging extension)

You can use the hdparm command to view and change information relating to your hard disk.

Although it’s safe to view information about features of your hard
disks, it can potentially damage your hard disk to change some of those settings.
Here are some examples of printing information about your hard disks:

$ sudo hdparm /dev/sda Display hard disk settings (SATA or SCSI drive)
IO_support = 0 (default 16-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 30401/255/63, sectors = 488395055, start = 0

$ sudo hdparm /dev/hda Display hard disk settings (IDE drive)

$ sudo hdparm -I /dev/sda Display detailed drive information
ATA device, with non-removable media
Model Number: FUJITSU MPG3409AT E
Serial Number: VH06T190RV9W
Firmware Revision: 82C5

How to sync Hardware Clock in Linux using hwclock

Anyone can use the hwclock command to view hardware clock settings; however,
you must have root privileges to change those settings. To use hwclock to view the
current time from your computer’s hardware clock, type the following:

$ hwclock -r Display current hardware clock settings
Sun 12 Aug 2007 03:45:40 PM CDT -0.447403 seconds

Even if your hardware clock is set to UTC time, hwclock displays local time by default.

If your system time strays from your hardware clock (for example, if you tried some of the date commands shown previously), you can reset your system clock from your hardware clock as follows:

$ sudo hwclock --hctosys Reset system clock from hardware clock

Likewise if your hardware clock is set incorrectly (for example, if you replaced the
CMOS battery on your motherboard), you can set the hardware clock from your system clock as follows:

$ sudo hwclock --systohc Reset hardware clock from system clock

Over time your hardware clock can drift. Because the clock tends to drift the same
amount each day, hwclock can keep track of this drift time (which it does in the
/etc/adjtime file). You can adjust the hardware clock time based on the adjtime file
as follows:

$ sudo hwclock --adjust Adjust hardware clock time for drift

To set the hardware clock to a specific time, you can use the --set option. Here is an example:

$ sudo hwclock --set --date="3/18/08 18:22:00" Set clock to new date/time

In this example, the hardware clock is set to March 18, 2008 at 6:22 p.m. This update
does not immediately affect the system clock.

Setting up Linux Date and Time

The date command is the primary command-based interface for viewing and changing
date and time settings, if you are not having that done automatically with NTP.
Here are examples of date commands for displaying dates and times in different ways:

$ date Display current date, time and time zone
Sun Aug 12 01:26:50 CDT 2007

$ date '+%A %B %d %G' Display day, month, day of month, year
Sunday August 12 2007

$ date '+The date today is %F.' Add words to the date output
The date today is 2007-08-12

$ date --date='4 weeks' Display date four weeks from today
Sun Sep 9 10:51:18 CDT 2007

$ date --date='8 months 3 days' Display date 8 months 3 days from today
Tue Apr 15 10:59:44 CDT 2008

$ date --date='4 Jul' +%A Display day on which July 4 falls
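Because those results depend on the current time and locale, a repeatable way to experiment with format strings is to pin the time to a known instant with -u and -d (a sketch assuming GNU date; @0 is the start of the Unix epoch):

```shell
# Format a fixed instant so the output is predictable.
LC_ALL=C date -u -d @0 '+%A %B %d %G'            # Thursday January 01 1970
LC_ALL=C date -u -d @0 '+The date today is %F.'
```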

Although our primary interest in this section is time, since we are on the subject
of dates as well, the cal command is a quick way to display dates by month. Here are some examples:

$ cal Show current month calendar (today is highlighted)
August 2007
Su Mo Tu We Th Fr Sa
1 2 3 4
5 6 7 8 9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31

$ cal 2007 Show whole year’s calendar
January February March
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 1 2 3 1 2 3
7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
28 29 30 31 25 26 27 28 25 26 27 28 29 30 31

$ cal -j Show Julian calendar (numbered from January 1)
August 2007
Sun Mon Tue Wed Thu Fri Sat
213 214 215 216
217 218 219 220 221 222 223
224 225 226 227 228 229 230
231 232 233 234 235 236 237
238 239 240 241 242 243

The date command can also be used to change the system date and time. For example:

$ sudo date 081215212008 Set date/time to Aug. 12, 3:21PM, 2008
Tue Aug 12 15:21:00 CDT 2008

$ sudo date --set='+7 minutes' Set time to 7 minutes later
Sun Aug 12 11:49:33 CDT 2008

$ sudo date --set='-1 month' Set date/time to one month earlier
Sun Jul 12 11:50:20 CDT 2008

The next time you boot Ubuntu, the system time will be reset based on the value of
your hardware clock (or your NTP server, if NTP service is enabled). And the next
time you shut down, the hardware clock will be reset to the system time, in order to
preserve that time while the machine is powered off. To change the hardware clock,
you can use the hwclock command.

How to Monitor CPU Usage in Linux

An overburdened CPU is another obvious place to look for performance problems
on your system. The vmstat command, shown earlier, can produce basic statistics
relating to CPU usage (user activity, system activity, idle time, I/O wait time, and
time stolen from a virtual machine). The iostat command (from the sysstat package),
however, can generate more detailed reports of CPU utilization.

Here are two examples of using iostat to display a CPU utilization report:

$ iostat -c 3 CPU stats every 3 seconds (starting apps)
Linux 2.6.21-1.3194.fc7 (davinci) 08/10/2007
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 0.00 0.00 0.00 99.50
avg-cpu: %user %nice %system %iowait %steal %idle
28.71 0.00 5.45 18.32 0.00 47.52
avg-cpu: %user %nice %system %iowait %steal %idle
98.99 0.00 1.01 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
99.50 0.00 0.50 0.00 0.00 0.00

$ iostat -c 3 CPU stats every 3 seconds (copying files)
Linux 2.6.21-1.3194.fc7 (davinci) 08/10/2007
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 0.00 0.00 0.00 99.50
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 24.88 74.63 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 10.00 89.50 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 17.41 82.09 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 14.65 85.35 0.00 0.00

The first iostat example above starts with a quiet system; then several applications
are started up. You can see that most of the processing to start the applications is being done in user space. The second iostat example shows a case where several large
files are copied from one hard disk to another. The result is a high percentage of time spent at the system level, also known as kernel space (in this case, reading from and writing to disk partitions). Note that the file copies also result in a higher amount of time waiting for I/O requests to complete (%iowait).
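As a sketch of how you might post-process a saved iostat -c report, the following awk one-liner pulls the %iowait column (field 4) out of statistics lines in the format shown above; the sample text here is hypothetical:

```shell
# Print the %iowait field from each statistics line (lines that begin
# with a number rather than the avg-cpu header).
sample='avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.50    0.00   24.88   74.63    0.00    0.00'
printf '%s\n' "$sample" | awk '/^ *[0-9]/ { print "iowait:", $4 }'
```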

Here are examples using iostat to print CPU utilization reports with timestamps:

$ iostat -c -t Print time stamp with CPU report
Linux 2.6.21-1.3194.fc7 (davinci) 08/10/2007
Time: 9:28:03 AM
avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 0.00 0.00 0.00 99.50
$ iostat -c -t 2 10 Repeat every 2 seconds for 10 times

The dstat command (dstat package) is available as an alternative to iostat for viewing information about your CPU usage (as well as other performance-related items). One advantage of dstat over other tools is that it more precisely shows the units of measurement it is displaying (such as kilobytes or megabytes) and also uses colors to differentiate the data. Here is an example of dstat for displaying CPU information:

$ dstat -t -c 3 View CPU usage continuously with time stamps
---time--- ----total-cpu-usage----
__epoch___|usr sys idl wai hiq siq
1189727284| 0 0 100 0 0 0
1189727287| 1 0 99 0 0 0
1189727290| 3 0 97 0 0 0
1189727293| 0 0 100 0 0 0
1189727296| 5 0 95 0 0 0
1189727299| 1 0 99 0 0 0
1189727302| 3 0 97 0 0 0
1189727305| 0 0 100 0 0 0
1189727308| 3 0 96 0 1 0
1189727311| 1 0 99 0 0 0
1189727314| 0 0 100 0 0 0
1189727317| 0 0 100 0 0 0
1189727320| 1 0 99 0 0 0
1189727323| 5 0 95 0 0 0
1189727326| 3 0 97 0 0 0
1189727329| 3 0 97 0 0 0
1189727332| 2 0 98 0 0 0
1189727335| 5 0 95 0 0 0

In this example, the output includes date/time values based on the start of the
epoch (-t) for the CPU report (-c) that is produced every three seconds (3). This
report runs continuously until you stop it (Ctrl+c).
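If you want those epoch values as readable dates, GNU date can convert them with the @ syntax (a quick sketch using the first timestamp from the output above):

```shell
# Convert seconds-since-epoch to a human-readable UTC date.
date -u -d @1189727284 '+%Y-%m-%d %H:%M:%S UTC'   # 2007-09-13 23:48:04 UTC
```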

Using fuser to Find Running Processes in Linux

Another way to locate a particular process is by what the process is accessing. The
fuser command can be used to find which processes have a file or a socket open
at the moment. After the processes are found, fuser can be used to send signals to
those processes.

The fuser command is most useful for finding out if files are being held open
by processes on mounted file systems (such as local hard disks or Samba shares).
Finding those processes allows you to close them properly (or just kill them if you
must) so the file system can be unmounted cleanly.

Here are some examples of the fuser command for listing processes that have files open on a selected file system:

$ fuser -mauv /boot Verbose output of processes with /boot open
/boot/grub/: root 3853 ..c.. (root)bash
root 19760 ..c.. (root)bash
root 28171 F.c.. (root)vi
root 29252 ..c.. (root)man
root 29255 ..c.. (root)sh
root 29396 F.c.. (root)vi

The example just shown displays the process IDs for running processes associated with
/boot. They may have a file open, a shell open, or be a child process of a shell with the current directory in /boot. Specifically in this example, there are two bash shells open in the /boot file system, two vi commands with files open in /boot, and a man command running in /boot. The -m option indicates the mounted file system to check, -a shows all processes, -u indicates which user owns each process, and -v produces verbose output.

Resize LVM partition in Linux

You can also use the lvresize command if you want to take unneeded space from an existing LVM volume. As before, unmount the volume before resizing it and run e2fsck (to check the file system) and resize2fs (to resize it to the smaller size):

$ sudo umount /mnt/u1
$ sudo e2fsck -f /dev/vgusb/lvm_u1
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)

The filesystem size (according to the superblock) is 16384 blocks

The physical size of the device is 8192 blocks
Pass 1: Checking inodes, blocks, and sizes
/dev/vgusb/lvm_u1: 12/3072 files (8.3% non-contiguous), 3531/16384 blocks
$ sudo resize2fs /dev/vgusb/lvm_u1 12M Resize file system
resize2fs 1.38 (30-Jun-2005)
Resizing the filesystem on /dev/vgusb/lvm_u1 to 12288 (1k) blocks.

The filesystem on /dev/vgusb/lvm_u1 is now 12288 blocks long.
$ sudo lvresize --size 12M /dev/vgusb/lvm_u1

WARNING: Reducing active logical volume to 12.00 MB

THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvm_u1? [y/n]: y
Reducing logical volume lvm_u1 to 12.00 MB
Logical volume lvm_u1 successfully resized

$ sudo mount -t ext3 /dev/mapper/vgusb-lvm_u1 /mnt/u1 Remount volume
$ df -m /mnt/u1 See 4MB of 12MB used

Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/vgusb-lvm_u1 12 4 9 20% /mnt/u1

The newly mounted volume appears now as 12MB instead of 16MB in size.

Extend LVM Partition in Linux

Say that you are running out of space and you want to add more space to your LVM volume.

To do that, unmount the volume and use the lvresize command. After that, you
must also check the file system with e2fsck and run resize2fs to resize the ext3
file system on that volume:

$ sudo umount /mnt/u1 Unmount volume
$ sudo lvresize --size 16M /dev/vgusb/lvm_u1 Resize volume

Extending logical volume lvm_u1 to 16.00 MB
Logical volume lvm_u1 successfully resized

$ sudo e2fsck -f /dev/vgusb/lvm_u1
e2fsck 1.40 (12-Jul-2007)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vgusb/lvm_u1: 12/3072 files (25.0% non-contiguous), 3379/12288 blocks
$ sudo resize2fs /dev/vgusb/lvm_u1 16M Resize file system
resize2fs 1.38 (30-Jun-2005)

Resizing the filesystem on /dev/vgusb/lvm_u1 to 16384 (1k) blocks.
The filesystem on /dev/vgusb/lvm_u1 is now 16384 blocks long.

In the example just shown, the volume and the file system are both resized to 16MB.
Next, mount the volume again and check the disk space and the md5sum you created earlier:

$ sudo mount -t ext3 /dev/mapper/vgusb-lvm_u1 /mnt/u1 Remount volume
$ df -m /mnt/u1 See 4MB of 16MB used

Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/vgusb-lvm_u1 16 4 13 20% /mnt/u1

$ md5sum /mnt/u1/vmlinuz-2.6.20-1.2316.fc5 Recheck md5sum
8d0dc0347d36ebd3f6f2b49047e1f525 /mnt/u1/vmlinuz-2.6.20-1.2316.fc5

The newly mounted volume is now 16MB instead of 12MB in size.

How to adjust Audio Levels in Ubuntu

The command line audio tools you use to enable audio devices and adjust audio levels
depend on the type of audio system you use. Advanced Linux Sound Architecture
(ALSA) is the sound system used by most Linux systems these days. The Open Source
Sound System (OSS) has been around longer and is still used on older hardware. In general, you can use alsamixer to adjust sound when ALSA is used and aumix with OSS.
ALSA is the default sound system for many Linux systems. By adding loadable modules that enable OSS device interfaces to work as well, audio applications that require the OSS device interface can work with ALSA as well. To see if OSS modules are loaded, such as snd-pcm-oss (emulates /dev/dsp and /dev/audio), snd-mixeross (emulates /dev/mixer), and snd-seq-oss (emulates /dev/sequencer), type:

# lsmod | grep snd

If the modules are loaded, you can use alsamixer to adjust audio levels for OSS sound
applications. Start alsamixer as follows:

$ alsamixer Show alsamixer screen with playback view
$ alsamixer -V playback Show only playback channels (default)
$ alsamixer -V all Show with playback and capture views
$ alsamixer -c 1 Use alsamixer on second (1) sound card

Volume bars appear for each volume channel. Move right and left arrow keys to
highlight different channels (Master, PCM, Headphone, and so on). Use the up and down
arrow keys to raise and lower the volume on each channel. With a channel highlighted,
press m to mute or unmute that channel. Press the spacebar on a highlighted
input channel (Mic, Line, and so on) to assign the channel as the capture channel (to record audio input). To quit alsamixer, press Alt+q or the Esc key. Press Tab to cycle through settings for Playback, Capture, and All.

The aumix audio mixing application (for which you need to install the aumix package)
can operate in screen-oriented or plain command mode. In plain text you use options
to change or display settings. Here are examples of aumix command lines:

$ aumix -q Show left/right volume and type for all channels
$ aumix -l q -m q List current settings for line and mic only
$ aumix -v 80 -m 0 Set volume to 80% and microphone to 0
$ aumix -m 80 -m R -m q Set mic to 80%, set it to record, list mic
$ aumix With no options, aumix runs screen-oriented

When run screen-oriented, aumix displays all available audio channels. In screen-oriented mode, use keys to highlight and change displayed audio settings. Use PageUp, PageDown, and the up arrow and down arrow keys to select channels. Use the right or left arrow key to increase or decrease volume. Type m to mute the current channel. Press the spacebar to select the current channel as the recording device. If a mouse is available, you can use it to select volume levels, balance levels, or the current recording channel.

Using sed to replace a word in a file

Finding text within a file is sometimes the first step toward replacing text. Editing streams of text is done using the sed command. The sed command is actually a full-blown scripting language. For the examples in this chapter, we cover basic text replacement with the sed command.

If you are familiar with text replacement commands in vi, sed has some similarities.
In the following example, sed replaces only the first occurrence per line of francois with chris. Here, sed takes its input from a pipe, while sending its output to stdout (your screen):

$ cat myfile.txt | sed s/francois/chris/

Adding a g to the end of the substitution line, as in the following command, causes
every occurrence of francois to be changed to chris. Also, in the following example,
input is directed from the file myfile.txt and output is directed to mynewfile.txt:

$ sed s/francois/chris/g < myfile.txt > mynewfile.txt
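You can see the difference between the two forms with a quick test on a sample string (echo stands in for the file input):

```shell
# Without g, only the first match on the line is replaced:
echo "francois met francois" | sed s/francois/chris/    # chris met francois
# With g, every match on the line is replaced:
echo "francois met francois" | sed s/francois/chris/g   # chris met chris
```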

The next example changes every occurrence of the text /home/bob to /home2/bob in the output from the /etc/passwd file. (Note that this command does not change that file, but outputs the changed text.) This is useful when user accounts are migrated to a new directory (presumably on a new disk), named with much deliberation, home2. Here, we have to use quotes and backslashes to escape the forward slashes so they are not interpreted as delimiters:

$ sed 's/\/home\/bob/\/home2\/bob/g' < /etc/passwd

Although the forward slash is the sed command’s default delimiter, you can change the
delimiter to any other character of your choice. Changing the delimiter can make your
life easier when the string contains slashes. For example, the previous command line
that contains a path could be replaced with either of the following commands:

$ sed 's-/home/bob-/home2/bob-g' < /etc/passwd
$ sed 'sD/home/bobD/home2/bobDg' < /etc/passwd

In the first line shown, a dash (-) is used as the delimiter. In the second case, the letter D is the delimiter.
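Any other unused character works the same way; for example, the pipe character is a popular choice (a runnable sketch, with echo supplying a sample line instead of /etc/passwd):

```shell
# Using | as the delimiter avoids escaping the slashes in the paths.
echo "/home/bob/file.txt" | sed 's|/home/bob|/home2/bob|'   # /home2/bob/file.txt
```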

The sed command can run multiple substitutions at once, by preceding each one with -e. Here, in the text streaming from myfile.txt, all occurrences of francois are changed to FRANCOIS and occurrences of chris are changed to CHRIS:

$ sed -e s/francois/FRANCOIS/g -e s/chris/CHRIS/g < myfile.txt

You can use sed to add newline characters to a stream of text. Where Enter appears, press the Enter key. The > on the second line is generated by bash, not typed in:

$ echo aaabccc | sed 's/b/\Enter
> /'
aaa
ccc

The trick just shown does not work on the left side of the sed substitution command.
When you need to substitute newline characters, it’s easier to use the tr command.
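For example, tr can translate a character directly into a newline, which is awkward to express on the left side of a sed substitution:

```shell
# Turn each b in the stream into a newline character.
echo aaabccc | tr 'b' '\n'
```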

How to Search for Text in a File Using the grep Command

The grep command comes in handy when you need to perform more advanced string
searches in a file. In fact, the phrase to grep has actually entered the computer jargon as a verb, just as to Google has entered the popular language. Here are examples of the grep command:

$ grep francois myfile.txt Show lines containing francois
# grep 404 /var/log/httpd/access_log Show lines containing 404
$ ps auwx | grep init Show init lines from ps output
$ ps auwx | grep "\[*\]" Show bracketed commands
$ dmesg | grep "[ ]ata\|^ata" Show ata kernel device information

These command lines have some particular uses, beyond being examples of the grep
command. By searching access_log for 404 you can see requests to your web server
for pages that were not found (these could be someone fishing to exploit your system,
or a web page you moved or forgot to create). Displaying bracketed commands that are
output from the ps command is a way to see commands for which ps cannot display
options. The last command checks the kernel buffer ring for any ATA device information, such as hard disks and CD-ROM drives.

The grep command can also recursively search a few or a whole lot of files at the same time. The following command recursively searches files in the /etc/httpd/conf and /etc/httpd/conf.d directories for the string VirtualHost:

$ grep -R VirtualHost /etc/httpd/conf*

Note that your system may not have any files with names starting with conf in the
/etc/httpd directory, depending on what you have installed on your system. You
can apply this technique to other files as well. Add line numbers (-n) to your grep command to find the exact lines where the search terms occur:

$ grep -Rn VirtualHost /etc/httpd/conf*

To colorize the searched term in the search results, add the --color option:

$ grep --color -Rn VirtualHost /etc/httpd/conf*

By default, in a multifile search, the file name is displayed for each search result. Use the -h option to disable the display of file names. This example searches for the string sshd in the file auth.log:

$ grep -h sshd /var/log/auth.log

If you want to ignore case when you search messages, use the -i option:

$ grep -i selinux /var/log/messages Search file for selinux (any case)
To display only the name of the file that includes the search term, add the -l option:

$ grep -Rl VirtualHost /etc/httpd/conf*

To display all lines that do not match the string, add the -v option:

$ grep -v " 200 " /var/log/httpd/access_log* Show lines without " 200 "

NOTE When piping the output of ps into grep, here’s a trick to prevent the grep process from appearing in the grep results:

# ps auwx | grep "[i]nit"
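Why the trick works can be demonstrated directly: the character class [i]nit matches the literal text init, but it does not match the text [i]nit that appears on grep's own command line:

```shell
# The pattern [i]nit matches the literal text "init" ...
echo "init process" | grep -c "[i]nit"

# ... but not the literal text "[i]nit", which is what shows up
# in grep's own entry in the ps output
echo "grep [i]nit" | grep -c "[i]nit" || true
```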

Linux File Checksums: MD5 and SHA1

When files such as software packages and CD or DVD images are shared over the
Internet, a SHA1SUM or MD5SUM file is often published alongside them. Those files
contain checksums that can be used to make sure that the file you downloaded is
exactly the one the repository published.

The following are examples of the md5sum and sha1sum commands being used to
produce checksums of files:

$ md5sum whatever.iso
d41d8cd98f00b204e9800998ecf8427e whatever.iso

$ sha1sum whatever.iso
da39a3ee5e6b4b0d3255bfef95601890afd80709 whatever.iso

Which command you choose depends on whether the provider of the file you are
checking distributed md5sum or sha1sum information. For example, here is what
the md5sum.txt file for the Ubuntu Feisty distribution looked like:

90537599d934967f4de97ee0e7e66e6c ./dists/feisty/main/binary-i386/Release
c53152b488a9ed521c96fdfb12a1bbba ./dists/feisty/main/binary-i386/Packages
ba9a035c270ba6df978097ee68b8d7c6 ./dists/feisty/main/binary-i386/Packages.gz

To verify only one of the files listed in the file, you could do something like the following:

$ cat md5sum.txt | grep Release.gpg | md5sum -c
./dists/feisty/Release.gpg: OK

If you had an SHA1SUM file instead of an md5sum.txt file to check against, you could
use the sha1sum command in the same way.

By combining the find command described earlier in this chapter with the md5sum
command, you can verify any part of your file system. For example, here’s how to create an MD5 checksum for every file in the
/etc directory so they can be checked later to see if any have changed:

$ sudo find /etc -type f -exec md5sum {} \; > /tmp/md5.list 2> /dev/null

The result of the previous command line is a /tmp/md5.list file that contains a 128-bit checksum for every file in the /etc directory. Later, you could type the following command to see if any of those files have changed:

$ cd /etc

$ md5sum -c /tmp/md5.list | grep -v 'OK'
./hosts.allow: FAILED
md5sum: WARNING: 1 of 1668 computed checksums did NOT match

As you can see from the output, only one file changed (hosts.allow). So the next
step is to check the changed file and see if the changes to that file were intentional.
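The whole create-then-verify cycle can be rehearsed on a scratch directory before pointing it at /etc (the paths and file contents below are illustrative):

```shell
# Create a scratch directory with two files
mkdir -p /tmp/md5demo
echo "alpha" > /tmp/md5demo/one.txt
echo "beta" > /tmp/md5demo/two.txt

# Record a checksum for every file under the directory
find /tmp/md5demo -type f -exec md5sum {} \; > /tmp/md5demo.list

# Change one file, then verify: only the changed file is reported FAILED
echo "gamma" > /tmp/md5demo/two.txt
md5sum -c /tmp/md5demo.list | grep -v 'OK' || true
```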

Listing Files with the ls Command in Linux

$ ls -l Files and directories in current directory
$ ls -la Includes files/directories beginning with dot (.)
$ ls -lt Orders files by time recently changed
$ ls -lu Orders files by time recently accessed
$ ls -lS Orders files by size
$ ls -li Lists the inode associated with each file
$ ls -ln Lists numeric user/group IDs, instead of names
$ ls -lh Lists file sizes in human-readable form (K, M, etc.)
$ ls -lR Lists files recursively, from current directory and subdirectories

Finding Files in Linux Older Than 40 Days

This command line finds files that have not been accessed in /home/chris for more
than 40 days:

$ find /home/chris/ -atime +40
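The same test can be tried safely on a scratch directory by backdating a file's access time with GNU touch (the directory and file names here are made up):

```shell
# Make a scratch directory with one fresh file and one "stale" file
mkdir -p /tmp/finddemo
touch /tmp/finddemo/recent.txt

# Backdate the access time by about 50 days (GNU touch syntax)
touch -a -d "50 days ago" /tmp/finddemo/stale.txt

# Only the backdated file matches -atime +40
find /tmp/finddemo -type f -atime +40
```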

How to display all Environment Variables in Linux

To display all of the environment variables, in alphabetical order, that are already set for your shell, type the following:

$ set | less

Also, you can concatenate a string to an existing variable:

$ export PATH=$PATH:/home/fcaen

To list only your shell’s exported environment variables:
$ env
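The difference between set and env is that env shows only exported variables, while set also lists plain shell variables and functions. A quick sketch (the variable names are made up):

```shell
LOCAL_ONLY=1            # a shell variable, not exported
export EXPORTED_VAR=2   # exported, so child processes see it

set | grep '^LOCAL_ONLY='     # listed by set
env | grep '^EXPORTED_VAR='   # listed by env
env | grep '^LOCAL_ONLY=' || echo "LOCAL_ONLY is not in env"
```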

Linux Bash History

The Bourne Again Shell (bash) is the shell used by default by most modern Linux systems and quite a few other operating systems such as Mac OS X. Built into bash, as with other shells, is a history feature that lets you review, change, and reuse commands that you have run in the past. This can prove very helpful as many Linux commands are long and complicated.

When bash starts, it reads the ~/.bash_history file and loads it into memory. The
location of this file is set by the $HISTFILE environment variable.

NOTE See the section “Using Environment Variables” later in this chapter for
more on how to work with shell environment variables such as $HISTFILE.

During a bash session, commands are added to history in memory. When bash exits, the in-memory history is written back to the .bash_history file. The number of commands held in history during a bash session is set by $HISTSIZE, while the number of commands actually stored in the history file is set by $HISTFILESIZE:


$ echo $HISTFILE $HISTSIZE $HISTFILESIZE
/home/fcaen/.bash_history 500 500
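Both variables can be raised for the current session with export (the values below are arbitrary); to make the change permanent, add the same lines to ~/.bashrc:

```shell
# Enlarge history for the current session only;
# put the same lines in ~/.bashrc to make them permanent
export HISTSIZE=5000
export HISTFILESIZE=5000
echo "$HISTSIZE $HISTFILESIZE"
```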

To list the entire history, type history. To list a previous number of history commands, follow history with a number. This lists the previous five commands in your history:

$ history 5
975 mkdir extras
976 mv *doc extras/
977 ls -CF
978 vi house.txt
979 history

To move among the commands in your history, use the up arrow and down arrow. Once a
command is displayed, you can use the keyboard to edit the current command like any other command: left arrow, right arrow, Delete, Backspace, and so on. Here are some other ways to recall and run commands from your bash history:

$ !! Run the previous command
$ !977 Run command number 977 from history
ls -CF
$ !977 *doc Append *doc to command 977 from history
ls -CF *doc
$ !?CF? Run previous command line containing the CF string
ls -CF *doc
$ !ls Run the previous ls command
ls -CF *doc
$ !ls:s/CF/l Run previous ls command, replacing CF with l
ls -l *doc

Another way to edit the command history is using the fc command. With fc, you open
the chosen command from history using the vi editor. The edited command runs when you
exit the editor. Change to a different editor by setting the FCEDIT variable (for example, FCEDIT=gedit) or on the fc command line. For example:

$ fc 978 Edit command number 978, then run it
$ fc Edit the previous command, then run it
$ fc -e /usr/bin/nano 989 Use nano to edit command 989

Use Ctrl+r to search for a string in history. For example, typing Ctrl+r followed by the string ss resulted in the following:
(reverse-i-search)`ss': sudo /usr/bin/less /var/log/messages
Press Ctrl+r repeatedly to search backwards through your history list for other occurrences of the ss string.

Ubuntu Virtual terminals

When Ubuntu boots in multi-user mode (runlevels 2 through 5), six virtual consoles (known as tty1 through tty6) are created with text-based logins. If an X Window System desktop is running, X is probably running in virtual console 7. If X isn’t running, chances are you’re looking at virtual console 1.

From X, you can switch to another virtual console with Ctrl+Alt+F1, Ctrl+Alt+F2, and so on up to 6. From a text virtual console, you can switch using Alt+F1, Alt+F2, and so on. Press Alt+F7 to return to the X GUI. Each console allows you to log in using a different user account, and switching to look at another console doesn’t affect processes running
in any of them. When you switch to virtual terminals one through six, you see a login
prompt similar to the following:

Ubuntu 7.04 localhost tty2
localhost login:

Separate getty processes manage each virtual terminal. Type this command to see
what getty processes look like before you log in to any virtual terminals:

$ ps awx | grep -v grep | grep getty
4366 tty4 Ss+ 0:00 /sbin/getty 38400 tty4
4367 tty5 Ss+ 0:00 /sbin/getty 38400 tty5
4372 tty2 Ss+ 0:00 /sbin/getty 38400 tty2
4373 tty3 Ss+ 0:00 /sbin/getty 38400 tty3
4374 tty1 Ss+ 0:00 /sbin/getty 38400 tty1
4375 tty6 Ss+ 0:00 /sbin/getty 38400 tty6

After I log in on one of the consoles (tty2 here), getty hands the session to login, which then fires up a bash shell:

$ ps awx | grep -v grep | grep tty
4366 tty4 Ss+ 0:00 /sbin/getty 38400 tty4
4367 tty5 Ss+ 0:00 /sbin/getty 38400 tty5
4372 tty2 Ss 0:00 /bin/login --
4373 tty3 Ss+ 0:00 /sbin/getty 38400 tty3
4374 tty1 Ss+ 0:00 /sbin/getty 38400 tty1
4375 tty6 Ss+ 0:00 /sbin/getty 38400 tty6
7214 tty2 S+ 0:00 -bash

Virtual consoles are configured in the /etc/event.d directory. A script appears for each virtual console, such as tty1 for the tty1 console, tty2 for the tty2 console, and so on.

How to Build deb Packages in Ubuntu

By rebuilding the .deb file from which a Debian package is installed, you can change
the package to better suit the way you use the software (for example, by including an md5sums file).
To begin, you need to extract a .deb file that you want to modify into a working
directory. You then modify the file tree and control files to suit your needs.
For example, you could download and extract the rsync package and control files into
the current directory by typing the following commands (your $RANDOM directory will
be different of course):

$ aptitude download rsync

Then extract the package contents and the control files from the downloaded file. Note that the $RANDOM directory is found by typing /tmp/rsync_ and pressing Tab:

$ sudo dpkg -x rsync_2.6.9-3ubuntu1.1_i386.deb /tmp/rsync_$RANDOM
$ sudo dpkg -e rsync_2.6.9-3ubuntu1.1_i386.deb /tmp/rsync_17197/

Now change to your package directory, where you extracted the .deb file to, and have
a look around. You should see a directory structure that looks very similar to this:

$ cd /tmp/rsync_17197
$ ls -lart
-rwxr-xr-x 1 root root 491 2007-08-17 20:47 prerm
-rwxr-xr-x 1 root root 110 2007-08-17 20:47 postrm
-rwxr-xr-x 1 root root 523 2007-08-17 20:47 postinst
drwxr-xr-x 4 root root 4096 2007-08-17 20:48 usr
drwxr-xr-x 4 root root 4096 2007-08-17 20:48 etc
-rw-r--r-- 1 root root 37 2007-08-17 20:48 conffiles
-rw-r--r-- 1 root root 985 2007-09-02 12:02 control
drwxr-xr-x 4 root root 4096 2007-09-02 12:02 .
drwxrwxrwt 10 root root 4096 2007-09-02 13:24 ..

Now you have to configure the package directory to fit the formats that dpkg will want for building the .deb file. This involves creating a subdirectory named rsync_2.6.9-3cn1.1/DEBIAN and moving the install files into it. The control file itself is a specially formatted file that contains header and content fields and is parsed by the package tools to print out information about the package:

$ sudo mkdir -p rsync_2.6.9-3cn1.1/DEBIAN
$ sudo mv control conffiles prerm postrm postinst rsync_2.6.9-3cn1.1/DEBIAN

You also need to move the etc/ and usr/ directories under the rsync_2.6.9-3cn1.1
directory:

$ sudo mv usr etc rsync_2.6.9-3cn1.1

You should end up with everything filed away correctly, and all that is left is the
rsync_2.6.9-3cn1.1 directory in your current directory. Now move the md5sums file that the installed rsync package left in dpkg's database into your DEBIAN subdirectory, renaming it to md5sums. This will give debsums some md5sums to check:

$ sudo mv /var/lib/dpkg/info/rsync.md5sums rsync_2.6.9-3cn1.1/DEBIAN/md5sums
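If you don't have an md5sums file to move, one way to regenerate it from the extracted tree is with find and md5sum, writing paths relative to the package root, which is the layout dpkg and debsums expect. A sketch using a mock package tree (the tree and file names are made up, standing in for the rsync directories):

```shell
# Mock package tree standing in for rsync_2.6.9-3cn1.1
mkdir -p /tmp/pkgdemo/DEBIAN /tmp/pkgdemo/usr/bin
echo '#!/bin/sh' > /tmp/pkgdemo/usr/bin/tool

# Generate md5sums with paths relative to the package root
cd /tmp/pkgdemo
find usr -type f -exec md5sum {} \; > DEBIAN/md5sums
cat DEBIAN/md5sums
```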

Now edit the control file to modify some of the information. You certainly don’t want
to install your modified version of rsync with the same package info as the original.
Open the control file in vi or another editor and change the Version line as shown below. The word Version followed by a colon is the header field; the information field comes right after it. Be sure to maintain the space after the colon, and do not put any extra carriage returns or spaces in the file; the format is very picky.

$ sudo vi rsync_2.6.9-3cn1.1/DEBIAN/control
Version: 2.6.9-3cn1.1

A little farther down, you can add to the Description field. This text shows up in the
descriptions whenever someone views the package details. Notice the space right
before the words fast remote ...: that leading space is part of the special formatting
and is how dpkg distinguishes a continuation line from a new header field. Be sure to
put a space in the first column if you wrap the description to the next line:

Description: Modified by CN 2007-09-02 to include md5sums.
fast remote file copy program (like rcp)
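Putting the two edits together, the top of the modified control file would look roughly like this (only Version and Description change; the other fields, abbreviated here, are carried over unchanged from the original package):

```
Package: rsync
Version: 2.6.9-3cn1.1
Architecture: i386
Maintainer: (unchanged from the original package)
Description: Modified by CN 2007-09-02 to include md5sums.
 fast remote file copy program (like rcp)
```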

Now build your new package using dpkg -b and the name of the package directory you created. You will get a warning about Original-Maintainer being a user-defined field, which you can safely ignore.

$ sudo dpkg -b rsync_2.6.9-3cn1.1
warning, `rsync_2.6.9-3cn1.1/DEBIAN/control' contains user-defined field
dpkg-deb: building package `rsync' in `rsync_2.6.9-3cn1.1.deb'.
dpkg-deb: ignoring 1 warnings about the control file(s)
You now have a new .deb file and can ask dpkg to display information about it. Just
run dpkg with the -I option to see the new package info:

$ dpkg -I rsync_2.6.9-3cn1.1.deb

new debian package, version 2.0.
size 1004 bytes: control archive= 712 bytes.
970 bytes, 21 lines control
Package: rsync
Version: 2.6.9-3cn1.1

You could install the new rsync package at this point. This exercise is mainly a
demonstration of building a custom package, not of hacking up the system
needlessly. Nonetheless, the following code shows that this package will
install and act like a regular Debian package, with debsums working as well.
Notice that dpkg tells you about the downgrade:

$ sudo dpkg -i rsync_2.6.9-3cn1.1.deb

dpkg - warning: downgrading rsync from 2.6.9-3ubuntu1 to 2.6.9-3cn1.1.
(Reading database ... 88107 files and directories currently installed.)
Preparing to replace rsync 2.6.9-3ubuntu1 (using rsync_2.6.9-3cn1.1.deb) ...
Unpacking replacement rsync ...
Setting up rsync (2.6.9-3cn1.1) ...

The debsums utility now has some md5sum files to test with, and anywhere your new
rsync package is installed, this will be the same:

$ debsums rsync
/usr/bin/rsync OK
/usr/share/doc/rsync/examples/rsyncd.conf OK
/usr/share/doc/rsync/README.gz OK

You can also ask dpkg to list your rsync package using the -l option to confirm that
the new version is installed:

$ dpkg -l rsync
ii rsync 2.6.9-3cn1.1 Modified by CN 2007-09-02 to include md5sums.

NOTE You can find out more about building .deb files by visiting the Debian
Binary Package Building HOWTO. The dpkg-deb man page is also a good source of info on deb package building.

How to Upgrade Packages using Aptitude

By default, aptitude will always perform an apt-get update before installing or
upgrading. You can, however, still issue a command to perform only the update:

$ sudo aptitude update
Get:1 feisty-security Release.gpg [191B]
Ign feisty-security/main Translation-en_US
Get:2 feisty Release.gpg [191B]

If you want to upgrade all packages on the system, you can send along the upgrade option with aptitude. This will install any new packages waiting in the repositories (in this example, there were no new packages on hand).

$ sudo aptitude upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Building tag database... Done
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Need to get 0B of archives. After unpacking 0B will be used.


How to Find Cached Packages Using APT

To search the APT package cache for a keyword, use apt-cache search:

$ apt-cache search picasa
picasa - Picasa is software that helps you instantly find, edit and share all the pictures on your PC.

You can also ask APT to show info about this Picasa package:

$ apt-cache show picasa
Package: picasa
Version: 2.2.2820-5

Just how much extra software will installing Picasa require? Check for dependencies with the following:

$ apt-cache depends picasa
Depends: libc6

Adding a Repository and Third-Party Signature in Ubuntu/Debian

To get started using the Google repository, bring up the /etc/apt/sources.list
file in a text editor (nano, vi) via sudo:

$ sudo vi /etc/apt/sources.list

Then add the following two lines to the bottom of the sources.list file:

# cn - added for google software

deb stable non-free

You also need to download the Google signing key, which is used to authenticate Google packages by way of a digital signature. The key can be downloaded with wget, placing the file in the /tmp/ directory so it can be imported in a second step.

$ wget -O /tmp/
`/tmp/' ...

The wget command (described in Chapter 12) downloads a file from the Google site
and places it into /tmp/. The crucial part here is that this is the public key used to verify the packages downloaded from the Google site.
Then import the key into APT using the apt-key command:

$ sudo apt-key add /tmp/

Check the APT security keys to make sure the Google digital signature was imported
correctly (some output omitted):

$ sudo apt-key list
uid Google, Inc. Linux Package Signing Key

sub 2048g/C07CB649 2007-03-08

Next, update the APT package cache to refresh the new repository. This is done using sudo and running apt-get update. Make sure to check for the Google repository as it scrolls by:

$ sudo apt-get update
Get:1 stable Release.gpg [189B]
Ign stable/non-free Translation-en_US
Get:2 stable Release [1026B]

Enabling Additional Ubuntu/Debian Repositories for APT

In earlier Ubuntu releases, the multiverse and universe repositories were not enabled by default. These repositories now come enabled by default, so doing updates and searching for software will turn up many more options. One concern you may have, however, is that support, licensing, and patches may not be available for packages in the universe and multiverse repositories. This could be a problem if you are considering an installation where you need to adhere to certain policies and procedures.

To disable the universe or multiverse repositories, open the file
/etc/apt/sources.list in a text editor and comment out the lines that have
multiverse or universe components enabled. You may want to initial the comments to
make note of what you commented out, as shown by the #cn in the following examples:

#cn deb feisty universe
#cn deb-src feisty universe
#cn deb feisty multiverse
#cn deb-src feisty multiverse
#cn deb feisty-security universe
#cn deb-src feisty-security universe
#cn deb feisty-security multiverse
#cn deb-src feisty-security multiverse

Likewise, if you want to add extra repositories that may be offered by individuals or
companies, you can do so by adding a line to the /etc/apt/sources.list file. To
edit this file, you must have root permissions:

$ sudo vi /etc/apt/sources.list

Insert a line starting with deb (for pre-built packages) or deb-src (for source packages), then the URL for the repository, along with the distribution (such as feisty above), and the component descriptions (universe in the examples). Typically, you'll describe components as contrib for contributed (that is, not from the Ubuntu project) and free or non-free. Normally, you should receive all this information from the site that offers the repository.

If you do add other third-party repositories, be sure to look into the authenticity of the entity offering the software before modifying your Linux system. Although it’s
not a big problem with Linux these days, it is easy to add broken or malicious software to your system if you do not exercise care and reasonable caution. Only use software from well-known sources, and always have a means to verify software you download prior to installing. For more information on software repositories,
see the Debian Repository HOWTO.

An example from the HOWTO document follows:

deb unstable main contrib non-free

Handling Locale Error Messages in Ubuntu

If you are working at the command line with Ubuntu (Feisty Fawn), you may see
locale error messages like these while trying to install packages:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
locale: Cannot set LC_CTYPE to default locale: No such file or directory

This seems to be a problem related to the installed language settings, or with
internationalized encoding in general. One workaround is to export the LC_ALL
environment variable, setting it to the same value as your LANG setting.

$ export LC_ALL="$LANG"

There are several other possible workarounds on the help sites, but this one is the easiest to undo in case the cure causes more problems than the condition, and it should work regardless of what language you speak. Note that you will have to run this command every time you open a local or SSH shell. You can automate this task by placing the command in your ~/.bashrc file.
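The ~/.bashrc step can be scripted with a guard so the export line is added only once. This sketch uses a scratch file in place of the real ~/.bashrc:

```shell
# Use a scratch file in place of the real ~/.bashrc
RC=/tmp/bashrc.demo
: > "$RC"

# Append the export line only if it is not already present
grep -q '^export LC_ALL=' "$RC" || echo 'export LC_ALL="$LANG"' >> "$RC"

# A second run is a no-op, so the line is never duplicated
grep -q '^export LC_ALL=' "$RC" || echo 'export LC_ALL="$LANG"' >> "$RC"
grep -c '^export LC_ALL=' "$RC"
```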

Windows Shortcut Tricks

Windows system key combinations

* F1: Help
* CTRL+ESC: Open Start menu
* ALT+TAB: Switch between open programs
* ALT+F4: Quit program
* SHIFT+DELETE: Delete item permanently

Windows program key combinations

* CTRL+C: Copy
* CTRL+X: Cut
* CTRL+V: Paste
* CTRL+Z: Undo
* CTRL+B: Bold
* CTRL+U: Underline
* CTRL+I: Italic

Mouse click/keyboard modifier combinations for shell objects

* SHIFT+right click: Displays a shortcut menu containing alternative commands
* SHIFT+double click: Runs the alternate default command (the second item on the menu)
* ALT+double click: Displays properties
* SHIFT+DELETE: Deletes an item immediately without placing it in the Recycle Bin

General keyboard-only commands

* F1: Starts Windows Help
* F10: Activates menu bar options
* SHIFT+F10: Opens a shortcut menu for the selected item (this is the same as right-clicking an object)
* CTRL+ESC: Opens the Start menu (use the ARROW keys to select an item)
* CTRL+ESC or ESC: Selects the Start button (press TAB to select the taskbar, or press SHIFT+F10 for a context menu)
* ALT+DOWN ARROW: Opens a drop-down list box
* ALT+TAB: Switch to another running program (hold down the ALT key and then press the TAB key to view the task-switching window)
* SHIFT: Press and hold down the SHIFT key while you insert a CD-ROM to bypass the automatic-run feature
* ALT+SPACE: Displays the main window's System menu (from the System menu, you can restore, move, resize, minimize, maximize, or close the window)
* ALT+- (ALT+hyphen): Displays the Multiple Document Interface (MDI) child window's System menu (from the MDI child window's System menu, you can restore, move, resize, minimize, maximize, or close the child window)
* CTRL+TAB: Switch to the next child window of a Multiple Document Interface (MDI) program
* ALT+underlined letter in menu: Opens the menu
* ALT+F4: Closes the current window
* CTRL+F4: Closes the current Multiple Document Interface (MDI) window
* ALT+F6: Switch between multiple windows in the same program (for example, when the Notepad Find dialog box is displayed, ALT+F6 switches between the Find dialog box and the main Notepad window)

Shell objects and general folder/Windows Explorer shortcuts
For a selected object:

* F2: Rename object
* F3: Find all files
* CTRL+X: Cut
* CTRL+C: Copy
* CTRL+V: Paste
* SHIFT+DELETE: Delete selection immediately, without moving the item to the Recycle Bin
* ALT+ENTER: Open the properties for the selected object

To copy a file
Press and hold down the CTRL key while you drag the file to another folder.
To create a shortcut
Press and hold down CTRL+SHIFT while you drag a file to the desktop or a folder.

General folder/shortcut control

* F4: Selects the Go To A Different Folder box and moves down the entries in the box (if the toolbar is active in Windows Explorer)
* F5: Refreshes the current window.
* F6: Moves among panes in Windows Explorer
* CTRL+G: Opens the Go To Folder tool (in Windows 95 Windows Explorer only)
* CTRL+Z: Undo the last command
* CTRL+A: Select all the items in the current window
* BACKSPACE: Switch to the parent folder
* SHIFT+click+Close button: For folders, close the current folder plus all parent folders

Windows Explorer tree control

* Numeric Keypad *: Expands everything under the current selection
* Numeric Keypad +: Expands the current selection
* Numeric Keypad -: Collapses the current selection.
* RIGHT ARROW: Expands the current selection if it is not expanded, otherwise goes to the first child
* LEFT ARROW: Collapses the current selection if it is expanded, otherwise goes to the parent

Properties control

* CTRL+TAB/CTRL+SHIFT+TAB: Move through the property tabs

Accessibility shortcuts

* Press SHIFT five times: Toggles StickyKeys on and off
* Press down and hold the right SHIFT key for eight seconds: Toggles FilterKeys on and off
* Press down and hold the NUM LOCK key for five seconds: Toggles ToggleKeys on and off
* Left ALT+left SHIFT+NUM LOCK: Toggles MouseKeys on and off
* Left ALT+left SHIFT+PRINT SCREEN: Toggles high contrast on and off

Microsoft Natural Keyboard keys

* Windows Logo: Start menu
* Windows Logo+R: Run dialog box
* Windows Logo+M: Minimize all
* SHIFT+Windows Logo+M: Undo minimize all
* Windows Logo+F1: Help
* Windows Logo+E: Windows Explorer
* Windows Logo+F: Find files or folders
* Windows Logo+D: Minimizes all open windows and displays the desktop
* CTRL+Windows Logo+F: Find computer
* CTRL+Windows Logo+TAB: Moves focus from Start, to the Quick Launch toolbar, to the system tray (use RIGHT ARROW or LEFT ARROW to move focus to items on the Quick Launch toolbar and the system tray)
* Windows Logo+TAB: Cycle through taskbar buttons
* Windows Logo+Break: System Properties dialog box
* Application key: Displays a shortcut menu for the selected item

Microsoft Natural Keyboard with IntelliType software installed

* Windows Logo+L: Log off Windows
* Windows Logo+P: Starts Print Manager
* Windows Logo+C: Opens Control Panel
* Windows Logo+V: Starts Clipboard
* Windows Logo+K: Opens Keyboard Properties dialog box
* Windows Logo+I: Opens Mouse Properties dialog box
* Windows Logo+A: Starts Accessibility Options (if installed)
* Windows Logo+SPACEBAR: Displays the list of Microsoft IntelliType shortcut keys
* Windows Logo+S: Toggles CAPS LOCK on and off

Dialog box keyboard commands

* TAB: Move to the next control in the dialog box
* SHIFT+TAB: Move to the previous control in the dialog box
* SPACEBAR: If the current control is a button, this clicks the button. If the current control is a check box, this toggles the check box. If the current control is an option, this selects the option.
* ENTER: Equivalent to clicking the selected button (the button with the outline)
* ESC: Equivalent to clicking the Cancel button
* ALT+underlined letter in dialog box item: Move to the corresponding item



Security Interview Q&A

Q: How does a firewall (both host-based and network) affect the time required to run tools that perform network enumeration?
A: When a firewall is configured to reject unauthorized packets, the sending host receives a “connection refused” message. When a firewall drops the unauthorized packet without sending the connection refused message, the sending system must wait a minimum time before determining that the connection will not succeed. On many operating systems and in many applications, the number of retries and length of the timeout can be configured. The longer the timeout and the higher the retry count, the longer it takes to determine whether a service is responding.

Q: What are the common SNMP community strings? What other strings might you try?
A: Public and private are the two most common community strings. Next, try the company’s name, then the corporate initials. Then, along with the corporate initials, try adding RO (read-only) and RW (read-write) to the front and back.

Q: For the following ports, what is the common service and why should you care about this?

A: This is the port for Microsoft SQL Server. A number of worms have used MS-SQL as their attack vector. Many MS-SQL installations have configuration vulnerabilities. You should never be allowed to connect directly to a database server from the external untrusted network.

A: POP3 runs on 110. This is an unencrypted protocol for downloading e-mail. On many systems, the password for e-mail is the same as account sign-on. When you’re network sniffing, both POP3 and IMAP are prime protocols to watch for to learn username/password pairs.


A: Internet Relay Chat (IRC). Most botnets use IRC to communicate. All outbound connections to IRC should be blocked by default. If a business need exists, open connections only to individual IRC servers that are known to be safe.

A: PC Anywhere is used by many organizations for remote administration. Many PC Anywhere installations are not configured with strong authentication. Most are not configured with encrypted connections, allowing for easy sniffing of all activity.

Q: When doing a security evaluation, how many automated tools should you use and why?
A: You should use at least one tool but preferably two or more. Using at least one automated tool increases the consistency and reliability of the work. No one tool can do it all. By using more than one tool, you lessen the number of false positives and false negatives.

Q: BiDiBLAH uses Google to gather information. Why use Google?
A: Many companies do not understand the completeness with which Google caches the Internet. Combine Google with the Wayback Machine and you can learn a lot about a company. By searching for e-mail addresses, you learn who key individuals are, along with, possibly, sub-domains. By searching Internet newsgroups, you may be able to determine what language and IDE the target is using to develop its primary application. Sometimes Google will even find code snippets from the application.

Q: How does ARP poisoning work?
A: Systems do not communicate directly with IP; they use the MAC address. When systemA is about to start a new connection to systemB, it must find systemB’s MAC address, so systemA broadcasts an ARP request to the network. This request asks, “Who has IP address B?” Normally configured machines answer only for their own IP address. In ARP poisoning, you respond to all ARP requests by saying that you are systemB when you are really systemH. Your machine will usually also constantly broadcast replies for all IP addresses on the network. These broadcasts are picked up by all systems, which fill their local ARP tables with your MAC address for all IPs.

Q: How accurate is banner grabbing for enumerating what application and version is running on a remote system?
A: You cannot rely on banner grabbing. Most applications can be configured to lie within their banner. Also, most applications can be configured to run on different ports than normal.

Q: Why will Port Security not stop ARP poisoning?
A: Port Security limits only the number of MAC addresses per port. When ARP poisoning, you are not sending multiple MAC addresses. You are sending multiple IP addresses and associating them with your one MAC address. If you can limit the number of IP addresses per port, you can severely limit the scope of ARP poisoning.

Security Postures Interview Q&A

Q: Why should we care about what our security posture is?
A: The short answer is liability and risk management. Companies that are taking an active risk management approach will reduce the likelihood of failure to meet regulatory compliance. Knowing what your security posture is can assist management in effectively assigning resources to achieve business and security goals.

Q: I’m looking at working with the federal government; what should I read?
A: The Federal Information Security Management Act of 2002 (FISMA) and the related documents that are discussed in the FISMA document.

Q: My company currently supports the DITSCAP process. Why was that not covered in this chapter?
A: DITSCAP has been superseded by DIACAP. The transition process is fully explained on the DITSCAP Web site (click DIACAP).

Q: Can you use more security objectives than confidentiality, integrity, and availability?
A: Yes, but doing so is customer specific. Some customers may want authentication or nonrepudiation. Authentication is the process of determining whether someone or something is who or what he, she, or it claims to be. Nonrepudiation is the ability to ensure that the sender of a communication cannot deny the authenticity of his or her signature on a document or the sending of a message that he or she originated. Other security objectives used depend upon the customer.

Q: How does a risk assessment differ from a self-assessment?
A: Risk assessments are normally conducted by an independent group that cannot be influenced by organizational politics. Self-assessments can be any of the assessments discussed in this chapter but are conducted by internal staffing.

Q: What is the validity of PDD-63? I was under the impression that PDD-63 expired when President Clinton left office.
A: That is correct. PDD-63 expired when Clinton left office, but President G. W. Bush signed PDD-1 as an interim stopgap measure to prevent the intent of PDD-63 from dying. The current authority for Critical Infrastructure Protection is HSPD-7.

Q: Can I use something like the DISA IAVA system instead of CVE?
A: Yes, the requirement is to use an industry standard. IAVA is a DOD industry standard, whereas CVE is a security industry standard. It is important to pick the appropriate standard for your customer and stick with it.

Q: If CVEs comprise a dictionary of vulnerabilities and ICAT is a database of vulnerabilities, which should I use?
A: We recommend that you use ICAT. ICAT provides much more information than CVE and includes all the CVEs.

Q: Why is it important to provide a justification discussion for every finding?
A: The discussion portion of every finding is important to ensure that management has enough information to make good risk-management decisions. Consider that a report is delivered 30 days after the conclusion of the assessment. Management may not have time for another week or two to start remediation. What is the chance that the managers will remember what you told them in the out-briefing? They probably won’t remember exactly what you explained and will have to rely on their favorite administrator to fill in the gaps. If the administrator does not want to perform the remediation for a particular finding, he or she will try to shift management's opinion. So you need to provide enough information in the report itself for management to make good risk-management decisions.

Q: I have never seen the IPR before. Is it truly useful?
A: Yes. At the end of the out-briefing, the customer wants to know how his or her company is doing. For years, the answer has always come as a personal opinion on the assessor’s part. The IPR shows how the customer is doing without much opinion playing a part.

Wireless Security Interview Q&A

Q: If my wireless network doesn’t have a lot of traffic, is it okay to use WEP because the IVs required to crack the WEP key won’t be generated?
A: No. Automated tools are available that allow attackers to capture an ARP packet and reinject it to the access point very rapidly. This generates a significant amount of traffic and allows the attacker to capture enough unique initialization vectors to quickly crack the key.

Q: What is the difference between active and passive WLAN detection?
A: Active WLAN detection requires that the SSID be broadcast in the beacon frame. Passive WLAN detection listens to all traffic in range of the device and determines what WLANs are in range.

Q: Briefly describe the process involved in cracking WEP.
A: To efficiently crack a WEP key, you first need to obtain an Address Resolution Protocol (ARP) packet from the access point you want to attack. You can obtain this packet by using a tool such as Void11 to send deauthentication packets to the clients associated with that access point. When the clients reassociate to the access point, ARP packets will be generated and can be captured. After you have captured a valid ARP packet, you can use a tool such as Aireplay, part of the Aircrack suite, to inject the ARP packet back into the network. This injection process will cause a large number
of initialization vectors to be generated. You can capture this traffic with any pcap format sniffer. Ethereal, Airodump, and Kismet all support pcap format. After you have captured between 500,000 and 1 million unique initialization vectors, you can then crack the WEP key using Aircrack or other, similar tools. Most of these tools are available for free on the Internet.

Q: How many types of Extensible Authentication Protocols (EAPs) are supported by WPA/WPA2 and what are they?
A: There are six fully supported EAP types for WPA/WPA2: EAP-TLS; EAP-TLS/MSCHAPv2;

Q: What is the primary difference between 802.11g and 802.11a?
A: 802.11g operates in the 2.4 GHz frequency range, as do 802.11b and 802.11i, whereas 802.11a operates in the 5 GHz frequency range.

Q: What is the difference between the HostAP drivers and the wlan-ng drivers for Linux?
A: Both of these drivers work with a variety of cards; however, only the HostAP drivers allow you to place your card in monitor mode.

Q: Who determines the wireless standards?
A: The IEEE develops and determines the wireless standards (802.11a, b, g, and so on). The WiFi Alliance, the group that owns the WiFi trademark, then certifies the interoperability of these devices.

Q: What tools do you use to WarDrive?
A: Depending on the operating system in use, Kismet for Linux or Kismac for OS X provide the greatest level of functionality for detecting and identifying WLANs. NetStumbler is available for Windows but supports only active WLAN detection and identification, whereas the Linux and OS X tools both support passive WLAN detection and identification.

Q: What is the minimum passphrase length that should be used for WPA-PSK?
A: Because WPA-PSK with a short passphrase is vulnerable to a dictionary attack, and automated tools are available to facilitate this process, a WPA-PSK passphrase should be at least 21 characters long.

Q: Our organization doesn’t have a wireless network, so is it even important for our security engineers to understand wireless security?
A: Yes. Even though wireless networking isn’t allowed at your site, it is important that the security staff understand that laptops with wireless cards (authorized or unauthorized) pose a threat to the network and know how to identify them and react accordingly. Additionally, the staff should be able to identify rogue access points and the potential impact they can have on the security of the network.

Wireless Security - WiFi Protected Access (WPA)

In response to the problems with WEP, the WiFi Alliance released WiFi Protected Access (WPA). WPA was initially released in two forms: Pre-Shared Key (WPA-PSK) and in conjunction with RADIUS. WPA uses Temporal Key Integrity Protocol (TKIP) to hash the IVs with the WPA key to create the RC4 key that is transmitted. Initially, this appeared to be the fix to the problems with wireless security; however, as vulnerabilities were discovered in WPA when deployed using the Pre-Shared Key, it became apparent that further attention had to be paid to wireless security, and WPA2 was developed to address these issues.

WPA with a Pre-Shared Key is the easiest way to deploy WPA on a wireless network. WPA-PSK is sometimes referred to as WPA Personal because it was designed for use primarily in home networks or smaller corporate environments. To use WPA-PSK, a passphrase is set on the access point, and any client that wants to connect to it must transmit the passphrase. WPA-PSK works well unless the passphrase is shorter than 21 characters. If the passphrase is shorter than 21 characters, it can be guessed using a dictionary attack. The disclosure of this vulnerability led many experts to believe that wireless could never be deployed securely, and the WiFi Alliance went back to work to develop yet another security mechanism
for wireless networks.

WPA can also be used in conjunction with a backend RADIUS server to perform authentication. This mechanism is sometimes referred to as WPA Enterprise because it was designed to be used in large environments in which distributing the PSK to each individual might not be feasible. This mechanism removes the requirement of a Pre-Shared Key and instead uses WPA to transmit authentication information
to the RADIUS server. WPA-RADIUS relies on an Extensible Authentication Protocol (EAP). EAP-TLS was initially certified by the WiFi Alliance for use with WPA-RADIUS; however, five additional EAPs have since been certified.

Currently, no known weaknesses are associated with WPA-RADIUS.

WPA2, sometimes called 802.11i, requires the use of the Advanced Encryption Standard (AES) instead of TKIP but operates in the same way as WPA. WPA2 can also be deployed with either a PSK or by using a RADIUS server. No WPA2 vulnerabilities have been discovered to date.

WEP - Wireless Security

Wired Equivalent Privacy (WEP)
WEP was the first encryption standard for wireless networks. WEP can be deployed in three strengths: 64, 128, and 256 bit. WEP is based on the RC4 encryption algorithm. As wireless networks gained popularity, a vulnerability in the key scheduling algorithm of RC4 was discovered wherein a subset of the initialization vectors (IVs) used by WEP were determined to be weak. By collecting enough of these weak IVs, an attacker could determine the WEP key and potentially compromise the wireless network. Many vendors issued firmware updates for their wireless equipment that reduced the number of weak IVs that were generated. These updates, coupled with the amount of time it took to gather enough weak IVs to crack the key, greatly reduced the effectiveness of attacks against WEP. Security researchers discovered another way to attack WEP, called chopping. As explained previously, chopping involves taking a WEP packet and removing, or chopping off, the last byte, which breaks the
CRC/ICV. If the last byte is 0, the last four bytes are xor’ed with a specific value to make a valid CRC and then the packet is retransmitted to the network. This attack effectively ended the need for weak IVs to be collected in order to crack WEP. Using chopping methods, only unique IVs needed to be collected.

The amount of time involved in data collection was significantly reduced. Despite these vulnerabilities, WEP is still the most used form of wireless encryption deployed worldwide. These numbers are slightly misleading, though, because the majority of WEP networks are deployed in home WLANs. Corporate and government WLANs rarely use WEP now and have migrated to a more secure form of encryption.

C++ Aptitude Interview Q&A

Note : All the programs are tested under Turbo C++ 3.0, 4.5 and Microsoft VC++ 6.0 compilers.
It is assumed that,
 Programs run under Windows environment,
 The underlying machine is an x86 based system,
 Program is compiled using Turbo C/C++ compiler.
The program output may depend on these assumptions (for example, sizeof(int) == 2 may be assumed).

1) class Sample
{
public:
    int *ptr;
    Sample(int i) { ptr = new int(i); }
    ~Sample() { delete ptr; }
    void PrintVal() { cout << "The value is " << *ptr; }
};
void SomeFunc(Sample x)
{
    cout << "Say i am in someFunc " << endl;
}
int main()
{
    Sample s1 = 10;
    SomeFunc(s1);
    s1.PrintVal();
}
Answer:
Say i am in someFunc
Null pointer assignment(Run-time error)
As the object is passed by value to SomeFunc, the destructor of the object is called when control returns from the function. So when PrintVal is called, it meets up with ptr that has already been freed. The solution is to pass the Sample object by reference to SomeFunc:

void SomeFunc(Sample &x)
{
    cout << "Say i am in someFunc " << endl;
}
because when we pass objects by reference, the object is not destroyed while returning from the function.

2) Which is the parameter that is added to every non-static member function when it is called?
‘this’ pointer

3) class base
{
public:
    int bval;
    base(){ bval=0; }
};
class deri : public base
{
public:
    int dval;
    deri(){ dval=1; }
};
void SomeFunc(base *arr, int size)
{
    for(int i=0; i<size; i++, arr++)
        cout << arr->bval;
    cout << endl;
}
int main()
{
    base BaseArr[5];
    SomeFunc(BaseArr, 5);
    deri DeriArr[5];
    SomeFunc(DeriArr, 5);
}

The function SomeFunc expects two arguments: a pointer to an array of base class objects and the size of the array. The first call passes an array of base objects, so it works correctly and prints the bval of every object. The second call passes a pointer to an array of derived class objects, not an array of base class objects, which is not what the function expects. The derived class pointer is converted to a base class pointer and that address is sent to the function. SomeFunc() knows nothing about this and just treats the pointer as an array of base class objects. So when arr++ is evaluated, the pointer advances by sizeof(base) bytes, while each deri object (holding both bval and dval) occupies at least sizeof(int)+sizeof(int) bytes.

4) class base
{
public:
    void baseFun(){ cout<<"from base"<<endl; }
};
class deri : public base
{
public:
    void baseFun(){ cout<<"from derived"<<endl; }
};
void SomeFunc(base *baseObj)
{
    baseObj->baseFun();
}
int main()
{
    base baseObject;   SomeFunc(&baseObject);
    deri deriObject;   SomeFunc(&deriObject);
}
Answer:
from base
from base
As we have seen in the previous case, SomeFunc expects a pointer to a base class. Since a pointer to a derived class object is passed, it treats the argument only as a base class pointer and the corresponding base function is called.

5) class base
{
public:
    virtual void baseFun(){ cout<<"from base"<<endl; }
};
class deri : public base
{
public:
    void baseFun(){ cout<<"from derived"<<endl; }
};
void SomeFunc(base *baseObj)
{
    baseObj->baseFun();
}
int main()
{
    base baseObject;   SomeFunc(&baseObject);
    deri deriObject;   SomeFunc(&deriObject);
}
Answer:
from base
from derived
Remember that baseFunc is a virtual function. That means that it supports run-time polymorphism. So the function corresponding to the derived class object is called.

void main()
{
    int a, *pa, &ra;
    pa = &a;
    ra = a;
    cout <<"a="<<a <<" *pa="<<*pa <<" ra="<<ra;
}
Answer :
Compiler Error: 'ra',reference must be initialized
Explanation :
Pointers are different from references. One of the main
differences is that the pointers can be both initialized and assigned,
whereas references can only be initialized. So this code issues an error.

const int size = 5;
void print(int *ptr)
{
}
void print(int ptr[size])
{
}
void main()
{
    int a[size] = {1,2,3,4,5};
    int *b = new int(size);
}
Answer :
Compiler Error : function 'void print(int *)' already has a body

Arrays cannot be passed to functions; only pointers (for arrays, the base address)
can be passed. So the parameters int *ptr and int ptr[size] are no different
as function arguments. In other words, both functions have the same signature and
so cannot be overloaded.

class some{
public:
    ~some()
    {
        cout<<"some's destructor"<<endl;
    }
};

void main()
{
    some s;
    s.~some();
}
Answer :
some's destructor
some's destructor
Destructors can be called explicitly. Here 's.~some()' explicitly calls the
destructor of 's'. When main() returns, destructor of s is called again,
hence the result.


class fig2d
{
    int dim1;
    int dim2;

public:
    fig2d() { dim1=5; dim2=6; }

    virtual void operator<<(ostream & rhs);
};

void fig2d::operator<<(ostream &rhs)
{
    rhs << this->dim1 <<" "<< this->dim2 <<" ";
}

/* class fig3d : public fig2d
{
    int dim3;
public:
    fig3d() { dim3=7; }
    virtual void operator<<(ostream &rhs);
};
void fig3d::operator<<(ostream &rhs)
{
    fig2d::operator<<(rhs);
    rhs << this->dim3;
} */

void main()
{
    fig2d obj1;
    // fig3d obj2;

    obj1 << cout;
    // obj2 << cout;
}
Answer :
5 6
In this program, the << operator is overloaded with ostream as the argument.
This enables 'cout' to appear on the right-hand side. Normally 'cout' is used
with a global function, but nothing prevents << from being overloaded as a
member function.
Overloading << as a virtual member function becomes handy when the class in
which it is overloaded is inherited, making it available to be overridden.
This is as opposed to global friend functions, because friendship is not inherited.

class opOverload{
public:
    bool operator==(opOverload temp);
};

bool opOverload::operator==(opOverload temp){
    if(*this == temp ){
        cout<<"The both are same objects\n";
        return true;
    }
    cout<<"The both are different\n";
    return false;
}

void main(){
    opOverload a1, a2;
    a1 == a2;
}

Answer :
Runtime Error: Stack Overflow
Explanation :
Just like normal functions, operator functions can be called recursively. This program illustrates that point: the expression *this == temp inside operator == invokes operator == again, so the function recurses without end until the stack overflows.

class complex{
    double re;
    double im;
public:
    complex() : re(1),im(0.5) {}
    bool operator==(complex &rhs);
    operator int(){}
};

bool complex::operator == (complex &rhs){
    if((this->re == rhs.re) && (this->im == rhs.im))
        return true;
    else
        return false;
}

int main(){
    complex c1;
    cout<< c1;
}

Answer : Garbage value

The programmer wishes to print the complex object using the output
redirection operator (<<), which he has not defined for his class. But instead
of giving an error, the compiler sees the conversion function operator int(),
converts the user-defined object to a standard type, and prints some garbage
value (the conversion function returns nothing).

class complex{
    double re;
    double im;
public:
    complex() : re(0),im(0) {}
    complex(double n) { re=n; im=n; }
    complex(int m,int n) { re=m; im=n; }
    void print() { cout<<re<<" "<<im; }
};

void main(){
    complex c3;
    double i=5;
    c3 = i;
    c3.print();
}

Though no operator= function taking complex, double is defined, the double on the rhs is converted into a temporary object using the single argument constructor taking double and assigned to the lvalue.


Try it Yourself

1) Determine the output of the 'C++' Codelet.
class base
{
public :
    void out()
    { cout<<"base "; }
};
class deri{
public :
    void out()
    { cout<<"deri "; }
};
void main()
{
    deri dp[3];
    base *bp = (base*)dp;
    for (int i=0; i<3; i++)
        (bp++)->out();
}

2) Justify the use of virtual constructors and destructors in C++.

3) Each C++ object possesses 4 member functions (which can be declared by the programmer explicitly, or by the implementation if they are not available). What are those 4 functions?

4) What is wrong with this class declaration?
class something
{
    char *str;
public:
    something() { str = new char[10]; }
    ~something() { delete str; }
};

5) Inheritance is also known as a ________ relationship; containership as a ________ relationship.

6) When is it necessary to use member-wise initialization list (also known as header initialization list) in C++?

7) Which is the only operator in C++ which can be overloaded but NOT inherited.

8) Is there anything wrong with this C++ class declaration?
class temp
{
    int value1;
    mutable int value2;
public :
    void fun(int val) const
    {
        ((temp*) this)->value1 = 10;
        value2 = 10;
    }
};

1. What is a modifier?
A modifier, also called a modifying function, is a member function that changes the value of at least one data member; in other words, an operation that modifies the state of an object. Modifiers are also known as ‘mutators’.

2. What is an accessor?
An accessor is a class operation that does not modify the state of an object. Accessor functions should be declared as const operations.

3. Differentiate between a template class and class template.
Template class:
A generic definition or a parameterized class not instantiated until the client provides the needed information. It’s jargon for plain templates.
Class template:
A class template specifies how individual classes can be constructed much like the way a class specifies how individual objects can be constructed. It’s jargon for plain classes.

4. When does a name clash occur?
A name clash occurs when a name is defined in more than one place. For example, two different class libraries could give two different classes the same name. If you try to use many class libraries at the same time, there is a fair chance that you will be unable to compile or link the program because of name clashes.

5. Define namespace.
It is a feature in C++ to minimize name collisions in the global name space. The namespace keyword assigns a distinct name to a library, which allows other libraries to use the same identifier names without creating name collisions. Furthermore, the compiler uses the namespace signature to differentiate the definitions.

6. What is the use of a ‘using’ declaration?
A using declaration makes it possible to use a name from a namespace without the scope operator.

7. What is an Iterator class?
A class that is used to traverse through the objects maintained by a container class. There are five categories of iterators:
 input iterators,
 output iterators,
 forward iterators,
 bidirectional iterators,
 random access iterators.
An iterator is an entity that gives access to the contents of a container object without violating encapsulation constraints. Access to the contents is granted on a one-at-a-time basis in order. The order can be storage order (as in lists and queues) or some arbitrary order (as in array indices) or according to some ordering relation (as in an ordered binary tree). The iterator is a construct, which provides an interface that, when called, yields either the next element in the container, or some value denoting the fact that there are no more elements to examine. Iterators hide the details of access to and update of the elements of a container class.
The simplest and safest iterators are those that permit read-only access to the contents of a container class. The following code fragment shows how an iterator might appear in code:
cont_iter := new cont_iterator();
x := cont_iter.next();
while x /= none do
    s(x);
    x := cont_iter.next();
end;
In this example, cont_iter is the name of the iterator. It is created on the first line by instantiation of the cont_iterator class, an iterator class defined to iterate over some container class, cont. Successive elements from the container are carried to x. The loop terminates when x is bound to some empty value (here, none). In the middle of the loop, s(x) is an operation on x, the current element from the container. The next element of the container is obtained at the bottom of the loop.

9. List out some of the OODBMS available.
 GEMSTONE/OPAL of Gemstone systems.
 ONTOS of Ontos.
 Objectivity of Objectivity inc.
 Versant of Versant object technology.
 Object store of Object Design.
 ARDENT of ARDENT software.
 POET of POET software.

10. List out some of the object-oriented methodologies.
 Object Oriented Development (OOD) (Booch 1991,1994).
 Object Oriented Analysis and Design (OOA/D) (Coad and Yourdon 1991).
 Object Modelling Techniques (OMT) (Rumbaugh 1991).
 Object Oriented Software Engineering (Objectory) (Jacobson 1992).
 Object Oriented Analysis (OOA) (Shlaer and Mellor 1992).
 The Fusion Method (Coleman 1991).

11. What is an incomplete type?
Incomplete types refer to pointers for which the implementation of the referenced location is not available, or which point to some location whose value is not available for modification.
int *i = (int *)0x400;  // i points to address 400
*i = 0;                 // set the value of the memory location pointed to by i
Incomplete types are otherwise called uninitialized pointers.

12. What is a dangling pointer?
A dangling pointer arises when you use the address of an object after its lifetime is over.
This may occur in situations like returning addresses of the automatic variables from a function or using the address of the memory block after it is freed.

13. Differentiate between the message and method.
Message:
 Objects communicate by sending messages to each other.
 A message is sent to invoke a method.
Method:
 A method provides the response to a message.
 It is an implementation of an operation.

14. What is an adaptor class or Wrapper class?
A class that has no functionality of its own. Its member functions hide the use of a third-party software component, an object with an incompatible interface, or a non-object-oriented implementation.

15. What is a Null object?
It is an object of some class whose purpose is to indicate that a real object of that class does not exist. One common use for a null object is a return value from a member function that is supposed to return an object with some specified properties but cannot find such an object.

16. What is class invariant?
A class invariant is a condition that defines all valid states for an object. It is a logical condition to ensure the correct working of a class. Class invariants must hold when an object is created, and they must be preserved under all operations of the class. In particular all class invariants are both preconditions and post-conditions for all operations or member functions of the class.

17. What do you mean by Stack unwinding?
It is a process during exception handling when the destructor is called for all local objects between the place where the exception was thrown and where it is caught.

18. Define precondition and post-condition to a member function.
A precondition is a condition that must be true on entry to a member function. A class is used correctly if preconditions are never false. An operation is not responsible for doing anything sensible if its precondition fails to hold.
For example, the interface invariants of the stack class say nothing about pushing yet another element onto a stack that is already full. We say that !isfull() is a precondition of the push operation.

A post-condition is a condition that must be true on exit from a member function if the precondition was valid on entry to that function. A class is implemented correctly if post-conditions are never false.
For example, after pushing an element onto the stack, we know that isempty() must necessarily be false. This is a post-condition of the push operation.

19. What are the conditions that have to be met for a condition to be an invariant of the class?
 The condition should hold at the end of every constructor.
 The condition should hold at the end of every mutator(non-const) operation.

20. What are proxy objects?
Objects that stand for other objects are called proxy objects or surrogates.
template<class T>
class Array2D
{
public:
    class Array1D
    {
    public:
        T& operator[] (int index);
        const T& operator[] (int index) const;
    };
    Array2D(int dim1, int dim2);
    Array1D operator[] (int index);
    const Array1D operator[] (int index) const;
};

The following then becomes legal:

Array2D<float> data(10, 20);
cout << data[3][6];
Here data[3] yields an Array1D object and the operator [] invocation on that object yields the float in position(3,6) of the original two dimensional array. Clients of the Array2D class need not be aware of the presence of the Array1D class. Objects of this latter class stand for one-dimensional array objects that, conceptually, do not exist for clients of Array2D. Such clients program as if they were using real, live, two-dimensional arrays. Each Array1D object stands for a one-dimensional array that is absent from a conceptual model used by the clients of Array2D. In the above example, Array1D is a proxy class. Its instances stand for one-dimensional arrays that, conceptually, do not exist.

21. Name some pure object oriented languages.
 Smalltalk,
 Java,
 Eiffel,
 Sather.

22. Name the operators that cannot be overloaded.
sizeof . .* :: ?:

23. What is a node class?
A node class is a class that,
 relies on the base class for services and implementation,
 provides a wider interface to the users than its base class,
 relies primarily on virtual functions in its public interface,
 depends on all its direct and indirect base classes,
 can be understood only in the context of the base class,
 can be used as a base for further derivation,
 can be used to create objects.
A node class is a class that has added new services or functionality beyond the services inherited from its base class.

24. What is an orthogonal base class?
If two base classes have no overlapping methods or data they are said to be independent of, or orthogonal to each other. Orthogonal in the sense means that two classes operate in different dimensions and do not interfere with each other in any way. The same derived class may inherit such classes with no difficulty.

25. What is a container class? What are the types of container classes?
A container class is a class that is used to hold objects in memory or external storage. A container class acts as a generic holder. A container class has a predefined behavior and a well-known interface. A container class is a supporting class whose purpose is to hide the topology used for maintaining the list of objects in memory. When a container class contains a group of mixed objects, the container is called a heterogeneous container; when the container is holding a group of objects that are all the same, the container is called a homogeneous container.

26. What is a protocol class?
An abstract class is a protocol class if:
 it neither contains nor inherits from classes that contain member data, non-virtual functions, or private (or protected) members of any kind.
 it has a non-inline virtual destructor defined with an empty implementation,
 all member functions other than the destructor including inherited functions, are declared pure virtual functions and left undefined.

27. What is a mixin class?
A class that provides some but not all of the implementation for a virtual base class is often called mixin. Derivation done just for the purpose of redefining the virtual functions in the base classes is often called mixin inheritance. Mixin classes typically don't share common bases.

28. What is a concrete class?
A concrete class is used to define a useful object that can be instantiated as an automatic variable on the program stack. The implementation of a concrete class is defined. The concrete class is not intended to be a base class, and no attempt is made to minimize dependency on other classes in the implementation or behavior of the class.

29.What is the handle class?
A handle is a class that maintains a pointer to an object that is programmatically accessible through the public interface of the handle class.
In the case of abstract classes, unless one manipulates the objects of these classes through pointers and references, the benefits of the virtual functions are lost. User code may become dependent on details of implementation classes, because an abstract type cannot be allocated statically or on the stack without its size being known. Using pointers or references implies that the burden of memory management falls on the user. Another limitation is that an abstract class object is of fixed size, whereas classes are used to represent concepts that require varying amounts of storage to implement them.
A popular technique for dealing with these issues is to separate what is used as a single object into two parts: a handle providing the user interface, and a representation holding all or most of the object's state. The connection between the handle and the representation is typically a pointer in the handle. Often, handles hold a bit more data than the representation pointer, but not much more, so the layout of the handle typically remains stable even when the representation changes. Handles are also small enough to move around freely, so the user needn't deal with pointers and references directly.

30. What is an action class?
The simplest and most obvious way to specify an action in C++ is to write a function. However, if the action has to be delayed, has to be transmitted 'elsewhere' before being performed, requires its own data, or has to be combined with other actions, then it often becomes attractive to provide the action in the form of a class that can execute the desired action and provide other services as well. Manipulators used with iostreams are an obvious example.
A common form of action class is a simple class containing just one virtual function.
class Action {
public:
    virtual int do_it(int) = 0;
    virtual ~Action() { }
};
Given this, we can write code (say, a scheduler member function) that can store actions for later execution without using pointers to functions, without knowing anything about the objects involved, and without even knowing the names of the operations it invokes. For example:
// File, response_box, and current_operation are assumed to be defined elsewhere.
class write_file : public Action {
    File& f;
public:
    int do_it(int) override { return fwrite(f).succeed(); }
};

class error_message : public Action {
    std::string message;
public:
    int do_it(int) override {
        response_box db(message.c_str(), "Continue", "Cancel", "Retry");
        switch (db.getresponse()) {
        case 0: return 0;
        case 1: abort();
        case 2: current_operation.redo(); return 1;
        }
        return 0;
    }
};

A user of the Action class will be completely isolated from any knowledge of derived classes such as write_file and error_message.
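A runnable version of the same idea, with a hypothetical Add action standing in for write_file and error_message:

```cpp
#include <cassert>
#include <vector>

struct Action {
    virtual ~Action() { }
    virtual int do_it(int) = 0;
};

// A trivial concrete action (illustrative only).
struct Add : Action {
    int amount;
    explicit Add(int a) : amount(a) { }
    int do_it(int x) override { return x + amount; }
};

// Code holding Action pointers can store and execute operations
// without knowing the concrete types or the names of the operations.
int run_all(const std::vector<Action*>& actions, int start) {
    for (Action* a : actions)
        start = a->do_it(start);
    return start;
}
```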

31. When can you tell that a memory leak will occur?
A memory leak occurs when a program loses the ability to free a block of dynamically allocated memory.
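A minimal illustration: overwriting the only pointer to a heap block loses the ability to free it. The second function shows one common fix using std::unique_ptr.

```cpp
#include <cassert>
#include <memory>

// Overwriting the sole pointer to a heap block leaks it.
void leaky() {
    int* p = new int(42);
    p = new int(7);   // the first allocation is now unreachable: leaked
    delete p;         // frees only the second block
}

// One fix: let std::unique_ptr release the old block automatically.
int no_leak() {
    std::unique_ptr<int> p(new int(42));
    p.reset(new int(7));   // the old block is freed here
    return *p;
}
```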

32. What is a parameterized type?
A template is a parameterized construct or type containing generic code that can use or manipulate any type. It is called parameterized because an actual type is a parameter of the code body. Polymorphism may be achieved through parameterized types. This type of polymorphism is called parametric polymorphism: the mechanism by which the same code is used on different types passed as parameters.
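For example, a single function template can find the largest element of a vector of any type supporting operator< (the function name is illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// The element type T is a parameter of the code body; the same code
// works for ints, strings, or any type with operator<.
template <typename T>
T largest(const std::vector<T>& v) {
    T best = v.front();   // assumes a non-empty vector
    for (const T& x : v)
        if (best < x) best = x;
    return best;
}
```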

33. Differentiate between a deep copy and a shallow copy?
Deep copy involves using the contents of one object to create another instance of the same class. In a deep copy, the two objects may contain the same information, but the target object has its own buffers and resources, so the destruction of either object does not affect the other. An overloaded assignment operator would typically create a deep copy.
Shallow copy involves copying the contents of one object into another instance of the same class, creating a mirror image. Because references and pointers are copied directly, the two objects share the same externally held contents, so changes made through one object affect the other unpredictably.
A default copy constructor simply copies the data values member by member; this is a shallow copy. If the object is a simple class made up of built-in types with no pointers, this is acceptable. With a shallow copy, only the addresses held in pointer members are copied, not the values those addresses point to, so a function working on the copy can inadvertently alter the original object's data. When the function goes out of scope, the copy of the object, with all its data, is popped off the stack.
If the object has any pointer members, a deep copy should be performed: memory is allocated for the new object in the free store, and the elements pointed to are copied. A deep copy is also used for objects that are returned from a function.
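A sketch of a class that needs deep copying (Buffer is an illustrative name): the compiler-generated shallow copy would leave two objects sharing, and eventually double-deleting, the same block.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// A class owning a raw buffer must deep-copy it; member-by-member
// (shallow) copying would make two objects share one heap block.
class Buffer {
    char* data;
    std::size_t n;
public:
    explicit Buffer(const char* s) : n(std::strlen(s) + 1) {
        data = new char[n];
        std::memcpy(data, s, n);
    }
    Buffer(const Buffer& other) : n(other.n) {   // deep copy constructor
        data = new char[n];
        std::memcpy(data, other.data, n);
    }
    Buffer& operator=(const Buffer& other) {     // deep assignment
        if (this != &other) {
            char* fresh = new char[other.n];
            std::memcpy(fresh, other.data, other.n);
            delete[] data;
            data = fresh;
            n = other.n;
        }
        return *this;
    }
    ~Buffer() { delete[] data; }
    char* raw() { return data; }
    const char* c_str() const { return data; }
};
```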

34. What is an opaque pointer?
A pointer is said to be opaque if the definition of the type to which it points is not included in the current translation unit. A translation unit is the result of merging an implementation file with all its headers.
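This is the basis of the Pimpl idiom. The sketch below compresses the two-file arrangement into one listing (Engine and EngineImpl are illustrative names); the comments mark which part would live in the header and which in the implementation file.

```cpp
#include <cassert>

// --- what a client's header would contain: only a declaration, so
// EngineImpl is opaque in the client's translation unit.
struct EngineImpl;        // incomplete type

class Engine {
    EngineImpl* impl;     // pointer to an incomplete type is allowed
public:
    Engine();
    ~Engine();
    int rpm() const;
};

// --- what the implementation file would contain: the full definition.
struct EngineImpl { int rpm = 3000; };

Engine::Engine() : impl(new EngineImpl) { }
Engine::~Engine() { delete impl; }
int Engine::rpm() const { return impl->rpm; }
```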

35. What is a smart pointer?
A smart pointer is an object that acts, looks and feels like a normal pointer but offers more functionality. In C++, smart pointers are implemented as template classes that encapsulate a pointer and override standard pointer operators. They have a number of advantages over regular pointers. They are guaranteed to be initialized as either null pointers or pointers to a heap object. Indirection through a null pointer is checked. No delete is ever necessary. Objects are automatically freed when the last pointer to them has gone away. One significant problem with these smart pointers is that unlike regular pointers, they don't respect inheritance. Smart pointers are unattractive for polymorphic code. Given below is an example for the implementation of smart pointers.
template <class X>
class smart_pointer {
public:
    smart_pointer();                    // makes a null pointer
    smart_pointer(const X& x);          // makes a pointer to a copy of x

    X& operator*();
    const X& operator*() const;
    X* operator->() const;

    smart_pointer(const smart_pointer&);
    const smart_pointer& operator=(const smart_pointer&);
    ~smart_pointer();
private:
    X* ptr;
};
This class implements a smart pointer to an object of type X; the object itself is located on the heap. Here is how it might be used:
smart_pointer<employee> p = employee("Harris", 1333);
Because of the overloaded operators, p will behave like a regular pointer.

36. What is reflexive association?
The 'is-a' relationship is called a reflexive association because it permits classes to bear the is-a association not only with their super-classes but also with themselves. It differs from 'specializes-from', which is usually used to describe the association between a super-class and a sub-class. For example:
Printer is-a printer.

37. What is slicing?
Slicing means that the data added by a subclass is discarded when an object of the subclass is passed to, or returned by value from, a function expecting a base-class object.
Consider the following class declaration:
class base {
public:
    base& operator=(const base&);
    base(const base&);
};
class derived : public base { /* additional members */ };

void fun(derived m)
{
    base e = m;   // only the base part of m is copied
}
As the base copy functions don't know anything about the derived class, only the base part of the derived object is copied. This is commonly referred to as slicing. One reason to pass objects of classes in a hierarchy by reference or pointer is to avoid slicing; other reasons are to preserve polymorphic behavior and to gain efficiency.
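A runnable demonstration (B and D are illustrative names): passing by value slices the object and loses polymorphic behavior, while passing by reference preserves both.

```cpp
#include <cassert>
#include <string>

struct B {
    virtual ~B() { }
    virtual std::string id() const { return "base"; }
};
struct D : B {
    std::string extra = "payload";   // discarded when sliced
    std::string id() const override { return "derived"; }
};

// Pass by value: the D part is sliced off; virtual dispatch sees a B.
std::string by_value(B b) { return b.id(); }

// Pass by reference: no copy is made, so polymorphism is preserved.
std::string by_reference(const B& b) { return b.id(); }
```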

38. What is name mangling?
Name mangling is the process through which your C++ compiler gives each function in your program a unique name. In C++, all programs have at least a few functions with the same name. Name mangling is a concession to the fact that linkers insist on all function names being unique.
In general, member names are made unique by concatenating the name of the member with that of the class e.g. given the declaration:
class Bar {
public:
    int ival;
};
ival becomes something like:
ival__3Bar   // a possible member name mangling
Consider this derivation:
class Foo : public Bar {
public:
    int ival;
};
The internal representation of a Foo object is the concatenation of its base and derived class members.
// Pseudo C++ code
// Internal representation of Foo
class Foo {
public:
    int ival__3Bar;
    int ival__3Foo;
};
Unambiguous access to either ival member is achieved through name mangling. Member functions, because they can be overloaded, require more extensive mangling to provide each with a unique name: the argument list is encoded into the mangled name, so two overloaded instances that would otherwise share a name are kept distinct.

39. What are proxy objects?
Objects that point to other objects are called proxy objects or surrogates. A proxy is an object that provides the same interface as its server object but does not implement the functionality itself. During a method invocation, it routes data to the true server object and passes the result back to the calling object.

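A minimal sketch of a proxy (the image classes are illustrative): the proxy exposes the same interface, creates the real object lazily, and forwards calls to it.

```cpp
#include <cassert>
#include <memory>

struct Image {
    virtual ~Image() { }
    virtual int width() const = 0;
};

// The "real" server object; imagine an expensive load in its constructor.
struct RealImage : Image {
    int width() const override { return 640; }
};

// The proxy offers the same interface, defers creating the real object
// until first use, and forwards each call, returning the result.
class ImageProxy : public Image {
    mutable std::unique_ptr<RealImage> real;
public:
    int width() const override {
        if (!real) real.reset(new RealImage);   // lazy creation
        return real->width();
    }
};
```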
40. Differentiate between declaration and definition in C++.
A declaration introduces a name into the program; a definition provides a unique description of an entity (e.g. a type, instance, or function). A declaration can be repeated in a given scope; it simply introduces the name into that scope. There must be exactly one definition of every object, function, or class used in a C++ program.
A declaration is a definition unless:
 it declares a function without specifying its body,
 it contains an extern specifier and no initializer or function body,
 it is the declaration of a static class data member without a class definition,
 it is a class name declaration,
 it is a typedef declaration.
A definition is a declaration unless:
 it defines a static class data member,
 it defines a non-inline member function.

41. What is cloning?
An object can carry out copying in two ways i.e. it can set itself to be a copy of another object, or it can return a copy of itself. The latter process is called cloning.
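Cloning is usually implemented as a virtual function that returns a copy of the object through its base-class interface (Shape/Triangle below are illustrative names):

```cpp
#include <cassert>
#include <memory>

struct Shape {
    virtual ~Shape() { }
    virtual Shape* clone() const = 0;   // "return a copy of myself"
    virtual int sides() const = 0;
};

struct Triangle : Shape {
    // The copy constructor does the copying; clone() exposes it
    // polymorphically, so callers need not know the concrete type.
    Shape* clone() const override { return new Triangle(*this); }
    int sides() const override { return 3; }
};
```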

42. Describe the main characteristics of static functions.
The main characteristics of static functions include,
 It is without a this pointer.
 It can't directly access the non-static members of its class.
 It can't be declared const, volatile or virtual.
 It doesn't need to be invoked through an object of its class, although for convenience, it may.
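The characteristics above can be seen in a small sketch (Counter is an illustrative name): the static function has no this pointer, touches only static data, and can be invoked without any object.

```cpp
#include <cassert>

class Counter {
    static int count;                      // shared across all instances
    int id;
public:
    Counter() : id(++count) { }
    static int total() { return count; }   // no this pointer; cannot read id
    int my_id() const { return id; }
};

int Counter::count = 0;                    // definition of the static member
```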

43. Will the inline function be compiled as the inline function always? Justify.
An inline function is a request and not a command. Hence it won't be compiled as an inline function always.
Inline-expansion could fail if the inline function contains loops, the address of an inline function is used, or an inline function is called in a complex expression. The rules for inlining are compiler dependent.

44. Define a way other than using the keyword inline to make a function inline.
The function must be defined inside the class.
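For example, a member function defined inside its class body is implicitly a request for inlining, with no inline keyword needed (Point2D is an illustrative name):

```cpp
#include <cassert>

class Point2D {
    int x, y;
public:
    Point2D(int x_, int y_) : x(x_), y(y_) { }
    int sum() const { return x + y; }   // defined in-class: implicitly
                                        // treated as an inline request
};
```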

45. How can a '::' operator be used as unary operator?
The scope resolution operator can be used to refer to members of the global namespace. Because the global namespace doesn't have a name, the notation ::member-name refers to a member of the global namespace. This can be useful for referring to members of the global namespace whose names have been hidden by names declared in a nested local scope. Unless we specify in which namespace to search for a declaration, the compiler simply searches the current scope, and any scopes in which the current scope is nested, to find the declaration for the name.
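A short example of unary :: recovering a hidden global name (the variable names are illustrative):

```cpp
#include <cassert>

int value = 10;              // member of the global namespace

int shadowed() {
    int value = 99;          // hides the global name in this scope
    return ::value + value;  // unary :: selects the global: 10 + 99
}
```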

46. What is placement new?
When you want to call a constructor directly, you use placement new. Sometimes you have some raw memory that's already been allocated, and you need to construct an object in the memory you have. Operator new's special version, placement new, allows you to do it.
class Widget {
public:
    Widget(int widgetsize);
    // ...
};

Widget* construct_widget_in_buffer(void* buffer, int widgetsize)
{
    return new (buffer) Widget(widgetsize);
}
This function returns a pointer to a Widget object that's constructed within the buffer passed to the function. Such a function might be useful for applications using shared memory or memory-mapped I/O, because objects in such applications must be placed at specific addresses or in memory allocated by special routines.
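A self-contained sketch of the full lifecycle (Gadget is an illustrative name): construct in a local buffer with placement new, then destroy with an explicit destructor call rather than delete, since the storage was not obtained from operator new.

```cpp
#include <cassert>
#include <new>       // declares the placement form of operator new

struct Gadget {
    int size;
    explicit Gadget(int s) : size(s) { }
};

int construct_in_buffer() {
    // Raw storage with suitable size and alignment for a Gadget.
    alignas(Gadget) unsigned char buffer[sizeof(Gadget)];

    Gadget* g = new (buffer) Gadget(7);   // construct in existing storage
    int s = g->size;

    g->~Gadget();   // explicit destructor call; no delete, since the
                    // memory was not allocated by operator new
    return s;
}
```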