TCP/IP Data Communications Model

A Data Communications Model

To discuss computer networking, it is necessary to use terms that have special meaning. Even other computer professionals may not be familiar with all the terms in the networking alphabet soup. As is always the case, English and computer-speak are not equivalent (or even necessarily compatible) languages. Although descriptions and examples should make the meaning of the networking jargon more apparent, sometimes terms are ambiguous. A common frame of reference is necessary for understanding data communications terminology.

An architectural model developed by the International Organization for Standardization (ISO) is frequently used to describe the structure and function of data communications protocols. This architectural model, which is called the Open Systems Interconnection (OSI) Reference Model, provides a common reference for discussing communications. The terms defined by this model are well understood and widely used in the data communications community - so widely used, in fact, that it is difficult to discuss data communications without using OSI's terminology.

The OSI Reference Model contains seven layers that define the functions of data communications protocols. Each layer of the OSI model represents a function performed when data is transferred between cooperating applications across an intervening network. Figure 1.1 identifies each layer by name and provides a short functional description for it. As the figure shows, the protocols are like a pile of building blocks stacked one upon another. Because of this appearance, the structure is often called a stack or protocol stack.

Figure 1.1: The OSI Reference Model


A layer does not define a single protocol - it defines a data communications function that may be performed by any number of protocols. Therefore, each layer may contain multiple protocols, each providing a service suitable to the function of that layer. For example, a file transfer protocol and an electronic mail protocol both provide user services, and both are part of the Application Layer.

Every protocol communicates with its peer. A peer is an implementation of the same protocol in the equivalent layer on a remote system; i.e., the local file transfer protocol is the peer of a remote file transfer protocol. Peer-level communications must be standardized for successful communications to take place. In the abstract, each protocol is concerned only with communicating to its peer; it does not care about the layer above or below it.

However, there must also be agreement on how to pass data between the layers on a single computer, because every layer is involved in sending data from a local application to an equivalent remote application. The upper layers rely on the lower layers to transfer the data over the underlying network. Data is passed down the stack from one layer to the next, until it is transmitted over the network by the Physical Layer protocols. At the remote end, the data is passed up the stack to the receiving application. The individual layers do not need to know how the layers above and below them function; they only need to know how to pass data to them. Isolating network communications functions in different layers minimizes the impact of technological change on the entire protocol suite. New applications can be added without changing the physical network, and new network hardware can be installed without rewriting the application software.
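The layer-to-layer hand-off described above can be sketched as a toy shell script, purely as an illustration (the "headers" here are placeholder strings, not real protocol headers):

```shell
# Toy sketch of encapsulation: each layer prepends its own header to the
# data handed down from the layer above before passing it further down.
app_data="GET /index.html"        # Application Layer data
segment="TCPHDR|$app_data"        # Transport Layer wraps it in a segment
packet="IPHDR|$segment"           # Network Layer wraps it in a packet
frame="ETHHDR|$packet"            # Data Link Layer wraps it in a frame
echo "$frame"                     # the Physical Layer transmits the result
```

At the receiving end the process runs in reverse: each layer strips its own header and passes the remaining payload up the stack.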

Although the OSI model is useful, the TCP/IP protocols don't match its structure exactly. Therefore, in our discussions of TCP/IP, we use the layers of the OSI model in the following way:

Application Layer

The Application Layer is the level of the protocol hierarchy where user-accessed network processes reside. In this text, a TCP/IP application is any network process that occurs above the Transport Layer. This includes all of the processes that users directly interact with, as well as other processes at this level that users are not necessarily aware of.

Presentation Layer

For cooperating applications to exchange data, they must agree about how data is represented. In OSI, this layer provides standard data presentation routines. This function is frequently handled within the applications in TCP/IP, though increasingly TCP/IP protocols such as XDR and MIME perform this function.

Session Layer

As with the Presentation Layer, the Session Layer is not identifiable as a separate layer in the TCP/IP protocol hierarchy. The OSI Session Layer manages the sessions (connection) between cooperating applications. In TCP/IP, this function largely occurs in the Transport Layer, and the term "session" is not used. For TCP/IP, the terms "socket" and "port" are used to describe the path over which cooperating applications communicate.

Transport Layer

Much of our discussion of TCP/IP is directed to the protocols that occur in the Transport Layer. The Transport Layer in the OSI reference model guarantees that the receiver gets the data exactly as it was sent. In TCP/IP this function is performed by the Transmission Control Protocol (TCP). However, TCP/IP offers a second Transport Layer service, User Datagram Protocol (UDP), that does not perform the end-to-end reliability checks.

Network Layer

The Network Layer manages connections across the network and isolates the upper layer protocols from the details of the underlying network. The Internet Protocol (IP), which isolates the upper layers from the underlying network and handles the addressing and delivery of data, is usually described as TCP/IP's Network Layer.

Data Link Layer

The reliable delivery of data across the underlying physical network is handled by the Data Link Layer. TCP/IP rarely creates protocols in the Data Link Layer. Most RFCs that relate to the Data Link Layer discuss how IP can make use of existing data link protocols.

Physical Layer

The Physical Layer defines the characteristics of the hardware needed to carry the data transmission signal. Features such as voltage levels, and the number and location of interface pins, are defined in this layer. Examples of standards at the Physical Layer are interface connectors such as RS232C and V.35, and standards for local area network wiring such as IEEE 802.3. TCP/IP does not define physical standards - it makes use of existing standards.

The terminology of the OSI reference model helps us describe TCP/IP, but to fully understand it, we must use an architectural model that more closely matches the structure of TCP/IP. The next section introduces the protocol model we'll use to describe TCP/IP.

How to Backup VMware ESX Servers

The subject of backing up ESX hosts for disaster recovery comes up from time to time, but not nearly as often as backing up the virtual machines. To be specific, I am talking about backing up the ESX Service Console. Honestly, reinstalling ESX takes so little time that there is really no need to keep a full system backup for recovery. There is an advantage, however, to saving key configuration files and folders to quickly re-apply after a re-installation. This can be done without installing a backup agent on the ESX Service Console.

This post provides information on what ESX Service Console files and directories to backup, how to use the tar command to create a backup file, and then how to restore from the backup file after a new installation. The material comes from one of the VMware Authorized Consultant (VAC) toolkit documents that I often use for customer documentation deliverables. To give credit where credit is due, the author of the document is listed as “VMware PSO - Practice Development”.

The rest of this post is copied from the VAC toolkit document except for a few format changes.

VMware ESX Server Host Backups

Backing up the VMware ESX Server host is not a recommended practice since a typical ESX build takes minutes from start to finish. Since all critical data is stored on the SAN, it is not necessary to backup the Service Console.

In the event that the VMware ESX Server host has a large amount of customization, backups may be conducted of the files and directories below. In most environments, however, changes to the default installation should not be extensive enough to warrant such backups.

ESX Files and Directories to Back Up

File            Description
/etc/passwd     The password file containing the local users for the VMware ESX Server host service console.
/etc/shadow     The shadow password file containing local users and encrypted passwords for the VMware ESX Server host service console.
/etc/group      The group file containing local security groups for the VMware ESX Server host service console.
/etc/grub.conf  The boot information for the grub boot loader.
/etc/vmware     The configuration files for the VMware ESX host.
/boot           The boot partition for the VMware ESX host. These files are typically unchanged from the defaults.
/home/          Any user information stored in the home directories on the local machine.

To perform the backup, a file can be generated using the following command:

# tar -cvf esx1-backup.datestamp.tar /etc/passwd /etc/shadow /etc/group /etc/grub.conf /etc/pam.d /etc/vmware /boot/ /home/
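The same backup idea can be tried outside an ESX host with standard tar. This is a hedged sketch: the staging directory under /tmp and the esx1-style archive name are illustrative only.

```shell
# Build a tiny stand-in for the files to back up (illustrative paths).
STAMP=$(date +%Y%m%d)
mkdir -p /tmp/esxdemo/etc
echo "root:x:0:0:root:/root:/bin/bash" > /tmp/esxdemo/etc/passwd
# Create the dated archive, then list it to confirm what was captured.
tar -cf "/tmp/esx1-backup.$STAMP.tar" -C /tmp/esxdemo etc/passwd
tar -tf "/tmp/esx1-backup.$STAMP.tar"
```

Listing the archive with tar -tf before you need it is a cheap way to confirm the backup actually contains the files you expect.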

VMware ESX Server Host Restore

Normally, VMware ESX Server should be reinstalled and connected to the shared storage. If the above steps were conducted, complete restoration can be performed through the following steps:

  1. Re-install ESX with the same partition configuration as the original host.
  2. Use SFTP to copy the backup file back onto the new host.
  3. Remove the /etc/vmware and /boot directories by typing the following commands:

    # cd /

    # rm -Rf /etc/vmware

    # rm -Rf /boot

  4. Restore the backup set on the new ESX host. Be sure to overwrite existing files on restore! For example, from the root directory you can issue the following command to restore from the original tarball:


    # tar -xvf esx1-backup.datestamp.tar

  5. Reboot.
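The overwrite-on-restore behavior in step 4 can be tried safely against a scratch directory instead of the real root; everything below is an illustrative sketch.

```shell
# Simulate the original host: one config file captured in a backup tarball.
mkdir -p /tmp/restoredemo/etc
echo "original" > /tmp/restoredemo/etc/motd
tar -cf /tmp/restoredemo-backup.tar -C /tmp/restoredemo etc
# Simulate a fresh reinstall clobbering the file with default contents.
echo "reinstalled" > /tmp/restoredemo/etc/motd
# GNU tar overwrites existing files on extract by default, restoring the
# saved configuration over the freshly installed defaults.
tar -xf /tmp/restoredemo-backup.tar -C /tmp/restoredemo
cat /tmp/restoredemo/etc/motd
```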

ESX Shell Script VM Creation Utilizing Cloning

##### VM Creation Script Utilizing Cloning ####################
# Purpose:
# This script will create a VM utilizing the cloning option of the
# vmkfstools command tool
# The New Virtual Machine Configuration will be set as follows
# Virtual Machine Name = ScriptedCloneVM
# Location of Virtual Machine = /VMFS/volumes/storage1/ScriptedVM
# Virtual Machine Type = "Microsoft Windows 2003 Standard"
# Virtual Machine Memory Allocation = 256 meg
# Custom Variable Section for Modification
# NVM is the name of the virtual machine. No spaces allowed in the name
# NVMDIR is the directory which holds all the VM files
# NVMOS specifies the VM Operating System
### Default Variable settings - change this to your preferences
NVM="ScriptedCloneVM" # Name of Virtual Machine
NVMDIR="ScriptedCloneVM" # Specify only the folder name to be created; NOT the complete path
NVMOS="winnetstandard" # Type of OS for Virtual Machine
VMMEMSIZE="256" # Default Memory Size
### End Variable Declaration
mkdir /vmfs/volumes/storage1/$NVMDIR # Creates directory
exec 6>&1 # Sets up write to file
exec 1>/vmfs/volumes/storage1/$NVMDIR/$NVM.vmx # Open file
# write the configuration
echo config.version = '"'6'"' # For ESX 3.x the value is 8
echo virtualHW.version = '"'3'"' # For ESX 3.x the value is 4
echo memsize = '"'$VMMEMSIZE'"'
echo floppy0.present = '"'TRUE'"' # setup VM with floppy
echo displayName = '"'$NVM'"' # name of virtual machine
echo guestOS = '"'$NVMOS'"'
echo ide0:0.present = '"'TRUE'"'
echo ide0:0.deviceType = '"'cdrom-raw'"'
echo ide0:0.startConnected = '"'false'"' # CD-ROM starts disconnected
echo floppy0.startConnected = '"'FALSE'"'
echo floppy0.fileName = '"'/dev/fd0'"'
echo Ethernet0.present = '"'TRUE'"'
echo Ethernet0.networkName = '"'VM Network'"' # Default network
echo Ethernet0.addressType = '"'vpx'"'
echo scsi0.present = '"'true'"'
echo scsi0.sharedBus = '"'none'"'
echo scsi0.virtualDev = '"'lsilogic'"'
echo scsi0:0.present = '"'true'"' # Virtual Disk Settings
echo scsi0:0.fileName = '"'$NVM.vmdk'"'
echo scsi0:0.deviceType = '"'scsi-hardDisk'"'
# close file
exec 1>&-
# make stdout a copy of FD 6 (reset stdout), and close FD6
exec 1>&6
exec 6>&-
# Change permissions on the file so it can be executed by anyone
chmod 755 /vmfs/volumes/storage1/$NVMDIR/$NVM.vmx
#Clone existing Template VM's VMDK into current directory
cd /vmfs/volumes/storage1/$NVMDIR #change to the VM dir
vmkfstools -i /vmfs/volumes/storage1/ScriptedVM/ScriptedVM.vmdk $NVM.vmdk
#Register VM
vmware-cmd -s register /vmfs/volumes/storage1/$NVMDIR/$NVM.vmx

CreateScheduledTask on VMware ESX

ManagedObjectReference MgdObjRef_VM =
    _service.FindByInventoryPath(_sic.SearchIndex(), pathVM);
MethodActionArgument mActArgument = new MethodActionArgument();
mActArgument.Value = MgdObjRef_VM;
MethodAction mAction = new MethodAction();
mAction.Argument = new MethodActionArgument[] { mActArgument };
mAction.Name = "MigrateVM";
DailyTaskScheduler dtScheduler = new DailyTaskScheduler();
dtScheduler.Hour = 12;
dtScheduler.Minute = 0;
ScheduledTaskSpec tSpec = new ScheduledTaskSpec();
tSpec.Action = mAction;
tSpec.Scheduler = dtScheduler;
tSpec.Enabled = true;
tSpec.Name = "Migrate virtual machine";
tSpec.Description = "Migrate virtual machine at noon";
tSpec.Notification = "";

PowerOff VM_Task Script for ESX

ManagedObjectReference MgdObjRef_VM =
    _service.FindByInventoryPath(_sic.SearchIndex(), pathVM);
ManagedObjectReference MgdObjRef_Host =
    _service.FindByInventoryPath(_sic.SearchIndex(), pathHost);
ManagedObjectReference MgdObjRef_Task =
    _service.PowerOffVM(MgdObjRef_VM, MgdObjRef_Host);

C# Script for Changing the Priority of a VM on ESX

ViewContents vc = vma_.GetContents(vm);
Change change = new Change(); = "hardware/cpu/controls/shares";
change.val = "high";
change.op = ChangeOp.edit;
change.valSpecified = true;
ChangeReqList changeList = new ChangeReqList();
ChangeReq changeReq = new ChangeReq();
changeReq.handle = vc.handle;
changeReq.change = new Change[] { change };
ChangeReq[] changeReqs = new ChangeReq[] { changeReq };
changeList.req = changeReqs;
UpdateList updateList = vma_.PutUpdates(changeList);

C# Script for Migrating a VM via VMotion on VMware ESX

string handleHost = vma_.ResolvePath(pathHost);
string handleVM = vma_.ResolvePath(pathVM);
ViewContents contentsXML = vma_.MigrateVM(handleVM, handleHost
    /* remaining arguments elided in the original */);

C# Script for Obtaining Information with ResolvePath and GetContents on VMware ESX

string path = "/vm";
string handle = vma_.ResolvePath(path);
ViewContents contentsXML = vma_.GetContents(handle);
Container objContainer = (Container) contentsXML.body;

VB.NET Script for Implementing ICertificatePolicy

Imports System.Net
Imports System.Security.Cryptography.X509Certificates
Public Class CertPolicy
    Implements ICertificatePolicy
    Public Function CheckValidationResult(ByVal _
        svcPnt As ServicePoint, ByVal cert As X509Certificate, _
        ByVal req As WebRequest, ByVal certProblem As Integer) _
        As Boolean Implements ICertificatePolicy.CheckValidationResult
        Return True
    End Function
End Class

How to Create a VM to use with ESX managed by Altiris

Creating a New Virtual Machine to Use with an ESX Server Managed by Altiris

#Scripting VMware Power Tools: Automating Virtual Infrastructure
#Creates a new Virtual Machine for use with Altiris
#####USER MODIFICATION################
#VMNAME is the name of the new virtual machine
#VMOS specifies which Operating System the virtual machine will have
#DESTVMFS is the path to the VMFS partition that holds the VMDK file
#VMDSIZE is the size of the Virtual Disk File being created, e.g. (500mb)
DESTVMFS="vmhba0:6:0:1" # Must use the vmhba path
$LOG -l:1 -ss:"Creating VMX Configuration File"
mkdir /home/vmware/$VMNAME
exec 6>&1
exec 1>/home/vmware/$VMNAME/$VMNAME.vmx
# write the configuration file
echo '#!/usr/bin/vmware' # quoted so the line is written, not treated as a comment
echo config.version = '"'6'"'
echo virtualHW.version = '"'3'"'
echo memsize = '"'$VMMEMSIZE'"'
echo floppy0.present = '"'TRUE'"'
echo usb.present = '"'FALSE'"'
echo displayName = '"'$VMNAME'"'
echo guestOS = '"'$VMOS'"'
echo suspend.Directory = '"'/vmfs/vmhba0:0:0:5/'"'
echo checkpoint.cptConfigName = '"'$VMNAME'"'
echo priority.grabbed = '"'normal'"'
echo priority.ungrabbed = '"'normal'"'
echo ide1:0.present = '"'TRUE'"'
echo ide1:0.fileName = '"'auto detect'"'
echo ide1:0.deviceType = '"'cdrom-raw'"'
echo ide1:0.startConnected = '"'FALSE'"'
echo floppy0.startConnected = '"'FALSE'"'
echo floppy0.fileName = '"'/dev/fd0'"'
echo Ethernet0.present = '"'TRUE'"'
echo Ethernet0.connectionType = '"'monitor_dev'"'
echo Ethernet0.networkName = '"'Network0'"'
echo draw = '"'gdi'"'
echo scsi0.present = '"'TRUE'"'
echo scsi0:1.present = '"'TRUE'"'
echo scsi0:1.fileName = '"'vmhba0:0:0:5:$VMNAME.vmdk'"'
echo scsi0:1.writeThrough = '"'TRUE'"'
echo scsi0.virtualDev = '"'vmxlsilogic'"'
# close file
exec 1>&-
# make stdout a copy of FD 6 (reset stdout), and close FD6
exec 1>&6
exec 6>&-
$LOG -l:1 -ss:"VMX Configuration File Created Successfully"
#Change the file permissions
chmod 755 /home/vmware/$VMNAME/$VMNAME.vmx
#Create the Virtual Disk
$LOG -l:1 -ss:"Creating Virtual Disk"
vmkfstools -c $VMDSIZE vmhba0:0:0:5:$VMNAME.vmdk
$LOG -l:1 -ss:"Virtual Disk Created Successfully"
#Register the new VM
$LOG -l:1 -ss:"Registering VMX Configuration"
#Registering .vmx Configuration"
vmware-cmd -s register /home/vmware/$VMNAME/$VMNAME.vmx
$LOG -l:1 -ss:"VMX Initialization Completed Successfully"
#Starting the Virtual Machine
$LOG -l:1 -ss:"Starting the Virtual Machine"
vmware-cmd /home/vmware/$VMNAME/$VMNAME.vmx start
$LOG -l:1 -ss:"Virtual Machine Started"
$LOG -l:1 -ss:"Passing control to Altiris for PXE boot and install of VM"

How to Dynamically Create Virtual Machines on VMware ESX 3.5

This illustrates how to create dynamic VM images:

Now that we have looked at what makes up the vmx file, let's generate some
scripts to dynamically create virtual machines. First, we'll take a script and
modify it so we can create a virtual machine that will use a golden image as
its base. We'll then make a couple of changes so we can take advantage of
Altiris in the VM creation. We will then modify the script so that a virtual
machine will be created, and then start the VM with the installation CD
mounted to begin the installation.

Code Listing 5.9 shows a script that uses a golden image disk file. A golden
image disk file is a fully loaded and patched virtual machine disk file that
has had sysprep run on it so it can be cloned.

Please make sure you look through these scripts and make any changes
needed to match your environment. Pay attention to the vmhba path
and double-check these values with the values in your own environment.
Code Listing 5.9 Using a Golden Image Disk File to Dynamically Create a
Virtual Machine

#Scripting VMware Power Tools: Automating Virtual Infrastructure
#Dynamic Creation of a new Virtual Machine using a Golden Image
#Stephen Beaver
#####USER MODIFICATION################
#VMNAME is the name of the new virtual machine
#VMOS specifies which Operating System the virtual machine will have
#GLDIMAGE is the path to the "Golden Image" VMDK file
#DESTVMFS is the path to the VMFS partition that holds the VMDK file
echo "Start of Logging" > $LOG
echo "Importing Golden Image Disk File VMDK" >> $LOG
vmkfstools -i $GLDIMAGE $DESTVMFS:$1.vmdk
echo "Creating VMX Configuration File" >> $LOG
mkdir /home/vmware/$1
exec 6>&1
exec 1>/home/vmware/$1/$1.vmx
# write the configuration file
echo '#!/usr/bin/vmware' # quoted so the line is written, not treated as a comment
echo config.version = '"'6'"'
echo virtualHW.version = '"'3'"'
echo memsize = '"'$VMMEMSIZE'"'
echo floppy0.present = '"'TRUE'"'
echo usb.present = '"'FALSE'"'
echo displayName = '"'$1'"'
echo guestOS = '"'$VMOS'"'
echo suspend.Directory = '"'/vmfs/vmhba0:0:0:10/'"'
echo checkpoint.cptConfigName = '"'$1'"'
echo priority.grabbed = '"'normal'"'
echo priority.ungrabbed = '"'normal'"'
echo ide1:0.present = '"'TRUE'"'
echo ide1:0.fileName = '"'auto detect'"'
echo ide1:0.deviceType = '"'cdrom-raw'"'
echo ide1:0.startConnected = '"'FALSE'"'
echo floppy0.startConnected = '"'FALSE'"'
echo floppy0.fileName = '"'/dev/fd0'"'
echo Ethernet0.present = '"'TRUE'"'
echo Ethernet0.connectionType = '"'monitor_dev'"'
echo Ethernet0.networkName = '"'Network0'"'
echo draw = '"'gdi'"'
echo scsi0.present = '"'TRUE'"'
echo scsi0:1.present = '"'TRUE'"'
echo scsi0:1.fileName = '"'$DESTVMFS:$1.vmdk'"'
echo scsi0:1.writeThrough = '"'TRUE'"'
echo scsi0.virtualDev = '"'vmxlsilogic'"'
# close file
exec 1>&-
# make stdout a copy of FD 6 (reset stdout), and close FD6
exec 1>&6
exec 6>&-
echo "VMX Configuration File Created Successfully" >> $LOG
#Change the file permissions
chmod 755 /home/vmware/$1/$1.vmx
#Register the new VM
echo "Registering .vmx Configuration" >> $LOG
vmware-cmd -s register /home/vmware/$1/$1.vmx
echo "VMX Initialization Completed Successfully" >> $LOG

Notice that the preceding script uses a golden image file that is local to
that machine. If your golden image is located on a network share, you
can easily mount that share and import the file from there. To mount a
network share you can use the following command:
mount -t smbfs //server/share /mnt/smb -o

Script for Rebooting all VMware Images on ESX

Script for Rebooting All Running Virtual Machines

This is very handy if you have installed updates or anything else and want to delay the reboot till later.

vmwarelist=`vmware-cmd -l`
vmwarelist=`echo $vmwarelist | sed -e 's/ /*/g'`
vmwarelist=`echo $vmwarelist | sed -e 's/.vmx/.vmx /g'`
for vm in $vmwarelist
do
  vm=`echo $vm | sed -e 's/*/ /g'`
  vm=`echo $vm | sed -e 's/ \//*/g'`
  if [ `vmware-cmd "$vm" getstate | sed -e 's/getstate() = //'` = "on" ]
  then
    echo Found $vm that is on, Rebooting $vm
    vmware-cmd "$vm" reset trysoft
  fi
done

How to Convert IDE to SCSI Disk on VMware ESX

Virtual Machine Conversion from IDE to SCSI

You may find the need to be able to move virtual machines around from one
platform to another. For example, I encourage people to utilize VMware
Workstation in order to work on a virtual machine while on the go. I have
had several instances where a virtual machine was created on VMware
Workstation, but unfortunately was not created in legacy mode or had an
IDE drive. As a result, when attempting to migrate to ESX, it would fail until
some changes were made.

Therefore, here we will examine changing an IDE drive to a SCSI drive.
Before we change the settings, we need to get the SCSI drivers into the system
first. The easiest way to do this is to add another hard disk to the virtual
machine as a secondary drive. Configure this drive to be a SCSI drive. Start
the virtual machine with the new drive attached; the SCSI drivers are
now in place, allowing us to continue and edit the files. When we open
the descriptor file for a virtual machine using an IDE drive, it looks like the
sample in Code Listing 5.4.

Code Listing 5.4 Descriptor File for a Virtual Machine Using an IDE Drive
# Disk DescriptorFile
# Extent description
RW 4192256 SPARSE "Windows-s001.vmdk"
RW 4192256 SPARSE "Windows-s002.vmdk"
RW 4096 SPARSE "Windows-s003.vmdk"
# The Disk Data Base
ddb.adapterType = "ide"
ddb.geometry.sectors = "63"
ddb.geometry.heads = "16"
ddb.geometry.cylinders = "8322"
ddb.virtualHWVersion = "4"
ddb.toolsVersion = "6404"
Starting with the ddb.adapterType you can see that this was indeed an
IDE drive. There are a total of three different options for this setting. We'll
discuss each in this section.

ddb.adapterType = "buslogic"
This entry converts the disk into a SCSI disk with a BusLogic controller.
This is the standard for Windows 2000 virtual machines.

ddb.adapterType = "lsilogic"
This entry converts the disk into a SCSI disk with an LSILogic controller.
This is the standard for Windows 2003 virtual machines.

ddb.adapterType = "ide"
This entry converts the disk into an IDE disk with an Intel IDE controller.
Next, let's open the SCSI disk that we used to get the drivers into the virtual
machine and use it to give us the sector, head, and cylinder values we need:

ddb.adapterType = "buslogic"
ddb.geometry.cylinders = "522"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"

Put this all together and we have a new SCSI disk for our virtual machine.
There is one change left to be done, however. We will need to change the
ddb.virtualHWVersion. The ddb.virtualHWVersion is dependent upon which
VMware platform you are using. You may need to change the version number
to get the virtual machine to start in certain cases, namely when moving a
virtual machine into ESX Server.
Change ddb.virtualHWVersion = "4" to ddb.virtualHWVersion = "3". You now
have a legacy virtual machine disk file you have converted from IDE to SCSI.
You've also brought the virtual machine disk file down to legacy mode so that
it can run on ESX.
# Disk DescriptorFile
# Extent description
RW 4192256 SPARSE "Windows-s001.vmdk"
RW 4192256 SPARSE "Windows-s002.vmdk"
RW 4096 SPARSE "Windows-s003.vmdk"
# The Disk Data Base
ddb.adapterType = "buslogic"
ddb.geometry.sectors = "63"
ddb.geometry.heads = "255"
ddb.geometry.cylinders = "522"
ddb.virtualHWVersion = "3"
ddb.toolsVersion = "6309"
To complete this process we need to make an adjustment in the vmx file
in order to change the IDE values to SCSI. Code Listing 5.5 is an example of
a vmx file that's been configured to use an IDE drive.

Code Listing 5.5 A vmx File Configured to Use an IDE Drive
config.version = "8"
virtualHW.version = "4"
scsi0.present = "TRUE"
memsize = "200"
ide0:0.present = "TRUE"
ide0:0.fileName = "Windows.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "auto detect"
ide1:0.deviceType = "cdrom-raw"
floppy0.fileName = "A:"
ethernet0.present = "TRUE"
usb.present = "TRUE"
sound.present = "TRUE"
sound.virtualDev = "es1371"
displayName = "Windows XP Professional 1"
guestOS = "winxppro"
nvram = "winxppro.nvram"
ide0:0.redo = ""
ethernet0.addressType = "generated"
uuid.location = "56 4d b7 df d7 1d 42 ca-3e 81 5d a3 5e 05 7a f7"
uuid.bios = "56 4d b7 df d7 1d 42 ca-3e 81 5d a3 5e 05 7a f7"
tools.remindInstall = "FALSE"
ethernet0.generatedAddress = "00:0c:29:05:7a:f7"
ethernet0.generatedAddressOffset = "0"
ide1:0.autodetect = "TRUE"
ide1:0.startConnected = "TRUE"
tools.syncTime = "FALSE"
To finish the change from IDE to SCSI we need to adjust these lines in
the vmx file (see Table 5.2).

Table 5.2 VMX Old and New Settings

From the Old Settings               To the New Settings
config.version = "8"                config.version = "6"
virtualHW.version = "4"             virtualHW.version = "3"
ide0:0.present = "TRUE"             scsi0.present = "TRUE"
ide0:0.fileName = "Windows.vmdk"    scsi0:0.present = "TRUE"
                                    scsi0:0.fileName = "Windows.vmdk"
Now we have completed downgrading the virtual hardware and also
changed a virtual machine from using an IDE drive to a SCSI drive. This virtual
machine will now start and run on VMware ESX Server. By taking a virtual
machine from VMware Workstation and getting it to run on VMware ESX
Server, we have gone from one extreme of the VMware product line
(Workstation) to the other extreme (ESX Server).
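The descriptor and vmx edits above can also be scripted. The sketch below uses sed on scratch copies; the file names and paths are illustrative, and it maps the ide0:0 entries straight to scsi0:0 entries (the sample vmx already has scsi0.present set to TRUE).

```shell
# Scratch copy of the relevant descriptor lines (illustrative).
cat > /tmp/Windows.vmdk <<'EOF'
ddb.adapterType = "ide"
ddb.virtualHWVersion = "4"
EOF
# Switch the adapter type and downgrade the virtual hardware version.
sed -i -e 's/ddb.adapterType = "ide"/ddb.adapterType = "buslogic"/' \
       -e 's/ddb.virtualHWVersion = "4"/ddb.virtualHWVersion = "3"/' \
       /tmp/Windows.vmdk
# Scratch copy of the relevant vmx lines (illustrative).
cat > /tmp/Windows.vmx <<'EOF'
config.version = "8"
virtualHW.version = "4"
ide0:0.present = "TRUE"
ide0:0.fileName = "Windows.vmdk"
EOF
# Downgrade the config/hardware versions and move the disk to scsi0:0.
sed -i -e 's/config.version = "8"/config.version = "6"/' \
       -e 's/virtualHW.version = "4"/virtualHW.version = "3"/' \
       -e 's/ide0:0.present/scsi0:0.present/' \
       -e 's/ide0:0.fileName/scsi0:0.fileName/' \
       /tmp/Windows.vmx
cat /tmp/Windows.vmx
```

On a real host, run the same sed commands against backup copies of the actual descriptor and vmx files rather than editing them in place.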

TCP Connections: The Three-Way Handshake


The three-way handshake in the Transmission Control Protocol (also called the three-message handshake) is the method used to establish a network connection. This handshaking technique is referred to as the 3-way handshake or as "SYN-SYN-ACK" (or more accurately SYN, SYN-ACK, ACK). The TCP handshaking mechanism is designed so that two computers attempting to communicate can negotiate the parameters of the network connection before beginning communication. This process is also designed so that both ends can initiate and negotiate separate connections at the same time.

3-Way Handshake Description

Below is a (very) simplified description of the TCP 3-way handshake process.


Host A sends a TCP SYNchronize packet to Host B

Host B receives A's SYN

Host B sends a SYNchronize-ACKnowledgement

Host A receives B's SYN-ACK

Host A sends ACKnowledge

Host B receives ACK. TCP connection is ESTABLISHED.


SYNchronize and ACKnowledge messages are indicated by flag bits inside the TCP header of the segment.

By using the SYNchronize and ACKnowledge messages, TCP knows whether the network connection is opening, synchronizing or established.

When the communication between two computers ends, a similar exchange of FIN and ACK segments is performed to tear down the TCP connection. This setup and teardown of a TCP connection is part of what qualifies TCP as a reliable protocol.

Note that UDP is connectionless and does not perform this 3-way handshake; for this reason, it is referred to as an unreliable protocol.

Protocols Encapsulated in TCP

Note that FTP, Telnet, HTTP, HTTPS, SMTP, POP3, IMAP, SSH and any other protocol that rides over TCP also has a three-way handshake performed as the connection is opened. HTTP web requests, SMTP emails and FTP file transfers each manage the messages they send; TCP handles the transmission of those messages.

TCP rides on top of Internet Protocol (IP) which is why it is called TCP/IP (TCP over IP). TCP segments are passed inside the payload section of the IP packets. IP handles addressing and routing and gets the packets from one place to another, but TCP handles the actual communication between hosts.

Rsync Howto


Rsync is a wonderful little utility that's amazingly easy to set up on your machines. Rather than have a scripted FTP session, or some other form of file transfer script -- rsync copies only the diffs of files that have actually changed, compressed and through ssh if you want for security. That's a mouthful -- but what it means is that only the changed portions of your files ever cross the wire, which makes transfers very fast.

Rsync is rather versatile as a backup/mirroring tool, offering many features beyond simple file copying. I personally use it to synchronize Website trees from staging to production servers and to back up key areas of the filesystems, both automatically through cron and by a CGI script.

How does it work?

You must set up one machine or another of a pair to be an "rsync server" by running rsync in a daemon mode ("rsync --daemon" at the commandline) and setting up a short, easy configuration file (/etc/rsyncd.conf). Below I'll detail a sample configuration file. The options are readily understood, few in number -- yet quite powerful.

Any number of machines with rsync installed may then synchronize to and/or from the machine running the rsync daemon. You can use this to make backups, mirror filesystems, distribute files or any number of similar operations. Through the use of the "rsync algorithm", which transfers only the diffs between files (similar to a patch file) and then compresses them, you are left with a very efficient system.

Setting up a Server

You must set up a configuration file on the machine meant to be a server and run the rsync binary in daemon mode. Even your rsync client machines can run rsync in daemon mode for two-way transfers. You can do this automatically for each connection via the inet daemon, or at the commandline in standalone mode to leave it running in the background for often-repeated rsyncs. I personally use it in standalone mode, like Apache. I have a crontab entry that synchronizes a Web site directory hourly, plus a CGI script that folks fire off frequently during the day for immediate updating of content. This is a lot of rsync calls!

If you start off the rsync daemon through your inet daemon, you incur much more overhead with each rsync call: you basically restart the rsync daemon for every connection your server machine gets. It's the same reasoning as starting Apache in standalone mode rather than through the inet daemon. It's quicker and more efficient to start rsync in standalone mode if you anticipate a lot of rsync traffic. Otherwise, for the occasional transfer, fire off rsync via the inet daemon. This way the rsync daemon, as small as it is, doesn't sit in memory if you only use it once a day or whatever. Your call.

Below is a sample rsync configuration file. It is placed in your /etc directory as rsyncd.conf.

motd file = /etc/rsyncd.motd
log file = /var/log/rsyncd.log
pid file = /var/run/
lock file = /var/run/rsync.lock

[www]
path = /rsync_files_here
comment = My Very Own Rsync Server
uid = nobody
gid = nobody
read only = no
list = yes
auth users = username
secrets file = /etc/rsyncd.scrt

The options that you would modify right from the start are the file locations and the path-block values in the sample above. I'll start at the top, line by line, and go through what you should pay attention to. What the sample above does is set up a single "path" for rsync transfers to that machine.

Starting at the top are four lines specifying files and their paths for rsync running in daemon mode. The first is a "message of the day" (motd) file, like you would use for an FTP server. This file's contents get displayed when clients connect to this machine. Use it as a welcome, a warning or simply identification. The next line specifies a log file to send diagnostic and normal run-time messages to. The PID file contains the "process ID" (PID) number of the running rsync daemon. A lock file is used to ensure that things run smoothly. These options are global to the rsync daemon.

The next block of lines is specific to a "path" that rsync uses. The options contained therein have effect only within the block (they're local, not global options). Start with the "path" name. It's somewhat confusing that rsync uses the term "path", as it's not necessarily a full pathname. It serves as an "rsync area nickname" of sorts: a short, easy-to-remember (and type!) name that you assign to a real filesystem path, with all the options you specify. Here are the things you need to set up first and foremost:

One thing you should seriously consider is the "hosts allow" and "hosts deny" options for your path. Enter the IPs or hostnames that you wish to specifically allow or deny! If you don't do this, or at least use the "auth users" option, then that area of your filesystem is wide open to anyone in the world using rsync! Something I seriously think you should avoid...
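As a sketch (the addresses here are placeholders), two lines inside the path block are enough to admit only the hosts you name:

```
hosts allow = 10.0.0.5 192.168.1.0/24
hosts deny = *
```

If "hosts allow" is present, anything not matching it is refused anyway, but the explicit deny documents the intent.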

Check the rsyncd.conf man page with "man rsyncd.conf" and read it very carefully where security options are concerned. You don't want just anyone to come in and rsync up an empty directory with the "--delete" option, now do you?

The other options are all explained in the man page for rsyncd.conf. Basically, the options above specify that transfers run under the given uid/gid, that the filesystem path is read/write and that the rsync path shows up in rsync listings. The rsync secrets file I keep in /etc/ along with the configuration and motd files, and I prefix them with "rsyncd." to keep them together.
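The secrets file itself is just plain text, one "username:password" pair per line. A minimal sketch of creating one (the "username" matches the "auth users" line in the sample config; "supersecret" is a placeholder password, and in real use the file would be moved to /etc/rsyncd.scrt):

```shell
# Create a secrets file: one "username:password" pair per line.
# "supersecret" is a placeholder password.
cat > rsyncd.scrt <<'EOF'
username:supersecret
EOF

# rsync refuses to use a secrets file that other users can read,
# so restrict it to the owner before moving it into /etc.
chmod 600 rsyncd.scrt
```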

Using Rsync Itself

Now on to actually using, or initiating, an rsync transfer with rsync itself. It's the same binary as the daemon, just without the "--daemon" flag. Its simplicity is a virtue. I'll start with a command line that I use in a script to synchronize a Web tree, below.

rsync --verbose --progress --stats --compress --rsh=/usr/local/bin/ssh \
--recursive --times --perms --links --delete \
--exclude "*bak" --exclude "*~" \
/www/* webserver:www

Let's go through it one line at a time. The first line calls rsync itself and specifies the options "verbose," "progress" and "stats" so that you can see what's going on this first time around. The "compress" and "rsh" options specify that you want your stream compressed and sent through ssh (remember from above?) for security's sake.

The next line specifies how rsync itself operates on your files. You're telling rsync here to go through your source pathname recursively with "recursive" and to preserve the file timestamps and permissions with "times" and "perms." Copy symbolic links with "links" and delete things from the remote rsync server that are also deleted locally with "delete."

Now we have a line with quite a bit of power and flexibility. You can specify GNU tar-like include and exclude patterns here. In this example, I'm telling rsync to ignore some backup files that are common in this Web tree ("*bak" and "*~" files). You can put whatever you want to match here, suited to your specific needs. You can also leave this line out, and rsync will copy all your files as they are locally to the remote machine. Depends on what you want.

Finally, the line that specifies the source pathname, the remote rsync machine and the rsync "path." The first part, "/www/*", specifies where on my local filesystem I want rsync to grab the files for transmission to the remote rsync server. The next word, "webserver", should be the DNS name or IP address of your rsync server. It can be an address like "w.x.y.z", a fully qualified domain name, or even just "webserver" if you have a nickname defined in your /etc/hosts file, as I do here. The single colon specifies that you want the whole mess sent through your ssh tunnel, rather than a plain rsh transport. This is an important point to pay attention to! If you use two colons, then despite the specification of ssh on the command line previously, rsync connects directly to the rsync daemon and bypasses your ssh tunnel. Ooops. The last "www" in that line is the rsync "path" that you set up on the server, as in the sample above.

Yes, that's it! If you run the above command on your local rsync client, then you will transfer the entire "/www/*" tree to the remote "webserver" machine except backup files, preserving file timestamps and permissions -- compressed and secure -- with visual feedback on what's happening.

Note that in the above example, I used GNU-style long options so that you can see what the command line is all about. You can also use the abbreviated single-letter options to do the same thing. Try running rsync with the "--help" option alone and you can see what syntax and options are available.
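For instance, the long command above collapses to roughly the following short form (a sketch: -a is a bundle that implies --recursive, --times, --perms, --links and a few more, so it is close to, but slightly broader than, the long options used earlier):

```shell
# -v verbose, -z compress, -e choose the remote shell, -a archive mode
rsync -avz -e ssh --progress --stats --delete \
    --exclude "*bak" --exclude "*~" \
    /www/* webserver:www
```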

Howto: Linux iSCSI Target Server

What is iSCSI? A little background

Internet SCSI, pronounced "eye skuzzy," is an IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks. iSCSI supports a Gigabit Ethernet interface at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard Gigabit Ethernet switches and/or IP routers. When an operating system receives a request, it generates the SCSI command and then sends it in an IP packet over an Ethernet connection. At the receiving end, the SCSI commands are separated from the request, and the SCSI commands and data are sent to the SCSI controller and then to the SCSI storage device. iSCSI also returns a response to the request using the same protocol. iSCSI is important to SAN technology because it enables a SAN to be deployed in a LAN, WAN, or MAN. iSCSI was developed by the IETF and became an official standard in February 2003.

The scenario

Because we are in a production environment with mixed infrastructure, we will set up a Fedora Core 4 (FC4) server, exporting our spare partition via iSCSI. (Note that you can export block devices, regular files, LVM volumes and RAID arrays.) The iSCSI initiator will be a Windows 2003 server, which will see this iSCSI-exported partition as another local drive. This makes our Linux box the iSCSI target and the Windows 2003 server an iSCSI initiator.


You need to download and install the iscsitarget software from The iSCSI Enterprise Target Project. The version to download depends on your kernel version, which you can see by issuing the command:

uname -a

At the time of writing, the latest version of iscsitarget (0.4.13) requires kernel 2.6.14 or newer. If you don't have that kernel version, you can get the latest kernel as shown below, or you can get older versions of iscsitarget as RPM packages:

yum update kernel kernel-devel

Reboot your server after the kernel installation to apply the changes. At this point you are ready to build the iSCSI target.

Building the iSCSI Target

Go to the /usr/local directory, for example, and extract the files from the archive there:

tar xvfz iscsitarget-0.4.13.tar.gz

Go to the newly created directory to build the iSCSI modules and service:

cd iscsitarget-0.4.13

...but first, export your kernel source path. Make sure you export the correct source path for your kernel version!

export KERNELSRC=/usr/src/kernels/2.6.14-1.1526_FC4-i686

Compile and install

make && make install

Copy the default config file to /etc folder

cp etc/ietd.conf /etc

Configure iSCSI target service

Well, you can play around with the options that you have, but you really only need to set up the Target name (identifying this box), an incoming/outgoing user (if you would like to use authentication), the piece of storage you are exporting and possibly an Alias for the target. So I basically changed just the following:

Target iqn.2009-08.local.fog:storage.lvm
# Users, who can access this target
# (no users means anyone can access the target)
# Lun definition
# (right now only block devices are possible)
Lun 0 Path=/dev/hdb
# Alias name for this target
Alias iSCSI
# various iSCSI parameters
# (not all are used right now, see also the iSCSI spec)

This is what my config file looks like. My domain name is fog.local, I'm not using authentication, I'm exporting the second HDD (hdb) and the alias of this target is iSCSI. Feel free to look at the manual page for the ietd.conf file for explanations of more parameters.
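If you do decide to use authentication, the user options in ietd.conf take a name and a secret. A hypothetical example (the credentials are placeholders; note that the Microsoft initiator typically requires a CHAP secret of 12 to 16 characters):

```
Target iqn.2009-08.local.fog:storage.lvm
        IncomingUser someuser secret123456
        Lun 0 Path=/dev/hdb
        Alias iSCSI
```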

Run the service

At this point you are ready to start the service. Fire it up with

/etc/init.d/iscsi-target start

If you configured everything correctly, you should see messages like these in your log file, /var/log/messages:

Oct  5 10:45:01 iscsi-fc4-prod kernel: iSCSI Enterprise Target Software - version 0.4.13
Oct  5 10:45:01 iscsi-fc4-prod kernel: iotype_init(97) register fileio
Oct  5 10:45:01 iscsi-fc4-prod kernel: iet_target_param_set(128) d 1 8192 262144 65536 2 20 8 0
Oct  5 10:45:01 iscsi-fc4-prod iscsi-target: ietd startup succeeded

If you want the iscsi-target service to be automatically run on the next reboot, do:

chkconfig iscsi-target on

The End of Part 1

Now you have a fully working iSCSI target up and running. If you would like to use the exported storage on a Windows machine, read on; otherwise, you have to configure an iSCSI initiator on your Linux client. You can find more on this subject in these fine docs.

Configure your Windows client to use the Linux iSCSI target

This is the easy part, all the "hard work" was already done. There are a few steps left:

First, get the Microsoft iSCSI Initiator software from Microsoft's download site.

Next, install the software (next, next, next... finish). Note that you have to install both the Initiator and the Service.

When done, configure initiator as shown below:

Configure the IP address of the iSCSI target server:


Make the iSCSI target available and Log On to it. Note that you can have the target restored automatically at startup by checking the first checkbox.


Use it!

Now we have set up everything we need to successfully run iSCSI in production. Go to your Management Console and open Disk Management. If you've set up the Windows part correctly, you should see a new drive in Disk Management. Feel free to format it and make it usable in your system. This drive is treated just like a local hardware drive would be, but it all happens over the network.

An example of successfully configured exported storage is shown below:



iSCSI can be a VERY useful thing, especially when you have large storage to export over the network. It's a cheap solution that can play a big part in any network infrastructure.

Cisco LAN Topology and Network Design

LAN Topology Design

The CCDA objectives covered in this section are as follows:


Describe the advantages, disadvantages, scalability issues, and applicability of standard internetwork topologies.


Draw a topology map that meets the customer's needs and includes a high-level view of internetworking devices and interconnecting media.

This section covers CCDA exam objectives about designing network topologies for the LAN. LANs provide data transfer rates that are typically much faster than wide-area networks (WANs). While most companies own their own LAN infrastructure, wide-area connections between LANs are usually leased on a monthly basis from an outside carrier. With the recent developments in Gigabit Ethernet technologies, LAN designs are now capable of 1000 Mbps speeds. High-speed Gigabit links can connect servers to LAN switches. At these speeds, the capacity is there to meet the performance requirements of current high-bandwidth applications.

Various speeds of Ethernet have evolved into the de facto standard for LANs. Ethernet uses a contention-based access method, meaning each device competes simultaneously for access to the network. All devices attached to the same Ethernet segment form a collision domain. Each device transmitting on that segment may attempt to transmit at the same time as another device on the same segment, resulting in a collision. As the number of devices in the same collision domain increases, so do the collisions, resulting in poorer performance.

Although collisions are not a concern in newer switched (bridged) networks, legacy Ethernet networks with repeaters and hubs should limit the size of the collision domain. To scale multiprotocol networks and networks with high-bandwidth applications, limit the size of collision domains by using bridges, switches, and routers. This is covered in the section "LAN Hardware" later in the chapter.

Three different network topology models are discussed in the following sections:

* Hierarchical models
* Redundant models
* Secure models

Hierarchical Models

Hierarchical models enable you to design internetworks in layers. To understand the importance of layering, consider the Open System Interconnection (OSI) reference model, which is a layered model for implementing computer communications. Using layers, the OSI model simplifies the tasks required for two computers to communicate. Hierarchical models for internetwork design also use layers to simplify the tasks required for internetworking. Each layer can be focused on specific functions, allowing you to choose the right systems and features for each layer. Hierarchical models apply to both LAN and WAN design.
Benefits of Hierarchical Models

The many benefits of using hierarchical models for your network design include the following:

* Cost savings
* Ease of understanding
* Easy network growth
* Improved fault isolation

After adopting hierarchical design models, many organizations report cost savings because they are no longer trying to do it all in one routing/switching platform. The modular nature of the model enables appropriate use of bandwidth within each layer of the hierarchy, reducing wasted capacity.

Keeping each design element simple and small facilitates ease of understanding, which helps control training and staff costs. Management responsibility and network management systems can be distributed to the different layers of modular network architectures, which also helps control management costs.

Hierarchical design facilitates changes. In a network design, modularity allows creating design elements that can be replicated as the network grows, facilitating easy network growth. As each element in the network design requires change, the cost and complexity of making the upgrade is contained to a small subset of the overall network. In large, flat, or meshed network architectures, changes tend to impact a large number of systems.

Improved fault isolation is facilitated by structuring the network into small, easy-to-understand elements. Network managers can easily understand the transition points in the network, which helps identify failure points.

Today's fast-converging protocols were designed for hierarchical topologies. To control the impact of routing overhead processing and bandwidth consumption, modular hierarchical topologies must be used with protocols designed with these controls in mind, such as EIGRP.

Route summarization is facilitated by hierarchical network design. Route summarization reduces the routing protocol overhead on links in the network and reduces routing protocol processing within the routers.
Hierarchical Network Design

As Figure 4-1 illustrates, a hierarchical network design has three layers:

* The core layer provides optimal transport between sites.
* The distribution layer provides policy-based connectivity.
* The access layer provides workgroup/user access to the network.

Figure 4-1 A Hierarchical Network Design Has Three Layers: Core, Distribution, and Access

Each layer provides necessary functionality to the network. The layers do not need to be implemented as distinct physical entities. Each layer can be implemented in routers or switches, represented by a physical media, or combined in a single box. A particular layer can be omitted altogether, but for optimum performance, a hierarchy should be maintained.
Core Layer

The core layer is the high-speed switching backbone of the network, which is crucial to enable corporate communications. The core layer should have the following characteristics:

* Offer high reliability
* Provide redundancy
* Provide fault tolerance
* Adapt to changes quickly
* Offer low latency and good manageability
* Avoid slow packet manipulation caused by filters or other processes
* Have a limited and consistent diameter


When routers are used in a network, the number of router hops from edge to edge is called the diameter. As noted, it is considered good practice to design for a consistent diameter within a hierarchical network. This means that from any end station to another end station across the backbone, there should be the same number of hops. The distance from any end station to a server on the backbone should also be consistent.

Limiting the diameter of the internetwork provides predictable performance and ease of troubleshooting. Distribution layer routers and client LANs can be added to the hierarchical model without increasing the diameter because neither will affect how existing end stations communicate.
Distribution Layer

The distribution layer of the network is the demarcation point between the access and core layers of the network. The distribution layer can have many roles, including implementing the following functions:

* Policy (for example, to ensure that traffic sent from a particular network should be forwarded out one interface, while all other traffic should be forwarded out another interface)
* Security
* Address or area aggregation or summarization
* Departmental or workgroup access
* Broadcast/multicast domain definition
* Routing between virtual LANs (VLANs)
* Media translations (for example, between Ethernet and Token Ring)
* Redistribution between routing domains (for example, between two different routing protocols)
* Demarcation between static and dynamic routing protocols

Several Cisco IOS software features can be used to implement policy at the distribution layer, including the following:

* Filtering by source or destination address
* Filtering on input or output ports
* Hiding internal network numbers by route filtering
* Static routing
* Quality of service mechanisms (for example, to ensure that all devices along a path can accommodate the requested parameters)

Access Layer

The access layer provides user access to local segments on the network. The access layer is characterized by switched and shared bandwidth LANs in a campus environment. Microsegmentation, using LAN switches, provides high bandwidth to workgroups by dividing collision domains on Ethernet segments and reducing the number of stations capturing the token on Token Ring LANs.

For small office/home office (SOHO) environments, the access layer provides access for remote sites into the corporate network by using WAN technologies such as ISDN, Frame Relay, and leased lines. Features such as dial-on-demand routing (DDR) and static routing can be implemented to control costs.
Hierarchical Model Examples

For small- to medium-sized companies, the hierarchical model is often implemented as a hub-and-spoke topology, as shown in Figure 4-2. Corporate headquarters forms the hub and links to the remote offices form the spokes.

Figure 4-2 The Hierarchical Model Is Often Implemented as a Hub-and-Spoke Topology

You can implement the hierarchical model by using either routers or switches. Figure 4-3 is an example of a switched hierarchical design, while Figure 4-4 shows examples of routed hierarchical designs.

Figure 4-3 An Example of a Switched Hierarchical Design

Figure 4-4 Examples of Routed Hierarchical Designs
Redundant Models

When designing a network topology for a customer who has critical systems, services, or network paths, you should determine the likelihood that these components will fail and design redundancy where necessary.

Consider incorporating one of the following types of redundancy into your design:

* Workstation-to-router redundancy
* Server redundancy
* Route redundancy
* Media redundancy

Each of these types of redundancy is elaborated in the sections that follow.
Workstation-to-Router Redundancy

When a workstation has traffic to send to a station that is not local, the workstation has many possible ways to discover the address of a router on its network segment, including the following:

* Address Resolution Protocol (ARP)
* Explicit configuration
* Router Discovery Protocol (RDP)
* Routing Information Protocol (RIP)
* Internetwork Packet Exchange (IPX)
* AppleTalk
* Hot Standby Router Protocol (HSRP)

The sections that follow cover each of these methods.

Address Resolution Protocol

Some IP workstations send an ARP frame to find a remote station. A router running proxy ARP can respond with its data link layer address. Cisco routers run proxy ARP by default.
Explicit Configuration

Most IP workstations must be configured with the IP address of a default router. This is sometimes called the default gateway.

In an IP environment, the most common method for a workstation to find a router is via explicit configuration (default router). If the workstation's default router becomes unavailable, the workstation must be reconfigured with the address of a different router. Some IP stacks enable you to configure multiple default routers, but many other IP stacks do not support redundant default routers.

Router Discovery Protocol

RFC 1256 specifies an extension to the Internet Control Message Protocol (ICMP) that allows an IP workstation and router to run RDP to facilitate the workstation learning the address of a router.

Routing Information Protocol

An IP workstation can run RIP to learn about routers. RIP should be used in passive mode rather than active mode. (Active mode means that the station sends RIP frames every 30 seconds.) The Open Shortest Path First (OSPF) protocol also supports a workstation running RIP.

Internetwork Packet Exchange

An IPX workstation broadcasts a find network number message to find a route to a server. A router then responds. If the client loses its connection to the server, it automatically sends the message again.

AppleTalk

An AppleTalk workstation remembers the address of the router that sent the last Routing Table Maintenance Protocol (RTMP) packet. As long as there are one or more routers on an AppleTalk workstation's network, it has a route to remote devices.

Hot Standby Router Protocol

Cisco's HSRP provides a way for IP workstations to keep communicating on the internetwork even if their default router becomes unavailable. HSRP works by creating a phantom router that has its own IP and MAC addresses. The workstations use this phantom router as their default router.

HSRP routers on a LAN communicate among themselves to designate two routers as active and standby. The active router sends periodic hello messages. The other HSRP routers listen for the hello messages. If the active router fails and the other HSRP routers stop receiving hello messages, the standby router takes over and becomes the active router. Because the new active router assumes both the IP and MAC addresses of the phantom, end nodes see no change at all. They continue to send packets to the phantom router's MAC address, and the new active router delivers those packets.
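As a sketch of what the active/standby arrangement looks like on two Cisco routers (the interface names, group number and addresses are hypothetical, not from the text; 10.1.1.1 plays the role of the phantom router's address):

```
! Router intended to be active (higher priority wins the election)
interface FastEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Router intended to be standby (default priority 100)
interface FastEthernet0/0
 ip address 10.1.1.3 255.255.255.0
 standby 1 ip 10.1.1.1
```

Workstations point their default gateway at 10.1.1.1; whichever router is active answers for it.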

HSRP also works for proxy ARP. When an active HSRP router receives an ARP request for a node that is not on the local LAN, the router replies with the phantom router's MAC address instead of its own. If the router that originally sent the ARP reply later loses its connection, the new active router can still deliver the traffic.

Figure 4-5 shows a sample implementation of HSRP.

Figure 4-5 An Example of HSRP: The Phantom Router Represents the Real Routers

In Figure 4-5, the following sequence occurs:


1. The Anderson workstation is configured to use the Phantom router as its default router.

2. Upon booting, the routers elect Broadway as the HSRP active router. The active router does the work for the HSRP phantom. Central Park is the HSRP standby router.

3. When Anderson sends an ARP frame to find its default router, Broadway responds with the Phantom router's MAC address.

4. If Broadway goes offline, Central Park takes over as the active router, continuing the delivery of Anderson's packets. The change is transparent to Anderson. If a third HSRP router were on the LAN, that router would begin to act as the new standby router.

Server Redundancy

In some environments, fully redundant (mirrored) file servers should be recommended. For example, in a brokerage firm where traders must access data in order to buy and sell stocks, the data can be replicated on two or more redundant servers. The servers should be on different networks and power supplies.

If complete server redundancy is not feasible due to cost considerations, mirroring or duplexing of the file server hard drives is a good idea. Mirroring means synchronizing two disks, while duplexing is the same as mirroring with the additional feature that the two mirrored hard drives are controlled by different disk controllers.
Route Redundancy

Designing redundant routes has two purposes: load balancing and minimizing downtime.
Load Balancing

AppleTalk and IPX routers can remember only one route to a remote network by default, so they do not support load balancing. You can change this for IPX by using the ipx maximum-paths command and for AppleTalk by using the appletalk maximum-paths command on a Cisco router.

Most IP routing protocols can load balance across up to six parallel links that have equal cost. Use the maximum-paths command to change the number of links that the router will load balance over for IP; the default is four, the maximum is six. To support load balancing, keep the bandwidth consistent within a layer of the hierarchical model so that all paths have the same cost. (Cisco's IGRP and EIGRP are exceptions because they can load balance traffic across multiple routes that have different metrics by using a feature called variance.)
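As a hypothetical IOS fragment (the process number, network and values are placeholders), the commands mentioned above look like this:

```
! Raise the equal-cost path limit from the default of 4
router igrp 10
 network 10.0.0.0
 maximum-paths 6
 ! IGRP/EIGRP only: also use paths with up to twice the best metric
 variance 2
!
! IPX and AppleTalk remember one route by default; allow two
ipx maximum-paths 2
appletalk maximum-paths 2
```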

A hop-based routing protocol does load balancing over unequal bandwidth paths as long as the hop count is equal. After the slower link becomes saturated, the higher-capacity link cannot be filled; this is called pinhole congestion. Pinhole congestion can be avoided by designing equal bandwidth links within one layer of the hierarchy or by using a routing protocol that takes bandwidth into account.

IP load balancing depends on which switching mode is used on a router. Process switching load balances on a packet-by-packet basis. Fast, autonomous, silicon, optimum, distributed, and NetFlow switching load balance on a destination-by-destination basis because the processor caches the encapsulation to a specific destination for these types of switching modes.
Minimizing Downtime

In addition to facilitating load balancing, redundant routes minimize network downtime.

As already discussed, you should keep bandwidth consistent within a given layer of a hierarchy to facilitate load balancing. Another reason to keep bandwidth consistent within a layer of a hierarchy is that routing protocols converge much faster if multiple equal-cost paths to a destination network exist.

By using redundant, meshed network designs, you can minimize the effect of link failures. Depending on the convergence time of the routing protocols being used, a single link failure will not have a catastrophic effect.

A network can be designed as a full mesh or a partial mesh. In a full mesh network, every router has a link to every other router, as shown in Figure 4-6. A full mesh network provides complete redundancy and also provides good performance because there is just a single-hop delay between any two sites. The number of links in a full mesh is n(n–1)/2, where n is the number of routers. Each router is connected to every other router. (Divide the result by 2 to avoid counting Router X to Router Y and Router Y to Router X as two different links.)
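The n(n–1)/2 link count grows quickly with the number of routers, which is why full meshes become impractical; a quick shell arithmetic check illustrates the formula:

```shell
# Number of links in a full mesh of n routers: n(n-1)/2
full_mesh_links() {
    n=$1
    echo $(( n * (n - 1) / 2 ))
}

full_mesh_links 4    # 4 routers -> 6 links
full_mesh_links 10   # 10 routers -> 45 links
```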

Figure 4-6 Full Mesh Network: Every Router Has a Link to Every Other Router in the Network

A full mesh network can be expensive to implement in wide-area networks due to the required number of links. In addition, practical limits to scaling exist for groups of routers that broadcast routing updates or service advertisements. As the number of router peers increases, the amount of bandwidth and CPU resources devoted to processing broadcasts increases.

A suggested guideline is to keep broadcast traffic at less than 20 percent of the bandwidth of each link; this will limit the number of peer routers that can exchange routing tables or service advertisements. When planning redundancy, follow guidelines for simple, hierarchical design. Figure 4-7 illustrates a classic hierarchical and redundant enterprise design that uses a partial mesh rather than a full mesh architecture. For LAN designs, links between the access and distribution layer can be Fast Ethernet, with links to the core at Gigabit Ethernet speeds.

Figure 4-7 Partial Mesh Design with Redundancy
Media Redundancy

In mission-critical applications, it is often necessary to provide redundant media.

In switched networks, switches can have redundant links to each other. This redundancy is good because it minimizes downtime, but without protection it could allow broadcasts to circle the network continuously, a condition called a broadcast storm. Because Cisco switches implement the IEEE 802.1d Spanning-Tree Protocol, this looping is avoided: the Spanning-Tree Algorithm guarantees that only one path is active between two network stations, while permitting redundant paths that are automatically activated when the active path experiences problems.

Because WAN links are often critical pieces of the internetwork, redundant media is often deployed in WAN environments. As shown in Figure 4-8, backup links can be provisioned so they become active when a primary link goes down or becomes congested.

Figure 4-8 Backup Links Can Be Used to Provide Redundancy

Often, backup links use a different technology. For example, a leased line can be in parallel with a backup dialup line or ISDN circuit. By using floating static routes, you can specify that the backup route has a higher administrative distance (used by Cisco routers to select which routing information to use) so that it is not normally used unless the primary route goes down.
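For example (the network, next hop and distance value here are hypothetical), a floating static route pointing at the backup link carries a high administrative distance so it stays inactive as long as the primary dynamic route exists:

```
! Static route to 172.16.0.0/16 via the backup next hop.
! The trailing 200 is the administrative distance; it is higher
! than the dynamic routing protocol's distance, so this route
! floats unused until the primary route disappears.
ip route 172.16.0.0 255.255.0.0 10.2.2.2 200
```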


When provisioning backup links, learn as much as possible about the actual physical circuit routing. Different carriers sometimes use the same facilities, meaning that your backup path is susceptible to the same failures as your primary path. You should do some investigative work to ensure that your backup really is acting as a backup.

Backup links can be combined with load balancing and channel aggregation. Channel aggregation means that a router can bring up multiple channels (for example, Integrated Services Digital Network [ISDN] B channels) as bandwidth requirements increase.

Cisco supports the Multilink Point-to-Point Protocol (MPPP), which is an Internet Engineering Task Force (IETF) standard for ISDN B channel (or asynchronous serial interface) aggregation. MPPP does not specify how a router should decide when to bring up extra channels. Instead, it ensures that packets arrive in sequence at the receiving router. To accomplish this, the data is encapsulated within PPP and each datagram is given a sequence number. At the receiving router, PPP uses the sequence numbers to re-create the original data stream. Multiple channels appear as one logical link to upper-layer protocols.
Secure Models

This section introduces secure topology models. The information in this book is not sufficient to learn all the nuances of internetwork security. To learn more about internetwork security, you might want to read the book Firewalls and Internet Security, by Bill Cheswick and Steve Bellovin, published by Addison-Wesley. Also, by searching for the word "security" on Cisco's web site, you can keep up to date on security issues.

Secure topologies are often designed by using a firewall. A firewall protects one network from another untrusted network. This protection can be accomplished in many ways, but in principle, a firewall is a pair of mechanisms: One blocks traffic and the other permits traffic.

Some firewalls place a greater emphasis on blocking traffic, and others emphasize permitting traffic. Figure 4-9 shows a simple firewall topology using routers.

Figure 4-9 A Simple Firewall Network, Using Routers

You can design a firewall system using packet-filtering routers and bastion hosts. A bastion host is a secure host that supports a limited number of applications for use by outsiders. It holds data that outsiders access (for example, web pages) but is strongly protected from outsiders using it for anything other than its limited purposes.
Three-Part Firewall System

The classic firewall system, called the three-part firewall system, has the following three specialized layers, as shown in Figure 4-10:


* An isolation LAN that is a buffer between the corporate internetwork and the outside world. (The isolation LAN is called the demilitarized zone, or DMZ, in some literature.)
* A router that acts as an inside packet filter between the corporate internetwork and the isolation LAN.
* Another router that acts as an outside packet filter between the isolation LAN and the outside internetwork.

Figure 4-10 Structure and Components of a Three-Part Firewall System

Services available to the outside world are located on bastion hosts in the isolation LAN. Example services in these hosts include:

* Anonymous FTP server
* Web server
* Domain Name System (DNS)
* Telnet
* Specialized security software such as Terminal Access Controller Access Control System (TACACS)

The isolation LAN has a unique network number that is different from the corporate network number. Only the isolation LAN network is visible to the outside world. On the outside filter, you should advertise only the route to the isolation LAN.

If internal users need to get access to Internet services, allow TCP outbound traffic from the internal corporate internetwork. Allow TCP packets back into the internal network only if they are in response to a previously sent request. All other TCP traffic should be blocked because new inbound TCP sessions could be from hackers trying to establish sessions with internal hosts.


To determine whether TCP traffic is a response to a previously sent request or a request for a new session, the router examines some bits in the code field of the TCP header. If the acknowledgment (ACK) or reset-the-connection (RST) bit is set in a TCP segment header, the segment is part of an existing conversation rather than a request for a new session. The established keyword in Cisco IOS access lists (filters) matches packets with the ACK or RST bit set.
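For example, an outside filter might use the established keyword as follows. The addresses and interface are hypothetical, and a real filter would include additional entries for the bastion hosts:

```
! Permit inbound TCP segments only if the ACK or RST bit is set, that
! is, only traffic belonging to sessions initiated from the inside.
! 10.2.2.0/24 stands in for the internal corporate network.
access-list 101 permit tcp any 10.2.2.0 0.0.0.255 established
! All other inbound traffic falls through to the implicit deny.
interface Serial0
 ip access-group 101 in
```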

The following list summarizes some rules for the three-part firewall system:


* The outside packet filter router should allow inbound TCP packets from established TCP sessions.
* The outside packet filter router should also allow packets to specific TCP or UDP ports going to specific bastion hosts (including TCP SYN packets that are used to establish a session).
* The inside packet filter router should allow inbound TCP packets from established sessions.
* Always block traffic that originates from the firewall routers and bastion hosts from entering the internal network; the firewall routers and hosts themselves are a likely jumping-off point for hackers, as shown in Figure 4-11.

Figure 4-11 Firewall Routers and Hosts May Make Your Network Vulnerable to Hacker Attacks

Keep bastion hosts and firewall routers simple. They should run as few programs as possible. The programs should be simple because simple programs have fewer bugs than complex programs. Bugs introduce possible security holes.

Do not enable any unnecessary services or connections on the outside filter router. A list of suggestions for implementing the outside filter router follows:

* Turn off Telnet access (no virtual terminals defined).
* Use static routing only.
* Do not make it a TFTP server.
* Use password encryption.
* Turn off proxy ARP service.
* Turn off finger service.
* Turn off IP redirects.
* Turn off IP route caching.
* Do not make the router a MacIP server (MacIP provides connectivity for IP over AppleTalk by tunneling IP datagrams inside AppleTalk).

Cisco PIX Firewall

To provide stronger security, hardware firewall devices can be used in addition to or instead of packet-filtering routers. For example, in the three-part firewall system illustrated earlier in Figure 4-10, a hardware firewall device could be installed on the isolation LAN. A hardware firewall device offers the following benefits:

* Less complex and more robust than packet filters
* No required downtime for installation
* No required upgrading of hosts or routers
* No necessary day-to-day management

Cisco's PIX Firewall is a hardware device that offers the features in the preceding list, as well as full outbound Internet access from unregistered internal hosts. IP addresses can be assigned from the private ranges defined in RFC 1918. The PIX Firewall uses a protection scheme called Network Address Translation (NAT), which allows internal users access to the Internet while protecting internal networks from unauthorized access.

Further details on the PIX Firewall are available on Cisco's web site.

The PIX Firewall provides firewall security without the administrative overhead and risks associated with UNIX-based or router-based firewall systems. The PIX Firewall operates on a secure real-time kernel, not on UNIX. The network administrator is provided with complete auditing of all transactions, including attempted break-ins.

The PIX Firewall supports data encryption with the Cisco PIX Private Link, a card that provides secure communication between multiple PIX systems over the Internet using the Data Encryption Standard (DES).

The PIX Firewall provides TCP and UDP connectivity from internal networks to the outside world by using a scheme called adaptive security. All inbound traffic is verified for correctness against the following connection state information:

* Source and destination IP addresses
* Source and destination port numbers
* Protocols
* TCP sequence numbers (which are randomized to eliminate the possibility of hackers guessing numbers)
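The core idea of adaptive security, admitting inbound traffic only when it matches recorded outbound connection state, can be sketched in a few lines of Python. This is a toy model: real devices also validate TCP sequence numbers and time out idle entries.

```python
# Toy sketch of stateful inspection: inbound packets are allowed only
# if they match connection state recorded for an outbound session.
state_table = set()

def outbound(src_ip, src_port, dst_ip, dst_port, proto):
    """Record connection state when an inside host opens a session."""
    state_table.add((src_ip, src_port, dst_ip, dst_port, proto))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port, proto):
    """An inbound packet must be the mirror image of a recorded flow."""
    return (dst_ip, dst_port, src_ip, src_port, proto) in state_table

# An inside host opens a web connection; only the matching reply flow
# is admitted back through the firewall.
outbound("10.0.0.5", 40000, "192.0.2.9", 80, "tcp")
print(inbound_allowed("192.0.2.9", 80, "10.0.0.5", 40000, "tcp"))  # True
print(inbound_allowed("192.0.2.9", 80, "10.0.0.5", 40001, "tcp"))  # False
```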

LAN Types

The CCDA objective covered in this section is as follows:


Draw a topology map that meets the customer's needs and includes a high-level view of internetworking devices and interconnecting media.

Local-area networks can be classified as a large building LAN, campus LAN, or small/remote LAN. The large building LAN contains the major data center with high-speed access and floor communications closets; the large building LAN is usually the headquarters in larger companies. Campus LANs provide connectivity between buildings on a campus; redundancy is usually a requirement. Small/remote LANs provide connectivity to remote offices with a small number of nodes.

It is important to remember the Cisco hierarchical approach of network design. First, build a high-speed core backbone network. Second, build the distribution layer, where policy can be applied. Finally, build the access layer, where LANs provide access to the network end stations.
Large Building LANs

Large building LANs are segmented by floors or departments. Company mainframes and servers reside in a computing center. Media lines run from the computer center to the wiring closets at the various segments. From the wiring closets, media lines run to the offices and cubicles around the work areas. Figure 4-12 depicts a typical large building design.

Figure 4-12 Large Building LAN Design

Each floor may have more than 200 users. Following a hierarchical model of access, distribution, and core, Ethernet and Fast Ethernet nodes may connect to hubs and switches in the communications closet. Uplink ports from closet switches connect back to one or two (for redundancy) distribution switches. Distribution switches may provide connectivity to server farms that provide business applications, DHCP, DNS, intranet, and other services.
Campus LANs

A campus LAN connects two or more buildings located near each other using high-bandwidth LAN media. Usually the organization owns the media (copper or fiber, for example). High-speed switching devices are recommended to minimize latency. In today's networks, Gigabit Ethernet campus backbones are the standard for new installations. In Figure 4-13, campus buildings are connected by using Layer 3 switches with Gigabit Ethernet media.

Figure 4-13 Campus LANs

Ensure that a hierarchical design is implemented on the campus LAN and that network layer addressing is assigned to control broadcasts on the networks. Each building should have addressing assigned in such a way as to maximize address summarization. Assign contiguous subnets to buildings on a bit boundary to enable summarization and simplify the design. Campus networks can support high-bandwidth applications such as video conferencing. Although most WAN implementations are configured to support only IP, legacy LANs may still be configured to support IPX and AppleTalk.
Small/Remote Site LANs

Small/remote sites usually connect back to the corporate network via a small router, such as a Cisco 2500. Local-area network service is provided by a small hub or LAN switch, such as a Catalyst 1900. The router filters broadcasts to the WAN circuit and forwards packets that require services from the corporate network. A server may be placed at the small/remote site to provide DHCP and other local applications such as an NT backup domain controller and DNS; if not, the router must be configured to forward DHCP broadcasts and other types of services. Figure 4-14 shows a typical architecture of a small or remote LAN. Building Cisco Remote Access Networks, from Cisco Press, is an excellent resource for more information on remote access.

Figure 4-14 Small/Remote Office LAN
LAN Media

The CCDA objectives covered in this section are as follows:


Recognize scalability constraints and issues for standard LAN technologies.


Recommend Cisco products and LAN technologies that will meet a customer's requirements for performance, capacity, and scalability in small- to medium-sized networks.

This section identifies some of the constraints that should be considered when provisioning various LAN media types. For additional reference material on this subject, refer to Appendix D, "LAN Media Reference."
Ethernet Design Rules

Table 4-1 provides scalability information that you can use when provisioning IEEE 802.3 networks.
Table 4-1 Scalability Constraints for IEEE 802.3

* 10Base5: bus topology; maximum segment length 500 meters; maximum of 100 attachments per segment; maximum collision domain of 2500 meters of 5 segments and 4 repeaters, with only 3 segments populated.
* 10Base2: bus topology; maximum segment length 185 meters; maximum of 30 attachments per segment; maximum collision domain of 2500 meters of 5 segments and 4 repeaters, with only 3 segments populated.
* 10BaseT: star topology; maximum segment length 100 meters from hub to station; maximum of 2 attachments per segment (hub and station or hub-hub); maximum collision domain of 2500 meters of 5 segments and 4 repeaters, with only 3 segments populated.
* 100BaseT: star topology; maximum segment length 100 meters from hub to station; maximum of 2 attachments per segment (hub and station or hub-hub); for the maximum collision domain, see the details in the section "100 Mbps Fast Ethernet Design Rules" later in this chapter.

The most significant design rule for Ethernet is that the round-trip propagation delay in one collision domain must not exceed 512 bit times, which is a requirement for collision detection to work correctly. This rule means that the maximum round-trip delay for a 10 Mbps Ethernet network is 51.2 microseconds. The maximum round-trip delay for a 100 Mbps Ethernet network is only 5.12 microseconds because the bit time on a 100 Mbps Ethernet network is 0.01 microseconds as opposed to 0.1 microseconds on a 10 Mbps Ethernet network.
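The relationship between the bit time and the 512-bit-time budget is simple arithmetic, sketched here in Python:

```python
# Round-trip delay budget: 512 bit times, regardless of Ethernet speed.
SLOT_BIT_TIMES = 512

def bit_time_us(mbps):
    """Duration of one bit time in microseconds at a given data rate."""
    return 1.0 / mbps  # 0.1 us at 10 Mbps, 0.01 us at 100 Mbps

def max_round_trip_us(mbps):
    """Maximum round-trip propagation delay for one collision domain."""
    return SLOT_BIT_TIMES * bit_time_us(mbps)

print(max_round_trip_us(10))   # 51.2 microseconds for 10 Mbps Ethernet
print(max_round_trip_us(100))  # 5.12 microseconds for Fast Ethernet
```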

To make 100 Mbps Ethernet work, you must impose distance limitations that are much more severe than those required for 10 Mbps Ethernet. The general rule is that 100 Mbps Ethernet has a maximum diameter of 205 meters when unshielded twisted-pair (UTP) cabling is used, whereas 10 Mbps Ethernet has a maximum diameter of 500 meters with 10BaseT and 2500 meters with 10Base5.
10 Mbps Fiber Ethernet Design Rules

Table 4-2 provides some guidelines to help you choose the right media for your network designs. 10BaseF is based on the fiber-optic interrepeater link (FOIRL) specification, which includes 10BaseFP, 10BaseFB, 10BaseFL, and a revised FOIRL standard. The new FOIRL allows data terminal equipment (DTE) end-node connections rather than just repeaters, which were allowed with the older FOIRL specification.
Table 4-2 Scalability Constraints for 10 Mbps Fiber Ethernet

* 10BaseFP: passive star topology; allows DTE (end node) connections; maximum segment length 500 meters; does not allow cascaded repeaters; maximum collision domain 2500 meters.
* 10BaseFB: backbone or repeater fiber system topology; does not allow DTE (end node) connections; maximum segment length 2000 meters; allows cascaded repeaters; maximum collision domain 2500 meters.
* 10BaseFL: link or star topology; allows DTE (end node) connections; maximum segment length 1000 or 2000 meters; does not allow cascaded repeaters; maximum collision domain 2500 meters.
100 Mbps Fast Ethernet Design Rules

100 Mbps Ethernet, or Fast Ethernet, topologies present some distinct constraints on the network design because of their speed. The combined latency due to cable lengths and repeaters must conform to the specifications in order for the network to work properly. This section discusses these issues and provides example calculations.
Understanding Collision Domains

The overriding design rule for 100 Mbps Ethernet networks is that the round-trip collision delay must not exceed 512 bit times. However, the bit time on a 100 Mbps Ethernet network is 0.01 microseconds, as opposed to 0.1 microseconds on a 10 Mbps Ethernet network. Therefore, the maximum round-trip delay for a 100 Mbps Ethernet network is 5.12 microseconds, as opposed to the more lenient 51.2 microseconds in a 10 Mbps Ethernet network.
100BaseT Repeaters

For a 100 Mbps Ethernet to work, you must impose distance limitations based on the type of repeaters used.

The IEEE 100BaseT specification defines two types of repeaters: Class I and Class II. Class I repeaters have a latency (delay) of 0.7 microseconds or less. Only one repeater hop is allowed. Class II repeaters have a latency (delay) of 0.46 microseconds or less. One or two repeater hops are allowed.

Table 4-3 shows the maximum size of collision domains, depending on the type of repeater.
Table 4-3 Maximum Size of Collision Domains for 100BaseT

* DTE-DTE (or switch-switch): 100 meters over copper; 412 meters over multimode fiber (2000 meters if full duplex).
* One Class I repeater: 200 meters over copper; 260 meters over mixed copper and multimode fiber; 272 meters over multimode fiber.
* One Class II repeater: 200 meters over copper; 308 meters over mixed copper and multimode fiber; 320 meters over multimode fiber.
* Two Class II repeaters: 205 meters over copper; 216 meters over mixed copper and multimode fiber; 228 meters over multimode fiber.

The Cisco FastHub 316 is a Class II repeater, as are all the Cisco FastHub 300 series hubs. These hubs actually exceed the Class II specifications, which means that they have even lower latencies and therefore allow longer cable lengths. For example, with two FastHub 300 repeaters and copper cable, the maximum collision domain is 223 meters.
Example of 100BaseT Topology

Figure 4-15 shows examples of 100BaseT topologies with different media.

Figure 4-15 Examples of 100BaseT Topologies with Various Media and Repeaters

Other topologies are possible as long as the round-trip propagation delay does not exceed 5.12 microseconds (512 bit times). When the delay does exceed 5.12 microseconds, the network experiences illegal (late) collisions and CRC errors.
Checking the Propagation Delay

To determine whether configurations other than the standard ones shown in Figure 4-15 will work, use the following information from the IEEE 802.3u specification.

To check a path to make sure the path delay value (PDV) does not exceed 512 bit times, add up the following delays:

* All link segment delays
* All repeater delays
* DTE delay
* A safety margin (0 to 5 bit times)

Use the following steps to calculate the PDV:

1. Determine the delay for each link segment; this is the link segment delay value (LSDV), including interrepeater links, using the following formula (multiply by two so it is a round-trip delay):

LSDV = 2 × segment length × cable delay for this segment

For end-node segments, the segment length is the cable length between the physical interface at the repeater and the physical interface at the DTE. Use your two farthest DTEs for a worst-case calculation. For interrepeater links, the segment length is the cable length between the repeater physical interfaces.

Cable delay is the delay specified by the manufacturer, if available. When actual cable lengths or propagation delays are not known, use the delay in bit times as specified in Table 4-4. Cable delay must be specified in bit times per meter (BT/m).

2. Add together the LSDVs for all segments in the path.

3. Determine the delay for each repeater in the path. If model-specific data is not available from the manufacturer, determine the class of repeater (I or II).

MII cables for 100BaseT should not exceed 0.5 meters each in length. When evaluating system topology, MII cable lengths need not be accounted for separately; delays attributed to the MII are incorporated into DTE and repeater delays.

4. Use the DTE delay value shown in Table 4-4 unless your equipment manufacturer defines a different value.

5. Decide on an appropriate safety margin from 0 to 5 bit times. Five bit times is a safe value.

6. Insert the values obtained from the preceding calculations into the following formula:

PDV = link delays + repeater delays + DTE delay + safety margin

If the PDV is less than 512, the path is qualified in terms of worst-case delay.

Round-Trip Delay

Table 4-4 shows round-trip delay in bit times for standard cables and maximum round-trip delay in bit times for DTEs, repeaters, and maximum-length cables.


Note that the values shown in Table 4-4 have been multiplied by two to provide a round-trip delay. If you use these numbers, you need not multiply by two again in the LSDV formula (LSDV = 2 × segment length × cable delay for this segment).
Table 4-4 Network Component Delays

* Two TX/FX DTEs: maximum round-trip delay 100 bit times.
* Two T4 DTEs: maximum round-trip delay 138 bit times.
* One T4 DTE and one TX/FX DTE: maximum round-trip delay 127 bit times.
* Category 3 cable segment: 1.14 BT/m round trip; maximum round-trip delay 114 bit times (100 meters).
* Category 4 cable segment: 1.14 BT/m round trip; maximum round-trip delay 114 bit times (100 meters).
* Category 5 cable segment: 1.112 BT/m round trip; maximum round-trip delay 111.2 bit times (100 meters).
* STP cable segment: 1.112 BT/m round trip; maximum round-trip delay 111.2 bit times (100 meters).
* Fiber-optic cable segment: 1.0 BT/m round trip; maximum round-trip delay 412 bit times (412 meters).
* Class I repeater: maximum round-trip delay 140 bit times.
* Class II repeater with all ports TX or FX: maximum round-trip delay 92 bit times.
* Class II repeater with any port T4: maximum round-trip delay 67 bit times.
Example Network Cabling Implementation

See Figure 4-16 for this example. Company ABC has all UTP Category 5 cabling. Two Class II repeaters are separated by 20 meters instead of the standard 5 meters. The network administrators are trying to determine whether this configuration will work.

Figure 4-16 An Example Network Cabling Implementation for Company ABC (Showing the Two Most Distant DTEs)

To ensure that the PDV does not exceed 512 bit times, the network administrators must calculate a worst-case scenario using DTE 1 and DTE 2, which are 75 meters from their repeaters.

Assume that DTE 1 starts transmitting a minimum-sized frame of 64 bytes (512 bits). DTE 2 just barely misses hearing DTE 1's transmission and starts transmitting also. The collision happens on the far-right side of the network and must traverse back to DTE 1. These events must occur within 512 bit times. If they take any longer than 512 bit times, then DTE 1 will have stopped sending when it learns about the collision and will not know that its frame was damaged by the collision. To calculate the link delays for the Category 5 cable segments, the repeaters, and DTEs, the administrators use the values from Table 4-4. (Remember that Table 4-4 uses round-trip delay values, so you need not multiply by two.)

To test whether this network will work, the network administrators filled in Table 4-5.
Table 4-5 Delays of Components in Company ABC's Network

* Link 1: 75 m × 1.112 bit times/m = 83.4 bit times
* Link 2: 75 m × 1.112 bit times/m = 83.4 bit times
* Interrepeater link: 20 m × 1.112 bit times/m = 22.24 bit times
* Repeater A: 92 bit times
* Repeater B: 92 bit times
* DTE 1 and DTE 2: 100 bit times
* Safety margin: 5 bit times
* Grand total: 478.04 bit times

The grand total in Table 4-5 is less than 512 bit times, so this network will work.
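The same worst-case check can be scripted. The sketch below uses the round-trip values from Table 4-4 and reproduces the Company ABC result:

```python
# PDV check for Company ABC, using round-trip delays from Table 4-4.
CAT5_BT_PER_M = 1.112      # Category 5 cable, round-trip bit times/m
CLASS2_REPEATER_BT = 92    # Class II repeater with TX/FX ports
TWO_TX_FX_DTES_BT = 100    # combined delay for the two end stations
SAFETY_MARGIN_BT = 5

# Two 75-meter end-node links plus the 20-meter interrepeater link.
segment_lengths_m = [75, 75, 20]
link_bt = sum(m * CAT5_BT_PER_M for m in segment_lengths_m)
pdv = (link_bt + 2 * CLASS2_REPEATER_BT
       + TWO_TX_FX_DTES_BT + SAFETY_MARGIN_BT)

print(round(pdv, 2))  # 478.04
print(pdv < 512)      # True: the path qualifies
```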
Calculating Cable Delays

Some cable manufacturers specify propagation delays relative to the speed of light or in nanoseconds per meter (ns/m). To convert these values to bit times per meter (BT/m), use Table 4-6.
Table 4-6 Conversion to Bit Times per Meter for Cable Delays

* 0.4 of the speed of light: 8.34 ns/m; 1.668 BT/m
* 0.5 of the speed of light: 6.67 ns/m; 1.334 BT/m
* 0.6 of the speed of light: 5.56 ns/m; 1.112 BT/m
* 0.667 of the speed of light: 5.00 ns/m; 1.000 BT/m
* 0.7 of the speed of light: 4.77 ns/m; 0.953 BT/m
* 0.8 of the speed of light: 4.17 ns/m; 0.834 BT/m
* 0.9 of the speed of light: 3.71 ns/m; 0.741 BT/m

The bit-time figures are round-trip values at 100 Mbps, consistent with the delays in Table 4-4.
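Under the assumption that the table's bit-time figures are round-trip values at 100 Mbps (one bit time = 10 ns), the conversion can be computed directly; a sketch:

```python
# Convert a cable's velocity factor (fraction of the speed of light)
# to one-way ns/m and round-trip bit times per meter at 100 Mbps.
C_NS_PER_M = 1e9 / 299_792_458  # light takes ~3.336 ns to cross 1 meter
BIT_TIME_NS = 10.0              # one bit time at 100 Mbps

def ns_per_meter(relative_speed):
    """One-way propagation delay of a cable given its velocity factor."""
    return C_NS_PER_M / relative_speed

def round_trip_bt_per_meter(relative_speed):
    """Round-trip delay in bit times per meter, as used in Table 4-4."""
    return 2 * ns_per_meter(relative_speed) / BIT_TIME_NS

# A cable at roughly 0.6c gives about 1.112 BT/m round trip, matching
# the Category 5 entry in Table 4-4.
print(round(round_trip_bt_per_meter(0.6), 3))  # 1.112
```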
Token Ring Design Rules

Table 4-7 lists some scalability concerns when designing Token Ring segments. Refer to IBM's Token Ring planning guides for more information on the maximum segment sizes and maximum diameter of a network.
Table 4-7 Scalability Constraints for Token Ring

* Topology: star for IBM Token Ring; not specified for IEEE 802.5.
* Maximum segment length (meters): for both IBM Token Ring and IEEE 802.5, depends on the type of cable, the number of MAUs, and so on.
* Maximum number of attachments per segment: 260 for STP or 72 for UTP with IBM Token Ring; 250 for IEEE 802.5.
* Maximum network diameter: for both IBM Token Ring and IEEE 802.5, depends on the type of cable, the number of MAUs, and so on.

Gigabit Ethernet Design Rules

The most recent development in the Ethernet arena is Gigabit Ethernet. Gigabit Ethernet is specified by two standards: IEEE 802.3z and 802.3ab. The 802.3z standard specifies the operation of Gigabit Ethernet over fiber and coaxial cable and introduces the Gigabit Media Independent Interface (GMII). The 802.3z standard was approved in June 1998.

The 802.3ab standard specifies the operation of Gigabit Ethernet over Category 5 UTP. Gigabit Ethernet retains the Ethernet frame formats and frame sizes, and it still uses CSMA/CD. As with Ethernet and Fast Ethernet, full-duplex operation is possible. Differences can be found in the encoding: Gigabit Ethernet uses 8B/10B coding with simple nonreturn to zero (NRZ). Because of the 20 percent overhead of this coding, the line runs at 1250 Mbaud to achieve 1000 Mbps of throughput. Table 4-8 covers Gigabit Ethernet scalability constraints.
Table 4-8 Gigabit Ethernet Scalability Constraints

* 1000BaseT: 1000 Mbps; maximum segment length 100 meters; Category 5 UTP.
* 1000BaseLX (long wave): 1000 Mbps; maximum segment length 550 meters over multimode fiber or 5000 meters over single-mode fiber.
* 1000BaseSX (short wave): 1000 Mbps; maximum segment length 220 meters over 62.5-micrometer multimode fiber or 500 meters over 50-micrometer multimode fiber.
* 1000BaseCX: 1000 Mbps; maximum segment length 25 meters; shielded balanced copper.
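The 8B/10B figures quoted earlier are easy to verify: every 8 data bits are transmitted as a 10-bit code group, so the line rate must exceed the data rate by a factor of 10/8:

```python
DATA_RATE_MBPS = 1000
CODE_BITS = 10  # 8B/10B: 8 data bits are encoded as 10 line bits
DATA_BITS = 8

line_rate_mbaud = DATA_RATE_MBPS * CODE_BITS / DATA_BITS
overhead = (CODE_BITS - DATA_BITS) / CODE_BITS

print(line_rate_mbaud)    # 1250.0 Mbaud on the wire
print(f"{overhead:.0%}")  # 20% coding overhead
```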

FDDI Design Rules

The FDDI specification does not actually specify the maximum segment length or network diameter. It specifies the amount of allowed power loss, which works out to the approximate distances shown in Table 4-9.
Table 4-9 Scalability Constraints for FDDI

* Topology: dual ring, tree of concentrators, and others for multimode fiber; dual ring, tree of concentrators, and others for single-mode fiber; star for UTP.
* Maximum segment length: 2 km between stations for multimode fiber; 60 km between stations for single-mode fiber; 100 m from hub to station for UTP.
* Maximum number of attachments per segment: 1000 (500 dual-attached stations) for multimode fiber; 1000 (500 dual-attached stations) for single-mode fiber; 2 (hub and station or hub-hub) for UTP.
* Maximum network diameter: 100 kilometers of ring length for the fiber media.
LAN Hardware

The CCDA objectives covered in this section are as follows:


Describe the advantages, disadvantages, scalability issues, and applicability of standard internetwork topologies.


Recognize scalability constraints and issues for standard LAN technologies.

This section covers the following hardware technologies as they can be applied to LAN design:

* Repeaters
* Hubs
* Bridges
* Switches
* Routers
* Layer 3 switches
* Combining hubs, switches, and routers


Repeaters are the basic unit used in networks to connect separate segments. Repeaters take incoming frames, regenerate the preamble, amplify the signals, and send the frame out all other interfaces. Repeaters operate in the physical layer of the OSI model. Because repeaters are not aware of packets or frame formats, they do not control broadcasts or collision domains. Repeaters are said to be protocol transparent because they are not aware of upper-layer protocols such as IP, IPX, and so on.

One basic rule of using repeaters is the 5-4-3 Rule: the maximum path between two stations on the network should not be more than 5 segments, with 4 repeaters between those segments, and no more than 3 populated segments. Repeaters introduce a small amount of latency, or delay, when propagating frames. A transmitting device must be able to detect a collision with another device within the specified time after the delay introduced by the cable segments and repeaters is factored in; the 512 bit-time specification also governs segment lengths. Figure 4-17 illustrates an example of the 5-4-3 Rule.

Figure 4-17 Repeater 5-4-3 Rule

With the increasing density of LANs in the late 1980s and early 1990s, hubs were introduced to concentrate Thinnet and 10BaseT networks in the wiring closet. Traditional hubs operate at the physical layer of the OSI model and perform the same functions as basic repeaters.

Bridges are used to connect separate segments of a network. They differ from repeaters in that bridges are intelligent devices that operate at the data link layer of the OSI model. Bridges control the collision domains on the network. They also learn the MAC address of each node and the interface on which that node is located. For an incoming frame, a bridge forwards the frame only if the destination MAC address is reachable through another port or if the bridge is not aware of the address's location; the latter behavior is called flooding. A bridge filters any incoming frame whose destination MAC address is on the same segment from which the frame arrived; it does not forward such a frame.
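The learn, forward, filter, and flood behavior described above can be sketched as follows (a simplification: no aging of table entries and no Spanning Tree):

```python
# Minimal sketch of transparent bridge forwarding. MAC addresses are
# plain strings; ports are integers.
class Bridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC address -> port where it was last seen

    def receive(self, src, dst, in_port):
        """Return the list of ports the frame is sent out of."""
        self.table[src] = in_port  # learn the source address
        out = self.table.get(dst)
        if dst == "ff:ff:ff:ff:ff:ff" or out is None:
            # Broadcast or unknown destination: flood out every port
            # except the one the frame arrived on.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []  # same segment: filter, do not forward
        return [out]   # known destination on another port: forward

b = Bridge(ports=[1, 2, 3])
print(b.receive("aa", "bb", 1))  # unknown destination, flood: [2, 3]
print(b.receive("bb", "aa", 2))  # "aa" was learned on port 1: [1]
print(b.receive("cc", "bb", 2))  # "bb" is on port 2 itself, filter: []
```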

Bridges are store and forward devices. They store the entire frame and verify the CRC before forwarding. If a CRC error is detected, the frame is discarded. Bridges are protocol transparent; they are not aware of the upper-layer protocols like IP, IPX, and AppleTalk. Bridges are designed to flood all unknown and broadcast traffic.

Bridges implement the Spanning-Tree Protocol to build a loop-free network topology. Bridges communicate with each other, exchanging information such as their priorities and bridge interface MAC addresses. They select a root bridge, and then each bridge places some interfaces in a blocking state while other interfaces remain in forwarding mode. Looking at Figure 4-18, note that there is no load sharing across dual paths with bridge protocols as there can be with routing.

Figure 4-18 Spanning-Tree Protocol

Switches are the evolution of bridges. Switches use fast integrated circuits that reduce the latency that bridges introduce to the network. Switches also offer the capability to run in cut-through mode. In cut-through mode, the switch does not wait for the entire frame to enter its buffer; instead, it forwards the frame as soon as it has read the destination MAC address field. Cut-through operation increases the probability that error frames are propagated on the network, increasing the number of CRC errors and runt frames seen on the network. Because of these problems, most switches today perform store-and-forward operation with a CRC check, as bridges do. Figure 4-19 shows a switch; note that it controls collision domains but not broadcast domains.

Figure 4-19 Switches Control Collision Domains

Switches have characteristics similar to bridges; however, they have more ports and run faster. Switches keep a table of MAC addresses per port, and they implement Spanning-Tree Protocol. Switches also operate in the data link layer and are protocol transparent. Each port on a switch is a separate collision domain but part of the same broadcast domain. Switches do not control broadcasts on the network.

Routers make forwarding decisions based on network layer addresses. In addition to controlling collision domains, routers control broadcast domains. Each interface of a router is a separate broadcast domain defined by a subnet and a mask. Routers are protocol aware, which means they are capable of forwarding packets of routed protocols such as IP, IPX, DECnet, and AppleTalk. Figure 4-20 shows a router; each interface is both a broadcast domain and a collision domain.

Figure 4-20 Routers Control Broadcast and Collision Domains

Routers exchange information about destination networks by using one of several routing protocols. The following lists of routing protocols are grouped by the routed protocol they support.

For routing TCP/IP:

* Enhanced Interior Gateway Routing Protocol (EIGRP)
* Open Shortest Path First (OSPF)
* Routing Information Protocol (RIP)
* Intermediate System-to-Intermediate System (IS-IS)
* Protocol Independent Multicast (PIM)

For routing Novell:

* Novell Routing Information Protocol (Novell RIP)
* NetWare Link Services Protocol (NLSP)
* Enhanced Interior Gateway Routing Protocol (EIGRP)

For routing AppleTalk:

* Routing Table Maintenance Protocol (RTMP)
* Enhanced Interior Gateway Routing Protocol (EIGRP)

Routing protocols are discussed in further detail in Chapter 6, "Designing for Specific Protocols."

Routers are the preferred method of forwarding packets between networks of differing media, such as Ethernet to Token Ring, Ethernet to FDDI, or Ethernet to Serial. They also provide methods to filter traffic based on the network layer address, route redundancy, load balancing, hierarchical addressing, and multicast routing.
Layer 3 Switches

LAN switches that are capable of running routing protocols and communicating with neighboring routers are called Layer 3 switches. An example is a Catalyst 5500 with a Route Switch Module (RSM). Layer 3 switches have LAN technology interfaces that perform network layer forwarding; legacy routers provide connectivity to WAN circuits, and the switches off-load local traffic from the WAN routers.

Layer 3 switches perform the functions of both data link layer switches and network layer routers. Each port is a collision domain. Interfaces are grouped into broadcast domains (subnets) and a routing protocol is selected to provide network information to other Layer 3 switches and routers.
Combining Hubs, Switches, and Routers

Available in Ethernet and Fast Ethernet versions, hubs are best used in small networks with few nodes per segment. Hubs control neither broadcast domains nor collision domains. If higher bandwidth is required, use 100 Mbps hubs. When the number of nodes on the network grows, move to switches.

With the cost of switch ports now comparable to that of hub ports, use switches as the basic network connectivity devices. Switches reduce collisions and resolve media contention by providing a collision domain per port. Replace hubs with switches if utilization is over 40 percent on Ethernet networks or above 70 percent on Token Ring and FDDI networks. Switches cannot resolve the broadcast behavior of protocols; use routing to resolve protocol-related problems. As you can see in the sample in Figure 4-21, the repeaters are pushed to the outer layer of the design, connecting to switches. Switches control the collision domains. Fast Layer 3 switches are used for routing between LAN segments, and the router provides access to the WAN.

Figure 4-21 Combining Routers, Switches, and Hubs

Use routers for segmenting the network into separate broadcast domains, security filtering, and access to the WAN. If broadcast traffic on the network is over 20 percent, use routing.
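The rules of thumb above can be summarized in a small decision sketch. The thresholds are the ones quoted in the text; the function and its names are illustrative assumptions, not a Cisco-published formula.

```python
# Device-selection heuristic based on the thresholds in the text:
# - broadcast traffic over 20% of total -> segment with a router
# - utilization over 40% (Ethernet) or 70% (Token Ring/FDDI) -> use switches
# - otherwise a hub is adequate for a small, lightly loaded segment

def recommend(media: str, utilization_pct: float, broadcast_pct: float) -> str:
    if broadcast_pct > 20:
        return "router"   # broadcast domains must be separated at Layer 3
    limit = 40 if media == "ethernet" else 70  # Token Ring/FDDI tolerate more
    if utilization_pct > limit:
        return "switch"   # give each port its own collision domain
    return "hub"

print(recommend("ethernet", 55, 5))    # switch
print(recommend("token_ring", 55, 5))  # hub
print(recommend("ethernet", 10, 25))   # router
```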
Cisco LAN Equipment

The CCDA objectives covered in this section are as follows:


* Assemble Cisco product lines into an end-to-end networking solution.

* Recommend Cisco products and LAN technologies that will meet a customer's requirements for performance, capacity, and scalability in small- to medium-sized networks.

* Update the network topology drawing you created in the previous section to include hardware and media.

A CCDA must be familiar with Cisco products, product capabilities, and how to best apply the products to meet performance, scalability, redundancy, and cost requirements. This section lists and explains Cisco equipment for LAN requirements. A complete list of Cisco products can be found at the CCO web site.
FastHub 400

The FastHub 400 10/100 series is a full line of products that includes 12- and 24-port 10/100 Fast Ethernet repeaters in managed and manageable versions. The FastHub 400 10/100 series provides low-cost 10/100 autosensing desktop connectivity where dedicated bandwidth is not required. The Cisco 412 provides 12 UTP ports of 10/100 Fast Ethernet. The Cisco 424M provides 24 UTP ports of 10/100 Fast Ethernet in a SNMP-managed version.
Cisco Catalyst 1900/2820 Series

The Catalyst 1900 and 2820 series provide 12 or 24 switched 10-Mbps 10BaseT ports. Different models provide Fast Ethernet uplinks over 100BaseT or 100BaseF media, and different models store 1K, 2K, or 8K MAC addresses. The specifications of the various models in these series are presented in Table 4-10.

Table 4-10 Catalyst 1900 and 2820 Series Specifications




* 12 10BaseT ports; two 100BaseTX uplinks; Enterprise Edition

* 12 10BaseT ports; one 100BaseTX and one 100BaseFX uplink; Enterprise Edition

* 24 10BaseT ports; two 100BaseTX uplinks; Enterprise Edition

* 24 10BaseT ports; one 100BaseTX and one 100BaseFX uplink; Enterprise Edition

* 24 10BaseT ports; two 100BaseFX uplinks; Enterprise Edition

* 24 10BaseT ports; two 100BaseTX uplinks; 48-volt DC dual-feed power system; Enterprise Edition

* 24 10BaseT ports; two expansion slots; Enterprise Edition

* 24 10BaseT ports; two expansion slots; Enterprise Edition

Catalyst 2900

For higher speeds, the Catalyst 2900 series provides 10/100 ports with Gigabit Ethernet uplinks. The Catalyst 2948G, for example, offers 48 ports of 10/100 Ethernet with two Gigabit Ethernet uplinks.
Catalyst 3000 Series Stackable Switches

The Catalyst 3100 switch is designed for networks that require flexibility and growth with minimal initial investment. This switch contains 24 fixed 10BaseT Ethernet ports, one StackPort slot for scalability, and one expansion FlexSlot for broad media support. Designed for a variety of campus LAN and enterprise WAN solutions, the Catalyst 3100 switch fits well in wiring closet and branch office applications.

The Catalyst 3200 is a high-port-density stackable switch chassis with a modular Catalyst 3000 architecture supervisor engine and seven additional media expansion module slots. The expansion slots are backward compatible with all existing Catalyst 3000 media expansion modules. The seventh slot, called FlexSlot, accepts either a standard Catalyst 3000 expansion module or the new double-wide expansion modules, providing forward and backward investment protection.

The 3011 WAN access module for the Catalyst 3200 and Catalyst 3100 provides WAN interconnect integrated with the switch backplane. The 3011 WAN access module was the first FlexSlot module to be introduced. Based on the Cisco 2503 router, the 3011 provides two high-speed serial ports, an ISDN BRI port, and an auxiliary (AUX) port.
Catalyst 3900 Token Ring Stackable Switch

The Catalyst 3920 switch provides 24 Token Ring ports. With the Catalyst 3920 switch, you can start with a single 24-port switch and add capacity as you need it, while still managing the entire stack system as one device.
Catalyst 3500 10/100 Autosensing Switch

The Catalyst 3500 XL architecture is designed to meet the technical requirements of autosensing 10/100BaseT Ethernet interfaces. Autosensing enables each port to self-configure to the correct bandwidth upon determining whether it is connected to a 10- or 100-Mbps Ethernet channel. This feature simplifies setup and configuration and provides flexibility in the mix of 10 and 100 Mbps connections the switch supports. Network managers can alter connections without having to replace port interfaces.
GBIC-Based Gigabit Ethernet Ports

Each Catalyst 3500 XL comes with two or eight Gigabit Ethernet Gigabit Interface Converter (GBIC) ports. Customers can use any of the following IEEE 802.3z-compliant GBICs based on their connection needs: 1000BaseSX, 1000BaseLX/LH, or the Cisco GigaStack stacking GBIC. These GBIC ports support standards-based, field-replaceable media modules and provide flexibility in switch deployment while protecting customers' investments.
Catalyst 4000

The Catalyst 4912G is a 12-port dedicated Gigabit Ethernet switch featuring high-performance Layer 2 switching and intelligent Cisco IOS network (Layer 3) services for high-speed network aggregation.

The Catalyst 4003 offers 24 Gbps of switching bandwidth and can be expanded to 96 ports of 10/100 Ethernet or 36 ports of Gigabit Ethernet in one managed unit.

The Catalyst 4000 series provides an advanced, high-performance enterprise switching solution optimized for wiring closets with up to 96 users and for data center server environments that require up to 36 Gigabit Ethernet ports. New FlexiMod uplinks support up to eight 100BaseFX riser connections with EtherChannel benefits. The Catalyst 4000 series provides intelligent Layer 2 services, leveraging a multigigabit architecture for 10/100/1000-Mbps Ethernet switching. The modular three-slot Catalyst 4003 system leverages the software code base of the industry-leading Catalyst 5500/5000 series to provide the rich, proven feature set that customers demand in the wiring closet for true end-to-end enterprise networking.
Catalyst 5000 Switch Series

The Cisco Catalyst 5000 series features modular chassis in 2-, 5-, 9-, and 13-slot versions. All chassis share the same set of line cards and software features, which provides scalability while maintaining interoperability across all chassis.

The Catalyst 5002 delivers a consistent architecture and feature set in a smaller package that addresses the needs of smaller wiring closets. The Catalyst 5002 is a fully modular, two-slot Catalyst 5000 series member, using the same architecture and software as the Catalyst 5000. The switch can deliver more than 1 Mpps (million packets per second) of throughput across a 1.2-Gbps, media-independent backplane that supports Ethernet, Fast Ethernet, FDDI, Token Ring, and ATM.

The Catalyst 5000 will continue to address the needs of switched 10BaseT and group switched wiring closets with performance in the 1–3 Mpps range.

The Catalyst 5505, a five-slot chassis like the Catalyst 5000, is designed for high-end wiring closet and data applications, with performance in the 1–25 Mpps range. The Catalyst 5505 combines the size of the original Catalyst 5000 with the performance boost and added features of the Catalyst 5500 series.

The Catalyst 5509 supports high-density 10/100 Ethernet for the wiring closet, or high-density Gigabit Ethernet for backbone applications, delivering over 25-Mpps switching performance. The Catalyst 5509 provides dedicated switching for up to 384 users, making this chassis an ideal platform for wiring closet solutions. The Catalyst 5509 also supports high-density Gigabit Ethernet for switched intranet backbones and data centers.

The Catalyst 5500 is the most versatile switch in the Catalyst family, able to support LightStream 1010 ATM switching or Catalyst 8500 Layer 3 switching line cards in addition to all the Catalyst 5000 family line cards. The Catalyst 5500 is positioned as a high-capacity wiring closet or data center switch, delivering over 25-Mpps switching performance.

The Catalyst 5500 is a 13-slot chassis that is rack-mountable using the rack-mount kit. All functional components, including power supplies, fan trays, supervisors, ATM switch processors (ASPs), and interface modules are accessible and hot-swappable from the network side of the chassis. This setup ensures ease of use in tight wiring closets.