Explain the Types of IDS and IDP Systems and Provide Some Examples of Each.

This is always a good start to the IDP portion of the interview because it gets immediately to the heart of the IDS/IDP debate. Network security hardware and software vendors have confused many network professionals with markets and submarkets of intrusion detection. Originally, there was just “intrusion detection.” The systems (also called sensors) were either network-based, also known as a network IDS (NIDS), or host-based, also known as a host IDS (HIDS). The early systems simply tapped into the network at switches with monitored/mirrored ports.
The term IDS is now considered a first-generation term. Today, many vendors distinguish detection from prevention. Detection means passive monitoring, whereas prevention means active monitoring. With the term prevention came new acronyms: NIPS (Network IPS) and HIPS (Host IPS). Vendors like to call detection reactive and prevention proactive. IPS devices can detect as well as prevent attacks, using a response mechanism (in a tap/span configuration) or a blocking mechanism (in an inline configuration). IPS is more of a second-generation term. Many equate an IDS to a burglar alarm. Network sensors detect an intrusion much the same way a door or window sensor detects unwanted entry. The home sensors alert the alarm company or home siren to the intruder. An IPS is a way to prevent attacks from penetrating
the network. Some vendors and even NIST have gone so far as to use the acronym IDP (Intrusion Detection and Prevention) to include both the IDS and IPS functionality. Throughout this chapter, we continue to use IDP to represent both IDS and IPS systems.

VPN Interview Q&A

The following questions provide an idea of some of the types of questions you should be able to answer.

Q: How many messages are exchanged during Phase I Aggressive mode, and would you recommend it for a site-to-site VPN?
A: Three, and no: The reduced message exchange means the identity of the peer is exposed in cleartext.

Q: What are some issues related to deploying IPsec with ESP across the network?
A: In tunnel mode, ESP adds a new IP header and an ESP header, which results in a larger packet size. In addition, ESP packets are marked Do Not Fragment (DF) in most vendor implementations. Configuring Path MTU Discovery or a vendor-specific MSS-clamping feature may be required to reduce the MSS and allow successful transmission of full-size packets.
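As a back-of-the-envelope sketch, the MSS adjustment can be estimated by subtracting the encapsulation overhead from the link MTU. The byte counts below are illustrative assumptions (actual overhead depends on the cipher, padding, and integrity algorithm the SA negotiates):

```python
# Rough ESP tunnel-mode overhead estimate; exact sizes vary by
# cipher and vendor, so treat these values as illustrative only.
NEW_IP_HDR = 20      # outer IPv4 header added in tunnel mode
ESP_HDR = 8          # SPI + sequence number
ESP_IV = 16          # e.g., AES-CBC initialization vector
ESP_TRAILER = 2      # pad length + next header (plus 0-15 pad bytes)
ESP_ICV = 12         # integrity check value (HMAC-96)

def adjusted_mss(link_mtu: int = 1500, tcp_ip_hdrs: int = 40) -> int:
    """Return a TCP MSS that keeps the encapsulated packet under the MTU."""
    overhead = NEW_IP_HDR + ESP_HDR + ESP_IV + ESP_TRAILER + ESP_ICV
    return link_mtu - overhead - tcp_ip_hdrs

print(adjusted_mss())  # 1402 with the assumptions above
```

With these assumptions, a standard 1500-byte Ethernet MTU leaves roughly 1400 bytes for the TCP payload.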

Q: Please explain how you can identify which type of IPsec security protocol and mode are used with only a packet sniffer.
A: The Next Header field indicates which protocol follows. If TCP, then transport mode is used; if IP, then tunnel mode is used. In the IP header, the Protocol field indicates which security protocol is in use (ESP is IP protocol 50; AH is IP protocol 51).
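This check is easy to automate. The sketch below reads the Protocol field of a raw IPv4 header; the sample header bytes are placeholders built just for the demonstration:

```python
import struct

# Map the IPv4 Protocol field to the IPsec security protocol in use.
IPSEC_PROTOCOLS = {50: "ESP", 51: "AH"}

def ipsec_protocol(ip_header: bytes) -> str:
    """Read the Protocol field (byte 9) of a raw IPv4 header."""
    proto = ip_header[9]
    return IPSEC_PROTOCOLS.get(proto, f"not IPsec (protocol {proto})")

# A minimal 20-byte IPv4 header with Protocol = 50 (ESP); all other
# fields are placeholder values for illustration.
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 40, 0, 0, 64, 50, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(ipsec_protocol(hdr))  # ESP
```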

Q: What protocols are required to pass through the edge of your network to an IPsec appliance running ESP with IKE?
A: ESP is IP Protocol 50, and IKE traditionally operates over UDP port 500. If NAT Traversal is used, then UDP port 4500 is traditionally used.

Q: Describe the purpose of a Security Association (SA) and the minimum number required to establish a VPN tunnel with a remote peer using only ESP/tunnel mode.
A: An SA is a uni-directional set of parameters used to establish a secure communication channel with a far end gateway or host. A minimum of two SAs is required to establish bi-directional communication with the far-end peer.

Q: What is the minimum number of parameters needed to uniquely identify a Security
Association?
A: Three: SPI, peer IP address, and type of protocol used (ESP or AH).
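A minimal sketch of that idea: an SA database keyed by exactly those three values. All names and parameters here are illustrative, not any vendor's API:

```python
# Sketch: a Security Association database keyed by the three values
# that uniquely identify an SA: (SPI, peer IP, protocol).
sad = {}

def install_sa(spi, peer_ip, protocol, params):
    sad[(spi, peer_ip, protocol)] = params

def lookup_sa(spi, peer_ip, protocol):
    return sad.get((spi, peer_ip, protocol))

# Two uni-directional SAs form one bi-directional ESP tunnel:
install_sa(0x1001, "203.0.113.5", "ESP", {"cipher": "AES-256", "dir": "out"})
install_sa(0x2002, "203.0.113.5", "ESP", {"cipher": "AES-256", "dir": "in"})

print(lookup_sa(0x1001, "203.0.113.5", "ESP")["dir"])  # out
```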

Q: Please describe some of the resources you have used to evaluate and select an IPsec platform.
A: Depending on the business requirements (and any applicable government regulations, such as FIPS 140-2), I would create a short list of vendors from sources such as Gartner and Forrester because they review financials and customer-support issues that may be difficult to confirm in a lab. Continuing to work against business requirements, I would review the IPsec vendor list from ICSA Labs (Google "ICSA Labs IPsec Certified") if there are interoperability requirements with other vendors or a planned migration. I would reference NIST's Common Criteria Evaluation and Validation Scheme (CCEVS) to reduce my short list to a "very" short list. FIPS 140-2 evaluates the vendor's cryptographic implementation for adherence to the standard; selecting a vendor that meets or exceeds this standard can further reduce your short list. I would also reference the VPN Consortium to review any vendor-specific enhancements or interoperability issues. Finally, I would bring in the top three vendors that meet the stated business requirements and work with the gear in a hands-on lab evaluation.

Symmetric Key Cryptography

Symmetric key encryption uses a bi-directional, or reversible, encryption algorithm to provide confidentiality of data. In other words, the sender and receiver of the sensitive data share a secret key. The sender feeds the secret key and data into any of a number of symmetric key algorithms to encrypt the plaintext data into ciphertext. The receiver uses the exact same secret key to decrypt the ciphertext back into plaintext using the same symmetric key algorithm. If Alice and Bob, to use the classic crypto characters, are sitting on different floors of the same building, then securely exchanging the secret key may not pose a risk. There is still a question of storage of the key, so compromise may still be an issue. However, if Alice is in Virginia, and Bob is on vacation in Singapore, then exchanging the secret key securely presents an issue and opens the door to potential compromise of the secret key.

There are currently three NIST-approved symmetric ciphers. The newest addition to this list, the Advanced Encryption Standard (AES), was added in November 2001. The White House Office of Management and Budget (OMB), responsible for the OMB circulars, delivered a notice shortly after NIST released AES stating that the new encryption method is expected to be valid for the next 20–30 years. NIST has stated that it will review AES every five years for continued use.

Symmetric ciphers are divided into stream and block ciphers. Block ciphers exercise their mathematical prowess on fixed-size chunks of data. Stream ciphers, on the other hand, operate on the data in a serial fashion or continuous stream — one bit at a time.
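The byte-at-a-time behavior of a stream cipher, and the symmetric property that one key both encrypts and decrypts, can be illustrated with a toy repeating-key XOR. This is a teaching sketch only; it is not a secure cipher:

```python
from itertools import cycle

# Toy illustration only: XOR with a repeating key mimics a stream
# cipher's serial, byte-at-a-time operation but is NOT secure.
def xor_stream(key: bytes, data: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

key = b"shared-secret"
ciphertext = xor_stream(key, b"attack at dawn")

# The same key reverses the operation: the defining property of
# symmetric encryption.
assert xor_stream(key, ciphertext) == b"attack at dawn"
```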

The two ciphers we discuss in these pages are block ciphers, so we focus our discussion on block ciphers only. For more information on stream ciphers, Google "stream cipher." One of the most popular stream ciphers in use today is RC4, which is implemented in the original IEEE 802.11 security mechanism, Wired Equivalent Privacy (WEP). The successor to WEP is WPA, which alleviates a key-scheduler issue by using the Temporal Key Integrity Protocol (TKIP). The latest standard on the street is IEEE 802.11i, which introduces AES as the required encryption protocol. Your Internet browser also uses RC4 when connecting to most Internet sites using SSL.
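RC4 itself is compact enough to sketch in full. The following is for illustration only; RC4's weaknesses (including the key-scheduler issue mentioned above) mean it should never protect real traffic:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Minimal RC4 for illustration; RC4 is obsolete and insecure."""
    # Key-scheduling algorithm (KSA): permute S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit one keystream
    # byte per data byte and XOR it in.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ct) == b"Plaintext"
```

Note that encryption and decryption are the same operation: the keystream is simply XORed with the data in both directions.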

Firewall Interview Q&A

Q: What is a packet filter firewall?
A: A packet filter firewall inspects traffic on a per-packet basis; it cannot associate packets into a flow or session. Packet filters can match anything from simple source and destination IP addresses up to and including specific protocol flags such as TCP SYN and FIN, depending on the vendor's implementation.

Q: What is stateful inspection?
A: In stateful inspection, a firewall inspects traffic based upon the state of the connection. The firewall is aware of the beginning, middle, and end of a connection. If the connection goes out of state, the firewall is able to detect it.

Q: What is an application proxy firewall?
A: An application proxy firewall proxies connections that attempt to go through the firewall. The client's request is always proxied to the server. The server's response is proxied back to the client as well. This allows the proxy to completely inspect the connection.

Q: What does the term DMZ stand for?
A: This stands for demilitarized zone. It is a term that represents a segmented network to which access is protected by a firewall.

Q: Why would you want a high-availability firewall deployment in your network?
A: Because a firewall is often placed at a critical point in your network. If it were to fail, you would lose access to critical resources such as Internet access.

Q: What are the characteristics of an appliance firewall?
A: An appliance-based firewall is a device that is built for a specific purpose. The purpose in this case is to be a firewall.

Q: What is NAT?
A: NAT stands for Network Address Translation. With NAT, a packet has either the source or destination IP address modified as it passes through a firewall.

Q: What is Unified Threat Management?
A: Unified Threat Management, or UTM, is a collection of technologies that are bundled together to eliminate threats on the network. These technologies include deep-packet inspection, antivirus, antispam, and URL filtering.

Q: What are the main configuration components in a firewall?
A: The firewall's configuration (networking/routing), the firewall policy (the policy that restricts traffic for a device), and the firewall's objects (the components used during the firewall's policy configuration).

Q: What is a secure router?
A: A secure router is a device that couples the features of a router and a firewall, including the use of WAN interfaces, firewall services, and, often, a UTM feature set.

Q: What company was the first to implement firewall technologies?
A: Cisco Systems originally implemented firewall features in the form of packet filters on routers.

Q: Who are the three market leaders in the firewall technology space?
A: Cisco, Check Point, and Juniper Networks are the three market leaders. Cisco is the number one leader in firewall appliances. Check Point is the market leader in software-based firewalls. Juniper Networks is in second place behind Cisco for firewall appliances.

Q: What was Check Point’s most important impact on the firewall market?
A: The creation of an easy-to-use central management tool. This tool contained easy-to-use GUIs and still sets the bar for user interfaces today.

Q: What is the most basic deployment for a firewall?
A: The most basic deployment for a firewall is placing a firewall between an untrusted network, such as the Internet, and the local area network. This placement limits the access that the Internet has to the local area network. The local area network has important services that should not be Internet accessible. These services include file servers and e-mail servers.

Q: Can you list the three core firewall technologies?
A: Packet filter, stateful inspection, and application proxies are the core firewall technologies.

Q: What are three technologies you can find in the UTM feature set?
A: Antivirus inspection, antispam, and deep-packet inspection. Antivirus technologies often focus on the inspection of Web and e-mail traffic.

Unified Threat Management Firewall

Unified Threat Management (UTM) is a new term in the firewall industry — in fact, UTM is the hottest buzzword in the industry today. The term describes the combination of several security technologies on one device. The typical UTM technologies are the following: stateful firewall; IPS; antivirus; antispyware;
antiphishing; anti-adware; antispam; and Web filtering. These technologies are typically included on a firewall that employs stateful inspection as its core technology. UTM is generally used in lower-speed deployments of gigabit-per-second throughput or less.

UTM increases the security of a stateful firewall by adding different layers of inspection while still maintaining throughput, one of the key benefits of stateful inspection. The Intrusion Prevention System functions implemented in UTM are usually subsets of full-blown IPS features. This form of IPS was formerly known as Deep Inspection or Deep Packet Inspection. The
IPS feature looks for specific attacks inside flows. These attacks are usually divided into categories of severity. The IPS component is usually deployed to stop the most critical attacks that are active threats, such as worms.
A discussion of the IPS features of UTM can be found at www.securityfocus.com/infocus/1716. Network-based antivirus technology is often limited to a small set of protocols. These protocols are the ones in which viruses are most commonly found, such as HyperText Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), and Post Office Protocol version 3 (POP3). Anti-x technologies (antispyware, anti-adware, and antispam) block or at least limit the amount of incoming spyware, adware, and spam. These products can be developed by the firewall vendor themselves or they can be
products developed by partners. As with the antivirus products, these products are used only on specific protocols, such as mail protocols in the case of antispam.
Web filtering allows you to block Web sites that are inappropriate for your organization. By including filtering on the firewall, you reduce the number of devices that need to be managed in your environment.

The features for integrated Web filtering can be limited as opposed to using a full installation of a filtering product. Often, Web filtering is done by partnering with a major player such as Websense or Surf Control.
UTM features are often best deployed in low throughput environments with low user
counts. Over the years, performance of such features has become much better. In the past, you would never want to deploy these features in environments with more than 50 people. Today, however, many products can support several hundred users. UTM is a great technology to add to your environment, and the future for it looks bright.

Intrusion Prevention System (IPS) technologies have been deployed on stand-alone devices in the past. However, today you can run a complete IPS system on your firewall, usually by combining a stateful firewall technology with an IPS engine. Throughput typically depends on the implementation. Some vendors choose to dedicate specific hardware resources to the IPS inspection. These devices have the highest throughput, much more than a completely software-based implementation. This IPS technology deployment differs from UTM because it is much more feature rich and supports more protocols. A typical UTM deployment consists of a couple hundred signatures and supports a dozen protocols, whereas a true IPS deployment consists of several thousand signatures and 40 or more protocols. A signature is a specific pattern or combination of patterns that matches an attack. The inclusion of IPS on firewalls provides stateful firewalls with the security that an application proxy can
provide yet at incredibly fast, multigigabit speeds.

You can find a more in-depth discussion of IPS at www.securityfocus.com/infocus/1670.

Network Address Translation

Network Address Translation (NAT) is a technology that allows you to change one IP address into another as a packet passes through a firewall. This can be done to the source IP, destination IP, or both. NAT gives you the ability to do several different things, the first being the ability to hide your network's true IP address range. You can use a set of nonroutable IP addresses for your private network. These typically come from the Request For Comment (RFC) 1918 address set. Because these addresses are not routable on the Internet, you need to hide them behind public IP addresses.

Most organizations do not have the ability to provide one public IP address for each private IP address. In these cases, a combination of NAT and Port Address Translation (PAT) is used. PAT swaps the source port of the packet to a higher port and then uses a single IP address to hide many internal IP addresses behind. The firewall tracks the connection by mapping the original source port to the new PAT port.

Doing so allows it to know which connection belongs to which internal IP.
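The bookkeeping described above can be sketched as a small translation table; the addresses and the port allocator here are illustrative:

```python
import ipaddress
from itertools import count

# Sketch of a PAT translation table: many private sources share one
# public IP, distinguished by a rewritten source port.
PUBLIC_IP = "198.51.100.1"
_next_port = count(10000)         # simplistic port allocator
pat_table = {}                    # pat_port -> (orig_ip, orig_port)

def translate_out(src_ip: str, src_port: int):
    """Rewrite an outbound packet's source to the shared public IP."""
    assert ipaddress.ip_address(src_ip).is_private  # RFC 1918 source
    pat_port = next(_next_port)
    pat_table[pat_port] = (src_ip, src_port)
    return PUBLIC_IP, pat_port

def translate_in(pat_port: int):
    """Map a returning packet back to the internal host."""
    return pat_table[pat_port]

pub_ip, port = translate_out("192.168.1.10", 51515)
assert translate_in(port) == ("192.168.1.10", 51515)
```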
To read more about NAT, go to www.tcpipguide.com and search for NAT. You can find a more indepth discussion there.

Virtual Private Networks
A Virtual Private Network (VPN) is created by employing a protocol that allows packets to be transported between two endpoints yet seem as though they are part of the same network. Using one of several protocols, such as Multiprotocol Label Switching (MPLS), IPsec, or Generic Routing Encapsulation (GRE), you can create a VPN. Most firewalls create VPNs using IPsec. IPsec is a protocol suite that enables the secure
transport of traffic between two endpoints. Most firewall products on the market today allow for the creation of IPsec VPNs.

Application Proxy Firewall

An application proxy is the most secure firewall technology on the market today. An application proxy firewall operates as a middleman to all of the connections that attempt to pass through it. As the technology’s name suggests, this type of firewall proxies an application’s connection. When a client attempts to make a connection through the firewall, the firewall terminates the connection to it. Then the firewall opens and initiates a connection to the destination host on behalf of the client. All the data can be inspected by the application proxy firewall as it passes between the client-proxy connection and proxy-server connection.

This type of separation, plus the capability to inspect all the data, is why the application proxy firewall is the most secure. The firewall must have a protocol decoder built in for each of the supported protocols. If a dedicated decoder isn't available, it is still possible to support the protocol with a generic proxy. A generic proxy, however, does not provide the same level of inspection as a custom protocol decoder. The generic proxy can still proxy a connection but is unable to understand the application inside the connection.
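The data path can be illustrated with in-memory streams standing in for the two TCP connections; the inspection rule is a made-up placeholder:

```python
import io

# Toy data path of a proxy firewall: the client and server never share
# a connection; every byte crosses the proxy, which can inspect (and
# veto) it. In-memory streams stand in for the two TCP connections.
FORBIDDEN = b"TRACE"   # illustrative inspection rule

def proxy_pump(client_conn: io.BytesIO, server_conn: io.BytesIO) -> bool:
    """Copy the client's request to the server, inspecting as we go.
    Returns False (and forwards nothing) on a policy hit."""
    request = client_conn.read()
    if FORBIDDEN in request:
        return False                    # proxy refuses the request
    server_conn.write(request)          # proxied to the server
    return True

client = io.BytesIO(b"GET /index.html HTTP/1.1\r\n\r\n")
server = io.BytesIO()
assert proxy_pump(client, server)
assert server.getvalue().startswith(b"GET /index.html")

bad = io.BytesIO(b"TRACE / HTTP/1.1\r\n\r\n")
assert not proxy_pump(bad, io.BytesIO())
```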

The application proxy must open a connection for each session passing through the firewall, which takes a great amount of work on the firewall’s part. The slower performance that results from managing so many connections has led to the general disuse of this technology as the main firewall for an organization.

However, many companies still use application proxy firewalls in limited-use scenarios and for environments in which performance isn’t a factor.
Application proxy firewalls are most commonly used for providing Web-based services. This use includes an authenticated proxy for monitoring outbound Web access and application accelerator products. Application accelerator products sit in front of the Web servers and proxy connections while also providing SSL acceleration and content compression.

The two most notable vendors providing application proxy firewalls today are Microsoft and Secure Computing. Microsoft uses proxy technology in its Internet Security and Acceleration Server (ISA) product. The ISA server, although not used as a main firewall device, is still highly popular in Microsoft-focused organizations. Secure Computing's Sidewinder G2 firewall is also a widely deployed product. Secure Computing purchased the Gauntlet firewall in 2002. Gauntlet was the most popular application proxy firewall during the peak usage of the application proxy technology.

You can read more about ISA server at www.microsoft.com/isaserver/.

You can find more information about Secure Computing and the G2 firewall at www.securecomputing.com.

Stateful Firewall

A stateful firewall was invented to resolve the shortcomings of packet filtering. Originally called a circuit-level firewall, the stateful firewall is considered a second-generation technology. As its name implies, this firewall technology is aware of the state of the ongoing communications. This firewall technology is the
most commonly deployed today in firewall products.

The ability to maintain state is crucial for almost all security deployments. Stateful firewall technology is based upon a few important concepts. In contrast to packet filters, stateful firewalls watch and maintain the entire state of a connection. A connection is made up of two separate flows. A client system initiates
a connection to a remote server. This flow starts the session setup. At this point, the firewall must determine the beginning state of the communication. The determination is based upon the type of protocol being used.

We first look at the truly stateful transport protocol, TCP (Transmission Control Protocol). TCP has a clear beginning, middle, and end to each of its network conversations. When a TCP connection begins, the client marks the initial packet with a SYN flag. This flag tells the remote host that the client system wants to initialize a connection. The server sends a SYN/ACK packet, acknowledging the original SYN packet. To confirm that this packet was received, the client system sends an ACK back to the server. This process is called a three-way handshake. For the remainder of the conversation, each packet carries the ACK flag. Figure 5-1 shows an example of a three-way
handshake.


To close the conversation gracefully, the closing host sends a FIN packet and the receiving host sends an ACK packet back. The receiving host sends a FIN packet and the initial closing host sends an ACK packet. This process is called a four-way handshake. A four-way handshake is done to ensure that no data is lost, with both sides acknowledging the close. A host also can abruptly close a session by sending just an RST or reset packet. Figure 5-2 shows an example of the four-way connection close.


Now you can see that identifying the entire state of the communication for a TCP session is possible, which allows a stateful firewall to keep track of the state of the session. When creating policies on a stateful firewall, you typically need to match only three components: the source IP, the destination IP, and the service you want to allow or deny. In contrast to a packet filter firewall, the return packets are automatically allowed if they are consistent with the state of the communications. After the session is closed, the return path that was dynamically created through the firewall is closed. This process is much more secure than leaving the return path open via a static ACL such as a packet filter.
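A stateful firewall's tracking of the handshake and teardown described above can be sketched as a tiny state machine (heavily simplified; real connection trackers model many more states and both directions of the flow):

```python
# Toy connection tracker: follows a TCP session from handshake to
# close, the way a stateful firewall does.
TRANSITIONS = {
    ("NEW", "SYN"): "SYN_SENT",
    ("SYN_SENT", "SYN/ACK"): "SYN_RECEIVED",
    ("SYN_RECEIVED", "ACK"): "ESTABLISHED",
    ("ESTABLISHED", "FIN"): "CLOSING",
    ("CLOSING", "ACK"): "CLOSED",
}

class Session:
    def __init__(self):
        self.state = "NEW"

    def observe(self, flags: str) -> bool:
        """Advance the state; return False if the packet is out of state."""
        nxt = TRANSITIONS.get((self.state, flags))
        if nxt is None and self.state != "ESTABLISHED":
            return False            # out-of-state packet: drop it
        self.state = nxt or self.state
        return True

s = Session()
for flags in ["SYN", "SYN/ACK", "ACK"]:   # three-way handshake
    assert s.observe(flags)
assert s.state == "ESTABLISHED"
assert not Session().observe("ACK")       # ACK with no handshake: dropped
```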

UDP and Internet Control Message Protocol (ICMP) are different beasts to deal with, however. Neither of these protocols is truly stateful, so each firewall vendor uses different mechanisms to determine the state of the protocol. Vendors typically create short-lived sessions for these protocols, but the session timeout varies based upon the UDP or ICMP
implementation.

Some applications do not act in a way that is firewall friendly, which typically means that the application uses another mechanism to communicate the change of state besides the underlying transport protocol. An example is the File Transfer Protocol (FTP). This protocol negotiates a port on which the client will connect to the server. This new port is random and unpredictable from just looking at the transport
protocol. The firewall must look inside the application layer to determine this information. To do this, vendors implement what is known as an Application Layer Gateway (ALG). An ALG looks at specific protocols at the application layer, thereby allowing the firewall to monitor for changes that are not visible at the transport layer. The firewall can then create a pinhole to allow the communication to continue.
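As a concrete ALG example, the FTP PORT command encodes the data-channel address as six decimal bytes; the sketch below parses it into the (IP, port) pinhole the firewall must open:

```python
import re

def ftp_port_pinhole(command: str):
    """Parse an FTP PORT command and return the (ip, port) pinhole the
    firewall must open. PORT carries four address octets and a 16-bit
    port split into two decimal bytes (high, low)."""
    m = re.match(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", command)
    if not m:
        return None
    o = [int(x) for x in m.groups()]
    ip = ".".join(str(x) for x in o[:4])
    port = o[4] * 256 + o[5]      # reassemble the 16-bit port
    return ip, port

print(ftp_port_pinhole("PORT 192,168,1,10,197,143"))
# ('192.168.1.10', 50575)
```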

Although stateful firewalls are the most popular firewalls and are considered the mainstream technology, they are not without their downside. The biggest challenge is maintaining the sessions. Each session takes up a specific amount of resources on the firewall. A network attack can attempt to overwhelm the firewall by taking up all the available sessions. Doing so can easily crash the firewall and create a service loss on the network. To fight against such an attack, a vendor uses both its hardware and software to mitigate the risk of this occurring.

The second drawback of a stateful firewall is that it is not the most secure type of firewall. A stateful firewall does not do a full protocol decode and typically operates only at the network and transport layers (TCP/UDP/IP). However, stateful firewalls implement several different technologies to overcome this limitation. A stateful firewall provides secure transport between two networks by securing the traffic and allowing the minimal number of ports to be open at any given time. This technology balances speed and security to create the most viable firewall for organizations.

You can find additional information covering the concept of stateful firewalls at www.answers.com/topic/stateful-inspection.

An application proxy is considered the most secure firewall type.

Packet Filter Firewall

The oldest firewall technology is the packet filter, which looks at each packet as it passes through a network interface. This technology originated on Cisco routers around the year 1985. The implementation of this technology was quite simple and its concepts have been passed on to all subsequent firewall technologies.

A packet filter does exactly what its name suggests — it filters packets. The technology is designed to either allow an individual packet or deny it based on the configured filter or Access Control List (ACL).

An ACL consists of several different criteria that you can configure. At the end of each ACL is an implicit deny. This will drop any traffic that is not explicitly allowed.

The original implementation of packet filters allowed filtering only on the combination of source and destination Internet Protocol (IP) addresses.

Packet filtering is a fairly basic mechanism to control access into or out of a network. This model enforced a strict allow-or-deny policy based upon IP address alone. If a source IP address was allowed to reach a destination, the source had access to all services on that destination. In the beginning, this was
sufficient, but it was far from ideal.

As new network-based services became available, the requirement to reduce the level of access between sources and destinations increased. Because hosts now served dozens of different services, restricting access to a specific service was required. New revisions of packet filters came into existence. The second iteration of filtering capabilities allowed for the inclusion of source port, destination port, and IP protocol type as decision criteria. This change created a much stricter security implementation. Access between hosts or networks could be restricted down to the service port and protocol such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and IP. For protocols such as Internet Control Message Protocol (ICMP), the specific message types could also be selected.

A great Web site to use to learn more about TCP/IP is www.tcpipguide.com. This information can also be found in book form (The TCP/IP Guide, No Starch Press, ISBN 159327047X).

Although security capability increased, so did the complexity of the situation. Because the inspection of packets has now gone beyond just IP addressing, the problem of dealing with the bidirectional nature of communications comes into play. When two hosts talk to each other using TCP, for example, they use a set of ports. The initiating host talks to its destination host using a random port greater than 1025 to a static port of the destination host. When creating an access list, you must specify the static port of the destination host. You can see this in the following pseudo code:

From source IP X to dest IP Y with source port 1025-65535 to destination port 80

Although this ingress-based policy is very straightforward, creating a return policy to restrict egress traffic is not as easy. Because the nature of TCP communications involves the creation of two separate flows to create a session, you must configure an inverse policy to allow return traffic from the server
back to the host. You can see this in the following pseudo code:

From source IP X to dest IP Y with source port 80 to destination port 1025-65535

The issue here is that the source IP address can now access the destination on any port. Although this does not seem like an issue, it ends up being a huge problem by allowing unsolicited connections back through the network to the destination host. The possibility thereby exists for an attacker to attempt to hijack sessions or exploit vulnerabilities on the OS or its hosted services. This situation created the need to monitor the state of the connection. The resulting technology is called a stateful firewall, which is discussed in the following section.

Although this development may seem to render ACLs completely useless, the reality is quite the contrary. Packet filtering technology is still used today and it is considered best practice to do so. The technology is deployed in many locations where monitoring state is not required. You can filter out network garbage such as specific protocols or ports that you may not want your firewall to deal with. Some
routers are capable of doing this in hardware-based processing, which allows for line rate filtering of network traffic.

The capabilities of a packet filter today are extended in many ways. In some devices, you can filter down at a very low level. The pseudo code that follows this paragraph is an example. You can now look deeper in the packet to make a match, which includes looking deeper inside of protocols to look at specific flags
or options being set. Doing so extends the older firewall technology to keep it relevant today.

From source IP X to dest IP Y with TCP flag SYN and TCP flag FIN
OR
From source any to destination any where IP protocol is equal to 50
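The pseudo-rules above can be turned into a small first-match evaluator with an implicit deny at the end; the rule and packet field names are illustrative:

```python
# Sketch of an ACL matcher for the pseudo-rules above. An unmatched
# packet falls through to the implicit deny at the end of the list.
ACL = [
    {"action": "deny",  "proto": "tcp", "flags": {"SYN", "FIN"}},
    {"action": "allow", "proto": "tcp", "dst_port": range(80, 81)},
    {"action": "allow", "ip_proto": 50},   # IP protocol 50 (ESP)
]

def evaluate(packet: dict) -> str:
    """First-match evaluation: every criterion in a rule must hold."""
    for rule in ACL:
        if "proto" in rule and packet.get("proto") != rule["proto"]:
            continue
        if "flags" in rule and not rule["flags"] <= packet.get("flags", set()):
            continue
        if "dst_port" in rule and packet.get("dst_port") not in rule["dst_port"]:
            continue
        if "ip_proto" in rule and packet.get("ip_proto") != rule["ip_proto"]:
            continue
        return rule["action"]
    return "deny"                      # implicit deny

# A SYN/FIN packet (an illegal flag combination) is dropped first:
assert evaluate({"proto": "tcp", "flags": {"SYN", "FIN"}}) == "deny"
assert evaluate({"proto": "tcp", "flags": {"SYN"}, "dst_port": 80}) == "allow"
assert evaluate({"ip_proto": 50}) == "allow"
assert evaluate({"proto": "udp", "dst_port": 53}) == "deny"
```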

Understanding Regulations, Legislation, and Guidance Interview Q&A

Q: Why does my organization need to worry about regulations and legislation?
A: Two concepts drive this discussion: compliance and due diligence. If your organization falls under any particular piece of legislation or regulation, you must show that you are taking steps to be in compliance with the directive. If you are not in compliance, you may face fines or other sanctions against your organization. The other concept is due diligence. Due diligence addresses a logical, minimum, and necessary level of security within organizations to support and serve the customer and the employees. A failure of due diligence can result in customer dissatisfaction
and employee loss.

Q: Don’t all the regulations and legislation basically say the same thing?
A: Approximately 80 percent of all information security–related regulations and legislation say basically the same thing, which relates to the concept of due diligence. The remaining 20 percent relates to industry- or government-specific implementation requirements.

Q: What is the difference between a requirement and guidance?
A: A requirement is something you must do. Guidance is something you might consider doing and implementing if it makes sense in your environment.

Q: What federal law regarding computer security and compliance applies to all government agencies under the executive branch of the United States government?
A: FISMA (the Federal Information Security Management Act).

Q: What FIPS document provides the guidance on the categorization and classification of risk for federal computer systems?
A: FIPS 199 – Standards for Security Categorization of Federal Information and Information Systems.

Q: What two NIST Special Publications provide the listing of security controls for federal information systems and how to assess the controls?

A: NIST SP 800-53 – Recommended Security Controls for Federal Information Systems and NIST SP 800-53A – Guide for Assessing the Security Controls in Federal Information Systems.

Q: What government agency has the authority to set requirements for systems that contain national intelligence information?
A: The Central Intelligence Agency (CIA). The Director of the CIA establishes rules for intelligence systems, working with the DoD and the military services.

Q: What is considered to be national infrastructure?
A: Food supplies, water supplies, power, public health, national defense, national icons, and national financial stability.

Q: I am a publicly held health care organization. What regulations or legislation do I have to worry about?
A: HIPAA, Sarbanes-Oxley, state requirements, and possibly PCI.

Q: Why was Sarbanes-Oxley passed?
A: In the wake of financial scandals at Enron and other companies, Congress passed Sarbanes-Oxley in an attempt to get public companies to provide accurate and ethical financial results, protecting shareholders and employees from being financially hurt by the intentional actions of individual decision-makers in the organization. Under Sarbanes-Oxley, CEOs and CFOs must account for the accuracy of the information in their financial statements and can be held individually accountable if the statements are not accurate.

IT Security Interview Q&A

Q: What is access control?
A: Access control provides the mechanism for ensuring that only authorized individuals can access organizational information. More stringent access control mechanisms are needed as the value of the information to the organization increases.

Q: What is Least Privilege?
A: Least Privilege states that users should have access only to exactly the information required to perform their job duties and nothing more.

Q: How do you define confidentiality?
A: Confidentiality basically means keeping private information private. You don’t want outside competitors seeing your organization’s work in research and development, and you don’t want sensitive customer information being stolen and used for identity theft or fraud.

Q: What is integrity as it applies to information security?
A: Integrity is ensuring that information remains in the proper state while it’s being used or stored. A loss of integrity occurs if information is modified in an unauthorized manner.

Q: What is availability?
A: Availability means that information is ready and waiting, right when you need to access it.

Q: How do you define risk?
A: Risk is a combination of threat, vulnerabilities, and impact to the organization (or value). Risk exists when all three of these elements are present simultaneously. For example, if an organization knows of a threat agent with a desire to steal its critical research information and a vulnerability exists that could allow that to happen, risk is present.

Q: What is the importance of classifying data?
A: Classifying data allows an organization to differentiate between routine information types and those types that have a critical impact on how it does business. Classifying also allows management to appropriately budget for the protection of varying information types, instead of protecting everything at the same level and wasting resources.

Q: How do you describe data labeling?
A: Data labeling is intended to aid in the identification of information. After information has been appropriately identified to users, steps can be taken to ensure its security, such as storage and handling measures.

Describe Routing Filters and What They Accomplish

Route filters are used in several routing protocols. Most common are the OSPF and BGP implementations.

OSPF uses route filters, or route maps, to restrict summary routes and prevent routes from being imported into the route table. Most route maps use match clauses to match prefixes that they wish to accept or deny.

BGP routers use route filters to enforce policy. There are three filter types that can be applied to match updates exchanged between BGP speakers:

❑ Path filters: Using the AS-PATH attribute, if the update matches the filter criteria, the update is accepted or denied.
❑ Prefix filters: Using the prefix in the NLRI, if the update matches the filter criteria, the update is accepted or denied.
❑ Route maps: As with the interior routing protocols, route maps can have more actions associated with the match criteria. Routes can be accepted or denied, but attributes can be changed as well. There is also work being done in the field of exchanging the route filters between BGP speakers.
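A prefix filter can be sketched in a few lines. This is a minimal illustration of the first-match-wins, implicit-deny behavior common to BGP prefix lists; the rule table and function names are assumptions for this example, not any vendor's syntax.

```python
import ipaddress

# Ordered permit/deny rules; first match wins, with an implicit deny at the end.
RULES = [
    ("deny",   ipaddress.ip_network("10.0.0.0/8")),   # drop private space
    ("permit", ipaddress.ip_network("0.0.0.0/0")),    # allow everything else
]

def filter_update(prefix):
    net = ipaddress.ip_network(prefix)
    for action, match in RULES:
        if net.subnet_of(match):
            return action == "permit"
    return False  # implicit deny

print(filter_update("10.1.0.0/16"))   # False (denied)
print(filter_update("192.0.2.0/24"))  # True (permitted)
```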

For more information on OSPF and BGP route filtering, check out Routing TCP/IP, Volume 1 (2nd Edition) (CCIE Professional Development) by Jeff Doyle and Jennifer Carroll (Cisco Press, 2006). For more information on BGP outbound route filtering, check out “Outbound Route Filtering Capability for BGP-4 - draft-ietf-idr-route-filter-16.txt,” by E. Chen and Y. Rekhter.

Explain BGP, the Differences between BGP and OSPF, What Prefixes Are, and What Attributes and Types Are Used in BGP

The Border Gateway Protocol (BGP) is a favorite subject for many technical interviewers. It is the exterior routing protocol of choice in today’s networks and is quite different from interior routing protocols such as OSPF. BGP fulfills the role of mediating between two “administratively controlled” networks.

These administratively controlled networks are known as autonomous systems (ASs). BGP, requiring a reliable connection between peers, uses TCP port 179. Each peer session gets a single TCP session. BGP is an application layer protocol, so it requires the TCP session to be established before exchanging any route information. BGP sessions can be authenticated using MD5 signatures when exchanging updates.
An UPDATE message can carry a variable number of attributes; however, an attribute cannot be repeated. As for prefixes, an UPDATE message can advertise only one set of path attributes (one route), and it can also list routes to be withdrawn. BGP is considered a path vector protocol because it stores path attributes in addition to route information. Route selection is done in a deterministic fashion based on best-route policy, and the policy is based on the path attributes. Where interior routing protocols use metrics such as delay, link utilization, or hops, BGP does not. Understand that BGP is capable of running in two modes: exterior and interior.

EBGP is used for peering between different autonomous systems (AS). IBGP is used for routers within the same AS. Path attributes are different for the two modes; these are discussed shortly.

There are two key differences between BGP and OSPF (or any interior routing protocol). The first is how the protocols scale to accommodate large numbers of routes. BGP scales well because it sends a complete route update only once, when a session is established with a peer; after that, the BGP speaker sends only incremental changes. Even though OSPF mostly sends link-state information, there are still periods in which all of its routing information is sent. The second key difference is BGP's support for path attributes, which are used to form routing policies. This works well when you have to route between separately owned and maintained networks (autonomous systems). Routing policies allow you to decide whether to accept, reject, or change (summarize/aggregate) routes from a peer network, which helps protect the network and control how routes are propagated throughout the internal network.
A prefix is the network portion of the IP address and implies the use of classless addressing. BGP carries prefixes in the Network Layer Reachability Information (NLRI) field of the UPDATE message; the path attributes convey the prefix's characteristics to the peer router.

Another hot topic in BGP is route dampening, a feature that controls the frequency of routes changing state (up, down, up, down, and so on). This frequent changing of state is called route flapping. Most routers today can sense the flapping and remove the offending route: they monitor how often the flapping occurs and penalize the route each time. After the penalties exceed a set threshold, the route is removed and its updates are ignored. The route can be reused after a certain amount of time.

One of the greatest arguments in BGP is which attributes should or should not be used when sharing information between two networks. (A quick definition note: the words update and advertisement are used interchangeably.) In BGP, numerous path attributes accompany an update between two BGP speakers that wish to exchange routing information. We draw from RFC 4271/1771 for the following information. There are four defined categories for BGP attributes:

❑ Well-known mandatory
❑ Well-known discretionary
❑ Optional transitive
❑ Optional nontransitive

As the name implies, any vendor who wishes to implement BGP must support the well-known attributes. The mandatory attributes must be included in every update; discretionary attributes need not be. Optional attributes are ones that some BGP speakers may use and others may not; the transitive bit in the update determines whether a BGP neighbor propagates the attribute or simply deletes it. It is always good to review the well-known attributes first. There are three well-known mandatory attributes (ORIGIN, AS_PATH, and NEXT_HOP) and two well-known discretionary attributes (LOCAL_PREF and ATOMIC_AGGREGATE). All these attributes are described in the following list:

❑ ORIGIN: The Origin code is how the route originated, or the source of the route. The choices are internal gateway protocol (IGP), external gateway protocol (EGP), or incomplete. A great follow-up question is, “What is the cause of an unknown/incomplete?” Some of the most common reasons are route aggregation/summarization and redistribution.

❑ AS_PATH: The AS_PATH attribute is simply a list of all the autonomous systems (AS) that the given route in the update transits through. As the update passes through each AS, each BGP host adds its own AS to the list.

❑ NEXT_HOP: The NEXT_HOP attribute is the IP address of the first router in the next AS. And this first router may be more than one hop away. When this is the case, the interior routing protocol will compute a route to the BGP NEXT_HOP IP address. Just remember that Internal BGP sessions will not change the NEXT_HOP attribute — only external BGP sessions do.

❑ LOCAL_PREF: The local preference attribute is used to inform internal BGP peers of the preferred AS egress point for the included route.

❑ ATOMIC_AGGREGATE: The atomic aggregate attribute is used when a BGP speaker has overlapping routes from one of its peers. The BGP speaker sets the attribute when it makes a less-specific route selection. Aggregation, also known as summarization, hides network reachability and topology information; the atomic aggregate attribute is the mechanism used to hide the AS path.

Examples of the optional transitive attributes are the Aggregator, Communities, and Extended Communities attributes.

❑ Aggregator: The Aggregator attribute is a way for a BGP speaker to notify its peer that it has aggregated a given route and provides its own AS number and IP address.

❑ Communities: Communities are the “catch-all” attributes. In most large networks today, BGP communities are used to enforce policy. They do not directly affect the route selection algorithm of BGP, but they can shape how routes are treated when received in an update. There are three communities that are commonly used: NO_EXPORT, NO_ADVERTISE, and NO_EXPORT_SUBCONFED. The NO_EXPORT community attribute is a tag that notifies the peer whether the route can be exported to an external AS. The NO_ADVERTISE community attribute notifies the peer to not advertise the route at all. The NO_EXPORT_SUBCONFED community extends the NO_EXPORT attribute to include confederated ASs.

❑ Extended Communities: Extended Communities extend the BGP attributes further. There are a number of Extended Communities in draft and used in some BGP implementations. Ones to mention include the Autonomous System Specific, Route Target, Route Origin, and Link Bandwidth.

❑ MULTI_EXIT_DISC: The MED attribute is an optional, nontransitive attribute that provides a means to advertise multiple exit points for the local AS. Each exit point is given a metric, and the lowest metric marks the preferred exit point.

Much has been written on BGP, but the great references are still the RFCs. There are many, and they all deserve attention: RFC 4271, A Border Gateway Protocol 4 (BGP-4); RFC 4272, BGP Security Vulnerabilities Analysis; RFC 4273, Definitions of Managed Objects for BGP-4; RFC 4276, BGP-4 Implementation Report; RFC 1772, Application of the Border Gateway Protocol in the Internet; RFC 1773, Experience with the BGP-4 Protocol; RFC 1774/4274, BGP-4 Protocol Analysis; RFC 1997, BGP Communities Attribute; and RFC 1998, An Application of the BGP Community Attribute in Multi-home Routing; as well as the Internet-Draft draft-ietf-idr-bgp-ext-communities, BGP Extended Communities Attribute.
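The MED rule (lowest metric wins among advertised exit points) reduces to a one-line selection. The exit names and metric values below are made up for illustration.

```python
# MED-style exit selection: among exit points advertised for the same
# prefix, the one with the lowest metric is preferred.
exits = {"exit-a": 200, "exit-b": 100, "exit-c": 150}

preferred = min(exits, key=exits.get)
print(preferred)  # exit-b
```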

Draw the Diagram of a Typical OSPF Network and Explain Generally How It Works: DR, BDR, Election, ASBR, ABR, Route Redistribution, and Summarization

This question is a great one and often makes the interviewee wonder where to start. Intentionally open-ended, it draws widely varying responses. What you should convey in your answer is an in-depth knowledge of OSPF. Scratch the surface, and dive in deeper if you see positive responses from your interviewers.
Preferably using a whiteboard, start with the hierarchy of OSPF — a two-level model — and draw a diagram like the one in Figure 2-9.



Discuss having a backbone Area 0 (or 0.0.0.0) and that all areas must connect to the backbone area. Other area types include the following: stub area (an area that does not receive AS external routes); totally stubby area (an area that does not allow summary routes or external routes); and not-so-stubby area (an area that can import AS external routes and send them to the backbone area, but will not receive AS external routes from the backbone or other areas). You want to include the types of routers contained in the hierarchy: internal routers, area border routers (ABR), backbone routers, and autonomous system boundary routers (ASBR). Mention that the shortest path first (SPF) calculation is performed independently in each area. You should state that the time to converge is faster than distance-vector routing protocols (DVRPs) such as RIP. Include a brief statement on the low bandwidth requirement for LSAs. Also mention support for classless routing, Variable-Length Subnet Masking (VLSM), authentication, and multipath.

Describe the OSPF algorithm and generally how it works: Changes in the network generate LSAs, routers exchange the LSAs, and each router builds and maintains its own database. So if the network is in a steady state, there will be refresh LSAs only every 30 minutes. You want to cover the five types of routing protocol packets: Hello, Database description, Link-state request, Link-state update, and Link-state acknowledgment. Hello packets are multicast on 224.0.0.5, and routers use them to form adjacency relationships.

You want to cover the types of Link State Advertisements (LSAs): Router link (LSA type 1), Network link (LSA type 2), Network summary (LSA type 3), ASBR (LSA type 4), External (LSA type 5), and NSSA external (LSA type 7). Do not neglect a discussion of IPv6 and the fact that OSPFv3 supports it. You might briefly discuss how vendors implement OSPFv3 using a ships-in-the-night approach to support both v3 and v2 simultaneously. OSPFv3 distributes IPv6 prefixes and uses the same interfaces and nearly the same LSA types as OSPFv2, and it uses the same methods for neighbor discovery and adjacency formation. The main differences are that OSPFv3 operates per link rather than per subnet, and there can be multiple instances of OSPFv3 on a given link. You should mention that the topology in OSPFv3 is a bit different as well, using a router ID and Link ID, and because OSPFv3 uses links, there is a new Link LSA type as well as an Intra-Area Prefix LSA for the IPv6 prefixes.

To fully answer the question, you have to go through the process of neighbor finding and adjacency creation. Routers sharing a common network segment or link become neighbors using the Hello protocol: each router sends Hello packets out each interface to the multicast address 224.0.0.5, and when a router sees its own primary address in a Hello packet from another router, the two routers are neighbors. As neighbors, the routers have to agree on the following things: the area-id (of the area they belong to); a preshared password (for authentication); hello and dead intervals (how often hello packets are sent and how long to wait for a neighbor's hello); and a stub area flag (whether the router is in a stub area). Neighbor routers then form adjacencies; when the routers have exchanged their databases, they are adjacent. To limit the volume of information exchanged on a network segment, routers go through an election process that nominates a designated router (DR) and a backup designated router (BDR). The DR is the sole source for updates on the segment; all other routers on the segment exchange route information with the DR and BDR. Area Border Routers (ABRs) collect all the routes for the area and combine/summarize them into a single advertisement to the backbone area (inter-area route summarization). The backbone routers then forward these summarized routes. External route summarization may occur as well when redistributing routes between protocols.
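Inter-area summarization by an ABR can be illustrated with the standard library. This is a sketch only; the prefixes are made up, and real ABRs summarize according to configured ranges rather than automatic collapsing.

```python
import ipaddress

# Four contiguous intra-area prefixes collapsed into the single summary
# an ABR might advertise toward the backbone.
area_routes = [
    ipaddress.ip_network("172.16.0.0/24"),
    ipaddress.ip_network("172.16.1.0/24"),
    ipaddress.ip_network("172.16.2.0/24"),
    ipaddress.ip_network("172.16.3.0/24"),
]

summary = list(ipaddress.collapse_addresses(area_routes))
print(summary)  # [IPv4Network('172.16.0.0/22')]
```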

For more information on interior routing protocols, check out Routing TCP/IP, Volume 1 (2nd Edition) (CCIE Professional Development) by Jeff Doyle and Jennifer Carroll (Cisco Press, 2006).

What Is the Difference between a Routed Protocol and a Routing Protocol?

This is another “softball” question, but you would be surprised by how often it trips up candidates. A routed protocol is one that defines the network layer packet format; its header is examined at each Layer 3 hop to forward the packet. For example, IP addresses are used to forward packets from device to device in the network.

A routing protocol is one that shares routing information between routers. Routing protocols use messages to exchange routes and network health information. Examples of routing protocols are Routing Information Protocol (RIP), Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP), and Enhanced Interior Gateway Routing Protocol (EIGRP).

Describe Variable-Length Subnet Masking (VLSM)

Similar to the previous question, this is another favorite fundamental question. VLSM is a feature of OSPF, RIPv2, and BGP that enables classless routing. With classful routing protocols such as RIPv1 or IGRP, every subnet within a major network must use the same subnet mask. For example, 192.168.16.0, 192.168.17.0, and 192.168.18.0 are all Class C networks and therefore have a /24 or 255.255.255.0 subnet mask.

VLSM allows a network to use different subnet masks, such as 192.168.18.0/26 and 192.168.18.128/25, for different subnets. VLSM also supports “supernetting,” such as using 192.168.20.0/23 to include all hosts in 192.168.20.0 and 192.168.21.0. Do not be surprised if you are asked to perform a few subnet and supernet examples.
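The VLSM examples above can be checked quickly with the standard library, which is a handy way to practice for the subnet questions an interviewer may pose.

```python
import ipaddress

# Two different mask lengths carving up 192.168.18.0/24, plus the /23 supernet.
a = ipaddress.ip_network("192.168.18.0/26")
b = ipaddress.ip_network("192.168.18.128/25")
supernet = ipaddress.ip_network("192.168.20.0/23")

print(a.num_addresses, b.num_addresses)  # 64 128
print(ipaddress.ip_network("192.168.21.0/24").subnet_of(supernet))  # True
```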

For more information on VLSM, Google “VLSM site:cisco.com.”

What Is the Difference between Classful and Classless Routing Protocols?

This is another softball question and one that CCNAs are often asked. Classful routing protocols are ones that strictly follow the Class A (8-bit prefix), B (16-bit prefix), and C (24-bit prefix) address boundaries.

Examples include RIP and IGRP. Classless routing protocols are ones that throw out the traditional rules of classful routing and allow summarization of routes into smaller, more manageable groups.

Classless routing is also known as supernetting and formally as Classless Inter-Domain Routing (CIDR). For example, with the traditional Class C address of 192.168.16.0/24, a classful routing protocol would advertise only the /24, and every device on the network would share the same subnet mask. If you had subnetted your network to use 192.168.16.128/25, you would have to advertise this more specific route using a classless routing protocol. The same applies to summarization or aggregation: if you have multiple Class C networks such as 192.168.16.0/24 and 192.168.17.0/24, a classless routing protocol can advertise them together as 192.168.16.0/23. Classless routing protocols include EIGRP, OSPF, RIPv2, and BGP.
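The aggregation arithmetic above is easy to verify: two adjacent /24s share a /23 supernet.

```python
import ipaddress

# Aggregating 192.168.16.0/24 and 192.168.17.0/24 into one CIDR block.
n1 = ipaddress.ip_network("192.168.16.0/24")
n2 = ipaddress.ip_network("192.168.17.0/24")

agg = n1.supernet(new_prefix=23)
print(agg)                # 192.168.16.0/23
print(n2.subnet_of(agg))  # True: the /23 covers both /24s
```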

What Well-Known Port Numbers Are You Familiar With?

This is always a good question to probe how often a candidate deals with specific higher-layer protocols.

There are too many ports to mention, but the critical ones are the following: FTP (20-21/TCP), SSH (22/TCP), Telnet (23/TCP), SMTP (25/TCP), TACACS (49/TCP:UDP), DNS (53/TCP:UDP), TFTP (69/UDP), HTTP (80/TCP), POP3 (110/TCP), SNMP (161-162/UDP), BGP (179/TCP), LDAP (389/TCP:UDP), HTTPS/SSL (443/TCP), IKE/ISAKMP (500/UDP), RADIUS (1812-1813/UDP), and RDP (3389/TCP).
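A small lookup table is a handy memorization aid. The transports shown are the ones commonly used in practice; the IANA registry is the authoritative source, and this dictionary is just a study sketch.

```python
# A few of the well-known ports above as a lookup table.
WELL_KNOWN = {
    22: ("ssh", "tcp"),
    25: ("smtp", "tcp"),
    53: ("dns", "tcp/udp"),
    80: ("http", "tcp"),
    179: ("bgp", "tcp"),
    443: ("https", "tcp"),
    500: ("ike", "udp"),
}

print(WELL_KNOWN[179])  # ('bgp', 'tcp')
```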

Description of TCP and UDP Packet Headers

Interviewers love to throw this question in to see how well you know the fields in the TCP and UDP headers. RFC 793 defines the standard TCP header, and RFC 768 defines the standard UDP header. The TCP header has a minimum size of 20 bytes and a maximum of 60 bytes, and TCP uses IP protocol number 6. The UDP header has a fixed size of 8 bytes, and UDP uses IP protocol number 17.

Following is a list of the fields in the TCP header (shown in Figure 2-7):

❑ Source Port (16 bits): Defines the source port of the connection.
❑ Destination Port (16 bits): Defines the destination port of the connection.
❑ Sequence Number (32 bits): Identifies the byte count relative to the initial sequence number established in the first SYN packet during the three-way handshake.
❑ Acknowledgment Number (32 bits): Contains the sequence number of the next byte the receiver expects, acknowledging all bytes received correctly so far.
❑ Data Offset (4 bits): Identifies the number of 32-bit words in the header portion of the packet.
❑ Reserved (6 bits): As the name implies, this field is not currently used and is reserved for future use.
❑ Flags (6 bits): These are a series of 1-bit fields that control the connection: Urgent (URG), Acknowledgment (ACK), Push (PSH), Reset (RST), Synchronize (SYN), and Finish (FIN).

❑ Window (16 bits): This field defines how many bytes of data can be sent before an acknowledgment must be received. This is critical to the performance of a connection, and the window size is adjusted accordingly.
❑ Checksum (16 bits): This field is a ones' complement function computed over a pseudo-header, the TCP header, and the data; it is computed by the sender and verified by the receiver for integrity. If the values do not match, the segment is discarded and eventually retransmitted. The pseudo-header consists of the source and destination IP addresses, the IP protocol number, and the TCP segment length.
❑ Urgent Pointer (16 bits): This field is the byte location of any urgent data that signals to the destination host to process immediately as opposed to buffering.
❑ Options and Padding (variable): Options, if present, are padded so that the total TCP header length is a multiple of 32 bits.
❑ Data: This is the data passed from the application layer (of the TCP/IP model).
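The field layout above can be exercised with a hand-built 20-byte header. The port, sequence, and window values below are made up for illustration; the checksum is left at zero here since computing it requires the pseudo-header.

```python
import struct

# Pack and unpack the fixed 20-byte TCP header (RFC 793), no options,
# with only the SYN flag set.
hdr = struct.pack("!HHIIHHHH",
                  12345,             # source port
                  80,                # destination port
                  1000,              # sequence number
                  0,                 # acknowledgment number
                  (5 << 12) | 0x02,  # data offset = 5 words (20 bytes), flags = SYN
                  65535,             # window
                  0,                 # checksum (not computed in this sketch)
                  0)                 # urgent pointer

sport, dport, seq, ack, off_flags, win, cksum, urg = struct.unpack("!HHIIHHHH", hdr)
data_offset = (off_flags >> 12) * 4   # header length in bytes
syn = bool(off_flags & 0x02)          # SYN is the second-lowest flag bit
print(sport, dport, data_offset, syn)  # 12345 80 20 True
```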


As stated earlier in the chapter, UDP is used by applications that are not concerned with reliability or packet loss; these applications just need to transmit quickly with as little overhead as possible. The UDP header has four 16-bit fields (see Figure 2-8), making the total overhead 8 bytes. As in TCP, the Source Port and Destination Port fields define the ports to be used. The Length field is the length of the entire packet (header and data). Like the TCP checksum, the UDP Checksum field is a ones' complement function of a pseudo-header, the UDP header, and the data. One important note is that the UDP checksum is optional and can be set to 0.
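The four-field UDP header is even simpler to build by hand. The ports and payload below are illustrative; the checksum is set to 0, which RFC 768 permits to mean "not computed."

```python
import struct

# The 8-byte UDP header (RFC 768): source port, destination port,
# length (header + data), and the optional checksum.
payload = b"hello"
hdr = struct.pack("!HHHH", 53000, 53, 8 + len(payload), 0)

sport, dport, length, cksum = struct.unpack("!HHHH", hdr)
print(sport, dport, length, cksum == 0)  # 53000 53 13 True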

TCP Three-Way Handshake and Relate It to the TCP State Diagram

The purpose of this question is to probe how well you know basic fundamentals. The first part of the question is relatively easy; the second part is the kicker. The TCP three-way handshake establishes a TCP session, or connection, and is the foundation for all reliable communication on the Internet today. As shown in Figure 2-5, the originating client sends the first packet with the SYN flag and a sequence number (X). The destination server replies with a SYN/ACK, its own sequence number (Y), and an acknowledgment of the client's sequence number (X+1). The client returns an ACK, using its incremented sequence number (X+1) and acknowledging the server's sequence number (Y+1). The connection is now established, and the client can begin transmitting data. The connection remains open until one of the following happens: the client or server sends a FIN packet to finish or an RST packet to reset the connection, or the connection times out. All of this exchange is referenced in the TCP state diagram shown in Figure 2-6. It is well worth the time to understand and be able to draw this diagram from memory; it is impressive to see candidates who understand the inner workings of the protocol.
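The sequence-number arithmetic of the handshake can be sketched directly, with made-up initial sequence numbers X and Y standing in for the randomly chosen ISNs.

```python
# The three-way handshake described above, step by step.
X, Y = 1000, 5000  # illustrative initial sequence numbers

syn     = {"flags": "SYN",     "seq": X}
syn_ack = {"flags": "SYN/ACK", "seq": Y, "ack": syn["seq"] + 1}
ack     = {"flags": "ACK",     "seq": X + 1, "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["ack"])  # 1001 5001
```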



How to Use Windows special keys in Linux

Why are all the new keyboards sold with Win95 keys on them? How about making them do real keyboard functions while in X Window? Here is how.

First you need to find out which key mapping you are using. Usually it will be us, but it might also be en_US, ca, or something else. Locate the file, usually under /usr/X11/lib/X11/xkb/symbols, and edit it with your favorite editor. For me the file is /usr/X11/lib/X11/xkb/symbols/ca.

The file lists all the key codes and what they do. The key codes for the Win95 special keys are LWIN, RWIN, and MENU. All you need to do is add them to the list, with the functions you want for them. I decided to map the left WIN key to "@" and the right WIN key and MENU key to "{" and "}". Here are the lines I added:

key <RWIN> { [ braceleft ] };
key <LWIN> { [ at ] };
key <MENU> { [ braceright ] };

By browsing the file you can find all the other symbols and what they do. You can also add multiple functions to a key, by using ALT and SHIFT.

The changes will take effect when you restart X Window. With the XKB extension (you do need to have it enabled in /etc/XF86Config, by the way), it's easy to change the mapping of any key.

100 Linux Tips

Linux Non-PostScript printers

Unfortunately, most printers are not PostScript compatible, which means your LPR program won't like them. You will probably notice that the output looks weird on your printer the first time you use 'lpr' to print. This is because these models do not support PostScript, so you will need a conversion program.

Note that newer versions of RedHat already have those programs or similar filters so it may not apply to all Linux systems.

First, you need to go read the Printing HOWTO to find out how to use lpr and related printing programs.

Then, you'll need to get 2 programs from http://metalab.unc.edu:
· bjf
· aps
These are the filters to convert text and PostScript to your printer's format.

First, install bjf, which will be used to print text. Installation is very simple; type:

make
cp bjf /bin/bjf

Then, make a simple shell script to print text files and call it print.sh:
#!/bin/sh

/bin/bjf < $1 > /dev/lp0

Where /dev/lp0 is your printer.

Now, install aps by running the SETUP script in its package. It's really easy to set up, but you do need to have the GhostScript program installed first. You are now ready to print PostScript files from, for example, Netscape or XV.


How to detect two ethernet cards in Linux

To configure an ethernet card in Linux, you need to enable its driver in the kernel. The kernel will then detect your ethernet card if it is at a common IO port, but it will stop there and never check whether you have a second ethernet card.

The trick is to tell the ethernet driver that there are 2 cards in the system. The following line will tell the kernel that there is an ethernet card at IRQ 10 and IO 0x300, and another one at IRQ 9 and IO 0x340:

ether=10,0x300,eth0 ether=9,0x340,eth1

You can add that line on bootup at the "boot:" prompt, or in the /etc/lilo.conf file. Don't forget to run:

lilo

That will reload the lilo.conf file and enable changes.

Enable FTP access restrictions

When you first install Linux, it comes with a lot of Internet services running, including mail, telnet, finger and FTP. You really should disable all those that you don't need from /etc/inetd.conf and your startup scripts.

FTP may be very useful, but must be configured correctly. It can allow people to log into their accounts, it can allow anonymous users to login to a public software directory, and it can display nice messages to them.

The files that you will probably want to modify are /etc/ftpusers and /etc/ftpaccess.
The file /etc/ftpusers is very simple. It lists the people that will not be allowed to use FTP to your system. The root account, and other system accounts should be in that file.
The file /etc/ftpaccess is a bit more complex and controls the behaviour of the FTP server. It tells the server which README file to display in a directory listing, what kind of logs to create, and what messages to display.

Note that if you create an anonymous FTP area, you will need to read the FTP man page and do exactly what it tells you to avoid possible security risks.

Creating CD-ROM images in Linux

With other operating systems, such as Microsoft Windows or IBM OS/2, the license does not allow you to make your own CD-ROM with the OS on it and then distribute it.

Linux, being Open Source and free, can be copied. You can download a distribution or buy it from an online store and burn your own copy, and then install it on many computers, or give it to your friends. Usually, you will find instructions on how to do that on the FTP server for your favorite distribution. You will need the main directory on the CD-ROM. The sources are not needed since they are available from the FTP site.

Some distributions also come with ISO images of their CD-ROMs. An ISO image is a single file that can be burned onto a CD-ROM and recreates the full file system with all its files.

One thing you have to be careful about is not copying commercial programs. The basic CD-ROM where the Linux distribution is located is composed of free software, but some distributions also come with commercial programs, and you should read the license first.

LILO and boot problems

When a computer starts, the number of beeps the BIOS outputs tells you the state of the computer. On some computers, one beep means all is ok, but 2 beeps mean there is an error. LILO uses the same kind of codes.

The number of letters you see from the word LILO on the screen says what is wrong. The whole word means everything is fine, only LI means only the first part of LILO could be loaded. A full description of this is available from the Bootdisk HOWTO.

When LILO can't load, it's a major problem; this often means the boot code was corrupted, and the only way to boot is from a floppy disk. In RedHat, you can use the rescue disk; in Slackware, you can use the boot disk with the "mount" image.

When LILO is fine, it's often easier to figure out a boot problem. If the kernel panics when it tries to boot, it is usually due to a configuration error. You can tell LILO to boot another kernel you may have, like a "safe" or "old" image you kept for these cases. If the problem is in the initialization scripts, you can tell LILO to boot directly into a shell with the following boot command line:

LILO boot: linux init=/bin/sh

Where "linux" is the label of your kernel image.

Bytes per inodes

When you format a partition using Linux's primary file system, ext2, you have the choice of how many bytes per inode you want. From the man page:

-i bytes-per-inode
Specify the bytes/inode ratio. mke2fs creates an
inode for every bytes-per-inode bytes of space on
the disk. This value defaults to 4096 bytes.
bytes-per-inode must be at least 1024.

This means that with a smaller bytes-per-inode value, mke2fs creates more inodes: the partition can hold more (small) files, but the larger inode tables take up more disk space and can slow the system down. It is a space/speed trade-off.

This is similar to the cluster-size difference between FAT16 and FAT32.
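The arithmetic behind the trade-off is easy to see. A small sketch (the 1 GB partition size and the 128-byte on-disk inode size are illustrative ext2 figures):

```shell
# inode count = partition size / bytes-per-inode; each ext2 inode
# occupies 128 bytes on disk, so more inodes mean bigger inode tables.
size=$((1024 * 1024 * 1024))    # a 1 GB partition, in bytes

for ratio in 1024 4096 16384; do
    inodes=$((size / ratio))
    overhead=$((inodes * 128 / 1024))
    echo "bytes-per-inode=$ratio: $inodes inodes, ~$overhead KB of inode tables"
done
```

At the minimum ratio of 1024 the inode tables alone consume a noticeable slice of the disk, which is why 4096 is the default.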

Linux default boot mode

When a Linux system boots, it loads the kernel, all its drivers, and the networking servers, then the system will display a text login prompt. There, users can enter their user names and their passwords. But it doesn't have to boot this way.

There are three modes defined in most Linux distributions that can be used for booting. They are defined in /etc/inittab and have specific numbers. The first, runlevel 1, is single-user mode: it boots the system for one user only, with no networking. Runlevel 3 is the default mode; it loads the networking servers and displays a text login prompt. Runlevel 5 is the graphical mode: if you have X Window installed and configured, you can use it to display a graphical login prompt.

The way to change this is to edit /etc/inittab and change the initdefault line:

id:3:initdefault:

Changing the 3 to a 5 will make the system display an xdm graphical login screen on bootup.
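The change itself is a one-line substitution. A sketch, performed on a scratch copy rather than the real /etc/inittab:

```shell
# Flip the default runlevel from 3 (text login) to 5 (graphical login)
# in a copy of the inittab file. On a real system you would edit
# /etc/inittab itself, as root.
f=$(mktemp)
echo 'id:3:initdefault:' > "$f"

sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' "$f"
cat "$f"    # id:5:initdefault:
```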

Linux Default file permissions

When you create a file, the system gives it default permissions. On most systems new files get 644 (read and write for the owner; read for everyone else) and new directories get 755.

This default is set up with the umask command. To use the command, you need to find the right octal number to give it. The permissions in the umask are turned off from 666 for files and 777 for directories, so a umask of 022 gives the defaults of 644 and 755. To make your new files and directories private (600 and 700),

you would use this command:

umask 077
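You can watch the umask at work in a scratch directory:

```shell
# New file permissions start from 666, new directories from 777; the
# umask bits are then removed.
dir=$(mktemp -d)
cd "$dir"

umask 022
touch public_file         # 666 minus 022 -> 644
umask 077
touch private_file        # 666 minus 077 -> 600
mkdir private_dir         # 777 minus 077 -> 700

stat -c '%a %n' public_file private_file private_dir
```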

Setup Multiple Kernels

When you compile a new kernel, you will often change your configuration. This means you may forget to include an important driver, like the IDE driver, or otherwise make your system unbootable. The solution is to always keep your old kernel.

When you compile your kernel, the compilation procedure will often copy your old kernel into vmlinuz.old.

If it does not, you can do it manually. You should then add an entry to /etc/lilo.conf allowing you to boot your old kernel; see the lilo man page for the complete syntax. You could also add entries for different kernels, for example if you want to keep both an older stable version of the kernel and the newest development version on your system.

Note that some distributions name their kernel with the version it represents. For example, your current kernel may be /boot/vmlinuz-2.0.36-0.7.
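A typical /etc/lilo.conf with two kernel entries might look like this (the image paths, labels and root device are examples; adjust them to your system, and rerun /sbin/lilo after editing):

```
image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        read-only
image=/boot/vmlinuz.old
        label=old
        root=/dev/hda1
        read-only
```

At the "boot:" prompt you could then type old to boot the previous kernel.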

Linux International console

Most Linux distributions are configured to use a US English keyboard. If you need to type on a French or any other kind of keyboard, you will want to change the keyboard layout so special keys like accents work in the console.

The way to do this is to change the console keymap with a program called loadkeys. For example, to enable the Canadian French layout, you need to add this line to your startup files:

loadkeys cf

Here cf means the Canadian French keyboard. Other keymaps include us, fr and more.

Annoying boot messages

When recompiling your kernel, you might end up seeing strange messages on bootup like:

modprobe: cannot find net-pf-5
modprobe: cannot find char-major-14

These are messages from the module loader telling you that it can't find specific modules. This usually happens when modprobe tries to load modules that were never compiled. The way to silence these messages is to alias the modules to off.

In the file /etc/conf.modules you may want to add:

alias net-pf-5 off
alias char-major-14 off

This will stop modprobe from trying to load them. Of course, you could also resolve the problem by compiling the missing modules and making sure modprobe knows where they are.

How to allow users to run root programs

When a user starts a command, it runs with the permissions of that user. What if you want to allow them to run some commands with root permissions? You can, and that's called suid.
You can set a command to be suid root with the chmod command; for it to run as root, the file must be owned by root. It will then run with root privileges even when an ordinary user starts it. Here is how to set mybin suid root:

chmod u+s mybin

Note that you must be very careful with this option. If the command has any security hole, or allows the user to access other files or programs, the user could take over the root account and the whole system.
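You can see the effect on a scratch copy of a binary (done here on a file you own; on a real system the program would be owned by root, and root would set the bit):

```shell
# Copy a binary, set its set-uid bit and inspect the mode. The 's' in
# the owner execute slot marks the file as set-uid.
dir=$(mktemp -d)
cp /bin/ls "$dir/mybin"
chmod 755 "$dir/mybin"
chmod u+s "$dir/mybin"

stat -c '%A' "$dir/mybin"    # -rwsr-xr-x
```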

mount Floppy drive in Linux

By default, Linux will not allow users to mount drives. Only root can do it, and making the mount binary suid root is not a good idea. With a special option in the /etc/fstab file, you can change that.

This is a typical line for the fd0 (A:) drive in /etc/fstab:

/dev/fd0 /mnt auto noauto,user 1 1

The keywords here are noauto and user. Noauto tells mount not to try to mount a diskette at boot, and user allows any user to mount the drive on /mnt. The auto keyword is also interesting: it tells mount to try to find out which file system is on the diskette. You could also use msdos or ext2.

Linux Tricks: Master boot record and LILO

What is the master boot record (MBR) and why does LILO erase the old boot loader? Every hard drive reserves its first sector as the MBR, which the BIOS reads to load an operating system. Every system has its own loader: DOS has the DOS MBR, Windows NT has NTLDR and Linux has LILO.
When you install LILO, you can install it in the MBR or in a boot record for the Linux partition. If you want to keep your current boot loader, you can select the Linux partition, and make sure it is the active partition in fdisk. This way you will be able to boot to LILO, and then boot the old loader from the MBR.

If you plan on only using Linux on your system, you can tell LILO to boot right into Linux and not display a "boot:" prompt, and you can install it in the MBR.

Clear IE7 Browsing History From the Command Line

If you like to build batch files to automate cleanup on your computer, you'll probably want to include at least one of these commands in your batch script. You can automate any one of the functions on the Internet Explorer 7 Delete Browsing History dialog.


And here are the commands that correspond to the different buttons. The most important one from a cleanup perspective is the first, which deletes just the temporary internet files that are cluttering up your computer.

To use these commands, just run them from the command line, the Start menu search box in Vista, or a batch file.

Temporary Internet Files

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8

Cookies

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 2

History

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 1

Form Data

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 16

Passwords

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 32

Delete All

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 255

Delete All - "Also delete files and settings stored by add-ons"

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 4351

Note: These commands should work in Internet Explorer 7 on Windows XP or Windows Vista.

Linux tricks - Wrong memory size found

The Linux kernel will detect various settings from your computer configuration. This includes the size of memory you have. In some cases, it will find the wrong size. For example, it could find only 64 megs of memory when in fact you have 128 megs.

The trick here is to specify the amount of RAM memory you have with the "mem=" parameter. Here is what you would type when your system boots if you have 128 megs of memory:

LILO boot: linux mem=128M

This tells LILO to pass mem=128M to the kernel so it uses the full 128 megs of memory.

Tricks at the boot prompt

The Linux system uses a program called LILO to boot itself. This is the Linux Loader, and will load a kernel and can pass various parameters. This is what the "boot:" prompt is for.

At the "boot:" prompt, you can enter a lot of parameters. You can send parameters to drivers like the ethernet driver, telling it at which IRQ the ethernet card is located, or you can pass parameters to the kernel, like memory size or what to do in a panic. Reading the LILO manual will tell you all of the nice things LILO can be used for.

Note that for device drivers compiled as modules, you need to pass values when you load these drivers, and not on the "boot:" prompt.

Linux Tricks: Kernel size and modules

To configure Linux to detect a new hardware part, especially on a new kernel, you may need to recompile the kernel. If you add too many devices in the kernel configuration, you may get an error message telling you that the kernel is too big. The trick is to enable modules.

The kernel itself must stay under a certain size because it is loaded into a fixed area of memory. This is one reason why modules can be very handy. If you enable modules, you will need to build them:

make modules

and install them:

make modules_install

Then using the modprobe utility you can load selected modules on bootup. This way the kernel will be smaller and will compile with no error.

Linux tricks with more swap using a swap file

You installed a new Linux system, but forgot to set enough swap space for your needs. Do you need to repartition and reinstall? No, the swap utilities on Linux allow you to make a real file and use it as swap space.

The trick is to make a file and then tell the swapon program to use it. Here's how to create, for example, a 64 megs swap file on your root partition (of course make sure you have at least 64 megs free):

dd if=/dev/zero of=/swapfile bs=1024 count=65536

This will make a 64 meg (about 67 million bytes) file on your hard drive. You now need to initialize it:

mkswap /swapfile 65536
sync

And you can then add it to your swap pool:

swapon /swapfile

With that you have 64 megs of swap added. Don't forget to add the swapon command to your startup files so the command will be repeated at each reboot.

Swap and Memory Linux Tricks

One important setting in any protected mode operating system like Linux is the swap space. In the installation, you will need to create a swap partition. A common question is what size should the partition be?

The proper size depends on two things: the size of your hard drive and the amount of RAM you have. The less RAM you have, the more swap you will need. Usually you will want to set your swap space to twice the RAM size, with a maximum of 128 megs. This of course requires a hard drive with enough free space to create such a partition.

If you have 16 megs of RAM, making the swap space 32 megs or even 64 megs is very important. You will need it. If you have 128 megs of RAM on the other hand, you won't need much swap because the system will already have 128 megs to fill before using swap space. So a swap partition of 128 megs or even 32 megs could be enough.

If you don't select enough swap, you may add more later.
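The rule of thumb above (twice the RAM, capped at 128 megs) fits in a few lines of shell:

```shell
# Suggest a swap partition size, in megabytes, from the RAM size.
suggest_swap() {
    ram=$1                    # RAM in megs
    swap=$((ram * 2))         # twice the RAM...
    if [ "$swap" -gt 128 ]; then
        swap=128              # ...capped at 128 megs
    fi
    echo "$swap"
}

suggest_swap 16     # -> 32
suggest_swap 64     # -> 128
suggest_swap 128    # -> 128
```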

How to Install Linux with no Cdrom or Modem

Most Linux distributions come on a CD-ROM. You can also download them from an FTP site, but that requires an Internet connection. What if you have a system with no CD-ROM drive or Internet connection, like an old 486 laptop? The trick here is to have another desktop system with a CD-ROM drive, and a null-modem serial cable.

I will show you how to do it with Slackware; it is also possible with most other Linux distributions. Insert the Linux CD-ROM in the drive on the desktop and copy the A (base) and N (networking) package sets onto diskettes.

You need at least those in order to use a serial cable to transfer the rest of the packages.

Now you need to enable NFS networking on the desktop, and allow the laptop to connect. You can give a temporary IP address to the laptop, like 192.168.1.11 that you need to add to your /etc/exports file on your desktop.

To link the two systems together, this is what you need to type on the laptop:

/usr/sbin/pppd -detach crtscts lock 192.168.1.11:192.168.1.10 /dev/ttyS1 115200

And this on the PC:
/usr/sbin/pppd -detach crtscts lock 192.168.1.10:192.168.1.11 /dev/ttyS1 115200

This is assuming the cable is linked to ttyS1 (COM2) on both systems.
With NFS, you can mount the CD-ROM drive remotely and tell the installation program to use a specific path to install the remaining packages. Mount the CD-ROM with a command like this:

mount -t nfs 192.168.1.10:/cdrom /mnt

Then run the installation program:
setup

and enter the new path for the packages files.

How to Create a Swap File

To add a swap file:

  1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the number of blocks. For example, the block size of a 64 MB swap file is 65536.

  2. At a shell prompt as root, type the following command with count being equal to the desired number of blocks:

    dd if=/dev/zero of=/swapfile bs=1024 count=65536
  3. Set up the swap file with the command:

    mkswap /swapfile
  4. To enable the swap file immediately but not automatically at boot time:

    swapon /swapfile
  5. To enable it at boot time, edit /etc/fstab to include the following entry:

    /swapfile          swap            swap    defaults        0 0

    The next time the system boots, it enables the new swap file.

  6. After adding the new swap file and enabling it, verify it is enabled by viewing the output of the command cat /proc/swaps or free.
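Steps 1 through 3 above can be wrapped in a small helper. A dry-run sketch that only prints the commands (drop the echo wrappers and run as root to actually execute them; /swapfile is the example path from the steps):

```shell
# Print the commands that would create and enable a swap file of the
# given size in megabytes (step 1: MB x 1024 = 1024-byte blocks).
make_swapfile_cmds() {
    mb=$1
    blocks=$((mb * 1024))
    echo "dd if=/dev/zero of=/swapfile bs=1024 count=$blocks"
    echo "mkswap /swapfile"
    echo "swapon /swapfile"
}

make_swapfile_cmds 64
```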

How to Create an LVM2 Logical Volume for Swap

To add a swap logical volume (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to add):

  1. Create the LVM2 logical volume of size 256 MB:

    # lvm lvcreate VolGroup00 -n LogVol02 -L 256M
  2. Format the new swap space:

    # mkswap /dev/VolGroup00/LogVol02
  3. Add the following entry to the /etc/fstab file:

    /dev/VolGroup00/LogVol02   swap     swap    defaults     0 0
  4. Enable the new swap volume:

    # swapon -va
  5. Verify that the logical volume has been added properly:

    # cat /proc/swaps
    # free

How to Extend Swap on an LVM2 Logical Volume

To extend an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to extend):

  1. Disable swapping for the associated logical volume:

    # swapoff -v /dev/VolGroup00/LogVol01
  2. Resize the LVM2 logical volume by 256 MB:

    # lvm lvresize /dev/VolGroup00/LogVol01 -L +256M
  3. Format the new swap space:

    # mkswap /dev/VolGroup00/LogVol01
  4. Enable the extended logical volume:

    # swapon -va
  5. Test that the logical volume has been extended properly:

    # cat /proc/swaps
    # free

How to setup Cryptographic Filesystem using CFS

If you want to keep private your personal files, such as those containing phone numbers, correspondence or journals, you could keep them in a hidden directory named ~/.private with mode 0700, so only you could read the files. Are you chuckling yet? Then let's consider employing a stronger privacy technique: cryptography. Specifically, let's look at Matt Blaze's open-source Cryptographic Filesystem (CFS) for UNIX and Linux.

Briefly, CFS allows you to safeguard your files in encrypted form in a normal directory. By using a key (or password, if you will), you temporarily decrypt your files to clear-text form for the window of time in which you need to work with them.

CFS makes your clear-text files available to you via a local loopback NFS mount; the CFS documentation refers to this as an "attach". Modifications you make to your clear-text files then are reflected automatically in the encrypted versions. You end your CFS session with a "detach", which makes your clear-text files disappear until the next time that you attach them.

This article reports some of the benefits and methods of using CFS as of version 1.4.0beta2. Some handy tools for use with CFS also accompany this article; see the Resources section.

CFS vs. Other Tools

Other ways to improve your privacy with open source tools are available; there's TCFS, the Transparent Cryptographic Filesystem, and OpenSSL, among other tools. Here's a brief summary of the relative merits of some of them, including TCFS, CFS and OpenSSL:

  • CFS: runs in user space, and no kernel patches are required. CFS uses an ordinary NFS loopback (a local NFS export with a local mount) that may create some security worries. Use caution in exporting directories. CFS was developed on SunOS and BSDI, then ported to Linux and other OSes, which bodes well for its ongoing utility. CFS supports several choices of encryption algorithms.

  • TCFS: requires a Linux-specific NFS module or kernel configuration. The tighter kernel bindings and extended filesystem attribute requirements yield better security but, potentially, less portability.

  • OpenSSL: runs in user space, and no kernel patches are required. OpenSSL supports a wide variety of encryption methods, as well as support for hardware tokens. OpenSSL is available for Linux, MS Windows and other environments. OpenSSL handles encryption or decryption of only one stream or file at a time, as of version 3.4.

  • OpenSSH: apples and oranges. You might use OpenSSH in conjunction with the other tools, but OpenSSH is mainly for interactive session privacy, not stored data privacy.

  • Linux loop device mount: comes with Red Hat Linux. At this time, DES appears to be the only serious encryption method available for loop device mounts. It requires preparation of a fixed-size container file and either root privileges or user permissions on loop device files. See mount(8) and losetup(8).

Installing CFS

A source RPM, cfs-1.4.0.beta2j-6.2a.src.rpm, is available with the other tools accompanying this article on the LJ FTP site; see the Resources section. The beta2j version of the RPM includes, in addition to the components of the base beta2: one more security patch for Linux; two Red Hat Linux-friendly setup scripts, cfs.init and cfs-setup; and two handy tools, decrypt and dpw.py. All of these are broken out separately for those of you disinclined to use RPMs. This RPM was tested on Red Hat Linux 6.2, 7.1 and 7.2.

Always consider searching for later versions of CFS in either RPM or tarball form, and check for security patches. CFS version 1.4.1 exists as of this writing (see the Resources section); it adds support for NetBSD but no new features or bug fixes.

NFS is a prerequisite for using CFS. Be very selective with whom you share your filesystem resources--don't export your root directory and everything below it to the whole world. Consider using a personal firewall to forbid external access to most service ports, especially the ports the NFS and RPC port mapper dæmons use, 2049 and 111 (TCP and UDP), respectively.

In the following examples of commands, the # shell prompt indicates root privileges; $ is the prompt for ordinary (non-root) users of bash and Bourne shells. Make any appropriate adjustments for your choice of shell.

Install the CFS source RPM package with the usual RPM command as root:

# rpm -iv cfs-1.4.0.beta2j-6.2a.src.rpm

Afterward, build and install the CFS package as follows, again as root:

# cd /usr/src/redhat/SPECS
# rpm -bb cfs.spec
# cd ../RPMS/i386
# rpm -ivv cfs-1.4.0.beta2j.i386.rpm

If you have difficulties installing this particular RPM, by all means seek out and install a more suitable RPM or tarball of the CFS distribution. Adapt the value-added files accompanying this article (on the FTP site) to your own needs and tastes. In particular, note that some NFS setup is required. See the cfs-setup script accompanying this article or read Matt Blaze's document "CFS Installation and Operation" (see Resources).

Getting Started with CFS

The following instructions are suitable for use with Red Hat Linux 6.2, 7.1, 7.2, and 7.3; you may need to make some adjustments for your variant of Linux.

Make certain that NFS is running:

# ps auxww | grep rpc.mountd

If rpc.mountd isn't running, then crank up NFS:

# /etc/rc.d/init.d/nfs start

Then start the CFS dæmon, cfsd, by running its boot-time startup file as root:

# /etc/rc.d/init.d/cfsd start

As yourself, create a private notes directory and attach it. We'll demonstrate two ways of doing this:

  1. The easy way uses the decrypt tool that accompanies this article (see the Resources section):

    $ decrypt -init
    Key: (type your key here to create the private directory,
    and remember the key)
    Again: (type your key again here)
    Key: (retype your key to proceed with the attachment)
  2. The other way to create the private notes directory uses the native CFS cmkdir and cattach tools:

    $ mkdir ~/cdata
    $ cd ~/cdata
    $ cmkdir notes
    Key: (type your key here and remember it for future use)
    Again: (type your key again here)
    $ cattach notes $LOGNAME-notes
    Key: (re-type your key)

(In the example above, the predefined environment variable $LOGNAME contains your login name. It's used in order to avoid name collisions, but feel free to substitute a simpler clear-text directory name.)

In both cases it may take a minute or two before the CFS dæmon (cfsd) makes available the clear-text directory, $LOGNAME-notes.

Next, create a test file in the attached clear-text directory, as follows:

$ pushd /mnt/crypt/$LOGNAME-notes
$ echo "Test." > test.txt
$ popd

End your CFS session, and see what transpired in the relevant directories:

$ cdetach $LOGNAME-notes
$ ls /mnt/crypt
$ ls -R ~/cdata

The listing of ~/cdata should show an obscured name for your test.txt file, such as 03fa2aa5242d5a741866a6605de1ae3b.

Re-attach the directory in order to verify that your test file is still there. Again, there are two ways to do that:

  1. Here's the easy way, using decrypt:

    $ decrypt
    Key: (retype your key)
  2. Here's the normal way, using the CFS tool cattach:

    $ cd ~/cdata
    $ cattach notes $LOGNAME-notes
    Key: (retype your key)

Next, verify that your test file is still there:

$ cat /mnt/crypt/$LOGNAME-notes/test.txt

Now go in search of the documentation for CFS, which includes on-line man pages for the commands cmkdir, cattach, cdetach, cpasswd and others. The underpinnings of CFS are described well in Matt Blaze's papers, "CFS Installation and Operation" and "A Cryptographic Filesystem for Unix". Read these using nroff -ms /usr/doc/cfs*/notes.ms. Among other useful tidbits in these papers is a suggestion for speeding up CFS performance by modifying the NFS rsize and wsize mount options.

After that, look for the README file that accompanies this article (see the Resources section), and check out the decrypt and dpw.py tools. The decrypt script simplifies the management of your private directories with CFS. Try this command:

$ decrypt -help

The dpw.py tool provides a graphical user interface for searching a private file of passwords you maintain with decrypt. Run dpw.py and click the help button. The dpw.py tool requires the standard Python module Tkinter, among others.

Strengths and Vulnerabilities of CFS

CFS's strengths include certain kinds of error reduction or error prevention:

  • After working with your clear-text files, CFS doesn't require a separate re-encryption step, thus avoiding the problem of re-encrypting with the wrong key.

  • Revision control, at least with RCS, is less error-prone. In contrast, where files are individually decrypted with OpenSSL, accidentally checking in the clear-text file would leave it exposed.

  • CFS supports an inactivity timeout so the clear-text file isn't accidentally left available for long periods of time. Be sure to use the timeout option (-i) with the cattach CFS command.

Vulnerabilities to consider when using CFS and some other privacy tools:

  • Keyboard snooping can expose your secret key when you type the key for encryption or decryption.

  • Privileged users (intruders or not) can snoop out your attached clear-text files through various means.

  • Your clear-text files may be exposed on the network in various ways. OpenSSH can help to some extent, but it's best to confine your use of CFS and OpenSSL to your unshared local host's directly connected console and keyboard and to confine your private data and clear-text attach points to your local host's filesystems.

  • Consider keeping some private files in separate private directories; that way, not everything can be compromised simultaneously.

  • When applying revision control tools to your private files, think carefully about how to keep your clear-text files exposed only temporarily, for example, in the face of CVS's directory copying approach. With CFS, consider using RCS in the clear-text attach directory, as in cd /mnt/crypt/mycleartext && co -l myjournal.

Matt Blaze's CFS documents more thoroughly examine CFS's security issues and design considerations.

Conclusion

We shouldn't delude ourselves that CFS alone is going to protect us if we attract the interest of tenacious snoopers or if we're careless with our network security. We should use CFS for the same reason that we lock our doors and secure our windows at home. It's not necessarily going to prevent the worst, but offering some obstacles may help to keep things safer, longer.

A mobile laptop computer running Linux likely would be a fabulous place to employ CFS. A laptop, being largely self-contained and unshared, offers fewer vulnerabilities when other practical security precautions are employed, such as erecting a personal firewall and disabling unnecessary network services. And in the event that your laptop is stolen, your CFS-encrypted private files most likely will remain unseen. Don't forget to back up.

On my wish list of desired improvements to the open-source version of CFS would be hardware security token support, perhaps borrowed from OpenSSL. Requiring a hardware security token ameliorates the problem of password exposure from keyboard sniffing, although not necessarily keystroke capture over time. Also desirable would be a port of CFS to MS Windows for use with multiboot hosts.