
6 Steps to Security Policy Excellence

Dominic Saunders, Senior Vice President, Cryptzone

Striking the right balance between risk mitigation and the commercial demands of the business is an essential skill, which must be adapted according to the nature of your industry and the size, culture and risk appetite of your organization. This role needs clear ownership at senior management level. Organizations need to take a systematic and proactive approach to risk mitigation if they are to be better prepared to satisfy evolving legal and regulatory requirements, manage the costs of compliance and realize competitive advantage. Achieving and maintaining policy compliance becomes more difficult to sustain as organizations grow, become more geographically dispersed and more highly regulated. But it doesn't have to be this way.

The Purpose of Policies and Procedures

Policies and procedures establish guidelines for behaviour and business processes in accordance with an organization's strategic objectives. While typically developed in response to legal and regulatory requirements, their primary purpose should be to convey accumulated wisdom on how best to get things done in a risk-free, efficient and compliant way.

Policy Pitfalls

Here are some of the most common grounds for policy non-compliance:

- Poorly worded policies
- Badly structured policies
- Out-of-date policies
- Inadequately communicated policies
- Unenforced policies
- Lack of management scrutiny

So, what is the secret of effective policy management?

Six Steps to Policy Excellence

Step One: Create and Review

It is important to understand, when creating policies, that those created purely to satisfy auditors and regulatory bodies are unlikely to improve business performance or bring about policy compliance, as they rarely change employee behaviour appropriately. Lengthy policy documents full of technical and legal jargon may satisfy legal departments, and look impressive to auditors and regulators, but busy employees will instantly be turned off by them. External factors that affect policies are evolving all the time. For example, technology advances may lead to information security policies and procedures becoming obsolete. Additionally, changes in the law or industry regulations require operational policies to be frequently adjusted. Some policies, such as those supporting Payment Card Industry DSS compliance, have to be re-presented and signed up to on an annual basis.

Typically, most "policy" documents are lengthy, onerous and largely unreadable. Many are written using complex jargon, and most contain extraneous content that would be better classed as procedures, standards, guidelines and forms. Such documents should be associated with the policy rather than embedded in it. Documents must be written using language that is appropriate for the target audience and should spell out the consequences of non-compliance. Smaller, more manageable documents are easier for an organization to review and update, while
also being more palatable for the intended recipients. Inadequate version control and high production costs can both be addressed by automating the entire process using an electronic system.

Step Two: Distribute

A key step in the policy management lifecycle is to ensure that staff are aware of relevant policies and procedures. Organizations need to distribute policies, both new and updated, in a timely and efficient manner, and these need to be consistently enforced across the organization. After all, what is the point of expending considerable effort and cost to write and approve policies if they are not effectively distributed and read?

Step Three: Achieve Consent

In many cases, regulatory requirements call for evidence of policy acceptance, demanding a more proactive and thorough approach to the policy management lifecycle. A process needs to be implemented that monitors users' responses to policies. Policy distribution should be prioritised, ensuring that higher-risk policies are signed off earlier by users than lower-risk documents. For example, an organization may want to ensure that a user signs up to its Information Governance policy on the first day of employment, whilst allowing up to two weeks to sign up to the Travel & Expense Policy. Systems need to be in place to grant a user a set period to process a particular document, after which the system should automatically force the user to process it.

Step Four: Understanding

To monitor and measure staff comprehension of policies and the effectiveness of associated documentation, organizations should test all, or perhaps a subset of, users. Any areas that show weaknesses can be identified and corrected accordingly. Additional training or guidance may be necessary or, if it is the policy that is causing confusion, it can be reworded or simplified.
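The sign-off priorities described in Step Three come down to simple date arithmetic. A minimal sketch, assuming a hypothetical deadline table (the policy names, grace periods and function are invented for illustration, not features of any particular product):

```python
from datetime import date, timedelta

# Hypothetical deadlines: policy name -> days allowed after the start date.
DEADLINES = {
    "Information Governance Policy": 0,   # must be signed on day one
    "Travel & Expense Policy": 14,        # up to two weeks
}

def overdue_policies(start_date, signed, today):
    """Return, sorted, the policies whose sign-off deadline has passed
    without the user's consent being recorded."""
    return sorted(
        name for name, days in DEADLINES.items()
        if name not in signed and today > start_date + timedelta(days=days)
    )

start = date(2024, 1, 1)
# Day 10: Information Governance already signed, T&E still within two weeks.
print(overdue_policies(start, {"Information Governance Policy"}, date(2024, 1, 10)))
# Day 20: nothing signed, so both deadlines have been missed.
print(overdue_policies(start, set(), date(2024, 1, 20)))
```

A real system would, as the text says, escalate at this point and force the user to process the outstanding document.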
Step Five: Auditability

In many cases regulatory requirements call for evidence of policy acceptance, which demands a more proactive and thorough approach to the policy management lifecycle. The full revision history of all documents needs to be maintained, along with who has read what, when and, if possible, how long it took, and who declined a policy and why. This record should be stored for future reference and may be stored in conjunction with test results.

Step Six: Reporting

To effect change and improve compliance it helps if key performance indicators relating to policy uptake are clearly visible across all levels of an enterprise. Dashboard visibility of policy uptake compliance by geographical or functional business unit helps to consolidate information and highlight exceptions. Being able to quickly drill down for specific details in areas of poor policy compliance dramatically improves management's ability to understand and address underlying issues.

Bringing It All Together

To check the level of policy compliance that exists within your organization you need to periodically answer the following questions:

- Where are your current policies? Are they accessible to staff?
- Who has seen your current policies?
- Who has read your current policies?
- Do your staff understand them?
- Are your policies being followed by everyone?
- Are your policies effectively managed?
- Are your policies up to date?
- Can you prove this to the auditors?

For those organizations that are serious about staff reading, understanding and signing up to policies, adopting automated policy management software is worth considering. This raises standards of policy compliance and provides managers with practical tools to improve policy uptake and adherence. Ultimately, policy compliance is about getting people to do the right thing, in the right way, every time. Ensuring everyone understands what is expected of them and how they are required to carry out their jobs according to corporate policies and procedures is not a new practice. Embedding an automated policy management solution into an organization is really the only viable way to create and sustain a culture of compliance, where people understand their responsibilities and the importance of adhering to corporate standards. Doing so empowers people to do their jobs within an acceptable governance framework rather than constrained by a rigid set of unenforceable rules. By effectively handling the policy management lifecycle you can create a firm foundation for effective risk mitigation and governance. Automation helps board members, line managers and the general workforce get to grips with policy compliance, and offers a cost-efficient approach to achieving policy excellence.

What is Computer Forensics?

Data lost intentionally or accidentally can often be recovered with the help of data recovery experts, and computer forensics is one discipline in which the cause of data loss is identified. There are many definitions of computer forensics, but in general the term refers to the detailed investigation of computers. It examines the data held on a computer to establish what exactly happened to the machine and who is responsible. The investigation process starts with an analysis of the situation on the ground and moves on to the internals of the computer's operating system. Computer forensics is a broad field, mainly concerned with crimes committed using computers in violation of the law. Various laws have been enacted to check such crimes, but they persist, and the criminals are often difficult to identify for lack of evidence. These difficulties can be overcome with the help of computer forensics. The goal of computer forensic experts is not only to find the criminal but also to find the evidence and present it in a manner that leads to legal action against the culprit. The major reasons for criminal activity on computers are:

1. Unauthorized use of computers, mainly stealing a username and password
2. Accessing the victim's computer via the Internet
3. Releasing a malicious computer program, that is, a virus
4. Harassment and stalking in cyberspace
5. E-mail fraud
6. Theft of company documents

Computer forensics facilitates the organized and careful detection of computer-related crime and abuse cases. The computer forensics expert should have a great deal of knowledge of data recovery software as well as hardware, and should possess the qualifications and knowledge required to carry out the task.

About the FAQ

This collection of Frequently Asked Questions (FAQs) and answers has been compiled over a period of years, seeing which questions people ask about firewalls in such fora as Usenet, mailing lists, and Web sites. If you have a question, looking here to see whether it's answered before posting your question is good form. Don't send your questions about firewalls to the FAQ maintainers. The maintainers welcome input and comments on the contents of this FAQ. Comments related to the FAQ should be addressed to paul@compuwar.net. Before you send us mail, please be sure to see sections 1.2 and 1.3 to make sure this is the right document for you to be reading. Please use a subject line of FW-FAQ in your message.

1.2 For Whom Is the FAQ Written?

Firewalls have come a long way from the days when this FAQ started. They've gone from being highly customized systems administered by their implementors to a mainstream commodity. Firewalls are no longer solely in the hands of those who design and implement security systems; even security-conscious end-users have them at home. We wrote this FAQ for computer systems developers and administrators. We have tried to be fairly inclusive, making room for the newcomers, but we still assume some basic technical background. If you find that you don't understand this document, but think that you need to know more about firewalls, it might well be that you actually need to get more background in computer networking first. We provide references that have helped us; perhaps they'll also help you. We focus predominantly on ``network'' firewalls, but ``host'' or ``personal'' firewalls will be addressed where appropriate.

1.3 Before Sending Mail

Note that this collection of frequently-asked questions is a result of interacting with many people of different backgrounds in a wide variety of public fora. The firewalls-faq address is not a help desk. If you're trying to use an application that says that it's not working because of a firewall and you think that you need to remove your firewall, please do not send us mail asking how. If you want to know how to ``get rid of your firewall'' because you cannot use some application, do not send us mail asking for help. We cannot help you. Really.

Who can help you? Good question. That will depend on what exactly the problem is, but here are several pointers. If none of these works, please don't ask us for any more. We don't know.

- The provider of the software you're using.
- The provider of the hardware ``appliance'' you're using.
- The provider of the network service you're using. That is, if you're on AOL, ask them. If you're trying to use something on a corporate network, talk to your system administrator.

1.4 Where Can I Find the Current Version of the FAQ?

The FAQ can be found on the Web at http://www.compuwar.net/pubs/fwfaq/ and http://www.interhack.net/pubs/fwfaq/.

Posted versions are archived in all the usual places. Unfortunately, the version posted to Usenet, and the archives made from it, lack the pretty pictures and useful hyperlinks found in the Web version.

1.5 Where Can I Find Non-English Versions of the FAQ?

Several translations are available. (If you've done a translation and it's not listed here, please write us so we can update the master document.)

Norwegian Translation by Jon Haugsand
http://helmersol.nr.no/haandbok/doc/brannmur/brannmur-faq.html

1.6 Contributors

Many people have written helpful suggestions and thoughtful commentary. We're grateful to all contributors. We'd like to thank a few by name: Keinanen Vesa, Allen Leibowitz, Brent Chapman, Brian Boyle, D. Clyde Williamson, Richard Reiner, Humberto Ortiz Zuazaga, Theodore Hope, and Patrick Darden.

1.7 Copyright and Usage

Copyright 1995-1996, 1998 Marcus J. Ranum. Copyright 1998-2002 Matt Curtin. Copyright 2004-2009 Paul D. Robertson. All rights reserved. This document may be used, reprinted, and redistributed as is, providing this copyright notice and all attributions remain intact. Translations of the complete text from the original English to other languages are also explicitly allowed. Translators may add their names to the ``Contributors'' section.

2 Background and Firewall Basics

Before being able to understand a complete discussion of firewalls, it's important to understand the basic principles that make firewalls work.

2.1 What is a network firewall?

A firewall is a system or group of systems that enforces an access control policy between two or more networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy. If you don't have a good idea of what kind of access you want to allow or to deny, a firewall really won't help you. It's also important to recognize that the firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on everything behind it. Administrators for firewalls managing the connectivity for a large number of hosts therefore have a heavy responsibility.
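The pair of mechanisms described above can be sketched in a few lines of Python. This is only a toy illustration of the idea, not any real firewall's rule language; the rules, addresses and ports are invented:

```python
import ipaddress

# An ordered list of permit/block rules, evaluated top-down. Anything that
# matches no rule falls through to a default-deny policy at the end.
RULES = [
    ("permit", "10.0.0.0/8", 25),    # internal hosts may reach SMTP
    ("block",  "0.0.0.0/0",  23),    # telnet is blocked from anywhere
]

def decide(src_ip, dst_port):
    """Return 'permit' or 'block' for a (source address, destination port)."""
    for action, network, port in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(network)
                and dst_port == port):
            return action
    return "block"   # the blocking mechanism: deny everything else

print(decide("10.1.2.3", 25))    # permit: matches the first rule
print(decide("192.0.2.8", 80))   # block: no rule matched, default deny
```

The point of the sketch is the one the text makes: the rule list *is* the access control policy, and everything behind the firewall is subject to it.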

2.2 Why would I want a firewall?

The Internet, like any other society, is plagued with the kind of jerks who enjoy the electronic equivalent of writing on other people's walls with spraypaint, tearing their mailboxes off, or just sitting in the street blowing their car horns. Some people try to get real work done over the Internet, and others have sensitive or proprietary data they must protect. Usually, a firewall's purpose is to keep the jerks out of your network while still letting you get your job done. Many traditional-style corporations and data centers have computing security policies and practices that must be followed. In a case where a company's policies dictate how data must be protected, a firewall is very important, since it is the embodiment of the corporate policy. Frequently, the hardest part of hooking to the Internet, if you're a large company,
is not justifying the expense or effort, but convincing management that it's safe to do so. A firewall provides not only real security--it often plays an important role as a security blanket for management. Lastly, a firewall can act as your corporate ``ambassador'' to the Internet. Many corporations use their firewall systems as a place to store public information about corporate products and services, files to download, bug-fixes, and so forth. Several of these systems have become important parts of the Internet service structure (e.g., UUnet.uu.net, whitehouse.gov, gatekeeper.dec.com) and have reflected well on their organizational sponsors. Note that while this is historically true, most organizations now place public information on a Web server, often protected by a firewall, but not normally on the firewall itself.

2.3 What can a firewall protect against?

Some firewalls permit only email traffic through them, thereby protecting the network against any attacks other than attacks against the email service. Other firewalls provide less strict protections, and block services that are known to be problems. Generally, firewalls are configured to protect against unauthenticated interactive logins from the ``outside'' world. This, more than anything, helps prevent vandals from logging into machines on your network. More elaborate firewalls block traffic from the outside to the inside, but permit users on the inside to communicate freely with the outside. The firewall can protect you against any type of network-borne attack if you unplug it. Firewalls are also important since they can provide a single ``choke point'' where security and audit can be imposed. Unlike in a situation where a computer system is being attacked by someone dialing in with a modem, the firewall can act as an effective ``phone tap'' and tracing tool. Firewalls provide an important logging and auditing function; often they provide summaries to the administrator about what kinds and amounts of traffic passed through them, how many attempts there were to break in, etc. Because of this, firewall logs are critically important data. They can be used as evidence in a court of law in most countries. You should safeguard, analyze and protect your firewall logs accordingly.

This is an important point: providing this ``choke point'' can serve the same purpose on your network as a guarded gate can for your site's physical premises. That means anytime you have a change in ``zones'' or levels of sensitivity, such a checkpoint is appropriate. A company rarely has only an outside gate and no receptionist or security staff to check badges on the way in. If there are layers of security on your site, it's reasonable to expect layers of security on your network.
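The logging and auditing function lends itself to simple tooling. A minimal sketch of the kind of digest a firewall can hand its administrator, assuming an invented log format (real products each emit their own):

```python
from collections import Counter

# Hypothetical log entries of the form (action, source address, destination
# port); the addresses come from the documentation ranges.
LOG = [
    ("block",  "203.0.113.9",  23),
    ("permit", "10.0.0.5",     80),
    ("block",  "203.0.113.9",  22),
    ("block",  "198.51.100.7", 23),
]

def blocked_by_source(log):
    """Count blocked attempts per source address -- one summary an
    administrator might review for signs of probing."""
    return Counter(src for action, src, _port in log if action == "block")

print(blocked_by_source(LOG))
```

Here 203.0.113.9 shows up twice, which is exactly the kind of exception a single choke point makes visible and a scattered perimeter would not.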

2.4 What can't a firewall protect against?

Firewalls can't protect against attacks that don't go through the firewall. Many corporations that connect to the Internet are very concerned about proprietary data leaking out of the company through that route. Unfortunately for those concerned, a magnetic tape, compact disc, DVD, or USB flash drive can just as effectively be used to export data. Many organizations that are terrified (at a management level) of Internet connections have no coherent policy about how dial-in access via modems should be protected. It's silly to build a six-foot-thick steel door when you live in a wooden house, but there are a lot of organizations out there buying expensive firewalls and neglecting the numerous other back-doors into their network. For a firewall to work, it must be a part of a consistent overall organizational security architecture. Firewall policies must be realistic and reflect the level of security in the entire network. For example, a site with top secret or classified data doesn't need a firewall at all: they shouldn't be hooking up to the Internet in the first place, or the systems with the really secret data should be isolated from the rest of the corporate network.

Lost or stolen PDAs, laptops, cell phones, USB keys, external hard drives, CDs, DVDs, etc. For protection against this type of data loss, you will need a good policy, encryption, and some sort of enterprise auditing/enforcement. Places that really care about Intellectual Property (IP) and data loss prevention use USB firewalling technology on their desktops and systems in public areas. The details are outside the scope of this FAQ.

Badly written, poorly thought out, or non-existent organizational policy. A firewall is the end extension of an organization's security policy. If that policy is ill-informed, poorly formed, or not formed at all, then the state of
the firewall is likely to be similar. Executive buy-in is key to good security practice, as is the complete and unbiased enforcement of your policies. Firewalls can't protect against political exceptions to the policy, so these must be documented and kept to a minimum.

Another thing a firewall can't really protect you against is traitors or idiots inside your network. While an industrial spy might export information through your firewall, he's just as likely to export it through a telephone, FAX machine, or Compact Disc. CDs are a far more likely means for information to leak from your organization than a firewall. Firewalls also cannot protect you against stupidity. Users who reveal sensitive information over the telephone are good targets for social engineering; an attacker may be able to break into your network by completely bypassing your firewall, if he can find a ``helpful'' employee inside who can be fooled into giving access to a modem pool or desktop through a ``remote support'' type portal. Before deciding this isn't a problem in your organization, ask yourself how much trouble a contractor has getting logged into the network, or how much difficulty a user who forgot his password has getting it reset. If the people on the help desk believe that every call is internal, you have a problem that can't be fixed by tightening controls on the firewalls.

Firewalls can't protect against tunneling over most application protocols to trojaned or poorly written clients. There are no magic bullets, and a firewall is not an excuse to not implement software controls on internal networks or ignore host security on servers. Tunneling ``bad'' things over HTTP, SMTP, and other protocols is quite simple and trivially demonstrated. Security isn't ``fire and forget''.

Lastly, firewalls can't protect against bad things being allowed through them.
For instance, many Trojan Horses use the Internet Relay Chat (IRC) protocol to allow an attacker to control a compromised internal host from a public IRC server. If you allow any internal system to connect to any external system, then your firewall will provide no protection from this vector of attack.

2.5 What about viruses and other malware?

Firewalls can't protect very well against things like viruses or malicious software (malware). There are too many ways of encoding binary files for transfer over networks, and too many different architectures and viruses to try to search for them all. In other words, a firewall cannot replace security-consciousness on the part of your users. In general, a firewall cannot protect against a data-driven attack--attacks in which something is mailed or copied to an internal host where it is then executed. This form of attack has occurred in the past against various versions of sendmail, ghostscript, scripting mail user agents like Outlook, and Web browsers like Internet Explorer.

Organizations that are deeply concerned about viruses should implement organization-wide virus control measures. Rather than only trying to screen viruses out at the firewall, make sure that every vulnerable desktop has virus scanning software that is run when the machine is rebooted. Blanketing your network with virus scanning software will protect against viruses that come in via floppy disks, CDs, modems, and the Internet. Trying to block viruses at the firewall will only protect against viruses from the Internet, although virus scanning at the firewall or e-mail gateway will stop a large number of infections.

An increasing number of firewalls are offering antivirus and malware capabilities. These are applied to the industry-standard protocols for email, web traffic, instant messaging, and file transfers, and only on proxyable services. These are a very small number of protocols out of thousands, and the checks only apply when traffic follows the industry standards (e.g., SMTP on port 25, Web traffic on port 80, and so on). Such antivirus/malware firewalls are of limited use unless your policies state that only industry standards will be followed, and your firewall administrators strictly adhere to this approach. They are not a panacea.
You must also balance the risks associated with the failure of a single component in an all-in-one solution, and the ability to compromise the entire system, against using different platforms for each feature. Lots of malicious software, or malware, is packed, encrypted, compressed or archived. Traditionally, antivirus authors have had issues dealing with the changing formats of, and recursive implementations of, archivers in ways that provided malware authors with more vectors to attack. Antivirus/antimalware systems should be defenses in depth--firewalls, servers, and desktops should all be protected, preferably by separate/different systems, so that if one can't protect against a particular piece of malware another might.
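The simplest screening any of these layers can do is to match content against known-bad hashes. A minimal sketch (the blocklist entry is a made-up placeholder; real scanners use signatures and heuristics, and a hash match alone is easily defeated):

```python
import hashlib

# Illustrative blocklist of known-bad SHA-256 digests. Note how fragile
# this is: packing, encrypting or archiving the same malware changes its
# hash entirely -- which is why no single layer should be the only defense.
BAD_SHA256 = {hashlib.sha256(b"malware-sample").hexdigest()}

def is_known_bad(payload: bytes) -> bool:
    """Return True if the payload's digest appears on the blocklist."""
    return hashlib.sha256(payload).hexdigest() in BAD_SHA256

print(is_known_bad(b"malware-sample"))   # True: exact match on the digest
print(is_known_bad(b"harmless file"))    # False
```

A trivially repacked variant of the same sample would return False here, which is the defense-in-depth argument in miniature: what one layer misses, another, different system might still catch.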

A strong firewall is never a substitute for sensible software that recognizes the nature of what it's handling--untrusted data from an unauthenticated party--and behaves appropriately. Do not think that because ``everyone'' is using that mailer or because the vendor is a gargantuan multinational company, you're safe. In fact, it isn't true that ``everyone'' is using any mailer, and companies that specialize in turning technology invented elsewhere into something that's ``easy to use'' without any expertise are more likely to produce software that can be fooled. Further consideration of this topic would be worthwhile [3], but is beyond the scope of this document.

2.6 Will IPSEC make firewalls obsolete?

Some have argued that this is the case. Before pronouncing such a sweeping prediction, however, it's worthwhile to consider what IPSEC is and what it does. Once we know this, we can consider whether IPSEC will solve the problems that we're trying to solve with firewalls.

IPSEC (IP SECurity) refers to a set of standards developed by the Internet Engineering Task Force (IETF). There are many documents that collectively define what is known as ``IPSEC'' [6]. IPSEC solves two problems which have plagued the IP protocol suite for years: host-to-host authentication (which will let hosts know that they're talking to the hosts they think they are) and encryption (which will prevent attackers from being able to watch the traffic going between machines). Note that neither of these problems is what firewalls were created to solve. Although firewalls can help to mitigate some of the risks present on an Internet without authentication or encryption, there are really two classes of problems here: the integrity and privacy of the information flowing between hosts, and the limits placed on what kinds of connectivity are allowed between different networks. IPSEC addresses the former class and firewalls the latter. What this means is that one will not eliminate the need for the other, but it does create some interesting possibilities when we look at combining firewalls with IPSEC-enabled hosts. Namely, such things as vendor-independent virtual private networks (VPNs), better packet filtering (by filtering on whether packets have the IPSEC authentication header), and application-layer firewalls will be able to have better means of host verification by actually using the IPSEC authentication header instead of ``just trusting'' the IP address presented.
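The idea of ``filtering on whether packets have the IPSEC authentication header'' can be sketched very simply: the Authentication Header is IP protocol number 51, so a filter can key on the protocol field. In this sketch a packet is just a dict standing in for a parsed IP header, not real packet parsing:

```python
# IP protocol number assigned to the IPSEC Authentication Header (AH).
AH_PROTOCOL = 51

def has_auth_header(packet):
    """True if the (already-parsed) packet's protocol field indicates AH."""
    return packet["proto"] == AH_PROTOCOL

packets = [
    {"src": "192.0.2.1", "proto": 51},   # AH-protected traffic
    {"src": "192.0.2.2", "proto": 6},    # plain TCP, unauthenticated
]
authenticated = [p for p in packets if has_auth_header(p)]
print([p["src"] for p in authenticated])
```

This is the combination the section describes: the firewall still decides *what* connectivity is allowed, but it can now require that permitted traffic also be authenticated rather than ``just trusting'' the source address.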

2.7 What are good sources of print information on firewalls?

There are several books that touch on firewalls. The best known are:

Building Internet Firewalls, 2d ed.
Authors: Elizabeth D. Zwicky, Simon Cooper, and D. Brent Chapman
Publisher: O'Reilly
Edition: 2000
ISBN: 1-56592-871-7

Firewalls and Internet Security: Repelling the Wily Hacker
Authors:
Bill Cheswick, Steve Bellovin, Avi Rubin
Publisher: Addison Wesley
Edition: 2003
ISBN: 0-201-63466-X

Practical UNIX & Internet Security
Authors: Simson Garfinkel and Gene Spafford
Publisher: O'Reilly
Edition: 1996
ISBN: 1-56592-148-8
Note: Discusses primarily host security.

Related references are:

Internetworking with TCP/IP, Vols. I, II, and III
Authors: Douglas Comer and David Stevens
Publisher: Prentice-Hall
Edition: 1991
ISBN: 0-13-468505-9 (I), 0-13-472242-6 (II), 0-13-474222-2 (III)
Comment:
A detailed discussion of the architecture and implementation of the Internet and its protocols. Volume I (on principles, protocols and architecture) is readable by everyone. Volume II (on design, implementation and internals) is more technical. Volume III covers client-server computing.

Unix System Security--A Guide for Users and System Administrators
Author: David Curry
Publisher: Addison Wesley
Edition: 1992
ISBN: 0-201-56327-4

2.8 Where can I get more information on firewalls on the Internet?

Site Security Handbook
http://www.rfc-editor.org/rfc/rfc2196.txt
The Site Security Handbook is an informational IETF document that describes the basic issues that must be addressed for building good site security. Firewalls are one part of a larger security strategy, as the Site Security Handbook shows.

Firewall-Wizards Mailing List
http://listserv.icsalabs.com/mailman/listinfo/firewall-wizards
The Firewall Wizards Mailing List is a moderated firewall and security related list that is more like a journal than a public soapbox.

Firewall HOWTO
http://www.linuxdoc.org/HOWTO/Firewall-HOWTO.html
Describes exactly what is needed to build a firewall, particularly using Linux.

Firewall Toolkit (FWTK) and Firewall Papers
ftp://ftp.tis.com/pub/firewalls/

Marcus Ranum's firewall related publications
http://www.ranum.com/pubs/

Texas A&M University security tools
http://www.net.tamu.edu/ftp/security/TAMU/

COAST Project Internet Firewalls page
http://www.cerias.purdue.edu/coast/firewalls/

3 Design and Implementation Issues

3.1 What are some of the basic design decisions in a firewall?

There are a number of basic design issues that should be addressed by the lucky person who has been tasked with the responsibility of designing, specifying, and implementing or overseeing the installation of a firewall.

The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place explicitly to deny all services except those critical to the mission of connecting to the Net, or is the firewall in place to provide a metered and audited method of ``queuing'' access in a non-threatening manner? There are degrees of paranoia between these positions; the final stance of your firewall might be more the result of a political than an engineering decision.

The second is: what level of monitoring, redundancy, and control do you want? Having established the acceptable risk level (i.e., how paranoid you are) by resolving the first issue, you can form a checklist of what should be monitored, permitted, and denied. In other words, you start by figuring out your overall objectives, and then combine a needs analysis with a risk assessment, and sort the almost always conflicting requirements out into a laundry list that specifies what you plan to implement.

The third issue is financial. We can't address this one here in anything but vague terms, but it's important to try to quantify any proposed solutions in terms of how much it will cost either to buy or to implement. For example, a complete firewall product may cost between $100,000 at the high end and free at the low end. The free option, of doing some fancy configuring on a Cisco or similar router, will cost nothing but staff time and a few cups of coffee. Implementing a high-end firewall from scratch might cost several man-months, which may equate to $30,000 worth of staff salary and benefits. The systems management overhead is also a consideration.
Building a home-brew firewall is fine, but it's important to build it so that it doesn't require constant (and expensive) attention. It's important, in other words, to evaluate firewalls not only in terms of what they cost now, but also continuing costs such as support.

On the technical side, there are a couple of decisions to make, based on the fact that for all practical purposes what we are talking about is a static traffic routing service placed between the network service provider's router and your internal network. The traffic routing service may be implemented at an IP level via something like screening rules in a router, or at an application level via proxy gateways and services. The decision to make is whether to place an exposed stripped-down machine on the outside network to run proxy services for telnet, FTP, news, etc., or whether to set up a screening router as a filter, permitting communication with one or more internal machines. There are benefits and drawbacks to both approaches, with the proxy machine providing a greater level of audit and, potentially, security in return for increased cost in configuration and a decrease in the level of service that may be provided (since a proxy needs to be developed for each desired service). The old trade-off between ease-of-use and security comes back to haunt us with a vengeance.

3.2 What are the basic types of firewalls? Conceptually, there are three types of firewalls:

1. Network layer
2. Application layer
3. Hybrids

They are not as different as you might think, and the latest technologies are blurring the distinction to the point where it's no longer clear whether either one is ``better'' or ``worse.'' As always, you need to be careful to pick the type that meets your needs. Which is which depends on what mechanisms the firewall uses to pass traffic from one security zone to another. The International Organization for Standardization (ISO) Open Systems Interconnect (OSI) model for networking defines seven layers, where each layer provides services that ``higher-level'' layers depend on. In order from the bottom, these layers are physical, data link, network, transport, session, presentation, and application.

The important thing to recognize is that the lower-level the forwarding mechanism, the less examination the firewall can perform. Generally speaking, lower-level firewalls are faster, but are easier to fool into doing the wrong thing. These days, most firewalls fall into the ``hybrid'' category, which do network filtering as well as some amount of application inspection. The amount changes depending on the vendor, product, protocol and version, so some level of digging and/or testing is often necessary.

3.2.1 Network layer firewalls These generally make their decisions based on the source and destination addresses and ports (see Appendix 6 for a more detailed discussion of ports) in individual IP packets. A simple router is the ``traditional'' network layer firewall, since it is not able to make particularly sophisticated decisions about what a packet is actually talking to or where it actually came from. Modern network layer firewalls have become increasingly sophisticated, and now maintain internal information about the state of connections passing through them, the contents of some of the data streams, and so on. One important distinction about many network layer firewalls is that they route traffic directly through them, so to use one you either need to have a validly assigned IP address block or to use a ``private internet'' address block [5]. Network layer firewalls tend to be very fast and tend to be very transparent to users.
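The decision procedure of a simple stateless packet screen can be sketched in a few lines of Python. The rule set, addresses, and function names here are invented purely for illustration; a real network layer firewall does the same first-match evaluation in the kernel or router firmware:

```python
from ipaddress import ip_address, ip_network

# Each rule matches on source/destination network, protocol and destination
# port, in the same spirit as a screening router's access list.
RULES = [
    # (action, src net, dst net, protocol, dst port)
    ("permit", "0.0.0.0/0", "192.0.2.25/32", "tcp", 25),   # SMTP to the mail host
    ("permit", "0.0.0.0/0", "192.0.2.25/32", "udp", 53),   # DNS to the mail host
    ("deny",   "0.0.0.0/0", "0.0.0.0/0",     "any", None), # default deny
]

def filter_packet(src, dst, proto, dport):
    """Return the action of the first matching rule, as a screening router would."""
    for action, src_net, dst_net, r_proto, r_port in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and r_proto in ("any", proto)
                and r_port in (None, dport)):
            return action
    return "deny"  # the implicit deny if no rule matches
```

Note that nothing here looks inside the payload: the filter sees only addresses, protocol, and ports, which is exactly why lower-level firewalls are fast but easier to fool.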

Figure 1: Screened Host Firewall

In Figure 1, a network layer firewall called a ``screened host firewall'' is represented. In a screened host firewall, access to and from a single host is controlled by means of a router operating at a network layer. The single host is a bastion host; a highly-defended and secured strong-point that (hopefully) can resist attack.

Figure 2: Screened Subnet Firewall

Example Network layer firewall: In Figure 2, a network layer firewall called a ``screened subnet firewall'' is represented. In a screened subnet firewall, access to and from a whole network is controlled by means of a router operating at a network layer. It is similar to a screened host, except that it is, effectively, a network of screened hosts. 3.2.2 Application layer firewalls These generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and auditing of traffic passing through them. Since the proxy applications are software components running on the firewall, it is a good place to do lots of logging and access control. Application layer firewalls can be used as network address translators, since traffic goes in one ``side'' and out the other, after having passed through an application that effectively masks the origin of the initiating connection. Having an application in the way in some cases may impact performance and may make the firewall less transparent. Early application layer firewalls such as those built using the TIS firewall toolkit, are not particularly transparent to end users and may require some training. Modern application layer firewalls are often fully transparent. Application layer firewalls tend to provide more detailed audit reports and tend to enforce more conservative security models than network layer firewalls.

Figure 3: Dual Homed Gateway

Example Application layer firewall: In Figure 3, an application layer firewall called a ``dual homed gateway'' is represented. A dual homed gateway is a highly secured host that runs proxy software. It has two network interfaces, one on each network, and blocks all traffic passing through it. Most firewalls now lie someplace between network layer firewalls and application layer firewalls. As expected, network layer firewalls have become increasingly ``aware'' of the information going through them, and application layer firewalls have become increasingly ``low level'' and transparent. The end result is that now there are fast packet-screening systems that log and audit data as they pass through the system. Increasingly, firewalls (network and application layer) incorporate encryption so that they may protect traffic passing between them over the Internet. Firewalls with end-to-end encryption can be used by organizations with multiple points of Internet connectivity to use the Internet as a ``private backbone'' without worrying about their data or passwords being sniffed. (IPSEC, described in Section 2.6, is playing an increasingly significant role in the construction of such virtual private networks.)

3.3 What are proxy servers and how do they work? A proxy server (sometimes referred to as an application gateway or forwarder) is an application that mediates traffic between a protected network and the Internet. Proxies are often used instead of router-based traffic controls, to prevent traffic from passing directly between networks. Many proxies contain extra logging or support for user authentication. Since proxies must ``understand'' the application protocol being used, they can also implement protocol specific security (e.g., an FTP proxy might be configurable to permit incoming FTP and block outgoing FTP). Proxy servers are application specific. In order to support a new protocol via a proxy, a proxy must be developed for it. One popular set of proxy servers is the TIS Internet Firewall Toolkit (``FWTK'') which includes proxies for Telnet, rlogin, FTP, the X Window System, HTTP/Web, and NNTP/Usenet news. SOCKS is a generic proxy system that can be compiled into a client-side application to make it work through a firewall. Its advantage is that it's easy to use, but it doesn't support the addition of authentication hooks or protocol specific logging. For more information on SOCKS, see http://www.socks.nec.com/.
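The protocol-specific security mentioned above comes down to the proxy parsing each command line of the application protocol before relaying it. A minimal sketch in Python, assuming a read-only FTP policy; the allowed command set and function name are illustrative assumptions, not the behavior of FWTK or any real proxy:

```python
# Commands a hypothetical read-only FTP policy might relay; anything else is
# refused. This set is an assumption for illustration only.
ALLOWED_FTP = {"USER", "PASS", "CWD", "LIST", "RETR", "TYPE", "PASV", "QUIT"}

def ftp_proxy_decision(command_line):
    """Decide whether a proxy should relay one FTP command line."""
    verb = command_line.strip().split(" ", 1)[0].upper()
    if verb in ALLOWED_FTP:
        return "relay"
    return "refuse"  # e.g., STOR (upload) or DELE under a read-only policy
```

A router screening on ports alone could never make this distinction, since uploads and downloads ride the same FTP control connection; only something that ``understands'' the protocol can.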

3.4 What are some cheap packet screening tools? The Texas A&M University security tools include software for implementing screening routers. Karlbridge is a PC-based screening router kit available from ftp://ftp.net.ohio-state.edu/pub/kbridge/. There are numerous kernel-level packet screens, including ipf, ipfw, ipchains, pf, and ipfwadm. Typically, these are included in various free Unix implementations, such as FreeBSD, OpenBSD, NetBSD, and Linux. You might also find these tools available in your commercial Unix implementation. If you're willing to get your hands a little dirty, it's completely possible to build a secure and fully functional firewall for the price of hardware and some of your time.

3.5 What are some reasonable filtering rules for a kernel-based packet screen? This example is written specifically for ipfwadm on Linux, but the principles (and even much of the syntax) apply to other kernel interfaces for packet screening on ``open source'' Unix systems. There are four basic categories covered by the ipfwadm rules:

-A Packet Accounting
-I Input firewall
-O Output firewall
-F Forwarding firewall

ipfwadm also has masquerading (-M) capabilities. For more information on switches and options, see the ipfwadm man page.

3.5.1 Implementation Here, our organization is using a private (RFC 1918) Class C network 192.168.1.0. Our ISP has assigned us the address 201.123.102.32 for our gateway's external interface and 201.123.102.33 for our external mail server. Organizational policy says:

Allow all outgoing TCP connections
Allow incoming SMTP and DNS to the external mail server
Block all other traffic

The following block of commands can be placed in a system boot file (perhaps rc.local on Unix systems).

ipfwadm -F -f
ipfwadm -F -p deny
ipfwadm -F -i m -b -P tcp -S 0.0.0.0/0 1024:65535 -D 201.123.102.33 25

ipfwadm -F -i m -b -P tcp -S 0.0.0.0/0 1024:65535 -D 201.123.102.33 53
ipfwadm -F -i m -b -P udp -S 0.0.0.0/0 1024:65535 -D 201.123.102.33 53
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0 -W eth0
/sbin/route add -host 201.123.102.33 gw 192.168.1.2

3.5.2 Explanation Line one flushes (-f) all forwarding (-F) rules. Line two sets the default policy (-p) to deny. Lines three through five are input rules (-i) in the following format:

ipfwadm -F (forward) -i (input) m (masq.) -b (bi-directional) -P (protocol) [protocol] -S (source) [subnet/mask] [originating ports] -D (destination) [subnet/mask] [port]

Line six appends (-a) a rule that permits all internal IP addresses out to all external addresses on all protocols, all ports. The last line adds a route so that traffic going to 201.123.102.33 will be directed to the internal address 192.168.1.2.
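ipfwadm has long since been superseded (by ipchains, and later iptables and nftables). Purely as a hedged sketch, the same policy might look roughly like this in iptables syntax; the interface name and exact flags are assumptions to verify against your own system before use:

```shell
# Flush forwarding rules and default to deny, as lines one and two did above.
iptables -F FORWARD
iptables -P FORWARD DROP

# Incoming SMTP and DNS to the external mail server.
iptables -A FORWARD -p tcp -d 201.123.102.33 --dport 25 -j ACCEPT
iptables -A FORWARD -p tcp -d 201.123.102.33 --dport 53 -j ACCEPT
iptables -A FORWARD -p udp -d 201.123.102.33 --dport 53 -j ACCEPT

# Let the internal network out, masquerading on the external interface.
iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

# Replies to established outgoing connections.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```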

3.6 What are some reasonable filtering rules for a Cisco? The example in Figure 4 shows one possible configuration for using a Cisco as a filtering router. It is a sample that shows the implementation of a specific policy. Your policy will undoubtedly vary.

Figure 4: Packet Filtering Router

In this example, a company has the Class C network address 195.55.55.0. The company network is connected to the Internet via an IP service provider. Company policy is to allow everybody access to Internet services, so all outgoing connections are accepted. All incoming connections go through ``mailhost''. Mail and DNS are the only incoming services.

3.6.1 Implementation

Allow all outgoing TCP connections
Allow incoming SMTP and DNS to mailhost
Allow incoming FTP data connections to high TCP ports (>1024)
Try to protect services that live on high port numbers

Only incoming packets from the Internet are checked in this configuration. Rules are tested in order, and processing stops when the first match is found. There is an implicit deny rule at the end of an access list that denies everything. This IP access list assumes that you are running Cisco IOS v. 10.3 or later.

no ip source-route
!
interface ethernet 0
ip address 195.55.55.1
no ip directed-broadcast
!
interface serial 0
no ip directed-broadcast
ip access-group 101 in
!
access-list 101 deny ip 127.0.0.0 0.255.255.255 any
access-list 101 deny ip 10.0.0.0 0.255.255.255 any
access-list 101 deny ip 172.16.0.0 0.15.255.255 any
access-list 101 deny ip 192.168.0.0 0.0.255.255 any
access-list 101 deny ip any 0.0.0.255 255.255.255.0
access-list 101 deny ip any 0.0.0.0 255.255.255.0
!
access-list 101 deny ip 195.55.55.0 0.0.0.255 any
access-list 101 permit tcp any any established
!
access-list 101 permit tcp any host 195.55.55.10 eq smtp
access-list 101 permit tcp any host 195.55.55.10 eq domain
access-list 101 permit udp any host 195.55.55.10 eq domain
!
access-list 101 deny tcp any any range 6000 6003
access-list 101 deny tcp any any range 2000 2003
access-list 101 deny tcp any any eq 2049
access-list 101 deny udp any any eq 2049
!
access-list 101 permit tcp any eq 20 any gt 1024
!
access-list 101 permit icmp any any
!
snmp-server community FOOBAR RO 2
line vty 0 4
access-class 2 in
access-list 2 permit 195.55.55.0 0.0.0.255

3.6.2 Explanations

Drop all source-routed packets. Source routing can be used for address spoofing.
Drop directed broadcasts, which are used in smurf attacks.
If an incoming packet claims to be from a local net, the loopback network, or a private network, drop it.
All packets which are part of already established TCP connections can pass through without further checking.
All connections to low port numbers are blocked except SMTP and DNS.
Block all services that listen for TCP connections on high port numbers. X11 (port 6000+) and OpenWindows (port 2000+) are a few candidates.
NFS (port 2049) usually runs over UDP, but it can be run over TCP, so you should block it.
Incoming connections from port 20 into high port numbers are supposed to be FTP data connections.
Access-list 2 limits access to the router itself (telnet & SNMP).
All other UDP traffic is blocked to protect RPC services.

3.6.3 Shortcomings You cannot enforce strong access policies with router access lists. Users can easily install backdoors on their systems to get around ``no incoming telnet'' or ``no X11'' rules. Crackers also install telnet backdoors on systems they break into. You can never be sure what services you have listening for connections on high port numbers. (You can't be sure of what services you have listening for connections on low port numbers, either, especially in highly decentralized environments where people can put their own machines on the network or where they can get administrative access to their own machines.) Checking the source port on incoming FTP data connections is a weak security method. It also breaks access to some FTP sites. It makes use of the service more difficult for users without preventing bad guys from scanning your systems.

Use at least Cisco IOS version 9.21 so you can filter incoming packets and check for address spoofing. It's better still to use 10.3, where you get some extra features (like filtering on source port) and some improvements in filter syntax. You still have a few ways to make your setup stronger: block all incoming TCP connections and tell users to use passive-mode FTP clients. You can also block outgoing ICMP echo-reply and destination-unreachable messages to hide your network and to prevent use of network scanners. Cisco.com used to have an archive of examples for building firewalls using Cisco routers, but it doesn't seem to be online anymore. There are some notes on Cisco access control lists, at least, at ftp://ftp.cisco.com/pub/mibs/app_notes/access-lists.

3.7 What are the critical resources in a firewall? It's important to understand the critical resources of your firewall architecture, so that when you do capacity planning, performance optimizations, etc., you know exactly what you need to do, and how much you need to do it, in order to get the desired result. What exactly the firewall's critical resources are tends to vary from site to site, depending on the sort of traffic that loads the system. Some people think they'll automatically be able to increase the data throughput of their firewall by putting in a box with a faster CPU, or another CPU, when this isn't necessarily the case. Potentially, this could be a large waste of money that doesn't do anything to solve the problem at hand or provide the expected scalability.

On busy systems, memory is extremely important. You have to have enough RAM to support every instance of every program necessary to service the load placed on that machine. Otherwise, the swapping will start and the productivity will stop. Light swapping isn't usually much of a problem, but if a system's swap space begins to get busy, then it's usually time for more RAM. A system that's heavily swapping is often relatively easy to push over the edge in a denial-of-service attack, or it can simply fall behind in processing the load placed on it. This is where long email delays start.

Beyond the system's requirement for memory, it's useful to understand that different services use different system resources. So the configuration that you have for your system should be indicative of the kind of load you plan to service. A 1400 MHz processor isn't going to do you much good if all you're doing is netnews and mail, and are trying to do it on an IDE disk with an ISA controller.

Table 1: Critical Resources for Firewall Services

Service       Critical Resource
Email         Disk I/O
Netnews       Disk I/O
Web           Host OS Socket Performance
IP Routing    Host OS Socket Performance
Web Cache     Host OS Socket Performance, Disk I/O

3.8 What is a DMZ, and why do I want one? ``DMZ'' is an abbreviation for ``demilitarized zone''. In the context of firewalls, this refers to a part of the network that is neither part of the internal network nor directly part of the Internet. Typically, this is the area between your Internet access router and your bastion host, though it can be between any two policy-enforcing components of your architecture. A DMZ can be created by putting access control lists on your access router. This minimizes the exposure of hosts on your external LAN by allowing only recognized and managed services on those hosts to be accessible by hosts on the Internet. Many commercial firewalls simply make a third interface off of the bastion host and label it the DMZ; the point is that the network is neither ``inside'' nor ``outside''.

For example, a web server running on NT might be vulnerable to a number of denial-of-service attacks against such services as RPC, NetBIOS and SMB. These services are not required for the operation of a web server, so blocking TCP connections to ports 135, 137, 138, and 139 on that host will reduce the exposure to a denial-of-service attack. In fact, if you block everything but HTTP traffic to that host, an attacker will only have one service to attack. This illustrates an important principle: never offer attackers more to work with than is absolutely necessary to support the services you want to offer the public.
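The ``offer only what's necessary'' principle above is default-deny applied per host. A small Python sketch contrasts the two approaches; the port numbers come from the NT example in the text, while the function names are invented for illustration:

```python
# The NT service ports called out above (RPC endpoint mapper and NetBIOS/SMB).
NETBIOS_PORTS = {135, 137, 138, 139}

def blocklist_allows(port):
    """Weaker approach: block known-risky ports, allow everything else."""
    return port not in NETBIOS_PORTS

def allowlist_allows(port, offered=frozenset({80})):
    """Stronger approach: refuse everything but the explicitly offered services."""
    return port in offered
```

The blocklist still exposes any service that happens to be listening on an unanticipated port; the allowlist leaves an attacker exactly one service (HTTP here) to work against.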

3.9 How might I increase the security and scalability of my DMZ? A common approach for an attacker is to break into a host that's vulnerable to attack, and exploit trust relationships between the vulnerable host and more interesting targets. If you are running a number of services that have different levels of security, you might want to consider breaking your DMZ into several ``security zones''. This can be done by having a number of different networks within the DMZ. For example, the access router could feed two Ethernets, both protected by ACLs, and therefore in the DMZ. On one of the Ethernets, you might have hosts whose purpose is to service your organization's need for Internet connectivity. These will likely relay mail, news, and host DNS. On the other Ethernet could be your web server(s) and other hosts that provide services for the benefit of Internet users. In many organizations, services for Internet users tend to be less carefully guarded and are more likely to be doing insecure things. (For example, in the case of a web server, unauthenticated and untrusted users might be running CGI, PHP, or other executable programs. This might be reasonable for your web server, but brings with it a certain set of risks that need to be managed. It is likely these services are too risky for an organization to run them on a bastion host, where a slip-up can result in the complete failure of the security mechanisms.) By putting hosts with similar levels of risk on networks together in the DMZ, you can help minimize the effect of a breakin at your site. If someone breaks into your web server by exploiting some bug in your web server, they'll not be able to use it as a launching point to break into your private network if the web servers are on a separate LAN from the bastion hosts, and you don't have any trust relationships between the web server and bastion host.

Now, keep in mind that this is Ethernet. If someone breaks into your web server, and your bastion host is on the same Ethernet, an attacker can install a sniffer on your web server and watch the traffic to and from your bastion host. This might reveal things that can be used to break into the bastion host and gain access to the internal network. (Switched Ethernet can reduce your exposure to this kind of problem, but will not eliminate it.) By splitting services up not only by host, but by network, and limiting the level of trust between hosts on those networks, you can greatly reduce the likelihood of a breakin on one host being used to break into the other. Succinctly stated: breaking into the web server in this case won't make it any easier to break into the bastion host.

You can also increase the scalability of your architecture by placing hosts on different networks. The fewer machines there are to share the available bandwidth, the more bandwidth each will get.

3.10 What is a `single point of failure', and how do I avoid having one? An architecture whose security hinges upon one mechanism has a single point of failure. Software that runs bastion hosts has bugs. Applications have bugs. Software that controls routers has bugs. It makes sense to use all of these components to build a securely designed network, and to use them in redundant ways. If your firewall architecture is a screened subnet, you have two packet filtering routers and a bastion host. (See question 3.2 from this section.) Your Internet access router will not permit traffic from the Internet to get all the way into your private network. However, if you don't enforce that rule with any other mechanisms on the bastion host and/or choke router, only one component of your architecture needs to fail or be compromised in order to get inside. On the other hand, if you have a redundant rule on the bastion host, and again on the choke router, an attacker will need to defeat three mechanisms. Further, if the bastion host or the choke router needs to invoke its rule to block outside access to the internal network, you might want to have it trigger an alarm of some sort, since you know that someone has gotten through your access router.

3.11 How can I block all of the bad stuff? For firewalls where the emphasis is on security instead of connectivity, you should consider blocking everything by default, and only specifically allowing the services you need on a case-by-case basis. If you block everything except a specific set of services, then you've already made your job much easier. Instead of having to worry about every security problem with every product and service around, you only need to worry about every security problem with a specific set of services and products.

Before turning on a service, you should consider a couple of questions:

Is the protocol for this product a well-known, published protocol?
Is the application that services this protocol available for public inspection of its implementation?
How well known is the service and product?
How does allowing this service change the firewall architecture? Will an attacker see things differently? Could it be exploited to get at my internal network, or to change things on hosts in my DMZ?

When considering the above questions, keep the following in mind:

``Security through obscurity'' is no security at all. Unpublished protocols have been examined by bad guys and defeated.
Despite what the marketing representatives say, not every protocol or service is designed with security in mind. In fact, very few are.
Even in cases where security is a consideration, not all organizations have competent security staff. Among those who don't, not all are willing to bring a competent consultant into the project. The end result is that otherwise-competent, well-intended developers can design insecure systems.

The less that a vendor is willing to tell you about how their system really works, the more likely it is that security (or other) problems exist. Only vendors with something to hide have a reason to hide their designs and implementations [2].

3.12 How can I restrict web access so users can't view sites unrelated to work? A few years ago, someone got the idea that it's a good idea to block ``bad'' web sites, i.e., those that contain material that The Company views as ``inappropriate''. The idea has been increasing in popularity, but there are several things to consider when thinking about implementing such controls in your firewall.

It is not practically possible to block everything that an employer deems ``inappropriate''. The Internet is full of every sort of material. Blocking one source will only redirect traffic to another source of such material, or cause someone to figure out a way around the block.

Most organizations do not have a standard for judging the appropriateness of material that their employees bring to work, e.g., books and magazines. Do you inspect everyone's briefcase for ``inappropriate material'' every day? If you do not, then why would you inspect every packet for ``inappropriate material''? Any decisions along those lines in such an organization will be arbitrary, and attempting to take disciplinary action against an employee where the only standard is arbitrary typically isn't wise, for reasons well beyond the scope of this document.

Products that perform site-blocking, commercial and otherwise, are typically easy to circumvent. Hostnames can be rewritten as IP addresses. IP addresses can be written as a 32-bit integer value, or as four 8-bit integers (the most common form). Other possibilities exist as well. Connections can be proxied. Web pages can be fetched via email. You can't block them all. The effort you'll spend trying to implement and manage such controls will almost certainly far exceed any level of damage control that you're hoping to achieve.
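The point about an IP address having several equivalent spellings is easy to verify; the dotted quad and its single 32-bit integer form name the same host, which is exactly what defeats a naive URL blocklist keyed on one spelling. A short sketch (the addresses are documentation examples, not real sites):

```python
import socket

def ip_as_int(dotted):
    """Convert a dotted-quad IPv4 address to its single 32-bit integer form."""
    return int.from_bytes(socket.inet_aton(dotted), "big")

# In many clients, http://3221226219/ reaches the same host as
# http://192.0.2.235/, so a filter matching only the dotted form misses it.
```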

The rule-of-thumb to remember here is that you cannot solve social problems with technology. If there is a problem with someone going to an ``inappropriate'' web site, that is because someone else saw it and was offended by what he saw, or because that person's productivity is below expectations. In either case, those are matters for the personnel department, not the firewall administrator.

4 Various Attacks

4.1 What is source routed traffic and why is it a threat? Normally, the route a packet takes from its source to its destination is determined by the routers between the source and destination. The packet itself only says where it wants to go (the destination address), and nothing about how it expects to get there. There is an optional way for the sender of a packet (the source) to include information in the packet that tells the route the packet should take to get to its destination; hence the name ``source routing''. For a firewall, source routing is noteworthy, since an attacker can generate traffic claiming to be from a system ``inside'' the firewall. In general, such traffic wouldn't route to the firewall properly, but with the source routing option, all the routers between the attacker's machine and the target will return traffic along the reverse path of the source route. Implementing such an attack is quite easy, so firewall builders should not discount it as unlikely to happen.

In practice, source routing is rarely used. Its main legitimate use is in debugging network problems or routing traffic over specific links for congestion control in specialized situations. When building a firewall, source routing should be blocked at some point. Most commercial routers incorporate the ability to block source routing specifically, and many versions of Unix that might be used to build firewall bastion hosts have the ability to disable or ignore source routed traffic.

4.2 What are ICMP redirects and redirect bombs? An ICMP Redirect tells the recipient system to override something in its routing table. It is legitimately used by routers to tell hosts that the host is using a non-optimal or defunct route to a particular destination, i.e., the host is sending it to the wrong router. The wrong router sends the host back an ICMP Redirect packet that tells the host what the correct route should be. If you can forge ICMP Redirect packets, and if your target host pays attention to them, you can alter the routing tables on the host and possibly subvert the security of the host by causing traffic to flow via a path the network manager didn't intend. ICMP Redirects also may be employed for denial of service attacks, where a host is sent a route that loses its connectivity, or is sent an ICMP Network Unreachable packet telling it that it can no longer access a particular network.

Many firewall builders screen ICMP traffic from their network, since it limits the ability of outsiders to ping hosts or modify their routing tables. Before you decide to block all ICMP packets, you should be aware of how the TCP protocol does ``Path MTU Discovery'', to make certain that you don't break connectivity to other sites. If you can't safely block it everywhere, you can consider allowing selected types of ICMP to selected routing devices. If you don't block it, you should at least ensure that your routers and hosts don't respond to broadcast ping packets.

4.3 What about denial of service? Denial of service is when someone decides to make your network or firewall useless by disrupting it, crashing it, jamming it, or flooding it. The problem with denial of service on the Internet is that it is impossible to prevent. The reason has to do with the distributed nature of the network: every network node is connected via other networks, which in turn connect to other networks, etc. A firewall administrator or ISP only has control of a few of the local elements within reach. An attacker can always disrupt a connection ``upstream'' from where the victim controls it. In other words, if someone wanted to take a network off the air, he could do it either by taking the network off the air, or by taking the networks it connects to off the air, ad infinitum. There are many, many ways someone can deny service, ranging from the complex to the trivial brute-force. If you are considering using the Internet for a service which is absolutely time or mission critical, you should consider your fallback position in the event that the network is down or damaged.

TCP/IP's UDP echo service is trivially abused to get two servers to flood a network segment with echo packets. You should consider commenting out unused entries in /etc/inetd.conf on Unix hosts, adding no service udp-small-servers and no service tcp-small-servers to Cisco routers, or the equivalent for your components.
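Concretely, disabling the abusable diagnostic services in /etc/inetd.conf is just a matter of commenting out their entries and signalling inetd. The exact service lines vary between systems, so this is only a sketch:

```shell
# /etc/inetd.conf: disable the ``small servers'' that echo-flood attacks
# abuse, then send inetd a HUP signal so it rereads its configuration.
#echo    stream  tcp  nowait  root  internal
#echo    dgram   udp  wait    root  internal
#chargen stream  tcp  nowait  root  internal
#chargen dgram   udp  wait    root  internal
```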

4.4 What are some common attacks, and how can I protect my system against them? Each site is a little different from every other in terms of what attacks are likely to be used against it. Some recurring themes do arise, though.

4.4.1 SMTP Server Hijacking (Unauthorized Relaying) This is where a spammer will take many thousands of copies of a message and send them to a huge list of email addresses. Because these lists are often so bad, and in order to increase the speed of operation for the spammer, many have resorted to simply sending all of their mail to an SMTP server that will take care of actually delivering the mail. Of course, all of the bounces, spam complaints, hate mail, and bad PR land on the site that was used as a relay. There is a very real cost associated with this, mostly in paying people to clean up the mess afterward. The Mail Abuse Prevention System's Transport Security Initiative maintains a complete description of the problem, and of how to configure just about every mailer on the planet to protect against this attack.

4.4.2 Exploiting Bugs in Applications Various versions of web servers, mail servers, and other Internet service software contain bugs that allow remote (Internet) users to do things ranging from gaining control of the machine to making that application crash, and just about everything in between. The exposure to this risk can be reduced by running only necessary services, keeping up to date on patches, and using products that have been around a while.

4.4.3 Bugs in Operating Systems Again, these are typically initiated remotely. Operating systems that are relatively new to IP networking tend to be more problematic, as more mature operating systems have had time to find and eliminate their bugs. An attacker can often make the target equipment continuously reboot, crash, lose the ability to talk to the network, or replace files on the machine. Here, running as few operating system services as possible can help. Also, having a packet filter in front of the operating system can reduce the exposure to a large number of these types of attacks. And, of course, choosing a stable operating system will help here as well. When selecting an OS, don't be fooled into believing that ``the pricier, the better''. Free operating systems are often much more robust than their commercial counterparts.

5 How Do I...

5.1 Do I really want to allow everything that my users ask for?

It's entirely possible that the answer is ``no''. Each site has its own policies about what is and isn't needed, but it's important to remember that a large part of the job of being an organization's gatekeeper is education. Users want streaming video, real-time chat, and to be able to offer services to external customers that require interaction with live databases on the internal network. That doesn't mean that any of these things can be done without presenting more risk to the organization than the supposed ``value'' of heading down that road is worth. Most users don't want to put their organization at risk. They just read the trade rags, see advertisements, and want to do those things, too. It's important to look into what it is that they really want to do, and to help them understand how they might be able to accomplish their real objective in a more secure manner.

You won't always be popular, and you might even find yourself being given direction to do something incredibly stupid, like ``just open up ports foo through bar''. If that happens, don't worry about it. It would be wise to keep a record of all of your exchanges about such an event, so that when a 12-year-old script kiddie breaks in, you'll at least be able to separate yourself from the whole mess.

5.2 How do I make Web/HTTP work through my firewall?

There are three ways to do it:

1. Allow ``established'' connections out via a router, if you are using screening routers.

2. Use a web client that supports SOCKS, and run SOCKS on your bastion host.

3. Run some kind of proxy-capable web server on the bastion host. Some options include Squid, Apache, Netscape Proxy, and http-gw from the TIS firewall toolkit. Most of these can also proxy other protocols (such as gopher and ftp), and can cache the objects fetched, which will also typically result in a performance boost for the users and more efficient use of your connection to the Internet.

Essentially all web clients (Mozilla, Internet Explorer, Lynx, etc.) have proxy server support built directly into them.
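From the client side, option 3 takes only a few lines to use. Here is a minimal sketch with Python's standard library; the proxy address `proxy.example.com:3128` is a made-up placeholder for whatever proxy actually runs on your bastion host:

```python
import urllib.request

# Hypothetical proxy address -- substitute the proxy on your bastion host.
PROXY = "http://proxy.example.com:3128"

# Build an opener that sends all HTTP/HTTPS requests through the proxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("http://www.example.com/") would now fetch via the proxy
# instead of connecting to the destination directly.
```

Graphical clients do the equivalent internally once you fill in their proxy settings dialog.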

5.3 How do I make SSL work through the firewall?

SSL is a protocol that allows secure connections across the Internet. Typically, SSL is used to protect HTTP traffic. However, other protocols (such as telnet) can run atop SSL. Enabling SSL through your firewall can be done the same way that you would allow HTTP traffic, if it's HTTP that you're using SSL to secure, which is usually true. The only difference is that instead of using something that will simply relay HTTP, you'll need something that can tunnel SSL. This is a feature present on most web object caches. You can find out more about SSL from Netscape.
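The usual tunneling mechanism is the HTTP CONNECT method: the client asks the proxy to open a raw TCP tunnel to the destination, and the proxy then relays the encrypted bytes blindly in both directions. A sketch of the request a client sends (host names here are illustrative):

```python
# Sketch of the CONNECT request a tunneling proxy expects.

def connect_request(host, port):
    """Build the HTTP CONNECT request that asks a proxy to open a tunnel."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n"
            f"\r\n").encode("ascii")

# A client sends this over its TCP connection to the proxy, waits for a
# "200 Connection established" reply, and only then starts the SSL
# handshake -- the proxy never sees the plaintext.
print(connect_request("www.example.com", 443).decode())
```

Because the proxy cannot inspect the tunneled traffic, many sites restrict CONNECT to well-known SSL ports such as 443.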

5.4 How do I make DNS work with a firewall?

Some organizations want to hide DNS names from the outside. Many experts don't think hiding DNS names is worthwhile, but if site/corporate policy mandates hiding domain names, this is one approach that is known to work. Another reason you may have to hide domain names is if you have a non-standard addressing scheme on your internal network. In that case, you have no choice but to hide those addresses. Don't fool yourself into thinking that hiding your DNS names will slow an attacker down much if they break into your firewall. Information about what is on your network is too easily gleaned from the networking layer itself. If you want an interesting demonstration of this, ping the subnet broadcast address on your LAN and then do an ``arp -a''. Note also that hiding names in the DNS doesn't address the problem of host names ``leaking'' out in mail headers, news articles, etc.

This approach is one of many, and is useful for organizations that wish to hide their host names from the Internet. Its success lies in the fact that DNS clients on a machine don't have to talk to a DNS server on that same machine. In other words, just because there's a DNS server on a machine, there's nothing wrong with (and there are often advantages to) redirecting that machine's DNS client activity to a DNS server on another machine.

First, you set up a DNS server on the bastion host that the outside world can talk to. You set this server up so that it claims to be authoritative for your domains. In fact, all this server knows is what you want the outside world to know: the names and addresses of your gateways, your wildcard MX records, and so forth. This is the ``public'' server.

Then, you set up a DNS server on an internal machine. This server also claims to be authoritative for your domains; unlike the public server, this one is telling the truth. This is your ``normal'' nameserver, into which you put all your ``normal'' DNS stuff. You also set this server up to forward queries that it can't resolve to the public server (using a ``forwarders'' line in /etc/named.boot on a Unix machine, for example).

Finally, you set up all your DNS clients (the /etc/resolv.conf file on a Unix box, for instance), including the ones on the machine with the public server, to use the internal server. This is the key.

An internal client asking about an internal host asks the internal server, and gets an answer; an internal client asking about an external host asks the internal server, which asks the public server, which asks the Internet, and the answer is relayed back. A client on the public server works just the same way. An external client, however, asking about an internal host gets back the ``restricted'' answer from the public server.

This approach assumes that there's a packet filtering firewall between these two servers that will allow them to talk DNS to each other, but otherwise restricts DNS between other hosts.

Another trick that's useful in this scheme is to employ wildcard PTR records in your IN-ADDR.ARPA domains. These cause an address-to-name lookup for any of your non-public hosts to return something like ``unknown.YOUR.DOMAIN'' rather than an error. This satisfies anonymous FTP sites like ftp.uu.net that insist on having a name for the machines they talk to. This may fail when talking to sites that do a DNS cross-check in which the host name is matched against its address and vice versa.
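The split between the truthful internal server and the restricted public server can be illustrated with a toy model. This is not real DNS, just the lookup-and-forward logic reduced to dictionaries; all names and addresses are invented:

```python
# Toy model of the split-DNS arrangement: "servers" are dictionaries.

PUBLIC = {            # what the outside world is allowed to see
    "gw.your.domain": "192.0.2.1",
}
INTERNAL = {          # the truthful, internal-only data
    "gw.your.domain": "192.0.2.1",
    "secret-host.your.domain": "10.1.2.3",
}

def public_lookup(name):
    # The public server answers only from its restricted data.
    return PUBLIC.get(name, "NXDOMAIN")

def internal_lookup(name):
    # The internal server answers from its own data, and forwards
    # anything it can't resolve to the public server (which would,
    # in turn, ask the Internet).
    if name in INTERNAL:
        return INTERNAL[name]
    return public_lookup(name)
```

Internal clients always ask `internal_lookup` and see everything; external clients can only ever reach `public_lookup`, so the secret host simply does not exist from their point of view.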

5.5 How do I make FTP work through my firewall?

Generally, making FTP work through the firewall is done either by using a proxy server such as the firewall toolkit's ftp-gw, or by permitting incoming connections to the network at a restricted port range, and otherwise restricting incoming connections using something like ``established'' screening rules. The FTP client is then modified to bind the data port to a port within that range. This entails being able to modify the FTP client application on internal hosts.

In some cases, if FTP downloads are all you wish to support, you might want to consider declaring FTP a ``dead protocol'' and letting your users download files via the Web instead. The user interface certainly is nicer, and it gets around the ugly callback port problem. If you choose the FTP-via-Web approach, your users will be unable to FTP files out, which, depending on what you are trying to accomplish, may be a problem.

A different approach is to use the FTP ``PASV'' option to indicate that the remote FTP server should permit the client to initiate connections. The PASV approach assumes that the FTP server on the remote system supports that operation. (See ``Firewall-Friendly FTP'' [1].) Other sites prefer to build client versions of the FTP program that are linked against a SOCKS library.
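From the client side, asking for PASV is usually just a flag. A sketch with Python's ftplib, which supports passive mode directly (the server name in the comments is a made-up example):

```python
from ftplib import FTP

# ftplib defaults to passive mode; set_pasv() makes the choice explicit.
ftp = FTP()            # not yet connected to anything
ftp.set_pasv(True)     # PASV: the client initiates the data connection

# A real session would then look something like:
#   ftp.connect("ftp.example.com")   # hypothetical server
#   ftp.login()                      # anonymous login
#   ftp.retrbinary("RETR somefile", open("somefile", "wb").write)
```

With passive mode on, both the control and data connections are outbound from the client, which is exactly what a typical outbound-only firewall policy allows.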

5.6 How do I make Telnet work through my firewall?

Telnet is generally supported either by using an application proxy such as the firewall toolkit's tn-gw, or by simply configuring a router to permit outgoing connections using something like the ``established'' screening rules. Application proxies could be in the form of a standalone proxy running on the bastion host, or in the form of a SOCKS server and a modified client.

5.7 How do I make Finger and whois work through my firewall?

Many firewall admins permit connections to the finger port from only trusted machines, which can issue finger requests in the form: finger user@host.domain@firewall. This approach only works with the standard Unix version of finger. Controlling access to services and restricting them to specific machines is managed using either tcp_wrappers or netacl from the firewall toolkit. This approach will not work on all systems, since some finger servers do not permit user@host@host fingering.

Many sites block inbound finger requests for a variety of reasons, foremost being past security bugs in the finger server (the Morris Internet worm made these bugs famous) and the risk of proprietary or sensitive information being revealed in users' finger information. In general, however, if your users are accustomed to putting proprietary or sensitive information in their .plan files, you have a more serious security problem than just a firewall can solve.

5.8 How do I make gopher, archie, and other services work through my firewall?

The majority of firewall administrators choose to support gopher and archie through web proxies, instead of directly. Proxies such as the firewall toolkit's http-gw convert gopher/gopher+ queries into HTML and vice versa. For supporting archie and other queries, many sites rely on Internet-based Web-to-archie servers, such as ArchiePlex. The Web's tendency to make everything on the Internet look like a web service is both a blessing and a curse.

There are many new services constantly cropping up. Often they are misdesigned, or are not designed with security in mind, and their designers will cheerfully tell you that if you want to use them you need to let port xxx through your router. Unfortunately, not everyone can do that, and so a number of interesting new toys are difficult to use for people behind firewalls. Things like RealAudio, which require direct UDP access, are particularly egregious examples. The thing to bear in mind if you find yourself faced with one of these problems is to find out as much as you can about the security risks that the service may present before you just allow it through. It's quite possible the service has no security implications. It's equally possible that it has undiscovered holes you could drive a truck through.

5.9 What are the issues about X11 through a firewall?

The X Window System is a very useful system, but unfortunately has some major security flaws. Remote systems that can gain or spoof access to a workstation's X11 display can monitor keystrokes that a user enters, download copies of the contents of their windows, etc. While attempts have been made to overcome these flaws (e.g., MIT's ``magic cookie''), it is still entirely too easy for an attacker to interfere with a user's X11 display. Most firewalls block all X11 traffic. Some permit X11 traffic through application proxies such as the DEC CRL X11 proxy (FTP crl.dec.com). The firewall toolkit includes a proxy for X11, called x-gw, which a user can invoke via the Telnet proxy to create a virtual X11 server on the firewall. When requests are made for an X11 connection on the virtual X11 server, the user is presented with a pop-up asking whether it is OK to allow the connection. While this is a little unaesthetic, it's entirely in keeping with the rest of X11.

5.10 How do I make RealAudio work through my firewall?

RealNetworks maintains some information about how to get RealAudio working through your firewall. It would be unwise to make any changes to your firewall without understanding what, exactly, the changes will do, and knowing what risks the new changes will bring with them.

5.11 How do I make my web server act as a front-end for a database that lives on my private network?

The best way to do this is to allow very limited connectivity between your web server and your database server, via a specific protocol that only supports the level of functionality you're going to use. Allowing raw SQL, or anything else where custom extractions could be performed by an attacker, isn't generally a good idea.

Assume that an attacker is going to be able to break into your web server, and make queries in the same way that the web server can. Is there a mechanism for extracting sensitive information that the web server doesn't need, like credit card information? Can an attacker issue an SQL select and extract your entire proprietary database?

``E-commerce'' applications, like everything else, are best designed with security in mind from the ground up, instead of having security ``added'' as an afterthought. Review your architecture critically, from the perspective of an attacker. Assume that the attacker knows everything about your architecture. Now ask yourself what needs to be done to steal your data, to make unauthorized changes, or to do anything else that you don't want done. You might find that you can significantly increase security without decreasing functionality by making a few design and implementation decisions.

Some ideas for how to handle this:

Extract the data you need from the database on a regular basis, so you're not making queries against the full database, complete with information that attackers will find interesting.

Greatly restrict and audit what you do allow between the web server and database.
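One way to ``greatly restrict'' that channel is to let the web tier call exactly one parameterized query instead of passing raw SQL. A sketch using an in-memory SQLite database; the schema, table, and column names are invented for illustration:

```python
import sqlite3

# Illustrative schema: the web tier only ever needs customer -> status.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, status TEXT, card_number TEXT)")
db.execute("INSERT INTO orders VALUES ('alice', 'shipped', '4111-0000-0000-0000')")

def order_status(customer):
    """The ONLY query the web tier may run: parameterized, and it never
    selects the sensitive card_number column."""
    row = db.execute(
        "SELECT status FROM orders WHERE customer = ?",  # bound parameter
        (customer,),
    ).fetchone()
    return row[0] if row else None
```

Even if the web server is compromised, the attacker can ask one narrow question; there is no interface through which to issue an arbitrary SELECT against the card numbers.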

5.12 But my database has an integrated web server, and I want to use that. Can't I just poke a hole in the firewall and tunnel that port?

If your site firewall policy is sufficiently lax that you're willing to manage the risk that someone will exploit a vulnerability in your web server that will result in partial or complete exposure of your database, then there isn't much preventing you from doing this.

However, in many organizations, the people who are responsible for tying the web front end to the database back end simply do not have the authority to take that responsibility. Further, if the information in the database is about people, you might find yourself guilty of breaking a number of laws if you haven't taken reasonable precautions to prevent the system from being abused. In general, this isn't a good idea. See question 5.11 for some ideas on other ways to accomplish this objective.

5.13 How Do I Make IP Multicast Work With My Firewall?

IP multicast is a means of getting IP traffic from one host to a set of hosts without using broadcasting; that is, instead of every host getting the traffic, only those that want it will get it, without each having to maintain a separate connection to the server. IP unicast is where one host talks to another, multicast is where one host talks to a set of hosts, and broadcast is where one host talks to all hosts.

The public Internet has a multicast backbone (``MBone'') where users can engage in multicast traffic exchange. Common uses for the MBone are streams of IETF meetings and similar interactions. Getting one's own network connected to the MBone will require that the upstream provider route multicast traffic to and from your network. Additionally, your internal network will have to support multicast routing.

The role of the firewall in multicast routing, conceptually, is no different from its role in other traffic routing. That is, a policy that identifies which multicast groups are and aren't allowed must be defined, and then a system of allowing that traffic according to policy must be devised. Great detail on how exactly to do this is beyond the scope of this document. Fortunately, RFC 2588 [4] discusses the subject in more detail. Unless your firewall product supports some means of selective multicast forwarding, or you have the ability to put it in yourself, you might find forwarding multicast traffic in a way consistent with your security policy to be a bigger headache than it's worth.
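On the host side, sending to a multicast group needs nothing more exotic than a UDP socket with a TTL option set. A minimal sketch; the group address is an example from the administratively scoped (site-local) range:

```python
import socket
import struct

# Example multicast group in the administratively scoped range.
GROUP, PORT = "239.1.2.3", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL 1 keeps the traffic on the local network segment; going further
# requires a multicast router -- and a firewall policy that permits it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                struct.pack("b", 1))

# sock.sendto(b"hello, group", (GROUP, PORT)) would transmit to the group.
```

Receivers additionally join the group with the IP_ADD_MEMBERSHIP socket option; it is that join traffic (IGMP) and the group ranges themselves that your firewall policy has to account for.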

6 TCP and UDP Ports

by Mikael Olsson

This appendix will begin at a fairly ``basic'' level, so even if the first points seem childishly self-evident to you, you might still learn something by skipping ahead to something later in the text.

6.1 What is a port?

A ``port'' is a ``virtual slot'' in your TCP and UDP stack that is used to map a connection between two hosts, and also between the TCP/UDP layer and the actual applications running on the hosts. Ports are numbered 0-65535, with the range 0-1023 being marked as ``reserved'' or ``privileged'', and the rest (1024-65535) as ``dynamic'' or ``unprivileged''.

There are basically two uses for ports:

``Listening'' on a port. This is used by server applications waiting for users to connect, to get to some ``well known service'', for instance HTTP (TCP port 80), Telnet (TCP port 23), or DNS (UDP and sometimes TCP port 53).

Opening a ``dynamic'' port. Both sides of a TCP connection need to be identified by IP addresses and port numbers. Hence, when you want to ``connect'' to a server process, your end of the communications channel also needs a ``port''. This is done by choosing a port above 1024 on your machine that is not currently in use by another communications channel, and using it as the ``sender'' in the new connection.

Dynamic ports may also be used as ``listening'' ports in some applications, most notably FTP.

Ports in the range 0-1023 are almost always server ports. Ports in the range 1024-65535 are usually dynamic ports (i.e., opened dynamically when you connect to a server port). However, any port may be used as a server port, and any port may be used as an ``outgoing'' port.

So, to sum it up, here's what happens in a basic connection:

At some point in time, a server application on host 1.2.3.4 decides to ``listen'' at port 80 (HTTP) for new connections.

You (5.6.7.8) want to surf to 1.2.3.4, port 80, and your browser issues a connect call to it.

The connect call, realising that it doesn't yet have a local port number, goes hunting for one. The local port number is necessary since, when the replies come back some time in the future, your TCP/IP stack will have to know to what application to pass the reply. It does this by remembering what application uses which local port number. (This is grossly simplified, no flames from programmers, please.)

Your TCP stack finds an unused dynamic port, usually somewhere above 1024. Let's assume that it finds 1029.

Your first packet is then sent, from your local IP, 5.6.7.8, port 1029, to 1.2.3.4, port 80.

The server responds with a packet from 1.2.3.4, port 80, to you, 5.6.7.8, port 1029.

This procedure is actually longer than this; read on for a more in-depth explanation of TCP connect sequences.
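The walkthrough above is easy to reproduce with a pair of sockets. This sketch uses localhost and an arbitrary listening port (binding the real port 80 would require privileges, precisely because it is below 1024):

```python
import socket

# "Server": port 0 asks the OS for any free port to listen on.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]

# "Client": we never choose a source port; the stack picks a dynamic one.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
conn, peer = server.accept()

local_port = client.getsockname()[1]   # the dynamically chosen port
print(f"client port {local_port} -> server port {server_port}")

conn.close(); client.close(); server.close()
```

On a typical system `local_port` lands well above 1024, in the OS's ephemeral port range, and `peer` as seen by the server is exactly the client's address and that dynamic port.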

6.2 How do I know which application uses what port?

There are several lists outlining the ``reserved'' and ``well known'' ports, as well as ``commonly used'' ports, and the best one is: ftp://ftp.isi.edu/in-notes/iana/assignments/port-numbers. For those of you still reading RFC 1700 to find out what port number does what, STOP DOING IT. It is horribly out of date, and it won't be less so tomorrow.

Now, as for trusting this information: these lists do not, in any way, constitute any kind of holy bible on which ports do what. Wait, let me rephrase that: THERE IS NO WAY OF RELIABLY DETERMINING WHAT PORT DOES WHAT SIMPLY BY LOOKING IN A LIST.
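On most Unix-like systems a local copy of these conventions lives in /etc/services, and the standard library exposes it. Remember that this is convention only, not what is actually running on the port:

```python
import socket

# Look up the conventional service name for a port number, and vice
# versa.  This reflects the /etc/services mapping, NOT reality.
print(socket.getservbyport(80, "tcp"))      # conventionally "http"
print(socket.getservbyname("telnet", "tcp"))  # conventionally 23
```

Nothing stops an administrator (or an attacker) from running anything at all on port 80, which is exactly the point of the warning above.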

6.3 What are LISTENING ports?

Suppose you did ``netstat -a'' on your machine and ports 1025 and 1030 showed up as LISTENing. What do they do? Right, let's take a look in the assigned port numbers list:

blackjack       1025/tcp        network blackjack
iad1            1030/tcp        BBN IAD

Wait, what's happening? Has my workstation stolen my VISA number and decided to go play blackjack with some rogue server on the Internet? And what's that software that BBN has installed? This is NOT where you start panicking and send mail to the firewalls list. In fact, this question has been asked maybe a dozen times during the past six months, and every time it's been answered. Not that THAT keeps people from asking the same question again.

If you are asking this question, you are most likely using a Windows box. The ports you are seeing are (most likely) two listening ports that the RPC subsystem opens when it starts up. This is an example of where dynamically assigned ports may be used by server processes. Applications using RPC will later on connect to port 135 (the netbios ``portmapper'') to query where to find some RPC service, and get an answer back saying that that particular service may be contacted on port 1025.

Now, how do we know this, since there's no ``list'' describing these ports? Simple: There's no substitute for experience. And using the mailing list search engines also helps a hell of a lot.

6.4 How do I determine what service the port is for?

Since it is impossible to learn what port does what by looking in a list, how do I do it? The old hands-on way of doing it is by shutting down nearly every service/daemon running on your machine, doing netstat -a and taking note of what ports are open. There shouldn't be very many listening ones. Then you start turning all the services on, one by one, and take note of what new ports show up in your netstat output.

Another way, which needs more guesswork, is simply telnetting to the ports and seeing what comes out. If nothing comes out, try typing some gibberish and slamming Enter a few times, and see if something turns up. If you get binary garble, or nothing at all, this obviously won't help you. :-)

However, this will only tell you what listening ports are used. It won't tell you about dynamically opened ports that may be opened later on by these applications.

There are a few applications that might help you track down the ports used. On Unix systems, there's a nice utility called lsof that comes preinstalled on many systems. It will show you all open port numbers and the names of the applications that are using them. This means that it might show you a lot of locally opened files as well as TCP/IP sockets. Read the help text. :-)

On Windows systems, nothing comes preinstalled to assist you in this task. (What's new?) There's a utility called ``Inzider'' which installs itself inside the Windows sockets layer and dynamically remembers which process opens which port. The drawback of this approach is that it can't tell you what ports were opened before Inzider started, but it's the best that you'll get on Windows (to my knowledge). http://ntsecurity.nu/toolbox/inzider/.
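The ``telnet to the port and see what comes out'' trick is easy to script. A minimal banner-grabber, demonstrated here against a throwaway local server standing in for the mystery service (the banner text is invented):

```python
import socket
import threading

def grab_banner(host, port, timeout=2.0):
    """Connect to host:port and return whatever the service says first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(256)
        except socket.timeout:
            return b""          # silent service: no banner to grab

# --- demo: a fake service on localhost that announces itself ---
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def fake_service():
    conn, _ = srv.accept()
    conn.sendall(b"220 mail.example.com ESMTP ready\r\n")  # made-up banner
    conn.close()

threading.Thread(target=fake_service, daemon=True).start()
banner = grab_banner("127.0.0.1", srv.getsockname()[1])
print(banner)
```

An SMTP-style ``220'' greeting like the one above is a strong hint; a silent port, as noted, tells you much less.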

6.5 What ports are safe to pass through a firewall?

ALL. No, wait, NONE. No, wait, uuhhh... I've heard that all ports above 1024 are safe since they're only dynamic?? No. Really. You CANNOT tell which ports are safe simply by looking at their numbers, simply because that is really all it is: a number. You can't mount an attack through a 16-bit number.

The security of a ``port'' depends on what application you'll reach through that port. A common misconception is that ports 25 (SMTP) and 80 (HTTP) are safe to pass through a firewall. *meep* WRONG. Just because everyone is doing it doesn't mean that it is safe.

Again, the security of a port depends on what application you'll reach through that port. If you're running a well-written web server, that is designed from the ground up to be secure, you can probably feel reasonably assured that it's safe to let outside people access it through port 80. Otherwise, you CAN'T.

The problem here is not in the network layer. It's in how the application processes the data that it receives. This data may be received through port 80, port 666, a serial line, a floppy, or through a singing telegram. If the application is not safe, it does not matter how the data gets to it. The application data is where the real danger lies.

If you are interested in the security of your application, go subscribe to bugtraq or try searching their archives. This is more of an application security issue than a firewall security issue. One could argue that a firewall should stop all possible attacks, but with the number of new network protocols, NOT designed with security in mind, and networked applications, also not designed with security in mind, it becomes impossible for a firewall to protect against all data-driven attacks.

6.6 The behavior of FTP

Or, ``Why do I have to open all ports above 1024 to my FTP server?''

FTP doesn't really look a whole lot like other applications from a networking perspective. It keeps one listening port, port 21, which users connect to. All it does is let people log on, and establish ANOTHER connection to do actual data transfers. This second connection is usually on some port above 1024.

There are two modes, ``active'' (normal) and ``passive'' mode. This word describes the server's behaviour.

In active mode, the client (5.6.7.8) connects to port 21 on the server (1.2.3.4) and logs on. When file transfers are due, the client allocates a dynamic port above 1024, informs the server about which port it opened, and then the server opens a new connection to that port. This is the ``active'' role of the server: it actively establishes new connections to the client.

In passive mode, the connection to port 21 is the same. When file transfers are due, the SERVER allocates a dynamic port above 1024, informs the client about which port it opened, and then the CLIENT opens a new connection to that port. This is the ``passive'' role of the server: it waits for the client to establish the second (data) connection.

If your firewall doesn't inspect the application data of the FTP command connection, it won't know that it needs to dynamically open new ports above 1024.

On a side note: the traditional behaviour of FTP servers in active mode is to establish the data session FROM port 20, and to the dynamic port on the client. FTP servers are steering away from this behaviour somewhat, due to the need to run as ``root'' on Unix systems in order to be able to allocate ports below 1024. Running as ``root'' is not good for security, since if there's a bug in the software, the attacker would be able to compromise the entire machine. The same goes for running as ``Administrator'' or ``SYSTEM'' (``LocalSystem'') on NT machines, although the low-port problem does not apply on NT.
To sum it up, if your firewall understands FTP, it'll be able to handle the data connections by itself, and you won't have to worry about ports above 1024. If it does NOT, there are four issues that you need to address:

Firewalling an FTP server in active mode: you need to let your server open new connections to the outside world on ports 1024 and above.

Firewalling an FTP server in passive mode: you need to let the outside world connect to ports 1024 and above on your server. CAUTION!!!! There may be applications running on some of these ports that you do NOT want outside people using. Disallow access to these ports before allowing access to the 1024-65535 port range.

Firewalling FTP clients in active mode: you need to let the outside world connect to ports 1024 and above on your clients. CAUTION!!!! There may be applications running on some of these ports that you do NOT want outside people using. Disallow access to these ports before allowing access to the 1024-65535 port range.

Firewalling FTP clients in passive mode: you need to let your clients open new connections to the outside world on ports 1024 and above.

Again, if your firewall understands FTP, none of the four points above apply to you. Let the firewall do the job for you.

6.7 What software uses what FTP mode?

It is up to the client to decide what mode to use; the default mode when a new connection is opened is ``active mode''. Most FTP clients come preconfigured to use active mode, but provide an option to use ``passive'' (``PASV'') mode. An exception is the Windows command-line FTP client, which only operates in active mode.

Web browsers generally use passive mode when connecting via FTP, with a weird exception: MSIE 5 will use active FTP when FTP:ing in ``File Explorer'' mode, and passive FTP when FTP:ing in ``Web Page'' mode. There is no reason whatsoever for this behaviour; my guess is that someone in Redmond with no knowledge of FTP decided that ``Of course we'll use active mode when we're in file explorer mode, since that looks more active than a web page''. Go figure.

6.8 Is my firewall trying to connect outside?

My firewall logs are telling me that my web server is trying to connect from port 80 to ports above 1024 on the outside. What is this?!

If you are seeing dropped packets from port 80 on your web server (or from port 25 on your mail server) to high ports on the outside, they usually do NOT mean that your web server is trying to connect somewhere. They are the result of the firewall timing out a connection, and then seeing the server retransmitting old responses (or trying to close the connection) to the client. TCP connections always involve packets traveling in BOTH directions.

If you are able to see the TCP flags in the dropped packets, you'll see that the ACK flag is set but not the SYN flag, meaning that this is actually not a new connection forming, but rather a response in a previously formed connection. See question 6.9 below for an in-depth explanation of what happens when TCP connections are formed (and closed).

6.9 The anatomy of a TCP connection

TCP is equipped with six ``flags'', which may be ON or OFF:

FIN ``Controlled'' connection close
SYN Open new connection
RST ``Immediate'' connection close
PSH Instruct the receiving host to push the data up to the application rather than just queue it
ACK ``Acknowledge'' a previous packet
URG ``Urgent'' data which needs to be processed immediately

In this example, your client is 5.6.7.8, and the port assigned to you dynamically is 1049. The server is 1.2.3.4, port 80.

You begin the connection attempt:

5.6.7.8:1049 -> 1.2.3.4:80 SYN=ON

The server receives this packet and understands that someone wants to form a new connection. A response is sent:

1.2.3.4:80 -> 5.6.7.8:1049 SYN=ON ACK=ON

The client receives the response, and acknowledges it:

5.6.7.8:1049 -> 1.2.3.4:80 ACK=ON

Here, the connection is opened. This is called a three-way handshake. Its purpose is to verify to BOTH hosts that they have a working connection between them.

The Internet being what it is, unreliable and flooded, there are provisions to compensate for packet loss. If the client sends out the initial SYN without receiving a SYN+ACK within a few seconds, it'll resend the SYN. If the server sends out the SYN+ACK without receiving an ACK in a few seconds, it'll resend the SYN+ACK packet. The latter is actually the reason that SYN flooding works so well. If you send out SYN packets from lots of different ports, this will tie up a lot of resources on the server. If you also refuse to respond to the returned SYN+ACK packets, the server will KEEP these half-open connections for a long time, resending the SYN+ACK packets. Some servers will not accept new connections while too many connections are currently forming; this is why SYN flooding works.

All packets transmitted in either direction after the three-way handshake will have the ACK bit set. Stateless packet filters make use of this in the so-called ``established'' filters: they will only let packets through that have the ACK bit set. This way, no packet may pass through in a certain direction that could form a new connection. Typically, you don't allow outside hosts to open new connections to inside hosts, by requiring the ACK bit to be set on these packets.
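A stateless ``established'' rule really is nothing more than a check on one flag bit. A sketch using the standard TCP flag bit values from the header's flags field:

```python
# TCP flag bits as they appear in the TCP header's flags field.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def established_rule_allows(flags):
    """A stateless ``established'' filter: pass a packet only if the
    ACK bit is set, i.e. it claims to belong to an existing connection."""
    return bool(flags & ACK)

# The initial SYN of a new inbound connection is blocked...
print(established_rule_allows(SYN))         # False
# ...while the rest of the handshake and all later data packets pass.
print(established_rule_allows(SYN | ACK))   # True
print(established_rule_allows(ACK | PSH))   # True
```

Note that the filter only checks what the packet *claims*; an attacker can freely craft ACK-flagged packets, which is why this is weaker than a stateful firewall that actually tracks connections.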
When the time has come to close the connection, there are two ways of doing it: Using the FIN flag, or using the RST flag. Using FIN flags, both implementations are required to send out FIN flags to indicate that they want to close the connection, and then send out acknowledgements to these FINs, indicating that they understood that the other end wants to close the connection. When sending out RST's, the connection is closed forcefully, and you don't really get an indication of whether the other end understood your reset order, or that it has in fact received all data that you sent to it. The FIN way of closing the connection also exposes you to a denial-of-service situation, since the TCP stack needs to remember the closed connection for a fairly long time, in case the other end hasn't received one of the FIN packets. If sufficiently many connections are opened and closed, you may end up having ``closed'' connections in all your connection slots. This way, you wouldn't be able to dynamically allocate more connections, seeing that they're all used. Different OSes handle this situation differently.

A. Some Commercial Products and Vendors

We feel this topic is too sensitive to address in a FAQ; however, an independently maintained list (no warranty or recommendations are implied) can be found online.

B. Glossary of Firewall-Related Terms

Abuse of Privilege
When a user performs an action that they should not have, according to organizational policy or law.

Access Control Lists
Rules for packet filters (typically routers) that define which packets to pass and which to block.

Access Router
A router that connects your network to the external Internet. Typically, this is your first line of defense against attackers from the outside Internet. By enabling access control lists on this router, you'll be able to provide a level of protection for all of the hosts ``behind'' that router, effectively making that network a DMZ instead of an unprotected external LAN.

Application-Layer Firewall
A firewall system in which service is provided by processes that maintain complete TCP connection state and sequencing. Application-layer firewalls often re-address traffic so that outgoing traffic appears to have originated from the firewall, rather than the internal host.

Authentication
The process of determining the identity of a user that is attempting to access a system.

Authentication Token
A portable device used for authenticating a user. Authentication tokens operate by challenge/response, time-based code sequences, or other techniques. This may include paper-based lists of one-time passwords.

Authorization
The process of determining what types of activities are permitted. Usually, authorization is in the context of authentication: once you have authenticated a user, they may be authorized for different types of access or activity.

Bastion Host
A system that has been hardened to resist attack, and which is installed on a network in such a way that it is expected to potentially come under attack. Bastion hosts are often components of firewalls, or may be ``outside'' web servers or public access systems. Generally, a bastion host is running some form of general-purpose operating system (e.g., Unix, VMS, NT, etc.) rather than a ROM-based or firmware operating system.
Challenge/Response
An authentication technique whereby a server sends an unpredictable challenge to the user, who computes a response using some form of authentication token.

Chroot
A technique under Unix whereby a process is permanently restricted to an isolated subset of the filesystem.

Cryptographic Checksum
A one-way function applied to a file to produce a unique ``fingerprint'' of the file for later reference. Checksum systems are a primary means of detecting filesystem tampering on Unix.

Data Driven Attack
A form of attack in which the attack is encoded in innocuous-seeming data which is executed by a user or other software to implement an attack. In the case of firewalls, a data driven attack is a concern since it may get through the firewall in data form and launch an attack against a system behind the firewall.

Defense in Depth
The security approach whereby each system on the network is secured to the greatest possible degree. May be used in conjunction with firewalls.

DNS spoofing
Assuming the DNS name of another system by either corrupting the name service cache of a victim system, or by compromising a domain name server for a valid domain.

Dual Homed Gateway
A system that has two or more network interfaces, each of which is connected to a different network. In firewall configurations, a dual homed gateway usually acts to block or filter some or all of the traffic trying to pass between the networks.

Encrypting Router
See Tunneling Router and Virtual Network Perimeter.

Firewall
A system or combination of systems that enforces a boundary between two or more networks.

Host-based Security
The technique of securing an individual system from attack. Host-based security is operating system and version dependent.

Insider Attack
An attack originating from inside a protected network.

Intrusion Detection
Detection of break-ins or break-in attempts, either manually or via software expert systems that operate on logs or other information available on the network.

IP Spoofing
An attack whereby a system attempts to illicitly impersonate another system by using its IP network address.

IP Splicing / Hijacking
An attack whereby an active, established session is intercepted and co-opted by the attacker. IP splicing attacks may occur after an authentication has been made, permitting the attacker to assume the role of an already authorized user. Primary protections against IP splicing rely on encryption at the session or network layer.

Least Privilege
Designing operational aspects of a system to operate with a minimum amount of system privilege. This reduces the authorization level at which various actions are performed and decreases the chance that a process or user with high privileges may be caused to perform unauthorized activity resulting in a security breach.

Logging
The process of storing information about events that occurred on the firewall or network.

Log Retention
How long audit logs are retained and maintained.

Log Processing
How audit logs are processed, searched for key events, or summarized.

Network-Layer Firewall
A firewall in which traffic is examined at the network protocol packet layer.

Perimeter-based Security
The technique of securing a network by controlling access to all entry and exit points of the network.

Policy
Organization-level rules governing acceptable use of computing resources, security practices, and operational procedures.

Proxy
A software agent that acts on behalf of a user. Typical proxies accept a connection from a user, make a decision as to whether or not the user or client IP address is permitted to use the proxy, perhaps do additional authentication, and then complete a connection on behalf of the user to a remote destination.

Screened Host
A host on a network behind a screening router. The degree to which a screened host may be accessed depends on the screening rules in the router.

Screened Subnet
A subnet behind a screening router. The degree to which the subnet may be accessed depends on the screening rules in the router.
Screening Router
A router configured to permit or deny traffic based on a set of permission rules installed by the administrator.

Session Stealing
See IP Splicing.

Trojan Horse
A software entity that appears to do something normal but which, in fact, contains a trapdoor or attack program.

Tunneling Router
A router or system capable of routing traffic by encrypting it and encapsulating it for transmission across an untrusted network, for eventual de-encapsulation and decryption.

Social Engineering
An attack based on deceiving users or administrators at the target site. Social engineering attacks are typically carried out by telephoning users or operators and pretending to be an authorized user, in an attempt to gain illicit access to systems.

Virtual Network Perimeter
A network that appears to be a single protected network behind firewalls, which actually encompasses encrypted virtual links over untrusted networks.

Virus
A replicating code segment that attaches itself to a program or data file. Viruses may or may not contain attack programs or trapdoors. Unfortunately, many have taken to calling any malicious code a ``virus''. If you mean ``trojan horse'' or ``worm'', say ``trojan horse'' or ``worm''.

Worm
A standalone program that, when run, copies itself from one host to another, and then runs itself on each newly infected host. The widely reported ``Internet Virus'' of 1988 was not a virus at all, but actually a worm.
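The glossary's ``cryptographic checksum'' entry can be illustrated in a few lines of Python using the standard hashlib module. This is a sketch of the idea, not any particular tool; the file written here is a temporary stand-in for a monitored system file.

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """Return the SHA-256 hex digest of a file's contents.

    Any change to the file, however small, yields a completely different
    digest, which is what makes checksums useful for detecting
    filesystem tampering.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstration with a temporary file standing in for a system binary
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"original program contents")
    path = tmp.name

baseline = fingerprint(path)      # recorded at install time

with open(path, "ab") as f:       # simulate an attacker's modification
    f.write(b"backdoor")

assert fingerprint(path) != baseline   # tampering is detected
os.unlink(path)
```

A real integrity checker stores the baseline digests somewhere the attacker cannot rewrite them, such as read-only media, since otherwise the attacker simply updates the checksums along with the files.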

What is computer security risk? Information security is concerned with three main areas:

Confidentiality: information should be available only to those who rightfully have access to it
Integrity: information should be modified only by those who are authorized to do so
Availability: information should be accessible to those who need it when they need it

These concepts apply to home Internet users just as much as they would to any corporate or government network. You probably wouldn't let a stranger look through your important documents. In the same way, you may want to keep the tasks you perform on your computer confidential, whether it's tracking your investments or sending email messages to family and friends. Also, you should have some assurance that the information you enter into your computer remains intact and is available when you need it. Before we get to what you can do to protect your computer or home network, let's take a closer look at some of these security risks.

1. Trojan horse program security risk
Trojan horse programs are a common way for intruders to trick you (sometimes referred to as social engineering) into installing back door programs. These can allow intruders easy access to your computer without your knowledge, change your system configurations, or infect your computer with a computer virus.

2. Chat client security risk
Internet chat applications, such as instant messaging applications and Internet Relay Chat (IRC) networks, provide a mechanism for information to be transmitted bi-directionally between computers on the Internet. Chat clients provide groups of individuals with the means to exchange dialog, web URLs, and in many cases, files of any type. As always, you should be wary of exchanging files with unknown parties.

3. Back door and remote administration program security risk
On Windows computers, three tools commonly used by intruders to gain remote access to your computer are BackOrifice, Netbus, and SubSeven. These back door or remote administration programs, once installed, allow other people to access and control your computer.

4. Unprotected Windows shares security risk
Unprotected Windows networking shares can be exploited by intruders in an automated way to place tools on large numbers of Windows-based computers attached to the Internet. Because site security on the Internet is interdependent, a compromised computer not only creates problems for the computer's owner, but is also a threat to other sites on the Internet.

5. Mobile code (Java/JavaScript/ActiveX) security risk
There have been reports of problems with mobile code (e.g. Java, JavaScript, and ActiveX). These are programming languages that let web developers write code that is executed by your web browser. Although the code is generally useful, it can be used by intruders to gather information (such as which web sites you visit) or to run malicious code on your computer. It is possible to disable Java, JavaScript, and ActiveX in your web browser. We recommend that you do so if you are browsing web sites that you are not familiar with or do not trust.

6. Cross-site scripting security risk
A malicious web developer may attach a script to something sent to a web site, such as a URL, an element in a form, or a database inquiry. Later, when the web site responds to you, the malicious script is transferred to your browser. You can potentially expose your web browser to malicious scripts by:
Following links in web pages, email messages, or newsgroup postings without knowing what they link to
Using interactive forms on an untrustworthy site
Viewing online discussion groups, forums, or other dynamically generated pages where users can post text containing HTML tags

7. Denial of service security risk
Another form of attack is called a denial-of-service (DoS) attack. This type of attack causes your computer to crash or to become so busy processing data that you are unable to use it. In most cases, the latest patches will prevent the attack.

8. Being an intermediary for another attack security risk
Intruders will frequently use compromised computers as launching pads for attacking other systems. An example of this is how distributed denial-of-service (DDoS) tools are used. The intruders install an agent (frequently through a Trojan horse program) that runs on the compromised computer awaiting further instructions. Then, when a number of agents are running on different computers, a single handler can instruct all of them to launch a denial-of-service attack on another system. Thus, the end target of the attack is not your own computer, but someone else's; your computer is just a convenient tool in a larger attack.

9. Email spoofing security risk
Email spoofing is when an email message appears to have originated from one source when it actually was sent from another. Email spoofing is often an attempt to trick the user into making a damaging statement or releasing sensitive information (such as passwords). Spoofed email can range from harmless pranks to social engineering ploys. Examples of the latter include:
Email claiming to be from a system administrator requesting users to change their passwords to a specified string and threatening to suspend their account if they do not comply
Email claiming to be from a person in authority requesting users to send them a copy of a password file or other sensitive information
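Email spoofing is possible because the "From" header is simply data the sender supplies; nothing checks it at composition time. A small sketch with Python's standard email package makes this concrete (all addresses here are made up):

```python
from email.message import EmailMessage

# Nothing validates these headers when the message is composed.
# A mail server that does not verify senders will relay the result,
# and the recipient's client will display the forged "From" as-is.
msg = EmailMessage()
msg["From"] = "admin@example.com"   # forged: the attacker does not own this address
msg["To"] = "victim@example.com"
msg["Subject"] = "Password reset required"
msg.set_content("Please change your password to the string we specify immediately.")

print(msg["From"])   # the claim the recipient sees
```

Defenses such as SPF, DKIM, and DMARC exist precisely because the protocol itself does not authenticate this header.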

10. Email-borne viruses security risk
Viruses and other types of malicious code are often spread as attachments to email messages. Before opening any attachments, be sure you know the source of the attachment. It is not enough that the mail originated from an address you recognize: the Melissa virus spread precisely because it originated from a familiar address. Also, malicious code might be distributed in amusing or enticing programs.

11. Hidden file extensions security risk
Windows operating systems contain an option to "Hide file extensions for known file types". The option is enabled by default, but a user may choose to disable it in order to have file extensions displayed by Windows. Multiple email-borne viruses are known to exploit hidden file extensions. The first major attack that took advantage of a hidden file extension was the VBS/LoveLetter worm, which contained an email attachment named LOVE-LETTER-FOR-YOU.TXT.vbs. Other malicious programs have since incorporated similar naming schemes. Examples include:
Downloader (MySis.avi.exe or QuickFlick.mpg.exe)
VBS/Timofonica (TIMOFONICA.TXT.vbs)
VBS/CoolNote (COOL_NOTEPAD_DEMO.TXT.vbs)
VBS/OnTheFly (AnnaKournikova.jpg.vbs)

12. Packet sniffing security risk A packet sniffer is a program that captures data from information packets as they travel over the network. That data may include user names, passwords, and proprietary information that travels over the network in clear text. With perhaps hundreds or thousands of passwords captured by the packet sniffer, intruders can launch widespread attacks on systems. Installing a packet sniffer does not necessarily require administrator-level access.
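What a sniffer does after capturing traffic is straightforward parsing of packet bytes. A sketch decoding a fixed IPv4 header with Python's standard struct module shows the kind of information exposed; the raw bytes here are hand-crafted for illustration rather than captured from a live network.

```python
import socket
import struct

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte IPv4 header a sniffer would see."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-crafted header: IPv4, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
info = parse_ipv4_header(hdr)
print(info)
```

A real sniffer reads such bytes from a capture interface (which usually does require elevated privileges to open) and then decodes the TCP or UDP payload the same way, which is where clear-text passwords would appear.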

Relative to DSL and traditional dial-up users, cable modem users have a higher risk of exposure to packet sniffers, since entire neighborhoods of cable modem users are effectively part of the same LAN. A packet sniffer installed on any cable modem user's computer in a neighborhood may be able to capture data transmitted by any other cable modem in the same neighborhood. In the next article, I'll give some advice on how to protect your computer.

Understanding Network Attacks
A network attack can be defined as any method, process, or means used to maliciously attempt to compromise network security. There are a number of reasons that individuals would want to attack corporate networks. The individuals performing network attacks are commonly referred to as network attackers, hackers, or crackers. A few different types of malicious activities that network attackers and hackers perform are summarized here:
Illegally using user accounts and privileges
Stealing hardware
Stealing software
Running code to damage systems
Running code to damage and corrupt data
Modifying stored data
Stealing data
Using data for financial gain or for industrial espionage
Performing actions that prevent legitimate authorized users from accessing network services and resources
Performing actions to deplete network resources and bandwidth

A few reasons for network attackers attempting to attack corporate networks are listed here:
Individuals seeking fame or some sort of recognition. Script kiddies usually seek some form of fame when they attempt to crash Web sites and other public targets on the Internet. A script kiddie could also be looking for some form of acceptance or recognition from the hacker community or from black hat hackers.
Possible motives for structured external threats include:
o Greed
o Industrial espionage
o Politics
o Terrorism
o Racism
o Criminal payoffs
Displeased employees might seek to damage the organization's data, reliability, or financial standing.
Some network attackers simply enjoy the challenge of trying to compromise highly secured network security systems. These attackers see their actions as a means of exposing existing security vulnerabilities.

Network attacks can be classified into the following four types:
Internal threats
External threats
o Unstructured threats
o Structured threats

Threats to the network can be initiated from a number of different sources, hence the classification of network attacks as either external or internal:

External threats: Individuals carry out external threats or network attacks without assistance from internal employees or contractors. A malicious and experienced individual, a group of experienced individuals, an experienced malicious organization, or inexperienced attackers (script kiddies) carry out these attacks. Such attackers usually have a predefined plan and the technologies (tools) or techniques to carry out the attack. One of the main characteristics of external threats is that they usually involve scanning and gathering information. Users can therefore detect an external attack by scrutinizing existing firewall logs. Users can also install an Intrusion Detection System to quickly identify external threats. External threats can be further categorized into either structured threats or unstructured threats:
o Structured external threats: These threats originate from a malicious individual, a group of malicious individuals, or a malicious organization. Structured threats are usually initiated by network attackers who have a premeditated plan for the actual damage and losses they want to cause. Possible motives include greed, politics, terrorism, racism, and criminal payoffs. These attackers are highly skilled in network design, in evading security measures, Intrusion Detection Systems (IDSs), and access procedures, and in hacking tools. They have the necessary skills to develop new network attack techniques and the ability to modify existing hacking tools for their exploitations. In certain cases, an internal authorized individual may assist the attacker.
o Unstructured external threats: These threats originate from an inexperienced attacker, typically a script kiddie. Script kiddie refers to an inexperienced attacker who uses cracking tools or scripted tools readily available on the Internet to perform a network attack. Script kiddies are usually inadequately skilled to create the threats on their own. They can be considered bored individuals seeking some form of fame by attempting to crash websites and other public targets on the Internet.

External attacks can also occur either remotely or locally:
o Remote external attacks: These attacks are usually aimed at the services that an organization offers to the public. The various forms that remote external attacks can take are:
Remote attacks aimed at the services available to internal users. This remote attack usually occurs when there is no firewall solution implemented to protect these internal services.
Remote attacks aimed at locating modems to access the corporate network.
Denial of service (DoS) attacks that place an exceptional processing load on servers in an attempt to prevent authorized user requests from being serviced.
War dialing of the corporate private branch exchange (PBX).
Attempts to brute force password authenticated systems.
o Local external attacks: These attacks typically originate from situations where computing facilities are shared and access to the system can be obtained.

Internal threats: Internal attacks originate from dissatisfied or unhappy inside employees or contractors. Internal attackers have some form of access to the system and usually try to disguise their attack as a normal process. For instance, disgruntled internal employees already have local access to some resources on the internal network. They could also have some administrative rights on the network. One of the best means of protecting against internal attacks is to implement an Intrusion Detection System and to configure it to scan for both external and internal attacks. All forms of attacks should be logged, and the logs should be reviewed and followed up.

With respect to network attacks, the core components that should be included when users design network security are:
Network attack prevention
Network attack detection
Network attack isolation
Network attack recovery

What is Hacking?
The term hacking initially referred to the process of finding solutions to rather technical issues or problems. These days, hacking refers to the process whereby intruders maliciously attempt to compromise the security of corporate networks to destroy, intercept, or steal confidential data, or to prevent an organization from operating. Terminologies that refer to criminal hacking:
Cracking
Cybercrime
Cyberespionage
Phreaking

To access a network system, the intruder (hacker) performs a number of activities:

Footprinting: This is basically the initial step in hacking a corporate network. Here the intruder attempts to gain as much information as possible on the targeted network by using sources that the public can access. The aim of footprinting is to create a map of the network to determine what operating systems, applications, and address ranges are being utilized, and to identify any accessible open ports. The methods used to footprint a network are:
o Access information publicly available on the company website to gain any useful information.
o Try to find any anonymous File Transfer Protocol (FTP) sites and intranet sites that are not secured.
o Gather information on the company's domain name and the IP address block being used.
o Test for hosts in the network's IP address block. Tools such as Ping or Fping are typically used.
o Using tools such as Nslookup, the intruder attempts to perform Domain Name System (DNS) zone transfers.
o A tool such as Nmap is used to find out which operating systems are being used.
o Tools such as Tracert are used to find routers and to collect subnet information.

Port scanning: Port scanning, or scanning, is when intruders collect information on the network services on a target network. Here, the intruder attempts to find open ports on the target system. The different scanning methods that network attackers use are:
o Vanilla scan/SYN scan: TCP SYN packets are sent to each port in an attempt to connect to all ports. Port numbers 0 through 65,535 are utilized.
o Strobe scan: Here, the attacker attempts to connect to a specific range of ports that are typically open on Windows-based hosts or UNIX/Linux-based hosts.
o Sweep: A large set of IP addresses is scanned in an attempt to detect a system that has one open port.
o Passive scan: Here, all network traffic entering or leaving the network is captured, and the traffic is then analyzed to determine what the open ports are on the hosts within the network.
o User Datagram Protocol (UDP) scan: Empty UDP packets are sent to the different ports of a set of addresses to determine how the operating system responds. Closed UDP ports respond with a Port Unreachable message when empty UDP packets are received; operating systems differ in the Internet Control Message Protocol (ICMP) error packets they return.
o FTP bounce: To hide the attacker's location, the scan is initiated from an intermediary File Transfer Protocol (FTP) server.
o FIN scan: TCP FIN packets, which specify that the sender wants to close a TCP session, are sent to each port for a range of IP addresses.

Enumeration: The unauthorized intruder uses a number of methods to collect information on the applications and hosts on the network and on the user accounts utilized on the network. Enumeration is particularly successful in networks that contain unprotected network resources and services:
o Network services that are running but are not being utilized.
o Default user accounts that have no passwords specified.

o Guest accounts that are active.

Acquiring access: Access attacks are performed when an attacker exploits a security weakness to obtain access to a system or the network. Trojan horses and password hacking programs are typically used to obtain system access. When access is obtained, the intruder is able to modify or delete data and to add, modify, or remove network resources. The different types of access attacks are:
o Unauthorized system access entails the practice of exploiting the vulnerabilities of operating systems, or executing a script or a hacking program, to obtain access to a system.
o Unauthorized privilege escalation is a frequent type of attack. Privilege escalation occurs when an intruder attempts to obtain a higher level of access, like administrative privileges, to gain control of the network system.
o Unauthorized data manipulation involves intercepting, altering, and deleting confidential data.

Privilege escalation: When an attacker initially gains access to the network, low-level accounts are typically used. Privilege escalation occurs when the attacker escalates his/her privileges to obtain a higher level of access, like administrative privileges, in order to gain control of the network system. The privilege escalation methods that attackers use are:
o The attacker searches the registry keys for password information.
o The attacker can search documents for information on administrative privileges.
o The attacker can execute a password cracking tool on targeted user accounts.
o The attacker can use a Trojan in an attempt to obtain the credentials of a user account that has administrative privileges.

Install backdoors: A hacker can also implement a mechanism, such as some form of access-granting code, with the intent of using it at some future stage. Attackers typically install back doors so that they can easily access the system at some later date. After a system is compromised, users can remove any installed backdoors by reinstalling the system from a backup that is secure.

Removing evidence of activities: Attackers typically attempt to remove all evidence of their activities.
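The connect-style port scanning described above can be sketched with Python's standard socket module. To keep the sketch self-contained (and to avoid scanning anything real), the target is a throwaway listener on localhost:

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Attempt a full TCP connect; True means the port accepted (open)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))    # full three-way handshake = "vanilla" scan
        return True
    except OSError:                # refused or timed out = closed / filtered
        return False
    finally:
        s.close()

# Open a listener so the scan has something to find (localhost only)
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(scan_port("127.0.0.1", open_port))   # True: port is listening
listener.close()
print(scan_port("127.0.0.1", open_port))   # False: nothing listening now
```

A SYN scan differs in that it never completes the handshake (it sends SYN, reads the SYN+ACK, then sends RST), which requires raw sockets and elevated privileges; the connect scan above is the unprivileged equivalent.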

What are Hackers or Network Attackers?
A hacker or network attacker is someone who maliciously attacks networks, systems, computers, and applications and captures, corrupts, modifies, steals, or deletes confidential company information. The term hacker can refer to a number of different individuals, some of whom perform activities that have nothing to do with criminal activity:
Programmers who hack complex technical problems to come up with solutions.
Script kiddies who use readily available tools on the Internet to hack into systems.
Criminal hackers who steal or destroy company data.
Protesting activists who deny access to specific Web sites as part of their protesting strategy.

Hackers these days are classified according to the hat they wear. This concept is illustrated below:
Black hat hackers are malicious or criminal hackers who hack at systems and computers to damage data or who attempt to prevent businesses from rendering their services. Some black hat hackers simply hack security-protected systems to gain prestige in the hacking community.
White hat hackers are legitimate security experts who are trying to expose security vulnerabilities in operating system platforms. White hat hackers have the improvement of security as their motive. They do not damage or steal company data, nor do they seek any fame. These security experts are usually quite knowledgeable about the hacking methods that black hat hackers use.
Grey hat hackers are individuals whose motives fall somewhere between those of black hat hackers and white hat hackers.

The Common Types of Network Attacks
While there are many different types of network attacks, a few can be regarded as the more commonly performed. These network attacks are discussed in this section of the article:

Data modification or data manipulation pertains to a network attack where confidential company data is intercepted, deleted, or modified. Data modification is successful when data is modified without the sender actually being aware that it was tampered with. A few methods of preventing attacks aimed at compromising data integrity are listed here:
o Use digital signatures to ensure that data has not been modified while it is being transmitted or simply stored.
o Implement access control lists (ACLs) to control which users are allowed to access your data.
o Regularly back up important data.
o Include specific code in applications that can validate data input.

Eavesdropping: This type of network attack occurs when an attacker monitors or listens to network traffic in transit and then interprets all unprotected data. While users need specialized equipment and access to the telephone company's switching facilities to eavesdrop on telephone conversations, all they need to eavesdrop on an Internet Protocol (IP) based network is sniffer technology to capture the traffic being transmitted. This is basically due to the Transmission Control Protocol/Internet Protocol (TCP/IP) being an open architecture that transmits unencrypted data over the network. A few methods of preventing intruders from eavesdropping on the network are:
o Implement Internet Protocol Security (IPSec) to secure and encrypt IP data before it is sent over the network.
o Implement security policies and procedures to prevent attackers from attaching a sniffer to the network.
o Install anti-virus software to protect the corporate network from Trojans. Trojans are typically used to discover and capture sensitive, valuable information such as user credentials.
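The integrity-protection idea behind digital signatures can be approximated with a keyed message authentication code (HMAC) from Python's standard library. This is a sketch, not a full signature scheme (real digital signatures use asymmetric keys); the shared key here is a placeholder.

```python
import hashlib
import hmac

KEY = b"shared-secret-key"   # placeholder; in practice, a securely exchanged key

def sign(message):
    """Attach a keyed MAC; only holders of KEY can produce a valid tag."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

original = b"transfer $100 to account 12345"
tag = sign(original)

print(verify(original, tag))                           # True: data is untampered
print(verify(b"transfer $9000 to account 666", tag))   # False: modification detected
```

An attacker who modifies the message in transit cannot produce a matching tag without the key, which is exactly the property the "use digital signatures" advice above relies on.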

IP address spoofing/identity spoofing: IP address spoofing occurs when an attacker assumes the source Internet Protocol (IP) address of IP packets to make it appear as though the packets originated from a valid IP address. The aim of an IP address spoofing attack is to masquerade as a trusted computer on the network. Most IP networks use the source IP address to verify identities, yet routers typically ignore source IP addresses when routing packets; routers use the destination IP addresses to forward packets to the intended destination network. These factors can enable an attacker to bypass a router and launch a number of subsequent attacks, including:
o Initiation of denial of service (DoS) attacks.
o Initiation of man in the middle (MITM) attacks to hijack sessions.
o Redirection of traffic.
A few methods of preventing IP address spoofing attacks are:
o Encrypt traffic between routers and external hosts.
o Define ingress filters on routers and firewalls to stop inbound traffic whose source address claims to be from a trusted host on the internal network.
Sniffer attacks: Sniffing refers to the process that attackers use to capture and analyze network traffic; the contents of packets on a network are analyzed. The tools that attackers use for sniffing are called sniffers or, more correctly, protocol analyzers. While protocol analyzers are really network troubleshooting tools, hackers also use them for malicious purposes. Sniffers monitor, capture, and obtain network information such as passwords and valuable customer information. An individual with physical access to a network can easily attach a protocol analyzer to the network and capture traffic. Remote sniffing can also be performed, and network attackers typically use it. Protocol analyzers or sniffers are available for most networking technologies, including:
o Asynchronous Transfer Mode (ATM)
o Ethernet
o Fibre Channel
o Serial connections
o Small Computer System Interface (SCSI)
o Wireless

There are a number of common sniffers that network security administrators and malicious hackers use:
o Dsniff
o Ethereal
o EtherPeek
o Network Associates' Sniffer
o Ngrep
o Sniffit
o Snort
o Tcpdump
o WinDump

To protect against sniffers, implement Internet Protocol Security (IPSec) to encrypt network traffic so that any captured information cannot be interpreted.
Password attacks: Password-based attacks, or password crackers, are aimed at guessing the password for a system until the correct password is determined. One of the primary security weaknesses of password-based access control is that all security rests on the user ID and password being utilized. But who is the individual at the keyboard using those credentials? Some older applications do not protect password information at all: the password is simply sent in clear or plain text, with no form of encryption. Remember that network attackers who obtain user ID and password information can pose as authorized users and attack the corporate network. Attackers can use dictionary attacks or brute force attacks to gain access to resources with the same rights as the authorized user. The threat is greater if the user has some level of administrative rights to certain portions of the network, and greater still if the same password credentials are used for all systems, because the attacker would then have access to a number of systems. Password-based attacks are performed in two ways:
o Online cracking: The network attacker sniffs network traffic to seize authentication sessions in an attempt to capture password information. There are tools geared specifically at sniffing passwords out of traffic.
o Offline cracking: The network attacker gains access to a system with the intent of obtaining password information, then runs password cracker technology to decipher valid user account information.
A dictionary attack occurs when all the words typically used for passwords are attempted in turn to detect a match. There are technologies that can generate large numbers of complex word combinations and variations.
Modern operating systems store passwords only in an encrypted format; to obtain password credentials, a user has to have administrative credentials to access the system and its information. Operating systems these days also support password policies. Password policies define how passwords are managed and define the characteristics of passwords that are considered acceptable. Password policy settings can be used to specify and enforce a number of rules for passwords:
o Define whether passwords are simple or complex
o Define whether password history is maintained
o Define the minimum length for passwords
o Define the minimum password age
o Define the maximum password age
o Define whether passwords are stored with reversible encryption or irreversible encryption

Account lockout policies should be implemented if the environment is particularly vulnerable to threats arising from passwords being guessed. Implementing an account lockout policy ensures that a user's account is locked after an individual has unsuccessfully tried several times to provide the correct password. The important factor to remember when defining an account lockout policy is to implement a policy that permits some degree of user error but also prevents hackers from misusing the user accounts. The following password and account lockout settings are located in the Account Lockout Policy area of Account Policies:
o Account lockout threshold: This setting controls the number of incorrect password attempts after which the account is locked out of the system.
o Account lockout duration: This setting controls the duration for which a locked account remains locked. A setting of 0 means that an administrator has to manually unlock the account.
o Reset account lockout counter after: This setting determines the time that must pass after an invalid logon attempt before the account lockout counter is reset.
Brute force attack: Brute force attacks simply attempt to decode a cipher by trying each possible key to find the correct one. This type of network attack systematically uses all possible alpha, numeric, and special character key combinations to find a password that is valid for a user account. Brute force attacks are also typically used to compromise networks that utilize the Simple Network Management Protocol (SNMP). Here, the network attacker initiates a brute force attack to find the SNMP community names so that he/she can map the devices and services running on the network. A few methods of preventing brute force attacks are listed here:
o Enforce the use of long password strings.
o For SNMP, use long, complex strings for community names.
o Implement an intrusion detection system (IDS). By examining traffic patterns, an IDS is capable of detecting when brute force attacks are underway.
Denial of Service (DoS) attack: A DoS attack is aimed at preventing authorized, legitimate users from accessing services on the network. The DoS attack is not aimed at gathering or collecting data; it is aimed at preventing authorized, legitimate users from using computers or the network normally. The SYN flood of 1996 was the earliest form of DoS attack, exploiting a Transmission Control Protocol (TCP) vulnerability. A DoS attack can be initiated by sending invalid data to applications or network services until the server hangs or simply crashes. TCP-based attacks are the most common form of DoS attack. DoS attacks can use any of the following methods to prevent authorized users from using network services, computers, or applications:
o Flood the network with invalid data until traffic from authorized network users cannot be processed.
o Flood the network with invalid network service requests until the host providing that particular service cannot process requests from authorized network users. The network would eventually become overloaded.
o Disrupt communication between hosts and clients through modification of system configurations or through physical network destruction. Crashing a router, for instance, would prevent users from accessing the system.
There are a number of tools easily accessible and available on the Internet that can initiate DoS attacks:
o Bonk
o LAND
o Smurf
o Teardrop
o WinNuke

A network attacker can increase the magnitude of a DoS attack by initiating the attack against a single network from multiple computers or systems. This type of attack is known as a distributed denial of service (DDoS) attack. Network administrators can experience great difficulty in fending off DDoS attacks, simply because blocking all the attacking computers can also result in blocking authorized users. The following measures can be implemented to protect a network against DoS attacks:
o Implement and enforce strong password policies.
o Back up system configuration data regularly.
o Disable or remove all unnecessary network services.
o Implement disk quotas for user and service accounts.
o Configure filtering on the routers and patch operating systems.

The following measures can be implemented to protect a network against DDoS attacks:
o Limit the number of ICMP and SYN packets on router interfaces.
o Filter private IP addresses using router access control lists.
o Apply ingress and egress filtering on all edge routers.
Man in the middle (MITM) attack: A man in the middle (MITM) attack occurs when a hacker inserts himself/herself into a communication session and monitors, captures, and controls the data being sent between the two communicating parties. The attacker attempts to obtain enough information to impersonate the receiver and the sender. For an MITM attack to be successful, the following sequence of events has to occur:
o The hacker must be able to obtain access to the communication session in order to capture traffic when the receiver and sender establish the secure communication session.
o The hacker must be able to capture the messages being sent between the parties and then send messages of his/her own so that the session remains active.
Some public key cryptography systems, such as the Diffie-Hellman (DH) key exchange, are susceptible to man in the middle attacks because the basic Diffie-Hellman (DH) key exchange performs no authentication of the communicating parties.
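The unauthenticated nature of basic Diffie-Hellman can be seen in a toy sketch. The prime, generator, and private keys below are deliberately tiny illustrative values; real DH uses primes of 2048 bits or more.

```python
# Toy Diffie-Hellman exchange. Note that nothing in the protocol
# proves WHO sent A or B -- that missing authentication is what a
# man-in-the-middle attacker exploits.
p, g = 23, 5             # public prime modulus and generator (toy values)

a = 6                    # Alice's private key (hypothetical)
b = 15                   # Bob's private key (hypothetical)

A = pow(g, a, p)         # Alice transmits A = g^a mod p
B = pow(g, b, p)         # Bob transmits B = g^b mod p

alice_secret = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)     # Bob computes (g^a)^b mod p

assert alice_secret == bob_secret   # both sides derive the same secret
print(alice_secret)                 # -> 2
```

An attacker who intercepts A and B can substitute her own public values, establishing one "shared" secret with Alice and another with Bob, and neither party can detect the substitution from the exchange alone.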

What are Viruses? A virus is malicious code that infects system files and then replicates, creating numerous instances of itself. Viruses usually lead to some sort of data loss and/or system failure. There are numerous methods by which a virus can get into a system:
o Through infected floppy disks
o Through an e-mail attachment infected with the virus
o Through downloading software infected with the virus

A few common types of viruses are:
o Boot sector viruses: These viruses infect a hard drive's master boot record. The virus is then loaded into memory whenever the system starts or is rebooted.
o File viruses (also called program viruses or parasitic viruses): These viruses attach themselves to executable programs. Whenever the particular program is executed, the virus is loaded into memory.
o Multipartite viruses: These viruses combine a boot sector virus and a file virus.
o Macro viruses: These viruses are written in the macro languages that applications such as Microsoft Word use. Macro viruses usually infect systems through e-mail.
o Polymorphic viruses: These can be considered the most difficult viruses to defend against because they can modify their own code. Virus protection software often finds polymorphic viruses harder to detect and remove.

If a virus infects a system, use the recommendations listed here:
o Scan each system to gauge how widely infected the infrastructure is.
o To prevent the virus from spreading any further, immediately disconnect all infected systems.
o All infected systems should be restored from a clean backup copy, that is, a backup taken when the system was free of virus infections.
o Inform the anti-virus vendor so that the vendor's virus signature database is updated accordingly.

A few methods of protecting network infrastructure against viruses are:
o Install virus protection software on systems.
o Regularly update all installed virus protection software.
o Regularly back up systems after they have been scanned for viruses and are considered free of virus infection.
o Educate users not to open any e-mail attachments sent from individuals they do not recognize.

What are Worms? As mentioned previously, a virus is malicious code that infects files on a system. A worm, on the other hand, is autonomous code that spreads over a network, targeting hard drive space and processor cycles. Worms do not only infect files on one system; they spread to other systems on the network. The purpose of a worm is to deplete available system resources, which is why a worm repeatedly makes copies of itself. Worms basically replicate until available memory is exhausted, bandwidth is unavailable, and legitimate network users are no longer able to access network resources or services. A few worms are sophisticated enough to corrupt files, render systems inoperable, and even steal data; these worms usually carry one or more viral codes. A few previously encountered worms are:
o The ADMw0rm worm took advantage of a buffer overflow in Berkeley Internet Name Domain (BIND).
o The Code Red worm utilized a buffer overflow vulnerability in Microsoft Internet Information Services (IIS) version 4 and IIS version 5.
o The LifeChanges worm exploited a Microsoft Windows weakness that allowed scrap shell files to be utilized for running arbitrary code.
o The LoveLetter worm used a Visual Basic Script to replicate, or mass mail itself, to all individuals in the Windows address book.
o The Melissa worm utilized a Microsoft Outlook and Outlook Express vulnerability to mass mail itself to all individuals in the Windows address book.
o The Morris worm exploited a Sendmail debug mode vulnerability.
o The Nimda worm managed to run e-mail attachments in Hypertext Markup Language (HTML) messages through exploitation of the HTML IFRAME tag.
o The Slapper worm exploited an Apache Web server platform buffer overflow vulnerability.
o The Slammer worm exploited a buffer overflow vulnerability on unpatched machines running Microsoft SQL Server.

What are Trojan Horses? A Trojan horse, or Trojan, is a file or e-mail attachment disguised as a friendly, legitimate file. When executed, though, the file corrupts data and can even install a backdoor that hackers can utilize to access the network. A Trojan horse differs from a virus or worm in the following ways:
o Trojan horses disguise themselves as friendly programs, whereas viruses and worms are much more obvious in their actions.
o Trojan horses do not replicate, as worms and viruses do.

A few different types of Trojan horses are listed here:

o Keystroke loggers monitor the keystrokes that a user types and then e-mail the information to the network attacker.
o Password stealers are disguised as legitimate login screens that wait for users to provide their passwords so that hackers can steal them.
o Remote administration tools (RATs) are used by hackers to gain control over the network from some remote location.
o Zombies are typically used to initiate distributed denial of service (DDoS) attacks on hosts within a network.

Predicting Network Threats To protect network infrastructure, you need to be able to predict the types of network threats to which it is vulnerable. This should include an analysis of the risk that each identified network threat poses to the network infrastructure. Security experts use a model known as STRIDE to classify network threats:
o Spoofing identity: These attacks are aimed at obtaining user account information. Spoofing identity attacks typically affect data confidentiality.
o Tampering with data: These attacks are aimed at modifying company information. Data tampering usually ends up affecting the integrity of data. A man-in-the-middle attack is a form of data tampering.
o Repudiation: Repudiation takes place when a user performs some form of malicious action on a resource and then later denies carrying out that activity. Network administrators usually have no evidence to back up their suspicions.
o Information disclosure: Here, private and confidential information is made available to individuals who should not have access to it. Information disclosure usually impacts data confidentiality and network resource confidentiality.
o Denial of service: These attacks affect the availability of company data and network resources and services. DoS attacks are aimed at preventing legitimate users from accessing network resources and data.
o Elevation of privilege: Elevation of privilege occurs when an attacker escalates his/her privileges to obtain a higher level of access, such as administrative privileges, in an attempt to gain control of the network system.
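The STRIDE categories above can be captured as a simple enumeration for use in a threat-modeling script. The mapping of example threats to categories is illustrative:

```python
from enum import Enum

class Stride(Enum):
    """The six STRIDE threat categories."""
    SPOOFING = "Spoofing identity"
    TAMPERING = "Tampering with data"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# Hypothetical threats identified during an analysis, classified by
# what the attacker is trying to achieve:
threats = {
    "Rogue DHCP server answers client requests": Stride.SPOOFING,
    "SYN flood against the Web server": Stride.DENIAL_OF_SERVICE,
    "MITM alters data in transit": Stride.TAMPERING,
}

for description, category in threats.items():
    print(f"{category.value}: {description}")
```

Classifying each identified threat this way makes it easier to check that every category has a planned response rather than reacting attack by attack.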

Identifying Threats to DHCP Implementations A few threats specific to DHCP implementations are:
o Because the number of IP addresses in a DHCP scope is limited, an unauthorized user could initiate a denial of service (DoS) attack by requesting or obtaining a large number of IP addresses.
o A network attacker could use a rogue DHCP server to offer incorrect IP addresses to DHCP clients.
o A denial of service (DoS) attack can be launched through an unauthorized user performing a large number of DNS dynamic updates via the DHCP server.
o Assigning DNS IP addresses and WINS IP addresses through the DHCP server increases the possibility of hackers using this information to attack DNS and WINS servers.

To protect a DHCP environment from network attacks, use the following strategies:
o Implement firewalls.
o Close all open, unused ports.
o If necessary, use VPN tunnels.
o Use MAC address filters.

Identifying Threats to DNS Implementations A few threats specific to DNS implementations are:
o Denial of service (DoS) attacks occur when DNS servers are flooded with recursive queries in an attempt to prevent the DNS server from servicing legitimate client requests for name resolution. A successful DoS attack can result in the unavailability of DNS services and eventual network shutdown.
o Footprinting occurs when an intruder intercepts DNS zone information. With this information, the intruder is able to discover DNS domain names, computer names, and IP addresses being used on the network, and then uses it to decide which computers he/she wants to attack.
o IP spoofing: After an intruder has obtained a valid IP address from a footprinting attack, he/she can use the address to send malicious packets to the network or access network services. The intruder can also use the valid IP address to modify data.
o A redirection attack occurs when an intruder is able to make the DNS server forward or redirect name resolution requests to incorrect servers that are under the intruder's control. A redirection attack is achieved when an intruder corrupts the DNS cache of a DNS server that accepts unsecured dynamic updates.

To protect an external DNS implementation from network attacks, use the following list of recommendations:
o DNS servers should be placed in a DMZ or perimeter network.
o Access rules and packet filtering should be configured on firewalls to control both source and destination addresses and ports.
o Host DNS servers on different subnets and ensure that the DNS servers have differently configured routers.
o Install the latest service packs on DNS servers.
o All unnecessary services should be removed.
o Secure zone transfer data by using VPN tunnels or IPSec, and ensure that zone transfer is only allowed to specific IP addresses.
o For Internet-facing DNS servers, disable recursion, disable dynamic updates, and enable protection against cache pollution.
o Use a stealth primary server to update secondary DNS servers that are registered with ICANN.

Identifying Threats to Internet Information Server (IIS) Servers (Web servers) The security vulnerabilities of the earlier Internet Information Server (IIS) versions, up to and including IIS version 5, were continuously patched by service packs and hotfixes available from Microsoft. Previously, when IIS was installed, all services were enabled and started, all service accounts had high system rights, and permissions were assigned at the lowest levels. This basically meant that the IIS implementation was vulnerable to all sorts of attacks from hackers. Microsoft introduced the Security Lockdown Wizard in an attempt to address the security loopholes and vulnerabilities that existed in the previous versions of IIS. In IIS 6, this lockdown functionality is included in Web Service Extensions (WSE), and IIS 6 is installed in locked-down mode: the only feature immediately available is static content. By default, all applications and extensions are prohibited from running; you need to use the WSE feature in the IIS Manager console tree to manually enable IIS to run applications and its features. To protect IIS servers from network attacks, use the following recommendations:
o To prevent hackers from using default account names, all default account names, including the Administrator account and Guest account, should be changed. Utilize names that are difficult to guess.
o To prevent a hacker from compromising Active Directory should the Web server be compromised, the Web server should be a stand-alone server or a member of a forest other than the forest that the private network uses.
o All the latest released security updates, service packs, and hotfixes should be applied to the Web server.
o All sample applications should be removed from a Web server. A few sample application files are installed by default with IIS 5.0.
o All unnecessary services should be removed or disabled. This ensures that network attackers cannot exploit these services to compromise the Web server.
o Disable parent path utilization. Hackers typically attempt to access unauthorized disk subsystem areas through parent paths.
o Apply security to each content type. Content should be categorized into separate folders based on content type, with discretionary access control lists applied to each content type identified.
o To protect commonly attacked ports, use IPSec.

To protect the Web server's secure areas, use the Secure Sockets Layer (SSL) protocol. To detect hacking activity, implement an intrusion detection system (IDS). A few recommendations for writing secure code for ASP or ASP.NET applications are summarized here:
o ASP pages should not contain any hard-coded administrator account names or administrator account passwords.
o Sensitive and confidential information and data should not be stored in hidden input fields on Web pages or in cookies.
o Verify and validate form input prior to it being processed.
o Do not use information from HTTP request headers to code decision branches for applications.
o Be wary of buffer overflows that unsound coding standards generate.
o Use Secure Sockets Layer (SSL) to encrypt session cookies.
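The form-input validation recommendation above usually takes the form of whitelist checks: accept only values matching an expected pattern rather than trying to enumerate dangerous characters. This sketch uses hypothetical field names and illustrative patterns:

```python
import re

# Whitelist patterns per form field. These patterns are illustrative,
# not exhaustive; tune them to the data each field legitimately holds.
PATTERNS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "zip_code": re.compile(r"^\d{5}$"),
}

def validate(field: str, value: str) -> bool:
    """Return True only when the field is known and the value matches
    its whitelist pattern; unknown fields are rejected outright."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

assert validate("username", "alice_42")
assert not validate("username", "<script>alert(1)</script>")  # rejected
assert not validate("zip_code", "1234")                        # too short
```

Rejecting unknown fields by default means a new form field cannot silently bypass validation just because nobody wrote a rule for it.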

Identifying Threats to Wireless Networks A few threats specific to wireless networks are:
o Eavesdropping attacks: The hacker attempts to capture traffic while it is being transmitted from the wireless computer to the wireless access point (WAP).
o Masquerading: Here, the hacker masquerades as an authorized wireless user to access network resources or services.
o Denial of service (DoS) attacks: The network attacker attempts to prevent authorized wireless users from accessing network resources by using a transmitter to jam wireless frequencies.
o Man-in-the-middle attacks: If an attacker successfully launches a man-in-the-middle attack, the attacker may be able to replay and modify wireless communications.
o Attacks on wireless clients: The attacker launches a network attack against the actual wireless computer that is connected to an untrusted wireless network.

To protect wireless networks from network attacks, use the following strategies:
o Require all wireless communications to be authenticated and encrypted. The common technologies used to protect wireless networks from security threats are Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), and IEEE 802.1X authentication.
o Regularly apply all firmware updates to wireless devices.
o Place the wireless network in a wireless demilitarized zone (WDMZ). A router or firewall should isolate the private corporate network from the WDMZ. DHCP should not be used in the wireless demilitarized zone.
o To ensure a high level of wireless security, wireless devices should support 802.1X authentication using Extensible Authentication Protocol (EAP) authentication and Temporal Key Integrity Protocol (TKIP).
o Use IPSec to secure communication between the AP and the RADIUS server.
o The default administrative password that manages the AP should be changed to a complex, strong password.
o The SSID should not contain the name of the company, the address of the company, or any other identifying information.
o Do not utilize shared key authentication, because it can lead to the compromise of the WEP keys.
o To protect the network from site survey mechanisms, disable SSID broadcasts.

Determining Security Requirements for Different Data Types When determining security requirements for different data types, it is often helpful to categorize data as follows:
o Public data: This category includes all data that is already publicly available on the company's website or in news bulletins. Because the data is already publicly available, little risk is typically associated with it being stolen; you do, however, need to maintain and ensure the integrity of public data.
o Private data: Data in this category is usually well known within an organization's environment but is not well known to the public. A typical example is data on the corporate intranet.
o Confidential data: Data in this category, such as private customer information, should be protected from unauthorized access. The organization would almost always suffer some sort of loss if confidential data were intercepted.
o Secret data: This is data that is even more confidential and sensitive in nature than confidential data. Secret data consists of trade secrets, new product and business strategy information, and patent information, and should have the highest levels of security.

Creating an Incident Response Plan The term incident response refers to planned actions taken in response to a network attack or any similar event that affects systems, networks, and company data. An incident response plan outlines the response procedures that should take place when a network is being attacked or security is being compromised, and should help an organization deal with the incident in an orderly manner. Reacting to network attacks by following a planned approach defined in a security policy is the better approach. These security policies should clearly define the following:
o The response to follow for each incident type.
o The individual(s) responsible for dealing with these incidents.
o The escalation procedures that should be followed.

An incident response plan can be divided into the following four steps:
o Response: Determine how network attacks and security breaches will be dealt with.
o Investigation: Determine how the attack occurred, why the specific attack occurred, and the extent of the attack.
o Restoration: All infected systems should be taken offline and then restored from a clean backup.
o Reporting: The network attack or security breach should be reported to the appropriate authorities.

Before attempting to determine the existing state of a machine that is being attacked, it is recommended that you first record the information listed here:
o The name of the machine.
o The IP address of the machine.
o The installed operating system, operating system version, and installed service packs.
o All running processes and services.
o All parties that are dependent on the server; these are the individuals who need to be informed of the current situation.
o The following valuable log information:
o Application event log information
o System event log information
o Security event log information
o All other machine-specific event logs, such as DNS logs, DHCP logs, or File Replication logs
o All information that indicates malicious activity, including all files that have been modified, corrupted, or deleted, and all unauthorized processes running.
o The source of the network attack, if it can be identified.
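The first items on the checklist above (machine name, IP addresses, operating system) can be captured automatically at the start of a response. This is a minimal sketch using Python's standard library; event-log collection is platform specific and omitted here.

```python
import datetime
import json
import platform
import socket

def snapshot() -> dict:
    """Record baseline facts about this machine for an incident report."""
    hostname = socket.gethostname()
    try:
        # May fail on hosts with no resolvable name; record an empty
        # list rather than aborting the response.
        ip_addresses = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        ip_addresses = []
    return {
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "machine_name": hostname,
        "operating_system": f"{platform.system()} {platform.release()}",
        "ip_addresses": ip_addresses,
    }

print(json.dumps(snapshot(), indent=2))
```

Writing the snapshot to timestamped JSON gives investigators a fixed record of the machine's identity before any remediation changes its state.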

In This Chapter
o An explanation of attacker methodology
o Descriptions of common attacks
o How to categorize threats
o How to identify and counter threats at the network, host, and application levels

Overview When you incorporate security features into your application's design, implementation, and deployment, it helps to have a good understanding of how attackers think. By thinking like attackers and being aware of their likely tactics, you can be more effective when applying countermeasures. This chapter describes the classic attacker methodology and profiles the anatomy of a typical attack. It analyzes Web application security from the perspectives of threats, countermeasures, vulnerabilities, and attacks. The following set of core terms is defined to avoid confusion and to ensure the terms are used in the correct context:
o Asset. A resource of value, such as the data in a database or on the file system, or a system resource.
o Threat. A potential occurrence, malicious or otherwise, that may harm an asset.
o Vulnerability. A weakness that makes a threat possible.
o Attack (or exploit). An action taken to harm an asset.
o Countermeasure. A safeguard that addresses a threat and mitigates risk.

This chapter also identifies a set of common network, host, and application level threats, and the recommended countermeasures to address each one. The chapter does not contain an exhaustive list of threats, but it does highlight many top threats. With this information and knowledge of how an attacker works, you will be able to identify additional threats. You need to know the threats that are most likely to impact your system to be able to build effective threat models. These threat models are the subject of Chapter 3, "Threat Modeling."
How to Use This Chapter The following are recommendations on how to use this chapter:
o Become familiar with the specific threats that affect the network, host, and application. The threats are unique for the various parts of your system, although the attacker's goals may be the same.
o Use the threats to identify risk. Then create a plan to counter those threats.
o Apply countermeasures to address vulnerabilities. Countermeasures are summarized in this chapter. Use Part III, "Building Secure Web Applications," and Part IV, "Securing Your Network, Host, and Application," of this guide for countermeasure implementation details.
o When you design, build, and secure new systems, keep the threats in this chapter in mind. The threats exist regardless of the platform or technologies that you use.

Anatomy of an Attack By understanding the basic approach used by attackers to target your Web application, you will be better equipped to take defensive measures, because you will know what you are up against. The basic steps in attacker methodology are summarized below and illustrated in Figure 2.1:
o Survey and assess
o Exploit and penetrate
o Escalate privileges
o Maintain access
o Deny service

Figure 2.1 Basic steps of the attacker methodology
Survey and Assess Surveying and assessing the potential target are done in tandem. The first step an attacker usually takes is to survey the potential target to identify and assess its characteristics. These characteristics may include its supported services and protocols together with potential vulnerabilities and entry points. The attacker uses the information gathered in the survey and assess phase to plan an initial attack. For example, an attacker can detect a cross-site scripting (XSS) vulnerability by testing to see if any controls in a Web page echo back output.
Exploit and Penetrate Having surveyed a potential target, the next step is to exploit and penetrate. If the network and host are fully secured, your application (the front gate) becomes the next channel for attack. For an attacker, the easiest way into an application is through the same entrance that legitimate users use: for example, through the application's logon page or a page that does not require authentication.
Escalate Privileges After attackers manage to compromise an application or network, perhaps by injecting code into an application or creating an authenticated session with the Microsoft Windows 2000 operating system, they immediately attempt to escalate privileges. Specifically, they look for administration privileges provided by accounts that are members of the Administrators group. They also seek out the high level of privileges offered by the local system account. Using least privileged service accounts throughout your application is a primary defense against privilege escalation attacks. Also, many network level privilege escalation attacks require an interactive logon session.
Maintain Access Having gained access to a system, an attacker takes steps to make future access easier and to cover his or her tracks.
Common approaches for making future access easier include planting back-door programs or using an existing account that lacks strong protection. Covering tracks typically involves clearing logs and hiding tools. As such, audit logs are a primary target for the attacker. Log files should be secured, and they should be analyzed on a regular basis. Log file analysis can often uncover the early signs of an attempted break-in before damage is done.

Deny Service

Attackers who cannot gain access often mount a denial of service attack to prevent others from using the application. For other attackers, the denial of service option is their goal from the outset. An example is the SYN flood attack, where the attacker uses a program to send a flood of TCP SYN requests to fill the pending connection queue on the server. This prevents other users from establishing network connections.

Understanding Threat Categories

While there are many variations of specific attacks and attack techniques, it is useful to think about threats in terms of what the attacker is trying to achieve. This shifts your focus from the identification of every specific attack (which is really just a means to an end) to the end results of possible attacks.

STRIDE

Threats faced by the application can be categorized based on the goals and purposes of the attacks. A working knowledge of these categories of threats can help you organize a security strategy so that you have planned responses to threats. STRIDE is the acronym used at Microsoft to categorize different threat types. STRIDE stands for:

Spoofing. Spoofing is attempting to gain access to a system by using a false identity. This can be accomplished using stolen user credentials or a false IP address. After the attacker successfully gains access as a legitimate user or host, elevation of privileges or abuse using authorization can begin.

Tampering. Tampering is the unauthorized modification of data, for example as it flows over a network between two computers.

Repudiation. Repudiation is the ability of users (legitimate or otherwise) to deny that they performed specific actions or transactions. Without adequate auditing, repudiation attacks are difficult to prove.

Information disclosure. Information disclosure is the unwanted exposure of private data.
For example, a user views the contents of a table or file he or she is not authorized to open, or monitors data passed in plaintext over a network. Some examples of information disclosure vulnerabilities include the use of hidden form fields, comments embedded in Web pages that contain database connection strings and connection details, and weak exception handling that can lead to internal system level details being revealed to the client. Any of this information can be very useful to the attacker.

Denial of service. Denial of service is the process of making a system or application unavailable. For example, a denial of service attack might be accomplished by bombarding a server with requests to consume all available system resources or by passing it malformed input data that can crash an application process.

Elevation of privilege. Elevation of privilege occurs when a user with limited privileges assumes the identity of a privileged user to gain privileged access to an application. For example, an attacker with limited privileges might elevate his or her privilege level to compromise and take control of a highly privileged and trusted process or account.

STRIDE Threats and Countermeasures

Each threat category described by STRIDE has a corresponding set of countermeasure techniques that should be used to reduce risk. These are summarized in Table 2.1. The appropriate countermeasure depends upon the specific attack. More threats, attacks, and countermeasures that apply at the network, host, and application levels are presented later in this chapter.

Table 2.1 STRIDE Threats and Countermeasures

Spoofing user identity: Use strong authentication. Do not store secrets (for example, passwords) in plaintext. Do not pass credentials in plaintext over the wire. Protect authentication cookies with Secure Sockets Layer (SSL).

Tampering with data: Use data hashing and signing. Use digital signatures. Use strong authorization. Use tamper-resistant protocols across communication links. Secure communication links with protocols that provide message integrity.

Repudiation: Create secure audit trails. Use digital signatures.

Information disclosure: Use strong authorization. Use strong encryption. Secure communication links with protocols that provide message confidentiality. Do not store secrets (for example, passwords) in plaintext.

Denial of service: Use resource and bandwidth throttling techniques. Validate and filter input.

Elevation of privilege: Follow the principle of least privilege and use least privileged service accounts to run processes and access resources.
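During a threat-modeling review, the Table 2.1 mapping can be turned into a per-threat checklist. The following is a minimal Python sketch; the dictionary contents mirror Table 2.1, while the structure and helper function are purely illustrative, not part of the guide's methodology:

```python
# Illustrative lookup: STRIDE category -> countermeasures from Table 2.1.
STRIDE_COUNTERMEASURES = {
    "Spoofing user identity": [
        "Use strong authentication.",
        "Do not store secrets (for example, passwords) in plaintext.",
        "Do not pass credentials in plaintext over the wire.",
        "Protect authentication cookies with SSL.",
    ],
    "Tampering with data": [
        "Use data hashing and signing.",
        "Use digital signatures.",
        "Use strong authorization.",
        "Use tamper-resistant protocols across communication links.",
        "Secure communication links with protocols that provide message integrity.",
    ],
    "Repudiation": [
        "Create secure audit trails.",
        "Use digital signatures.",
    ],
    "Information disclosure": [
        "Use strong authorization.",
        "Use strong encryption.",
        "Secure communication links with protocols that provide message confidentiality.",
        "Do not store secrets (for example, passwords) in plaintext.",
    ],
    "Denial of service": [
        "Use resource and bandwidth throttling techniques.",
        "Validate and filter input.",
    ],
    "Elevation of privilege": [
        "Follow the principle of least privilege.",
        "Use least privileged service accounts to run processes and access resources.",
    ],
}

def checklist(threat_category: str) -> list[str]:
    """Return the review checklist for one STRIDE category."""
    return STRIDE_COUNTERMEASURES[threat_category]
```

A reviewer can then walk each identified threat through `checklist(...)` and record which countermeasures the design already applies.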

Network Threats and Countermeasures

The primary components that make up your network infrastructure are routers, firewalls, and switches. They act as the gatekeepers guarding your servers and applications from attacks and intrusions. An attacker may exploit poorly configured network devices. Common vulnerabilities include weak default installation settings, wide open access controls, and devices lacking the latest security patches. Top network level threats include:

Information gathering
Sniffing
Spoofing
Session hijacking
Denial of service

Information Gathering

Network devices can be discovered and profiled in much the same way as other types of systems. Attackers usually start with port scanning. After they identify open ports, they use banner grabbing and enumeration to detect device types and to determine operating system and application versions. Armed with this information, an attacker can attack known vulnerabilities that may not be updated with security patches. Countermeasures to prevent information gathering include:

Configure routers to restrict their responses to footprinting requests.
Configure operating systems that host network software (for example, software firewalls) to prevent footprinting by disabling unused protocols and unnecessary ports.

Sniffing

Sniffing, or eavesdropping, is the act of monitoring traffic on the network for data such as plaintext passwords or configuration information. With a simple packet sniffer, an attacker can easily read all plaintext traffic. Also, attackers can crack packets encrypted by lightweight hashing algorithms and can decipher the payload that you considered to be safe. Sniffing packets requires a packet sniffer in the path of the server/client communication. Countermeasures to help prevent sniffing include:

Use strong physical security and proper segmenting of the network. This is the first step in preventing traffic from being collected locally.
Encrypt communication fully, including authentication credentials. This prevents sniffed packets from being usable to an attacker. SSL and IPSec (Internet Protocol Security) are examples of encryption solutions.
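The encryption countermeasure means the whole channel, not just the password, is unreadable on the wire. As a minimal sketch of the idea, Python's standard ssl module can wrap a plain TCP connection in TLS (the host name and port here are placeholders, and error handling is omitted):

```python
import socket
import ssl

def open_encrypted_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a TCP connection in TLS so credentials and data never cross
    the wire in plaintext; the server certificate and host name are verified."""
    # create_default_context() enables certificate and host name checking.
    context = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

A sniffer in the path of this connection sees only the TLS handshake and ciphertext, so captured packets are not usable to the attacker.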

Spoofing

Spoofing is a means to hide one's true identity on the network. To create a spoofed identity, an attacker uses a fake source address that does not represent the actual address of the packet. Spoofing may be used to hide the original source of an attack or to work around network access control lists (ACLs) that are in place to limit host access based on source address rules. Although carefully crafted spoofed packets may never be traced back to the original sender, a combination of filtering rules prevents spoofed packets from originating from your network and allows you to block obviously spoofed packets. Countermeasures to prevent spoofing include:

Filter incoming packets that appear to come from an internal IP address at your perimeter.
Filter outgoing packets that appear to originate from an invalid local IP address.

Session Hijacking

Also known as man in the middle attacks, session hijacking deceives a server or a client into accepting the upstream host as the actual legitimate host. Instead, the upstream host is an attacker's host that is manipulating the network so the attacker's host appears to be the desired destination. Countermeasures to help prevent session hijacking include:

Use encrypted session negotiation.
Use encrypted communication channels.
Stay informed of platform patches to fix TCP/IP vulnerabilities, such as predictable packet sequences.

Denial of Service

Denial of service denies legitimate users access to a server or services. The SYN flood attack is a common example of a network level denial of service attack. It is easy to launch and difficult to track. The aim of the attack is to send more requests to a server than it can handle. The attack exploits a potential vulnerability in the TCP/IP connection establishment mechanism and floods the server's pending connection queue. Countermeasures to prevent denial of service include:

Apply the latest service packs.
Harden the TCP/IP stack by applying the appropriate registry settings to increase the size of the TCP connection queue, decrease the connection establishment period, and employ dynamic backlog mechanisms to ensure that the connection queue is never exhausted.
Use a network Intrusion Detection System (IDS) because these can automatically detect and respond to SYN attacks.

Host Threats and Countermeasures

Host threats are directed at the system software upon which your applications are built. This includes Windows 2000, Microsoft Windows Server 2003, Internet Information Services (IIS), the .NET Framework, and SQL Server, depending upon the specific server role. Top host level threats include:

Viruses, Trojan horses, and worms
Footprinting
Profiling
Password cracking
Denial of service
Arbitrary code execution
Unauthorized access

Viruses, Trojan Horses, and Worms

A virus is a program that is designed to perform malicious acts and cause disruption to your operating system or applications. A Trojan horse resembles a virus except that the malicious code is contained inside what appears to be a harmless data file or executable program. A worm is similar to a Trojan horse except that it self-replicates from one server to another. Worms are difficult to detect because they do not regularly create files that can be seen. They are often noticed only when they begin to consume system resources because the system slows down or the execution of other programs halts. The Code Red worm is one of the most notorious to afflict IIS; it relied upon a buffer overflow vulnerability in a particular ISAPI filter. Although these three threats are actually attacks, together they pose a significant threat to Web applications, the hosts these applications live on, and the network used to deliver these applications. The success of these attacks on any system is possible through many vulnerabilities such as weak defaults, software bugs, user error, and inherent vulnerabilities in Internet protocols. Countermeasures that you can use against viruses, Trojan horses, and worms include:

Stay current with the latest operating system service packs and software patches.
Block all unnecessary ports at the firewall and host.
Disable unused functionality including protocols and services.
Harden weak, default configuration settings.

Footprinting

Examples of footprinting are port scans, ping sweeps, and NetBIOS enumeration, all of which can be used by attackers to glean valuable system-level information to help prepare for more significant attacks. The type of information potentially revealed by footprinting includes account details, operating system and other software versions, server names, and database schema details. Countermeasures to help prevent footprinting include:

Disable unnecessary protocols.
Lock down ports with the appropriate firewall configuration.
Use TCP/IP and IPSec filters for defense in depth.
Configure IIS to prevent information disclosure through banner grabbing.
Use an IDS that can be configured to pick up footprinting patterns and reject suspicious traffic.

Password Cracking

If the attacker cannot establish an anonymous connection with the server, he or she will try to establish an authenticated connection. For this, the attacker must know a valid username and password combination. If you use default account names, you are giving the attacker a head start. Then the attacker only has to crack the account's password. The use of blank or weak passwords makes the attacker's job even easier. Countermeasures to help prevent password cracking include:

Use strong passwords for all account types.
Apply lockout policies to end-user accounts to limit the number of retry attempts that can be used to guess the password.
Do not use default account names, and rename standard accounts such as the administrator's account and the anonymous Internet user account used by many Web applications.
Audit failed logins for patterns of password hacking attempts.
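Where the platform does not enforce a lockout policy for you, the application can approximate one. The following is a minimal in-memory Python sketch; the class name and threshold are illustrative, and a production system would persist the counters, add lockout expiry, and log the failures for auditing:

```python
class LockoutPolicy:
    """Track consecutive failed logons per account and lock the account
    after too many retries, limiting password-guessing attempts."""

    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failures = {}  # account name -> consecutive failed attempts

    def record_failure(self, account: str) -> None:
        self.failures[account] = self.failures.get(account, 0) + 1

    def record_success(self, account: str) -> None:
        self.failures.pop(account, None)  # reset the counter on success

    def is_locked(self, account: str) -> bool:
        return self.failures.get(account, 0) >= self.max_attempts
```

The logon handler checks `is_locked(account)` before verifying the password and calls `record_failure` or `record_success` afterward.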

Denial of Service

Denial of service can be attained by many methods aimed at several targets within your infrastructure. At the host, an attacker can disrupt service by brute force against your application, or an attacker may know of a vulnerability that exists in the service your application is hosted in or in the operating system that runs your server. Countermeasures to help prevent denial of service include:

Configure your applications, services, and operating system with denial of service in mind.
Stay current with patches and security updates.
Harden the TCP/IP stack against denial of service.
Make sure your account lockout policies cannot be exploited to lock out well known service accounts.
Make sure your application is capable of handling high volumes of traffic and that thresholds are in place to handle abnormally high loads.
Review your application's failover functionality.
Use an IDS that can detect potential denial of service attacks.

Arbitrary Code Execution

If an attacker can execute malicious code on your server, the attacker can either compromise server resources or mount further attacks against downstream systems. The risks posed by arbitrary code execution increase if the server process under which the attacker's code runs is over-privileged. Common vulnerabilities include weak IIS configuration and unpatched servers that allow path traversal and buffer overflow attacks, both of which can lead to arbitrary code execution. Countermeasures to help prevent arbitrary code execution include:

Configure IIS to reject URLs with "../" to prevent path traversal.
Lock down system commands and utilities with restricted ACLs.
Stay current with patches and updates to ensure that newly discovered buffer overflows are speedily patched.

Unauthorized Access

Inadequate access controls could allow an unauthorized user to access restricted information or perform restricted operations. Common vulnerabilities include weak IIS Web access controls, including Web permissions, and weak NTFS permissions. Countermeasures to help prevent unauthorized access include:

Configure secure Web permissions.
Lock down files and folders with restricted NTFS permissions.
Use .NET Framework access control mechanisms within your ASP.NET applications, including URL authorization and principal permission demands.

Application Threats and Countermeasures

A good way to analyze application-level threats is to organize them by application vulnerability category. The various categories used in the subsequent sections of this chapter and throughout the guide, together with the main threats to your application, are summarized in Table 2.2.

Table 2.2 Threats by Application Vulnerability Category

Input validation: Buffer overflow; cross-site scripting; SQL injection; canonicalization
Authentication: Network eavesdropping; brute force attacks; dictionary attacks; cookie replay; credential theft
Authorization: Elevation of privilege; disclosure of confidential data; data tampering; luring attacks
Configuration management: Unauthorized access to administration interfaces; unauthorized access to configuration stores; retrieval of clear text configuration data; lack of individual accountability; over-privileged process and service accounts
Sensitive data: Access sensitive data in storage; network eavesdropping; data tampering
Session management: Session hijacking; session replay; man in the middle
Cryptography: Poor key generation or key management; weak or custom encryption
Parameter manipulation: Query string manipulation; form field manipulation; cookie manipulation; HTTP header manipulation
Exception management: Information disclosure; denial of service
Auditing and logging: User denies performing an operation; attacker exploits an application without trace; attacker covers his or her tracks

Input Validation

Input validation is a security issue if an attacker discovers that your application makes unfounded assumptions about the type, length, format, or range of input data. The attacker can then supply carefully crafted input that compromises your application.

When network and host level entry points are fully secured, the public interfaces exposed by your application become the only source of attack. The input to your application is both a means to test your system and a way to execute code on an attacker's behalf. Does your application blindly trust input? If it does, your application may be susceptible to the following:

Buffer overflows
Cross-site scripting
SQL injection
Canonicalization

The following sections examine these vulnerabilities in detail, including what makes them possible.

Buffer Overflows

Buffer overflow vulnerabilities can lead to denial of service attacks or code injection. A denial of service attack causes a process crash; code injection alters the program execution address to run an attacker's injected code. The following code fragment illustrates a common example of buffer overflow vulnerability.

void SomeFunction( char *pszInput )
{
    char szBuffer[10];
    // Input is copied straight into the buffer with no bounds checking
    strcpy(szBuffer, pszInput);
    . . .
}

Managed .NET code is not susceptible to this problem because array bounds are automatically checked whenever an array is accessed. This makes the threat of buffer overflow attacks on managed code much less of an issue. It is still a concern, however, especially where managed code calls unmanaged APIs or COM objects. Countermeasures to help prevent buffer overflows include:

Perform thorough input validation. This is the first line of defense against buffer overflows. Although a bug may exist in your application that permits expected input to reach beyond the bounds of a container, unexpected input will be the primary cause of this vulnerability. Constrain input by validating it for type, length, format, and range.
When possible, limit your application's use of unmanaged code, and thoroughly inspect the unmanaged APIs to ensure that input is properly validated.
Inspect the managed code that calls the unmanaged API to ensure that only appropriate values can be passed as parameters to the unmanaged API.
Use the /GS flag to compile code developed with the Microsoft Visual C++ development system. The /GS flag causes the compiler to inject security checks into the compiled code. This is not a fail-proof solution or a replacement for your specific validation code; it does, however, protect your code from commonly known buffer overflow attacks.
For more information, see the .NET Framework Product documentation http://msdn.microsoft.com/en-us/library/8dbf701c(VS.71).aspx and Microsoft Knowledge Base article 325483 "WebCast: Compiler Security Checks: The GS compiler switch."

Example of Code Injection Through Buffer Overflows

An attacker can exploit a buffer overflow vulnerability to inject code. With this attack, a malicious user exploits an unchecked buffer in a process by supplying a carefully constructed input value that overwrites the program's stack and alters a function's return address. This causes execution to jump to the attacker's injected code. The attacker's code usually ends up running under the process security context. This emphasizes the importance of using least privileged process accounts. If the current thread is impersonating, the attacker's code ends up running under the

security context defined by the thread impersonation token. The first thing an attacker usually does is call the RevertToSelf API to revert to the process level security context that the attacker hopes has higher privileges. Make sure you validate input for type and length, especially before you call unmanaged code, because unmanaged code is particularly susceptible to buffer overflows.

Cross-Site Scripting

An XSS attack can cause arbitrary code to run in a user's browser while the browser is connected to a trusted Web site. The attack targets your application's users and not the application itself, but it uses your application as the vehicle for the attack. Because the script code is downloaded by the browser from a trusted site, the browser has no way of knowing that the code is not legitimate. Internet Explorer security zones provide no defense. Because the attacker's code has access to the cookies associated with the trusted site, which are stored on the user's local computer, a user's authentication cookies are typically the target of attack.

Example of Cross-Site Scripting

To initiate the attack, the attacker must convince the user to click on a carefully crafted hyperlink, for example, by embedding a link in an email sent to the user or by adding a malicious link to a newsgroup posting. The link points to a vulnerable page in your application that echoes the unvalidated input back to the browser in the HTML output stream. For example, consider the following two links. Here is a legitimate link:

www.yourwebapplication.com/logon.aspx?username=bob

Here is a malicious link:

www.yourwebapplication.com/logon.aspx?username=<script>alert('hacker code')</script>

If the Web application takes the query string, fails to properly validate it, and then returns it to the browser, the script code executes in the browser. The preceding example displays a harmless pop-up message.
With the appropriate script, the attacker can easily extract the user's authentication cookie, post it to his site, and subsequently make a request to the target Web site as the authenticated user. Countermeasures to prevent XSS include:

Perform thorough input validation. Your applications must ensure that input from query strings, form fields, and cookies is valid for the application. Consider all user input as possibly malicious, and filter or sanitize for the context of the downstream code. Validate all input for known valid values and then reject all other input. Use regular expressions to validate input data received via HTML form fields, cookies, and query strings.
Use HTMLEncode and URLEncode functions to encode any output that includes user input. This converts executable script into harmless HTML.
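The encoding countermeasure takes only a few lines. The guide's examples assume ASP.NET's HTMLEncode; the same idea in Python, as an illustrative sketch using the standard html module (the greeting template and function name here are hypothetical):

```python
import html

def render_greeting(username: str) -> str:
    """Encode user-supplied input before echoing it into the HTML output
    stream, so injected markup renders as inert text instead of executing."""
    return "<p>Welcome, %s</p>" % html.escape(username, quote=True)

# A malicious query-string value is neutralized rather than executed:
payload = "<script>alert('hacker code')</script>"
safe = render_greeting(payload)
```

The browser displays the literal text of the script instead of running it, because `<` and `>` are emitted as `&lt;` and `&gt;`.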

SQL Injection

A SQL injection attack exploits vulnerabilities in input validation to run arbitrary commands in the database. It can occur when your application uses input to construct dynamic SQL statements to access the database. It can also occur if your code uses stored procedures that are passed strings that contain unfiltered user input. The issue is magnified if the application uses an over-privileged account to connect to the database. In this instance, in addition to being able to retrieve, manipulate, and destroy data, it is possible to use the database server to run operating system commands and potentially compromise other servers.

Example of SQL Injection

Your application may be susceptible to SQL injection attacks when you incorporate unvalidated user input into database queries. Particularly susceptible is code that constructs dynamic SQL statements with unfiltered user input. Consider the following code:

SqlDataAdapter myCommand = new SqlDataAdapter(
    "SELECT * FROM Users WHERE UserName ='" + txtuid.Text + "'", conn);

Attackers can inject SQL by terminating the intended SQL statement with the single quote character followed by a semicolon character to begin a new command, and then executing the command of their choice. Consider the following character string entered into the txtuid field.

'; DROP TABLE Customers --

This results in the following statement being submitted to the database for execution.

SELECT * FROM Users WHERE UserName=''; DROP TABLE Customers --'

This deletes the Customers table, assuming that the application's login has sufficient permissions in the database (another reason to use a least privileged login in the database). The double dash (--) denotes a SQL comment and is used to comment out any other characters added by the programmer, such as the trailing quote.

Note: The semicolon is not actually required. SQL Server will execute two commands separated by spaces.

Other more subtle tricks can be performed. Supplying this input to the txtuid field:

' OR 1=1 --

builds this command:

SELECT * FROM Users WHERE UserName='' OR 1=1 --'

Because 1=1 is always true, the attacker retrieves every row of data from the Users table. Countermeasures to prevent SQL injection include:

Perform thorough input validation. Your application should validate its input prior to sending a request to the database.
Use parameterized stored procedures for database access to ensure that input strings are not treated as executable statements. If you cannot use stored procedures, use SQL parameters when you build SQL commands.
Use least privileged accounts to connect to the database.
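The parameterization countermeasure looks like this in practice. The guide's example uses ADO.NET; an equivalent sketch in Python against an in-memory SQLite database (the table and column names mirror the example above, but the helper itself is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT)")
conn.execute("INSERT INTO Users VALUES ('bob')")

def find_user(user_input: str):
    # The ? placeholder passes the input to the database as data, never
    # as SQL text, so "'; DROP TABLE Customers --" is just an unmatched
    # user name rather than a second command.
    cur = conn.execute("SELECT * FROM Users WHERE UserName = ?", (user_input,))
    return cur.fetchall()
```

With this version, `find_user("' OR 1=1 --")` returns no rows instead of every row, and the injection strings above cannot terminate the statement.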

Canonicalization

Different forms of input that resolve to the same standard name (the canonical name) give rise to canonicalization issues. Code is particularly susceptible to canonicalization issues if it makes security decisions based on the name of a resource that is passed to the program as input. Files, paths, and URLs are resource types that are vulnerable to canonicalization because in each case there are many different ways to represent the same name. File names are also problematic. For example, a single file could be represented as:

c:\temp\somefile.dat
somefile.dat
c:\temp\subdir\..\somefile.dat
c:\ temp\ somefile.dat
..\somefile.dat

Ideally, your code should not accept input file names. If it does, the name should be converted to its canonical form prior to making security decisions, such as whether access should be granted or denied to the specified file. Countermeasures to address canonicalization issues include:

Avoid using file names as input where possible and instead use absolute file paths that cannot be changed by the end user.
Make sure that file names are well formed (if you must accept file names as input) and validate them within the context of your application. For example, check that they are within your application's directory hierarchy.
Ensure that the character encoding is set correctly to limit how input can be represented. Check that your application's Web.config has set the requestEncoding and responseEncoding attributes on the <globalization> element.
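Canonicalizing before the security decision can be sketched in a few lines. In Python, os.path.realpath resolves "..", redundant separators, and symbolic links to a canonical absolute path; the application root and function name below are illustrative placeholders:

```python
import os.path

APP_ROOT = "/var/www/app/data"  # hypothetical application directory

def is_within_app_root(requested: str) -> bool:
    """Resolve the requested name to its canonical form first, then make
    the access decision; '..' tricks can no longer escape the hierarchy."""
    canonical = os.path.realpath(os.path.join(APP_ROOT, requested))
    return canonical == APP_ROOT or canonical.startswith(APP_ROOT + os.sep)
```

Equivalent spellings such as `subdir/../somefile.dat` resolve to the same canonical path, while `../../etc/passwd` resolves outside the application directory and is rejected.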

Authentication

Depending on your requirements, there are several available authentication mechanisms to choose from. If they are not correctly chosen and implemented, the authentication mechanism can expose vulnerabilities that attackers can exploit to gain access to your system. The top threats that exploit authentication vulnerabilities include:

Network eavesdropping
Brute force attacks
Dictionary attacks
Cookie replay attacks
Credential theft

Network Eavesdropping

If authentication credentials are passed in plaintext from client to server, an attacker armed with rudimentary network monitoring software on a host on the same network can capture traffic and obtain user names and passwords. Countermeasures to prevent network eavesdropping include:

Use authentication mechanisms that do not transmit the password over the network, such as the Kerberos protocol or Windows authentication.
Make sure passwords are encrypted (if you must transmit passwords over the network) or use an encrypted communication channel, for example with SSL.

Brute Force Attacks

Brute force attacks rely on computational power to crack hashed passwords or other secrets secured with hashing and encryption. To mitigate the risk, use strong passwords. Additionally, use hashed passwords with salt; this slows down the attacker considerably and allows sufficient time for countermeasures to be activated.

Dictionary Attacks

This attack is used to obtain passwords. Most password systems do not store plaintext passwords or encrypted passwords. They avoid encrypted passwords because a compromised key leads to the compromise of all passwords in the data store. Lost keys mean that all passwords are invalidated. Most user store implementations hold password hashes (or digests). Users are authenticated by re-computing the hash based on the user-supplied password value and comparing it against the hash value stored in the database. If an attacker manages to obtain the list of hashed passwords, a brute force attack can be used to crack the password hashes.

With the dictionary attack, an attacker uses a program to iterate through all of the words in a dictionary (or multiple dictionaries in different languages) and computes the hash for each word. The resultant hash is compared with the value in the data store. Weak passwords such as "Yankees" (a favorite team) or "Mustang" (a favorite car) will be cracked quickly. Stronger passwords such as "?You'LlNevaFiNdMeyePasSWerd!" are less likely to be cracked.

Note: Once the attacker has obtained the list of password hashes, the dictionary attack can be performed offline and does not require interaction with the application.

Countermeasures to prevent dictionary attacks include:

Use strong passwords that are complex, are not regular words, and contain a mixture of uppercase, lowercase, numeric, and special characters.
Store non-reversible password hashes in the user store. Also combine a salt value (a cryptographically strong random number) with the password hash.

For more information about storing password hashes with added salt, see Chapter 14, "Building Secure Data Access."

Cookie Replay Attacks

With this type of attack, the attacker captures the user's authentication cookie using monitoring software and replays it to the application to gain access under a false identity. Countermeasures to prevent cookie replay include:

Use an encrypted communication channel provided by SSL whenever an authentication cookie is transmitted.
Set the cookie timeout to a value that forces authentication after a relatively short time interval. Although this doesn't prevent replay attacks, it reduces the time interval in which the attacker can replay a request without being forced to re-authenticate because the session has timed out.

Credential Theft

If your application implements its own user store containing user account names and passwords, compare its security to the credential stores provided by the platform, for example, a Microsoft Active Directory directory service or Security Accounts Manager (SAM) user store. Browser history and cache also store user login information for future use. If the terminal is accessed by someone other than the user who logged on, and the same page is visited, the saved login will be available. Countermeasures to help prevent credential theft include:

Use and enforce strong passwords.
Store password verifiers in the form of one-way hashes with added salt.
Enforce account lockout for end-user accounts after a set number of retry attempts.
To counter the possibility of the browser cache allowing login access, create functionality that either allows the user to choose not to save credentials, or enforce this behavior as a default policy.

Authorization

Based on user identity and role membership, authorization to a particular resource or service is either allowed or denied. Top threats that exploit authorization vulnerabilities include:

- Elevation of privilege
- Disclosure of confidential data
- Data tampering
- Luring attacks

Elevation of Privilege

When you design an authorization model, you must consider the threat of an attacker trying to elevate privileges to a powerful account such as a member of the local administrators group or the local system account. By doing this, the attacker is able to take complete control over the application and local machine. For example, with classic ASP programming, calling the RevertToSelf API from a component might cause the executing thread to run as the local system account with the most power and privileges on the local machine. The main countermeasure that you can use to prevent elevation of privilege is to use least privileged process, service, and user accounts.

Disclosure of Confidential Data

The disclosure of confidential data can occur if sensitive data can be viewed by unauthorized users. Confidential data includes application-specific data such as credit card numbers, employee details, and financial records, together with application configuration data such as service account credentials and database connection strings. To prevent the disclosure of confidential data, you should secure it in persistent stores such as databases and configuration files, and during transit over the network. Only authenticated and authorized users should be able to access the data that is specific to them. Access to system-level configuration data should be restricted to administrators. Countermeasures to prevent disclosure of confidential data include:

- Perform role checks before allowing access to operations that could potentially reveal sensitive data.
- Use strong ACLs to secure Windows resources.
- Use standard encryption to store sensitive data in configuration files and databases.
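The "perform role checks before allowing access" countermeasure can be sketched as follows. The role store, role names, and record contents are hypothetical, introduced only for illustration; the point is that the authorization check runs before the sensitive store is touched.

```python
# Hypothetical role store and record; all names here are illustrative.
USER_ROLES = {"alice": {"payroll_admin"}, "bob": {"clerk"}}

def get_salary_record(requesting_user, employee_id):
    """Return a sensitive record only after an explicit role check."""
    # Perform the role check *before* touching the sensitive store.
    if "payroll_admin" not in USER_ROLES.get(requesting_user, set()):
        raise PermissionError("not authorized to view payroll data")
    # A real system would now query an ACL-protected database.
    return {"employee_id": employee_id, "salary": 50000}
```

Putting the check in the data-access function itself, rather than only in the user interface, means every code path that reaches the data is covered.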

Data Tampering

Data tampering refers to the unauthorized modification of data. Countermeasures to prevent data tampering include:

- Use strong access controls to protect data in persistent stores and ensure that only authorized users can access and modify the data.
- Use role-based security to differentiate between users who can view data and users who can modify data.

Luring Attacks

A luring attack occurs when an entity with few privileges is able to have an entity with more privileges perform an action on its behalf. To counter the threat, you must restrict access to trusted code with the appropriate authorization. Using .NET Framework code access security helps in this respect by authorizing calling code whenever a secure resource is accessed or a privileged operation is performed.

Configuration Management

Many applications support configuration management interfaces and functionality to allow operators and administrators to change configuration parameters, update Web site content, and perform routine maintenance. Top configuration management threats include:

- Unauthorized access to administration interfaces
- Unauthorized access to configuration stores
- Retrieval of plaintext configuration secrets
- Lack of individual accountability
- Over-privileged process and service accounts

Unauthorized Access to Administration Interfaces

Administration interfaces are often provided through additional Web pages or separate Web applications that allow administrators, operators, and content developers to manage site content and configuration. Administration interfaces such as these should be available only to restricted and authorized users. Malicious users able to access a configuration management function can potentially deface the Web site, access downstream systems and databases, or take the application out of action altogether by corrupting configuration data. Countermeasures to prevent unauthorized access to administration interfaces include:

- Minimize the number of administration interfaces.
- Use strong authentication, for example, by using certificates.
- Use strong authorization with multiple gatekeepers.
- Consider supporting only local administration. If remote administration is absolutely essential, use encrypted channels, for example, VPN technology or SSL, because of the sensitive nature of the data passed over administrative interfaces. To further reduce risk, also consider using IPSec policies to limit remote administration to computers on the internal network.

Unauthorized Access to Configuration Stores

Because of the sensitive nature of the data maintained in configuration stores, you should ensure that the stores are adequately secured. Countermeasures to protect configuration stores include:

- Configure restricted ACLs on text-based configuration files such as Machine.config and Web.config.
- Keep custom configuration stores outside of the Web space. This removes the potential for attackers to download Web server configurations and exploit their vulnerabilities.

Retrieval of Plaintext Configuration Secrets

Restricting access to the configuration store is a must. As an important defense-in-depth mechanism, you should encrypt sensitive data such as passwords and connection strings. This helps prevent external attackers from obtaining sensitive configuration data. It also prevents rogue administrators and internal employees from obtaining sensitive details such as database connection strings and account credentials that might allow them to gain access to other systems.

Lack of Individual Accountability

A lack of auditing and logging of changes made to configuration information threatens the ability to identify when changes were made and who made them. When a breaking change is made, either by an honest operator error or by a malicious change to grant privileged access, action must first be taken to correct the change. Then apply preventive measures to stop breaking changes from being introduced in the same manner. Keep in mind that auditing and logging can be circumvented by a shared account; this applies to both administrative and user/application/service accounts. Administrative accounts must not be shared. User/application/service accounts must be assigned at a level that allows the identification of a single source of access using the account, and that contains any damage to the privileges granted to that account.

Over-privileged Application and Service Accounts

If application and service accounts are granted access to change configuration information on the system, they may be manipulated to do so by an attacker. The risk of this threat can be mitigated by adopting a policy of using least privileged service and application accounts. Be wary of granting accounts the ability to modify their own configuration information unless explicitly required by design.

Sensitive Data

Sensitive data is subject to a variety of threats. Attacks that attempt to view or modify sensitive data can target persistent data stores and networks. Top threats to sensitive data include:

- Access to sensitive data in storage
- Network eavesdropping
- Data tampering

Access to Sensitive Data in Storage

You must secure sensitive data in storage to prevent a user, malicious or otherwise, from gaining access to and reading the data. Countermeasures to protect sensitive data in storage include:

- Use restricted ACLs on the persistent data stores that contain sensitive data.
- Store the data encrypted.
- Use identity and role-based authorization to ensure that only users with the appropriate level of authority are allowed access to sensitive data.
- Use role-based security to differentiate between users who can view data and users who can modify data.

Network Eavesdropping

The HTTP data for Web applications travels across networks in plaintext and is subject to network eavesdropping attacks, where an attacker uses network monitoring software to capture and potentially modify sensitive data. Countermeasures to prevent network eavesdropping and to provide privacy include:

- Encrypt the data.
- Use an encrypted communication channel, for example, SSL.

Data Tampering

Data tampering refers to the unauthorized modification of data, often as it is passed over the network. One countermeasure to prevent data tampering is to protect sensitive data passed across the network with tamper-resistant protocols such as hashed message authentication codes (HMACs). An HMAC provides message integrity in the following way:

1. The sender uses a shared secret key to create a hash based on the message payload.
2. The sender transmits the hash along with the message payload.
3. The receiver uses the shared key to recalculate the hash based on the received message payload. The receiver then compares the new hash value with the transmitted hash value. If they are the same, the message cannot have been tampered with.
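The three steps above can be sketched with Python's standard `hmac` module. The shared key value is an illustrative placeholder; in practice it would be exchanged out of band and kept secret.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-shared-secret"  # illustrative; exchanged out of band

def send_message(payload):
    # Steps 1-2: hash the payload with the shared key; transmit both.
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def verify_message(payload, tag):
    # Step 3: recalculate the hash and compare with the transmitted one.
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

An attacker who modifies the payload in transit cannot produce a matching tag without the shared key, so the receiver's comparison fails and the message is rejected.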

Session Management

Session management for Web applications is an application layer responsibility. Session security is critical to the overall security of the application. Top session management threats include:

- Session hijacking
- Session replay
- Man in the middle

Session Hijacking

A session hijacking attack occurs when an attacker uses network monitoring software to capture the authentication token (often a cookie) used to represent a user's session with an application. With the captured cookie, the attacker can spoof the user's session and gain access to the application. The attacker has the same level of privileges as the legitimate user. Countermeasures to prevent session hijacking include:

- Use SSL to create a secure communication channel and only pass the authentication cookie over an HTTPS connection.
- Implement logout functionality to allow a user to end a session that forces authentication if another session is started.
- Make sure you limit the expiration period on the session cookie if you do not use SSL. Although this does not prevent session hijacking, it reduces the time window available to the attacker.

Session Replay

Session replay occurs when a user's session token is intercepted and submitted by an attacker to bypass the authentication mechanism. For example, if the session token is in plaintext in a cookie or URL, an attacker can sniff it. The attacker then posts a request using the hijacked session token. Countermeasures to help address the threat of session replay include:

- Re-authenticate when performing critical functions. For example, prior to performing a monetary transfer in a banking application, make the user supply the account password again.
- Expire sessions appropriately, including all cookies and session tokens.
- Create a "do not remember me" option to allow no session data to be stored on the client.

Man in the Middle Attacks

A man in the middle attack occurs when the attacker intercepts messages sent between you and your intended recipient. The attacker then changes your message and sends it to the original recipient. The recipient receives the message, sees that it came from you, and acts on it. When the recipient sends a message back to you, the attacker intercepts it, alters it, and returns it to you. You and your recipient never know that you have been attacked. Any network request involving client-server communication, including Web requests, Distributed Component Object Model (DCOM) requests, and calls to remote components and Web services, is subject to man in the middle attacks. Countermeasures to prevent man in the middle attacks include:

- Use cryptography. If you encrypt the data before transmitting it, the attacker can still intercept it but cannot read or alter it. If the attacker cannot read it, he or she cannot know which parts to alter. If the attacker blindly modifies your encrypted message, the original recipient is unable to decrypt it successfully and, as a result, knows that it has been tampered with.
- Use Hashed Message Authentication Codes (HMACs). If an attacker alters the message, the recalculation of the HMAC at the recipient fails and the data can be rejected as invalid.

Cryptography

Most applications use cryptography to protect data and to ensure it remains private and unaltered. Top threats surrounding your application's use of cryptography include:

- Poor key generation or key management
- Weak or custom encryption
- Checksum spoofing

Poor Key Generation or Key Management

Attackers can decrypt encrypted data if they have access to the encryption key or can derive it. Attackers can discover a key if keys are managed poorly or if they were generated in a non-random fashion. Countermeasures to address the threat of poor key generation and key management include:

- Use built-in encryption routines that include secure key management. The Data Protection application programming interface (DPAPI) is an example of an encryption service, provided on Windows 2000 and later operating systems, where the operating system manages the key.
- If you use an encryption mechanism that requires you to generate or manage the key yourself, use strong random key generation functions and store the key in a restricted location, for example, in a registry key secured with a restricted ACL.
- Encrypt the encryption key using DPAPI for added security.
- Expire keys regularly.

Weak or Custom Encryption

An encryption algorithm provides no security if the encryption is cracked or is vulnerable to brute force cracking. Custom algorithms are particularly vulnerable if they have not been tested. Instead, use published, well-known encryption algorithms that have withstood years of rigorous attacks and scrutiny. Countermeasures that address the vulnerabilities of weak or custom encryption include:

- Do not develop your own custom algorithms.
- Use the proven cryptographic services provided by the platform.
- Stay informed about cracked algorithms and the techniques used to crack them.

Checksum Spoofing

Do not rely on hashes to provide data integrity for messages sent over networks. Hashes such as Secure Hash Algorithm (SHA1) and Message Digest 5 (MD5) can be intercepted and changed. Consider the following base64-encoded UTF-8 message with an appended hash.

Plaintext: Place 10 orders.
Hash: T0mUNdEQh13IO9oTcaP4FYDX6pU=

If an attacker intercepts the message by monitoring the network, the attacker could update the message and recompute the hash (guessing the algorithm that you used). For example, the message could be changed to:

Plaintext: Place 100 orders.
Hash: oEDuJpv/ZtIU7BXDDNv17EAHeAU=

When recipients process the message, running the plaintext ("Place 100 orders") through the hashing algorithm, the hash they calculate will match the one the attacker supplied, so the tampering goes undetected. To counter this attack, use a MAC or HMAC. The Message Authentication Code Triple Data Encryption Standard (MACTripleDES) algorithm computes a MAC, and HMACSHA1 computes an HMAC. Both use a key to produce a checksum. With these algorithms, an attacker needs to know the key to generate a checksum that would compute correctly at the receiver.
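The difference between a bare hash and a keyed checksum can be demonstrated in a few lines of Python. The key value is an illustrative placeholder standing in for a secret the attacker does not hold.

```python
import base64
import hashlib
import hmac

original = b"Place 10 orders."
tampered = b"Place 100 orders."

def plain_hash(msg):
    # A bare hash: anyone, including the attacker, can recompute it.
    return base64.b64encode(hashlib.sha1(msg).digest())

KEY = b"key-known-only-to-sender-and-receiver"  # illustrative value

def keyed_tag(msg):
    # A keyed checksum: computing a valid tag requires knowledge of KEY.
    return hmac.new(KEY, msg, hashlib.sha1).digest()
```

An attacker who changes the message can always append a freshly computed plain hash, but cannot produce a keyed tag that the receiver's recomputation will accept.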

Parameter Manipulation

Parameter manipulation attacks are a class of attack that relies on the modification of the parameter data sent between the client and Web application. This includes query strings, form fields, cookies, and HTTP headers. Top parameter manipulation threats include:

- Query string manipulation
- Form field manipulation
- Cookie manipulation
- HTTP header manipulation

Query String Manipulation

Users can easily manipulate the query string values passed by HTTP GET from client to server because they are displayed in the browser's URL address bar. If your application relies on query string values to make security decisions, or if the values represent sensitive data such as monetary amounts, the application is vulnerable to attack. Countermeasures to address the threat of query string manipulation include:

- Avoid using query string parameters that contain sensitive data or data that can influence the security logic on the server. Instead, use a session identifier to identify the client and store sensitive items in the session store on the server.
- Choose HTTP POST instead of GET to submit forms.
- Encrypt query string parameters.

Form Field Manipulation

The values of HTML form fields are sent in plaintext to the server using the HTTP POST protocol. This may include visible and hidden form fields. Form fields of any type can be easily modified and client-side validation routines bypassed. As a result, applications that rely on form field input values to make security decisions on the server are vulnerable to attack. To counter the threat of form field manipulation, instead of using hidden form fields, use session identifiers to reference state maintained in the state store on the server.

Cookie Manipulation

Cookies are susceptible to modification by the client. This is true of both persistent and memory-resident cookies. A number of tools are available to help an attacker modify the contents of a memory-resident cookie. Cookie manipulation refers to the modification of a cookie, usually to gain unauthorized access to a Web site. While SSL protects cookies over the network, it does not prevent them from being modified on the client computer. To counter the threat of cookie manipulation, encrypt cookies and use an HMAC with them.

HTTP Header Manipulation

HTTP headers pass information between the client and the server. The client constructs request headers while the server constructs response headers. If your application relies on request headers to make a decision, it is vulnerable to attack. Do not base your security decisions on HTTP headers. For example, do not trust the HTTP Referer header to determine where a client came from, because it is easily falsified.
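Attaching an HMAC to a cookie, as recommended above for cookie manipulation, can be sketched as follows. The server key and cookie contents are illustrative; a real deployment would also encrypt the value if it is sensitive.

```python
import base64
import hashlib
import hmac

SERVER_KEY = b"server-side-secret"  # illustrative; never sent to the client

def make_cookie(value):
    """Append an HMAC so client-side tampering is detectable."""
    mac = hmac.new(SERVER_KEY, value.encode("utf-8"), hashlib.sha256).digest()
    return value + "." + base64.urlsafe_b64encode(mac).decode("ascii")

def read_cookie(cookie):
    """Return the cookie value if the HMAC verifies, else None."""
    value, _, sig = cookie.rpartition(".")
    mac = hmac.new(SERVER_KEY, value.encode("utf-8"), hashlib.sha256).digest()
    expected = base64.urlsafe_b64encode(mac).decode("ascii")
    return value if hmac.compare_digest(expected, sig) else None
```

A client can still read and rewrite the cookie on disk, but any rewrite invalidates the signature, so the server simply discards the tampered value.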

Exception Management

Exceptions that are allowed to propagate to the client can reveal internal implementation details that make no sense to the end user but are useful to attackers. Applications that do not use exception handling, or implement it poorly, are also subject to denial of service attacks. Top exception handling threats include:

- Attacker reveals implementation details
- Denial of service

Attacker Reveals Implementation Details

One of the important features of the .NET Framework is that it provides rich exception details that are invaluable to developers. If the same information is allowed to fall into the hands of an attacker, it can greatly help the attacker exploit potential vulnerabilities and plan future attacks. The type of information that could be returned includes platform versions, server names, SQL command strings, and database connection strings. Countermeasures to help prevent internal implementation details from being revealed to the client include:

- Use exception handling throughout your application's code base.
- Handle and log exceptions that are allowed to propagate to the application boundary.
- Return generic, harmless error messages to the client.
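The pattern of logging rich detail server-side while returning a generic message to the client can be sketched as follows. The handler name and messages are illustrative, not from the source.

```python
import logging

logger = logging.getLogger("app")

def handle_request(raw_quantity):
    """Process input; log rich detail server-side, return generic errors."""
    try:
        quantity = int(raw_quantity)
        return "Processed order for %d items" % quantity
    except Exception:
        # Full stack trace and input go to the server log for developers...
        logger.exception("failed to process quantity %r", raw_quantity)
        # ...but the client sees only a generic, harmless message.
        return "An error occurred. Please try again later."
```

The attacker probing with malformed input gets no stack trace, type name, or connection string; the operations team still has every detail in the log.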

Denial of Service

Attackers will probe a Web application, usually by passing deliberately malformed input. They often have two goals in mind. The first is to cause exceptions that reveal useful information, and the second is to crash the Web application process. This can occur if exceptions are not properly caught and handled. Countermeasures to help prevent application-level denial of service include:

- Thoroughly validate all input data at the server.
- Use exception handling throughout your application's code base.

Auditing and Logging

Auditing and logging should be used to help detect suspicious activity such as footprinting or possible password cracking attempts before an exploit actually occurs. They can also help deal with the threat of repudiation: it is much harder for a user to deny performing an operation if a series of synchronized log entries on multiple servers indicate that the user performed that transaction. Top auditing and logging related threats include:

- User denies performing an operation
- Attackers exploit an application without leaving a trace
- Attackers cover their tracks

User Denies Performing an Operation

The issue of repudiation is concerned with a user denying that he or she performed an action or initiated a transaction. You need defense mechanisms in place to ensure that all user activity can be tracked and recorded. Countermeasures to help prevent repudiation threats include:

- Audit and log activity on the Web server and database server, and on the application server as well, if you use one.
- Log key events such as transactions and login and logout events.
- Do not use shared accounts, since the original source of an action cannot then be determined.

Attackers Exploit an Application Without Leaving a Trace

System and application-level auditing is required to ensure that suspicious activity does not go undetected. Countermeasures to detect suspicious activity include:

- Log critical application-level operations.
- Use platform-level auditing to audit login and logout events, access to the file system, and failed object access attempts.
- Back up log files and regularly analyze them for signs of suspicious activity.

Attackers Cover Their Tracks

Your log files must be well protected to ensure that attackers are not able to cover their tracks. Countermeasures to help prevent attackers from covering their tracks include:

- Secure log files by using restricted ACLs.
- Relocate system log files away from their default locations.

Summary

By being aware of the typical approach used by attackers as well as their goals, you can be more effective when applying countermeasures. It also helps to use a goal-based approach when considering and identifying threats, and to use the STRIDE model to categorize threats based on the goals of the attacker, for example, to spoof identity, tamper with data, deny service, elevate privileges, and so on. This allows you to focus more on the general approaches that should be used for risk mitigation, rather than on the identification of every possible attack, which can be a time-consuming and potentially fruitless exercise. This chapter has shown you the top threats that have the potential to compromise your network, host infrastructure, and applications. Knowledge of these threats, together with the appropriate countermeasures, provides essential information for the threat modeling process. It enables you to identify the threats that are specific to your particular scenario and prioritize them based on the degree of risk they pose to your system. This structured process for identifying and prioritizing threats is referred to as threat modeling. For more information, see Chapter 3, "Threat Modeling."

Common Security Vulnerabilities in e-commerce Systems
Updated: 02 Nov 2010

by K. K. Mookhey

1. Introduction

The tremendous increase in online transactions has been accompanied by an equal rise in the number and type of attacks against the security of online payment systems. Some of these attacks have utilized vulnerabilities that have been published in reusable third-party components utilized by websites, such as shopping cart software. Other attacks have used vulnerabilities that are common in any web application, such as SQL injection or cross-site scripting. This article discusses these vulnerabilities with examples, either from the set of known vulnerabilities or from those discovered during the author's penetration testing assignments. The different types of vulnerabilities discussed here are SQL injection, cross-site scripting, information disclosure, path disclosure, price manipulation, and buffer overflows. Successful exploitation of these vulnerabilities can lead to a wide range of results. Information and path disclosure vulnerabilities will typically act as initial stages leading to further exploitation. SQL injection or price manipulation attacks could cripple the website, compromise confidentiality, and in the worst cases cause the e-commerce business to shut down completely. Wherever examples of such vulnerabilities are given in advisories published by Bugtraq, we have given the Bugtraq ID in square brackets. Details of a vulnerability may be viewed by navigating to http://www.securityfocus.com/bid/<bid_number>.

2. Vulnerabilities

2.1 Background

There are a number of reasons why security vulnerabilities arise in shopping cart and online payment systems. The reasons are not exclusive to these systems, but their impact becomes much greater simply because of the wide exposure that an online website has, and because of the financial nature of the transactions. One of the main reasons for such vulnerabilities is the fact that web application developers are often not very well versed with secure programming techniques.
As a result, security of the application is not necessarily one of the design goals. This is exacerbated by the rush to meet deadlines in the fast-moving e-commerce world. Even one day's delay in publishing a brand new feature on your website could allow a competitor to steal a march over you. We've typically found this in cases where e-commerce sites need to add functionality rapidly to deal with a sudden change in the business environment or simply to stay ahead of the competition. In such a scenario, the attitude is to get the functionality online; security can always be taken care of later. Another reason why security vulnerabilities appear is the inherent complexity of most online systems. Nowadays, users are placing very demanding requirements on their e-commerce providers, and this requires complex designs and programming logic. In a number of cases, we've found that e-commerce sites tout their 128-bit SSL certificates as proof that their sites are well secured. The gullibility of customers to believe in this has reduced over the past few years, but even now there are thousands of web sites displaying Verisign or Thawte certificate icons as proof of their security. The following sections look at common security vulnerabilities that have been discovered in shopping cart and online payment systems.

2.2 SQL Injection

SQL injection refers to the insertion of SQL meta-characters in user input, such that the attacker's queries are executed by the back-end database. Typically, attackers will first determine if a site is vulnerable to such an attack by sending in the single-quote (') character. The results of an SQL injection attack on a vulnerable site may range from a detailed error message that discloses the back-end technology being used, to access to restricted areas of the site achieved by manipulating the query into an always-true Boolean value, to the execution of operating system commands.
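The standard defense against the injection described above is to pass user input through parameterized placeholders rather than string concatenation. The following is a minimal sketch using Python's built-in `sqlite3` module with an illustrative in-memory user table; the article's vulnerable sites were ASP/MS SQL, so the technique, not the stack, is the point.

```python
import sqlite3

# In-memory database standing in for the back-end store (illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, password):
    # Placeholders pass values out-of-band, so quotes stay inert data
    # instead of becoming part of the SQL statement.
    row = conn.execute(
        "SELECT name FROM users WHERE name = ? AND password = ?",
        (name, password),
    ).fetchone()
    return row is not None
```

With this scheme, the classic `' OR '1'='1` payload is compared literally against the stored password and simply fails, rather than rewriting the query into an always-true condition.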
SQL injection techniques differ depending on the type of database being used. For instance, SQL injection on an Oracle database is done primarily using the UNION keyword [ref 1] and is much more difficult than on MS SQL Server, where multiple queries can be executed by separating them with the semi-colon [ref 2]. In its default configuration, MS SQL Server runs with Local System privileges and has the 'xp_cmdshell' extended procedure, which allows execution of operating system commands. The most publicized occurrences of this vulnerability were on the e-commerce sites of Guess.com [ref 3] and PetCo.com [ref 4]. Jeremiah Jacks, a 20-year-old programmer in Orange County, California, discovered that it was possible to ferret out highly sensitive data such as credit card numbers and transaction details from these and a number of other sites using specially crafted URLs containing SQL meta-characters. SQL injection vulnerabilities have also been discovered in shopping cart software such as the VP-ASP Shopping Cart [bid 9967], IGeneric Free Shopping Cart [bid 9771], and Web Merchant Services Storefront Shopping Cart [bid 9301]. Of these, the vulnerability in the Storefront Shopping Cart occurred in its login.asp page and could potentially allow an attacker to execute malicious database queries without needing to authenticate to the web site.

2.3 Price Manipulation

This is a vulnerability that is almost completely unique to online shopping carts and payment gateways. In the most common occurrence of this vulnerability, the total payable price of the purchased goods is stored in a hidden HTML field of a dynamically generated web page. An attacker can use a web application proxy such as Achilles [ref 5] to simply modify the amount that is payable when this information flows from the user's browser to the web server. Shown below is a snapshot of just such a vulnerability, discovered in one of the author's penetration testing assignments.

Figure 1: Achilles web proxy

The final payable price (currency=Rs&amount=879.00) can be manipulated by the attacker to a value of his choice. This information is eventually sent to the payment gateway with whom the online merchant has partnered. If the volume of transactions is very high, the price manipulation may go completely unnoticed, or may be discovered too late. Repeated attacks of this nature could potentially cripple the viability of the online merchant.
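The fix is to treat any client-submitted amount as untrusted and recompute the price server-side. A minimal sketch, with a hypothetical catalog whose SKUs and prices are purely illustrative:

```python
# Hypothetical server-side catalog; SKUs and prices are illustrative.
CATALOG = {"sku-1001": 879.00, "sku-1002": 120.50}

def checkout(form_fields):
    """Compute the payable amount from the server-side catalog only."""
    item_id = form_fields["item_id"]
    if item_id not in CATALOG:
        raise ValueError("unknown item")
    # Any 'amount' field the client submitted is deliberately ignored,
    # so rewriting the hidden field in a proxy changes nothing.
    return CATALOG[item_id]
```

Because the authoritative price never leaves the server, an Achilles-style proxy can rewrite the hidden field all it likes; the payment gateway still receives the catalog price.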

Similar vulnerabilities have also been found in third-party software, such as the 3D3 ShopFactory Shopping Cart [bid 6296], where price and item-related information was stored in client-side cookies that could easily be manipulated by an attacker. Similarly, Smartwin Technology's CyberOffice Shopping Cart 2.0 could be attacked by downloading the order form locally and resubmitting it to the target server with the hidden form fields modified to arbitrary values [bid 1733].

2.4 Buffer Overflows

Buffer overflow vulnerabilities are not very common in shopping cart or other web applications written in Perl, PHP, ASP, etc. However, sending a large number of bytes to web applications that are not geared to deal with them can have unexpected consequences. In one of the author's penetration testing assignments, it was possible to disclose the path of the PHP functions being used by sending a very large value in the input fields. As the sanitized snapshot below shows, when 6000 or more bytes were fed into a particular field, the back-end PHP script was unable to process them, and the error that was displayed revealed the location of these PHP functions.

Figure 2: PHP timeout error

Using this error information, it was possible to access the restricted 'admin' folder. From the structure of the web site and the visible hyperlinks, there would have been no way to determine that the 'admin' directory existed within the 'func' sub-directory below the main $DocumentRoot. Multiple buffer overflows were also discovered in the PDGSoft Shopping Cart [bid 1256], which potentially allowed the attacker to execute code of his choice by over-writing the saved return address. Error pages can serve as a valuable source of critical information. These errors can be induced in web applications that do not follow strict input validation principles. For instance, the application may expect numeric values and fail when alphabetic or punctuation characters are supplied to it. This is exactly what happened in the case below. Here, the e-commerce website used numbers for its various pages. Users would navigate using a link such as http://www.vulnerablesite.com/jsp/Navigate.jsp?pageid=123. By manipulating the URL and

supplying the value 'AA' for the pageid, the following error was induced:

Figure 3: Discovering information through navigation errors

If you observe carefully, the highlighted information reveals the Oracle Application Server version, Oracle 9iAS 9.0.3.0.0, as well as certain third-party components being used by the web application, such as the Orion Application Server. It also reveals the path where other (possibly vulnerable) .jsp scripts exist: /scripts/menu.jsp.

2.5 Cross-site Scripting

The cross-site scripting (XSS) [ref 6] attack is primarily targeted against the end user and leverages two factors:

1. The lack of input and output validation being done by the web application.
2. The trust placed by the end user in a URL that carries the vulnerable web site's name.
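The first factor, missing output validation, is also where the standard fix lies: HTML-encode user input before echoing it back. A minimal sketch in Python (the search-results framing is hypothetical):

```python
import html

def render_search_results(user_input, results):
    """Echo the user's search term safely by HTML-encoding it first.
    Without html.escape(), input such as <script>alert("OK")</script>
    would be written into the page verbatim and executed by the
    victim's browser."""
    safe_term = html.escape(user_input)
    items = "".join(f"<li>{html.escape(r)}</li>" for r in results)
    return f"<p>Results for {safe_term}</p><ul>{items}</ul>"

page = render_search_results('<script>alert("OK")</script>', [])
print("<script>" in page)  # False: the payload has been rendered inert
```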

The XSS attack requires a web form that takes in user input, processes it, and prints the results on a web page that also contains the user's original input. It is most commonly found in 'search' features, where the search logic prints out the results along with a line such as 'Results for <user_supplied_input>'. If the user input is printed out without being parsed, an attacker can embed JavaScript by supplying it as part of the input. By crafting a URL that contains this JavaScript, a victim can be socially engineered into clicking on it, and the script executes on the victim's system.

A typical XSS attack URL would look like this: http://www.vulnerablesite.com/cgi-bin/search.php?keywords=<script>alert("OK")</script>. In this case, when the victim clicks on the link, a message box with the text "OK" opens up on his system. In most cases, the attacker would craft the URL to try and steal the user's cookie, which would probably contain the session ID and other sensitive information. The JavaScript could also be coded to redirect the user to the attacker's website, where malicious code could be launched using ActiveX controls or by exploiting browser vulnerabilities such as those in Internet Explorer or Netscape Navigator. Likewise, the JavaScript can redirect the user to a site that looks similar to the original web site and asks the user to enter sensitive information, such as his authentication details for that web site, his credit card number or his social security number. A related attack is shown below:

Figure 4: Phishing scam (Source: article on SecurityFocus, http://www.securityfocus.com/infocus/1745)

In this case, the attacker has opened two windows on the victim's system. The one in the background is the original Citibank web site, whereas the pop-up window in front of it requests the user's debit card number, PIN, and card expiration date. On hitting the submit button, this information is sent to the attacker's server. Called a 'phishing' attack [ref 7], it was carried out by sending a spoofed email that claimed to originate from Citibank and asked users to verify their details. The link in the spoofed email looked something like this: http://www.citibank.com:ac=piUq3027qcHw003nfuJ2@sd96V.pIsEm.NeT/3/?3X6CMW2I2uPOVQW

Most users would not be aware that, under URL syntax rules, everything before the '@' is treated as username and password information, so this link actually goes to sd96v.pisem.net (highlighted above), and not to www.citibank.com.

Similar attacks can be carried out if the web application has scripts that redirect users to other parts of the site, or to other related sites. For instance, in one of our assignments, the web application had a script that was used to send the user to dynamically created parts of the web site: http://www.vulnerablesite.com/cgi-bin/redirect.php?url=some_dynamic_value. Due to a lack of security awareness, the web developers did not realize that an attacker could craft a URL such as http://www.vulnerablesite.com/cgi-bin/redirect.php?url=www.attackersite.com and send it to a victim. This URL can be trivially obfuscated by hex-encoding the part that follows 'url=' or by converting the attacker's IP address into hexadecimal, octal or double-word values. For instance, if the attacker's IP address is 192.168.0.1, the URL could be crafted as follows: http://www.vulnerablesite.com/cgi-bin/redirect.php?url=http://3232235521/.
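The double-word trick is simple arithmetic: the four octets are packed into a single 32-bit integer, which browsers accept in place of the dotted form. A quick sketch:

```python
def ip_to_dword(ip):
    """Pack a dotted-quad IP address into its single-integer (dword)
    form. A URL such as http://<dword>/ resolves to the same host as
    the dotted form, which helps an attacker disguise a link's true
    destination."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(ip_to_dword("192.168.0.1"))  # 3232235521
```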
2.6 Remote command execution

The most devastating web application vulnerabilities occur when a CGI script allows an attacker to execute operating system commands due to inadequate input validation. This is most common with the use of the 'system' call in Perl and PHP scripts. Using a command separator and other shell metacharacters, it is possible for the attacker to execute commands with the privileges of the web server. For instance, Hassan Consulting's Shopping Cart allowed remote command execution [bid 3308] because shell metacharacters such as |;& were not rejected by the software. However, directory traversal was not possible in this software. In another case, Pacific Software's Carello Shopping Cart [bid 5192] had a vulnerable DLL that allowed the execution of remote commands through directory traversal attacks carried out using a specially crafted URL.

2.7 Weak Authentication and Authorization

Authentication mechanisms that do not prohibit multiple failed logins can be attacked using tools such as Brutus [ref 8]. Similarly, if the web site uses HTTP Basic Authentication or does not pass session IDs over SSL (Secure Sockets Layer), an attacker can sniff the traffic to discover users' authentication and/or authorization credentials.

Since HTTP is a stateless protocol, web applications commonly maintain state using session IDs or transaction IDs stored in a cookie on the user's system. This session ID thus becomes the only way that the web application can determine the online identity of the user. If the session ID is stolen (say, through XSS), or if it can be predicted, then an attacker can take over a genuine user's online identity vis-à-vis the vulnerable web site. Where the algorithm used to generate the session ID is weak, it is trivial to write a Perl script to enumerate the possible session ID space and break the application's authentication and authorization schemes. This was illustrated in a paper by David Endler [ref 9], "Brute-Force Exploitation of Web Application Session IDs", where he explains how session IDs of sites like www.123greetings.com, www.register.com, and others could be trivially brute-forced.
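The metacharacter problem from section 2.6 can be defused by never handing user input to a shell. The sketch below, in Python rather than the Perl or PHP discussed above, rejects shell metacharacters outright and passes the value as a discrete argument (the 'ordercheck' helper program is hypothetical):

```python
import subprocess

SHELL_METACHARACTERS = set("|;&`$<>(){}!\n*?")

def safe_lookup(order_id):
    """Run a helper program on a user-supplied value without involving
    a shell. Because subprocess.run() receives an argument list rather
    than a command string, metacharacters such as | ; & are inert; the
    check below rejects them anyway, as defence in depth."""
    if set(order_id) & SHELL_METACHARACTERS:
        raise ValueError("illegal characters in input")
    # 'ordercheck' is a hypothetical helper binary
    return subprocess.run(["ordercheck", order_id], capture_output=True)

try:
    safe_lookup("1001; rm -rf /")
except ValueError as exc:
    print(exc)  # illegal characters in input
```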
Similarly, in one such instance, we discovered that the order ID for the user's transactions was not generated randomly, and it was possible to access the orders placed by other users simply by writing a Perl script that enumerated all possible order IDs within a given range. The most pertinent point here is that although web applications may have mechanisms to prevent a user from making multiple password-guessing attempts during authentication, they do not usually prevent a user from trying to brute-force session IDs by resubmitting the URLs as described in Endler's paper.

3. Countermeasures

The most important point is to build security into the web application at the design stage itself. In fact, one of the key activities during the design phase should be a detailed risk assessment exercise. Here, the team must identify the key information assets that the web application will be dealing with. These could include configuration information, user transaction details, session IDs, credit card numbers, etc. Each of these information assets needs to be classified in terms of sensitivity. Depending upon the tentative architecture chosen, the developers, along with security experts, must analyze the threats, impact, vulnerabilities and threat probabilities for the system. Once these risks are listed, countermeasures must be designed and, if necessary, the architecture itself may be modified. Countermeasures should also include strict input validation routines, a 3-tier modular architecture, use of open-source cryptographic standards, and other secure coding practices. Some excellent resources on secure coding are David Wheeler's "Secure Programming for Linux and Unix HOWTO" [ref 10], Michael Howard's "Writing Secure Code", and John Viega's "Secure Programming Cookbook for C and C++". The Open Web Application Security Project's Guide [ref 11] is also a highly useful document on web application security issues.
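For the session and order ID weaknesses described in section 2.7, the countermeasure is to draw identifiers from a cryptographic random number generator rather than a counter. A minimal sketch using Python's standard library:

```python
import secrets

def new_session_id():
    """Return a 128-bit session ID from a cryptographically secure
    RNG. With 2**128 possible values, the enumeration attacks
    described above (sequential order IDs, guessable session tokens)
    become computationally infeasible."""
    return secrets.token_hex(16)

print(len(new_session_id()))  # 32 hex characters
```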
Conclusion

The vulnerabilities discussed in this article are not necessarily exclusive to shopping carts or online payment systems; they could easily be present in other types of web applications as well. In the case of e-commerce systems, however, the vulnerabilities acquire a graver dimension due to the financial nature of the transactions. What is at stake is not only a direct loss of revenues: companies may face a serious loss of reputation as well, and in some cases legal penalties for violating customer privacy or trust, as in the cases of Guess.com and PetCo.com. It is of paramount importance for designers and developers of web applications to consider security as a primary design goal and to follow secure coding guidelines in order to provide the highest possible degree of assurance to their customers.

References

1. SQL Injection and Oracle, Pete Finnigan, http://www.securityfocus.com/infocus/1644
2. Advanced SQL Injection, Chris Anley, http://www.nextgenss.com/papers/advanced_sql_injection.pdf
3. News article on the SQL injection vulnerability at Guess.com, http://www.securityfocus.com/news/346
4. Jeremiah Jacks at work again, this time at PetCo.com, http://www.securityfocus.com/news/7581
5. Achilles can be downloaded from http://achilles.mavensecurity.com/
6. CERT Advisory CA-2000-02, Malicious HTML Tags Embedded in Client Web Requests, http://www.cert.org/advisories/CA-2000-02.html
7. Definition of 'phishing', http://www.webopedia.com/TERM/p/phishing.html
8. Brutus can be downloaded from http://www.hoobie.net/brutus/
9. Brute-Force Exploitation of Web Application Session IDs, David Endler, http://www.idefense.com/application/poi/researchreports/display
10. Secure Programming for Linux and Unix HOWTO, David Wheeler, http://www.dwheeler.com/secure-programs/
11. OWASP Guide, http://www.owasp.org/

About the author: K. K. Mookhey. This article originally appeared on SecurityFocus.com; reproduction in whole or in part is not allowed without expressed written consent.

Internet security is a broad term that refers to the various steps individuals and companies take to protect computers or computer networks that are connected to the Internet. One of the basic truths behind Internet security is that the Internet itself is not a secure environment. The Internet was originally conceived as an open, loosely linked computer network that would facilitate the free exchange of ideas and information. Data sent over the Internet, from personal e-mail messages to online shopping orders, travels through an ever-changing series of computers and network links. As a result, unscrupulous hackers and scam artists have ample opportunities to intercept and change the information. It would be virtually impossible to secure every computer connected to the Internet around the world, so there will likely always be weak links in the chain of data exchange.

Due to the growth in Internet use, the number of computer security breaches experienced by businesses has increased rapidly in recent years. At one time, 80 percent of security breaches came from inside the company. But this situation has changed as businesses have connected to the Internet, making their computer networks more vulnerable to access from outside troublemakers or industry spies. To make matters worse, as Vince Emery noted in How to Grow Your Business on the Internet, 97 percent of companies that experience breaches in computer security do not know it. When business owners do become aware of problems, furthermore, Emery estimated that only 15 percent report the security breach to authorities.

Small business owners need to recognize the various threats involved in conducting business over the Internet and establish security policies and procedures to minimize their risks. As a writer for Business Week noted, "With your business ever more dependent on safe use of the Internet, security savvy has become as important as understanding marketing and finance."
Internet security measures range from hardware and software protection against hackers and viruses to training and information programs for employees and system administrators. It may be impossible, or at least impractical, for a small business to achieve 100 percent secure computer systems. But small business owners can find ways to balance the risks of conducting business over the Internet with the benefits of speedy information transfer between the company and its employees, customers, and suppliers.

COMMON SECURITY PROBLEMS

In The E-Commerce Book, Steffano Korper and Juanita Ellis outline several common security problems that affect small business computers. For example, a well-known cause of computer problems is viruses, or damaging programs that are introduced to computers or networks. Some viruses rewrite coding to make software programs unusable, while others scramble or destroy data. Many viruses spread quickly and operate subtly, so they may not be noticed until the damage has already been done.

Hackers have two main methods of causing problems for businesses' computer systems: they either find a way to enter the system and then change or steal information from the inside, or they attempt to overwhelm the system with information from the outside so that it shuts down. One way a hacker might enter a small business's computer network is through an open port, an Internet connection that remains open even when it is not being used. Hackers might also attempt to appropriate passwords belonging to employees or other authorized users of a computer system. Many hackers are skilled at guessing common passwords, while others run programs that locate or capture password information.

Another common method of attack used by hackers is e-mail spoofing. This method involves sending authorized users of a computer network fraudulent e-mail that appears as if it were sent by someone else, most likely a customer or someone else the user would know. The hacker then tries to trick the user into divulging his or her password or other company secrets. Finally, some hackers manage to shut down business computer systems with denial of service attacks. These attacks involve bombarding a company's Internet site with thousands of messages so that no legitimate messages can get in or out.

BASIC MEANS OF PROTECTION

Luckily, computer experts have developed ways to help small businesses protect themselves against the most common security threats. For example, most personal computers sold today come equipped with virus protection, and a wide variety of antivirus software is also available for use on computer networks. In addition, many software companies and Internet Service Providers put updates online to cover newly emerging viruses. Besides installing antivirus software and updating it regularly, Korper and Ellis recommend backing up data frequently and teaching employees to minimize the risk of virus transmission.
One of the most effective ways to protect a computer network that is connected to the Internet from unauthorized outside access is a firewall. A firewall is a security device, typically dedicated hardware, that is installed between a computer network and the Internet. It passes legitimate traffic through while blocking external users from accessing the internal computer system. Of course, a firewall cannot protect information once it leaves the network. A common method of preventing third parties from capturing data while it is being transmitted over the Internet is encryption. Encryption programs put data into a scrambled form that cannot be read without a key.

There are several methods available to help small businesses prevent unauthorized access to their computer systems. One of the most common is authentication of users through passwords. Since passwords can be guessed or stolen, some companies use more sophisticated authentication technologies, such as coded ID cards, voice recognition software, retinal scanning systems, or handprint recognition systems. All of these systems verify that the person seeking access to the computer network is an authorized user. They also make it possible to track computer activity and hold users accountable for their use of the system. Digital signatures can be used to authenticate e-mails and other outside documents. This technology provides proof of the origin of documents and helps prevent e-mail spoofing.

PROTECTING E-COMMERCE CUSTOMERS

In addition to protecting their own computers from security threats, companies that conduct business over the Internet must also take care to protect their online customers. Individuals and companies that make purchases online are becoming increasingly concerned about the security of the Web sites they visit. If customers experience problems using your small business's site, they are unlikely to trust you with their business again.
They may use the mass communication potential of the Internet to inform other potential customers of the hazards. Furthermore, competitors may take advantage of the situation to steal your customers by advertising a secure Web server. "It's up to you to make your online customers feel safe and secure in their dealings with your company," Paul J. Dowling, Jr. wrote in Web Advertising and Marketing. "And it's your responsibility to reduce their actual risk. Your customers have entrusted their money to your company; the least your company can do is safeguard it."

Unfortunately, small businesses engaged in e-commerce are among the most vulnerable to Internet security threats. As Emery explained, the same programs that facilitate electronic shopping also create a potential hole in your computer system security. As you collect credit card numbers and other customer information from fill-in-the-blank forms, or grant potential customers access to your databases full of product information, you may also leave yourself open to attacks by hackers or competitive spies. Emery makes a series of recommendations for small businesses that conduct business over the Internet. First, he stresses that all Internet software should be kept as far as possible from regular system software. For example, a small business

might use a standalone computer to run its Web server, or place a firewall between the Web server and the rest of the computer network. It may also be possible to run a small e-commerce operation on an Internet Service Provider's computer rather than a company machine. Emery also emphasizes that small businesses should never store customer information, especially credit card numbers, on their Web servers or on any other computer connected to the Internet. It is also a good idea to avoid putting any sensitive or proprietary company information on these machines. For small businesses, which may not be able to employ computer experts qualified to establish and monitor Internet security systems, Emery recommends leaving e-commerce security to an Internet Service Provider (ISP). Many ISPs allow businesses to purchase Web space on a secure server for a reasonable price. In any case, small business owners should weigh the costs of implementing a secure Web server, and of hiring the staff to continually monitor and maintain it, against the potential profits they may receive from online sales.

SECURITY POLICIES AND PROCEDURES

In order for hardware and software security measures to be effective, small businesses must incorporate computer security into their basic operations. Korper and Ellis recommend that small business owners establish a set of policies and procedures for Internet security. These policies should encompass computer activity at both the user level and the system administrator level. At the user level, one of the first tasks is to educate users about the importance of computer security. Every user should require a password to access the company's computer system. Passwords should be at least eight characters long and include letters, numbers, and symbols. Employees should be advised to avoid obvious choices like names or birthdates.
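The password policy in the preceding paragraph, at least eight characters mixing letters, numbers, and symbols, is easy to enforce mechanically. A sketch of such a check:

```python
import string

def password_meets_policy(pw):
    """Return True only if the password is at least eight characters
    long and contains at least one letter, one digit, and one symbol,
    as the policy above requires."""
    return (len(pw) >= 8
            and any(c in string.ascii_letters for c in pw)
            and any(c in string.digits for c in pw)
            and any(c in string.punctuation for c in pw))

print(password_meets_policy("Blue!Sky42"))  # True
print(password_meets_policy("birthday"))    # False: no digits or symbols
```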
In addition, employees should be instructed never to store their password in a drawer or on a bulletin board, never to let anyone else log into the system using their name and password, and never to leave their computer on and unattended. Overall, small business owners need to convince employees that the information on the company's computer system is confidential, and that they have a responsibility to help protect it.

Computer system administrators should be involved in developing and implementing security policies and procedures. They are in charge of ensuring that the system's hardware and software are secure, as well as controlling and monitoring access to the system. Korper and Ellis mention a number of steps administrators can take to help protect a company's computer systems. First, they recommend keeping servers in a locked room with limited access. Second, they suggest separating system files from data files on the computer network. Third, they advise administrators to install virus scanning software on all company computers and prohibit employees from copying outside programs or files onto the network.

Many of the system administrator's duties involve preventing unauthorized people, both inside and outside the company, from gaining access to the computer network. Internally, it is a good policy to limit employees' access to the system based upon their job needs. For example, it would probably not be necessary for a person in accounting to have access to personnel records. The administrator should define user and group-access rights to allow employees to do their jobs without also making the system unnecessarily vulnerable to attacks from disgruntled workers. Another sound policy is to require employees to change passwords frequently, and to immediately disable passwords when employees leave the company or are terminated. Administrators should also grant Internet access only to those employees who need it for business purposes.
It is possible to block employees' access to games, newsgroups, and adult sites on the Internet, and to install software that generates reports of the Internet destinations visited by employees. In order to prevent unauthorized external access to the computer system, administrators should define access rights granted to suppliers and customers. They should also make sure Internet ports are secure, and possibly implement a firewall to protect the internal network from outside access. Another important policy is never to store employee passwords on any computer that is connected to the Internet. Administrators should also be careful about establishing guest accounts on the company's computer system, since some such requests may come from hackers or competitive spies. There are a number of tools available to assist system administrators in monitoring the security of a company's computer network. For example, network auditing software tracks users who are accessing the system and what files are being changed. It also alerts the administrator to excessive failed log-in attempts. The best auditing packages generate network usage reports on demand, which allows the administrator to reconstruct events in case of a security breach.

Finally, a small business's computer security policies should cover emergency situations, such as detection of a virus or a security breach from outside the company. As Emery noted, it may be helpful to prepare a printed emergency response guide for both employees and system administrators; in a worst-case scenario, any guidelines stored on the computer system would be useless. Emery also outlines the basic steps companies should follow in case of severe system problems. First, employees who suspect a problem should contact the network administrator. The administrator should then get in touch with technical support at the ISP to determine the extent of the problem. It may also be helpful to contact the Computer Emergency Response Team (CERT) to find out if other companies are experiencing the same problems. At this point, the administrator may wish to contact the small business owner or appropriate non-technical managers to inform them of the problems. Management can then decide whether to contact local law enforcement and what to tell employees.

ASSISTANCE WITH INTERNET SECURITY

Although dealing with the intricacies of Internet security may seem intimidating, there are a number of resources small business owners can turn to for help. For example, many companies have begun to offer packaged online security technologies, such as the hardware-based Web Safe system. In addition, secure Web servers and browsers are widely available. These systems, which include Netscape Navigator and Netscape Commerce Server, remove much of the Internet security burden from small businesses. Furthermore, several Web sites provide free virus warnings and downloadable antivirus patches for Web browsers. The Computer Security Institute provides annual surveys on security breaches at www.gocsi.com. Another useful resource is the National Computer Security Association ( www.ncsa.com ), which provides tips on Internet security for business owners and supplies definitions of high-tech terms.
Small businesses seeking to establish Internet security policies and procedures might begin by contacting CERT. This U.S. government organization, formed in 1988, works with the Internet community to raise awareness of security issues and organize the response to security threats. The CERT web site ( www.cert.org ) posts the latest security alerts and also provides security-related documents, tools, and training seminars. Finally, CERT offers 24-hour technical assistance in the event of Internet security breaches. Small business owners who contact CERT about a security problem will be asked to provide their company's Internet address, the computer models affected, the types of operating systems and software used, and the security measures that were in place.

Source: Internet Security, Reference for Business, http://www.referenceforbusiness.com/small/Inc-Mail/Internet-Security.html
