
1. Access Control Levels
Access control is a three-step process that involves identification, authentication, and authorization.

User Identification
Before accessing a resource, a user must first identify him or herself. User identification is the process of establishing the user's identity and usually requires very little interaction on the user's part. User identification can serve multiple purposes:
- Once a user has identified him or herself, a system can check whether that person has previously registered and is in fact allowed to use the system (provided that they prove their identity, as described in the following section).
- A system may need an identity to establish a username for future access. This happens if a user has not previously registered his or her identity on a system.
- A system may simply want to identify users so that it can keep a tracking log of people who have accessed a system, or simply record the number of times people have accessed a system. Anonymous FTP is an example of this kind of identification.

There are two popular methods of user identification in libraries. Library cards with magnetic strips or bar codes allow patrons to carry their identification with them in a physical form. Devices such as copiers and automated checkout systems often require such identification. This form can also conveniently keep track of monetary credit. The other popular method of identification is a username, which requires the user to remember their identification; it is used primarily when logging in to a computer.

In most cases, a user must first register his or her identification. Obtaining a library card is the most common way of registering an identity. Once a patron's or staff member's identity is registered, the library can then determine what resources that user can access. Registration can require a lot of administrative overhead. Large libraries may implement some kind of automated or self-registration process. Smaller libraries may register patron and staff accounts on the same network operating system and thus avoid implementing complicated user authentication databases. A common practice is to register a generic user account that is available to all patrons. This practice is beneficial when a user is accessing a resource that is not sensitive. In this case, all patrons have the same identity and authentication is avoided.

Authentication
Authentication is the process of a user proving that he or she is actually the person claimed. There are three basic ways to do this:
1. By providing information the system or network is expecting, such as a password or Personal Identification Number (PIN)

2. By providing a physical card, such as a library card or ID card
3. By providing some physical attribute which the system or network can verify, such as fingerprints or retinal pattern

Each of these authentication methods has drawbacks. Password or PIN authentication is easy and inexpensive, but it requires that the user commit something to memory, which is easy to forget. Passwords can also be susceptible to compromise, depending on their length and what kinds of characters are used. Library cards or smart cards can store and track a lot of useful information, but unless they are password protected, a stolen card can be easily compromised, and card readers are more costly than password and PIN authentication. Biometrics literally means "life measure." It is a method for automatically identifying users based upon their unique physical characteristics, such as their retinal pattern or their fingerprints. Biometrics is useful for providing security for sensitive data, and it is very hard to compromise this kind of security. However, biometrics can be highly cost prohibitive. Retinal scanners are still somewhat of an emerging technology, but fingerprint scanners are available now and cost less than retinal scanners (from about $100 and up per device). However, when the fingerprint scanner cost is multiplied by the number of computers you have, biometrics ends up being considerably more costly than smart cards or passwords.
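To make the first method concrete, here is a minimal Python sketch (an illustration added here, not from the original text) of how a system can check a password without ever storing it in plain text; the password string and iteration count are illustrative assumptions.

```python
# Minimal sketch of "something you know" authentication: store a salted,
# slowly-hashed password rather than the password itself, then recompute
# and compare at login. Standard library only; all values are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_hash) for storing a new password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from the login attempt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```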

Authorization
Authorization is the final process in user-level security. It is the process of determining what resources a user can access after successful identification and authentication. Different users have different authorization to library resources. For example, a patron should only have authorization to read from a card catalog system, but a staff user should have authorization to make changes to references in the system, and an administrator should have full control to change anything, including what kind of authorization other users have. Another example is Internet filtering. Administrators and staff should have unlimited access to the Internet; however, patrons should only be authorized to view filtered content. It is up to the library automation staff to implement authorization. Certain systems are configured with default authorization for different users and groups. However, when implementing network security, great care should be taken in determining what resources users are authorized to access. Security is only as strong as its weakest link: if a system has a strong identification and authentication process but a weak authorization process, security may be easily compromised.
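As a rough illustration of the patron/staff/administrator example above (a sketch only; the role and permission names are made up, not drawn from any real system), authorization can be modeled as a mapping from roles to permitted operations:

```python
# Toy role-based authorization table: each role maps to the set of
# operations it is allowed to perform. Names are illustrative.
PERMISSIONS = {
    "patron":        {"catalog.read"},
    "staff":         {"catalog.read", "catalog.write"},
    "administrator": {"catalog.read", "catalog.write", "users.manage"},
}

def is_authorized(role: str, operation: str) -> bool:
    """Return True if the given role may perform the given operation."""
    return operation in PERMISSIONS.get(role, set())

print(is_authorized("patron", "catalog.read"))    # True
print(is_authorized("patron", "catalog.write"))   # False
print(is_authorized("staff", "catalog.write"))    # True
```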

2. Message Digest
Message digest functions, also called hash functions, are used to produce digital summaries of information called message digests. Message digests (also called hashes) are commonly 128 bits to 160 bits in length and provide a digital identifier for each digital file or document. Message digest functions are mathematical functions that process information to produce a different message digest for each unique document. Identical documents have the same message digest; but if even one of the bits of the document changes, the message digest changes. Figure 14.3 shows the basic message digest process.
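The following short Python example (an illustration added here, using the standard hashlib module) demonstrates this property: identical inputs always produce the same digest, while changing a single character produces a completely different one.

```python
# Identical documents give identical digests; a one-letter change gives a
# completely different digest. SHA-1 produces the 160-bit digests mentioned above.
import hashlib

doc_a = b"The quick brown fox jumps over the lazy dog"
doc_b = b"The quick brown fox jumps over the lazy cog"  # one letter changed

print(hashlib.sha1(doc_a).hexdigest())  # digest of the original document
print(hashlib.sha1(doc_a).hexdigest())  # same input, same digest
print(hashlib.sha1(doc_b).hexdigest())  # tiny change, entirely different digest
```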

Figure 14.3 Example of the Message Digest Process

Because message digests are much shorter than the data from which the digests are generated, and the digests have a finite length, duplicate message digests, called collisions, can exist for different data sets. However, good message digest functions use one-way functions to ensure that it is mathematically and computationally infeasible to reverse the message digest process and discover the original data. Finding collisions for good message digest functions is also mathematically and computationally infeasible, but possible given enough time and computational effort. However, even if an attacker discovers a collision, it is highly improbable that the collision could be useful. For example, assume that an English message produces a message digest with a value of n, and an attacker somehow manages to computationally generate a second set of data that also produces a message digest of n. The second set of data would have to be in the English language and form a coherent and germane message for an attacker to be able to use it for an illicit purpose, such as sending a counterfeit message in place of the original message. With the best message digest functions in use today, the probability that a second set of collision data would be in a known language or form a coherent message is minuscule.

Message digests are commonly used in conjunction with public key technology to create digital signatures or "digital thumbprints" that are used for authentication, integrity, and nonrepudiation. Message digests are also commonly used with digital signing technology to provide data integrity for electronic files and documents. For example, to provide data integrity for e-mail messages, a message digest can be generated from the completed mail message, digitally signed with the originator's private key, and then transmitted with the e-mail message. The recipient of the message can then do the following to check the integrity of the message:
- Use the same message digest function to compute a digest for the message.
- Use the originator's public key to verify the signed message digest.
- Compare the new message digest to the original digest.

If the two message digests do not match, the recipient knows the message was altered or corrupted. Figure 14.4 shows a basic integrity check process with a digitally signed message digest.
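Here is a sketch of this three-step check in Python, using the third-party cryptography package (an illustration, not part of the original text): the sign() and verify() calls compute the message digest internally, so verification fails if even one bit of the message changes. The key size, padding choice, and message content are assumptions made for the example.

```python
# Sign a message digest with the originator's private key, then verify it
# with the public key on the recipient's side (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Ship order 4471 on Friday"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Recipient side: recompute the digest and check it against the signature.
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("Message intact; signature valid")
except InvalidSignature:
    print("Message was altered or corrupted")
```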

Because the message digest is digitally signed with the sender's private key, it is not feasible for an intruder to intercept the message, modify it, and create a new valid signed message digest to send to the recipient. Another method of ensuring the integrity of data is to use message digests with a Hash-based Message Authentication Code (HMAC) function, as described later in this chapter. Two of the most commonly used message digest algorithms today are MD5, a 128-bit digest developed by RSA Data Security, Inc., and SHA-1, a 160-bit digest developed by the National Security Agency. The SHA-1 algorithm is generally considered to provide stronger cryptographic security than MD5, because it uses a longer message digest and is not vulnerable to some attacks that can be conducted against MD5.
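For comparison, here is a minimal HMAC sketch using only the Python standard library (the shared key and message are made-up values): both parties hold the same secret key, and only a holder of that key can produce a matching authentication code.

```python
# Keyed message authentication with HMAC: the tag depends on both the
# message and the shared secret key, so a forged or altered message fails.
import hashlib
import hmac

shared_key = b"a-secret-shared-out-of-band"  # illustrative key only
message = b"meet at noon"

tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Recipient recomputes the tag over the received message and compares.
check = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, check))  # True only if message and key match
```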

3. Digital Signatures

Just as handwritten signatures or physical thumbprints are commonly used to uniquely identify people for legal proceedings or transactions, digital signatures ("digital thumbprints") are commonly used to identify electronic entities for online transactions. A digital signature uniquely identifies the originator of digitally signed data and also ensures the integrity of the signed data against tampering or corruption.

One possible method for creating a digital signature is for the originator to encrypt all of the data with the originator's private key and enclose the resulting signature with the original data. Anyone with the originator's public key can decrypt the signature and compare the decrypted message to the original message. Because only someone with the private key can create the signature, the integrity of the message is verified when the decrypted message matches the original. If an intruder alters the original message during transit, the intruder cannot also create a new valid signature. If an intruder alters the signature during transit, the signature does not verify properly and is invalid. However, encrypting all data to provide a digital signature is impractical for three reasons:
- The ciphertext signature is the same size as the corresponding plaintext, so message sizes are doubled, consuming large amounts of bandwidth and storage space.
- Public key encryption is slow and places heavy computational loads on computer processors, so network and computer performance can be significantly degraded.
- Encrypting the entire contents of information produces large amounts of ciphertext, which can be used for cryptanalysis attacks, especially known plaintext attacks (where certain parts of the encrypted data, such as e-mail headers, are known beforehand to the attacker).

Digital signature algorithms use more efficient methods to create digital signatures. The most common types of digital signatures today are created by signing message digests with the originator's private key to create a digital thumbprint of the data. Because only the message digest is signed, the signature is usually much shorter than the data that was signed. Therefore, digital signatures place a relatively low load on computer processors during the signing process, consume insignificant amounts of bandwidth, and produce small amounts of ciphertext for cryptanalysis. Two of the most widely used digital signature algorithms today are the RSA digital signature process and the Digital Signature Algorithm (DSA).
Digital Signature Standard

Another widely used technology for creating digital signatures is the Digital Signature Standard (DSS), which was developed by the National Security Agency and adopted by the United States government as its digital signature standard. DSS defines the Digital Signature Algorithm (DSA), which functions in a manner similar to RSA. Although similar to RSA, DSA does not encrypt message digests with the private key or decrypt message digests with the public key. Instead, DSA uses special mathematical functions to generate a digital signature composed of two 160-bit numbers that are derived from the message digest and the private key. DSA uses the public key to verify the signature, but the verification process is more complex than with RSA. The digital signature processes for DSA and RSA are generally considered to be of equal strength. However, DSA requires the use of the SHA-1 message digest function to ensure strong digital signatures, whereas RSA can be used with other message digest functions (besides SHA-1) that might produce weaker digital signatures. Because the DSA signature verification process increases computer processor load significantly relative to the verification process for RSA (all other conditions being equal), the RSA digital signature process generally provides better overall performance. Because DSA is used only for digital signatures and makes no provisions for data encryption (for example, to provide secure secret key exchange), DSA is usually not subject to the export or import restrictions commonly imposed on RSA cryptography technology. Therefore, DSS digital signature technology can often be used when RSA digital signature technology cannot because of government-imposed export or import restrictions.
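Here is a small DSA signing sketch with the same cryptography package (illustrative only; note that the example pairs DSA with SHA-256 because SHA-1 is deprecated in modern libraries, whereas the standard as originally described above mandated SHA-1):

```python
# DSA sign/verify: the signature is derived from the message digest and the
# private key, rather than being an encryption of the digest.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

private_key = dsa.generate_private_key(key_size=2048)
public_key = private_key.public_key()

data = b"signed announcement"
signature = private_key.sign(data, hashes.SHA256())

# verify() raises InvalidSignature if the data or signature was tampered with.
public_key.verify(signature, data, hashes.SHA256())
print("DSA signature verified")
```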
Uses for Digital Signatures

Anyone with the public key can use it to perform a validity check of digital signatures created by the private key. Only a digital signature created by the appropriate private key decrypts and validates properly with the public key. If a different private key was used to sign the data, the validity check fails. If the contents of digitally signed data or the digital signature have been tampered with or are corrupted, the validity check also fails. Valid digital signatures can be used to perform the following functions:
- Authenticate online entities.
- Verify the authorship or origin of digital data.
- Ensure the integrity of digital data against tampering.

Many security technologies use digital signatures. For example, Microsoft Authenticode can be used to digitally sign software programs, safeguarding them when they are distributed on an intranet or the Internet to help counter the threat of software tampering and the spread of viruses and other malicious code. Likewise, the S/MIME protocol can be used to digitally sign e-mail messages to ensure the integrity of mail communications.

4. Data Encryption, Key Distribution, and Key Agreement
Data encryption is the process of scrambling transmitted or stored information so that it is unintelligible until it is unscrambled by the intended recipient. With regard to computing, data encryption has historically been used primarily to protect mission-critical data, government records, and military secrets from foreign governments. It has been used increasingly over the past 10 years by the financial industry to protect money transfers, by businesses to protect credit-card information for electronic commerce, and by corporations to secure the transmission of sensitive proprietary information. Before 2000, most of the encryption focus was on data transmission, but the events of September 11, 2001 and the rise of compliance requirements have moved the topic of encrypting data at rest (stored data) much higher on the priority list of leading-edge data protection strategies. The enciphering and deciphering of messages in secret code or cipher is called cryptography.

DES
In 1977 the Data Encryption Standard (DES, and later Triple DES) was adopted in the United States as the first federal encryption standard. DES applies a 56-bit key to each 64-bit block of data. Other encryption technologies in use include Secure Sockets Layer (SSL) for Internet transactions, Pretty Good Privacy (PGP), and Secure Hypertext Transfer Protocol (S-HTTP). DES is now considered insecure for many applications, chiefly because its 56-bit key is too small: as microprocessor speeds have increased, DES keys have been broken in less than 24 hours. Computer chips costing under $10 can test 200 million DES keys per second. Because of growing concern over the viability of the DES algorithm, NIST (the National Institute of Standards and Technology) indicated that DES would not be recertified as a standard and accepted submissions for its replacement.
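A quick back-of-the-envelope check of these numbers (the 200-million-keys-per-second figure comes from the text above; the chip count is an assumption added for illustration):

```python
# How long an exhaustive DES key search takes at the rate quoted above.
keys = 2 ** 56                 # total DES key space
rate = 200_000_000             # keys/second for one inexpensive chip (from text)

one_chip_years = keys / rate / (3600 * 24 * 365)
print(f"One chip, worst case: {one_chip_years:.1f} years")   # ~11.4 years

chips = 5000                   # hypothetical machine with thousands of chips
hours = keys / (rate * chips) / 3600
print(f"{chips} chips, worst case: {hours:.1f} hours")       # ~20 hours
```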

AES
The second encryption standard to be adopted was the Advanced Encryption Standard (AES), a symmetric (secret or private key) 128-bit block data encryption technique developed by Belgian cryptographers Joan Daemen and Vincent Rijmen. The U.S. government selected the algorithm as its encryption standard in October 2000 after a long standardization process, replacing the DES algorithm, and on December 6, 2001, the Secretary of Commerce officially approved Federal Information Processing Standard (FIPS) 197. AES is expected to be used extensively worldwide, as was the case with its predecessor DES. AES is more secure than DES because it offers a larger key size, while ensuring that the only known approach to decrypting a message is to try every possible key. The AES algorithm can use key lengths of 128 bits (the default), 192 bits, or 256 bits. AES is a mutually acceptable algorithm that effectively protects sensitive government information; note that it is an entirely new cipher and is not backward compatible with DES. Symmetric standards such as AES can provide very high levels of security, but they require that both the sender and the receiver share the same key and keep it secret from anyone else. Top Secret information requires use of either the 192-bit or 256-bit key lengths, and implementations of AES in products intended to protect US national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use. As of 2005, no successful attacks against AES had been recognized.
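A minimal AES sketch using the cryptography package's AES-GCM construction (an added illustration; the key length and plaintext are arbitrary choices): both sides must hold the same secret key, exactly as the text describes.

```python
# Symmetric AES encryption and decryption with a shared 256-bit key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # one of the 128/192/256-bit options
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # never reuse a nonce with one key

ciphertext = aesgcm.encrypt(nonce, b"sensitive record", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext)  # b'sensitive record'
```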

Asymmetric Encryption
Asymmetric encryption differs from symmetric encryption in that it uses two keys: a public key known to everyone and a private key, or secret key, known only to the recipient of the message. Asymmetric encryption lessens the risk of key exposure by using two mathematically related keys, the private key and the public key. When users want to send a secure message to another user, they use the recipient's public key to encrypt the message. The recipient then uses a private key to decrypt it. An important element of the public key system is that the public and private keys are related in such a way that only the public key can be used to encrypt messages and only the corresponding private key can be used to decrypt them. Moreover, it is virtually impossible to determine the private key from the public key.
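The flow just described, sketched in Python with the cryptography package (illustrative only; the key size and OAEP padding are common modern choices, not requirements stated in the text):

```python
# Encrypt with the recipient's public key; decrypt with the private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = recipient_public.encrypt(b"for your eyes only", oaep)
print(recipient_private.decrypt(ciphertext, oaep))  # b'for your eyes only'
```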

There are a number of asymmetric key encryption systems, but the best known and most widely used is RSA, named for its three co-inventors, Rivest, Shamir, and Adleman. The Secure Sockets Layer used for secure communications on the Internet uses RSA (the HTTPS protocol is simply HTTP over SSL). Asymmetric encryption is based on algorithms that are more computationally complex, and its performance overhead is significant, making it unsuitable for encrypting very large amounts of data. It is possible to take advantage of the strengths of both key methods by encrypting data with a symmetric key and then protecting that symmetric key with asymmetric encryption, as the sketch below shows. Asymmetric encryption is often considered more secure in this respect, because the decryption key can be kept private and never needs to be shared.
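Here is a sketch of that hybrid approach (an illustration added here): the bulk data is encrypted with a fast symmetric AES key, and only that small key is wrapped with slower RSA encryption.

```python
# Hybrid encryption: symmetric AES-GCM for the bulk data, RSA-OAEP to
# protect the small symmetric session key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the data with a fresh symmetric key, then wrap that key.
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
bulk_ciphertext = AESGCM(session_key).encrypt(nonce, b"a very large document...", None)
wrapped_key = recipient_private.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk data.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, bulk_ciphertext, None))
```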

Keys are the Key
The basic idea of key-based encryption is that a block, file, or other unit of data is scrambled so that the original information is hidden within a layer of encryption. The scrambled data is called ciphertext. In theory, only the person or machine doing the scrambling and the recipient of the ciphertext know how to decrypt or unscramble the data, since it will have been encrypted using an agreed-upon set of keys. The difficulty of cracking an encrypted message is a function of the key length. For example, an 8-bit key allows for only 256 possible keys (2^8) and could be cracked quickly, while a 128-bit key (which equates to searching 2^128 keys) might take decades to crack. The same computing power that yields strong encryption can be used to break weak encryption schemes. Strong encryption makes data private, but not necessarily secure: to be secure, the recipient of the data, often a server, must be positively identified as the approved party. This is usually accomplished online using digital signatures or certificates.

Secret Key Sharing
A secret sharing scheme is a method of sharing a secret among a set of participants such that only certain groups of participants are qualified to reconstruct the secret by combining their shares. A secret sharing scheme is called perfect if each non-qualified subset of the participants gets no information about the secret (beyond knowing the set from which it is drawn). An access structure is the set of qualified groups. For reasons of efficiency and security it is important to keep the sizes of the shares as small as possible. This leads to the definition of the worst-case information rate, which is the ratio between the size of the secret and the maximum size of the shares, and the average information rate, which is the ratio between the size of the secret and the arithmetic mean of the sizes of all shares. In secret sharing, the main problems are: finding basic constructions of secret sharing schemes (that is, building secret sharing schemes from scratch), finding composition constructions of secret sharing schemes (that is, composing known schemes in order to build a new scheme), finding upper bounds on the information rates, and finding upper and lower bounds on the amount of randomness needed to construct each share.
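As one classical construction of such a scheme, here is a compact sketch of Shamir's (k, n) threshold secret sharing over a prime field (an illustration added here; the specific scheme is not named in the text above). Any k shares reconstruct the secret via Lagrange interpolation; fewer than k reveal nothing about it, which makes the scheme perfect.

```python
# Shamir's (k, n) threshold scheme: the secret is the constant term of a
# random degree-(k-1) polynomial over a prime field; each share is one
# point on that polynomial. Requires Python 3.8+ for pow(x, -1, m).
import secrets

PRIME = 2**127 - 1  # a Mersenne prime larger than any secret shared here

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:  # evaluate the polynomial at x, mod PRIME
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # 123456789 (any 3 of the 5 shares suffice)
print(reconstruct(shares[2:]))   # 123456789
```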
