1. The Caesar Cipher
The Caesar cipher has an important place in history. Julius Caesar is said to have
been the first to use this scheme, in which each letter is translated to a letter a
fixed number of places after it in the alphabet. Caesar used a shift of 3, so that
plaintext letter pi was enciphered as ciphertext letter ci by the rule
ci = E(pi) = pi + 3
A full translation chart of the Caesar cipher is shown here.
Plaintext:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Ciphertext: defghijklmnopqrstuvwxyzabc
Using this encryption, the message
SIKKIM MANIPAL UNIVERSITY
would be encoded as
vlnnlp pdqlsdo xqlyhuvlwb
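To make the chart concrete, the following short Python sketch (the function name caesar_encrypt is ours, not from any standard library) applies the shift-of-3 rule: uppercase plaintext goes in, lowercase ciphertext comes out, and the blank separator passes through unchanged.

def caesar_encrypt(plaintext, shift=3):
    # Shift each letter 'shift' places forward in the alphabet.
    # Uppercase plaintext in, lowercase ciphertext out; blanks pass through.
    result = []
    for ch in plaintext.upper():
        if ch.isalpha():
            pos = ord(ch) - ord('A')          # A=0 ... Z=25
            result.append(chr((pos + shift) % 26 + ord('a')))
        else:
            result.append(ch)
    return ''.join(result)

print(caesar_encrypt("SIKKIM MANIPAL UNIVERSITY"))
# prints: vlnnlp pdqlsdo xqlyhuvlwb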
Cryptanalysis of the Caesar Cipher
Let us take a closer look at the result of applying Caesar's encryption technique to
"SIKKIM MANIPAL UNIVERSITY". If we did not know the plaintext and were trying to
guess it, we would have many clues from the ciphertext. For example, the breaks
between the words are preserved in the ciphertext, and double letters are
preserved: the KK is translated to nn. We might also notice that when a letter is
repeated, it maps again to the same ciphertext as it did previously, so the letter I
always translates to l. These clues make this cipher easy to break.
Suppose you are given the following ciphertext message, and you want to try to
determine the original plaintext.
wklv phvvdjh lv qrw wrr kdug wr euhdn

The message has actually been enciphered with a 27-symbol alphabet: A through Z
plus the "blank" character or separator between words. As a start, assume that the
coder was lazy and has allowed the blank to be translated to itself. If your
assumption is true, it is an exceptional piece of information; knowing where the
spaces are allows us to see which are the small words. English has relatively few
small words, such as am, is, to, be, he, we, and, are, you, she, and so on. Therefore,
one way to attack this problem and break the encryption is to substitute known short
words at appropriate places in the ciphertext until you have something that seems
to be meaningful. Once the small words fall into place, you can try substituting for
matching characters at other places in the ciphertext.
Look again at the ciphertext you are decrypting. There is a strong clue in the
repeated r of the word wrr. You might use this text to guess at three-letter words
that you know. For instance, two very common three-letter words having the pattern
xyy are see and too; other less common possibilities are add, odd, and off. (Of
course, there are also obscure possibilities like woo or gee, but it makes more sense
to try the common cases first.) Moreover, the combination wr appears in the
ciphertext, too, so you can determine whether the first two letters of the three-letter
word also form a two-letter word.
For instance, if wrr is SEE, wr would have to be SE, which is unlikely. However, if wrr
is TOO, wr would be TO, which is quite reasonable. Substituting T for w and O for r,
the message becomes
wklv phvvdjh lv qrw wrr kdug wr euhdn
T--- ------- -- -OT TOO ---- TO -----
The -OT could be cot, dot, got, hot, lot, not, pot, rot, or tot; a likely choice is not.
Unfortunately, q = N does not give any more clues because q appears only once in
this sample.
The word lv is also the end of the word wklv, which probably starts with T. Likely
two-letter words that can also end a longer word include so, is, in, etc. However, so
is unlikely because the form T-SO is not recognizable; IN is ruled out because of the
previous assumption that q is N. A more promising alternative is to substitute IS for
lv throughout, and continue to analyze the message in that way.
By now, you might notice that the ciphertext letters uncovered are just three
positions away from their plaintext counterparts. You (and any experienced
cryptanalyst) might try that same pattern on all the unmatched ciphertext. The
completion of this decryption is left as an exercise.
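Because there are only 25 possible nonzero shifts, an even more direct attack is to try them all and read off the one that makes sense. The following Python sketch (the helper name shift_decrypt is ours) does exactly that for the ciphertext above.

def shift_decrypt(ciphertext, shift):
    # Undo a Caesar shift: lowercase ciphertext in, uppercase plaintext out.
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            pos = ord(ch) - ord('a')
            out.append(chr((pos - shift) % 26 + ord('A')))
        else:
            out.append(ch)                    # leave the blank separator alone
    return ''.join(out)

ciphertext = "wklv phvvdjh lv qrw wrr kdug wr euhdn"
for shift in range(1, 26):
    print(shift, shift_decrypt(ciphertext, shift))
# One of the 25 candidate lines reads as plain English; that line is the plaintext.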
The cryptanalysis described here is ad hoc, using deduction based on guesses
instead of solid principles. But you can take a more methodical approach,
considering which letters commonly start words, which letters commonly end words,

and which prefixes and suffixes are common. Cryptanalysts have compiled lists of
common prefixes, common suffixes, and words having particular patterns. (For
example, sleeps is a word that follows the pattern abccda.) In the next section, we
look at a different analysis technique.

2. Data Encryption Standard (DES)


The most widely used encryption scheme is based on the Data Encryption Standard (DES), a
system developed for the U.S. government and intended for use by the general public. It has
been officially accepted as a cryptographic standard both in the United States and abroad, and
many hardware and software systems have been designed around the DES. The standard
specifies an algorithm to be implemented in electronic hardware devices and used for the
cryptographic protection of computer data. Encrypting data converts it to an unintelligible form
called cipher; decrypting cipher converts the data back to its original form. The algorithm
specifies both enciphering and deciphering operations, which are based on a binary number
called a key. Data can be recovered from cipher only by using exactly the same key used to
encipher it. Recently, however, the adequacy of the DES has been questioned.
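As a small illustration of the point that the same binary key both enciphers and deciphers, the sketch below uses the third-party pycryptodome package (an assumption; the standard itself defines only the algorithm, not any particular API), with ECB mode chosen purely for brevity.

# Illustration only: assumes pycryptodome is installed (pip install pycryptodome).
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                           # a DES key is a 64-bit binary number
plaintext = b"SIKKIM MANIPAL UNIVERSITY"

cipher = DES.new(key, DES.MODE_ECB)         # ECB used here only to keep the example short
ciphertext = cipher.encrypt(pad(plaintext, DES.block_size))

# Only exactly the same key recovers the original data.
decipher = DES.new(key, DES.MODE_ECB)
recovered = unpad(decipher.decrypt(ciphertext), DES.block_size)
assert recovered == plaintext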
In the early 1970s, the U.S. National Bureau of Standards (NBS) recognized that the general
public needed a secure encryption technique for protecting sensitive information. Historically,
the U.S. Department of Defense and the Department of State had continuing interest in
encryption systems; it was thought that these departments were home to the greatest expertise
in cryptology. However, precisely because of the sensitive nature of the information they were
encrypting, the departments could not release any of their work. Thus, the responsibility for a
more public encryption technique was delegated to the NBS.
At the same time, several private vendors had developed encryption devices, using either
mechanical means or programs that individuals or firms could buy to protect their sensitive
communications. The difficulty with this commercial proliferation of encryption techniques was
exchange: Two users with different devices could not exchange encrypted information.
Furthermore, there was no independent body capable of testing the devices extensively to verify
that they properly implemented their algorithms.
It soon became clear that encryption was ripe for assessment and standardization, to promote
the ability of unrelated parties to exchange encrypted information and to provide a single
encryption system that could be rigorously tested and publicly certified. As a result, in 1972 the
NBS issued a call for proposals for producing a public encryption algorithm. The call specified
desirable criteria for such an algorithm: it should be available to all users, provide a high level of
security, be efficient to use, be completely specified and easy to understand, be able to be
validated, be exportable, and be publishable, so that its security does not depend on the secrecy
of the algorithm; it should also be adaptable for use in diverse applications and economical to
implement in electronic devices.
The NBS envisioned providing the encryption as a separate hardware device. To allow the
algorithm to be public, NBS hoped to reveal the algorithm itself, basing the security of the
system on the keys (which would be under the control of the users).
Few organizations responded to the call, so the NBS issued a second announcement in August
1974. The most promising suggestion was the Lucifer algorithm on which IBM had been working
for several years.
The data encryption algorithm developed by IBM for NBS was based on Lucifer, and
it became known as the Data Encryption Standard (DES), although its proper name
is DEA (Data Encryption Algorithm) in the United States and DEA1 (Data Encryption
Algorithm-1) in other countries. Then, NBS called on the Department of Defense
through its National Security Agency (NSA) to analyze the strength of the encryption

algorithm. Finally, the NBS released the algorithm for public scrutiny and discussion.
3. Public Key Encryption
In a public key or asymmetric encryption system, each user has two keys: a public key and a
private key. The user may publish the public key freely because each key does only half of the
encryption and decryption process. The keys operate as inverses, meaning that one key undoes
the encryption provided by the other key.
To see how, let kPRIV be a user's private key, and let kPUB be the corresponding public key. Then
plaintext encrypted with the public key is decrypted by applying the private key; we write the
relationship as
P = D(kPRIV, E(kPUB, P))
That is, a user can decode with a private key what someone else has encrypted with the
corresponding public key. Furthermore, with some public key encryption algorithms, including
RSA, we have this relationship:
P = D(kPUB, E(kPRIV, P))
In other words, a user can encrypt a message with a private key, and the message can be
revealed only with the corresponding public key. These two properties tell you that public and
private keys can be applied in either order. In particular, the decryption function D can be
applied to any argument so that we can decrypt before we encrypt. With conventional
encryption, we seldom think of decrypting before encrypting. But the concept makes sense with
public keys, where it simply means applying the private transformation first and then the public
one.
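These two relationships can be seen directly in a toy RSA computation. The sketch below uses deliberately tiny textbook primes (real keys are 2048 bits or more, with proper padding), so it illustrates only that the two keys are inverses and can be applied in either order.

# Toy RSA: tiny primes, no padding; illustration only (requires Python 3.8+ for pow(e, -1, phi)).
p, q = 61, 53
n = p * q                          # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                             # public exponent: kPUB is (e, n)
d = pow(e, -1, phi)                # private exponent: kPRIV is (d, n)

P = 42                             # a plaintext block, encoded as a number smaller than n

# P = D(kPRIV, E(kPUB, P)): encrypt with the public key, decrypt with the private key
assert pow(pow(P, e, n), d, n) == P

# P = D(kPUB, E(kPRIV, P)): apply the private key first, then the public key
assert pow(pow(P, d, n), e, n) == P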

You have noted that a major problem with secret (symmetric) keys is the sheer number of keys a
single user has to store and track. With public keys, only two keys are needed per user: one public and
one private. Let us see what difference this makes in the number of keys needed. Suppose we
have three users, B, C, and D, who must pass protected messages to user A as well as to each
other. Since each distinct pair of users needs a key, each user would need three different keys;
for instance, A would need a key for B, a key for C, and a key for D. But using public key
encryption, each of B, C, and D can encrypt messages for A by using A's public key. If B has
encrypted a message using A's public key, C cannot decrypt it, even if C knew it was encrypted
with A's public key. Applying A's public key twice, for example, would not decrypt the message.
(We assume, of course, that A's private key remains secret.) Thus, the number of keys needed
in the public key system is relatively small.
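The difference in scale is easy to quantify: with conventional secret keys, each distinct pair of n users needs its own key, or n(n-1)/2 keys in total, whereas a public key system needs only 2n keys (one pair per user). A few lines of Python make the comparison explicit.

# Key counts for n users: one shared secret key per pair vs. one key pair per user.
def symmetric_keys(n):
    return n * (n - 1) // 2

def public_key_keys(n):
    return 2 * n

for n in (4, 10, 100):
    print(n, symmetric_keys(n), public_key_keys(n))
# 4 users:   6 vs. 8
# 10 users:  45 vs. 20
# 100 users: 4950 vs. 200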
4. Digital Signature
Another typical situation parallels a common human need: an order to transfer funds from one
person to another. In other words, we want to be able to send electronically the equivalent of a
computerized check. We understand how this transaction is handled in the conventional, paper
mode:
A check is a tangible object authorizing a financial transaction.
The signature on the check confirms authenticity, since (presumably) only the legitimate signer can produce that signature.
In the case of an alleged forgery, a third party can be called in to judge authenticity.
Once a check is cashed, it is canceled so that it cannot be reused.
The paper check is not alterable, or at least most forms of alteration are easily detected.
Transacting business by check depends on tangible objects in a prescribed form. But tangible
objects do not exist for transactions on computers. Therefore, authorizing payments by
computer requires a different model. Let us consider the requirements of such a situation, both
from the standpoint of a bank and from the standpoint of a user.
Suppose Sandy sends her bank a message authorizing it to transfer $100 to Tim.
Sandy's bank must be able to verify and prove that the message really came from
Sandy if she should later disavow sending it. The bank also wants to
know that the message is entirely Sandy's, that it has not been altered along the
way. On her part, Sandy wants to be certain that her bank cannot forge such
messages. Both parties want to be sure that the message is new, not a reuse of a
previous message, and that it has not
been altered during transmission. Using electronic signals instead of paper complicates this
process.
But we have ways to make the process work. A digital signature is a protocol that produces the
same effect as a real signature: It is a mark that only the sender can make, but other people can
easily recognize as belonging to the sender. Just like a real signature, a digital signature is used
to confirm agreement to a message.
Properties of Digital Signature
A digital signature must meet two primary conditions:
It must be authentic. If a person R receives the pair [M, S(P,M)] purportedly from P, R can
check that the signature is really from P. Only P could have created this signature, and the
signature is firmly attached to M.
It must be unforgeable. If person P signs message M with signature S(P,M), it is impossible
for anyone else to produce the pair [M, S(P,M)].
These two requirements, shown in figure 5.2, are the major hurdles in computer transactions.
Two more properties, also drawn from parallels with the paper-based environment, are desirable
for transactions completed with the aid of digital signatures:
It is not alterable. After being transmitted, M cannot be changed by S, R, or an interceptor.
It is not reusable. A previous message presented again will be instantly detected by R.
To see how digital signatures work, we first present a mechanism that meets the
first two requirements. Then, we add to that solution to satisfy the other
requirements.
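One common way to realize such a mechanism is with public key cryptography: the signer applies the private key, and anyone can check the result with the public key. The sketch below uses the third-party cryptography package with RSA and PSS padding as one concrete (assumed) choice; the message text is illustrative.

# Illustration only: assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# P's key pair: the private key produces signatures, the public key checks them.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

M = b"Transfer $100 to Tim"

# S(P, M): only the holder of the private key can produce this value.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(M, pss, hashes.SHA256())

# R checks the pair [M, S(P, M)]; verify() raises InvalidSignature if either
# the message or the signature has been altered.
public_key.verify(signature, M, pss, hashes.SHA256())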
5. Security Policy
The security policy determines the security services afforded to a packet. As mentioned earlier,
all IPSec implementations store the policy in a database called the SPD. The database is
indexed by selectors and contains the information on the security services offered to an IP
packet.

The security policy is consulted for both inbound and outbound processing of the IP packets. On
inbound or outbound packet processing, the SPD is consulted to determine the services
afforded to the packet. A separate SPD can be maintained for the inbound and the outbound
packets to support asymmetric policy, that is, providing different security services for inbound
and outbound packets between two hosts. However, the key management protocol always
negotiates bidirectional SAs. In practice, the tunneling and nesting will be mostly symmetric.
For the outbound traffic, the output of the SA lookup in the SADB is a pointer to the SA or SA
bundle, provided the SAs are already established. The SAs in the bundle are applied to the
outbound packet in the order specified in the policy. If the SAs are not established, the key
management protocol is invoked to establish them. For the inbound traffic, the packet is
first afforded security processing; the SPD is then indexed by the selector to validate the policy
on the packet.
The security policy requires policy management to add, delete, and modify policy. The SPD is
stored in the kernel and IPSec implementations should provide an interface to manipulate the
SPD. This management of SPD is implementation specific and there is no standard defined.
However, the management application should provide the ability to handle all the fields defined
in the selectors that are discussed below.
To determine the security services afforded to a packet, selectors extracted from the network
and transport layer header fields are used. These selectors are:
Source Address: The source address can be a wild card, an address range, a network prefix,
or a specific host. A wild card is particularly useful when the policy is the same for all the packets
originating from a host. The network prefix and address range are used by security gateways
providing security to the hosts behind them and to build VPNs. A specific host is used either on a
multihomed host or in gateways when a host's security requirements are specific.
Destination Address: The destination address can be a wild card, an address range, a network
prefix, or a specific host. The first three are used for hosts behind secure gateways. The
destination address field used as a selector is different from the destination address used to
look up SAs in the case of tunneled IP packets. In the case of tunneled IP packets, the
destination IP address of the outer header can be different from that of the inner header when
the packets are tunneled. However, the policy in the destination gateway is set based on the
actual destination and this address is used to index into the SPD.
Name: The name field is used to identify a policy tied to a valid user or system
name. These include a DNS name, Distinguished Name, or other name types
defined in the IPSec DOI. The name field is used as a selector only during the IKE
negotiation, not during the packet processing. This field cannot be used as a
selector during packet processing as there is no way to tie an IP address to a name
presently.
Protocol: The protocol field specifies the transport protocol whenever the transport protocol is
accessible. In many cases, when ESP is used the transport protocol is not accessible. Under
these circumstances, a wild card is used.
Upper Layer Ports: In cases where there is session-oriented keying, the upper
layer ports represent the source and destination ports to which the policy is
applicable. The wild card is used when the ports are inaccessible.
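To show how the selectors drive an SPD lookup, here is a highly simplified, purely illustrative Python model; the Selector and Policy classes and their field names are invented for this sketch, and real implementations keep the SPD in the kernel in their own formats.

# Simplified SPD model: not any real IPSec implementation's interface.
from dataclasses import dataclass
from ipaddress import ip_network, ip_address

@dataclass
class Selector:
    src: str = "0.0.0.0/0"          # wild card by default
    dst: str = "0.0.0.0/0"
    protocol: str = "*"             # "*" acts as the wild card
    dst_port: str = "*"

@dataclass
class Policy:
    selector: Selector
    action: str                     # e.g. "apply-esp-tunnel", "bypass", "discard"

def spd_lookup(spd, src, dst, protocol, dst_port):
    # Return the first policy whose selector matches the packet's fields.
    for policy in spd:
        s = policy.selector
        if (ip_address(src) in ip_network(s.src)
                and ip_address(dst) in ip_network(s.dst)
                and s.protocol in ("*", protocol)
                and s.dst_port in ("*", str(dst_port))):
            return policy
    return None                     # no matching policy: typically discard

spd = [Policy(Selector(src="10.0.0.0/8", dst="192.0.2.10/32", protocol="tcp"),
              action="apply-esp-tunnel")]
print(spd_lookup(spd, "10.1.2.3", "192.0.2.10", "tcp", 443).action)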

6. Wireless Transport Layer Security (WTLS)

Wireless Transport Layer Security (WTLS) is the security level for Wireless
Application Protocol (WAP) applications. Based on Transport Layer Security (TLS) v1.0
(a security layer used in the Internet, equivalent to Secure Socket Layer 3.1),
WTLS was developed to address the problematic issues surrounding mobile
network devices - such as limited processing power and memory capacity,
and low bandwidth - and to provide adequate authentication, data integrity,
and privacy protection mechanisms.
Wireless transactions, such as those between a user and their bank, require
stringent authentication and encryption to protect the communication from
attack during data transmission. Because mobile networks do not provide
end-to-end security, TLS had to be modified to
address the special needs of wireless users. Designed to support datagrams
in a high latency, low bandwidth environment, WTLS provides an optimized
handshake through dynamic key refreshing, which allows encryption keys to
be regularly updated during a secure session.
