
INTRODUCTION

A transistor is a semiconductor device used to amplify and switch electronic signals. It is made of a solid piece of semiconductor material, with at least
three terminals for connection to an external circuit. A voltage or current
applied to one pair of the transistor's terminals changes the current flowing
through another pair of terminals. Because the controlled (output) power can
be much more than the controlling (input) power, the transistor provides
amplification of a signal. Today, some transistors are packaged individually,
but many more are found embedded in integrated circuits.

The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. Following its release
in the early 1950s the transistor revolutionized the field of electronics, and
paved the way for smaller and cheaper radios, calculators, and computers,
amongst other things.

The transistor is the key active component in practically all modern electronics, and is considered by many to be one of the greatest inventions
of the twentieth century. Its importance in today's society rests on its ability
to be mass produced using a highly automated process (semiconductor
device fabrication) that achieves astonishingly low per-transistor costs.

A transistor computer was a computer that used transistors instead of vacuum tubes. The "first generation" of electronic computers used vacuum tubes, which generated large amounts of heat, were bulky, and were unreliable. A "second generation" of computers, through the late 1950s and 1960s, featured boards filled with individual transistors and magnetic memory cores. These machines remained the mainstream design into the late 1960s, when integrated circuits started appearing and led to the "third generation" machines.

The property of the transistor, being able to switch between two different
states (on-off) is very important for a computer's function. In a computer the
transistor can be made to switch between two binary states called 0 and 1.
The transistor is used by the computer to do calculations and other operations. In today's complex computers there are millions, even billions, of transistors.
In a computer it is not present as a single isolated item, instead it is part of
something that is called an integrated circuit.
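To make the on/off idea concrete, here is a minimal Python sketch (the function name and the four-switch example are purely illustrative) that treats each transistor as a switch whose state encodes one bit, and reads a row of such switches as a binary number.

# Minimal sketch: each "transistor" is modelled as a switch that is either
# on (1) or off (0), and a row of such switches encodes a binary number.

def bit(switch_is_on: bool) -> int:
    """An idealized transistor switch: on -> 1, off -> 0."""
    return 1 if switch_is_on else 0

switch_states = [True, False, True, True]       # on, off, on, on (most significant bit first)
bits = [bit(s) for s in switch_states]          # [1, 0, 1, 1]

value = int("".join(str(b) for b in bits), 2)   # read the switch pattern as a binary number
print(bits, "=", value)                         # [1, 0, 1, 1] = 11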

If you ask someone who lived during the late 1950s or 1960s what they
associated with the transistor, there is a good chance they’ll say “transistor
radio.” And with good reason. The transistor radio revolutionized the way
people listened to music, because it made radios smaller and portable. But,
nice as a hand-held radio is, the real transistor revolution was taking place in
the field of computers.

In a computer, the transistor is usually used as a switch rather than an amplifier. Thousands and later tens of thousands of these switches were
needed to make up the complicated logic circuits that allowed computers to
compute. Unlike the earlier electron tubes (often called vacuum tubes),
transistors allowed the design of much smaller, more reliable computers—
they also addressed the seemingly insatiable need for speed.

The speed at which a computer can perform calculations depends heavily on the speed at which transistors can switch from “on” to “off.” In other words,
the faster the transistors, the faster the computer. Researchers found that
making transistors switch faster required that the transistors themselves be
smaller and smaller, because of the way electrons move around in
semiconductors—if there is less material to move through, the electrons can
move faster. By the 1970s, mass-production techniques allowed nearly
microscopic transistors to be produced by the thousands on round silicon
wafers. These were cut up into individual pieces and mounted inside a
package for easier handling and assembly. The packaged, individual
transistors were then wired into circuits along with other components such
as resistors and capacitors.

EVOLUTION OF COMPUTER SYSTEM

The history of computer development is often discussed in terms of the different generations of computing devices. Each generation is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices.

First Generation (1940-1956) Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for
memory, and were often enormous, taking up entire rooms. They were very
expensive to operate and in addition to using a great deal of electricity,
generated a lot of heat, which was often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations,
and they could only solve one problem at a time. Input was based on
punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercially available computer, delivered to its first customer, the U.S. Census Bureau, in 1951.

Second Generation (1956-1963) Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread
use in computers until the late 1950s. The transistor was far superior to the
vacuum tube, allowing computers to become smaller, faster, cheaper, more
energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat that subjected the
computer to damage, it was a vast improvement over the vacuum tube.
Second-generation computers still relied on punched cards for input and
printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify
instructions in words. High-level programming languages were also being
developed at this time, such as early versions of COBOL and FORTRAN.
These were also the first computers that stored their instructions in their
memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy
industry.

Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third
generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductor chips, which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with
an operating system, which allowed the device to run many different
applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because
they were smaller and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer—from the central processing unit and memory to input/output controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984
Apple introduced the Macintosh. Microprocessors also moved out of the
realm of desktop computers and into many areas of life as more and more
everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the
Internet. Fourth generation computers also saw the development of GUIs, the
mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice
recognition, that are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum
computation and molecular and nanotechnology will radically change the
face of computers in years to come. The goal of fifth-generation computing is
to develop devices that respond to natural language input and are capable of
learning and self-organization.

HISTORY OF TRANSISTOR IN COMPUTER SYSTEM

The computers built in the 1950s and 1960s are considered second-generation computers. These computers made use of transistors, invented at Bell Telephone Laboratories, and they had many of the same components as the modern-day computer. For instance, second-generation computers typically had a printer, some sort of tape or disk storage, operating systems, stored programs, as well as some sort of memory. These computers were also generally more reliable and were solid in design.

A Three Terminal Device

Figure: Vacuum tubes were made containing several three-terminal devices called triodes.

The transistor is a three terminal, solid state electronic device. In a three terminal device we can control electric current or voltage between two of the
terminals by applying an electric current or voltage to the third terminal. This
three terminal character of the transistor is what allows us to make an
amplifier for electrical signals, like the one in our radio. With the three-
terminal transistor we can also make an electric switch, which can be
controlled by another electrical switch. By cascading these switches
(switches that control switches that control switches, etc.) we can build up
very complicated logic circuits.
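As a hedged illustration of this cascading idea (a sketch, not a description of any particular chip), the Python snippet below models a pair of transistor switches in series as an idealized NAND gate and then cascades NAND gates into an XOR function.

# Sketch of "switches controlling switches": an idealized NAND gate built from
# two transistor-like switches in series, then cascaded into an XOR gate.

def nand(a: int, b: int) -> int:
    # Two switches in series pull the output low only when both are on;
    # otherwise the output stays high.
    return 0 if (a == 1 and b == 1) else 1

def xor(a: int, b: int) -> int:
    # XOR built purely by cascading NAND gates.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))    # prints the XOR truth table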

These logic circuits can be built very compactly on a silicon chip, with around 1,000,000 transistors per square centimeter. We can turn them on and off very rapidly, switching as often as once every 0.000000001 seconds (one nanosecond). Such logic chips are at the heart of your personal computer and many other gadgets you use today.
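As a quick back-of-the-envelope check of those two figures, the small Python calculation below (assuming a 1 square-centimeter chip) converts the quoted switching time and transistor density into a switching rate and a transistor count.

# Back-of-the-envelope arithmetic using the figures quoted above.
switching_time_s = 1e-9        # 0.000000001 s, i.e. one nanosecond
density_per_cm2 = 1_000_000    # transistors per square centimeter
chip_area_cm2 = 1.0            # assumed chip area of 1 square centimeter

switches_per_second = 1 / switching_time_s
transistors_on_chip = density_per_cm2 * chip_area_cm2

print(f"{switches_per_second:.0e} switching operations per second")   # 1e+09
print(f"{transistors_on_chip:.0e} transistors on the chip")           # 1e+06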

Light Bulbs and Vacuum Tubes

The transistor was not the first three terminal device. The vacuum tube
triode preceded the transistor by nearly 50 years. Vacuum tubes played an
important role in the emergence of home electronics and in the scientific
discoveries and technical innovations which are the foundation for our
modern electronic technology.

Thomas Edison's light bulb was one of the first uses of vacuum tubes for
electrical applications. Soon after the discovery of the light bulb, a third
electrode was placed in the vacuum tube to investigate the effect that this
electrode would have on "cathode rays," which were observed around the
filament of the light bulb.

Joseph John Thomson developed a vacuum tube to carefully investigate the nature of cathode rays, which resulted in his discovery, published in 1897.
He showed that the cathode rays were really made up of particles, or
"corpuscles" as Thomson called them, that were contained in all material.
Thomson had discovered the electron, for which he received the Nobel Prize
in Physics 1906.

Lee De Forest and The Radio

Figure: Radio brought information rapidly to the masses and was the first widely used electronic device in the home.

At the same time as physicists were trying to understand what cathode rays
were, engineers were trying to apply them to make electronic devices. In
1906, an American inventor and physicist, Lee De Forest, made the vacuum
tube triode, or audion as he called it. The triode was a three terminal device
that allowed him to make an amplifier for audio signals, making AM radio
possible. Radio revolutionized the way in which information and
entertainment reached the great majority of people.

The vacuum tube triode also helped push the development of computers
forward a great deal. Electronic tubes were used in several different
computer designs in the late 1940's and early 1950's. But the limits of these
tubes were soon reached. As the electric circuits became more complicated,
one needed more and more triodes. Engineers packed several triodes into
one vacuum tube (that is why the tube has so many legs) to make the tube
circuits more efficient.

Early Computers

Figure: The largest computers based on vacuum tubes had racks and racks of tubes filling large rooms.

The vacuum tubes tended to leak, and the metal that emitted electrons in
the vacuum tubes burned out. The tubes also required so much power that
big and complicated circuits were too large and took too much energy to run.
In the late 1940's, big computers were built with over 10,000 vacuum tubes
and occupied over 93 square meters of space.

The problems with vacuum tubes led scientists and engineers to think of
other ways to make three terminal devices. Instead of using electrons in
vacuum, scientists began to consider how one might control electrons in
solid materials, like metals and semiconductors.

As early as the 1920s, scientists understood how to make a two terminal device by making a point contact between a sharp metal tip and a piece of
semiconductor crystal. These point-contact diodes were used to rectify
signals (change oscillating signals to steady signals), and make simple AM
radio receivers (crystal radios). However, it took many years before the three
terminal solid state device - the transistor - was discovered.

The First Transistor

Figure: The first point-contact transistor made use of the semiconductor germanium. Paper clips and razor blades were used to make the device.

In 1947, John Bardeen and Walter Brattain, working at Bell Telephone Laboratories, were trying to understand the nature of the electrons at the
interface between a metal and a semiconductor. They realized that by
making two point contacts very close to one another, they could make a
three terminal device - the first "point contact" transistor.

They quickly made a few of these transistors and connected them with some
other components to make an audio amplifier. This audio amplifier was
shown to chief executives at Bell Telephone Company, who were very
impressed that it didn't need time to "warm up" (like the heaters in vacuum
tube circuits). They immediately realized the power of this new technology.

This invention was the spark that ignited a huge research effort in solid state
electronics. Bardeen and Brattain received the Nobel Prize in Physics, 1956,
together with William Shockley, "for their researches on semiconductors and
their discovery of the transistor effect." Shockley had developed a so-called
junction transistor, which was built on thin slices of different types of
semiconductor material pressed together. The junction transistor was easier
to understand theoretically, and could be manufactured more reliably.

Limits of Individual Transistors


Figure: Individual electronic components were soldered onto printed circuit boards.

For many years, transistors were made as individual electronic components and were connected to other electronic components (resistors, capacitors,
inductors, diodes, etc.) on boards to make an electronic circuit. They were
much smaller than vacuum tubes and consumed much less power. Electronic
circuits could be made more complex, with more transistors switching faster
than tubes.

However, it did not take long before the limits of this circuit construction
technique were reached. Circuits based on individual transistors became too
large and too difficult to assemble. There were simply too many electronic
components to deal with. Because transistor circuits switched faster than vacuum tube circuits, time delays for electric signals to propagate across these large circuits became a noticeable problem. To make the circuits even faster, one needed to pack the transistors closer and closer together.

The Integrated Circuit

Figure: Integrated circuits placed all components on one chip, drastically reducing the size of the circuit and its components.

In 1958 and 1959, Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor came up with a solution to the problem of large numbers
of components, and the integrated circuit was developed. Instead of making
transistors one-by-one, several transistors could be made at the same time,
on the same piece of semiconductor. Not only transistors, but other electric
components such as resistors, capacitors and diodes could be made by the
same process with the same materials.

For more than 30 years, since the 1960s, the number of transistors per unit area has been doubling roughly every 1.5 to 2 years. This fantastic progression of circuit fabrication is known as Moore's law, after Gordon Moore, one of the early integrated circuit pioneers and a founder of Intel Corporation. A share of the Nobel Prize in Physics 2000 was awarded to Jack Kilby for his part in the invention of the integrated circuit.
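The doubling rule can be written as N(t) = N0 * 2^(t / T), where T is the doubling period. The Python sketch below projects transistor counts under that rule; the starting count and the two-year doubling period are illustrative assumptions rather than historical data.

# Sketch of Moore's law: the transistor count doubles every T years.
# The starting count and doubling period below are illustrative assumptions.

def transistor_count(years_elapsed: float,
                     initial_count: float = 2_300,        # e.g. an early-1970s chip
                     doubling_period_years: float = 2.0) -> float:
    return initial_count * 2 ** (years_elapsed / doubling_period_years)

for years in (0, 10, 20, 30, 40):
    print(years, "years:", f"{transistor_count(years):,.0f}", "transistors")

With these assumptions, forty years of doubling takes a chip from a few thousand transistors to roughly two billion, which is consistent with the microprocessor transistor counts quoted later in this report.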

TECHNOLOGY/COMPONENT

The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such
improvement came with the advent of the transistor. Transistorized CPUs
during the 1950s and 1960s no longer had to be built out of bulky,
unreliable, and fragile switching elements like vacuum tubes and electrical
relays. With this improvement more complex and reliable CPUs were built
onto one or several printed circuit boards containing discrete (individual)
components.

During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number
of transistors to be manufactured on a single semiconductor-based die, or
"chip." At first only very basic non-specialized digital circuits such as NOR
gates were miniaturized into ICs. CPUs based upon these "building block" ICs
are generally referred to as "small-scale integration" (SSI) devices. SSI ICs,
such as the ones used in the Apollo guidance computer, usually contained
transistor counts numbering in multiples of ten. To build an entire CPU out of
SSI ICs required thousands of individual chips, but still consumed much less
space and power than earlier discrete transistor designs. As microelectronic
technology advanced, an increasing number of transistors were placed on
ICs, thus decreasing the quantity of individual ICs needed for a complete
CPU. MSI and LSI (medium- and large-scale integration) ICs increased
transistor counts to hundreds, and then thousands.

In 1964 IBM introduced its System/360 computer architecture, which was used in a series of computers that could run the same programs with
different speed and performance. This was significant at a time when most
electronic computers were incompatible with one another, even those made
by the same manufacturer. To facilitate this improvement, IBM utilized the
concept of a microprogram (often called "microcode"), which still sees
widespread usage in modern CPUs. The System/360 architecture was so
popular that it dominated the mainframe computer market for decades and
left a legacy that is still continued by similar modern computers like the IBM
zSeries. Around the same time, Digital Equipment Corporation (DEC) introduced the PDP-8, another influential computer aimed at the scientific and research markets. DEC would later introduce the extremely popular PDP-11
line that originally was built with SSI ICs but was eventually implemented
with LSI components once these became practical. In stark contrast with its
SSI and MSI predecessors, the first LSI implementation of the PDP-11
contained a CPU composed of only four LSI integrated circuits.

Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power
consumption, transistors also allowed CPUs to operate at much higher
speeds because of the short switching time of a transistor in comparison to a
tube or relay. Thanks to both the increased reliability as well as the
dramatically increased speed of the switching elements (which were almost
exclusively transistors by this time), CPU clock rates in the tens of megahertz
were obtained during this period. Additionally, while discrete transistor and IC
CPUs were in heavy usage, new high-performance designs like SIMD (Single
Instruction Multiple Data) vector processors began to appear. These early
experimental designs later gave rise to the era of specialized
supercomputers like those made by Cray Inc.

APPLICATION OF TRANSISTOR IN COMPUTER

The notion of an integrated circuit was there. But ten years were to pass
from the invention of the transistor before the technology involved had
matured sufficiently to allow the various elements to be fabricated in one
and the same basic material and in one piece. The invention is one in a
series of many that have made possible the great development of
information technology. The integrated circuit is still, after 40 years, in a
dynamic phase of development with no sign of flagging.

Transistors are vital for digital circuits to work. These components are used
as very fast switches in digital logic circuits. Transistors are normally so
small that many millions fit on one processing chip on a computer
motherboard. The types of transistors used in school projects are normally
large enough to fit on the end of a small finger. However, the way they
switch on and off is the same. When a transistor is switched on it produces a ‘1’ and when it is switched off it produces a ‘0’. Transistors in the circuit of a computer microprocessor can switch on and off billions of times per second. Without the invention of the transistor,
computer processing power would be very limited and slow.

Two basic examples of simple transistor driven logic (AND / OR) circuits are
shown below.

This is an AND gate circuit and it can be made quite easily. The example
shown is built from a modular electronics kit. Both switches ‘A’ and ‘B’ must
be pressed together for the bulb to light.
If you construct this circuit, you may need to alter the value of the resistors.
This will depend on the type of transistors used and whether a bulb or an
LED is used.
This is an OR gate circuit. Either switch ‘A’ or ‘B’ must be pressed for the
bulb to light. The switches do not have to be pressed together.
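Since the photographs of the kit-built circuits cannot be reproduced here, the Python sketch below models the same behaviour in an idealized way: an AND gate acts like two transistor switches in series (both must conduct for the bulb to light), while an OR gate acts like two switches in parallel (either one is enough).

# Illustrative model of the two circuits described above.
# AND: two transistor switches in series -> the bulb lights only if both are on.
# OR:  two transistor switches in parallel -> the bulb lights if either is on.

def and_gate(switch_a: bool, switch_b: bool) -> bool:
    return switch_a and switch_b    # series path conducts only if both switches are on

def or_gate(switch_a: bool, switch_b: bool) -> bool:
    return switch_a or switch_b     # either parallel path is enough to light the bulb

for a in (False, True):
    for b in (False, True):
        print(f"A={a!s:5} B={b!s:5}  AND bulb={and_gate(a, b)!s:5}  OR bulb={or_gate(a, b)}")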

FUNCTION/TASK
The transistor is the key active component in practically all modern electronics, and is
considered by many to be one of the greatest inventions of the twentieth century. Its importance
in today's society rests on its ability to be mass produced using a highly automated process
(semiconductor device fabrication) that achieves astonishingly low per-transistor costs.

Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors now produced are in integrated circuits (often shortened to ICs, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors, whereas an advanced microprocessor, as of 2009, can use as many as 2.3 billion transistors (MOSFETs).
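Putting those two figures together gives a rough sense of scale; the quick calculation below uses only the numbers quoted above to estimate how many logic gates such a processor could contain.

# Rough estimate using the figures quoted above.
transistors_on_chip = 2_300_000_000    # advanced microprocessor, circa 2009
transistors_per_gate = 20              # upper figure for a single logic gate

approx_gates = transistors_on_chip / transistors_per_gate
print(f"~{approx_gates:,.0f} logic gates")    # ~115,000,000 logic gates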

The transistor's low cost, flexibility, and reliability have made it a ubiquitous device.
Transistorized mechatronic circuits have replaced electromechanical devices in controlling
appliances and machinery. It is often easier and cheaper to use a standard microcontroller and
write a computer program to carry out a control function than to design an equivalent mechanical
control function.

Because of the low cost of transistors and hence digital computers, there is a
trend to digitize information. With digital computers offering the ability to
quickly find, sort and process digital information, more and more effort has
been put into making information digital. As a result, today, much media
data is delivered in digital form, finally being converted and presented in
analog form by computers. Areas influenced by the Digital Revolution include
television, radio, and newspapers.

Comparison with vacuum tubes

Prior to the development of transistors, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment.

Advantages

The key advantages that have allowed transistors to replace their vacuum
tube predecessors in most applications are:

• Small size and minimal weight, allowing the development of miniaturized electronic devices.
• Highly automated manufacturing processes, resulting in low per-unit
cost.
• Lower possible operating voltages, making transistors suitable for
small, battery-powered applications.
• No warm-up period for cathode heaters required after power
application.
• Lower power dissipation and generally greater energy efficiency.
• Higher reliability and greater physical ruggedness.
• Extremely long life. Some transistorized devices produced more than
30 years ago are still in service.
• Complementary devices available, facilitating the design of
complementary-symmetry circuits, something not possible with
vacuum tubes.
• In some types, the ability to conduct current in both directions, even though in most transistors the junctions have different doping levels and geometry.
• Ability to control very large currents, as much as several hundred
amperes.
• Insensitivity to mechanical shock and vibration, thus avoiding the
problem of microphonics in audio applications.
• Greater sensitivity than the hot, macroscopic vacuum tubes.

Computer "chips" consist of millions of transistors and sell for a few dollars, putting per-transistor costs in the thousandths of a cent. The average home might contain a few tens of light bulbs and perhaps 100 metres of paper, things many consider "cheap", but the computer you are using to read this contains millions of transistors.

Bipolar Junction Transistor (BJT)

Conceptually, one can understand a bipolar junction transistor as two diodes placed back to back, connected so they share either their positive or their
negative terminals. The forward-biased emitter-base junction allows charge
carriers to easily flow out of the emitter. The base is made thin enough so
that most of the injected carriers will reach the collector rather than
recombining in the base. Since small changes in the base current affect the
collector current significantly, the transistor can work as an electronic
amplifier. The rate of amplification, usually called the current gain (β), is
roughly one hundred for most types of BJTs. That is, one milliampere of base
current usually induces a collector current of about a hundred milliamperes.
BJTs prevail in all sorts of amplifiers from audio to radio frequency
applications and are also popular as electronic switching devices.
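The relationship described above is simply I_C ≈ β × I_B. The snippet below is an idealized sketch (it ignores saturation, leakage and temperature effects) that reproduces the one-milliampere example.

# Idealized BJT in its active region: the collector current is roughly the
# current gain (beta) times the base current. Saturation, leakage and
# temperature effects are ignored in this sketch.

def collector_current_ma(base_current_ma: float, beta: float = 100.0) -> float:
    return beta * base_current_ma

print(collector_current_ma(1.0))     # 1 mA of base current -> about 100 mA
print(collector_current_ma(0.25))    # 0.25 mA -> about 25 mA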

Field-Effect Transistor (FET)

The most common variety of field-effect transistor, the enhancement-mode MOSFET (metal-oxide-semiconductor field-effect transistor), can also be
viewed as two back-to-back diodes that separate the source and drain
terminals. The volume in between is covered by an extremely thin insulating
layer that carries the gate electrode. When a voltage is applied between gate
and source, an electric field is created in that volume, causing a thin
conductive channel to form between the source and drain and allowing
current to flow across. The amount of this current can be modulated, or
completely turned off, by varying the gate voltage. Because the gate is
insulated, no DC current flows to or from the gate electrode. This lack of a
gate current (as compared to the BJT's base current), and the ability of the
MOSFET to act like a switch, allows particularly efficient digital circuits to be
created. Hence, MOSFETs have become the dominant technology used in
computing hardware such as microprocessors and memory devices such as
RAM.
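A heavily simplified way to capture that switching behaviour is to treat the enhancement-mode MOSFET as an ideal switch: no conduction until the gate-source voltage exceeds a threshold, conduction above it. The Python sketch below models only that on/off behaviour; the 0.7 V threshold is an illustrative assumption.

# Extremely simplified enhancement-mode MOSFET model: the device acts as an
# open switch below the threshold voltage and as a closed switch above it.
# Real devices turn on gradually and have channel resistance and other effects.

THRESHOLD_VOLTAGE = 0.7    # volts; illustrative value only

def mosfet_conducts(gate_source_voltage: float,
                    threshold: float = THRESHOLD_VOLTAGE) -> bool:
    return gate_source_voltage > threshold

for vgs in (0.0, 0.5, 1.0, 3.3):
    state = "on (channel formed)" if mosfet_conducts(vgs) else "off (no channel)"
    print(f"Vgs = {vgs:.1f} V -> {state}")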

The most common MOSFET technology in use today is CMOS (complementary metal-oxide semiconductor), which pairs complementary n-channel and p-channel MOSFETs and is the basis for virtually all integrated circuits produced.

CONCLUSION

At first, the computer was not high on the list of potential applications for
this tiny device known as the transistor. This is not surprising—when the first
computers were built in the 1940s and 1950s, few scientists saw in them the
seeds of a technology that would in a few decades come to permeate almost
every sphere of human life. Before the digital explosion, transistors were a
vital part of improvements in existing analog systems, such as radios and
stereos.

When it was placed in computers, however, the transistor became an integral part of the technology boom. Transistors are also capable of being mass-
produced by the millions on a sliver of silicon—the semiconductor chip. It is
this almost boundless ability to integrate transistors onto chips that has
fueled the information age. Today these chips are not just a part of
computers. They are also important in devices as diverse as video cameras,
cellular phones, copy machines, jumbo jets, modern automobiles,
manufacturing equipment, electronic scoreboards, and video games. Without
the transistor there would be no Internet and no space travel.

In the years following its creation, the transistor gradually replaced the
bulky, fragile vacuum tubes that had been used to amplify and switch
signals. The transistor became the building block for all modern electronics
and the foundation for microchip and computer technology.
