SOFTWARE
1. SYSTEM SOFTWARE
• Refers to the operating system and all utility programs that manage
computer resources at a low level.
• The BIOS (basic input/output system) gets the computer system started after
you turn it on and manages the data flow between the operating system and
attached devices such as the hard disk, video adapter, keyboard, mouse, and
printer.
• The boot program loads the operating system into the computer's main
memory or random access memory (RAM).
• System software also includes system utilities, such as the disk defragmenter
and System Restore.
Systems software includes the programs that are dedicated to managing the
computer itself, such as the operating system, file management utilities, and
disk operating system (or DOS). The operating system manages the computer
hardware resources in addition to applications and data. Without systems
software installed in our computers we would have to type the instructions for
everything we wanted the computer to do!
2. PROGRAMMING SOFTWARE
3. APPLICATION SOFTWARE
• A program or group of programs designed for end users
PROGRAMMING LANGUAGE
Low-Level Languages
Low-level computer languages are either machine code or very close to it. A
computer cannot understand instructions given to it in high-level
languages or in English. It can only understand and execute instructions given
in the form of machine language i.e. binary. There are two types of low-level
languages:
● Machine Language: binary instructions that the hardware executes directly
● Assembly Language: a slightly more user-friendly language that corresponds
directly to machine language
• The problem is that the computer doesn't understand the assembly code, so
we need a way to convert it to machine code, which the computer does
understand.
• Assembly language programs are translated into machine language by a
program called an assembler.
High-level computer languages use formats that are similar to English. The
purpose of developing high-level languages was to enable people to write
programs easily, in their own native language environment (English).
High-level languages are basically symbolic languages that use English words
and/or mathematical symbols rather than mnemonic codes. Each instruction in
the high-level language is translated into many machine language instructions
that the computer can understand.
Advantages:
● High-level languages are user-friendly
● They are similar to English and use English vocabulary and well-known
symbols
● They are easier to learn
● They are easier to maintain
● They are problem-oriented rather than ‘machine’-based
● A program written in a high-level language can be translated into many
machine languages and can run on any computer for which there exists an
appropriate translator
● The language is independent of the machine on which it is used, i.e.
programs developed in a high-level language can be run on any computer
Disadvantages:
● A high-level language has to be translated into machine language by a
translator, which takes time
● The object code generated by a translator might be inefficient compared to
an equivalent assembly language program
As stated earlier, any program that is not written in machine language has to
be translated into machine language before it can be executed by the
computer. The means used for translation are themselves computer programs,
known as translators. There are three types of translator programs:
assemblers, compilers and interpreters.
Assembler:
An assembler is a computer program used to translate a program written in
assembly language into machine language. The translated program is called
the object program. The assembler checks each instruction for correctness and
generates diagnostic messages if there are mistakes in the program.
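As an illustration, the translation an assembler performs can be sketched with a toy, made-up instruction set; the mnemonics LOAD and ADD and their opcodes are hypothetical, not a real architecture:

```python
# Toy assembler sketch: translates a made-up two-mnemonic assembly
# language into numeric "machine words". Opcodes are hypothetical.

OPCODES = {"LOAD": 0x01, "ADD": 0x02}  # invented opcode table

def assemble(source):
    """Translate assembly source into a list of machine words (the object
    program), raising a diagnostic on an unknown mnemonic."""
    object_program = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        line = line.strip()
        if not line:
            continue
        mnemonic, operand = line.split()
        if mnemonic not in OPCODES:
            raise SyntaxError(f"line {lineno}: unknown mnemonic {mnemonic!r}")
        # pack the opcode in the high byte, the operand in the low byte
        object_program.append((OPCODES[mnemonic] << 8) | int(operand))
    return object_program

print([hex(word) for word in assemble("LOAD 5\nADD 3")])  # ['0x105', '0x203']
```

A real assembler also resolves labels and symbols over multiple passes; this sketch shows only the mnemonic-to-opcode translation and the diagnostic step.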
Technology is advancing at a rapid pace, and humans are becoming ever more
dependent on it. With every new day, software development becomes more
crucial, since the demand for software is rising fast from every corner
imaginable. Almost every machine we use is driven by some piece of software.
Going by recent market trends, software development in India is a bustling
scene, with India providing a huge share of the software expertise required to
make this process successful.
The process of software development services in India goes through a series
of stages in a stepwise fashion that almost every development company
follows. Known as the ‘software development life cycle,’ these six steps are
planning, analysis, design, development & implementation, testing, and
maintenance. Let’s study each of these steps to see how the software is
developed.
1. Planning: Without a proper plan that weighs the strengths and weaknesses
of the project, software development is aimless. Planning kicks off a project
smoothly and affects its progress positively.
2. Analysis: This step is about analyzing the performance of the software at
various stages and making notes on additional requirements. Analysis is very
important to proceed further to the next step.
3. Design: Once the analysis is complete, the step of designing takes over,
which is basically building the architecture of the project. This step helps
remove possible flaws by setting a standard and attempting to stick to it.
4. Development & Implementation: The actual task of developing the software
starts here with data recording going on in the background. Once the software
is developed, the stage of implementation comes in where the product goes
through a pilot study to see if it’s functioning properly.
5. Testing: The testing stage assesses the software for errors and documents
bugs if there are any.
6. Maintenance: Once the software passes through all the stages without any
issues, it enters a maintenance process in which it is upgraded from time to
time to adapt to changes. Almost every Indian software development company
follows these six steps, which has led to the reputation the country enjoys in
the software market today.
The cost of software development services in India is comparatively lower than
in other countries, making it a much sought after destination today. Custom
software development in India is a big hit among clients who are able to get
their business needs fulfilled at highly cost-effective rates.
Microsoft Excel provides several ways to analyze and interpret data. The data
can come from various sources and can be converted and formatted in several
ways. It can be analyzed with the relevant Excel commands, functions and
tools – encompassing Conditional Formatting, Ranges, Tables, Text functions,
Date functions, Time functions, Financial functions, Subtotals, Quick Analysis,
Formula Auditing, the Inquire Tool, What-if Analysis, Solver, the Data Model,
Power Pivot, Power View, Power Map, etc.
You will be learning these data analysis techniques with Excel in two parts −
● Data Analysis with Excel and
● Advanced Data Analysis with Excel
Data Analysis is a process of collecting, transforming, cleaning, and modeling
data with the goal of discovering the required information. The results so
obtained are communicated, suggesting conclusions, and supporting decision-
making. Data visualization is at times used to portray the data for the ease of
discovering useful patterns in the data. The terms Data Modeling and Data
Analysis are sometimes used interchangeably.
Data Analysis Process consists of the following phases that are iterative in
nature −
● Data Requirements Specification
● Data Collection
● Data Processing
● Data Cleaning
● Data Analysis
● Communication
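The phases above can be sketched, in miniature, with Python's standard library; the records and field names are invented for illustration:

```python
# Miniature sketch of the data-analysis phases; the records and field
# names are invented for illustration.
from statistics import mean

# Data Collection: raw records gathered from some source
raw = [{"city": "A", "temp": "21"}, {"city": "B", "temp": ""},
       {"city": "A", "temp": "25"}]

# Data Processing + Data Cleaning: convert text to numbers, drop blanks
processed = [{"city": r["city"], "temp": float(r["temp"])}
             for r in raw if r["temp"]]

# Data Analysis: summarize (here, average temperature per city)
summary = {c: mean(r["temp"] for r in processed if r["city"] == c)
           for c in {r["city"] for r in processed}}

# Communication: report the result to support a decision
print(summary)  # {'A': 23.0}
```

In practice each phase feeds back into the earlier ones (for example, analysis may reveal that more data must be collected), which is why the process is described as iterative.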
Hard real-time systems guarantee that critical tasks complete on time. In hard
real-time systems, secondary storage is limited or missing and the data is
stored in ROM. In these systems, virtual memory is almost never found.
Soft real-time systems are less restrictive. A critical real-time task gets priority
over other tasks and retains the priority until it completes. Soft real-time
systems have more limited utility than hard real-time systems. Examples
include multimedia, virtual reality, and advanced scientific projects such as
undersea exploration and planetary rovers.
Operating System – Properties
Batch processing
Batch processing is a technique in which an Operating System collects the
programs and data together in a batch before processing starts. An operating
system does the following activities related to batch processing −
● The OS defines a job, which has a predefined sequence of commands,
programs and data as a single unit.
● The OS keeps a number of jobs in memory and executes them without any
manual intervention.
● Jobs are processed in the order of submission, i.e., first come, first served.
● When a job completes its execution, its memory is released and the output
for the job is copied into an output spool for later printing or processing.
Advantages
● Batch processing shifts much of the operator's work to the computer.
● Performance increases because a new job starts as soon as the previous job
finishes, without any manual intervention.
Disadvantages
● Programs are difficult to debug.
● A job could enter an infinite loop.
● Due to the lack of a protection scheme, one batch job can affect pending
jobs.
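The first-come-first-served behaviour described above can be sketched in Python; the job names and tasks are invented for the example:

```python
# Toy sketch of FIFO batch processing: queued jobs run in submission
# order with no manual intervention; output goes to a spool. The job
# names and tasks are invented.
from collections import deque

def run_batch(jobs):
    """Process (name, task) pairs first come, first served; return spool."""
    queue = deque(jobs)   # the batch of jobs kept together
    spool = []            # output spool for later printing/processing
    while queue:
        name, task = queue.popleft()      # next job starts automatically
        spool.append(f"{name}: {task()}")
    return spool

print(run_batch([("job1", lambda: 2 + 2), ("job2", lambda: "report done")]))
```

Note how one misbehaving job (say, an infinite loop inside a task) would stall every job behind it in the queue, which is exactly the disadvantage listed above.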
Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by
switching between them. Switches occur so frequently that users may interact
with each program while it is running.
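The rapid switching that multitasking relies on can be sketched as a toy round-robin scheduler; the time slices and job names are hypothetical:

```python
# Toy round-robin multitasking sketch: the "CPU" switches between jobs
# every time slice, so all jobs make progress concurrently. Illustrative
# only; real schedulers also handle priorities, I/O waits, etc.
from collections import deque

def round_robin(jobs, slice_units=1):
    """jobs: dict of name -> units of work remaining. Returns run order."""
    ready = deque(jobs.items())
    trace = []
    while ready:
        name, remaining = ready.popleft()
        trace.append(name)                    # job gets the CPU for a slice
        remaining -= slice_units
        if remaining > 0:
            ready.append((name, remaining))   # context switch: back of queue
    return trace

print(round_robin({"editor": 2, "player": 3}))
```

Because each job gets the CPU for only a short slice before the next one runs, every job appears to make progress at the same time, which is what lets a user interact with each program while it is running.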
DBMS – Architecture
The design of a DBMS depends on its architecture. It can be centralized or
decentralized or hierarchical. The architecture of a DBMS can be seen as either
single tier or multi-tier. An n-tier architecture divides the whole system into
related but independent n modules, which can be independently modified,
altered, changed, or replaced.
In 1-tier architecture, the DBMS is the only entity: the user works directly on
the DBMS and uses it. Any changes made here are applied directly to the
DBMS itself. This architecture does not provide handy tools for end users.
Database designers and programmers normally prefer single-tier architecture.
If the architecture of DBMS is 2-tier, then it must have an application through
which the DBMS can be accessed. Programmers use 2-tier architecture where
they access the DBMS by means of an application. Here the application tier is
entirely independent of the database in terms of operation, design, and
programming.
3-tier Architecture
A 3-tier architecture separates its tiers from each other based on the
complexity of the users and how they use the data present in the database. It is
the most widely used architecture to design a DBMS.
● Database (Data) Tier − At this tier, the database resides along with its query
processing languages. We also have the relations that define the data and
their constraints at this level.
● Application (Middle) Tier − At this tier reside the application server and the
programs that access the database. For a user, this application tier presents
an abstracted view of the database. End-users are unaware of any existence
of the database beyond the application. At the other end, the database tier
is not aware of any other user beyond the application tier. Hence, the
application layer sits in the middle and acts as a mediator between the end-
user and the database.
● User (Presentation) Tier − End-users operate on this tier and they know
nothing about any existence of the database beyond this layer. At this layer,
multiple views of the database can be provided by the application. All views
are generated by applications that reside in the application tier.
Multiple-tier database architecture is highly modifiable, as almost all its
components are independent and can be changed independently.
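As a rough illustration, the separation between the three tiers can be sketched in Python; the class names, the student relation and its data are invented for the example:

```python
# Toy sketch of 3-tier separation; names and data are invented.

# Database (Data) Tier: holds the relations and answers queries.
class DataTier:
    def __init__(self):
        self._students = {1: "Asha", 2: "Ravi"}   # toy relation
    def query(self, student_id):
        return self._students.get(student_id)

# Application (Middle) Tier: mediates between user and database and
# presents an abstracted view; the user never sees DataTier directly.
class AppTier:
    def __init__(self, db):
        self._db = db
    def get_student_name(self, student_id):
        name = self._db.query(student_id)
        return name if name else "not found"

# User (Presentation) Tier: knows only the application's interface.
app = AppTier(DataTier())
print(app.get_student_name(1))   # the end user only ever calls AppTier
```

Because each tier talks only to the tier next to it, either side can be modified or replaced (say, swapping the dictionary for a real database) without the other tiers changing, which is the modifiability claimed above.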
Database Schema
A database schema is the skeleton structure that represents the logical view of
the entire database. It defines how the data is organized and how the relations
among them are associated. It formulates all the constraints that are to be
applied on the data.
A database schema defines its entities and the relationships among them. It
contains descriptive details of the database, which can be depicted by means
of schema diagrams. It is the database designers who design the schema to
help programmers understand the database and make it useful.
Database Instance
Data Independence
A database system normally contains a lot of data in addition to users’ data. For
example, it stores data about data, known as metadata, to locate and retrieve
data easily. It is rather difficult to modify or update a set of metadata once it is
stored in the database. But as a DBMS expands, it needs to change over time
to satisfy the requirements of its users. If all the data were tightly dependent
on its storage details, modifying it would become a tedious and highly
complex job.
Metadata itself follows a layered architecture, so that when we change data at
one layer, it does not affect the data at another level. This data is independent
but mapped to each other.
Logical Data Independence
Logical data is data about the database; that is, it stores information about
how the data is managed inside, for example, a table (relation) stored in the
database and all the constraints applied to that relation.
Logical data independence is a mechanism that keeps the logical schema
independent of the actual data stored on the disk. If we change the table
format, it should not change the data residing on the disk.
Physical Data Independence
All the schemas are logical, and the actual data is stored in bit format on the
disk. Physical data independence is the power to change the physical data
without impacting the schema or logical data.
For example, in case we want to change or upgrade the storage system itself −
suppose we want to replace hard-disks with SSD − it should not have any
impact on the logical data or schemas.
Entity-Relationship Model
CREATE
DROP
ALTER
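CREATE, DROP and ALTER are SQL Data Definition Language (DDL) commands. As a sketch, they can be exercised against an in-memory SQLite database; the student table and its columns are made up for the example:

```python
# CREATE / ALTER / DROP run against an in-memory SQLite database;
# the student table and its columns are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("ALTER TABLE student ADD COLUMN email TEXT")  # evolve the schema
cur.execute("INSERT INTO student VALUES (1, 'Asha', 'a@example.com')")
row = cur.execute("SELECT name, email FROM student").fetchone()
print(row)  # ('Asha', 'a@example.com')
cur.execute("DROP TABLE student")  # remove the table and all its data
conn.close()
```

Unlike INSERT or SELECT, these commands change the schema itself rather than the data, which is why they belong to the data definition language rather than the data manipulation language.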