
REVIEW OF VIRTUAL INSTRUMENTATION
CONTENTS
Virtual instrumentation model
Graphical system design model
Block Diagram & Architecture of Virtual Instrument
Data-flow techniques
Hardware & software in virtual instrumentation
Virtual instrument and traditional instrument
Comparison with conventional programming
OPC (Object Linking and Embedding (OLE) for Process Control)
HMI/SCADA software
ActiveX programming
Virtual Instrumentation
Virtual instrumentation is an interdisciplinary field that merges sensing, hardware
and software technologies in order to create flexible and sophisticated instruments
for control and monitoring applications.
Virtual instrumentation is the use of customizable software and modular
measurement hardware to create user-defined measurement systems, called virtual
instruments.
Virtual instruments are computer programs that interact with real world objects by
means of sensors and actuators and implement functions of real or imaginary
instruments.
The sensor is usually a simple hardware device that acquires data from the
object, transforms it into electrical signals and transmits them to the computer
for further processing. Simple virtual measuring instruments just acquire and analyse data,
but more complex virtual instruments communicate with objects in both
directions.
Virtual Instrumentation Model
Virtual instrumentation is the combination of user-defined software and modular
hardware that implements custom systems (virtual instruments) with
components for acquisition, processing/analysis and presentation.
Modular Hardware - subdividing the entire hardware design into smaller parts called modules or
skids that can be independently created and then used in different systems.
Data Acquisition
Data acquisition is the process of gathering or generating information in an
automated fashion from analog and digital measurement sources such as sensors
and devices under test.
A physical input/output signal is typically a voltage or current signal. A voltage
signal can typically be a 0-5 V signal, while a current signal can typically be a
4-20 mA signal.
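For example, a 4-20 mA transmitter reading is usually mapped linearly onto the range of the measured quantity. Below is a minimal Python sketch of that scaling; the 0-100 engineering range and the function name are illustrative assumptions, not part of any standard.

    def current_to_engineering(i_ma, lo=4.0, hi=20.0, eng_min=0.0, eng_max=100.0):
        # linear map of a 4-20 mA transmitter signal onto, e.g., a 0-100 degC span
        if not lo <= i_ma <= hi:
            raise ValueError("signal outside the %s-%s mA range" % (lo, hi))
        return eng_min + (i_ma - lo) * (eng_max - eng_min) / (hi - lo)

    print(current_to_engineering(12.0))   # mid-scale reading -> 50.0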
Data Analysis
Raw data may not convey useful information immediately.
Signal processing is frequently needed to transform the signal, remove noise
disturbances, or compensate for environmental effects.
Analysis is a fundamental part of many test, measurement, and control
applications.
Analysing a signal gives you additional insight into what your data means: you
can get a clearer picture of your desired signal or monitor a signal for a particular
behaviour.
Data analysis may include Time Domain Analysis, Frequency (Spectral) Analysis,
Digital Filters, Curve Fitting and Data Modeling, Differential Equations, Linear
Algebra, Nonlinear Systems, Optimization, Root Finding, PID and Fuzzy Control.
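To make the frequency-domain idea concrete, here is a short NumPy sketch (assuming NumPy is available; the 50 Hz tone and the noise level are made-up example values) that recovers the dominant frequency from a noisy synthetic signal:

    import numpy as np

    fs = 1000.0                                 # sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)             # one second of samples
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)  # 50 Hz + noise

    spectrum = np.abs(np.fft.rfft(x))           # single-sided magnitude spectrum
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    print("dominant frequency:", peak, "Hz")    # ~50.0 Hz despite the noise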
Data Presentation
This stage presents the data in a form suitable for post-analysis.
It is also called the data storage stage, e.g., CROs, recorders, plotters and other
display devices.

Graphical system design (GSD) is a modern approach to designing measurement
and control systems that integrates system design software with COTS hardware
to dramatically simplify development.
Graphical system design is a revolutionary approach to solving the design challenges
of an entire system that blends graphical programming and flexible commercial-
off-the-shelf (COTS) hardware.
Graphical System Design Model

One key element behind the success of the virtual instrumentation approach is
LabVIEW, a software development tool originally developed to support the
requirements of virtual instrumentation.

Researchers can use LabVIEW to design (modeling and simulation), prototype
(proof of concept), and deploy (field implementation) new technologies that
result from R&D activities.
The virtual instrumentation model is applied in each of the three phases of the
graphical system design model.
Data acquisition, analysis and presentation functions are used in the design,
prototyping and deployment phases.
Design (Model)
The researcher develops a mathematical model of the system, including sensors,
actuators, plants and controllers, and simulates them under a variety of initial
conditions and constraints.

Researchers can acquire reference data from files or databases and incorporate it
into the model.
Results from the simulation process are saved for post-analysis and visualization
and can be used to introduce changes into the model.
This is usually a software-centric process with a strong focus on numerical
methods/analysis and mathematics.
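As a rough illustration of this phase, the sketch below simulates a hypothetical first-order plant dx/dt = (K*u - x)/tau by Euler integration; the gain, time constant and step input are assumed values chosen only for the example.

    def simulate(K=2.0, tau=5.0, u=1.0, x0=0.0, dt=0.01, t_end=30.0):
        # Euler integration of dx/dt = (K*u - x) / tau under a step input u
        x, trace = x0, []
        for _ in range(int(t_end / dt)):
            x += dt * (K * u - x) / tau
            trace.append(x)
        return trace

    print(simulate()[-1])   # approaches the steady-state value K*u = 2.0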
Prototype (Lab)
If experimental validation of the model is required, researchers develop and test a
prototype in the laboratory.
Signal processing and analysis as well as visualization can be implemented online
while data is being measured and acquired, or while the process is being
controlled.
The experimental results obtained in this phase can be used to modify and
optimize the original model, which in turn may require additional experiments.
Data captured can also be used for system identification and parameter estimation.
Usually, this experimental (prototyping) phase is executed on standard PCs or PXI
computers, using PCI/PXI data acquisition devices or external measuring devices
connected to a PC via USB, Ethernet, GPIB, or serial ports.
This process is usually more software/hardware-centric because sensors, actuators,
data acquisition devices, controllers, and the controlled/analyzed plant itself are all
key elements in the experimental setup.
Deployment (Field)
Finally, the model (controller, analyzer or both) is deployed in the field or lab
using either a PC (desktop, server or industrial) or PXI, or it can be downloaded to
a dedicated embedded controller such as CompactRIO, which usually operates in
stand-alone mode and in real-time mode.
The transition from the prototyping phase to the deployment phase can be very
fast and efficient because the same set of tools used for prototyping can, in most
cases, be applied to the final deployment of the system in the field.
The deployment stage is mostly about hardware: it is where the design is put into
its final form.
GSD Advantages
Reduced time to market
Optimal system scalability
Quick design iteration
Increased performance at lower costs
Block Diagram of Virtual Instrumentation
The heart of any virtual instrument is flexible software.
An innovative engineer or scientist applies domain expertise to customize the
measurement and control application as per the requirements.
The result is a user-defined instrument, specific to the application needs.
With such software, engineers and scientists can interface with real-world signals,
analyze data for meaningful information, and share results and applications.
NI LabVIEW is an example of the software component of the virtual instrumentation
architecture, with a graphical development platform for test, design and control
applications.
The second virtual instrumentation component is the modular I/O for
measurements that require higher performance, resolution, or speeds.
Advanced modular instrument hardware uses the latest I/O and data processing
technologies, including Analog-to-Digital Converters (ADCs), Digital-to-Analog
Converters (DACs), Field-Programmable Gate Arrays (FPGAs), and PC buses to provide
high resolution and throughput for measurements.
The third virtual instrumentation element is the computing platform, a PC, to run
the software and connect to the I/O modules.
Virtual Instrumentation is rapidly revolutionizing the functions of control design,
distributed control, data logging, design verification, prototyping, simulation and
more.
Architecture of Virtual Instrument
Sensor Module - Data is acquired from the sensor and necessary signal
conditioning is performed.
Sensor Interface - Various protocols are used for interfacing the sensors to the
processing module (PC).
Wired Interfaces
Serial Communication Interfaces: Serial communication interfaces enable bit-by-bit
transmission of data between two devices.
Examples of serial interfaces that may be used to interface the Sensor Module to
the Processing Module include the Universal Serial Bus (USB), FireWire/IEEE 1394,
RS-232C and RS-485.
Parallel Communication Interfaces: Parallel communication enables simultaneous
transfer of multiple bits of information through individual wires bundled together.
Examples are HP Interface Bus (HPIB) or General-Purpose Interface Bus (GPIB),
Small Computer System Interface (SCSI), Peripheral Component Interconnect
(PCI) eXtensions for Instrumentation (PXI).
Wireless Interfaces
IEEE 802.11a/b/g/n - WLAN
General Packet Radio Service (GPRS)
GSM (Global System for Mobile communications)
Bluetooth
The advantage of wireless interfacing is that multiple devices can be connected
without requiring any wired communication channels.
Database Interface - The Database Interface provides connectivity between the
Database and the Processing Module.
eXtensible Markup Language (XML)
Database management system (DBMS) SQL Server/ORACLE etc.
Information System Interface - uses User Interface (UI) components such as
ActiveX objects along with more commonplace aspects such as the Uniform
Resource Locator (URL); each virtual instrument is identified uniquely by its
URL.
User Interface - The User Interface (UI) display and control is achieved through
text-based terminals (such as SMS alerts on mobile devices), graphical systems
that display the relevant and necessary data graphically (including touch-screens),
and audio-visual systems such as those found at ATMs.
Dataflow programming
The programming language used in LabVIEW, also referred to as G, is a dataflow
programming language.
Execution is determined by the structure of a graphical block diagram on which
the programmer connects different function-nodes by drawing wires.
These wires propagate variables and any node can execute as soon as all its input
data become available.
Since this might be the case for multiple nodes simultaneously, G is inherently
capable of parallel execution.
Data Flow Techniques
LabVIEW follows a dataflow model for running VIs.
A block diagram node executes when it receives all required inputs.
When a node executes, it produces output data and passes the data to the next node
in the dataflow path.
The movement of data through the nodes determines the execution order of the
VIs and functions on the block diagram.
Visual Basic, C++, Java, and most other text-based programming languages
follow a control flow model of program execution.
In control flow, the sequential order of program elements determines the execution
order of a program.
For a dataflow programming example, consider a block diagram that adds two
numbers and then subtracts 50.00 from the result of the addition, as shown in Fig.
Dataflow Programming Example

In this case, the block diagram executes from left to right, not because the objects
are placed in that order, but because the Subtract function cannot execute until the
Add function finishes executing and passes the data to the Subtract function.
Remember that a node executes only when data is available at all of its input
terminals and supplies data to the output terminals only when the node finishes
execution.
Dataflow Example for Multiple Code Segments
Consider which code segment would execute first: the Add, Random Number, or
Divide function.
You cannot know because inputs to the Add and Divide functions are available at
the same time, and the Random Number function has no inputs.
In a situation where one code segment must execute before another, and no data
dependency exists between the functions, use other programming methods, such
as sequence structures or error clusters, to force the order of execution.
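The following toy Python sketch mimics this firing rule (it illustrates the dataflow idea, not how LabVIEW's execution engine is actually implemented): each node executes as soon as all of its inputs have arrived, reproducing the Add-then-Subtract example above. The input values 30.0 and 45.0 are assumed control values.

    class Node:
        def __init__(self, func, n_inputs):
            self.func = func
            self.inputs = [None] * n_inputs
            self.received = 0
            self.listeners = []        # (downstream node, input index) pairs

        def feed(self, index, value):
            self.inputs[index] = value
            self.received += 1
            if self.received == len(self.inputs):   # node fires only when
                result = self.func(*self.inputs)    # every input is available
                print(self.func.__name__, "->", result)
                for node, i in self.listeners:
                    node.feed(i, result)

    def add(a, b):
        return a + b

    def subtract(a, b):
        return a - b

    add_node = Node(add, 2)
    sub_node = Node(subtract, 2)
    add_node.listeners.append((sub_node, 0))  # wire Add's output to Subtract
    sub_node.feed(1, 50.00)    # the 50.00 constant arrives; Subtract waits
    add_node.feed(0, 30.0)     # assumed control value
    add_node.feed(1, 45.0)     # Add fires (75.0), then Subtract fires (25.0)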
Wires
You transfer data among block diagram objects through wires. Wires connect the
control and indicator terminals to the Add and Subtract function.
Each wire has a single data source, but you can wire it to many VIs and functions
that read the data.
Wires are different colors, styles, and thicknesses, depending on their data types.

A broken wire appears as a dashed black line with a red X in the middle, as shown
above. Broken wires occur for a variety of reasons, such as when you try to wire
two objects with incompatible data types.
Common Wire Types
Hardware in Virtual Instrumentation
Input/Output plays a critical role in virtual instrumentation.
To accelerate test, control and design, I/O hardware must be rapidly adaptable to
new concepts and products.
Virtual instrumentation delivers this capability in the form of modularity within
scalable hardware platforms.
Virtual instrumentation is software-based; if we can digitize it, we can measure it.
Standard hardware platforms that house the I/O are important to I/O modularity.
Laptops and desktop computers provide an excellent platform where virtual
instrumentation can make the most of existing standards such as the USB, PCI,
Ethernet, and PCMCIA buses.
Software in Virtual Instrumentation
Software is the most important component of a virtual instrument.
With the right software tool, engineers and scientists can efficiently create their
own applications by designing and integrating the routines that a particular process
requires.
You can also create an appropriate user interface that best suits the purpose of the
application and those who will interact with it.
You can define how and when the application acquires data from the device, how
it processes, manipulates and stores the data, and how the results are presented to
the user.
With powerful software, we can build intelligence and decision-making
capabilities into the instrument so that it adapts when measured signals change
unexpectedly or when more or less processing power is required.
An important advantage that software provides is modularity.
When dealing with a large project, engineers and scientists generally approach the
task by breaking it down into functional solvable units.
These subtasks are more manageable and easier to test, given the reduced
dependencies that might cause unexpected behaviour.
We can design a virtual instrument to solve each of these subtasks, and then join
them into a complete system to solve the larger task.
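A small Python sketch of this idea, with each subtask as an independently testable function joined into a complete system (the functions, the alarm threshold and the synthetic data are illustrative assumptions):

    def acquire(n=8):
        # stand-in for a DAQ read; returns synthetic samples
        return [0.1 * i for i in range(n)]

    def analyze(samples):
        return sum(samples) / len(samples)      # mean as a trivial analysis

    def present(value, limit=0.5):
        status = "ALARM" if value > limit else "OK"
        print("mean = %.3f [%s]" % (value, status))

    present(analyze(acquire()))                 # join the subtasks into a system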
The ease with which we can accomplish this division of tasks depends greatly on
the underlying architecture of the software.
A virtual instrument is not limited or confined to a stand-alone PC.
In fact, with recent developments in networking technologies and the Internet, it is
more common for instruments to use the power of connectivity for the purpose of
task sharing.
Typical examples include supercomputers, distributed monitoring and control
devices, as well as data or result visualization from multiple locations.
Every virtual instrument is built upon flexible, powerful software by an innovative
engineer or scientist applying domain expertise to customize the measurement and
control application.
The result is a user-defined instrument specific to the application needs.
Virtual instrumentation software can be divided into several layers: application
software, test and data management software, and measurement and control
services software.
Layers of virtual instrumentation software
Most people think immediately of the application software layer. This is the
primary development environment for building an application.
It includes software such as LabVIEW, LabWindows/CVI (ANSI C),
Measurement Studio (Visual Studio programming languages), Signal Express and
VI Logger.
Above the application software layer is the test executive and data management
software layer. This layer of software incorporates all of the functionality
developed by the application layer and provides system-wide data management.
Measurement and control services software is equivalent to the I/O driver software
layer. It is one of the most crucial elements of rapid application development.
This software connects the virtual instrumentation software and the hardware for
measurement and control.
It includes intuitive application programming interfaces, instrument drivers,
configuration tools, I/O assistants and other software included with the purchase
of hardware.
This software offers optimized integration with both hardware and application
development environments.
Virtual Instrument and Traditional Instrument
A traditional instrument is designed to collect data from an environment, or from a
unit under test, and to display information to a user based on the collected data.
Ex: Oscilloscopes, spectrum analyzers and digital multimeters.
A virtual instrument (VI) is defined as an industry-standard computer equipped
with user friendly application software, cost-effective hardware and driver
software that together perform the functions of traditional instruments. Simulated
physical instruments are called virtual instruments (VIs).
With virtual instrumentation, engineers and scientists reduce development time,
design higher quality products and lower their design costs.
Virtual Instrumentation is flexible. Virtual instruments are defined by the user
while traditional instruments have fixed vendor-defined functionality.
The associations within a virtual instrument are not fixed but rather managed by
software.
Every virtual instrument consists of two parts: software and hardware.
A virtual instrument typically has a sticker price comparable to, and often much
less than, that of a similar traditional instrument for the same measurement task.
A traditional instrument provides all software and measurement circuitry
packaged into a product with a finite list of fixed functionality accessed through
the instrument front panel.
A virtual instrument provides all the software and hardware needed to accomplish
the measurement or control task.
In addition, with a virtual instrument, engineers and scientists can customize the
acquisition, analysis, storage, sharing and presentation functionality using
productive, powerful software.
Without the displays, knobs and switches of conventional, external box-based
instrumentation products, a virtual instrument uses a personal computer for all
user interaction and control.
The cost to configure a virtual instrumentation-based system using a data
acquisition board or cards can be as little as 25% of the cost of a conventional
instrument.
Stand-alone traditional instruments such as oscilloscopes and waveform
generators are very powerful, expensive, and designed to perform one or more
specific tasks defined by the vendor.
However, the user generally cannot extend or customize them.
The knobs and buttons on the instrument, the built-in circuitry, and the functions
available to the user, are all specific to the nature of the instrument.
In addition, special technology and costly components must be developed to build
these instruments, making them very expensive and slow to adapt.
Traditional instruments also frequently lack portability, whereas virtual
instruments running on notebooks automatically incorporate their portable nature.
Traditional instruments and software-based virtual instruments
Traditional instruments and software-based virtual instruments largely share the
same architectural components but have radically different philosophies.
A traditional instrument might contain an integrated circuit to perform a particular
set of data processing functions.
In a virtual instrument, these functions would be performed by software running
on the PC processor. We can extend the set of functions easily, limited only by the
power of the software used.
Both require one or more microprocessors, communication ports (for example,
serial and GPIB), and display capabilities, as well as data acquisition modules.
By employing virtual instrumentation solutions, you can lower capital costs,
system development costs, and system maintenance costs, while improving time to
market and the quality of your own products.
There is a wide variety of available hardware that you can either plug into the
computer or access through a network. These devices offer a wide range of data
acquisition capabilities at a significantly lower cost than that of dedicated devices.
As integrated circuit technology advances, and off-the-shelf components become
cheaper and more powerful, so do the boards that use them. With these advances
in technology come an increase in data acquisition rates, measurement accuracy,
precision, and better signal isolation.
Traditional Vs Virtual Instruments

Traditional Instruments | Virtual Instruments
Vendor-defined | User-defined
Function-specific, stand-alone with limited connectivity | Application-oriented system with connectivity to networks, peripherals, and applications
Hardware is the key | Software is the key
Expensive | Low-cost, reusable
Closed, fixed functionality | Open, flexible functionality leveraging off familiar computer technology
Slow turn on technology (5-10 year life cycle) | Fast turn on technology (1-2 year life cycle)
Minimal economies of scale | Maximum economies of scale
High development and maintenance costs | Software minimizes development and maintenance costs
Comparison of Text-Based and Graphical Programming

Text-Based Programming | Graphical Programming
Syntax must be known to do programming | Syntax knowledge is not required for programming
The execution of the program is from top to bottom | The execution of the program is from left to right
To check for errors the program has to be compiled or executed | Errors are indicated as we wire the blocks
Front panel design needs extra coding or extra work | Front panel design is a part of programming
Text-based programming is not interactive | Graphical programming is highly interactive
Programming is by the conventional text-based method | Programming is dataflow programming
Logical error finding is easy in large programs | Logical error finding in large programs is quite complicated
Program flow is not visible | Data flow is visible
It is text-based programming | It is icon-based programming and wiring
Passing parameters to a subroutine is difficult | Passing parameters to a subVI is easy
OPC (Object Linking and Embedding (OLE) for Process Control)
OLE for Process Control (OPC) is the original name for a standard specification
developed in 1996.
The standard specifies the communication of real-time plant data between control devices
from different manufacturers.
The OPC Specification was based on the OLE, COM, and DCOM technologies developed
by Microsoft for the Microsoft Windows operating system family.
The evolution of OLE started in 1990 on top of Microsoft's Dynamic Data Exchange
(DDE) concept, and OLE was later reimplemented with the Microsoft Component
Object Model (COM) and then Distributed COM (DCOM) as its bases, eventually
leading to ActiveX controls.
Industrial automation systems require open standards which can be adopted by any
provider of control systems, instrumentation, and process control for multi-vendor
interoperability.
Term | Meaning
OPC | OPC stands for OLE for Process Control. OPC is based on the core OLE technologies COM and DCOM.
OLE | Object Linking and Embedding.
COM | In order to make objects implemented on different platforms or computer architectures compatible with one another, it must be determined how these platforms interpret an object. For this, something called an object model is required. OLE uses the COM model (Component Object Model). This defines the standard for interaction of components. COM enables calls from within a process to another process.
DCOM | The object model for inter-computer calls is known as DCOM (Distributed Component Object Model), which is integrated into the operating system in Windows NT 4.0 and above.
Why do we need OPC?
Application software does not readily communicate with digital plant-floor devices
or with other applications.
Making these systems work together is the most pressing need of process
manufacturers.
A pressing need in both factory automation and process manufacturing is making
production and business systems work together.
The barriers are incompatibilities and proprietary communication interfaces.
OPC gives production and business applications across the manufacturing
enterprise access to real-time plant-floor information in a consistent manner,
making multi-vendor interoperability and "plug and play" connectivity a reality.
OPC client/server Architecture
OPC serves as an abstraction layer between Data Sources (PLC) and Data Sinks
(HMI), enabling intercommunication without either side having to know the
other's native protocol.
The OPC abstraction layer reveals two components: an OPC Client and an OPC
Server. OPC specifications define the messaging between these two components.

Programmable Logic Controller: a small industrial computer that controls one or
more hardware devices.
Human-Machine Interface: a graphical interface that allows a person to interact
with a control system. It may contain trends, alarm summaries, pictures, or
animation.
OPC is a software interface standard that allows Windows programs to
communicate with industrial hardware devices.
OPC is implemented in server/client pairs.
OPC Server
The OPC server is a software program that converts the hardware communication
protocol used by a PLC into the OPC protocol.
OPC servers provide a method for many different software packages (so long as it
is an OPC Client) to access data from a process control device, such as
a PLC or DCS.
Once an OPC Server is written for a particular device, it can be reused by any
application that is able to act as an OPC client.
OPC servers use Microsoft's OLE technology (also known as the Component
Object Model, or COM) to communicate with clients.
OPC Client
The OPC client software is any program that needs to connect to the hardware,
such as an HMI.
The OPC client uses the OPC server to get data from or send commands to the
hardware.
The value of OPC is that it is an open standard, which means lower costs for
manufacturers and more options for users.
Hardware manufacturers need only provide a single OPC server for their devices
to communicate with any OPC client.
Software vendors simply include OPC client capabilities in their products and they
become instantly compatible with thousands of hardware devices.
Users can choose any OPC client software they need, resting assured that it will
communicate seamlessly with their OPC-enabled hardware, and vice-versa.
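As an illustration, the sketch below uses the third-party OpenOPC package, one common way to act as an OPC DA client from Python on Windows; the server ProgID and tag names refer to the Matrikon simulation server and are assumptions for the example, not part of the OPC standard.

    import OpenOPC                             # third-party package; needs COM

    opc = OpenOPC.client()                     # create an OPC DA client
    opc.connect('Matrikon.OPC.Simulation')     # assumed server ProgID
    value, quality, timestamp = opc.read('Random.Int4')   # assumed tag
    print(value, quality, timestamp)
    opc.write(('Bucket Brigade.Int4', 42))     # assumed writable tag
    opc.close()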
OPC Specifications
OPC Data Access (OPC DA)
OPC Historical Data Access (OPC HDA)
OPC Alarms and Events (OPC A&E)
OPC Data Access: The most common OPC specification is OPC Data Access,
which is used to read and write real-time data.
When vendors refer to OPC generically, they typically mean OPC Data Access.
LabVIEW supports only the OPC Data Access specification.
The main clients of OPC DA are visualization and control.
OPC Historical Data Access: It is used to transport historical data. The main
clients of OPC HDA are trend displays and history.
OPC Alarms and Events: It is used to transport alarm information. The main
clients of OPC A&E are alarm displays and event loggers.
Benefits
An OPC enabled application can freely communicate with any OPC-enabled data
source visible to it on the network without the need for any driver software,
specific to the data source.
OPC enabled applications can communicate with as many OPC-enabled data
sources as they need. There is no inherent OPC limitation on the number of
connections made.
Today OPC is so common that there's an OPC connector available for almost
every device on the market. It's easy to get started using OPC.
OPC-enabled data sources may be swapped out, exchanged or upgraded without
the need to update the drivers used by each application (data sink) communicating
with the data source via OPC. Only the OPC server for that data source needs to
be kept current.
Users are free to choose the best suited devices, controllers and applications for
their projects without worrying about which vendor each came from and whether
they will communicate with each other.
Supervisory Control And Data Acquisition (SCADA)
SCADA systems are designed to collect field information, transfer it to a central
computer facility, and display the information to the operator graphically or
textually, thereby allowing the operator to monitor or control an entire system
from a central location in real time.
SCADA system is a combination of both hardware and software.
Typically, the hardware includes a control centre, communication equipment (e.g.
radio, telephone line, cable or satellite) and one or more geographically distributed
field sites consisting of either an RTU or a PLC, which controls the local process.
The overall SCADA system can be broken down into three categories:
Control Center
Programmable Logic Controllers (PLCs), Remote Terminal Units (RTUs),
Intelligent Electronic Devices (IEDs)
Communication Network
SCADA Architecture
Control Center
Control server or Master terminal unit (MTU):
It is the heart of the SCADA system that communicates with remote field side
RTUs.
It initiates all communication, collects the data, stores the data in a database,
provides interfaces to operators and sends the information to other systems.
It continuously communicates with other devices in the master station to
facilitate data logging, alarm processing, trending and reporting, graphical
interface and security system.
Human machine interface (HMI):
An HMI is a software application that presents information to an operator or user
about the state of a process, and accepts and implements the operator's control
instructions (a Graphical User Interface or GUI).
An HMI is often a part of a SCADA (Supervisory Control and Data Acquisition)
system. The HMI also displays process status information, historical information
and other information to operators, administrators, managers & other authorized
users.
Engineering work station (EWS):
All the capabilities of an HMI are present in an EWS, as well as the ability to design
and update controller logic based on conditions at the respective field sites and
send commands to manage and control PLCs and RTUs on the network through
communication equipment.
Data Historian:
Data historian is a computer or memory unit which acts as a centralized database
for logging all process information within an industrial control system.
Information stored in this database can be accessed to support various analyses,
from statistical process control to enterprise-level planning.
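A minimal sketch of the historian idea, logging time-stamped tag values to a central store (SQLite and the tag name are stand-ins chosen for illustration; industrial historians use specialized time-series databases):

    import sqlite3
    import time

    db = sqlite3.connect("historian.db")
    db.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, tag TEXT, value REAL)")

    def log(tag, value):
        # every sample is stored with a timestamp for later analysis
        db.execute("INSERT INTO samples VALUES (?, ?, ?)",
                   (time.time(), tag, value))
        db.commit()

    log("FT-101.flow", 12.7)    # hypothetical process tag
    for row in db.execute("SELECT * FROM samples WHERE tag = ?", ("FT-101.flow",)):
        print(row)              # query history to support analysis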
Communication routers:
A router is a communication device that transfers messages between two networks,
i.e., between a LAN and a WAN, or between MTUs and RTUs.
Geographically distributed field sites
RTU (Remote Terminal Unit):
Its primary task is to control and acquire data from process equipment at the
remote location and to transfer this data back to a central station.
Each RTU is connected with various sensors and actuators that manage the local
process or field equipment.
It collects the information from various sensors and sends the information to the
MTU.
Many RTUs store the data in their database and wait for a request from the MTU
to send or transmit the data (a toy sketch of this polling pattern follows below).
In sophisticated systems, PLCs are used as RTUs, which directly transfer the field
data and control the parameters without a request from the MTU.
It is a remote telemetry data acquisition unit located at remote stations.
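The toy sketch below illustrates the MTU-polls-RTU pattern mentioned above; the request/response format and the addresses are hypothetical, since real systems use protocols such as Modbus or DNP3.

    import socket

    def poll_rtu(host, port=5020):
        # hypothetical poll request; real SCADA links use Modbus, DNP3, etc.
        with socket.create_connection((host, port), timeout=2.0) as conn:
            conn.sendall(b"READ ALL\n")
            return conn.recv(1024).decode()    # RTU replies with buffered data

    for rtu in ("10.0.0.11", "10.0.0.12"):     # assumed field-site addresses
        try:
            print(rtu, poll_rtu(rtu))
        except OSError as err:
            print(rtu, "unreachable:", err)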
PLC (Programmable Logic Controller):
PLC is a small industrial computer which originally replaced relay logic. It has
inputs and outputs similar to those of an RTU.
It contained a program which executed a loop, scanning the inputs and taking
actions based on these inputs.
IED (Intelligent Electronic Devices):
An IED is a smart sensor/actuator containing the intelligence required to acquire
data, communicate to other devices, and perform local processing and control.
An IED, as it relates to the protection and power system automation industry, is a
device that performs electrical protection functions, advanced local control
intelligence, has the ability to monitor processes and can communicate directly to
a SCADA system.
An IED could combine analog input/output sensors, low-level control
capabilities, a communication device and program memory in one device.
Communication Network
It provides the link between RTUs (in the field) to MTU (in the control centre).
The communication can be wired or wireless or through internet which provides
bidirectional and uninterrupted communication between RTU and MTU.
SCADA systems can be connected using various communication mediums
including twisted pair cables, coaxial metal cables, fibre optic cables, satellites,
high frequency radio, telephone lines, and microwave radio.
SCADA Advantages & Applications
Coordinating processes in real-time
Reduced labour cost
Improvement in plant performance
Time-saving due to central availability and accessibility of all data
User-friendliness and ease of use
Displaying trends of variables over time dynamically or from past data
Some SCADA applications include mining industries, water distribution and
wastewater collection systems, modern process and manufacturing industries,
public and private utilities, electrical power grids, oil and gas pipelines and
railway transportation systems.
HMI
HMI is software for visualisation and human interaction with SCADA.
A Human-Machine Interface (HMI), earlier called the Man-Machine Interface
(MMI), is the interface that separates the machine from the operator.
A properly designed HMI must provide ease of use to the end-user while offering
efficient functionality to the developer.
Typical operator interfaces must be designed so that the data is presented
categorically, in a way that does not overwhelm or confuse the operator.
Such design considerations are of prime importance since most HMIs show
process-critical data, an error in which requires the operator to act with minimal
delay.
While HMI is usually local to one machine, a SCADA system may bring forth and
present various individual HMIs to a single operator.
HMI showing multiple monitors communicating through a SCADA system
ActiveX Programming
ActiveX is the general name for a set of Microsoft Technologies that allows you to
reuse code and link individual programs together to suit your computing needs.
Based on COM (Component Object Model) technologies, ActiveX is an extension
of a previous technology called OLE (Object Linking and Embedding).
Each program does not need to regenerate components but can instead reuse
components, giving you the power to combine applications together.
LabVIEW offers support for ActiveX automation as a server as well as support for
ActiveX Containers, and ActiveX Events.
ActiveX is a software framework created by Microsoft that adapts its
earlier Component Object Model (COM) and Object Linking and
Embedding (OLE) technologies for content downloaded from a network,
particularly from the World Wide Web.
If you have internet explorer, then ActiveX is already installed on your computer.
ActiveX is not a programming language, but rather a set of rules for how
applications should share information.
ActiveX Automation
ActiveX/COM refers to the process of controlling one program from another via
ActiveX.
Like networking, one program acts as the client and the other as the server.
LabVIEW supports automation both as the client and the server. Both programs,
client and server, exist independent of each other but are able to share information.
The client communicates with the ActiveX objects that the server exposes to allow
the sharing of information.
The automation client can access the object's properties and methods. Properties
are attributes of an object.
Another program can set or retrieve an object's attributes.
Similarly, methods are functions that perform an operation on objects. Other
applications can invoke methods.
An example of an ActiveX property is the program name, height or width. An
example of an ActiveX method is the save or print method.
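For instance, a Python automation client using the pywin32 package can drive Excel as an ActiveX/COM server, setting a property and invoking methods (Windows only; the spreadsheet contents and file path are assumed examples):

    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")   # automation server
    excel.Visible = True                        # set a property
    workbook = excel.Workbooks.Add()            # invoke a method
    workbook.Worksheets(1).Range("A1").Value = "hello from an ActiveX client"
    workbook.SaveAs(r"C:\temp\demo.xlsx")       # assumed writable path
    excel.Quit()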
ActiveX Controls and Containers
The most common usage of ActiveX is via ActiveX controls, which are
embeddable components that exist inside ActiveX containers.
Any program that can function as an ActiveX container allows you to drop
ActiveX controls into it.
From these containers, the ActiveX controls have their own functionality and
properties.
LabVIEW is an ActiveX container and can house ActiveX controls. Again,
properties and methods manipulate the embedded control.
The ability to view movie files, pdf files and similar interactive applications is
made possible through the use of ActiveX.
A container is an application in which the object is embedded. In the example of
an Excel spreadsheet that is embedded in a Word document, MS Word is the
container.
ActiveX Events
ActiveX is an event-driven technology. An event is an asynchronous notification
from the control to the container.
There are many types of events - request events, before events, after events and do
events.
The request event is when the control asks the container for permission to
perform an action. This event contains a pointer to a boolean variable.
The before event is sent by the control prior to performing an action. It is not
cancellable.
The after event is sent by the control to the container indicating an action has
occurred. An example of this is a mouse click event.
The do event is a message from the container to the control.
Programs report when certain events occur so that other programs, or you, can
react accordingly.
LabVIEW supports the processing of ActiveX events via both automation and
embedding of ActiveX controls.
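As an illustration of event handling outside LabVIEW, pywin32 can also receive ActiveX events by supplying a handler class; the Excel SheetSelectionChange event is used here as a documented example, and the message-pumping loop is a minimal sketch.

    import time
    import pythoncom
    import win32com.client

    class ExcelEvents:
        # handler method name = "On" + event name from the server's interface
        def OnSheetSelectionChange(self, sheet, target):
            print("selection changed on sheet:", sheet.Name)

    excel = win32com.client.DispatchWithEvents("Excel.Application", ExcelEvents)
    excel.Visible = True
    excel.Workbooks.Add()
    for _ in range(100):                  # pump COM messages so the
        pythoncom.PumpWaitingMessages()   # asynchronous events arrive
        time.sleep(0.1)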
ActiveX only works with Microsoft applications like Word, Excel, Internet
Explorer and PowerPoint, and will only work on a computer running the Windows
operating system.
Early on at Microsoft, programmers realized that many of the same functions
could be shared among different applications. For example, a spell checker is just
as useful in a word processing program like Word as in an e-mail application like
Outlook Express.
Instead of writing two separate versions of code for the same spell-checker, they
created a spell checker object. This object lives on the Windows operating system.
When any Windows application needs spell-checking functionality, it calls on the
spell-checker object.
The spell checker is an example of a COM component. It's an independent module,
or applet, that can be accessed from any Windows application. COM components
also allow one program to be embedded into another. For example, you can insert
and edit an Excel spreadsheet from within Word without ever opening the Excel
application.
ActiveX controls are mostly talked about in reference to Internet Explorer, the
default Web browser for the Windows operating system.
Let's say you open a Web page with Internet Explorer that contains video clips
encoded as Windows Media files (.wmv).
Internet Explorer comes pre-loaded with an ActiveX control that allows for
Windows Media files to be played directly in the Web page.
In other words, you don't have to launch the Windows Media Player application
separately.
The ActiveX control accesses the functionality of the Windows Media Player
behind the scenes and plays back the file in the browser window.
ActiveX controls are small applications written in common programming
languages like Visual Basic and C++.
They're similar in function to Java applets, which are small programs that run
within Web browsers.
Applications that support ActiveX controls are called ActiveX containers.
Each ActiveX control contains a unique number called a class identifier
(CLSID).
ActiveX controls that work within Internet Explorer are usually associated with a
certain file or media type.
This way Internet Explorer knows which control to launch (Flash, Adobe Reader
for PDFs, Windows Media Player) for each type of file.
An ActiveX control runs in what is known as a container, an application program
that uses the Component Object Model program interfaces.
This reusable component approach to application development reduces
development time and improves program capability and quality.
LabVIEW can be used as an ActiveX client to access the objects, properties,
methods, and events associated with other ActiveX-enabled applications.
LabVIEW also can act as an ActiveX server, so other applications can access
LabVIEW objects, properties, and methods.
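As a hedged sketch of that server role, a Python client could drive LabVIEW through its registered "LabVIEW.Application" ProgID; the VI path and control names below are assumptions for illustration, and the method names follow LabVIEW's documented VI Server ActiveX interface (verify them against your LabVIEW version).

    import win32com.client

    lv = win32com.client.Dispatch("LabVIEW.Application")  # LabVIEW's ProgID
    vi = lv.GetVIReference(r"C:\vis\Thermometer.vi")      # assumed VI path
    vi.SetControlValue("Setpoint", 25.0)      # write an assumed front-panel control
    vi.Run()                                  # execute the VI
    print(vi.GetControlValue("Temperature"))  # read an assumed indicator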
