Factors, trends, and requirements having the biggest impact on developing such systems in the next decade

by Faramarz Maghsoodlou, Ralph Masiello, and Terry Ray

As the central nervous system of the power network, the control center—along with its energy management system (EMS)—is a critical component of the power system operational reliability picture. Although it is true that utilities have been relying on EMS for over two decades, it is also true that many of the systems in place today are outdated, undermaintained, or underused compared with the total potential value that could be realized.
Many utilities are operating with EMS technology that was installed in the early 1990s; thus, the technology base and functionality are a decade old. The EMS industry overall did not markedly alter its technology in the second half of the decade, as the investment priority in the late 1990s turned from generation/reliability-centric to retail/market-centric applications and the need for faster return on investment, which meant minimizing customization and the implementation of new advances in technology. Control center technology has therefore remained largely unchanged, unaffected by much of the Internet-driven advances in IT of the past few years.
Almost every EMS deployed in the 1990s incorporated state-of-the-art (at the time) network models, for example, state estimation, contingency analyses, and operator study analytical tools. More than half of these systems also included operator training simulators (OTSs) that were intended to enable operator simulation training analogous to nuclear operator or airline pilot simulator training. In a recent survey conducted by KEMA and META Group, however, it appears that a sizable fraction of utilities that have OTS technology deployed are not actually using it, not because the technology doesn't work, but because they cannot afford the extra staff required to maintain the models, develop the training programs, and conduct and receive training.
The market's shift, beginning in the late 1990s, from an investment strategy focused on improving capacity/reliability to one geared toward meeting the needs of a deregulating market impacted not only day-to-day infrastructure investment but also research and development. At the same time, utilities decreased EMS budgets even further because generation scheduling and optimization came to be viewed as a function of the deregulated side of the new market structure. Transmission entities therefore slashed their EMS budgets and staff, thinking that local independent system operators (ISOs) would assume responsibility for grid security and operational planning. Those factors, combined with the fact that EMS technology has historically lagged behind the IT world in general, have created a situation where control room technology is further behind today than it has ever been.
This article examines some of the factors, trends, and requirements that will have the biggest impact on the development of energy management systems in the next decade that are more reliable, secure, and flexible and capable of meeting the anticipated new requirements of pending legislation, deregulation, and open access.
Three Lessons
Looking past the system failures or applications not functioning that have been so well publicized in blackout analyses, there are three vitally important lessons that the whole systems operations community should take away from the reports issued by the U.S.-Canada Power System Outage Task Force.

First, the number of alarms—breaker operations and analog measurements of voltage and line flows exceeding limits and oscillating in and out of limits—far exceeded the design points of the systems deployed in the early 1990s. A more realistic design philosophy in light of this would be to develop "worst case" requirements, stipulating that systems must function when all measured points rapidly cycle in and out of alarm.

Second, the blackout did not occur instantly. Rather, the voltages collapsed and lines successively tripped over a period of time. Had the operators had good information and tools to guide them relative to what should have been done, it is feasible that sufficient load could have been shed or other actions taken to have prevented such a widespread outage. In other words, EMS solutions require network modeling and analysis tools that are sufficiently robust to be useful in these conditions. They should converge accurately and reliably under extreme voltage conditions, run fast enough to be useful in the time available, and be able to recommend remedial actions.
Third, and perhaps most importantly, the traditional "N−1" criterion for transmission and operations planning is not adequate. When systems are subject to disturbances, outages come in clusters. Once the first outage has happened, subsequent outages are more likely, and multiple outages, due to human error or failure to act, are more likely than we want to acknowledge. Systems and people need procedures and training that take this into account.
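To make the third lesson concrete, the minimal sketch below (our illustration in Python, not a method from the task force reports) screens a small, invented DC power flow model beyond the N−1 criterion: after each first outage, it re-screens every follow-on outage for N−1−1 overloads and islanding, reflecting the observation that outages cluster.

import numpy as np

# line: (from_bus, to_bus, susceptance, flow_limit_MW); bus 0 is the slack
LINES = [(0, 1, 10.0, 120.0), (0, 2, 10.0, 120.0), (1, 2, 10.0, 80.0),
         (1, 3, 10.0, 100.0), (2, 3, 10.0, 100.0)]
INJECTIONS = np.array([150.0, -20.0, -30.0, -100.0])  # MW per bus, sums to zero

def dc_flows(outaged):
    """Solve a DC power flow with the given set of line indices removed."""
    n = len(INJECTIONS)
    B = np.zeros((n, n))
    for idx, (i, j, b, _) in enumerate(LINES):
        if idx in outaged:
            continue
        B[i, i] += b; B[j, j] += b; B[i, j] -= b; B[j, i] -= b
    keep = list(range(1, n))                      # reduce out the slack bus
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], INJECTIONS[keep])
    return {idx: b * (theta[i] - theta[j])
            for idx, (i, j, b, _) in enumerate(LINES) if idx not in outaged}

def overloads(outaged):
    try:
        flows = dc_flows(outaged)
    except np.linalg.LinAlgError:                 # outage set islands the grid
        return ["islanding"]
    return [idx for idx, f in flows.items() if abs(f) > LINES[idx][3]]

n_lines = len(LINES)
for first in range(n_lines):
    n1 = overloads({first})                       # the classic N-1 screen
    if n1:
        print(f"N-1: line {first} out -> {n1}")
    for second in range(first + 1, n_lines):      # and, since outages cluster, N-1-1
        hits = overloads({first, second})
        if hits:
            print(f"N-1-1: lines {first}+{second} out -> {hits}")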
Furthermore, the U.S.-Canada Power System Outage Task Force discovered that "some companies appear to have had only a limited understanding of the status of the electric systems outside their immediate control." They also determined that "besides the alarm software failure, Internet links to SCADA software weren't properly secure and some operators lacked a system to view the status of electric systems outside their immediate control."
Regaining System Control
Clark Gellings, EPRI vice president for power delivery and markets, notes that "the nation's power delivery system is being stressed in new ways for which it was not designed" and that "a number of improvements to the system could minimize the potential threat and severity of any future outages."
"EPRI believes that the lessons learned from the 14 August power outage will support the development and deployment of a more robust, functional, and resilient power delivery system," he said.
In the view of KEMA and META Group, rectifying the shortcomings of our EMS/SCADA systems will be accomplished in three distinct, sequential phases (see Figure 1). Phase I will encompass an emphasis on control room people, processes, and policies and is happening this year. Phase II will encompass communications and control capabilities, and, in some cases, plans for phase II activities and projects are already underway. Phase III will be investment in infrastructure and intelligence, which will take longer to accomplish because of difficulties in funding large capital projects and in getting needed regulatory and political approvals.
Phase I—People, Processes, and Policies
NERC is currently driving phase I, with the formulation of standards for mandatory EMS performance, availability, tracking, reporting, and operator training. Individual utilities and, particularly, the six ISOs in North America (California ISO, ERCOT, ISO New England, Midwest ISO, New York ISO, and PJM) are focusing on their own processes and policies.
NERC, in conjunction with the U.S.-Canada Power System Outage Task Force studying the causes and recommendations of the 14 August blackout, attributes ineffective communications, lack of operator training in recognizing and responding to emergencies, inadequate processes for monitoring and compliance assurance, the inadequacy of power system visualization tools, inaccurate data, and inadequate system protection technologies as key causes of the outage, and the resulting technical and strategic initiatives will place heavy emphasis on factors such as
✔ improving operator and reliability coordinator training, leading, KEMA predicts, to a resurgence in operator training simulators
✔ evaluating and improving practices and processes focused on reactive power and voltage control, system modeling data, and data exchange
✔ evaluating and revising operating policies and procedures to ensure that reliability coordinator and control area functions and responsibilities are clearly defined
✔ evaluating and improving the real-time operating tools and time-synchronized recording devices
Phase II—Communications, Control, and Capabilities
Phase II will focus on enhanced communications and control capabilities and on new software applications in control centers. Although more expensive and more difficult than phase I activities, these are an order of magnitude less costly than the major infrastructure investments that will occur in phase III. Phase II will include developing a more interconnected approach to communication and control, for example, development of a regional approach to relay setting and coordination, system planning at a regional level, and implementation of policies, procedures, and technologies that facilitate real-time sharing of data among interconnected regions.
The deployment of real-time phasor measurements around the country is being planned and, as this data becomes available, the regional control systems at ISOs and regional transmission organizations (RTOs) and NERC regional coordinators will develop applications that can use this information dynamically to help guide operators during disturbances.
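As one hedged illustration of what such an application might do (the thresholds, region names, and data below are invented, and real phasor applications are far richer), the sketch watches the voltage phase-angle separation between two regions' time-synchronized phasors and flags it when the gap, or its rate of growth, suggests a developing disturbance.

import cmath
import math

def angle_deg(v):
    """Phase angle of a complex voltage phasor, in degrees."""
    return math.degrees(cmath.phase(v))

def check_separation(pmu_a, pmu_b, limit_deg=40.0, rate_limit=1.0):
    """pmu_a, pmu_b: lists of (timestamp_s, complex phasor), GPS time-aligned."""
    prev_t = prev_sep = None
    for (ta, va), (tb, vb) in zip(pmu_a, pmu_b):
        sep = angle_deg(va) - angle_deg(vb)        # inter-area angle separation
        if prev_t is not None:
            rate = (sep - prev_sep) / (ta - prev_t)    # degrees per second
            if abs(sep) > limit_deg or abs(rate) > rate_limit:
                print(f"t={ta:4.1f}s  separation={sep:6.1f} deg  rate={rate:5.2f} deg/s")
        prev_t, prev_sep = ta, sep

# synthetic 1-s frames in which the angle gap widens, as in a developing event
region_a = [(float(t), cmath.rect(1.0, 0.0)) for t in range(8)]
region_b = [(float(t), cmath.rect(0.98, -(0.50 + 0.12 * t))) for t in range(8)]
check_separation(region_a, region_b)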
Phase III—Investment, Infrastructure, and Intelligence
The emphasis of phase III will be on investment in enhanced instrumentation and intelligence, along with a renewed investment in the power system infrastructure and the technology to better manage it.
The new infrastructure may include, as many propose, FACTS devices and other new transmission technologies and devices providing what we think of as ancillary services (superconducting VAR support, for example). What we do know is that these prospective new technologies will require new modeling and new analysis in EMS applications.
EMS and system operations will also have a role to play in transmission planning for straightforward new transmission line infrastructure. We have learned that planning studies not closely aligned with operational data are too abstract.
figure 1. Improving the EMS in three phases: phase I (people, processes, policies); phase II (communication, control, analytics); phase III (investment, infrastructure, intelligence).
What matters is how new transmission lines will be operated and how they will impact system operations. EMS systems must have the capacity to provide the data and analysis needed to understand the answers to that question.
As investments are made in the EMS solutions of tomorrow, KEMA and META Group believe that several important technology trends will come into play.
Visualization
Control room visualization today is still limited primarily to one-line diagrams, which are insufficient when it comes to today's needs to understand the availability of electricity at any given time and location and to understand load, voltage levels, real and reactive power flow, phase angles, the impact of transmission-line loading relief (TLR) measures on existing and proposed transactions, and network overloads. In fact, the Department of Energy's 2002 National Grid Study recommends visualization as a means to better understand the power system.
Three-dimensional, geospatial, and other visualization software will become increasingly indispensable as electricity transactions continue to increase in number and complexity and as power data, historically relevant to a contained group of entities, is increasingly communicated more widely to the ISOs and RTOs charged with managing an open grid. Not only do visualization capabilities enable all parties to display much larger volumes of data as more readily understandable computer-generated images, but they also provide the ability to immediately comprehend rapidly changing situations and react almost instantaneously.
Three-dimensional visualization is an invaluable tool for using abstract calculated values to graphically depict reactive power output, impacts of enforcing transmission line constraints, line loadings, and voltage magnitudes, making large volumes of data with complex relationships easily understood.
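As a minimal sketch of the idea (ours, not any vendor's product; the layout and per-unit voltages are invented), the snippet below color-codes bus voltage magnitudes on a schematic one-line layout so that a sagging bus stands out at a glance.

import matplotlib.pyplot as plt

buses = {"North": (0.0, 2.0, 1.02), "East": (2.0, 1.0, 0.97),
         "South": (1.0, 0.0, 0.91), "West": (-1.0, 1.0, 1.00)}  # x, y, |V| p.u.
lines = [("North", "East"), ("North", "West"), ("East", "South"), ("West", "South")]

fig, ax = plt.subplots()
for a, b in lines:  # draw the one-line diagram's branches
    ax.plot([buses[a][0], buses[b][0]], [buses[a][1], buses[b][1]], "k-", lw=0.8)
xs, ys, vs = zip(*buses.values())
sc = ax.scatter(xs, ys, c=vs, cmap="RdYlGn", vmin=0.90, vmax=1.10, s=300, zorder=3)
for name, (x, y, v) in buses.items():
    ax.annotate(f"{name} {v:.2f} pu", (x, y), textcoords="offset points", xytext=(10, 10))
fig.colorbar(sc, label="voltage magnitude (p.u.)")
ax.set_title("Bus voltage magnitudes at a glance")
plt.show()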
Advanced Metering Technology
In this age of real-time information exchange, automated meter reading (AMR) has set new standards by which the energy market can more closely match energy supply and demand through more precise load forecasting and management, along with programs like demand-side management and time-of-use rate structures. Beyond AMR, however, a host of real-time energy management capabilities are now on the market, which, through wireless communication with commercial, residential, or industrial meters, enable utilities to read meters and collect load data as frequently as once every minute. This enables utilities to better cope with dynamic market changes through real-time access to the critical load forecasting and consumption information needed to optimize decision support.
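A minimal sketch of what that pipeline might look like, assuming hypothetical minute-level kW reads (the meter IDs and the naive trend extrapolation are ours for illustration; real forecasting is far more sophisticated):

from collections import defaultdict

def system_load(reads):
    """reads: iterable of (minute, meter_id, kW) -> {minute: total system kW}."""
    total = defaultdict(float)
    for minute, _meter, kw in reads:
        total[minute] += kw
    return dict(sorted(total.items()))

def naive_forecast(load_by_minute, horizon=5):
    """Extrapolate the last observed ramp rate; real systems would do far more."""
    minutes = sorted(load_by_minute)
    ramp = load_by_minute[minutes[-1]] - load_by_minute[minutes[-2]]
    last = load_by_minute[minutes[-1]]
    return {minutes[-1] + k: last + k * ramp for k in range(1, horizon + 1)}

reads = [(0, "m1", 4.0), (0, "m2", 6.0), (1, "m1", 4.2), (1, "m2", 6.3),
         (2, "m1", 4.5), (2, "m2", 6.5)]
load = system_load(reads)
print(load)                  # {0: 10.0, 1: 10.5, 2: 11.0}
print(naive_forecast(load))  # {3: 11.5, 4: 12.0, ...}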
figure 2. Real-time event management (courtesy of Gensym Corp.): knowledge-based models enable the system to detect events from data (what is the state of the system?), diagnose and explain a condition or state (what is the significance of the data?), and respond with advice and corrective actions (how do I get the system to the condition I want?).
The convergence of demand-response technologies and real-time pricing, wireless communications, and the need for more reliable and timely settlement processes are all drivers for enhanced metering capabilities. This, in turn, will create a demand for EMS solutions capable of handling much larger volumes of data and the analytical tools to manage this data.
More Stringent Alarm Performance
The 2003 blackout drew attention to what has become a potentially overwhelming problem—SCADA/EMS has little or no ability to suppress the bombardment of alarms that can overwhelm control room personnel during a rapidly escalating event. In a matter of minutes, thousands of warnings can flood the screens of dispatchers facing an outage situation, causing them to ignore the very system that's been put in place to help them.
Although distribution SCADA has been able to take advantage of straightforward priority and filtering schemes to reduce the alarm overload, the transmission operations systems have not. This is because transmission systems are networked, and it is more difficult to analyze the alarms to determine what needs to be shown to help the operator reach a conclusion. Also, reaction time is not an issue in distribution, and there is more value in taking the time to locate the fault before taking action; short outages can be tolerated. Other industries, for example, telecom, networking, and refining, have had good success with inference engines and other rule-based systems for diagnosing alarm conditions and providing operator assistance. These are worth a second look by the EMS fraternity today.
New analytical tools are needed in the EMS to enable operators to manage and respond to abnormal events and conditions (see Figure 2). Lessons learned in other industries in the application of model- and rule-based reasoning methodologies in large-scale real-time systems can be applied here (a minimal sketch of the rule-based approach follows the list below). These tools will be expected to provide the following capabilities:
✔ proactively monitor system conditions to avoid or minimize disruptions
✔ analyze, filter, and correlate alarms to speed up operator responses
✔ rapidly isolate the root cause of problems to accelerate resolution
✔ guide operators through system recovery and service restoration
✔ provide expert guidance so that operators of all skill levels can effectively respond to problems
✔ predict the impact of system events and disturbances so operators can prioritize actions
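The sketch below illustrates the flavor of such a rule-based correlator (the rules, alarm names, and thresholds are invented, and real inference engines are far more capable): it collapses a burst of raw alarms into a single root-cause advisory and reports how many raw messages were suppressed.

from collections import Counter

RULES = [
    # (predicate over the alarm burst, root-cause advice)
    (lambda c: c["BREAKER_OPEN"] >= 1 and c["LOW_VOLTAGE"] >= 3,
     "Probable line trip with depressed voltages: check last breaker operation"),
    (lambda c: c["LOW_VOLTAGE"] >= 5 and c["BREAKER_OPEN"] == 0,
     "Area-wide voltage sag: consider VAR support / load shedding"),
]

def correlate(burst):
    """burst: list of (substation, alarm_type). Returns advice, not 1,000 rows."""
    counts = Counter(alarm for _station, alarm in burst)
    advice = [msg for pred, msg in RULES if pred(counts)]
    suppressed = len(burst) - len(advice)
    return advice or ["No rule matched: show raw alarms"], suppressed

burst = [("STA_12", "BREAKER_OPEN")] + [(f"STA_{i}", "LOW_VOLTAGE") for i in range(5)]
advice, suppressed = correlate(burst)
print(advice, f"({suppressed} raw alarms suppressed)")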
Also to be watched is the promise of the digital dashboard, heretofore unfulfilled in the control room environment, but offering the ability to integrate information from many sources into information portals that provide ready desktop access to the data each user needs to perform his or her job functions, with an emphasis on business intelligence and knowledge management.
Data Warehousing
For many years, utilities have been archiving the operational (real-time) and nonoperational (historic) information captured by energy management systems. Today's thought leadership shift is to focus on how this archived operational and nonoperational data can be combined with emerging analytic functionality to meet a host of business needs, for example, to more readily identify parts of the network that are at the greatest risk of potential failure. If integrated properly, heads-up information stored by these systems can also aid utilities in proactive replacement or reinforcement of weak links, thus reducing the probability of unplanned events.
A recent study conducted by IBM showed that today the typical company utilizes only 2–4% of the data collected in operational systems. Data marts are one way to more fully leverage and use data to produce measurable improvements in business performance.
A data mart, as defined in this article, is a repository of the measurement and event data recorded by automated systems. This data might be stored in an enterprise-wide database, data warehouse, or specialized database. In practice, the terms data mart and data warehouse are sometimes used interchangeably; however, a data mart tends to start from the analysis of user needs, while a data warehouse starts from an analysis of what data already exists and how it can be collected in such a way that it can be used later. The emphasis of a data mart is on meeting the specific demands of a particular group of users in terms of analysis, content, presentation, and ease of use.
Most automated utility systems are installed by the vendor with built-in data marts developed specifically to archive data for that problem domain. For some utilities, this means a decade of logged historical performance data is available for integration and analysis.
The real need is to model and simulate the grid on an ongoing basis to understand how it responds. Knowledge gained from simulations through tools such as state estimation and contingency analysis allows protection to be built into the system at the local or control levels.
Operators can also use this knowledge to recognize patterns leading up to potential failure and take corrective action long before the situation becomes a crisis. State estimation combined with contingency analysis to support automated decision rules or human intervention is the most practical approach to addressing future grid vulnerability.
It is possible to fine-tune reliability centered maintenance (RCM) and better schedule transformer maintenance/replacement if the hourly loading history of a transformer can be correlated with ambient temperature conditions. The data needed to do this is readily available from SCADA systems. This is an example of time-series data being stored in a data warehouse designed for the task, such as PI Historian.
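A minimal sketch of that correlation, with invented hourly data (statistics.correlation and statistics.linear_regression require Python 3.10+); a transformer whose loading history departs from the fitted weather relationship, i.e., shows large residuals, is a candidate for earlier maintenance or replacement:

import statistics

hours = [(28.0, 0.61), (30.0, 0.66), (32.0, 0.71), (35.0, 0.80),
         (33.0, 0.74), (25.0, 0.55)]         # (ambient deg C, loading in p.u.)

temps = [t for t, _ in hours]
loads = [l for _, l in hours]
r = statistics.correlation(temps, loads)
slope = statistics.linear_regression(temps, loads).slope
print(f"corr={r:.2f}, +1 deg C ~ +{slope:.3f} p.u. loading")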
Another example is that, to demonstrate compliance with code of conduct and reliability procedures, it is necessary to track all the approvals and operational actions associated with a transformer outage. This is a combination of transactional information (requests and approvals) and event information (control actions and alarms), linked over time. It requires the combination of a transactional data mart triggered by entries on screens and data collection in real time.
A third example is that reliability centered maintenance is enhanced if the distortions in the 60-Hz waveforms of electrical measurements at the transformer can be tracked over time. This is a waveform analysis over a sampled time series. It requires interaction with a substation computer and is not easily supported in either a transactional or time-series database. The solution lies in the kinds of proprietary systems used for similar RCM work on jet engines and combustion turbines.
Risk Management and Security
Many utilities are coming to the realization that compliance with the Sarbanes-Oxley Act (SOX) can be extended to mean that EMS systems and their shortcomings present serious risk issues that must be addressed to prevent the financial penalties that could accrue as a result of a long-term outage. Similarly, when a utility has a rate case pending or operates under performance-based rates measured by reliability, there is a direct connection between the EMS and the financial ramifications of less-than-desirable results. Therefore, the impact of Sarbanes-Oxley on operations will impact EMS systems—in terms of reliability, operator training, the availability of software/systems that provide improved record keeping of who authorized what, and adherence to standards. Look for the application of technologies that reduce risk by providing key performance indicators that help managers determine whether critical operating parameters are within expectations and that combine accurate cost/revenue metering, power quality, and reliability monitoring to deliver relevant information in a timely fashion.
There are three broad families of SOX relevance to EMS (see Figure 3). First is the financial impact of loss of an EMS system and the measures taken to mitigate such loss. One number is used for loss of EMS for up to an hour, measured in the efficiency loss of running units off dispatch, failing to meet schedules and paying balancing costs, etc. Another, higher figure is used if EMS is out for a day to a week, resulting in manual workarounds, extra staff in the field, and inefficiencies and costs incurred due to overconservative operations. A third, even higher number is used for longer outages as those temporary costs become permanent and emergency extra staff or extra systems are deployed. The second and third numbers are worsened by the increased probability of major outages with all their costs.

Second, SOX requires certification of cybersecurity and of the quality controls imposed on the software in production. This will have implications for the QA and software life-cycle management tools and methods used by vendors and consultants as well as utilities.

Finally, there is a need to show compliance with NERC, ISO, code of conduct, and other standards for operations. EMS must be enhanced to provide easily recovered audit trails of all sorts of actions and system performance to provide compliance reports and analyses.
Advanced Network Analysis Applications
Another key factor that is critical to the success of the EMS technology of tomorrow is the incorporation of advanced network analysis algorithms and applications. Most systems in place today are still based on the Newton-Raphson power flow analysis and related/derivative methodologies, whose inherent shortcoming is that they fail to converge when network conditions are too far from nominal, especially in times of near voltage collapse.
figure 3. Sarbanes-Oxley Act compliance relevance to EMS: financial impact of loss of the EMS system; certification of cybersecurity and quality of software; and compliance with NERC, ISO, code of conduct, and other standards.
For real-time calculations, different idealizations of the model are needed to speed up the ability to solve large series of power flows within a reasonable time frame. Projects recently completed in Spain may have resulted in new algorithms that are noniterative and that are much more robust than Newton-Raphson, which could help with handling low-voltage conditions.
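For readers unfamiliar with the method's mechanics, here is a minimal two-bus Newton-Raphson power flow sketch (our toy illustration with invented data, using a numerical Jacobian for brevity); pushing the load toward the nose of the PV curve produces exactly the convergence failure described above.

import math

V1, B = 1.0, 10.0              # slack voltage (p.u.) and line susceptance magnitude
P_load, Q_load = -0.8, -0.3    # bus-2 injections (negative = load), invented

def mismatch(x):
    th, v = x
    p = B * v * V1 * math.sin(th) - P_load                 # P balance at bus 2
    q = -B * v * V1 * math.cos(th) + B * v * v - Q_load    # Q balance at bus 2
    return [p, q]

def newton(x, tol=1e-8, max_iter=20):
    for it in range(max_iter):
        f = mismatch(x)
        if max(abs(c) for c in f) < tol:
            return x, it
        h, J = 1e-6, [[0.0, 0.0], [0.0, 0.0]]   # 2x2 Jacobian by forward differences
        for j in range(2):
            xp = list(x); xp[j] += h
            fp = mismatch(xp)
            for i in range(2):
                J[i][j] = (fp[i] - f[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        x = [x[0] - ( J[1][1] * f[0] - J[0][1] * f[1]) / det,
             x[1] - (-J[1][0] * f[0] + J[0][0] * f[1]) / det]
    raise RuntimeError("did not converge (e.g., operating point past the nose)")

(theta2, v2), iters = newton([0.0, 1.0])      # flat start
print(f"theta2={math.degrees(theta2):.2f} deg, V2={v2:.4f} p.u. in {iters} iterations")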
Another needed improvement in application analysis falls within the realm of state estimation. In most state estimation applications, measurements are obtained by the data acquisition system throughout the whole supervised network, at approximately the same time, and are centrally processed by a static-state estimator at regular intervals or on operator request. Although today's high-speed data acquisition technology is capable of obtaining new sets of measurements every 1–10 seconds, current EMS technology allows state estimation processing only every few minutes within the cost parameters allowed for EMS.
A more reliable state estimation operational scheme can be achieved by shortening the time interval between consecutive state estimations to allow closer monitoring of the system, particularly in emergency situations in which the system state changes rapidly. This mandates the development of faster state estimation algorithms and attention to the numerical stability of these algorithms. Other domains have advanced state estimation technology considerably since it was introduced to electric power. Techniques such as sequential state estimation are worth looking at, especially for ISO/RTO applications where the time synchronization of the analog measurements is not as robustly enforced.
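To ground the discussion, the sketch below shows the core of a static weighted-least-squares state estimator on an invented three-bus DC (linear) model, where one normal-equations solve replaces the usual Newton iteration; the measurement set, susceptances, and weights are ours for illustration.

import numpy as np

# state: angles at buses 1 and 2 (bus 0 is the reference, angle = 0)
# measurements: line flows f01, f02, f12 and injection p1; all b = 10 p.u.
H = np.array([[-10.0,   0.0],    # f01 = 10*(th0 - th1) = -10*th1
              [  0.0, -10.0],    # f02 = -10*th2
              [ 10.0, -10.0],    # f12 = 10*(th1 - th2)
              [ 20.0, -10.0]])   # p1  = f10 + f12 = 20*th1 - 10*th2
z = np.array([0.52, 0.98, 0.49, 0.01])    # telemetered values (with noise)
W = np.diag([1 / 0.01**2, 1 / 0.01**2, 1 / 0.02**2, 1 / 0.05**2])  # 1/sigma^2

# normal equations: (H^T W H) x = H^T W z
G = H.T @ W @ H
x = np.linalg.solve(G, H.T @ W @ z)
residuals = z - H @ x                      # large residuals flag bad data
print("estimated angles (rad):", x)
print("weighted residuals:", residuals * np.sqrt(np.diag(W)))

The redundancy (four measurements for two states) is what lets the estimator smooth noise and detect bad telemetry; shortening the estimation cycle, as the text argues, is largely a matter of solving these equations faster and more stably.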
Operator/Dispatcher Training Simulator
Most EMS systems deployed in the 1990s already include OTS functionality, but a recent survey initiated by KEMA and META Group indicates that many are not in use, primarily due to the lack of staff to support them and conduct the training. Based on the recommendations of NERC and other industry and regulatory groups, this will change as more utilities take the steps needed to leverage the technological capabilities they already possess.
As with other network analysis applications, the OTS needs to have robust algorithms that are capable of simulating abnormal voltage conditions. It is also imperative that the representation of network and equipment models in the OTS be consistent with those used in real-time applications to realistically simulate current and potential future conditions. Ideally, all model updates in the real-time system should be automatically propagated to the OTS to keep the two models in sync. The OTS will also be called upon to support "group" training of transmission operations and ISO operation; therefore, the network and process modeling has to be coordinated hierarchically across the individual utilities and the ISO.
Communication Protocols
EMS systems must have the capacity to talk to "legacy," i.e., preexisting, remote terminal units (RTUs) and, thus, are severely handicapped today in that many still rely on serial RTU protocols that evolved in an era of very limited bandwidth. As a result, most EMS solutions in use today are unable to exploit breakthroughs in communications, in particular, secure communications such as encryption and validation. This will need to change. Eventually, the need for encrypted, secure communications to the RTU, combined with the adoption of substation automation and substation computers, may lead to the end of RTU protocols as we know them today and the adoption of a common information model (CIM)-based data model for the acquisition of field data.
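As a hedged sketch of that direction (the element names below are ours for illustration and do not reproduce the actual CIM/IEC 61970 RDF vocabulary), field data might travel as a self-describing document instead of a position-coded serial frame.

import xml.etree.ElementTree as ET

def measurement_doc(substation, readings):
    """Build a self-describing XML payload for a set of field measurements."""
    root = ET.Element("MeasurementSet", substation=substation)
    for name, value, unit in readings:
        m = ET.SubElement(root, "Measurement", name=name, unit=unit)
        m.text = f"{value}"
    return ET.tostring(root, encoding="unicode")

doc = measurement_doc("Alpha_230kV", [("busbar.kV", 231.4, "kV"),
                                      ("line7.MW", 143.2, "MW")])
print(doc)
for m in ET.fromstring(doc):                      # the receiving EMS parses by
    print(m.get("name"), m.text, m.get("unit"))   # name, not point-list position

The payload costs more bandwidth than a serial frame, but it decouples the EMS from the RTU's point-list ordering, and the channel carrying it can be encrypted end to end regardless of the payload format.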
Enterprise Architectures
To achieve the benefits offered by the technologies described here, EMS solutions need to be able to take advantage of modern enterprise architectures (EAs). EMS systems are typically not included as part of utility EA initiatives, but as their importance becomes readily apparent, this will change. Though EA definitions vary, they share the notion of a comprehensive blueprint for an organization's business processes and IT investments. The scope is the entire enterprise, including the control room, and, increasingly, the utility's partners, vendors, and customers.

A strategic information asset base, the EA effectively defines the business, the information necessary to operate the business, the technologies necessary to support the business operations, and the transitional processes necessary for implementing new technologies in response to changing business or regulatory requirements. Further, it allows a utility to analyze its internal processes in new ways that are defined by changing business opportunities or regulatory requirements instead of by preconceived systems design (such as monolithic data processing applications). In this architectural design, an object model represents all aspects of the business, including what is known, what the business does, the business constraints, and the business' interactions and relationships.
More practically, a good EA can provide the first complete view of a utility's IT resources and how they relate to business processes. Getting from a utility's existing or baseline architecture to an effective EA requires defining both a target architecture and its relationship to business processes, as well as the road map for achieving this target. An effective EA will encompass a set of specifically defined artifacts or systems models and include linkages between business objectives, information content, and information technology capabilities. Typically, this will include definitions of
✔ business processes, containing the tasks performed by each entity, plus anticipated change agents such as pending legislation or regulations that might impact business processes
✔ information and the way it flows among business processes
✔ applications for processing the information
✔ a model of the data processed by the utility's information systems
✔ a description of the technology infrastructure's functional characteristics, capabilities, and connections
Though no industry-standard technical/technology reference model exists for defining an EA, it is clear that component-based software standards, such as Web services, as well as popular data-exchange standards, such as the extensible markup language (XML), are preferred, as are systems that are interoperable, scalable, and secure, such as Sun Microsystems' Java 2, Enterprise Edition (J2EE) platform or Microsoft's .NET framework. It is also clear that frameworks and initiatives, such as the Zachman framework, Federal Enterprise Architecture Framework (FEAF), The Open Group Architecture Framework (TOGAF), and Rational Unified Process (RUP), will strongly impact how enterprise architectures for utility control operations are defined and implemented (see Figure 4).
By using shared, reusable business models (not just objects) on an enterprise-wide scale, the EA provides tremendous benefits through the combination of improved organizational, operational, and technological effectiveness for the entire enterprise.
Web Services Architecture
There are no EMS deployments today that take advantage of modern Web services architecture, although the architecture is providing tremendous benefits to businesses around the world and holds big promise for control room operations.

Past attempts at distributed computing have resulted in systems where the coupling between the system's various components is both too tight and too easily broken for many of the transactions that utilities should be able to perform via the Internet. The bulk of today's IT systems, including Web-oriented systems, can be characterized as tightly coupled applications and subsystems.
Monolithic systems like these are sensitive to change, and a change in the output of one of the subsystems often causes the whole system to break. A switch to a new implementation of a subsystem will also often cause a breakdown in collaboration among systems. As scale, demand, volume, and rate of business change increase, this weakness can become a serious problem marked by unavailable or unresponsive Web sites, lack of speed to market with new products and services, or inability to meet new business opportunities or competitive threats.
As a result, the current trend is to move away from tightly coupled monolithic systems and toward loosely coupled systems of dynamically bound components. Web services provide a standard means of interoperability between different software applications running on a variety of platforms or frameworks. They are comprised of self-contained, modular applications that can be described, published, located, and invoked over the Internet, and the Web services architecture is a logical evolution of object-oriented design, with a focus on components geared toward e-business solutions.

Like object-oriented design, Web services encompass fundamental concepts like encapsulation, message passing, dynamic binding, and service description and querying. With a Web services architecture, everything is a "service," encapsulating behavior and providing the behavior through an API that can be invoked for use by other services on the network. Systems built with these principles are more likely to dominate the next generation of e-business systems, with flexibility being the overriding characteristic of their success.
As utilities move more of their existing IT applications to the Internet, a Web services architecture will enable them to take strong advantage of e-portals and to leverage standards such as XML; Universal Description, Discovery, and Integration (UDDI); Simple Object Access Protocol (SOAP); Web Services Definition Language (WSDL); Web Services Flow Language (WSFL); J2EE; and Microsoft .NET.
figure 4. Integration standards: semantics (BizTalk, XML.org, OAGIS, UIG-XML, CCAPI, CIM) layered over format (XML), interaction (J2EE/EJB, CORBA, Microsoft .NET/COM+), and transport (IP), with UML, workflow, security, and integrity spanning the layers.

The Web services architecture provides several benefits (a minimal sketch follows the list below), including:
✔ promoting interoperability by minimizing the requirements for shared understanding
✔ enabling just-in-time integration
✔ reducing complexity by encapsulation
✔ enabling interoperability of legacy applications
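A minimal sketch of the loose-coupling idea (ours, not a full SOAP/WSDL stack; the service name, port, and XML contract are invented): one process exposes a single operation over HTTP and returns XML, and any consumer that understands the message format can bind to it at run time.

import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlowService(BaseHTTPRequestHandler):
    def do_GET(self):
        # in a real service this would query the EMS; here the answer is canned
        body = '<flow line="line7" unit="MW">143.2</flow>'.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 8808), FlowService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# any client that speaks HTTP and this XML contract can integrate just in time
with urllib.request.urlopen("http://127.0.0.1:8808/flows/line7") as resp:
    print(resp.read().decode())
server.shutdown()

Because the contract is the message format rather than a shared binary interface, the service implementation can change without breaking its consumers, which is precisely the loose coupling the architecture promises.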
Cybersecurity Standards
Used throughout the industrial infrastructure, control systems have been designed to be efficient rather than secure. As a result, distributed control systems (DCSs), programmable logic controllers (PLCs), and supervisory control and data acquisition (SCADA) systems present attractive targets for both intentional and unintentional catastrophic events.
To better secure the control systems controlling the critical infrastructures, there is a need for the government to support the energy utility industry in two critical areas:
✔ establish an industry-wide information collection and analysis center for control systems, modeled after the Computer Emergency Response Team (CERT), to provide information and awareness of control systems vulnerabilities to users and industry
✔ provide sufficient funding for the National SCADA Test Bed to facilitate the timely and adequate determination of the actual vulnerabilities of the various control systems available in the market and develop appropriate mitigation measures
Between the need for improved cybersecurity and Sarbanes-Oxley, the EMS world is likely to see a strong move toward software that is "certifiable" to ensure that the code is "clean." This implies the need for modern, automated, comprehensive quality assurance processes and an ability to verify system performance on a regular basis.
Summary
It's clear that today's EMS/SCADA systems have a long way to go to meet the reliability and regulatory standards of today's evolving markets. This presents not only a challenge but also an opportunity to invest in new technology that will enable us to more effectively manage both the supply and demand sides of the energy equation and is an equally important component of any long-term energy policy.
Apart from demonstrating the vulnerability of the electric grid, the 2003 blackout put enormous pressure on the energy industry to show that it is serious about improving reliability. Although long-term infrastructure needs will require an enormous capital investment, estimated by some at US$10 billion a year for the next decade, at the very least there are numerous steps that can be taken toward greatly enhanced reliability through much smaller investments in processes and technology.
Four key pieces of advice are as follows. One, there is some ground to be gained by simply getting the EMS technology that is currently in use within the utility industry fully functional again to release the true potential value of the investment. Two, reinvigorate OTS and training programs. Three, investigate more robust approaches to network analyses. And, four, take the steps necessary to minimize the potential financial impact of Sarbanes-Oxley.
For Further Reading
U.S.-Canada Power System Outage Task Force, Final Report on the August 14th Blackout in the United States and Canada [Online]. Available: https://reports.energy.gov/

"Emerging tools target blackout prevention," Computerworld, Aug. 25, 2003 [Online]. Available: http://computerworld.com/securitytopics/security/recovery/story/0,10801,84322,00.html

Tucson Electric Power press release [Online]. Available: http://www.elequant.com/news/pr_20040526.html

Elequant launch press release [Online]. Available: http://www.elequant.com/news/pr_20040605a.html
Biographies
Faramarz Maghsoodlou is an executive consultant and director of systems and technology services with KEMA Inc. With over 25 years of experience in the energy and software industry, he specializes in energy systems planning, operation, and optimization and enterprise software applications. He can be reached at fmaghsoodlou@kemaconsulting.com.
Ralph Masiello is senior vice president, bulk power consulting, with KEMA Inc. A Fellow of the IEEE, he has over 20 years of experience in transmission and distribution operations and in control systems implementations at many of North America's largest utilities. He can be reached at rmasiello@kemaconsulting.com.
Terry Ray is vice president, energy information strategies, with META Group Inc. With over 35 years of experience in the energy industry, he specializes in advising clients on the alignment of business and IT strategies. He has worked with investor-owned and public power organizations in North America and Europe. He can be reached at terry.ray@metagroup.com.
p&e