
Next generation distributed I/O brings users one step closer to seamless connectivity




By now, most anyone working in a role involving industrial automation has heard about digital transformation, the Internet of Things (IoT), and Industrial IoT (IIoT). These initiatives involve ever smarter devices installed progressively closer to the “edge,” perhaps connected to an internet “cloud,” or even connected through something called the “fog.” Even if we consolidate these terms under the umbrella of IIoT, for most folks a simple question remains: what is the goal of the IIoT?


Simply put, end users would like the IIoT to create a cohesive system of devices and applications able to share data seamlessly across machines, sites, and the enterprise to help them optimize production and discover new cost-saving opportunities.


This has always been a goal of industrial automation, but traditional operational technology (OT) architectures scale poorly, are prohibitively priced, and demand complex configuration and support. So what is changing?


Much as consumer hardware and software technologies have shifted to improve ease-of-use and connectivity, industrial products and methods are following the same trend by adopting information technology (IT) capabilities. This article discusses how a more distributed global architecture is enabling connectivity from the field to the cloud for sensors and actuators, and for the input/output (I/O) systems and controllers linked to them.


Up and down the architecture


Classical industrial automation architectures generally address data processing from a hierarchical standpoint. One good feature of this hierarchy is the clarity it provides with regard to where data can originate, be stored, undergo processing, and be delivered. However, the task of transporting data and processing it in context is often quite difficult because so many layers of equipment are required to connect devices and applications.


The lowest level of an automation architecture is generally considered to be the physical devices residing on process and machinery equipment: sensors, valve actuators, motor starters and so on. These are connected to the I/O points of control system programmable logic controllers (PLCs) and human-machine interfaces (HMIs). Both PLCs and HMIs are well suited for local control and visualization, but less useful for advanced calculations and processing. Fortunately, using industrial communications protocols, they can send data to upstream supervisory control and data acquisition (SCADA) systems where it might be historized and made available to corporate level analytical software. Sharing data within multi-vendor systems, however, often requires additional middleware such as an OPC server.


More advanced site manufacturing execution system (MES) and overall enterprise resource planning (ERP) software also reside at higher levels of the architecture, hosted on PCs or servers on site, or in the cloud, where the cloud is defined as providing large-scale, internet-based, shared computing and storage. Raw information generally flows up to higher levels to be analyzed and used to optimize operations.


Developments over the past decade are significantly altering this traditional hierarchy, flattening and simplifying it to a great extent.


Spanning edge, fog, and cloud


Computing capability and networking bandwidth used to be much less available. Each step up the hierarchy, from a basic hardwired sensor to cloud computing systems, was required in order to access greater computing resources and networking capabilities (Figure 1).


Figure 1: Traditional methods of acquiring data involve the complexity of configuring and maintaining many layers in a hierarchy of hardware and software. Courtesy: Opto 22


Today, the relationship has changed because sensors and other edge devices are far more capable, with some of them including processing and communications abilities similar to a PC. Each device can perform more as a peer, instead of acting in a passive listen-and-respond role. Therefore, the architecture is evolving to become flatter and more distributed (Figure 2).


The edge is still a critical source of data, and the cloud is still a valuable resource for heavyweight computing. However, the resources in between, especially at the site level, are becoming a blend of data-generating devices and data-processing infrastructure. This fuzzy middle ground earns the name “fog” because it is akin to a widespread, pervasive, and middleweight “cloud.”

Figure 2: Modern edge devices, such as the Opto 22 groov RIO, flatten and simplify the architecture required to connect field I/O signals to business and control applications. Courtesy: Opto 22


Many other factors besides advancing technology are driving this shift to a flatter architecture. The most straightforward motivation is to balance computing and networking demand between the edge and higher-level systems. Edge computing offloads central processing, preserves data fidelity, improves local responsiveness, and increases data transfer efficiency to the cloud.


Ultimately, however, this new edge-to-cloud architecture depends on having new options at the edge for acquiring and processing field data.


Distributed I/O evolves


Field data can be raw I/O points connected at the edge or derived calculation values. Either way, the problem with traditional architectures is the amount of work it takes to design, physically connect, configure, digitally map, communicate, and then maintain these data points. Adding even one point at a later date may require revisiting all these steps. To create more scalable, distributed systems, some vendors are making it possible to bypass these layers between the real world and intermediate or top-level analytics systems.


Classic I/O hardware, for example, is not very intelligent and must be mastered by some supervisory controller or system. But with enough computing power, all the necessary software for enabling communications can be embedded directly in an I/O device. Instead of requiring a control engine to configure and communicate I/O data to higher levels, I/O devices can transmit information on their own.


This kind of edge data processing is also becoming possible due to a proliferation of IIoT tools in recent years, for example:

  • MQTT with Sparkplug B: A secure, lightweight publish/subscribe communications protocol for machine-to-machine communication, with a data payload specification designed for mission-critical industrial applications (see the publishing sketch after this list)

  • OPC UA: A platform-independent OPC specification, useful for machine-to-machine communication with legacy devices

  • Node-RED: A low-code, open-source IoT programming tool for managing data transfer across many protocols and web APIs
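To make the first of these concrete, the following is a minimal sketch of how an edge I/O device might publish a data point over MQTT, using the Python paho-mqtt client. The broker address and topic are hypothetical, and a production Sparkplug B payload is a Protocol Buffers message; JSON is used here only to keep the publish flow readable.

```python
# Minimal sketch: an edge device publishing a data point over MQTT.
# The broker address and Sparkplug-style topic are hypothetical, and a real
# Sparkplug B payload is Protocol Buffers, not the JSON shown here.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
client.tls_set()        # encrypt the session, as secure deployments typically do
client.connect("broker.example.com", 8883)
client.loop_start()

payload = json.dumps({
    "timestamp": int(time.time() * 1000),
    "metrics": [{"name": "Inputs/TankLevel", "value": 72.4}],
})
# Sparkplug topic namespace: spBv1.0/<group id>/<message type>/<edge node id>
client.publish("spBv1.0/Plant1/DDATA/EdgeNode1", payload, qos=0)
client.loop_stop()
```

Because the device publishes on its own, no PLC or middleware layer needs to poll it; any authorized subscriber to the broker receives the data.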


Combined with standard IT technologies like VPNs for secure remote connection and DHCP for automatic addressing, these tools give today's I/O hardware the ability to act as first-class participants in a distributed system, rather than requiring layers of supporting middleware (Figure 3).

Figure 3: Modern devices leverage edge computing to make direct I/O-to-cloud integration possible. Courtesy: Opto 22


Another obstacle to scalability for IIoT systems based on classic I/O hardware is the work required to provide power, network connections, and the right I/O module types. To address these issues, vendors are taking advantage of new technologies to make distributed remote I/O more feasible.


One example is power over Ethernet (PoE) capability, which uses a network cable to simultaneously supply low-voltage power and network connectivity. When PoE is embedded into a remote I/O device, it can even supply I/O loop power, simplifying electrical panel design and saving money on additional components and labor.


To make it easier for designers to specify the right I/O interface types, some new I/O devices also include more flexible configuration options, like mixed-signal I/O channels. These provide extensive options to mix and match I/O signal types as needed on one device, reducing front-end engineering work and spares management.


The combination of these features within distributed I/O devices makes it possible for implementers to easily add I/O points anywhere they are needed, starting with a few points and scaling up as much as necessary at any time. Wiring needs are minimized, so long as networking infrastructure is accessible.


For more comprehensive control and calculation, of course, any number of edge controllers can also be integrated. The combination of edge I/O and edge control leads to a new distributed data architecture.


Architecture options


So what new architectural possibilities are available to industrial automation designers using modern distributed I/O and edge computing? The logical hierarchy is flattened even as the geographical distribution is expanded, with edge devices making local data directly available to computing resources at the edge or at higher organizational levels (Figure 4).


Figure 4: Edge controllers and edge I/O enable new information architectures in which devices can share data locally and across the organization, through edge, fog, and cloud: 1) private shared infrastructure with edge data processing 2) legacy PLC integration with edge controller as IoT gateway 3) direct-to-cloud I/O network 4) regional many-to-many MQTT infrastructure. Courtesy: Opto 22


Here are some examples of new information architectures that are becoming possible for use in places like commercial facilities, campuses, laboratories and industrial plants:


Shared Multi-Site Infrastructure: Where field signals are distributed over large geographic areas or multiple sites, edge devices can facilitate data transmission to networked applications and databases, improving the efficiency and security of local infrastructure or replacing high-maintenance middleware such as Windows PCs.


Brownfield Site Integration: Edge I/O can form a basic data processing fabric for existing equipment I/O in brownfield sites and work in combination with more powerful edge controllers and gateways using OPC UA to integrate data from legacy RTUs, PLCs, and PACs. This approach improves security and connectivity without interfering with existing control systems.


Direct Field-to-Cloud Integration: Engineers can design simple, flat, data processing networks using only edge I/O devices (without controllers or gateways), expanding as needed to monitor additional field signals. A distributed I/O system like this can process and report data directly to cloud-based supervisory systems, predictive maintenance databases, or MQTT servers.


Many-to-Many Data Distribution: Edge devices with embedded MQTT clients can publish field data directly to a shared MQTT server or redundant MQTT server group located anywhere the network reaches: on premises, in the cloud, or as part of regional fog computing resources. The server can then share that data with any number of interested network clients across the organization, including control systems, web services, and other edge devices.
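As a rough illustration of the many-to-many pattern, the sketch below subscribes to everything published under a hypothetical Sparkplug-style namespace, again using the Python paho-mqtt client; any number of such consumers can attach to the same broker without the publishers knowing they exist.

```python
# Minimal sketch: one of many possible consumers in a many-to-many MQTT
# architecture. Broker address and topic namespace are hypothetical.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Every subscriber receives its own copy of each published sample.
    print(f"{msg.topic}: {msg.payload!r}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("spBv1.0/Plant1/DDATA/#")  # '#' wildcard: all edge nodes
client.loop_forever()
```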


Seamless connectivity


Seamless connectivity is now a reality thanks to technologies that make ubiquitous data exchange possible. New hardware and software products enable interconnectivity among physical locations in the field, at the local control room, in the front office, across geographic regions, and up to global data centers.


Distributed edge I/O, edge computing, and associated networking technologies support data transfer through the edge, fog, and cloud portions of an industrial architecture. End users can erase the former boundaries between IT and OT domains and get the data they need to optimize operations.



Replacing obsolete controls equipment is inconvenient, but it is a necessary process and can be made easier with an internal audit that shows what needs to be done.



Courtesy: CFE Media and Technology

Most people have at least one control panel that's like a recurring nightmare. Each time it's opened, something breaks, so naturally, don't open it! Don't look at it or even breathe around it! That tends to be the situation in a lot of manufacturing facilities, but there is a big problem with that mindset: it won't work forever.


People don’t drive cars from 1985 every day and expect them to keep on chugging along forever; the same goes with the equipment vital to manufacturing facilities. Well-planned hardware obsolescence projects can ultimately prevent unplanned downtime and spending, as well as conquer the outdated equipment plaguing a plant.


While replacing obsolete equipment might be inconvenient and costly, and the benefits aren't immediately visible, it certainly is better than the alternative. If something does go wrong, there will be a headache much larger than simply spending the money to replace the obsolete controls equipment. And it's a good bet the problems will be far-reaching, leading to a lot of uncomfortable questions about why it wasn't done sooner.


Obsolescence: Six questions to ask


Fortunately, manufacturers can avoid the hassle of unscheduled outages and surprise budget burdens.  All it takes is following these six simple steps:


1. Know what you have. A simple audit of the current hardware and status of electrical schematics (existing? on paper? electronic copy available?) is critical.


2. Identify future issues. Ask whether the equipment runs perfectly, and whether there would be a return on investment (ROI) associated with fixing an issue that could help offset the upgrade cost. Also ask whether this piece of equipment should perhaps not live in the hottest/stickiest/wettest room of the plant, and consider whether there's future functionality or equipment that should be planned for.


3. Determine if there are corporate/plant programming or electrical standards that could be implemented to ensure ease of troubleshooting and commonality.


4. When will new equipment be installed? Operations needs to be involved in the decision! This will ultimately drive the when and how more than any other factor. Aim for outage windows that allow for testing before the system comes back online and generates product.


5. Know your costs. This can be handled by an internal engineering team, or by having an assessment done so outside help can dig through the collateral. Factor in the additional expense and benefits of hiring outside help.


6. Coordinate, schedule and implement. Now that you know what is changing, why it's changing, how it's changing, where it's going, and when it can change, the rest comes down to execution.


Following these steps will help manufacturers face the obsolete equipment haunting a facility with minimal pain and help companies look to the future.


Robert Herman, program manager, Avanceon, a CFE Media content partner. This article originally appeared on Avanceon’s website. Edited by Chris Vavra, associate editor, Control Engineering, CFE Media and Technology, cvavra@cfemedia.com.

A guide to understanding and using data distribution service (DDS), time-sensitive networking (TSN), and OPC Unified Architecture (OPC UA) for advanced manufacturing applications.


Courtesy: Industrial Internet Consortium (IIC)

The top Industrial Internet of Things (IIoT) connectivity framework standards are OPC Foundation's OPC Unified Architecture (OPC UA) and Object Management Group's (OMG's) Data Distribution Service (DDS). Both are gaining widespread adoption in industrial systems, though not in the same sectors.


Both differ from many of today's discrete automation systems, which use a simple architecture. A programmable logic controller (PLC) connects devices over a fieldbus. The PLC controls the devices and manages upstream connections to higher-level software such as human-machine interfaces (HMIs) and data historians. Factory-floor software is straightforward: it reads sensors, executes logic, and drives actuators, thereby implementing a repetitive operation. The factory has a series of workcells, each with a few dozen devices.


Why designs are changing


The traditional PLC and HMI design served well for the last three decades. However, it may not survive the next one. Why? Processor speeds and easy interconnectivity offer more capable compute resources. The PLC-centric workcell design can build reliable systems that endlessly repeat an operation. They aren’t truly “smart,” though. They don’t adapt well to change. They can’t take advantage of the explosion in compute and networking capacity. In short, they don’t provide a path to intelligent, but more complex, software.


The IIoT has the potential to transform industrial systems. To do that, it must share data across the workcell, factory, and front office. Of course, it’s not that simple. Pervasive data use requires a new architecture and new approach to connectivity.


OPC UA and DDS solve entirely different problems. Hardware engineers use OPC UA because it makes device connections simple. System architects use DDS because it spans system layers with a consistent model. DDS and OPC UA are different, but it’s not a matter of choosing the right one; they do not compete.


In fact, there is growing appreciation for how they can work together to build a powerful industrial communication architecture in the future. The real challenge is deciding which problem needs to be solved. That makes it critical to understand what OPC UA and DDS can do. It’s important to identify when to use DDS alone, when to use OPC UA alone, and when to use a combination of both frameworks.


OPC UA and TSN connect


In the discrete manufacturing sector, OPC UA and time-sensitive networking (TSN) offer a potential path to resolving the “fieldbus wars.” OPC UA is useful for integrating dedicated devices, such as conveyor belts, sensors, repetitive robots, and drives, into a workcell. It can connect workcells to software like HMIs and historians. It does this by modeling devices and allowing factory technicians and manufacturing engineers to coordinate these devices through a PLC (see Figure 1).

Figure 1: Locally-connected pubsub device networks. OPC UA uses a client/server pattern to connect workcells to human-machine interfaces (HMIs) and historians. When the OPC UA pubsub specification is used, devices and programmable logic controllers (PLCs) publish or subscribe to simple numeric data types and communicate over local connections, with time-sensitive networking (TSN) replacing a fieldbus in workcells. Courtesy: Industrial Internet Consortium (IIC)

Workcells aren’t so much programmed as they are configured. Manufacturing engineers or technicians use a palette of devices to implement a function in the cell. The devices come with standard models so the factory isn’t locked to one vendor. OPC UA systems are compositions of devices and existing modules such as data historians and HMIs. This design makes it easy to assemble workcells of devices with little software effort.


OPC UA connects workcell data to systemwide data by changing the communication pattern from pubsub to client/server (request/reply). To receive data, an application or higher-level client has to discover and connect to the server. This architecture is not designed to enable programming teams. For instance, translating between pubsub and client/server presents an inconsistent programming model across levels. And it doesn't let teams pre-define new software interfaces or shared data types. Without these, OPC UA doesn't provide one source of “system truth” for systemwide software.
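For illustration, here is a minimal sketch of that client/server pattern using the python-opcua library; the endpoint URL and node identifier are hypothetical, and a real client would typically browse the server's address space first.

```python
# Minimal sketch of an OPC UA client read; endpoint and node ID are hypothetical.
from opcua import Client

client = Client("opc.tcp://plc.example.com:4840")
client.connect()  # the client must discover and connect to the server first
try:
    # Read one value from the server's address space.
    speed = client.get_node("ns=2;s=Workcell1.Conveyor.Speed")
    print(speed.get_value())
finally:
    client.disconnect()
```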


OPC UA is optimal for integrating devices into a workcell, although OPC UA can frustrate teams trying to build complex system software.


DDS enables system software


DDS, on the other hand, targets teams building distributed software applications. The first DDS application was feedback control over Ethernet for intelligent robotics. DDS then spread into software-intensive distributed applications such as autonomous vehicles and Navy combat system management.


Its fundamental purpose is combining software applications into a complex system-of-systems with one consistent model. Most DDS systems combine “functional” artificial intelligence with 10 to 50 applications and devices, but some DDS systems comprise hundreds of thousands of devices and applications written by thousands of programmers.


The key to understanding DDS is to realize that distributed systems are fundamentally parallel and the system architecture must match that reality. This isn’t new; the heart of a current distributed control system (DCS) is a control execution engine that manages timeslices and control loops. All data is stored in “sandbox RAM” so processes can access any data without unwanted interaction. The DCS provides an environment to combine function blocks into parallel, deterministic feedback loops in one box.


Figure 2: OPC UA focuses on device integration and features common device models that enable vendor interoperability. Courtesy: Industrial Internet Consortium (IIC)

DDS takes that same concept and distributes it. DDS implements a data-centric shared “global data space.” This means all data appears as if it lives inside every device and algorithm. This is, of course, an illusion—all data can’t be everywhere. DDS works by keeping track of which application needs what data and when, and then delivers it. As a result, data an application actually needs is present in local memory on time.


The essence of data centricity is instant local access to anything by every device and every algorithm, at every level, in the same way, at any time. It’s best to think of it as a distributed shared memory, similar to the DCS sandbox RAM. There are no servers or objects or special locations. It’s a parallel software architecture across the system.


DDS is about data centricity, not patterns. While most applications use pubsub, the standard also specifies request/reply, and some vendors support queuing. Applications interact in many ways, but only with the shared distributed memory, not with each other directly. DDS also defines system interfaces (data types) and quality of service (QoS) flow control. It integrates modules with a transparent and consistent systemwide architecture that's independent of patterns. This is the connectivity analog to the data-centric system “truth” that databases use to power the enterprise.
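As a rough sketch of what publishing into that global data space looks like, the example below uses the Eclipse Cyclone DDS Python binding (cyclonedds); the topic name and data type are illustrative assumptions, not part of any standard device model.

```python
# Minimal sketch: publishing into the DDS global data space with cyclonedds.
# The topic and data type are illustrative assumptions.
from dataclasses import dataclass

from cyclonedds.domain import DomainParticipant
from cyclonedds.idl import IdlStruct
from cyclonedds.pub import DataWriter
from cyclonedds.topic import Topic

@dataclass
class TankLevel(IdlStruct):
    tank_id: str
    level_pct: float

participant = DomainParticipant()               # join the shared data space
topic = Topic(participant, "TankLevel", TankLevel)
writer = DataWriter(participant, topic)

# Any DataReader of "TankLevel", anywhere on the network, receives this
# sample without knowing this process exists.
writer.write(TankLevel(tank_id="TK-101", level_pct=72.4))
```

A matching DataReader on the same topic would receive the sample under whatever QoS (reliability, history, deadline) the two endpoints negotiated.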


However, DDS doesn’t model devices. Factory engineers and technicians can’t combine devices into workcells without writing code.


Should you use OPC UA, or DDS, or both?

Table 1: DDS and OPC UA are nearly opposites. DDS is widely deployed in industries that need sophisticated distributed software. OPC UA targets manufacturing, where device interoperability matters more. Create, read, update, and delete (CRUD) are functions of relational databases.

Manufacturing systems compete on the same basis they have for decades: reliability, production rates, or implementation cost. In the not-too-distant future, clever industrial software engineers may figure out how to apply artificial intelligence, distributed information control, or smart flexibility. Those applications require sophisticated software and a systemwide approach. If you believe software can be bought and remain competitive, you don't have to change. If, on the other hand, you see a future where the best software wins, you will need a different path to keep up (see Figure 3).

A system may also need to be built from interoperable devices. Fortunately, this doesn’t have to be an all-or-nothing decision; DDS, OPC UA, and TSN can work together. The Object Management Group (OMG), the parent organization for the Industrial Internet Consortium (IIC), recently approved a standard to integrate DDS with OPC UA. OMG and OPC Foundation are working on standards to use TSN with DDS and OPC UA. DDS vendors are working on easy configuration tools.


IIC developed an integrated architecture and has several testbeds using OPC UA in manufacturing applications and DDS in applications such as electric power and health care. Some testbeds, such as the IIC Security Claims Evaluation testbed and the IIC Smart Factory Machine Learning for Predictive Maintenance testbed, use both OPC UA and DDS. Combining the flexibility of interchangeable devices with a powerful software development environment is not that far off.


The real challenge is to fully understand how OPC UA and DDS work in advanced manufacturing environments. Many people have difficulty defining what these technologies do. To stay competitive in the future, it’s vital to research and ask questions to ensure the right platform, or the right combination, is chosen.


Stan Schneider, PhD, is vice chair of the Industrial Internet Consortium (IIC), a CFE Media content partner, and is CEO of Real-Time Innovations (RTI). Edited by Emily Guenther, associate content manager, Control Engineering, CFE Media, eguenther@cfemedia.com.


MORE ANSWERS

KEYWORDS: Data Distribution Service (DDS), OPC Unified Architecture (UA)

When to use OPC UA and DDS frameworks

When to use a combination of both standard frameworks

Defining how DDS and OPC UA work together.

Consider this: What framework would be the best fit for your manufacturing operations?
