Faithful virtual replicas of reality, digital twins help companies better understand systems, processes, machines, and potentially even humans.
To draw better insights and make better decisions, companies need to run experiments that cannot be performed on real systems, as doing so can be expensive, cumbersome, and sometimes even destructive.
While some solutions deploy physics-based models on control systems to sense and understand equipment performance and to monitor, control, or optimize equipment, real systems may exhibit different behaviors due to variations in raw materials or environmental conditions.
Digital twins present a reliable and adaptive technological intervention. They enable rigorous analysis, simulation, and experimentation for evidence-based decision-making, and serve as a way to better understand the systems, processes, machines, and people they replicate. Examples include vehicle twins used by original equipment manufacturers, customer behavior twins used by retailers, and patient digital twins that healthcare providers rely on to improve outcomes.
The many forms of industrial digital twins
Industrial digital twins fall into three main categories: asset twins, process twins, and factory digital twins.
Creating an asset digital twin for a critical piece of machinery, such as rotating equipment (a large pump on an oil pipeline or a large furnace fan) or static equipment such as a boiler, opens up possibilities such as predictive maintenance schedules that reduce downtime, or load-scenario testing for higher efficiency. Standard equipment monitoring involves operators reacting to preset threshold alerts or taking regular preventive actions. An asset twin, by contrast, collects and analyzes a history of sensor data combined with operating conditions to recognize patterns and understand the equipment's behavior in real time. It helps detect anomalies at an early stage, giving operators time to plan maintenance and prevent failures.
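The early-warning idea behind an asset twin can be illustrated with a minimal sketch: compare each new sensor reading against a rolling baseline of recent readings and flag sharp deviations. The function name, window size, and simulated pump data below are assumptions for illustration, not any particular vendor's method.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    readings: list of sensor values (e.g. vibration amplitude in mm/s).
    Returns indices of readings more than `threshold` standard deviations
    from the mean of the preceding `window` values.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Stable pump vibration with one sudden spike an operator should investigate
normal = [5.0 + 0.1 * (i % 5) for i in range(50)]
normal[45] = 9.0  # simulated fault signature
print(detect_anomalies(normal))  # → [45]
```

A production twin would replace this rolling statistic with models trained on operating conditions as well as sensor history, but the principle is the same: learn what "normal" looks like, then surface departures from it early.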
Industrial process twins are used to optimize any continuous and batch production process of chemicals, plastics, paints, fuels, rubber, metals, cement, and other industrial materials. A process twin is meant to ensure that a process runs as near as possible to the original design specifications. It also provides a way to optimize the process through proactive control of variables that affect the outcome. The twin may be helpful in testing process improvements to save energy, increase yield, reduce byproducts, or curb pollution.
One example is that of an industry-leading steelmaker that sought to improve the efficiency of its sintering facility, where iron ore fines are agglomerated by heat and the output sinter is fed to the blast furnace as a key raw material. Variation in raw material quality can affect steel quality, which can be known only after time-consuming laboratory testing. The energy consumption of the sintering plant was also significant. A virtual sinter twin made real-time, physics-based quality predictions possible at regular intervals, allowing process parameters to be controlled to adjust product quality and energy use. The result? A 5% productivity improvement and a 2% fuel saving.
Another example is a chemical company's use of a digital twin to gain visibility into the conversion efficiency of its carbonation process, which is critical for soda ash production. Variations in temperature and other control parameters affect the conversion efficiency. Traditionally, efficiency was measured every four hours, but because the results did not explain how process parameters influenced outcomes, improving efficiency was difficult. With the process twin, the company could monitor conversion efficiency on an hourly basis, gain a better understanding of the control parameters affecting it, and receive actionable set-point advisories. Yields were stabilized near the theoretical maximum, providing ongoing cost savings.
The third category of industrial digital twin—a factory digital twin—is aimed at understanding whether an entire facility or production line is operating at optimal efficiency, in terms of resources used, energy used, and desired output. It is prevalent in certain industries where factory simulation tools help arrive at the best possible design for optimum throughput. A factory twin makes it possible to adjust parameters such as product mix, try out new scenarios, and even visualize results in a virtual environment. However, daunted by the complexity involved, relatively few companies in the process industry have so far developed factory digital twins.
Digital twin adoption faces multiple challenges, including the lack of sensors, good-quality historical data, and necessary contextual information.
While most new machinery has sensors, most legacy processes and equipment lack the sensors needed to capture comprehensive information about their overall behavior.
Historical data sets, which may come from a company's systems or from more adjacent, distributed (and often siloed) sources, are vital for the machine learning and AI algorithms at the heart of any twin. Enough information about the past (data for a minimum of six to eight months) is needed to understand the patterns that enable future behavior predictions. Contextual information, describing both the running environment and load conditions, is also key to accurately modeling an asset or system.
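Why contextual information matters can be shown with a minimal sketch: the same sensor reading means different things under different load conditions, so raw readings are enriched with the operating context recorded separately. All field names and values below are hypothetical.

```python
# Raw sensor readings, keyed by timestamp (hypothetical data)
readings = [
    {"ts": "2024-03-01T08:00", "vibration_mm_s": 4.2},
    {"ts": "2024-03-01T09:00", "vibration_mm_s": 6.8},
]

# Contextual records: load and ambient conditions at the same timestamps
conditions = {
    "2024-03-01T08:00": {"load_pct": 60, "ambient_c": 21},
    "2024-03-01T09:00": {"load_pct": 95, "ambient_c": 22},
}

# Join each reading with its operating context before modeling
contextualized = [{**r, **conditions.get(r["ts"], {})} for r in readings]
print(contextualized[1])
```

The 6.8 mm/s reading looks alarming next to 4.2 mm/s in isolation, but paired with its 95% load context it may be entirely normal—which is exactly the distinction a twin's models need the contextual data to learn.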
To make a digital twin work, an organization first needs a certain minimum level of digital maturity. A scalable and flexible architecture to respond to information requests and data challenges is also a must. Equally, the organization needs the change management skills to act on the insights the twin ultimately provides. If the twin can provide information that helps reduce energy consumption and improve quality in the production of cement, for example, that information must be translated into operational instructions that are performed consistently. But more often than not, companies lack the subject matter expertise to interpret the process and tweak the models as required, and are overwhelmed by the sheer complexity of the information models of full-scale assets or operations.
We recommend a 4C-led approach to derive actionable insights from digital twins.
At their core, industrial digital twins offer the promise of improved performance and efficiency for a range of factory assets and processes, reduced costs, and higher revenues with greater yields. But it is also worth contemplating how digital twins can solve a broader range of business challenges.
For instance, with better insights and advisories from twins, operations can become more adaptable, allowing for adjustments based on demand or raw material costs. Product quality can be improved. Waste can be managed better and reduced, while emissions can be cut as machines and processes will run more efficiently. Wear and tear on equipment can be managed better, and unexpected downtime reduced. What’s more, with better tools to recognize normal versus abnormal operating conditions, accidents and hazards can be avoided and safety at a facility improved.
To draw relevant, actionable insights from an industrial digital twin, we recommend a 4C-led approach:
Connect – Connectivity is the starting point for any digital twin; it is core to the free flow of real-time data required to model and monitor equipment and processes. Companies need to work on real-time connectivity to the programmable logic controllers (PLCs) associated with a piece of equipment or the supervisory control and data acquisition (SCADA) system in a plant.
Collect – Collecting both real-time and historical data in a system where it can be stored safely and manipulated effectively is the next step. Often today, the most appropriate option is to collect and store data on the cloud.
Collate – Collated data is key to getting the right mix of parameters to develop deeper insights into an asset or process. The data needs to be formed into a coordinated set, rather than remaining as individual data points—and not all data will be relevant. With collated data, the digital twin, through AI pattern recognition, can begin to make sense of the information.
Contextualize – Data about a piece of equipment or a process won’t tell the whole story until it is contextualized with information about workload conditions, failure scenarios, maintenance operations, and so on. These provide the context to make the patterns identified in the data relevant and useful.
Making digital twins an integral part of the manufacturing value chain—in product engineering, production operations, or products ‘in-use’—is key.
While the industry is reaping the benefits of large-scale digital investments made in the last two decades—in IoT and other technologies such as 3D, augmented reality, virtual reality, real-time data integration, and artificial intelligence—the gestation period for digital twins appears to be longer than for general Industry 4.0 capabilities, especially when it comes to deployment at scale. Many digital twin pilots also fail to progress to scaled deployment, languishing in 'pilot purgatory'. One big reason is usually the lack of a robust strategy encompassing use-case discovery, solution blueprint creation, and implementation.
Teaming up with a strategic consulting partner can help define a clear vision, draw an actionable roadmap, and launch an agile implementation to achieve business outcomes more quickly. For digital twin success, it is important for the twin to become an integral part of the manufacturing value chain—be it in product engineering, production operations, or products 'in-use'. Effectively, twins have to find applications in product or process development and in-field operations, whether for assets, systems of assets, or full processes. After all, for any technology to succeed, it needs to become part of the everyday life of a user or operator.
Very often, companies take the path of a 'digital twin of everything'. But the time and cost of creating the twin of an entire process or plant can outweigh the potential returns of such an investment. Instead, the answer lies in a planned progression toward that state: first tackling bottleneck assets or processes, then continuing to squeeze inefficiency out of the system to get to a perfect 'single piece flow'. This requires a well-crafted vision of the end state and the technology architecture to realize it. A composable architecture and a 'neural manufacturing' driven approach that covers all dimensions—from operational assets and the data fabric to the partner ecosystem and end user roles—can help bring optimal returns from digital twins.